Content related to the keyword

Machine Translation (MT)


1.

Translation Technology Tools and Professional Translators’ Attitudes toward Them (Ministry of Science accredited scholarly article)

Keywords: CAT Tools; Machine Translation (MT); Professional Translators; Translation Memory (TM); Technology

Views: 727 · Downloads: 429
Today technology is an integral part of professional translation, and it is generally assumed that translators’ attitudes toward translation technology tools influence their interaction with technology (Bundgaard, 2017). The present two-phase study therefore seeks to shed light on what translation technology tools are and how professional translators feel about them. The research method is exploratory in nature, as the study addresses issues on which little research has been done and relies on secondary research for its data. The data required to answer the first question were mined, using document analysis, from language service providers’ (LSPs) websites, while the data for answering the second question were obtained from ProZ.com Quick Polls. Based on our findings, translation technology tools fall into eight broad categories, of which the most commonly used are translation memory (TM) or computer-assisted translation (CAT) tools. In addition, most translators were found either to have no love-hate relationship with technology or to love it outright. This research is envisaged to form the basis of more detailed and conclusive studies.
2.

Advancing Natural Language Processing with New Models and Applications in 2025 (Ministry of Science accredited scholarly article)

Keywords: Natural Language Processing (NLP); transformer models; hybrid NLP systems; Reinforcement Learning; Machine Translation (MT); Sentiment Analysis; multilingual data; AI applications; bias mitigation; ethical NLP

Views: 7 · Downloads: 3
Background: Recent advancements in Natural Language Processing (NLP) have been significantly influenced by transformer models. However, challenges remain related to scalability, discrepancies between pretraining and finetuning, and suboptimal performance on tasks with diverse and limited data. The integration of Reinforcement Learning (RL) with transformers has emerged as a promising approach to addressing these limitations.

Objective: This article aims to evaluate the performance of a transformer-based NLP model integrated with RL across multiple tasks, including translation, sentiment analysis, and text summarization. The study also assesses the model's efficiency in real-time operation and its fairness.

Methods: The hybrid model's effectiveness was evaluated using task-oriented metrics such as BLEU, F1, and ROUGE scores across various task difficulties, dataset sizes, and demographic samples. Fairness was measured in terms of demographic parity and equalized odds. Scalability and real-time performance were assessed using accuracy and latency metrics.

Results: The hybrid model consistently outperformed the baseline transformer across all evaluated tasks, demonstrating higher accuracy, lower error rates, and improved fairness. It also exhibited robust scalability and significant reductions in latency, enhancing its suitability for real-time applications.

Conclusion: The proposed hybrid model effectively addresses issues of scale, diversity, and fairness in NLP. Its flexibility and efficacy make it a valuable tool for a wide range of linguistic and practical applications. Future research should focus on improving time complexity and on exploring deep unsupervised learning for low-resource languages.
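The abstract names its evaluation metrics but does not show how they are computed. As a minimal illustrative sketch (not the authors' code), the Python below shows how corpus-level BLEU and the two fairness criteria mentioned, demographic parity and equalized odds, might be calculated. It assumes the `sacrebleu` package is installed; the predictions, labels, and group attributes are toy placeholders, not data from the study.

```python
# A hedged sketch of the metrics named in the abstract: BLEU for translation
# quality, plus demographic-parity and equalized-odds gaps for fairness.
import sacrebleu

# Corpus-level BLEU: hypotheses vs. one reference stream (toy example).
hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one list per reference set
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates; assumes two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    a, b = rates.values()
    return abs(a - b)

def equalized_odds_gap(preds, labels, groups):
    """Largest gap in TPR or FPR between two groups (toy implementation)."""
    def rate_gap(target_label):
        out = {}
        for g in set(groups):
            idx = [i for i, gg in enumerate(groups)
                   if gg == g and labels[i] == target_label]
            # Guard against empty groups in this toy version.
            out[g] = sum(preds[i] for i in idx) / max(len(idx), 1)
        a, b = out.values()
        return abs(a - b)
    return max(rate_gap(1), rate_gap(0))  # TPR gap vs. FPR gap

# Toy binary sentiment predictions with a two-valued demographic attribute.
preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 1, 0, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
print("demographic parity gap:", demographic_parity_gap(preds, groups))
print("equalized odds gap:", equalized_odds_gap(preds, labels, groups))
```

In this framing, a fair model keeps both gaps close to zero while BLEU (or F1/ROUGE, computed analogously per task) stays high, which is how the abstract's joint accuracy-and-fairness comparison can be read.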