Content related to the keyword

Natural Language Processing (NLP)


1.

Semantic Patent Classification Using Stack Generalization of Deep Models (Ministry of Science accredited scientific article)


Keywords: patent semantic analysis, deep learning, patent information retrieval, Natural Language Processing (NLP)

Views: 241 | Downloads: 105
Over the past few years, there has been a significant increase in patent applications, which has resulted in a heavier workload for examination offices in examining and prosecuting these inventions. To perform this legal process adequately, examiners must thoroughly analyze patents by manually identifying semantic information such as problem descriptions and solutions. Manual annotation is both tedious and time-consuming. To address this issue, we introduce a deep ensemble model for paragraph-level classification based on the semantic content of patents. Specifically, the proposed model classifies paragraphs into semantic categories to facilitate the annotation process. It employs stack generalization as an ensemble method for combining various deep models, including Long Short-Term Memory networks (LSTM), bidirectional LSTM (BiLSTM), Convolutional Neural Networks (CNN), Gated Recurrent Units (GRU), and the pre-trained BERT model. We compared the proposed model with several baseline and state-of-the-art deep models on the PaSA dataset, which contains 150,000 USPTO patents classified into three classes: 'technical advantages', 'technical problems', and 'other boilerplate text'. Extensive experiments show that the proposed model significantly outperforms both traditional and state-of-the-art deep models.
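The stack-generalization scheme described in the abstract above can be sketched in a few lines: base models produce out-of-fold predictions, which become the input features of a meta-learner. This is a minimal sketch only; the lightweight scikit-learn estimators and toy data below stand in for the paper's LSTM/CNN/BERT base models and patent paragraphs, and are not the authors' implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Toy 3-class data standing in for paragraph feature vectors.
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)

# Stack generalization: cross-validated (out-of-fold) predictions of the
# base models are fed to the meta-learner (final_estimator) as features.
stack = StackingClassifier(
    estimators=[("nb", GaussianNB()),
                ("tree", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X, y)
print(stack.score(X, y))
```

The key point is the `cv=5` argument: the meta-learner is trained on held-out base-model predictions, so it learns how to weight the base models without overfitting to their training-set outputs.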
2.

Advancing Natural Language Processing with New Models and Applications in 2025 (Ministry of Science accredited scientific article)

Keywords: Natural Language Processing (NLP), transformer models, hybrid NLP systems, Reinforcement Learning, Machine Translation (MT), Sentiment Analysis, multilingual data, AI applications, bias mitigation, ethical NLP

Views: 5 | Downloads: 3
Background: Recent advancements in Natural Language Processing (NLP) have been significantly influenced by transformer models. However, challenges remain in scalability, discrepancies between pre-training and fine-tuning, and suboptimal performance on tasks with diverse and limited data. The integration of Reinforcement Learning (RL) with transformers has emerged as a promising approach to address these limitations. Objective: This article evaluates the performance of a transformer-based NLP model integrated with RL across multiple tasks, including translation, sentiment analysis, and text summarization. The study also assesses the model's efficiency in real-time operation and its fairness. Methods: The hybrid model's effectiveness was evaluated using task-oriented metrics such as BLEU, F1, and ROUGE scores across various task difficulties, dataset sizes, and demographic samples. Fairness was measured via demographic parity and equalized odds. Scalability and real-time performance were assessed using accuracy and latency metrics. Results: The hybrid model consistently outperformed the baseline transformer across all evaluated tasks, demonstrating higher accuracy, lower error rates, and improved fairness. It also exhibited robust scalability and significant reductions in latency, making it well suited to real-time applications. Conclusion: The proposed hybrid model effectively addresses issues of scale, diversity, and fairness in NLP. Its flexibility and efficacy make it a valuable tool for a wide range of linguistic and practical applications. Future research should focus on improving time complexity and exploring deep unsupervised learning for low-resource languages.
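The two fairness criteria named in the abstract above, demographic parity and equalized odds, can be computed directly from predictions. The sketch below is a minimal NumPy illustration with invented toy arrays (not the paper's data or code): demographic parity compares positive-prediction rates across groups, while equalized odds compares those rates conditioned on the true label.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    # Largest difference in P(yhat = 1) between demographic groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    # Largest cross-group difference in P(yhat = 1 | y = label),
    # i.e. the bigger of the TPR gap (label=1) and FPR gap (label=0).
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy labels and a binary group attribute, invented for illustration.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))       # 0.25
print(equalized_odds_gap(y_true, y_pred, group))   # 0.5
```

A gap of 0 means the criterion is perfectly satisfied; reporting the gap as a single number makes fairness comparable across models, which is how such criteria are typically summarized.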
3.

Exploring the Synergy between AI and Cybersecurity for Threat Detection (Ministry of Science accredited scientific article)

Keywords: AI, Cybersecurity, Threat Detection, Machine Learning (ML), Deep Learning (DL), Natural Language Processing (NLP), Advanced Persistent Threats (APT), Cyber-attacks, AI-driven Systems, Security Infrastructure

Views: 4 | Downloads: 3
Background: Security has become a major concern due to the increasing number and sophistication of cyber threats. Conventional approaches to threat identification can struggle with relevance and with processing new and constantly evolving threats. Machine learning (ML) and deep learning (DL) based approaches position AI as a potential solution for efficient threat detection. Objective: The article compares the performance, computational cost, and resilience of RF, SVM, CNN, and RNN models in identifying cyber threats such as malware, phishing, and DoS attacks. Methods: The models were trained and evaluated on the NSL-KDD and CICIDS 2017 datasets using common indicators, including accuracy, precision, recall, F1 score, detection rate, AUC-ROC, False Alarm Rate (FAR), and robustness to adversarial inputs. Computational efficiency was measured by training time and memory consumption. Results: The findings indicate that CNNs achieved the best accuracy (96%) and resisted perturbation best, while RF performed well with little computational load. RNNs proved effective for sequential data analysis, and SVM performed fairly well on binary classification despite scalability limitations. Conclusion: CNN-based AI models are the strongest option for threat detection in the cybersecurity space. Nevertheless, some models still require computational optimization to be useful in resource-constrained scenarios. These findings can inform subsequent research and practical applications.
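Two of the evaluation indicators named in the abstract above, detection rate and False Alarm Rate (FAR), follow directly from the confusion matrix of an intrusion detector. The sketch below is a minimal illustration with invented toy labels, not code or data from the study; it encodes attacks as 1 and benign traffic as 0.

```python
import numpy as np

def detection_metrics(y_true, y_pred):
    """Confusion-matrix metrics for intrusion detection (1 = attack, 0 = benign)."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # attacks caught
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # benign flagged as attack
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))  # benign passed through
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # attacks missed
    detection_rate = tp / (tp + fn)    # recall on the attack class
    false_alarm_rate = fp / (fp + tn)  # fraction of benign traffic flagged
    return detection_rate, false_alarm_rate

# Toy labels, invented for illustration (not NSL-KDD or CICIDS 2017 data).
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0])
dr, far = detection_metrics(y_true, y_pred)
print(dr, far)  # 0.75 0.25
```

Reporting both numbers matters because they trade off: a detector can trivially reach a perfect detection rate by flagging everything, at the cost of a 100% false alarm rate.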