Content related to the keyword:
Machine Learning
Source: Journal of Information Technology Management, Volume 17, Special Issue on Intelligent Security and Management, 2025, pp. 16–31
This paper presents a new hybrid machine learning model for classifying eye states from EEG signals by integrating traditional techniques with deep learning methods. The Hybrid LSTM-KNN architecture uses LSTM networks to extract temporal features and KNN for classification. In addition, extensive feature engineering is performed, including statistical Z-test and IQR filtering, dimensionality reduction with PCA, and multivariate analysis to further improve model performance. Moreover, an SVM-based unsupervised clustering approach is proposed to partition the EEG feature space, followed by ensemble learning within each cluster to improve accuracy and robustness. On the EEG Eye State Dataset, the Hybrid LSTM-KNN model initially recorded an accuracy of 87.2% without PCA. Statistical filtering brought further gains, achieving a 6% rise in performance to 89.1% after outlier removal, 89.1% with the Z-test (σ = 3), and 88.3% with the 1.5× IQR rule. After applying PCA together with post-clustering ensemble learning, the final model reached an accuracy and F1 score of 96.8%, surpassing the Ensemble Cluster-KNN baseline and traditional models based on Logistic Regression, SVM, and Random Forest. These results demonstrate the robustness and noise resilience of the model for practical real-time brain-computer interface and cognitive monitoring systems.
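A minimal sketch of the preprocessing and classification stages named above, assuming the publicly available EEG Eye State dataset in CSV form: 1.5× IQR outlier filtering, standardization, PCA, and a KNN classifier. The file name, column names, and hyperparameters are assumptions, and the paper's LSTM temporal feature extractor and clustering/ensemble stages are not reproduced here.

```python
# Hedged sketch: IQR filtering + PCA + KNN, approximating the pipeline
# described in the abstract (LSTM feature extraction omitted).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def iqr_filter(df: pd.DataFrame, cols, k: float = 1.5) -> pd.DataFrame:
    """Drop rows where any channel falls outside [Q1 - k*IQR, Q3 + k*IQR]."""
    mask = pd.Series(True, index=df.index)
    for c in cols:
        q1, q3 = df[c].quantile([0.25, 0.75])
        iqr = q3 - q1
        mask &= df[c].between(q1 - k * iqr, q3 + k * iqr)
    return df[mask]

# Assumed layout: 14 EEG channels plus a binary "eyeDetection" label,
# as in the public EEG Eye State dataset.
df = pd.read_csv("eeg_eye_state.csv")          # hypothetical file name
channels = [c for c in df.columns if c != "eyeDetection"]
df = iqr_filter(df, channels)

X_train, X_test, y_train, y_test = train_test_split(
    df[channels], df["eyeDetection"], test_size=0.2, random_state=0)

clf = make_pipeline(StandardScaler(), PCA(n_components=0.95),
                    KNeighborsClassifier(n_neighbors=5))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```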
Artificial intelligence in credit risk assessment
Source: Socio-Spatial Studies, Vol. 6, Issue 2, Spring 2022, pp. 1–14
This study presents a structured literature review on the application of AI in credit risk assessment, synthesizing empirical and conceptual research published between 2016 and 2022. It critically examines a range of AI models, including artificial neural networks (ANN), support vector machines (SVM), fuzzy logic systems, and hybrid architectures, with an emphasis on their predictive accuracy, robustness, and operational applicability. The review highlights that AI-based models consistently outperform traditional statistical techniques in handling nonlinear patterns, imbalanced datasets, and complex borrower profiles. Furthermore, AI enhances the inclusivity of credit evaluation by integrating alternative data sources and adapting to dynamic financial environments. However, the study also identifies ongoing challenges related to model interpretability, fairness, and regulatory compliance. By evaluating model performance metrics and methodological innovations across multiple contexts—including emerging markets, peer-to-peer platforms, and digital banking—the study offers a nuanced understanding of AI's strengths and limitations. The paper concludes with a call for balanced integration of explainable AI tools and ethical governance to ensure responsible deployment in financial institutions.
Synergizing 5G and Artificial Intelligence: Catalyzing the Evolution of Industry 4.0 (Ministry of Science accredited scholarly article)
Source: پژوهشنامه پردازش و مدیریت اطلاعات, Volume 40, Summer 1404 [2025], English Special Issue 4 (Serial No. 125), pp. 495–524
Background: The combination of 5G and Artificial Intelligence (AI) has been put forward as a key enabler of Industry 4.0 and smart city applications. These technologies address latency, scalability, and energy-use constraints, providing the technological basis for real-time decision-making and efficient organization of work. Nevertheless, studies of their individual and combined effects across industrial and urban contexts remain limited. Objective: This research assesses the performance, energy savings, and scalability of 5G-AI synergies in manufacturing, logistics, healthcare, and smart city applications, and highlights their challenges and potential for further exploration. Methods: An approach combining experimental data collection, mathematical modeling, and comparative analysis was employed. Performance indicators including latency, theoretical and actual throughput, power usage, and prediction accuracy were measured in pilot tests implemented in dense network and IoT contexts. The resulting data were compared with similar studies to contextualize the findings. Results: Combining 5G and AI yielded substantial process optimization: latency was reduced by more than 90%, predictive maintenance became more accurate, and power consumption was reduced by 75%. Scalability and system reliability were confirmed in dense IoT environments, with further potential for emission reduction. Conclusion: The study finds that using 5G with AI in Industry 4.0 helps address dynamic operational issues, although scalability and security remain potential drawbacks. Further studies should examine novel hybrid architectures and 6G integration across broader application areas.
Advancing Sustainability in IT by Transitioning to Zero-Carbon Data Centers (Ministry of Science accredited scholarly article)
Source: پژوهشنامه پردازش و مدیریت اطلاعات, Volume 40, Summer 1404 [2025], English Special Issue 4 (Serial No. 125), pp. 583–608
Cyber threats evolve constantly, with more than 560,000 new malware variants released daily, so rudimentary network-protection measures are of little help against real-time threats. Static security controls and manual intervention are insufficient to address APTs, zero-day exploits, and high-volume DDoS attacks. This is where AI-based network security comes in: real-time threat-response systems can be trained to automatically identify, categorize, and mitigate highly complex attacks without requiring large amounts of time and effort. This work examines the changing role of AI in network security, since it can improve threat detection, shorten response times, and reduce reliance on human intervention. The research reviews more than 150 AI-based security frameworks and 25 case studies across industries including finance, healthcare, and telecommunications to assess the effectiveness of machine learning and deep learning algorithms for autonomous threat response. The findings show that, in challenging contexts, AI-based solutions achieve anomaly detection scores of up to 97%, far higher than conventional systems averaging 80%. Response times improved by up to 75%, with AI systems responding in under 3 seconds during large-scale cyberattack simulations. Scalability was demonstrated across networks of more than ten thousand nodes with 90% reliability under different threat scenarios. These findings underscore the importance of AI as a cornerstone of today's cybersecurity, delivering accurate and timely threat coverage and demonstrating high resilience to threat evolution. However, issues such as algorithmic bias, ethical concerns, and vulnerability to adversarial perturbation call for research into effective measures that ensure the longevity of banking security systems integrated with AI. The study emphasizes the importance of new strategies to strengthen digital environments against a growing number of threats.
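As an illustration of the kind of autonomous detection the reviewed frameworks rely on (not any specific framework from the study), the sketch below trains an Isolation Forest on network-flow features and flags anomalous flows; the file name, feature columns, and contamination rate are assumptions.

```python
# Illustrative anomaly-detection sketch: an IsolationForest over flow features.
import pandas as pd
from sklearn.ensemble import IsolationForest

flows = pd.read_csv("netflow_features.csv")     # hypothetical flow records
features = ["bytes_per_s", "packets_per_s", "mean_pkt_size", "duration_s"]

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(flows[features])

# -1 marks flows the model considers anomalous; a real deployment would feed
# these into an automated response playbook rather than just printing them.
flows["anomaly"] = model.predict(flows[features])
print(flows[flows["anomaly"] == -1].head())
```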
Leveraging AI for Predictive Maintenance with Minimizing Downtime in Telecommunications Networks (Ministry of Science accredited scholarly article)
Source: پژوهشنامه پردازش و مدیریت اطلاعات, Volume 40, Summer 1404 [2025], English Special Issue 4 (Serial No. 125), pp. 1117–1147
Background: Telecommunications networks are exposed to numerous equipment problems that cause costly network outages. Basic maintenance methodologies such as reactive or scheduled preventive maintenance cannot keep up with the growing scale and complexity of telecom facilities. Objective: The article examines how AI is applied to support predictive maintenance so that telecommunication networks can perform as intended with reduced downtime. Methods: A review of existing AI algorithms is presented, focusing on machine learning models and deep learning methods. Network operations and maintenance logs are analyzed to assess the predictive capabilities of the AI models. Quantifiable parameters such as failure-prediction accuracy and response-time reduction are identified and analyzed. Results: AI-supported predictive maintenance showed a corresponding decrease in equipment failure incidents and generally reduced time lost to unscheduled stops. With improved network performance, responses to potential faults were quicker than before, and services became more reliable and less expensive to deliver. Conclusion: AI-based predictive maintenance is a promising way to reduce network outages, lower network vulnerability, and maximize the efficiency of telecommunications operations. As the technology advances, newer AI algorithms will offer stronger predictive power and tighter integration into telecommunications systems.
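A hedged sketch of the kind of failure-prediction model discussed above: a gradient-boosting classifier over aggregated equipment-log features with a binary "fails within 24 hours" label. The file name, features, and label are assumptions, not taken from the article.

```python
# Hedged sketch: supervised failure prediction from aggregated equipment logs.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

logs = pd.read_csv("telecom_equipment_logs.csv")   # hypothetical aggregated logs
features = ["error_count_24h", "temperature_c", "cpu_load", "uptime_days"]
X_train, X_test, y_train, y_test = train_test_split(
    logs[features], logs["fails_within_24h"], test_size=0.2, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```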
Optimizing Telecommunications Network Performance through Big Data Analytics: A Comprehensive Evaluation (Ministry of Science accredited scholarly article)
Source: پژوهشنامه پردازش و مدیریت اطلاعات, Volume 40, Summer 1404 [2025], English Special Issue 4 (Serial No. 125), pp. 1149–1177
Background: The telecommunication industry is witnessing unparalleled growth in traffic data alongside growing network complexity. As operators seek to achieve high network availability, employing Big Data Analytics (BDA) becomes almost compulsory for improved quality of service and operational performance. Objective: The study provides a systematic review of BDA deployments for enhancing the primary performance indicators of telecommunications networks, including latency, throughput, and network dependability. Methods: The research combines quantitative analyses of key network performance parameters with qualitative results from case studies conducted with major telecommunications operators. Data were collected from multiple networks and analyzed with machine learning to predict potential performance issues. Results: The study demonstrates that BDA can reduce latency by up to 40%. In addition, throughput rose by an average of 30%, and predictive analytics led to a 25% reduction in network downtime, improving reliability and user satisfaction. Conclusion: The study highlights the importance of Big Data Analytics for the telecommunication industry, showing that proper integration can bring tangible improvements to existing networks. Rising data traffic and increasingly sophisticated network requirements will continue to drive the need for innovative analytical technologies.
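As a hedged illustration of the predictive step mentioned above (the study does not specify its models), the sketch below fits a random-forest regressor that predicts per-cell latency from traffic KPIs; the file and column names are assumptions.

```python
# Illustrative sketch: predicting cell-level latency from traffic KPIs.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

kpis = pd.read_csv("network_kpis.csv")            # hypothetical per-cell KPI export
features = ["traffic_gb", "active_users", "prb_utilization", "handover_rate"]
X_train, X_test, y_train, y_test = train_test_split(
    kpis[features], kpis["latency_ms"], test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out cells:", model.score(X_test, y_test))
```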
AI-Driven Drones for Real-Time Network Performance Monitoring (Ministry of Science accredited scholarly article)
Source: پژوهشنامه پردازش و مدیریت اطلاعات, Volume 40, Summer 1404 [2025], English Special Issue 4 (Serial No. 125), pp. 1281–1308
Background: The growing complexity of telecommunications networks, fueled by advancements like the Internet of Things (IoT) and 5G, necessitates dynamic and real-time network performance monitoring. Traditional static systems often fail to address challenges related to scalability, adaptability, and response speed in high-demand environments. Integrating artificial intelligence (AI) with unmanned aerial vehicles (UAVs) presents a transformative approach to overcoming these limitations. Objective: This study aims to evaluate the effectiveness of AI-driven drones for real-time network performance monitoring, focusing on key metrics such as latency, signal strength, throughput, and anomaly detection. Methods: A comprehensive framework was developed, employing reinforcement learning (RL) for path planning and a hybrid temporal-spectral anomaly detection (HTS-AD) algorithm. Experimental validation was conducted using 10 UAVs across simulated and real-world environments, collecting over 3.2 million data points. Statistical analyses, including MANOVA and Bayesian regression, were used to evaluate performance. Results: The proposed system demonstrated significant improvements over traditional methods, including a 24.6% increase in anomaly detection accuracy, a 30% reduction in energy consumption, and 99.9% network coverage in high-density UAV deployments. Conclusion: AI-driven drones offer a scalable, efficient, and reliable solution for network monitoring. By addressing limitations of traditional systems, this study establishes a foundation for next-generation telecommunications infrastructure. Future research should focus on real-world deployment and hybrid security models.
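As a conceptual illustration only (not the paper's HTS-AD algorithm or RL path planner), the sketch below flags latency anomalies in a UAV telemetry stream with a rolling z-score; the synthetic data, window size, and threshold are all assumptions.

```python
# Conceptual sketch: rolling z-score anomaly flagging on a latency series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
latency_ms = pd.Series(rng.normal(20, 2, 1000))   # synthetic UAV link-latency samples
latency_ms.iloc[700:710] += 30                    # injected anomaly burst

rolling = latency_ms.rolling(window=60)
z = (latency_ms - rolling.mean()) / rolling.std() # distance from the recent baseline
anomalies = z.abs() > 4                           # flag points far outside normal variation
print("flagged sample indices:", latency_ms[anomalies].index.tolist())
```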
Trends and Challenges of Autonomous Drones in Enabling Resilient Telecommunication Networks (Ministry of Science accredited scholarly article)
Source: پژوهشنامه پردازش و مدیریت اطلاعات, Volume 40, Summer 1404 [2025], English Special Issue 4 (Serial No. 125), pp. 1403–1443
Background: Advances in resilient telecommunication networks have shown that autonomous drones can support connectivity in unpredictable and complex terrain. Current network infrastructures struggle to deliver optimized service under traffic congestion, in sparsely covered areas, and during disasters, which calls for innovation. Objective: The article proposes a framework for using autonomous drones in practical telecommunication systems, with emphasis on energy consumption, scalability, dependability, and flexibility across situations. Methods: The study draws on state-of-the-art approaches such as trajectory optimization, swarm coordination, dynamic spectrum management, and machine-learning-based resource allocation. Urban, rural, and disaster-prone scenarios were used to assess performance indices including energy input, network connectivity, signal strength, and latency. The simulation results were supported by field experiments providing insights into varied circumstances. Results: Simulation results for the proposed framework show network scalability enhancements, with coverage areas of up to 50 km² and power savings of more than 15%. Performance improvements included near-perfect trajectory anticipation at a rate of 98%, along with optimized resource utilization. Dynamic spectrum management reduced interference and increased efficiency, particularly in high-density areas. Conclusion: The article promotes UAV-based telecommunication networks, raising and addressing challenging questions of scalability and reliability. The work offers theoretical and empirical foundations for concepts that will strengthen next-generation communication networks.
Dashboard-Driven Machine Learning Analytics and Conceptual LLM Simulations for IIoT Education in Smart Steel Manufacturing (Ministry of Science accredited scholarly article)
Through advanced analytical models such as machine learning (ML) and, conceptually, Large Language Models (LLMs), this study explores how Industrial Internet of Things (IIoT) applications can transform educational experiences in the context of smart steel production. To mitigate the shortage of authentic industrial datasets for research, we developed an industry-validated IIoT educational dataset drawn from three months of operational records at a steel plant and enriched with domain-specific annotations—most notably distinct operational phases. Building on this foundation, we propose an IIoT framework for intelligent steel manufacturing that merges ML-driven predictive analytics (employing Lasso regression to optimize energy use) with LLM-based contextualization of data streams within IIoT environments. At its core, this architecture delivers real-time process monitoring alongside adaptive learning modules, effectively simulating the dynamics of a smart factory. By promoting human–machine collaboration and mirroring quality-control workflows, the framework bridges the divide between theoretical instruction and hands-on industrial practice. A key feature is an interactive decision-support dashboard: this interface presents ML model outcomes and elucidates IIoT measurements—such as metallization levels and H2/CO ratios—through dynamic visualizations and scenario-based simulations that invite risk-free exploration of energy-optimization strategies. Such tools empower learners to grasp the intricate multivariate dependencies that govern steel manufacturing processes. Our implementation of the Lasso regression model resulted in a 9% reduction in energy consumption and stabilization of metallization levels. Overall, these findings underscore how embedding advanced analytics within IIoT education can cultivate a more engaging, practice-oriented learning environment that aligns closely with real-world industrial operations.
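A minimal sketch of the Lasso-based energy model described above, assuming a tabular IIoT log with process variables (e.g., metallization level and H2/CO ratio) and an energy-consumption target; the file and column names are invented for illustration, and the dashboard and LLM components are not shown.

```python
# Hedged sketch: sparse linear energy model via cross-validated Lasso.
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

df = pd.read_csv("steel_iiot_log.csv")                  # hypothetical file
features = ["metallization_pct", "h2_co_ratio", "gas_flow", "furnace_temp"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["energy_kwh_per_ton"], test_size=0.2, random_state=0)

# LassoCV picks the regularisation strength by cross-validation; the sparse
# coefficients indicate which process variables drive energy use.
model = LassoCV(cv=5).fit(X_train, y_train)
print(dict(zip(features, model.coef_)))
print("R^2 on held-out data:", model.score(X_test, y_test))
```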
Building Safer Social Spaces: Addressing Body Shaming with LLMs and Explainable AI (Ministry of Science accredited scholarly article)
This study tackles body shaming on Reddit using a novel dataset of 8,067 comments from June to November 2024, encompassing external and self-directed harmful discourse. We assess traditional Machine Learning (ML), Deep Learning (DL), and transformer-based Large Language Models (LLMs) for detection, employing accuracy, F1-score, and Area Under the Curve (AUC). Fine-tuned Psycho-Robustly Optimized BERT Pretraining Approach (Psycho-RoBERTa), pre-trained on psychological texts, excels (accuracy: 0.98, F1-score: 0.994, AUC: 0.990), surpassing models like Extreme Gradient Boosting (XG-Boost) (accuracy: 0.972) and Convolutional Neural Network (CNN) (accuracy: 0.979) due to its contextual sensitivity. Local Interpretable Model-agnostic Explanations (LIME) enhance transparency by identifying influential terms like “fat” and “ugly.” A term co-occurrence network graph uncovers semantic links, such as “shame” and “depression,” revealing discourse patterns. Targeting Reddit’s anonymity-driven subreddits, the dataset fills a platform-specific gap. Integrating LLMs, LIME, and graph analysis, we develop scalable tools for real-time moderation to foster inclusive online spaces. Limitations include Reddit-specific data and potential misses of implicit shaming. Future research should explore multi-platform datasets and few-shot learning. These findings advance Natural Language Processing (NLP) for cyberbullying detection, promoting safer social media environments.
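A hedged sketch of the explainability step: LIME applied to a simple TF-IDF plus logistic-regression classifier standing in for the fine-tuned Psycho-RoBERTa model, which is not reproduced here. The example comments and labels are invented for illustration.

```python
# Illustrative sketch: LIME term-level explanations for a text classifier.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you look so fat and ugly", "great progress, keep it up",
         "nobody wants to see that body", "thanks for sharing your routine"]
labels = [1, 0, 1, 0]                       # 1 = body shaming, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["benign", "shaming"])
exp = explainer.explain_instance("so fat, just hide yourself",
                                 clf.predict_proba, num_features=4)
print(exp.as_list())                        # per-term contributions to the prediction
```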
Traductions automatiques à l'épreuve des langues genrées. Étude de cas : traduction de l'anglais vers le français sur les plateformes Google Traduction et DeepL [Machine translation tested against gendered languages. Case study: English-to-French translation on Google Traduction and DeepL] (Ministry of Science accredited scholarly article)
Machine translation (MT) occupies a crucial place in the landscape of modern language technologies in a context of increased globalization. Relying on sophisticated algorithms and deep neural networks, MT systems such as Google Traduction and DeepL provide fast, accessible translation. However, these systems run up against significant limits when handling grammatically and culturally complex structures, particularly where gender-related dimensions are concerned. We explore the challenges of machine translation for gendered languages through a comparative study of the performance of Google Traduction and DeepL on English-to-French translation, focusing on occupational titles, which are representative of gender stereotypes. Based on a rigorous analysis of the results obtained from a corpus deliberately designed to probe these platforms' ability to account for gender markers, this research highlights the limits of current systems, notably their inability to meet the contextual and cultural requirements of gendered languages.
Comparative Analysis of Machine Learning Algorithms in Predicting Jumps in Stock Closing Price: Case Study of Iran Khodro Using NearMiss and SMOTE Approaches (Ministry of Science accredited scholarly article)
Predicting stock price fluctuations has always been one of the most important financial challenges due to the complexity of financial data and nonlinear market behavior. This research analyzed and compared the performance of machine learning algorithms in predicting jumps in the closing price of Iran Khodro Company shares. Two methods of handling imbalanced data, NearMiss and SMOTE, were used to overcome the class-imbalance challenge. The results showed that the NearMiss method outperformed SMOTE by balancing precision and recall in the machine learning models. The CatBoost model was recognized as the best model in this study due to its stable performance under both NearMiss and SMOTE. With NearMiss, CatBoost showed a strong balance across evaluation metrics, with an accuracy of 91.46% and an F1 score of 91.29%; it also achieved high precision (93.18%) and acceptable recall (89.52%), demonstrating its ability to correctly detect jumps while avoiding false positives. With SMOTE, the Random Forest model was superior, with an accuracy of 85.08%. These results show that combining imbalanced-data management methods with advanced machine learning algorithms can significantly improve the accuracy of price-jump prediction. The findings can help investors and financial analysts make better decisions in risk management and in optimizing investment strategies.
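A hedged sketch of the resampling comparison described above: NearMiss undersampling versus SMOTE oversampling, each applied inside an imbalanced-learn pipeline so that resampling only touches the training folds before a CatBoost classifier. The file name, feature columns, and hyperparameters are assumptions, not taken from the paper.

```python
# Hedged sketch: NearMiss vs. SMOTE ahead of CatBoost, cross-validated.
import pandas as pd
from catboost import CatBoostClassifier
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import NearMiss
from sklearn.model_selection import cross_val_score

df = pd.read_csv("iran_khodro_daily.csv")   # hypothetical price/indicator file
X = df.drop(columns=["jump"])               # assumed numeric technical-indicator features
y = df["jump"]                              # 1 = closing-price jump, 0 = no jump

for name, sampler in [("NearMiss", NearMiss()), ("SMOTE", SMOTE(random_state=0))]:
    pipe = Pipeline([("resample", sampler),
                     ("clf", CatBoostClassifier(verbose=0, random_state=0))])
    f1 = cross_val_score(pipe, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: mean F1 = {f1:.3f}")
```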
Early Prediction of Students' Academic Performance Using Interaction Data from Virtual Learning Environments (Ministry of Science accredited scholarly article)
Online learning programs have gained significant popularity in recent years. However, despite their widespread adoption, completion and success rates for online courses are notably lower than those for traditional in-person education. If students' final academic performance could be predicted early by analyzing their behavior within the virtual learning environment, timely alerts could be issued, and targeted interventions could be recommended to prevent underperformance and course abandonment. Previous studies have predicted academic performance using various features, such as demographic data, academic history, in-term exam results, and assignment assessments. However, many online learning platforms do not provide access to such data, rendering these methods ineffective. This study focuses on the early prediction of students' academic performance by extracting novel behavioral features based on their interactions with the online learning platform. To develop robust predictive models, we utilize an integrated approach combining multiple feature selection methods to extract the most informative interaction patterns, followed by application of advanced machine learning algorithms including ensemble learning techniques and artificial neural networks (ANNs). The evaluation results demonstrate that our proposed approach can predict students' final academic performance with an accuracy of 90.62%, using only data collected during the first third of the online course.
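A minimal sketch of the early-prediction setup described above, assuming a tabular export of per-student interaction counts from the first third of the course plus a pass/fail label: mutual-information feature selection followed by a small neural network standing in for the paper's ensemble and ANN models. File and column names, k = 20, and the MLP architecture are assumptions.

```python
# Hedged sketch: feature selection + ANN for early performance prediction.
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("vle_interactions_first_third.csv")   # hypothetical export
X = df.drop(columns=["final_result"])                   # e.g. click counts per activity type
y = df["final_result"]                                  # pass / fail label

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=20),   # keep the 20 most informative features
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
)
print("cross-validated accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```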