Articles related to the keyword

Large Language Models


1.

Dashboard-Driven Machine Learning Analytics and Conceptual LLM Simulations for IIoT Education in Smart Steel Manufacturing (Ministry of Science accredited scientific article)

Keywords: Smart Steel Manufacturing, Industry 4.0, IIoT Education, Industrial Internet of Things, Machine Learning, Large Language Models, Education Technology

Through advanced analytical models such as machine learning (ML) and, conceptually, Large Language Models (LLMs), this study explores how Industrial Internet of Things (IIoT) applications can transform educational experiences in the context of smart steel production. To mitigate the shortage of authentic industrial datasets for research, we developed an industry-validated IIoT educational dataset drawn from three months of operational records at a steel plant and enriched with domain-specific annotations—most notably distinct operational phases. Building on this foundation, we propose an IIoT framework for intelligent steel manufacturing that merges ML-driven predictive analytics (employing Lasso regression to optimize energy use) with LLM-based contextualization of data streams within IIoT environments. At its core, this architecture delivers real-time process monitoring alongside adaptive learning modules, effectively simulating the dynamics of a smart factory. By promoting human–machine collaboration and mirroring quality-control workflows, the framework bridges the divide between theoretical instruction and hands-on industrial practice. A key feature is an interactive decision-support dashboard: this interface presents ML model outcomes and elucidates IIoT measurements—such as metallization levels and H2/CO ratios—through dynamic visualizations and scenario-based simulations that invite risk-free exploration of energy-optimization strategies. Such tools empower learners to grasp the intricate multivariate dependencies that govern steel manufacturing processes. Our implementation of the Lasso regression model resulted in a 9% reduction in energy consumption and stabilization of metallization levels. Overall, these findings underscore how embedding advanced analytics within IIoT education can cultivate a more engaging, practice-oriented learning environment that aligns closely with real-world industrial operations.
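
As a rough illustration of the ML component described above, here is a minimal sketch of fitting a Lasso regression to predict energy consumption from IIoT process variables, in the spirit of the framework's energy-optimization module. The file name, column names (h2_co_ratio, metallization, furnace_temp, energy_kwh), and the alpha value are hypothetical placeholders, not the paper's actual dataset or hyperparameters.

```python
# Minimal sketch: Lasso regression on IIoT process data to model energy use.
# File path, column names, and alpha are hypothetical; substitute the real dataset.
import pandas as pd
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("iiot_steel_plant.csv")                  # hypothetical dataset
X = df[["h2_co_ratio", "metallization", "furnace_temp"]]  # example process features
y = df["energy_kwh"]                                      # example energy target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Standardize features so the L1 penalty treats them on a common scale.
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)

# The L1 penalty drives uninformative coefficients to zero, highlighting
# which process variables actually drive energy consumption.
model = Lasso(alpha=0.1)
model.fit(X_train_s, y_train)

print("R^2 on held-out data:", model.score(X_test_s, y_test))
print("Coefficients:", dict(zip(X.columns, model.coef_)))
```

The sparsity of Lasso is what makes it a natural fit for a teaching dashboard: zeroed-out coefficients give learners a direct, visual answer to which measurements matter.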
2.

Risk-Aware Suicide Detection in Social Media: A Domain-Guided Framework with Explainable LLMs (Ministry of Science accredited scientific article)

Keywords: Suicide Risk Detection, Large Language Models, Social Media Analysis, Mental Health Monitoring, Explainable AI

The close connection between people's lives and social media means that users' psychological and emotional states increasingly surface in their posts. This digital footprint offers a rich, novel entry point for the early detection of suicide risk. Accurately detecting suicidal ideation remains challenging because of high false-negative rates and sensitivity to subtle linguistic features. Current AI-based suicide detection systems fail to capture these linguistic subtleties: they do not account for domain-specific indicators and ignore the dynamic interplay of language, behaviour, and mental health. Lexical and syntactic markers can serve as a powerful lens for identifying psychological distress. To address these issues, we propose a new domain-guided framework that integrates a specialized frequent-rare suicide vocabulary (FR-SL) into the fine-tuning process of large language models (LLMs). This vocabulary-aware strategy directs the model's attention to both common and rare suicide-related phrases, enhancing its ability to detect subtle signs of distress. Beyond improving performance on various metrics, the proposed framework adds interpretability, making the models' decisions transparent and easier to trust, and yields a structure that generalizes across linguistic and mental-health domains. The approach offers clear improvements over baseline methods, especially in reducing false negatives and in overall interpretability through transparent attribution.
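
The abstract does not spell out how FR-SL is wired into fine-tuning; as one plausible reading of a "vocabulary-aware" strategy, the sketch below registers domain terms as whole tokens in a pretrained model's tokenizer before fine-tuning for binary risk classification with Hugging Face Transformers. The term list, checkpoint name, and example post are illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch: injecting a domain vocabulary (standing in for FR-SL) into a
# model's tokenizer before fine-tuning for suicide-risk classification.
# The term list, checkpoint, and example text are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

frequent_rare_terms = ["hopelessness", "self-harm", "burden"]  # placeholder FR-SL entries

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)

# Register the domain terms as whole tokens so the model can attend to them
# directly rather than splitting them into subwords; resize embeddings to match.
num_added = tokenizer.add_tokens(frequent_rare_terms)
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} domain tokens to the vocabulary.")

# Fine-tuning would then proceed as usual (e.g. with transformers.Trainer) on a
# labeled corpus of posts; that training setup is not published in the abstract.
inputs = tokenizer("I feel like a burden to everyone.", return_tensors="pt")
logits = model(**inputs).logits  # [not-at-risk, at-risk] scores (untrained here)
```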
3.

Building Safer Social Spaces: Addressing Body Shaming with LLMs and Explainable AI (Ministry of Science accredited scientific article)

Keywords: Body Shaming, Reddit, Machine Learning, Deep Learning, Large Language Models, Local Interpretable Model-agnostic Explanations, Content Moderation

This study tackles body shaming on Reddit using a novel dataset of 8,067 comments collected from June to November 2024, encompassing both external and self-directed harmful discourse. We assess traditional Machine Learning (ML), Deep Learning (DL), and transformer-based Large Language Models (LLMs) for detection, evaluated by accuracy, F1-score, and Area Under the Curve (AUC). A fine-tuned Psycho-Robustly Optimized BERT Pretraining Approach (Psycho-RoBERTa), pre-trained on psychological texts, performs best (accuracy: 0.98, F1-score: 0.994, AUC: 0.990), surpassing Extreme Gradient Boosting (XGBoost) (accuracy: 0.972) and a Convolutional Neural Network (CNN) (accuracy: 0.979) owing to its contextual sensitivity. Local Interpretable Model-agnostic Explanations (LIME) add transparency by identifying influential terms such as “fat” and “ugly.” A term co-occurrence network graph uncovers semantic links, such as between “shame” and “depression,” revealing discourse patterns. By targeting Reddit’s anonymity-driven subreddits, the dataset fills a platform-specific gap. Integrating LLMs, LIME, and graph analysis, we develop scalable tools for real-time moderation to foster inclusive online spaces. Limitations include the Reddit-specific data and the potential to miss implicit shaming. Future research should explore multi-platform datasets and few-shot learning. These findings advance Natural Language Processing (NLP) for cyberbullying detection, promoting safer social media environments.
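
Since the study relies on LIME to surface influential terms, the following minimal sketch shows how such term-level explanations are typically produced with the lime package. The predict_proba stub is a hypothetical stand-in for the fine-tuned Psycho-RoBERTa classifier, which is not available here; in practice it would return the transformer's class probabilities.

```python
# Minimal sketch of LIME for text classification, as used in the study to
# surface influential terms. predict_proba is a placeholder for any model
# exposing class probabilities (e.g. a fine-tuned Psycho-RoBERTa wrapper).
import numpy as np
from lime.lime_text import LimeTextExplainer

def predict_proba(texts):
    # Hypothetical stand-in: score texts by presence of a known shaming term.
    # In practice this would batch the texts through the fine-tuned transformer.
    p = np.array([0.9 if "ugly" in t.lower() else 0.1 for t in texts])
    return np.column_stack([1 - p, p])  # [P(neutral), P(body_shaming)]

explainer = LimeTextExplainer(class_names=["neutral", "body_shaming"])
exp = explainer.explain_instance(
    "You're so ugly, no one wants to see that.",
    predict_proba,
    num_features=5,
)
print(exp.as_list())  # (term, weight) pairs showing which words drove the label
```

LIME perturbs the input text, re-queries the classifier, and fits a local linear model, which is why it works with any black-box predict_proba function.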
4.

Consistent Responses to Paraphrased Questions as Evidence Against Hallucination: A Study on Hallucinations in LLMs (Ministry of Science accredited scientific article)

Keywords: Large Language Models, Hallucination of Large Language Models, Inconsistency Detection, Paraphrasing

The increasing adoption of large language models (LLMs) has intensified concerns about hallucinations—outputs that are syntactically fluent but factually incorrect. In this paper, we propose a method for detecting such hallucinations by evaluating the consistency of model responses to paraphrased versions of the same question. The underlying assumption is that if a model produces consistent answers across different paraphrases, the output is more likely to be accurate. To test this method, we developed a system that generates multiple paraphrases of each question and analyzes the consistency of the corresponding responses. Experiments were conducted using two LLMs—GPT-4o and LLaMA 3–70B Chat—on both Persian and English datasets. The method achieved an average accuracy of 99.5% for GPT-4o and 98% for LLaMA 3–70B, indicating the effectiveness of our approach in identifying hallucination-free outputs across languages. Furthermore, by automating the consistency evaluation using an instruction-tuned language model, we enabled scalable and unbiased detection of semantic agreement across paraphrased responses.
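
To make the consistency check concrete, below is a hedged sketch of the core loop: pose the same question in several paraphrased forms, collect the model's answers, and treat low pairwise semantic agreement as a hallucination signal. The ask_llm stub, the example questions, and the use of embedding similarity are assumptions; the paper itself automates agreement scoring with an instruction-tuned language model rather than embeddings.

```python
# Hedged sketch of paraphrase-consistency checking. ask_llm is a placeholder
# for any chat-completion call (e.g. GPT-4o or LLaMA 3-70B); the paper scores
# agreement with an instruction-tuned LLM, embedding similarity is a stand-in.
from itertools import combinations
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def ask_llm(question: str) -> str:
    raise NotImplementedError("call your LLM API here")  # placeholder

def consistency_score(paraphrases: list[str]) -> float:
    # Query the model once per paraphrase, then compare all answer pairs.
    answers = [ask_llm(q) for q in paraphrases]
    embs = embedder.encode(answers, convert_to_tensor=True)
    sims = [util.cos_sim(embs[i], embs[j]).item()
            for i, j in combinations(range(len(answers)), 2)]
    return sum(sims) / len(sims)  # mean pairwise semantic agreement

paraphrases = [
    "Who wrote the Shahnameh?",
    "Which poet is the author of the Shahnameh?",
    "The Shahnameh was composed by whom?",
]
# Low agreement (e.g. below ~0.8, an assumed threshold) suggests the answers
# are unstable across paraphrases and therefore possibly hallucinated.
# print(consistency_score(paraphrases))
```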