Content related to the keyword

Recurrent Neural Networks


1.

Developing a Stock Market Prediction Model by Deep Learning Algorithms (scientific article, Ministry of Science)

Keywords: Stock Price Prediction, Artificial Neural Networks, Deep Learning, Long Short-Term Memory, Recurrent Neural Networks

Views: 179 | Downloads: 117
For investors, predicting stock market changes has always been attractive and challenging, because it helps them accurately identify profits and reduce potential risks. Deep learning-based models, as a subset of machine learning, have received attention in the field of price prediction by improving on traditional neural network models. In this paper, we propose a model for predicting stock prices of Tehran Stock Exchange companies using a long short-term memory (LSTM) deep neural network. The model consists of two LSTM layers, one Dense layer, and two Dropout layers. Based on our analyses and evaluations, the adjusted stock price together with 12 technical indicator variables was taken as the model input. We used RMSE, MAE, and MAPE as criteria for assessing the model's predictive performance. According to the results, integrating technical indicators increases the model's accuracy in predicting the stock price, with the LSTM model outperforming the RNN model in this task.
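The LSTM layers described above rely on gated memory cells to carry information across time steps. A minimal NumPy sketch of one LSTM step follows; the hidden size, weight initialization, and input dimensionality (one price plus 12 indicators) are illustrative assumptions, not the paper's actual hyperparameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,) hold the
    stacked parameters for the input (i), forget (f), output (o) gates
    and the candidate cell update (g)."""
    H = h_prev.size
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:H])          # input gate: how much new information to write
    f = sigmoid(z[H:2*H])       # forget gate: how much old cell state to keep
    o = sigmoid(z[2*H:3*H])     # output gate: how much cell state to expose
    g = np.tanh(z[3*H:])        # candidate values for the cell state
    c = f * c_prev + i * g      # new cell state (long-term memory)
    h = o * np.tanh(c)          # new hidden state (short-term output)
    return h, c

# Toy run over a sequence of daily feature vectors (price + indicators).
rng = np.random.default_rng(0)
D, H, T = 13, 8, 5                       # assumed sizes: 13 features, hidden 8, 5 days
W = rng.normal(scale=0.1, size=(4*H, D))
U = rng.normal(scale=0.1, size=(4*H, H))
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)  # (8,)
```

In a full model such as the one in the paper, two of these layers would be stacked (each step's `h` feeding the next layer), with dropout applied between layers and a final dense layer mapping the last hidden state to the predicted price.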
2.

VG-CGARN: Video Generation Using Convolutional Generative Adversarial and Recurrent Networks (scientific article, Ministry of Science)

Views: 11 | Downloads: 9
Generating dynamic videos from static images and accurately modeling object motion within scenes are fundamental challenges in computer vision, with broad applications in video enhancement, photo animation, and visual scene understanding. This paper proposes a novel hybrid framework that combines convolutional neural networks (CNNs), recurrent neural networks (RNNs) with long short-term memory (LSTM) units, and generative adversarial networks (GANs) to synthesize temporally consistent and spatially realistic video sequences from still images. The architecture incorporates splicing techniques, the Lucas-Kanade motion estimation algorithm, and a loop feedback mechanism to address key limitations of existing approaches, including motion instability, temporal noise, and degraded video quality over time. CNNs extract spatial features, LSTMs model temporal dynamics, and GANs enhance visual realism through adversarial training. Experimental results on the KTH dataset, comprising 600 videos of fundamental human actions, demonstrate that the proposed method achieves substantial improvements over baseline models, reaching a peak PSNR of 35.8 and SSIM of 0.96—representing a 20% performance gain. The model successfully generates high-quality, 10-second videos at a resolution of 720×1280 pixels with significantly reduced noise, confirming the effectiveness of the integrated splicing and feedback strategy for stable and coherent video generation.
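The Lucas-Kanade algorithm mentioned above estimates motion from the brightness-constancy equation Ix·u + Iy·v + It = 0. As a hedged sketch, here is the simplest global-translation variant in NumPy (the paper presumably uses a per-pixel or pyramidal form; the Gaussian test image and offsets are purely illustrative):

```python
import numpy as np

def lucas_kanade_global(I1, I2):
    """Estimate a single translation (u, v) between two frames by least
    squares over all pixels of the brightness-constancy equation
    Ix*u + Iy*v + It = 0."""
    Iy, Ix = np.gradient(I1.astype(float))    # spatial gradients (axis 0 = y, axis 1 = x)
    It = I2.astype(float) - I1.astype(float)  # temporal gradient between frames
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return d                                  # [u, v]

# Synthetic check: a Gaussian blob moved by a known sub-pixel offset.
y, x = np.mgrid[0:64, 0:64].astype(float)
blob = lambda cx, cy: np.exp(-((x - cx)**2 + (y - cy)**2) / 50.0)
I1 = blob(32.0, 32.0)
I2 = blob(32.4, 31.8)                         # true motion: u = +0.4, v = -0.2
u, v = lucas_kanade_global(I1, I2)
```

In a pipeline like the one described, such motion estimates would condition the recurrent generator so that synthesized frames move coherently rather than flickering frame to frame.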