decision-making process difficult to interpret, which
can hinder practical financial decision-making.
Therefore, although DNNs perform well in stock
price prediction, these challenges need to be
addressed to improve the accuracy and stability of
predictions.
2.3.2 LSTM-Based Prediction
Stock price prediction based on Long Short-Term Memory (LSTM) networks uses this deep learning model to analyze and predict the future movements of stock prices. LSTM is a special type of recurrent neural network (RNN) designed to process and learn long-term dependencies in time-series data, overcoming the vanishing- and exploding-gradient problems that traditional RNNs face on long sequences. LSTM is able to effectively capture
complex patterns in stock price data, including
seasonal and long-term trends, which makes it
particularly suitable for time series forecasting. Its
memory mechanism allows the network to retain and update information over long time horizons, thereby improving the prediction of stock price movements. By stacking multiple LSTM layers, the model can learn deeper features in the data and further improve prediction accuracy.
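To make the stacked architecture concrete, the following is a minimal sketch in Keras (not taken from the original study); the look-back window, layer widths, and optimizer are illustrative assumptions:

```python
# Minimal sketch of a stacked LSTM for one-step-ahead price prediction.
# All hyperparameters (window length, layer widths) are illustrative assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW = 60     # assumed look-back window of 60 trading days
N_FEATURES = 1  # univariate input: closing price only

model = Sequential([
    # First LSTM layer returns the full sequence so a second layer can be stacked on it.
    LSTM(64, return_sequences=True, input_shape=(WINDOW, N_FEATURES)),
    # Second LSTM layer returns only its final hidden state.
    LSTM(32),
    # Single output unit: the predicted (normalized) next-day price.
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```

Setting return_sequences=True on the first layer is what allows a second LSTM layer to be stacked on top of it, since the second layer needs the full sequence of hidden states as input.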
Applying LSTM to stock price prediction usually involves the following steps: first, the data
preparation stage requires collecting and cleaning
historical stock price data, including processing
missing values and normalization. Next, the LSTM
model is constructed, the network structure is
designed, and appropriate hyperparameters are
selected. During training, the network weights are optimized on historical data to minimize the prediction error. The trained model is then evaluated to verify its prediction performance using metrics such as Mean Squared Error (MSE) and to
check the generalization ability of the model through
cross-validation. Finally, the model is deployed for
real-time forecasting with continuous monitoring and
retraining to adapt to market changes.
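As a hedged illustration of this workflow, the sketch below continues from the model defined in the previous snippet; the file name stock_prices.csv, the close column, and all hyperparameters are assumptions for the sketch rather than details from the paper:

```python
# Sketch of the workflow described above: prepare data, train, evaluate.
# File name, column name, and hyperparameters are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# 1. Data preparation: load historical prices, drop missing values, normalize.
prices = pd.read_csv("stock_prices.csv")["close"].dropna().to_numpy().reshape(-1, 1)
scaler = MinMaxScaler()
scaled = scaler.fit_transform(prices)

# 2. Build supervised samples: each 60-day window predicts the next day's price.
WINDOW = 60
X = np.array([scaled[i:i + WINDOW] for i in range(len(scaled) - WINDOW)])
y = scaled[WINDOW:]

# Chronological train/test split (no shuffling, to avoid look-ahead leakage).
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# 3. Train the stacked LSTM defined earlier to minimize prediction error.
model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.1)

# 4. Evaluate: report MSE on the held-out period, back in original price units.
pred = scaler.inverse_transform(model.predict(X_test))
true = scaler.inverse_transform(y_test)
print("Test MSE:", mean_squared_error(true, pred))
```

Note that for time-series data, the cross-validation mentioned above is typically performed with a walk-forward (rolling-origin) scheme rather than randomly shuffled folds, so that the model is never validated on data that precedes its training period.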
3 DISCUSSION
In the current research on stock market forecasting
and algorithmic trading, AI technologies still face
many challenges and limitations despite
demonstrating their potential. First, model
interpretability is a notable issue. Complex AI
models, especially deep learning networks such as
LSTM, are often viewed as "black boxes," making it
difficult for investors and regulators to understand
their decision-making process. This lack of
transparency not only reduces trust in AI-driven
strategies but may also pose challenges in terms of
regulatory compliance. In addition, while AI models excel in laboratory settings, their applicability in real, highly volatile financial markets remains uncertain: models trained on historical data often perform poorly in the face of unprecedented market events or volatility, reflecting a lack of generalization capability that limits their practical value. Relatedly, relying on historical data to train AI models may lead to overfitting, in which the model performs very well on past data but fails to generalize to new, unseen scenarios. This limitation hinders the practical application of AI in dynamic market conditions.
Quantitative Investment (QI) has demonstrated unique advantages by relying on mathematical models and algorithms to make data-driven investment decisions. However, these models also face multiple limitations and challenges. First, model overfitting is a major issue, especially when a model relies too heavily on historical data during training, leading to unsatisfactory performance in real markets. In addition, the effectiveness of quantitative investing depends on data quality, and any errors or noise in the data may lead to erroneous investment decisions. The changing dynamics of financial
markets also pose a challenge to quantitative models,
as sudden market events or policy changes may
invalidate models based on past data.
Another key challenge is the transparency and interpretability of models: when complex machine learning algorithms are used, investors and regulators may find it difficult to understand the model's decision-making process. The growing popularity of algorithmic trading could also increase market volatility and even trigger phenomena such as flash crashes. Moreover, the heavy demands on technological infrastructure mean that building and maintaining these models requires substantial computing power and incurs high costs, which can be prohibitive for small investment firms or individual investors. In addition, as
regulators increase their focus on algorithmic trading,
compliance issues may limit the use of certain
strategies or increase the complexity of
implementation. Finally, quantitative models may
perform well at small scales, but market impact and