executing MI tasks within the 3D Tetris environment
exhibited a significantly greater enhancement in
generating MI-related ERD/ERS. Game score
analysis revealed a clear upward trend in player scores within the 3D environment, whereas no significant trend was observed in the 2D environment. These findings indicate that an
immersive and control-rich MI environment can
improve relevant mental imagery and enhance MI-
based BCI skills (Li et al., 2017).
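ERD/ERS is conventionally quantified as the relative band-power change in a task window with respect to a rest baseline: negative values indicate ERD (a power drop during imagery), positive values ERS. The sketch below shows this standard formula on illustrative numbers; the helper names and sample values are our own assumptions, not taken from Li et al.

```python
# Sketch of the standard ERD/ERS quantification: percentage band-power
# change during motor imagery relative to a rest baseline.
# Negative result = ERD (power decrease), positive result = ERS.

def band_power(samples):
    """Mean squared amplitude as a simple band-power estimate."""
    return sum(x * x for x in samples) / len(samples)

def erd_ers_percent(baseline, task):
    """ERD/ERS% = (P_task - P_baseline) / P_baseline * 100."""
    p_ref = band_power(baseline)
    p_task = band_power(task)
    return (p_task - p_ref) / p_ref * 100.0

rest = [2.0, -2.0, 2.0, -2.0]     # illustrative mu-band samples at rest
imagery = [1.0, -1.0, 1.0, -1.0]  # reduced amplitude during MI
print(erd_ers_percent(rest, imagery))  # -75.0, i.e. a 75% ERD
```

In practice the samples would be band-pass-filtered EEG from sensorimotor channels, but the relative-change formula is the same.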
Larsen et al. presented a synchronization method
for multimodal physiological data streams,
specifically integrating EEG with eye-tracking within
a VR headset. They implemented a hybrid SSVEP-
based BCI speller within a fully immersive VR
environment as a proof-of-concept use case.
Hardware latency analysis indicated an average offset
of 36 ms and an average jitter of 5.76 ms between the
EEG and eye-tracking data streams. The proposed
VR-BCI speller concept demonstrated its potential
for real-world applications. These results confirm the
feasibility of combining EEG and VR technology for
neuroscientific research, establishing new pathways
for studying brain activity within VR environments.
This work also lays the groundwork for refining
synchronization methods and exploring application
scenarios such as learning and social interaction
(Larsen et al., 2024).
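The offset and jitter figures reported above can be understood with a minimal timestamp-alignment sketch: given pairs of timestamps for shared synchronization events, the constant offset is the mean pairwise difference and the jitter its spread, after which one stream can be shifted onto the other's clock. The function names and event timestamps below are illustrative assumptions, not Larsen et al.'s method or data.

```python
import statistics

# Sketch: estimate the constant offset and jitter between two data streams
# from paired timestamps (in seconds) of shared sync events, then shift
# one stream onto the other's clock. All values are illustrative.

def estimate_offset_jitter(ts_eeg, ts_eye):
    """Offset = mean pairwise difference; jitter = stdev of those differences."""
    diffs = [a - b for a, b in zip(ts_eeg, ts_eye)]
    return statistics.mean(diffs), statistics.stdev(diffs)

def align(ts_eye, offset):
    """Shift eye-tracking timestamps onto the EEG clock."""
    return [t + offset for t in ts_eye]

eeg = [1.036, 2.040, 3.031, 4.037]  # EEG-side event timestamps
eye = [1.000, 2.000, 3.000, 4.000]  # eye-tracker-side timestamps
offset, jitter = estimate_offset_jitter(eeg, eye)
print(f"offset ≈ {offset * 1000:.0f} ms, jitter ≈ {jitter * 1000:.2f} ms")
```

Frameworks such as Lab Streaming Layer apply the same idea continuously, which is what makes sub-millisecond-scale corrections feasible in practice.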
4 CURRENT LIMITATIONS AND
FUTURE OUTLOOK
Game development based on unimodal physiological signals is relatively mature. Unimodal signals, such as electrodermal activity (EDA) and EEG, provide valuable insight into player emotions during gameplay. This approach enables developers to create games that adapt to players’ emotional responses, enhancing engagement and immersion. By leveraging physiological signals, games can adjust dynamically to player reactions: for instance, game difficulty or narrative elements can be modified in real time according to a player’s stress or excitement level, yielding a more personalized gaming experience.
The commercial viability of games incorporating physiological signals is also growing. The release of controllers with integrated physiological sensors, such as Sony’s DualSense, signifies a trend toward mainstream acceptance of biofeedback in gaming and will likely drive broader adoption of physiological signals in game design (Hughes & Jorda, 2021).
Multimodal integration, by contrast, combines multiple physiological signals to deliver richer, more accurate gaming experiences, significantly boosting player interest. Nevertheless, multimodal game development faces substantial technical challenges: effectively integrating and optimizing heterogeneous physiological signals remains the primary hurdle. Future research should prioritize optimizing signal utilization, to improve both player experience and therapeutic outcomes, and should explore the full potential of multimodal approaches to overcome these limitations.
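The real-time adaptation loop described above can be sketched minimally as follows: two hypothetical normalized indices (EDA-derived arousal and an EEG engagement measure) are fused into one estimate, which then drives a difficulty adjustment. All function names, weights, and thresholds are illustrative assumptions, not taken from any cited system.

```python
# Sketch of physiology-driven dynamic difficulty adjustment.
# Weights and thresholds are illustrative placeholders.

def fuse_arousal(eda_arousal: float, eeg_engagement: float,
                 w_eda: float = 0.6, w_eeg: float = 0.4) -> float:
    """Weighted average of two indices, each assumed normalized to [0, 1]."""
    return w_eda * eda_arousal + w_eeg * eeg_engagement

def adjust_difficulty(current: float, stress: float,
                      low: float = 0.3, high: float = 0.7,
                      step: float = 0.1) -> float:
    """Raise difficulty when the player seems under-aroused (bored),
    lower it when over-aroused (stressed), otherwise leave it unchanged."""
    if stress > high:
        current -= step
    elif stress < low:
        current += step
    return min(1.0, max(0.0, current))  # clamp to the valid range

stress = fuse_arousal(0.9, 0.8)              # clearly over-aroused player
difficulty = adjust_difficulty(0.5, stress)
print(round(stress, 2), round(difficulty, 2))  # 0.86 0.4
```

Even this toy loop shows the core integration problem: the two signals arrive at different rates and scales, so normalization and weighting must be tuned before fusion is meaningful.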
5 CONCLUSIONS
As games evolve from entertainment products toward
intelligent affective interaction media, integrating
physiological signals (EEG, ECG, GSR, eye tracking,
EMG, etc.) into game design is becoming a critical
technological pathway to enhance user immersion,
engagement, and interest. This paper systematically
reviews the sensing mechanisms of common
physiological signals and examines the distinct
contributions and applications of unimodal versus multimodal physiological signals in game
development.
The research first explains the principles and
game interaction potential of EEG (including SSVEP,
P300, MI), eye tracking, EOG, and EMG signals. It
then focuses on two key dimensions: applications of unimodal physiological signals in games, and game development with multimodal physiological signals. The unimodal analysis highlights the technical simplicity of single-signal designs through case studies including brain-controlled games (focus/blink-controlled ball movement, MI-based running games) and EMG-driven VR rehabilitation training, demonstrating their effectiveness in enhancing immersion and enabling specific functional control. The multimodal analysis explores fusion techniques such as EEG+EOG for MI intention recognition and EEG+eye-tracking integration in VR, establishing the value of multimodal approaches in delivering richer adaptive experiences, improving interaction robustness (especially in VR scenarios), and facilitating user skill acquisition.
This study constructs a methodological
framework for physiological signal selection and
fusion design, summarizing key technologies. This
paper concludes that while unimodal approaches
offer simplicity, they provide limited experiential
dimensions; multimodal integration substantially
enhances experiential richness and accuracy but faces
challenges in technical integration. Future research