capture the actual impact of dynamic explainability
on trust and system control.
The results presented here show that explainable, self-adaptive AI systems operating on real-time data are not merely a theoretical vision but a practically implementable reality, provided that methodological robustness, system scalability, and a human-centered perspective receive equal consideration.