ing maze. helping people with learning analytics and
chatbots to find personal career paths. Interna-
tional Journal of Information and Education Technol-
ogy, 13(3):423–429.
Ahmad, A., Schneider, J., Griffiths, D., Biedermann, D.,
Schiffner, D., Greller, W., and Drachsler, H. (2022).
Connecting the dots – a literature review on learn-
ing analytics indicators from a learning design per-
spective. Journal of Computer Assisted Learning,
n/a(n/a):1–39.
Ahmad, A., Schneider, J., Weidlich, J., Di Mitri, D.,
Yau, J. Y., Schiffner, D., and Drachsler, H. (2022).
What indicators can I serve you with? An evaluation of
a research-driven learning analytics indicator reposi-
tory. In Proceedings of the 14th International Confer-
ence on Computer Supported Education - Volume 1:
CSEDU, pages 58–68. INSTICC, SciTePress.
Al-Emran, M. and Granić, A. (2021). Is it still valid or
outdated? A bibliometric analysis of the technology
acceptance model and its applications from 2010 to
2020. In Al-Emran, M. and Shaalan, K., editors, Re-
cent Advances in Technology Acceptance Models and
Theories, pages 1–12. Springer International Publish-
ing, Cham.
Al-Emran, M., Mezhuyev, V., and Kamaludin, A. (2018).
Technology acceptance model in m-learning con-
text: A systematic review. Computers & Education,
125:389–412.
Bakharia, A., Corrin, L., de Barba, P., Kennedy, G.,
Gašević, D., Mulder, R., Williams, D., Dawson, S.,
and Lockyer, L. (2016). A conceptual framework
linking learning design with learning analytics. In
Proceedings of the Sixth International Conference on
Learning Analytics and Knowledge, LAK ’16, page
329–338, New York, NY, USA. Association for Com-
puting Machinery.
Bangor, A., Kortum, P. T., and Miller, J. T. (2008). An
empirical evaluation of the System Usability Scale.
International Journal of Human–Computer Interac-
tion, 24(6):574–594.
Baviskar, D., Ahirrao, S., Potdar, V., and Kotecha, K.
(2021). Efficient automated processing of the unstruc-
tured documents using artificial intelligence: A sys-
tematic literature review and future directions. IEEE
Access, 9:72894–72936.
Brooke, J. (1986). System Usability Scale (SUS): a quick-
and-dirty method of system evaluation user informa-
tion. Reading, UK: Digital Equipment Co. Ltd., 43:1–7.
Calle-Alonso, F., Cuenca-Guevara, A., de la Mata Lara, D.,
Sánchez-Gómez, J. M., Vega-Rodríguez, M. A., and
Sánchez, C. J. P. (2017). NeuroK: A collaborative
e-learning platform based on pedagogical principles
from neuroscience. In CSEDU (1), pages 550–555,
Porto, Portugal. SciTePress.
Ceylan, C. (2022). Application of Natural Language Pro-
cessing to Unstructured Data: A Case Study of Cli-
mate Change. PhD thesis, Massachusetts Institute of
Technology.
Chatti, M. A., Lukarov, V., Thüs, H., Muslim, A., Yousef,
A. M. F., Wahid, U., Greven, C., Chakrabarti, A., and
Schroeder, U. (2014). Learning analytics: Challenges
and future research directions. eleed, 10(1).
Chuttur, M. (2009). Overview of the technology acceptance
model: Origins, developments and future directions.
elibrary, 6:24.
Clavié, B. and Gal, K. (2019). EduBERT: Pretrained deep
language models for learning analytics. arXiv, n/a:4.
Collins, M. (2013). The naive Bayes model, maximum-
likelihood estimation, and the EM algorithm. Lecture
Notes, n/a:21.
Davis, F. D. (1985). A technology acceptance model for
empirically testing new end-user information systems:
Theory and results. PhD thesis, Massachusetts Insti-
tute of Technology.
Drachsler, H. and Goldhammer, F. (2020). Learning ana-
lytics and eAssessment—towards computational psy-
chometrics by combining psychometrics with learn-
ing analytics. In Burgos, D., editor, Radical Solutions
and Learning Analytics: Personalised Learning and
Teaching Through Big Data, pages 67–80. Springer
Singapore, Singapore.
Drew, M. R., Falcone, B., and Baccus, W. L. (2018). What
does the System Usability Scale (SUS) measure? In
Marcus, A. and Wang, W., editors, Design, User Ex-
perience, and Usability: Theory and Practice, pages
356–366, Cham. Springer International Publishing.
Ferguson, R. and Clow, D. (2017). Where is the evidence?
A call to action for learning analytics. In Proceedings
of the Seventh International Learning Analytics and
Knowledge Conference, LAK ’17, page 56–65, New
York, NY, USA. Association for Computing Machin-
ery.
Fiorini, N., Canese, K., Starchenko, G., Kireev, E., Kim, W.,
Miller, V., Osipov, M., Kholodov, M., Ismagilov, R.,
Mohan, S., et al. (2018). Best Match: new relevance
search for PubMed. PLoS Biology, 16(8):e2005343.
Fonferko-Shadrach, B., Lacey, A. S., Roberts, A., Akbari,
A., Thompson, S., Ford, D. V., Lyons, R. A., Rees,
M. I., and Pickrell, W. O. (2019). Using natural lan-
guage processing to extract structured epilepsy data
from unstructured clinic letters: development and val-
idation of the ExECT (extraction of epilepsy clinical
text) system. BMJ Open, 9(4):7.
Gültekin, G. and Bayat, O. (2022). A naïve Bayes prediction
model on location-based recommendation by integrat-
ing multi-dimensional contextual information. Multi-
media Tools and Applications, 81(5):6957–6978.
Günther, S. A. (2021). The impact of social norms on
students’ online learning behavior: Insights from two
randomized controlled trials. In LAK21: 11th Interna-
tional Learning Analytics and Knowledge Conference,
LAK21, page 12–21, New York, NY, USA. Associa-
tion for Computing Machinery.
Heldens, S., Sclocco, A., Dreuning, H., van Werkhoven,
B., Hijma, P., Maassen, J., and van Nieuwpoort, R. V.
(2022). litstudy: A python package for literature re-
views. SoftwareX, 20:101207.
Hindriks, S. (2020). A study on the user experience of the
ASReview software tool for experienced and unexperi-
enced users. M.S. thesis, Utrecht University.
KDIR 2023 - 15th International Conference on Knowledge Discovery and Information Retrieval