Towards a Trace-based Evaluation Model for Knowledge Acquisition and Training Resource Adaption

Soraya Chachoua, Nouredine Tamani, Jamal Malki, Pascal Estraillier

2016

Abstract

e-Assessment in an e-learning system aims at evaluating learners with respect to their knowledge acquisition. Available assessment methods are usually applied at the end of a training activity to state whether a given learner has passed or failed a training unit or level, based on the grades obtained. Most grading processes follow the SCORM standard (Scorm, 2006) and use duration and number of attempts to compute scores. This information is valuable for grading, but it can also be exploited to capture learner behaviour during a training activity, and thus to assess both the learner's knowledge acquisition and the adequacy of the training resources. Therefore, in this paper we consider duration and number of attempts as modeled traces, upon which we build a theoretical model for the automated evaluation of how a learner's knowledge acquisition evolves as a training activity progresses. The values obtained can be used to adapt training strategies and resources, improving both the learner's knowledge level and the quality of the e-learning platform.
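
The abstract only outlines the model, so the following minimal Python sketch illustrates the general idea: a per-exercise score derived from duration and number-of-attempts traces, aggregated over a training activity to follow the evolution of knowledge acquisition. The exponential penalty, the alpha and beta weights, and all identifiers (Trace, trace_score, acquisition_curve) are illustrative assumptions, not the formulas defined in the paper.

    from dataclasses import dataclass
    import math

    @dataclass
    class Trace:
        exercise_id: str
        duration: float   # seconds the learner spent on the exercise
        attempts: int     # number of attempts before completing it

    def trace_score(trace, expected_duration, alpha=0.5, beta=0.3):
        # Score in [0, 1] that decays as the time spent exceeds the
        # expected duration and as the number of attempts grows.
        # The exponential form and the weights are assumptions.
        time_penalty = max(0.0, trace.duration / expected_duration - 1.0)
        attempt_penalty = trace.attempts - 1
        return math.exp(-(alpha * time_penalty + beta * attempt_penalty))

    def acquisition_curve(traces, expected_duration):
        # Cumulative mean score over a training activity: a simple
        # proxy for how knowledge acquisition evolves over time.
        curve, total = [], 0.0
        for i, t in enumerate(traces, start=1):
            total += trace_score(t, expected_duration)
            curve.append(total / i)
        return curve

    traces = [Trace("ex1", 90.0, 3), Trace("ex2", 70.0, 2), Trace("ex3", 50.0, 1)]
    print(acquisition_curve(traces, expected_duration=60.0))

In this sketch, a curve that stays low across exercises would be read as a signal that the training resource is inadequate for the learner and should be adapted, mirroring the adaptation goal stated in the abstract.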

References

  1. Amelung, M., Krieger, K., and Rosner, D. (2011). E-assessment as a service. IEEE Transactions on Learning Technologies, 4(2):162-174.
  2. Andrews, J. H. (1998). Testing using log file analysis: tools, methods, and issues. In Proceedings of the 13th IEEE International Conference on Automated Software Engineering, pages 157-166. IEEE.
  3. Burstein, J., Leacock, C., and Swartz, R. (2001). Automated evaluation of essays and short answers.
  4. Cohen, J. (1977). Statistical power analysis for the behavioral sciences (rev. ed.). Lawrence Erlbaum Associates, Inc.
  5. Crisp, G. (2009). Interactive e-assessment: moving beyond multiple-choice questions. Centre for Learning and Professional Development. Adelaide: University of Adelaide, 3:12-31.
  6. Crooks, T. J. (1988). The impact of classroom evaluation practices on students. Review of educational research, 58(4):438-481.
  7. Djouad, T., Settouti, L. S., Prié, Y., Reffay, C., and Mille, A. (2010). Un système à base de traces pour la modélisation et l'élaboration d'indicateurs d'activités éducatives individuelles et collectives. Mise à l'épreuve sur Moodle. Technique et Science Informatiques.
  8. Guo, P. J. (2013). Online Python Tutor: Embeddable web-based program visualization for CS education. In Proceedings of the 44th ACM Technical Symposium on Computer Science Education, SIGCSE '13, pages 579-584, New York, NY, USA. ACM.
  9. Kozma, R. (2009). Transforming education: Assessing and teaching 21st century skills. The transition to computer-based assessment, 13.
  10. Kumar, R., Chung, G. K., Madni, A., and Roberts, B. (2015). First evaluation of the physics instantiation of a problem-solving-based online learning platform. In Artificial Intelligence in Education, pages 686-689. Springer.
  11. Laflaquière, J., Settouti, L. S., Prié, Y., and Mille, A. (2006). Trace-based framework for experience management and engineering. In Knowledge-Based Intelligent Information and Engineering Systems, pages 1171-1178. Springer.
  12. Lebis, A., Lefevre, M., Guin, N., and Luengo, V. (2015). Capitaliser les processus d'analyses de traces d'apprentissage indépendamment des plates-formes d'analyses de traces. Technical report, LIG-LIRIS. ANR Project HUBBLELEARN.
  13. Martin, R. (2008). New possibilities and challenges for assessment through the use of technology. Towards a research agenda on computer-based assessment, page 6.
  14. Mille, A., Champin, P.-A., Cordier, A., Georgeon, O., and Lefevre, M. (2013). Trace-based reasoning-modeling interaction traces for reasoning on experiences. In The 26th International FLAIRS Conference, pages 1-15.
  15. Nicol, D. (2007). E-assessment by design: using multiple-choice tests to good effect. Journal of Further and Higher Education, 31(1):53-64.
  16. Nicol, D. J. and Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in higher education, 31(2):199-218.
  17. Papamitsiou, Z. and Economides, A. A. (2015). Temporal learning analytics visualizations for increasing awareness during assessment. RUSC. Universities and Knowledge Society Journal, 12(3):129-147.
  18. Patelis, T. (2000). An overview of computer-based testing.
  19. Scheuermann, F. and Björnsson, J. (2009). The transition to computer-based assessment. Luxembourg: Office for Official Publications of the European Communities.
  20. Scorm (2006). SCORM 2004 Handbook. The e-Learning Consortium, Japan. Version 1.04.
  21. Settouti, L. S., Prié, Y., Champin, P.-A., Marty, J.-C., and Mille, A. (2009a). A trace-based systems framework: Models, languages and semantics.
  22. Settouti, L. S., Prié, Y., Cram, D., Champin, P.-A., and Mille, A. (2009b). A trace-based framework for supporting digital object memories. In Workshops Proceedings of the 5th International Conference on Intelligent Environments, Barcelona, Spain, 19th of July, 2009, pages 39-44.
  23. Settouti, L. S., Prié, Y., Marty, J., and Mille, A. (2009c). A trace-based system for technology-enhanced learning systems personalisation. In The 9th IEEE International Conference on Advanced Learning Technologies, ICALT 2009, Riga, Latvia, July 15-17, 2009, pages 93-97. IEEE Computer Society.
  24. Sireci, S. and Luecht, R. M. (2012). A review of models for computer-based testing.
  25. Thompson, N. and Weiss, D. (2009). Computerised and adaptive testing in educational assessment. The transition to computer-based assessment. New approaches to skills assessment and implications for large-scale testing, pages 127-133.
  26. Wandall, J. (2011). National tests in Denmark: CAT as a pedagogic tool. Association of Test Publishers, 12(1).
  27. Wilhelm, O. (2009). Issues in computerized ability measurement: Getting out of the jingle and jangle jungle. The transition to computer-based assessment, pages 145-150.
  28. Williamson, D. M., Xi, X., and Breyer, F. J. (2012). A framework for evaluation and use of automated scoring. Educational Measurement: Issues and Practice, 31(1):2-13.
  29. Yang, Y., Buckendahl, C. W., Juszkiewicz, P. J., and Bhola, D. S. (2002). A review of strategies for validating computer-automated scoring. Applied Measurement in Education, 15(4):391-412.


Paper Citation


in Harvard Style

Chachoua S., Tamani N., Malki J. and Estraillier P. (2016). Towards a Trace-based Evaluation Model for Knowledge Acquisition and Training Resource Adaption. In Proceedings of the 8th International Conference on Computer Supported Education - Volume 2: CSEDU, ISBN 978-989-758-179-3, pages 121-128. DOI: 10.5220/0005859801210128


in BibTeX Style

@conference{csedu16,
author={Soraya Chachoua and Nouredine Tamani and Jamal Malki and Pascal Estraillier},
title={Towards a Trace-based Evaluation Model for Knowledge Acquisition and Training Resource Adaption},
booktitle={Proceedings of the 8th International Conference on Computer Supported Education - Volume 2: CSEDU},
year={2016},
pages={121-128},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005859801210128},
isbn={978-989-758-179-3},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 8th International Conference on Computer Supported Education - Volume 2: CSEDU
TI - Towards a Trace-based Evaluation Model for Knowledge Acquisition and Training Resource Adaption
SN - 978-989-758-179-3
AU - Chachoua S.
AU - Tamani N.
AU - Malki J.
AU - Estraillier P.
PY - 2016
SP - 121
EP - 128
DO - 10.5220/0005859801210128