Semi-Automatic Assessment Approach to Programming Code for Novice Students

Selim Buyrukoglu, Firat Batmaz, Russell Lock

Abstract

Programming languages have been an integral element of the taught skills of many technical subjects in Higher Education for the last half century. Moreover, secondary school students have also recently started learning programming languages. This increase in the number of students learning programming makes the efficient and effective assessment of student work more important. This research focuses on one key approach to assessment using technology: the semi-automated marking of novice students' program code. The open-ended, flexible nature of programming means that no two substantial pieces of code are likely to be identical. However, it has been observed that a number of common code fragments recur within these dissimilar solutions. This observation forms the basis of our proposed approach. The initial research focuses on the 'if' structure to evaluate the theory behind the approach taken, which is appropriate given its commonality across programming languages. The paper also discusses the results of real-world analysis of novice students' programming code on 'if' structures. The paper concludes that the approach taken could form a more effective and efficient method for the assessment of student coding assignments.
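The observation that dissimilar solutions share common code fragments can be illustrated with a minimal sketch (this is an illustration only, not the algorithm described in the paper): normalising whitespace and identifier names reveals when two superficially different 'if' statements are, underneath, the same fragment. The function name and the tokenisation strategy below are assumptions made for the example.

```python
import re

def normalise_fragment(fragment: str) -> str:
    """Normalise a code fragment so superficially different student
    solutions can be compared: collapse whitespace and replace
    user-chosen identifiers with positional placeholders."""
    # Tokenise into identifiers, integer literals, and single symbols.
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", fragment)
    keywords = {"if", "elif", "else", "and", "or", "not"}
    mapping, out = {}, []
    for tok in tokens:
        if re.fullmatch(r"[A-Za-z_]\w*", tok) and tok not in keywords:
            # First-seen identifiers get VAR0, VAR1, ... in order.
            mapping.setdefault(tok, f"VAR{len(mapping)}")
            out.append(mapping[tok])
        else:
            out.append(tok)
    return " ".join(out)

# Two students express the same 'if' logic with different names and layout.
student_a = "if score >= 50: grade = 'pass'"
student_b = "if mark>=50:result='pass'"

print(normalise_fragment(student_a) == normalise_fragment(student_b))  # prints True
```

Under this normalisation both solutions reduce to the same token string, so a marker's feedback on one fragment could, in principle, be reused for the other.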



Paper Citation


in Harvard Style

Buyrukoglu, S., Batmaz, F. and Lock, R. (2016). Semi-Automatic Assessment Approach to Programming Code for Novice Students. In Proceedings of the 8th International Conference on Computer Supported Education - Volume 1: CSEDU, ISBN 978-989-758-179-3, pages 289-297. DOI: 10.5220/0005789802890297


in Bibtex Style

@conference{csedu16,
author={Selim Buyrukoglu and Firat Batmaz and Russell Lock},
title={Semi-Automatic Assessment Approach to Programming Code for Novice Students},
booktitle={Proceedings of the 8th International Conference on Computer Supported Education - Volume 1: CSEDU},
year={2016},
pages={289-297},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005789802890297},
isbn={978-989-758-179-3},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 8th International Conference on Computer Supported Education - Volume 1: CSEDU
TI - Semi-Automatic Assessment Approach to Programming Code for Novice Students
SN - 978-989-758-179-3
AU - Buyrukoglu S.
AU - Batmaz F.
AU - Lock R.
PY - 2016
SP - 289
EP - 297
DO - 10.5220/0005789802890297
ER -