
Large Language Models for Student Code Evaluation: Insights and Accuracy

Authors: Alfonso Piscitelli, Mattia De Rosa, Vittorio Fuccella and Gennaro Costagliola

Affiliation: Department of Informatics, University of Salerno, Fisciano (SA), Italy

Keyword(s): Programming Education, Large Language Models, Automatic Code Evaluation.

Abstract: The improved capabilities of Large Language Models (LLMs) enable their use in various fields, including education. Teachers and students already use LLMs to support teaching and learning. In this study, we measure the accuracy of the LLMs gpt-3.5, gpt-4o, claude-sonnet-20241022, and llama3 in correcting and evaluating students’ programming assignments. Seven assignments carried out by 50 students were evaluated using three different prompting strategies for each of the four LLMs. We then compared the generated grades with those assigned by the teacher, who had corrected the assignments manually throughout the year. The results showed that models such as llama3 and gpt-4o produced evaluations for only a low percentage of the submissions, while gpt-3.5 and claude-sonnet-20241022 achieved interesting results when given at least one example evaluation.
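The paper's exact prompts and pipeline are not reproduced here, but the kind of setup the abstract describes, sending a student's solution to an LLM together with the assignment text and, in some prompting strategies, a teacher-graded example, can be sketched roughly as below. Everything in this sketch (the prompt wording, the 0-30 grading scale, the grade_submission helper, and the use of the OpenAI chat API) is an assumption for illustration, not the authors' implementation.

```python
# Illustrative sketch only: the prompt text, grading scale, model name, and
# helper names are hypothetical and not taken from the paper.
import re
from openai import OpenAI  # assumes the official openai>=1.0 client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a programming teacher. Grade the student's solution to the given "
    "assignment on a 0-30 scale and reply with the grade on the first line, "
    "followed by a short justification."
)

def grade_submission(assignment: str, solution: str,
                     example: tuple[str, str, str] | None = None,
                     model: str = "gpt-4o") -> int | None:
    """Ask an LLM for a grade; optionally include one teacher-graded example
    (a one-shot strategy, similar in spirit to giving the model at least one
    example evaluation as the abstract mentions)."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if example is not None:
        ex_assignment, ex_solution, ex_grade = example
        # The graded example is presented as a prior user/assistant exchange.
        messages.append({"role": "user",
                         "content": f"Assignment:\n{ex_assignment}\n\nSolution:\n{ex_solution}"})
        messages.append({"role": "assistant", "content": ex_grade})
    messages.append({"role": "user",
                     "content": f"Assignment:\n{assignment}\n\nSolution:\n{solution}"})

    response = client.chat.completions.create(model=model, messages=messages, temperature=0)
    text = response.choices[0].message.content or ""
    match = re.search(r"\d+", text)               # extract the numeric grade, if present
    return int(match.group()) if match else None  # None = no usable evaluation generated
```

Agreement with the teacher could then be summarized, for instance, as the mean absolute difference between generated and manual grades over the submissions for which the model returned a grade.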

CC BY-NC-ND 4.0


Paper citation in several formats:
Piscitelli, A., De Rosa, M., Fuccella, V. and Costagliola, G. (2025). Large Language Models for Student Code Evaluation: Insights and Accuracy. In Proceedings of the 17th International Conference on Computer Supported Education - Volume 2: CSEDU; ISBN 978-989-758-746-7; ISSN 2184-5026, SciTePress, pages 534-544. DOI: 10.5220/0013287500003932

@conference{csedu25,
author={Alfonso Piscitelli and Mattia {De Rosa} and Vittorio Fuccella and Gennaro Costagliola},
title={Large Language Models for Student Code Evaluation: Insights and Accuracy},
booktitle={Proceedings of the 17th International Conference on Computer Supported Education - Volume 2: CSEDU},
year={2025},
pages={534-544},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013287500003932},
isbn={978-989-758-746-7},
issn={2184-5026},
}

TY - CONF

JO - Proceedings of the 17th International Conference on Computer Supported Education - Volume 2: CSEDU
TI - Large Language Models for Student Code Evaluation: Insights and Accuracy
SN - 978-989-758-746-7
IS - 2184-5026
AU - Piscitelli, A.
AU - De Rosa, M.
AU - Fuccella, V.
AU - Costagliola, G.
PY - 2025
SP - 534
EP - 544
DO - 10.5220/0013287500003932
PB - SciTePress