LLMs as Code Generators for Model-Driven Development

Yoonsik Cheon

2025

Abstract

Model-Driven Development (MDD) aims to automate code generation from high-level models but traditionally relies on platform-specific tools. This study explores the feasibility of using large language models (LLMs) for MDD by evaluating ChatGPT’s ability to generate Dart/Flutter code from UML class diagrams and OCL constraints. Our findings show that LLM-generated code accurately implements OCL constraints, automates repetitive scaffolding, and maintains structural consistency with human-written code. However, challenges remain in verifying correctness, optimizing performance, and improving modular abstraction. While LLMs show strong potential to reduce development effort and better enforce model constraints, further work is needed to strengthen verification, boost efficiency, and enhance contextual awareness for broader adoption in MDD workflows.
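To make the abstract's claim concrete, the study evaluates whether an LLM can turn a UML class annotated with OCL constraints into working Dart/Flutter code. The paper's actual models and generated code are not reproduced here; the sketch below is a purely hypothetical illustration of what such a mapping might look like, using an invented Account class with the OCL invariant balance >= 0 (the class, names, and constraint are assumptions for illustration, not taken from the study).

// Hypothetical sketch only: an invented UML class `Account` with the OCL
// invariant `inv: balance >= 0`, written in the style of code an LLM might
// generate from such a model. Nothing below is taken from the paper.
class Account {
  double _balance;

  Account(double balance) : _balance = balance {
    _checkInvariant(); // establish the invariant at construction time
  }

  double get balance => _balance;

  void deposit(double amount) {
    _balance += amount;
    _checkInvariant();
  }

  void withdraw(double amount) {
    _balance -= amount;
    _checkInvariant();
  }

  // OCL invariant `balance >= 0`, re-checked after every state change
  // (Dart asserts run only in debug/checked mode).
  void _checkInvariant() {
    assert(_balance >= 0, 'OCL invariant violated: balance >= 0');
  }
}

void main() {
  final account = Account(100)
    ..deposit(50)
    ..withdraw(30);
  print(account.balance); // 120
}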

Paper Citation


in Harvard Style

Cheon, Y. (2025). LLMs as Code Generators for Model-Driven Development. In Proceedings of the 20th International Conference on Software Technologies - Volume 1: ICSOFT; ISBN 978-989-758-757-3, SciTePress, pages 386-393. DOI: 10.5220/0013580300003964


in BibTeX Style

@conference{icsoft25,
  author={Yoonsik Cheon},
  title={LLMs as Code Generators for Model-Driven Development},
  booktitle={Proceedings of the 20th International Conference on Software Technologies - Volume 1: ICSOFT},
  year={2025},
  pages={386-393},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0013580300003964},
  isbn={978-989-758-757-3},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 20th International Conference on Software Technologies - Volume 1: ICSOFT
TI - LLMs as Code Generators for Model-Driven Development
SN - 978-989-758-757-3
AU - Cheon Y.
PY - 2025
SP - 386
EP - 393
DO - 10.5220/0013580300003964
PB - SciTePress