Authors: Boubou Niang, Ilyes Alili, Benoit Verhaeghe, Nicolas Hlad and Anas Shatnawi
Affiliation: Berger-Levrault, Limonest, France
Keyword(s): Model-Based Reverse Engineering, Code Generation, Large Language Models, Prompt Engineering, Context Enrichment.
Abstract: Large Language Models (LLMs) have shown considerable promise in automating software development tasks such as code completion, understanding, and generation. However, producing high-quality, contextually relevant code remains a challenge, particularly for complex or domain-specific applications. This paper presents an approach that enhances LLM-based code generation by integrating model-driven reverse engineering to supply richer contextual information. Our findings indicate that enriching prompts with unit tests and method dependencies significantly improves the accuracy and reliability of code generated for industrial projects. In contrast, simpler strategies based on method signatures alone perform comparably to enriched ones in open-source projects, suggesting that additional context is less critical in such environments. These results underscore the importance of structured input in improving LLM-generated code, particularly for industrial applications.
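To make the context-enrichment idea concrete, the sketch below assembles a generation prompt from elements that a model-based reverse-engineering step might recover for a target method (its signature, the signatures of methods it depends on, and its associated unit tests). This is a minimal illustrative sketch, not the paper's implementation: the MethodContext structure, build_prompt helper, and example Java snippets are all assumptions introduced here for illustration.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MethodContext:
        """Context recovered for one target method via reverse engineering.
        Field contents below are illustrative placeholders."""
        signature: str                                           # target method signature
        dependencies: List[str] = field(default_factory=list)   # signatures of called methods
        unit_tests: List[str] = field(default_factory=list)     # associated test methods

    def build_prompt(ctx: MethodContext) -> str:
        """Assemble a context-enriched generation prompt.

        Richer sections (dependencies, unit tests) are appended only when
        available, mirroring the comparison between signature-only and
        enriched prompting strategies described in the abstract."""
        parts = [
            "Implement the body of the following Java method:",
            ctx.signature,
        ]
        if ctx.dependencies:
            parts.append("It may call these existing methods:")
            parts.extend(ctx.dependencies)
        if ctx.unit_tests:
            parts.append("The implementation must pass these unit tests:")
            parts.extend(ctx.unit_tests)
        return "\n".join(parts)

    if __name__ == "__main__":
        ctx = MethodContext(
            signature="public BigDecimal computeInvoiceTotal(Invoice invoice)",
            dependencies=["BigDecimal applyTaxRate(BigDecimal amount, TaxRate rate)"],
            unit_tests=["@Test void totalIncludesTax() { ... }"],
        )
        # The resulting string would be sent to an LLM of your choice.
        print(build_prompt(ctx))

A signature-only strategy corresponds to calling build_prompt with empty dependencies and unit_tests, which is the baseline the enriched prompts are compared against.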