The use of large language models for program repair
Large Language Models (LLMs) have emerged as a promising approach to automated program repair, offering code comprehension and generation capabilities that can help address software bugs. Several LLM-based program repair models have been developed recently; however, the findings and insights from these efforts are scattered across individual studies, and a systematic overview of how LLMs are utilized in program repair has been lacking. This Systematic Literature Review (SLR) was therefore conducted to investigate the current landscape of LLM utilization in program repair. The study defined seven research questions and carefully selected 41 relevant studies from scientific databases to answer them. The results demonstrate the diverse capabilities of LLMs for program repair. The findings reveal that the Encoder-Decoder architecture is the most common LLM design for program repair tasks and that mostly open-access datasets are used. Several evaluation metrics were applied, primarily accuracy, exact match, and BLEU scores. Additionally, the review examined several LLM fine-tuning methods, including fine-tuning on specialized datasets, curriculum learning, iterative approaches, and knowledge-intensified techniques. These findings pave the way for further research on utilizing the full potential of LLMs to revolutionize automated program repair.
Other Information
Published in: Computer Standards & Interfaces
License: http://creativecommons.org/licenses/by/4.0/
See article on publisher's website: https://dx.doi.org/10.1016/j.csi.2024.103951
Funding
Open Access funding provided by the Qatar National Library.
Language
- English
Publisher
- Elsevier
Publication Year
- 2025
License statement
This item is licensed under the Creative Commons Attribution 4.0 International License.
Institution affiliated with
- Qatar University
- College of Engineering - QU