Software development in the age of intelligence: embracing large language models with the right approach
Frontiers of Information Technology & Electronic Engineering, 2023, Vol. 24, No. 11, Pages: 1513-1519
Affiliations:
School of Computer Science, Fudan University, Shanghai 200438, China
Author bio:
[
"Xin PENG is Professor and Deputy Dean of the School of Computer Science at Fudan University, China. He received his PhD in Computer Science from Fudan University in 2006. He is Deputy Director of the CCF (China Computer Federation) Technical Committee on Software Engineering. He is Co-Editor-in-Chief of Journal of Software: Evolution and Process and serves on the editorial boards of reputable journals, such as ACM Transactions on Software Engineering and Methodology, Empirical Software Engineering, and Chinese Journal of Software. His research interests include intelligent software development, cloud native and artificial intelligence for IT operations (AIOps), and software development and testing for smart vehicles. His work won the Best Paper Award of ICSM 2011, the ACM SIGSOFT Distinguished Paper Award of ASE 2018/2021 and ICPC 2022, the IEEE TCSE Distinguished Paper Award of ICSME 2018/2019/2020 and SANER 2023, and the IEEE Transactions on Software Engineering Best Paper Award for 2018."
]
Xin PENG. Software development in the age of intelligence: embracing large language models with the right approach [J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(11): 1513-1519. DOI: 10.1631/FITEE.2300537.
Brooks FP Jr, 1987. No silver bullet: essence and accidents of software engineering. Computer, 20(4):10-19. https://doi.org/10.1109/MC.1987.1663532
Dou SH, Shan JJ, Jia HX, et al., 2023. Towards understanding the capability of large language models on code clone detection: a survey. https://arxiv.org/abs/2308.01191
Du XY, Liu MW, Wang KX, et al., 2023. ClassEval: a manually-crafted benchmark for evaluating LLMs on class-level code generation. https://arxiv.org/abs/2308.01861
Hou XY, Zhao YJ, Liu Y, et al., 2023. Large language models for software engineering: a systematic literature review. https://arxiv.org/abs/2308.10620
Liu JW, Xia CS, Wang YY, et al., 2023. Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation. https://arxiv.org/abs/2305.01210
Meyer B, 2023. AI does not help programmers. Commun ACM, early access.
OpenAI, 2023. GPT-4 technical report. https://arxiv.org/abs/2303.08774
Wang JJ, Huang YC, Chen CY, et al., 2023. Software testing with large language model: survey, landscape, and vision. https://arxiv.org/abs/2307.07221
Welsh M, 2023. The end of programming. Commun ACM, 66(1):34-35. https://doi.org/10.1145/3570220
Wu QY, Bansal G, Zhang JY, et al., 2023. AutoGen: enabling next-gen LLM applications via multi-agent conversation. https://arxiv.org/abs/2308.08155
Yuan ZQ, Liu JW, Zi QC, et al., 2023a. Evaluating instruction-tuned large language models on code comprehension and generation. https://arxiv.org/abs/2308.01240
Yuan ZQ, Lou YL, Liu MW, et al., 2023b. No more manual tests? Evaluating and improving ChatGPT for unit test generation. https://arxiv.org/abs/2305.04207
Zhao WX, Zhou K, Li JY, et al., 2023. A survey of large language models. https://arxiv.org/abs/2303.18223
Zheng ZB, Ning KW, Chen JC, et al., 2023. Towards an understanding of large language models in software engineering tasks. https://arxiv.org/abs/2308.11396