R. Ramler, P. Staudinger, R. Plösch, D. Winkler: Unit Testing Past vs. Present - Examining LLMs' Impact on Defect Detection and Efficiency, 18th IEEE International Conference on Software Testing, Verification and Validation (ICST 2025), Naples, Italy, March 31 - April 4, 2025. Accepted poster, available via arXiv.
The integration of Large Language Models (LLMs), such as ChatGPT and GitHub Copilot, into software engineering workflows has shown potential to enhance productivity, particularly in software testing. This paper investigates whether LLM support improves defect detection effectiveness during unit testing. Building on prior studies comparing manual and tool-supported testing, we replicated and extended an experiment in which participants, supported by LLMs, wrote unit tests for a Java-based system with seeded defects within a time-boxed session. Comparing LLM-supported and manual testing, the results show that LLM support significantly increases the number of unit tests generated, defect detection rates, and overall testing efficiency. These findings highlight the potential of LLMs to improve testing and defect detection outcomes and provide empirical insights into their practical application in software testing.