M. Feneberger, S. Palmetshofer, R. Plösch: Can Engineered GenAI Prompts Help Students Write Better Software Requirements?, Proceedings of the 18th International Conference on the Quality of Information and Communications Technology (QUATIC) 2025, Lisbon, Portugal, September 3-5, 2025 (accepted for publication).


When writing software requirements, there are many best practices to consider. INCOSE maintains a set of writing guidelines comprising syntactic, semantic, and other rules. Today's GenAI models are known to perform well at general natural language processing tasks. However, for checking highly nuanced quality criteria such as the INCOSE rules, realizing the models' full potential depends on carefully engineered prompts. In this paper, we investigate how much a list of issues uncovered by such prompts helps students (the tool group) improve the software requirements of their programming projects, compared to students prompting ChatGPT ad hoc (the control group). The control group was also allowed to generate improvements. We evaluated the improvements by the number of issues before and after the revision. While the control group produced more improvement suggestions and increased average correctness by a larger margin (5 % vs. 2 %), their results also show a considerably higher standard deviation and many cases of strongly negative change in correctness. We did not observe this for the tool group. The tool group
students additionally rated the issues of the initial requirements. We analysed whether the participants prioritised fixing highly rated issues. The results confirm that, within an improvement suggestion, the probability of a highly rated issue being fixed is more than three times greater than for issues with other ratings. The probability of a requirement being improved is also weakly correlated (r = 0.23) with the number of highly rated issues in the requirement.