Developing Speech-based Chatbots for Model-Based Systems

Note: This subject can be adapted to the student’s interests.

Recent years have seen a significant increase in the capabilities of natural language and speech interfaces, as demonstrated by tools such as Siri, Alexa, ChatGPT, and Bard.
These applications not only increase convenience and usability for everyday users but also offer unprecedented potential for people with special needs.
Nonetheless, a problem arises when these capabilities must be adapted to specific domains and use cases.

Model-driven engineering (MDE) is a widely used engineering paradigm that lifts system development to a higher level of abstraction using modelling languages (e.g. UML, SysML).
The aim of this MSc project is to study how MDE-developed systems can be enhanced with speech-control capabilities.
The research goal is to develop, integrate, and evaluate natural language and speech-based capabilities in such a system.
As a result, the system should become capable of answering questions about itself and of interacting with users in a way similar to online chatbots.

At the current stage, we foresee two approaches that represent the two ends of a spectrum:
a) Parsing speech transcripts to identify user commands that are already available in the model editor’s default functionality, e.g. following a rule-based methodology (see the video by Jayaraman et al. [1] and the sketch directly after this list).
b) The use of AI (e.g. LLMs/ChatGPT’s speech I/O) to operate on a textual rendering of the UML diagrams, so that an “external” component parses, modifies, and returns the resulting models (see the combined sketch below).
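
To make the rule-based end of the spectrum concrete, the following minimal Python sketch maps speech transcripts onto editor commands using regular expressions. The command names (create_class, etc.) and the patterns are illustrative assumptions, not any model editor’s actual API:

    import re
    from typing import Optional

    # Hypothetical editor commands; a real model editor would expose its own
    # API for these operations, which the parser would invoke instead.
    COMMAND_PATTERNS = [
        (re.compile(r"(?:add|create)\s+(?:a\s+)?class\s+(?:named\s+|called\s+)?(\w+)", re.I),
         "create_class"),
        (re.compile(r"(?:delete|remove)\s+(?:the\s+)?class\s+(\w+)", re.I),
         "delete_class"),
        (re.compile(r"rename\s+(?:the\s+)?class\s+(\w+)\s+to\s+(\w+)", re.I),
         "rename_class"),
    ]

    def parse_transcript(transcript: str) -> Optional[tuple]:
        """Map a speech transcript to an editor command and its arguments."""
        for pattern, command in COMMAND_PATTERNS:
            match = pattern.search(transcript)
            if match:
                return command, match.groups()
        return None  # the utterance is not covered by the rule set

    print(parse_transcript("Please create a class called Sensor"))
    # -> ('create_class', ('Sensor',))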

Following the implementation of both methods, we expect that a combination of the two, as sketched below, will maximise usability and provide the best results.
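
To illustrate how such a combination might be wired up, the following sketch first tries the rule-based parser from above and only falls back to an LLM that operates on a textual rendering of the model (approach b). The choice of PlantUML as the textual rendering and of OpenAI’s chat API as the backend are assumptions made purely for illustration:

    from openai import OpenAI  # assumed backend; any chat-completion API would work

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def edit_model_via_llm(plantuml_model: str, user_request: str) -> str:
        """Ask the LLM to apply the requested change to the textual model."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You edit PlantUML class diagrams. Return only the "
                            "complete, modified PlantUML source."},
                {"role": "user",
                 "content": f"Current model:\n{plantuml_model}\n\nRequest: {user_request}"},
            ],
        )
        return response.choices[0].message.content

    def handle_utterance(transcript: str, plantuml_model: str):
        """Hybrid dispatch: try the rule-based parser first, fall back to the LLM."""
        parsed = parse_transcript(transcript)  # rule-based sketch from above
        if parsed is not None:
            command, args = parsed
            return "editor", command, args  # here the editor's own API would be called
        return "llm", edit_model_via_llm(plantuml_model, transcript)

A full implementation would additionally validate the LLM’s output, e.g. by feeding it through a PlantUML parser, before updating the model.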

Further reading material
[1] Jayaraman, Lehner, Klikovits, Wimmer. Towards Generating Model-Driven Speech Interfaces for Digital Twins (2023).

Supervisors: Manuel Wimmer and Stefan Klikovits
