LISA: LLM Implementation, Security & Adaptation

Generative AI is on the rise, with large language models (LLMs) leading the way. These technologies offer new possibilities for implementing existing functionality, such as natural language processing (NLP) tasks like entity extraction, sentiment analysis, and summarisation. They also enable entirely new user experiences, such as chat-based interfaces and agents. Many software developers are expected to equip their products with LLMs in the coming years.

While building a proof of concept with LLMs is relatively straightforward thanks to supporting technologies such as LangChain, Ollama, and CrewAI, deploying these systems in a secure, robust, and cost-effective manner is far from trivial. How do you unit test an application in which a crucial component (the LLM) behaves non-deterministically? And how do you ensure that your system's outputs are permissible at all times?
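One common mitigation is to isolate the non-deterministic LLM call behind a small interface, so that the surrounding application logic can be tested deterministically, and to assert properties of the output (length, format, policy constraints) rather than exact strings. The sketch below illustrates this idea in Python with pytest-style tests; the names `LLMClient`, `SummariserService`, and `FakeLLM` are hypothetical and not part of LangChain, Ollama, or CrewAI.

```python
# Illustrative sketch only: LLMClient, SummariserService, and FakeLLM are
# hypothetical names, not part of any of the libraries mentioned above.
from typing import Protocol


class LLMClient(Protocol):
    """Minimal interface behind which the real (non-deterministic) model sits."""

    def complete(self, prompt: str) -> str: ...


class SummariserService:
    """Wraps an LLM call so the surrounding logic stays unit-testable."""

    def __init__(self, llm: LLMClient, max_words: int = 50):
        self.llm = llm
        self.max_words = max_words

    def summarise(self, text: str) -> str:
        prompt = f"Summarise in at most {self.max_words} words:\n{text}"
        summary = self.llm.complete(prompt).strip()
        # Enforce a hard guardrail regardless of what the model returns.
        return " ".join(summary.split()[: self.max_words])


class FakeLLM:
    """Deterministic stand-in for the real model in unit tests."""

    def __init__(self, canned_response: str):
        self.canned_response = canned_response

    def complete(self, prompt: str) -> str:
        return self.canned_response


def test_summary_respects_word_limit():
    # Assert a property of the output, not its exact wording.
    fake = FakeLLM("word " * 200)
    service = SummariserService(fake, max_words=50)
    assert len(service.summarise("some long document").split()) <= 50
```

The same pattern extends to output guardrails: the wrapper, not the model, is made responsible for enforcing constraints, and those constraints are what the tests verify.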

In response to these challenges, Sirris and the DistriNet research unit at KU Leuven have initiated the collective research project LISA (LLM Implementation, Security & Adaptation). LISA aims to consolidate knowledge on integrating LLMs into existing or new digital applications. The project targets Flemish software companies (SMEs) that develop and operate digital products. The research focuses on ensuring robustness, cybersecurity, performance, cost-efficiency, and regulatory compliance when integrating LLMs into proprietary software.