Collective Research Project
LISA helps Flemish software companies build AI products that are testable, secure, observable, and practical to operate beyond the demo stage.
Generative AI is on the rise, with large language models leading the way. These technologies offer new ways to implement existing functionality, including natural language processing tasks such as entity extraction, sentiment analysis, and summarization. They also enable new user experiences, such as chat-based interfaces and agents. Many software vendors are expected to add LLM-powered features to their products in the coming years.
While building a proof of concept with LLMs is relatively straightforward thanks to supportive technologies such as LangChain, Ollama, or CrewAI, deploying these systems in a secure, robust, and cost-effective manner is far from trivial. How do you unit test an application where a crucial component behaves non-deterministically? How do you ensure system outputs remain permissible and reliable over time?
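One common answer to the testing question is to inject the model as a dependency and assert properties of the output rather than exact strings: in unit tests the model is replaced by a deterministic stub, and the same property checks can later run against live outputs to catch drift. The sketch below illustrates this pattern; the names `extract_entities`, `check_output`, and `stub_llm` are illustrative, not part of any real library.

```python
import json

def extract_entities(text, llm):
    """Ask an LLM for entities and parse its JSON reply.
    `llm` is any callable str -> str (a hypothetical interface),
    so tests can swap in a stub for the real model client."""
    raw = llm(f"Extract person names as a JSON list from: {text}")
    return json.loads(raw)

def check_output(entities):
    """Property-based checks: instead of asserting one exact answer
    (which a non-deterministic model cannot guarantee), verify
    invariants that any acceptable reply must satisfy."""
    assert isinstance(entities, list)
    assert all(isinstance(e, str) and e.strip() for e in entities)
    return True

def stub_llm(prompt):
    # Deterministic stand-in for the real model during unit tests.
    return '["Ada Lovelace", "Alan Turing"]'

entities = extract_entities("Ada Lovelace met Alan Turing.", stub_llm)
print(check_output(entities))  # prints True
```

The same `check_output` invariants can be evaluated on a sample of production responses, turning the unit-test properties into a lightweight ongoing monitor for output validity.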
In response to these challenges, Sirris and the DistriNet Research Unit at KU Leuven initiated LISA: LLM Implementation, Security & Adaptation. The project consolidates knowledge on integrating LLMs into new and existing digital applications for Flemish software companies, with emphasis on robustness, cybersecurity, performance, cost-efficiency, and regulatory compliance.
What LISA Covers
The project focuses on the operational realities of integrating large language models into existing software products: security, reliability, cost, and compliance.
For Whom
LISA is aimed at Flemish SMEs building and running digital services that need robust AI capabilities, not just prototypes.
How It Works
Sirris and KU Leuven DistriNet gather lessons from the field and share them through workshops, webinars, and reusable material.