Webinar on Observability of AI-based Applications
Date: December 16, 2025
Time: 13:00–14:00
Calendar Event: Download ICS
Recording: Download video
Testing during development has long been the de facto standard for ensuring software quality. This approach works well for systems that process structured data deterministically. With generative AI (large language models in particular), software can now handle unstructured data, often in a non-deterministic way. This shift makes it harder to guarantee quality during development alone, hence the need to incorporate observability in production, which is the focus of this webinar.
Participants will gain an understanding of state-of-the-art approaches to monitoring AI-based software. In particular, the webinar will explore the following aspects:
- Evals: Currently considered best practice for specifying the expected behavior of AI-based software, evals go beyond simple unit tests to handle unstructured data and non-deterministic outputs (see the sketch after this list).
- Debugging: Observability is not only about detecting failures but also about providing valuable information to fix them. For example, agentic systems may produce unexpected task decompositions or autonomously follow erroneous workflows.
- Cost monitoring: Prompting LLMs in the cloud can quickly become expensive. Monitoring usage helps identify opportunities to reduce costs.
- Latency monitoring: Whether running in the cloud or on-premise, latency can impact time-sensitive tasks, especially when multiple prompts are required.
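To illustrate how an eval differs from a unit test, here is a minimal, self-contained Python sketch: instead of asserting an exact output, it scores non-deterministic responses against criteria and aggregates the result over several cases. All names (EvalCase, run_eval, fake_llm) are hypothetical and not tied to any particular framework or to the tools shown in the webinar.

```python
# Minimal sketch of an "eval": score model responses against criteria
# and aggregate over many cases, rather than asserting one exact string.
from dataclasses import dataclass


@dataclass
class EvalCase:
    prompt: str
    must_mention: list[str]  # criteria instead of an exact expected output


def score(response: str, case: EvalCase) -> float:
    """Return the fraction of required keywords present in the response."""
    hits = sum(1 for kw in case.must_mention if kw.lower() in response.lower())
    return hits / len(case.must_mention)


def run_eval(generate, cases: list[EvalCase], threshold: float = 0.8) -> bool:
    """Call the model once per case and check the average score."""
    scores = [score(generate(c.prompt), c) for c in cases]
    avg = sum(scores) / len(scores)
    print(f"average score: {avg:.2f} over {len(cases)} cases")
    return avg >= threshold


if __name__ == "__main__":
    # Stand-in for a real LLM call, which would be non-deterministic in practice.
    fake_llm = lambda prompt: "Observability covers traces, costs and latency."
    cases = [EvalCase("What does observability cover?", ["traces", "latency", "cost"])]
    print("passed" if run_eval(fake_llm, cases) else "failed")
```

In production, the same scoring functions can be attached to traces collected by an observability platform, so that quality, cost, and latency are monitored on real traffic rather than only on a fixed test set.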
The webinar will feature three main parts:
- 20 min: General introduction to observability of AI-based software
- 20 min: Demonstration of LangFuse by Lotte, Developer Relations Engineer at LangFuse
- 20 min: Q&A and live discussion