
Evaluation Overview

Evals give you a repeatable check of your LLM application's behavior, replacing guesswork with data.

They also help you catch regressions before you ship a change. You tweak a prompt to handle an edge case, run your evals, and immediately see whether the change affected your application's behavior in unintended ways.
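To make this concrete, here is a minimal, generic sketch of such a regression check. This is not a Langfuse API; `run_app` and the dataset are hypothetical stand-ins for your own application and test cases:

```python
# Minimal regression check: run a fixed dataset through the app
# and report the fraction of outputs that still match expectations.

def run_app(prompt: str) -> str:
    """Placeholder for your actual LLM application call."""
    return "Paris"

DATASET = [
    {"input": "Capital of France?", "expected": "Paris"},
    {"input": "Capital of Japan?", "expected": "Tokyo"},
]

def evaluate() -> float:
    passed = sum(
        run_app(item["input"]).strip() == item["expected"]
        for item in DATASET
    )
    return passed / len(DATASET)

if __name__ == "__main__":
    # Compare this pass rate against the previous run to spot regressions.
    print(f"pass rate: {evaluate():.0%}")
```

Running this before and after a prompt change gives you a single number to compare, which is the core idea behind structured experiments.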

Watch this walkthrough of Langfuse Evaluation to learn how to use it to improve your LLM application.

Getting Started

Follow the Get Started guide to set up your first evaluation. It helps you pick the right approach — automated monitoring, structured experiments, or human review — and walks you through the setup step by step.

If you're new to LLM evaluation concepts, explore the Core Concepts page first for background on scores, evaluation methods, and experiments.

Looking for something specific? Browse the Evaluation Methods and Experiments sections for topic-specific guides.
