
Langfuse vs DeepEval

An open-source LLM engineering platform versus an LLM evaluation framework with 14+ metrics

Compare interactively in Explore →

Choose Langfuse when…

  • You want open-source LLM observability
  • Self-hosting your tracing stack is important
  • You need cost tracking across models and users
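The cost-tracking point above can be illustrated with a toy rollup, independent of Langfuse's actual SDK (the prices, field names, and function below are hypothetical, not Langfuse's API):

```python
from collections import defaultdict

# Hypothetical per-1K-token prices; a real platform reads these from the provider.
PRICE_PER_1K = {"gpt-4o": 0.005, "claude-3-5-sonnet": 0.003}

def aggregate_costs(generations):
    """Sum token cost per (user, model) from a list of traced generations.

    Each generation is a dict: {"user": str, "model": str, "tokens": int}.
    This mimics the kind of per-model, per-user rollup an observability
    platform performs over its traces.
    """
    costs = defaultdict(float)
    for g in generations:
        rate = PRICE_PER_1K[g["model"]] / 1000  # price per single token
        costs[(g["user"], g["model"])] += g["tokens"] * rate
    return dict(costs)

events = [
    {"user": "alice", "model": "gpt-4o", "tokens": 2000},
    {"user": "alice", "model": "gpt-4o", "tokens": 1000},
    {"user": "bob", "model": "claude-3-5-sonnet", "tokens": 4000},
]
```

A sketch only; Langfuse derives this automatically from traced model calls rather than from hand-built dicts.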

Choose DeepEval when…

  • You want a pytest-style framework for LLM testing
  • Unit-test-like evals for LLM outputs fit your workflow
  • You need RAG-specific metrics like faithfulness and relevancy
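The unit-test-like pattern above can be sketched without the framework itself. The metric below is a toy keyword-overlap stand-in for a faithfulness score, not DeepEval's implementation (real faithfulness metrics are typically LLM-judged):

```python
def toy_faithfulness(answer: str, context: str) -> float:
    """Toy stand-in for a faithfulness metric: fraction of answer words
    that also appear in the retrieval context."""
    answer_words = set(answer.lower().split())
    context_words = set(context.lower().split())
    if not answer_words:
        return 0.0
    return len(answer_words & context_words) / len(answer_words)

def assert_eval(answer: str, context: str, threshold: float = 0.7) -> None:
    """Pytest-style assertion: fail the test if the score drops below threshold."""
    score = toy_faithfulness(answer, context)
    assert score >= threshold, f"faithfulness {score:.2f} < {threshold}"

# Would live in a test file and run under pytest like any other unit test:
def test_refund_policy_answer():
    context = "refunds are issued within 30 days of purchase"
    answer = "refunds are issued within 30 days"
    assert_eval(answer, context)
```

The point of the pattern: an eval becomes a pass/fail test you can run on every change, rather than a dashboard number you inspect manually.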

Side-by-side comparison

Field          | Langfuse           | DeepEval
Category       | LLM Infrastructure | Prompt & Eval
Type           | Open Source        | Open Source
Free Tier      | ✓ Yes              | ✓ Yes
Pricing Plans  | Cloud: $59/mo      |
GitHub Stars   | 7,000              | 5,500

Langfuse

Open-source platform for tracing, evaluations, and prompt management. Self-hostable alternative to LangSmith with clean UX.

DeepEval

Open-source evaluation framework with 14+ metrics including faithfulness, relevancy, and hallucination detection. Integrates with CI/CD.
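Running evals in CI usually means invoking the test suite from a pipeline step. A minimal GitHub Actions sketch, assuming a pytest-style eval file at `tests/test_llm_outputs.py` and an `OPENAI_API_KEY` secret for LLM-judged metrics (check DeepEval's docs for the current CLI):

```yaml
name: llm-evals
on: [pull_request]
jobs:
  evals:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install deepeval
      # DeepEval's CLI runs pytest-style eval files as a test suite.
      - run: deepeval test run tests/test_llm_outputs.py
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```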

Shared Connections

3 tools both integrate with

Only Langfuse (24)

Cursor, Claude Code, OpenHands, CrewAI, AutoGen, LangGraph, LangChain, LlamaIndex, Dify, Mastra

Only DeepEval (1)

Langfuse

Explore the full AI landscape

See how Langfuse and DeepEval fit into the bigger picture — 123 tools, 304 relationships, all mapped.

Open in Explore →