Prompt & Eval · Commercial · ✦ Free Tier

Galileo

Real-time LLM evaluation with sub-200ms guardrail models

App Infrastructure

About

LLM evaluation platform with evaluation models that run in under 200ms — fast enough to use as production guardrails, not just offline eval. Covers hallucination detection, RAG quality, and safety scoring. Distinct from Galileo AI (the UI design tool).

Choose Galileo when…

  • You need real-time LLM guardrails in your production pipeline
  • You want eval models fast enough (<200ms) to run inline with inference
  • You need hallucination and RAG quality scoring at production latency
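The inline-guardrail pattern above can be sketched generically. This is a minimal illustration, not Galileo's actual SDK: the function names, threshold, and toy scoring heuristic are all hypothetical. The point is the shape of the pattern — score the response inside the request path, within a latency budget, and withhold it if the score fails.

```python
import time

# Illustrative values, not Galileo defaults.
HALLUCINATION_THRESHOLD = 0.7  # minimum acceptable factuality score (0-1)
LATENCY_BUDGET_MS = 200        # the sub-200ms budget cited above

def score_response(prompt: str, response: str) -> float:
    """Stand-in for a fast eval model; returns a 0-1 factuality score.

    Toy heuristic so the sketch runs end to end; a real guardrail
    would call a hosted evaluation model here.
    """
    return 0.0 if "as an AI" in response else 0.9

def guarded_reply(prompt: str, response: str) -> str:
    """Gate a model response on an inline eval score."""
    start = time.perf_counter()
    score = score_response(prompt, response)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # If scoring itself blows the latency budget, fail open and serve.
    if elapsed_ms > LATENCY_BUDGET_MS:
        return response
    if score < HALLUCINATION_THRESHOLD:
        return "Response withheld: failed factuality guardrail."
    return response
```

Whether to fail open or fail closed when the scorer is slow is a product decision; the sketch fails open so a guardrail outage never blocks traffic.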

Builder Slot

How do you know it's working? (Optional for most stacks)

Tests, evals, and experiment tracking to measure and improve your AI output quality.

Fit by stack type:

  • Dev Tools: Not applicable
  • App Infra: Recommended
  • Hybrid: Optional


Stack Genome Detection

AIchitect's Genome scanner detects Galileo in your project via these signals:

  • pip packages: galileo
  • env vars: GALILEO_API_KEY
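The two signals above amount to a dependency check plus an environment check. A sketch of that kind of scan is below; the function name and file handling are illustrative assumptions, not AIchitect's actual scanner.

```python
import os
from pathlib import Path

def detects_galileo(project_dir, env=None):
    """Return True if either Galileo signal is present.

    Signals checked (per the list above):
      1. the `galileo` pip package in requirements.txt
      2. the GALILEO_API_KEY environment variable
    """
    env = os.environ if env is None else env

    # Signal 1: strip common version pins and compare the package name.
    req = Path(project_dir) / "requirements.txt"
    if req.is_file():
        for line in req.read_text().splitlines():
            pkg = line.split("==")[0].split(">=")[0].strip().lower()
            if pkg == "galileo":
                return True

    # Signal 2: API key in the environment.
    return "GALILEO_API_KEY" in env
```

A real scanner would also cover pyproject.toml, lockfiles, and import statements; requirements.txt is the simplest case.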

Integrates with (2)

  • LangChain (Pipelines & RAG)
  • OpenAI API (LLM Infrastructure)

Alternatives to consider (3)

Pricing

✦ Free tier available
  • Free: $0
  • Pro: Usage-based

Badge

Add to your GitHub README

Galileo on AIchitect:

[![Galileo](https://aichitect.dev/badge/tool/galileo-eval)](https://aichitect.dev/tool/galileo-eval)

Explore the full AI landscape

See how Galileo fits into the bigger picture — browse all 207 tools and their relationships.
