LLM Infrastructure · Open Source · ✦ Free Tier

Qdrant

High-performance vector DB with filtering

20,000 stars · Health 80 · Active · App Infrastructure

About

Rust-based vector database optimized for filtered vector search. Supports named vectors, payloads, and hybrid search. Self-hostable or available as a managed cloud service.

Choose Qdrant when…

  • You need high-performance vector search in production
  • You want OSS with Rust-level performance
  • Filtering alongside vector search is important

Builder Slot

What knowledge does your AI have? (Optional for most stacks)

The memory layer — stores and retrieves vector embeddings for RAG and semantic search

  • Dev Tools: Not applicable
  • App Infra: Optional
  • Hybrid: Optional


Stack Genome Detection

AIchitect's Genome scanner detects Qdrant in your project via these signals:

  • npm packages: @qdrant/js-client-rest, @qdrant/qdrant-js
  • pip packages: qdrant-client
  • env vars: QDRANT_URL, QDRANT_API_KEY

Integrates with (5)

LangChain (Pipelines & RAG)

LangChain has a native Qdrant vectorstore integration — pass a Qdrant client and it handles embedding storage and similarity search.

Semantic retrieval inside any LangChain chain or agent without writing custom retrieval code.

Compare →
LlamaIndex (Pipelines & RAG)

LlamaIndex stores and retrieves document embeddings from Qdrant via its QdrantVectorStore adapter inside a VectorStoreIndex.

Production-grade semantic retrieval with Qdrant's filtered search and payload metadata inside LlamaIndex pipelines.

Compare →
Haystack (Pipelines & RAG)

Haystack has a native Qdrant document store integration — Qdrant becomes a retrieval backend in Haystack pipelines.

Production-grade vector retrieval inside Haystack pipelines using Qdrant's filtered search and payload storage.

Compare →
Dify (Pipelines & RAG)

Dify connects to a self-hosted Qdrant instance as its knowledge base vector store — documents are chunked, embedded, and stored in Qdrant.

Self-hosted knowledge retrieval inside Dify workflows, keeping document data on your own infrastructure.

Compare →
Vercel AI SDK (LLM Infrastructure)

Apps built with the Vercel AI SDK call Qdrant directly for retrieval in RAG endpoints, fetching context before passing it to the SDK's generate function.

Semantic retrieval in Vercel AI SDK streaming endpoints — context from Qdrant enriches every generation without breaking streaming.

Compare →

Often paired with (1)

Alternatives to consider (6)

Pricing

✦ Free tier available
Cloud: usage-based

In 7 stacks

Badge

Add to your GitHub README

Qdrant on AIchitect:

[![Qdrant](https://aichitect.dev/badge/tool/qdrant)](https://aichitect.dev/tool/qdrant)

Explore the full AI landscape

See how Qdrant fits into the bigger picture — browse all 207 tools and their relationships.

Explore graph →