Fine-tuning · Open Source · ✦ Free Tier

Unsloth

2× faster, 70% less memory LoRA fine-tuning

32,000 stars · App Infrastructure

About

Dramatically speeds up LoRA and QLoRA fine-tuning by rewriting GPU kernels in Triton. Compatible with the Hugging Face ecosystem and works with Llama, Mistral, Gemma, and more, with no accuracy loss.

Choose Unsloth when…

  • You want the fastest OSS LoRA fine-tuning with minimal GPU memory
  • You're fine-tuning Llama, Mistral, or Gemma models
  • Memory constraints are the bottleneck in your training setup
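If those points fit your setup, the typical workflow is to load a 4-bit quantized base model and attach LoRA adapters via Unsloth's `FastLanguageModel`. A minimal sketch, assuming `pip install unsloth` and a CUDA GPU; the model name and hyperparameters below are illustrative, not prescriptive:

```python
def lora_finetune_sketch():
    """Hedged sketch of a QLoRA-style setup with Unsloth.

    Assumes a CUDA GPU and the `unsloth` package; the model name
    and hyperparameters are illustrative choices, not defaults.
    """
    from unsloth import FastLanguageModel

    # Load a 4-bit quantized base model (this is where most of the
    # memory savings come from).
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/llama-3-8b-bnb-4bit",
        max_seq_length=2048,
        load_in_4bit=True,
    )

    # Attach LoRA adapters; only these low-rank matrices are trained,
    # while the quantized base weights stay frozen.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,            # LoRA rank
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )
    return model, tokenizer
```

The returned model and tokenizer can then be handed to a standard Hugging Face trainer, which is how Unsloth stays compatible with the existing ecosystem.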

Builder Slot

How do you adapt models to your domain?
Optional for most stacks

Fine-tuning frameworks and platforms for training custom model adaptations with LoRA, QLoRA, or full fine-tuning

  • Dev Tools: Not applicable
  • App Infra: Optional
  • Hybrid: Optional

Other tools in this slot:

Stack Genome Detection

AIchitect's Genome scanner detects Unsloth in your project via these signals:

pip packages
unsloth, unsloth_zoo

Integrates with (1)

vLLM (LLM Infrastructure)

Alternatives to consider (4)

Pricing

✦ Free tier available
Pro: $29/mo

Badge

Add to your GitHub README

Unsloth on AIchitect:

[![Unsloth](https://aichitect.dev/badge/tool/unsloth)](https://aichitect.dev/tool/unsloth)

Explore the full AI landscape

See how Unsloth fits into the bigger picture — browse all 207 tools and their relationships.

Explore graph →