MagicAF is a modular, production-grade Rust framework that provides the foundational building blocks for AI-powered systems: embeddings, vector search, LLM orchestration, and RAG workflows.
It is designed from the ground up for air-gapped, on-premises environments: no cloud dependencies, no vendor lock-in. That makes MagicAF well suited to deployments with strict data-residency and compliance requirements, such as classified, healthcare (HIPAA), and other regulated environments.
Getting Started →
Install MagicAF, set up local services, and run your first RAG pipeline in under 5 minutes.
Core Concepts →
Understand the architecture, layered design, and trait-based extensibility model.
Guides →
Step-by-step tutorials for building custom adapters, structured output parsing, and more.
API Reference →
Complete reference for every trait, struct, configuration option, and error type.
Deployment →
Docker Compose, air-gapped setup, edge/mobile deployment, observability, and scaling.
Examples →
Working code for minimal RAG, document Q&A, and multi-source analysis pipelines.
Design Philosophy
| Principle | Rationale |
|---|---|
| Extensibility over cleverness | Clean trait boundaries; domain logic lives in adapters, not the framework. |
| Clarity over abstraction | Flat DTO structs, explicit error codes, straightforward module layout. |
| Interface stability over optimization | Public API surface is small and versioned; internals can change freely. |
| Local-first | Every component assumes a local endpoint — no cloud SDK required. |
| FFI-ready | Flat structs + numeric error codes prepare the surface for C / Swift / Python / Java bindings. |
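The last two principles can be illustrated with a short sketch. None of the names below (`EmbedRequest`, `ErrorCode`, `Embedder`) are MagicAF's actual API; they are assumptions showing the general pattern of flat DTOs, numeric error codes, and a trait boundary that adapters implement:

```rust
// Illustrative sketch only; type and trait names are hypothetical,
// not taken from the magicaf-core crate.

/// A flat DTO: only primitives and owned strings, no nested generics,
/// so the layout translates cleanly to C / Swift / Python / Java bindings.
#[derive(Debug, Clone)]
pub struct EmbedRequest {
    pub model: String,
    pub text: String,
}

/// Numeric error codes cross FFI boundaries where rich Rust enums cannot.
#[repr(i32)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ErrorCode {
    Ok = 0,
    ModelNotFound = 1,
    EndpointUnreachable = 2,
}

/// A clean trait boundary: domain logic lives in adapters that implement
/// this trait, not in the framework itself.
pub trait Embedder {
    fn embed(&self, req: &EmbedRequest) -> Result<Vec<f32>, ErrorCode>;
}

/// A trivial adapter used here only to exercise the trait.
struct DummyEmbedder;

impl Embedder for DummyEmbedder {
    fn embed(&self, req: &EmbedRequest) -> Result<Vec<f32>, ErrorCode> {
        if req.model != "dummy" {
            return Err(ErrorCode::ModelNotFound);
        }
        // Produce a tiny vector from byte values (not a real embedding).
        Ok(req.text.bytes().take(4).map(|b| b as f32).collect())
    }
}

fn main() {
    let embedder = DummyEmbedder;
    let req = EmbedRequest { model: "dummy".into(), text: "hi".into() };
    println!("{} dims", embedder.embed(&req).unwrap().len());
    let bad = EmbedRequest { model: "missing".into(), text: "hi".into() };
    println!("error code {}", embedder.embed(&bad).unwrap_err() as i32);
}
```

Because the trait's surface is small and its types are flat, a C or Python binding can wrap `embed` as a plain function returning an `i32` status plus an output buffer, without any Rust-specific machinery leaking through.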
What’s in the Box
| Crate | Purpose |
|---|---|
| `magicaf-core` | Traits, DTOs, config, error types, RAG engine, adapter interfaces, in-memory vector store |
| `magicaf-qdrant` | Qdrant vector store implementation (REST API) |
| `magicaf-local-llm` | OpenAI-compatible local LLM client |
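The crate split above implies an adapter pattern: the core depends only on a storage trait, and implementations (in-memory, Qdrant) plug in behind it. A minimal sketch of that idea, with an assumed trait name and signatures that are not MagicAF's actual interface:

```rust
// Hypothetical sketch: `VectorStore` and its methods are assumptions,
// illustrating how an in-memory store and a Qdrant-backed store could
// both satisfy one boundary the core framework depends on.

/// The framework programs against this trait; magicaf-qdrant would
/// provide an alternative implementation speaking Qdrant's REST API.
pub trait VectorStore {
    fn upsert(&mut self, id: u64, vector: Vec<f32>);
    fn search(&self, query: &[f32], top_k: usize) -> Vec<u64>;
}

/// A minimal in-memory store ranking by cosine similarity.
#[derive(Default)]
pub struct InMemoryStore {
    rows: Vec<(u64, Vec<f32>)>,
}

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

impl VectorStore for InMemoryStore {
    fn upsert(&mut self, id: u64, vector: Vec<f32>) {
        // Replace any existing row with the same id, then append.
        self.rows.retain(|(rid, _)| *rid != id);
        self.rows.push((id, vector));
    }

    fn search(&self, query: &[f32], top_k: usize) -> Vec<u64> {
        let mut scored: Vec<(u64, f32)> = self
            .rows
            .iter()
            .map(|(id, v)| (*id, cosine(query, v)))
            .collect();
        // Sort by similarity, highest first.
        scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
        scored.into_iter().take(top_k).map(|(id, _)| id).collect()
    }
}

fn main() {
    let mut store = InMemoryStore::default();
    store.upsert(1, vec![1.0, 0.0]);
    store.upsert(2, vec![0.0, 1.0]);
    // The query points mostly along the first axis, so id 1 ranks first.
    println!("{:?}", store.search(&[0.9, 0.1], 1));
}
```

Code written against the trait never changes when the backing store does, which is what lets the same RAG pipeline run against the bundled in-memory store in tests and against Qdrant in production.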