# Ante

An AI-native, cloud-native, local-first agent runtime, built by Antigma Labs. A first step toward our mission of building the substrate for self-organizing intelligence.
Built from the ground up in native Rust — a single self-contained binary with no external dependencies.
## Key features
A single ~15MB binary with zero runtime dependencies. Built for minimal overhead and maximum throughput — the ideal runtime for orchestrating agents at cellular scale.
Run models entirely on your machine with built-in llama.cpp integration. No API keys, no internet, no data leaving your device. Ante handles engine installation, model discovery, and memory management automatically.
Bring your own API key, subscription, or local model. Switch freely among 12+ providers — Anthropic, OpenAI, Gemini, Grok, OpenRouter, and more. No account required, no telemetry gate, no walled garden. Not even ours.
## Why Ante?
- Obsessed with simplicity and quality. In a world of abundance, maintaining quality and a high level of trust is more valuable than ever.
- Self-contained with offline mode. Your data stays on your machine. No phone-home, no telemetry gate.
- Multi-agent orchestration. Coordinate multiple agents that self-organize around complex tasks.
- Benchmark-proven. Public, reproducible evals with downloadable builds. See Benchmarks.
## How it works
Ante uses a client-daemon architecture. The daemon manages structured turn lifecycles — prompt, tool calls, confirmation, execution — channeling model power through a safe, predictable loop. The client can be an interactive TUI, a headless CLI, or an external process.
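The turn lifecycle above can be sketched as a small state machine. The phase names below mirror the prose but are purely illustrative, not Ante's actual internals:

```rust
// Illustrative sketch of a structured turn lifecycle (hypothetical names):
// each turn advances through fixed phases, so the model can only act
// through a predictable, inspectable loop.
#[derive(Debug, PartialEq)]
enum TurnPhase {
    Prompt,       // user input is sent to the model
    ToolCalls,    // the model proposes tool invocations
    Confirmation, // the client approves or rejects them
    Execution,    // approved tools run; results feed the next turn
}

fn next(phase: &TurnPhase) -> Option<TurnPhase> {
    match phase {
        TurnPhase::Prompt => Some(TurnPhase::ToolCalls),
        TurnPhase::ToolCalls => Some(TurnPhase::Confirmation),
        TurnPhase::Confirmation => Some(TurnPhase::Execution),
        TurnPhase::Execution => None, // turn complete
    }
}

fn main() {
    // Walk one turn from start to finish.
    let mut phase = TurnPhase::Prompt;
    loop {
        println!("{:?}", phase);
        match next(&phase) {
            Some(p) => phase = p,
            None => break,
        }
    }
}
```

Because every phase transition is explicit, a client can insert policy (like requiring confirmation before execution) at exactly one place in the loop.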
Architecturally, Ante is moving toward an agent-centric runtime where long-lived agents coordinate through message passing — an actor model with hierarchical supervision, isolated state, and clear ownership of resources per agent.
Ante can also run as a long-lived server (ante serve) that external clients drive through a structured JSONL protocol — ideal for building editor plugins, web UIs, and custom integrations.
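JSONL framing of the kind a structured protocol like this uses is simple: one JSON object per line, newline-delimited. The sketch below shows the framing idea only; the message fields are invented for illustration and are not Ante's actual schema.

```rust
// Hypothetical JSONL framing sketch. A JSONL stream is just serialized
// JSON objects separated by '\n', which makes it trivial to speak over
// stdin/stdout or a socket from any language.
fn frame(msg: &str) -> String {
    format!("{msg}\n")
}

// Split an incoming stream back into individual messages.
fn split_frames(stream: &str) -> Vec<&str> {
    stream.lines().filter(|l| !l.is_empty()).collect()
}

fn main() {
    // Illustrative messages a client might send (fields are made up).
    let out = [
        frame(r#"{"type":"prompt","text":"list files"}"#),
        frame(r#"{"type":"interrupt"}"#),
    ]
    .concat();
    for f in split_frames(&out) {
        println!("{f}");
    }
}
```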
Ante is currently in preview and under active development. Expect breaking changes, experimental features, and incomplete functionality. Only macOS and Linux are supported at this time.
## Next steps
- Install Ante and run your first prompt in under a minute.
- The product and engineering principles we optimize for.
- Learn to use the rich terminal interface.
- Visual, step-by-step guides for common TUI workflows.
- Pick and configure your model provider.
- Understand sessions, tasks, turns, and the protocol.
- Run Ante fully offline with local models.
- How Ante self-organizes agents for complex tasks.
- See how Ante performs on Terminal Bench.