Protocol Reference
Ante uses a typed message-passing protocol between the client and daemon. Messages are exchanged over bounded async channels (in-process) or as JSON Lines over stdin/stdout (external clients).
Wire format
External clients communicate with the daemon using JSON Lines (JSONL) — one JSON object per line over stdin/stdout.
- Client → Daemon: Send OpMsg objects as JSON lines to stdin
- Daemon → Client: Receive EventMsg objects as JSON lines from stdout
OpMsg envelope
Every operation is wrapped in an OpMsg:
{
"op": { "StartSession": { "model": "claude-sonnet-4-5", "provider": "anthropic", "streaming": true } },
"id": "op_01ARZ3NDEKTSV4RRFFQ69G5FAV"
}
EventMsg envelope
Every event is wrapped in an EventMsg:
{
"timestamp": "2025-06-01T12:00:00Z",
"id": "evt_01ARZ3NDEKTSV4RRFFQ69G5FAV",
"event": { "AgentMessage": "Here is the result..." },
"parent": "op_01ARZ3NDEKTSV4RRFFQ69G5FAV"
}
The parent field links an event back to the operation that triggered it. It is null when not applicable.
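As a sketch of how a client might use the parent field, the helper below (names like events_for are illustrative, not part of the protocol) groups decoded EventMsg lines by the operation that triggered them:

```python
import json

# Illustrative helper: collect the events whose "parent" field points back
# at a given operation id.
def events_for(op_id, jsonl_lines):
    matched = []
    for line in jsonl_lines:
        evt = json.loads(line)
        if evt.get("parent") == op_id:
            matched.append(evt["event"])
    return matched

lines = [
    '{"timestamp": "2025-06-01T12:00:00Z", "id": "evt_a", '
    '"event": {"AgentMessage": "Here is the result..."}, "parent": "op_a"}',
    '{"timestamp": "2025-06-01T12:00:01Z", "id": "evt_b", '
    '"event": "SessionEnd", "parent": null}',
]
print(events_for("op_a", lines))  # [{'AgentMessage': 'Here is the result...'}]
```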
Message IDs
Every message has a typed Id consisting of a prefix (up to 4 bytes) and a ULID. The string format is {prefix}_{ulid}.
| Prefix | Usage |
|---|---|
| op_ | Operations (client → daemon) |
| evt_ | Events (daemon → client) |
| ses_ | Session identifiers |
| step_ | Step identifiers |
Example: op_01J5A3B7C9D0E1F2G3H4J5K6M7
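A minimal parser for this id format might look like the following (parse_id is an illustrative name; the Crockford base32 alphabet is the standard ULID character set, which excludes I, L, O, and U):

```python
# ULIDs are 26 characters drawn from Crockford base32.
ULID_ALPHABET = set("0123456789ABCDEFGHJKMNPQRSTVWXYZ")

def parse_id(s):
    # Split on the first underscore: {prefix}_{ulid}.
    prefix, sep, ulid = s.partition("_")
    if not sep or not (1 <= len(prefix) <= 4):
        raise ValueError(f"bad id prefix: {s!r}")
    if len(ulid) != 26 or not set(ulid) <= ULID_ALPHABET:
        raise ValueError(f"bad ULID: {s!r}")
    return prefix, ulid

print(parse_id("op_01J5A3B7C9D0E1F2G3H4J5K6M7"))
# ('op', '01J5A3B7C9D0E1F2G3H4J5K6M7')
```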
Operations (Client → Daemon)
StartSession
Initialize a new session with model, provider, and configuration.
{
"op": {
"StartSession": {
"model": "claude-sonnet-4-5",
"provider": "anthropic",
"policy": null,
"streaming": true,
"system_prompt": null,
"append_system_prompt": null,
"allowed_tools": null,
"disallowed_tools": null,
"cwd": null,
"thinking": null
}
},
"id": "op_..."
}
SessionConfig fields:
| Field | Type | Description |
|---|---|---|
| model | string | Model name (e.g. "claude-sonnet-4-5") |
| provider | string | Provider name (e.g. "anthropic", "openai", "gemini") |
| policy | Policy? | Tool approval policy. null uses default |
| streaming | bool | Enable streaming deltas (MessageDelta, ThinkingDelta) |
| system_prompt | string? | Override the default system prompt entirely |
| append_system_prompt | string? | Append content to the default system prompt |
| allowed_tools | string[]? | Whitelist — only these tools are available |
| disallowed_tools | string[]? | Blacklist — these tools are removed |
| cwd | string? | Working directory. Defaults to daemon's process directory |
| thinking | Thinking? | Thinking level override: "Disabled", "Enabled", "Deep", or "Max" |
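A client-side builder for this operation could look like the sketch below. The "op_example" id is a placeholder (real clients should generate a fresh op_ ULID), and optional fields are sent as explicit nulls to match the example payload above:

```python
# Sketch of a StartSession builder; any field can be overridden by keyword.
def start_session(model, provider, op_id="op_example", **overrides):
    config = {
        "model": model,
        "provider": provider,
        "policy": None,
        "streaming": True,
        "system_prompt": None,
        "append_system_prompt": None,
        "allowed_tools": None,
        "disallowed_tools": None,
        "cwd": None,
        "thinking": None,
    }
    config.update(overrides)
    return {"op": {"StartSession": config}, "id": op_id}

msg = start_session("claude-sonnet-4-5", "anthropic", thinking="Deep")
print(msg["op"]["StartSession"]["thinking"])  # Deep
```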
UpdateSession
Update the active session without restarting it (e.g. switch models mid-session).
{
"op": {
"UpdateSession": {
"model": { "name": "gpt-5.4", "temperature": 0.2 }
}
},
"id": "op_..."
}
SessionUpdate fields:
| Field | Type | Description |
|---|---|---|
| model | ModelSpec | New model specification to use |
Steer
Provide additional guidance to the agent during an active turn without starting a new one.
{ "op": { "Steer": "focus on the auth module first" }, "id": "op_..." }
UserInput
Submit user text input to the agent.
{ "op": { "UserInput": "explain what this project does" }, "id": "op_..." }
ApprovalResponse
Respond to a tool approval request (sent after receiving a TurnPause event with Approval reason).
{
"op": {
"ApprovalResponse": {
"turn_id": "step_01ARZ...",
"responses": [
["tool_use_abc123", "Accept"],
["tool_use_def456", "Skip"]
]
}
},
"id": "op_..."
}
ReviewDecision values:
| Decision | Description |
|---|---|
| Accept | Allow this tool call |
| Skip | Skip this tool call |
| AcceptForSession | Allow this tool for the rest of the session |
| Abort | Abort the current task |
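To illustrate wiring a TurnPause Approval event back into an ApprovalResponse, here is a sketch that accepts every requested tool (accept_all, "op_example", and "step_example" are placeholder names, not protocol identifiers):

```python
# Illustrative helper: answer an Approval pause by accepting all tools.
def accept_all(turn_id, reason, op_id="op_example"):
    tools = reason["Approval"]["tools"]
    return {
        "op": {
            "ApprovalResponse": {
                "turn_id": turn_id,
                "responses": [[t["id"], "Accept"] for t in tools],
            }
        },
        "id": op_id,
    }

reason = {"Approval": {
    "tools": [{"id": "tool_use_abc123", "name": "Bash",
               "input": {"command": "ls -la"}}],
    "message": "Allow running shell command?",
}}
msg = accept_all("step_example", reason)
print(msg["op"]["ApprovalResponse"]["responses"])  # [['tool_use_abc123', 'Accept']]
```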
SlashCommand
Invoke a skill by name.
{ "op": { "SlashCommand": { "name": "commit", "args": "-m 'fix bug'" } }, "id": "op_..." }
Interrupt
Abort whatever is currently running.
{ "op": "Interrupt", "id": "op_..." }
Shutdown
Request a graceful shutdown.
{ "op": "Shutdown", "id": "op_..." }
OfflineMode
Offline mode operations for local model management. See the Offline Mode operations section.
{ "op": { "OfflineMode": "Init" }, "id": "op_..." }
Events (Daemon → Client)
Session events
SessionStart
Emitted when a session is initialized. Contains metadata about the active model, provider, session ID, and working directory. Skills and subagents are delivered separately via ExtensionRefreshed.
{
"event": {
"SessionStart": {
"model": { "name": "claude-sonnet-4-5", "max_tokens": 8192 },
"provider": "Anthropic",
"session_id": "ses_01ARZ...",
"cwd": "/home/user/project"
}
}
}
SessionInitialized fields:
| Field | Type | Description |
|---|---|---|
| model | ModelSpec | Active model specification |
| provider | ApiProvider | Active provider |
| session_id | Id | Unique session identifier |
| cwd | string | Working directory path |
SessionUpdated
Emitted when the active session is updated in place (e.g. model changed via UpdateSession).
{
"event": {
"SessionUpdated": {
"model": { "name": "gpt-5.4" },
"provider": "OpenAI",
"session_id": "ses_01ARZ...",
"cwd": "/home/user/project"
}
}
}
SessionEnd
Emitted when the session terminates.
{ "event": "SessionEnd" }
Turn lifecycle events
TurnStart
Emitted when a new turn begins processing.
{ "event": { "TurnStart": { "turn_id": "step_01ARZ..." } } }
TurnPause
Emitted when a turn is paused waiting for user input (e.g. tool approval).
{
"event": {
"TurnPause": {
"turn_id": "step_01ARZ...",
"reason": {
"Approval": {
"tools": [
{ "id": "tool_use_abc123", "name": "Bash", "input": { "command": "ls -la" } }
],
"message": "Allow running shell command?"
}
}
}
}
}
TurnPauseReason variants:
| Variant | Fields | Description |
|---|---|---|
| Approval | tools: ToolUse[], message: string | Waiting for tool approval |
TurnEnd
Emitted when a turn completes.
{ "event": { "TurnEnd": { "turn_id": "step_01ARZ...", "status": "Completed" } } }
TurnEndStatus variants:
| Variant | Fields | Description |
|---|---|---|
| Completed | — | Turn finished successfully |
| Interrupted | reason?: string | Turn was interrupted |
| Error | message: string | Turn ended with an error |
Message streaming events
AgentMessage
Complete agent text response (non-streaming).
{ "event": { "AgentMessage": "The project is a web server that..." } }
Thinking
Complete chain-of-thought block (non-streaming).
{ "event": { "Thinking": "Let me analyze the codebase structure..." } }
MessageDelta
Streaming chunk of the agent's message. Concatenate all deltas to build the full message.
{ "event": { "MessageDelta": "The project" } }
ThinkingDelta
Streaming chunk of the agent's thinking. Concatenate all deltas to build the full thinking block.
{ "event": { "ThinkingDelta": "Let me" } }
Tool events
ToolStart
Emitted when a tool invocation begins.
{
"event": {
"ToolStart": {
"id": "tool_use_abc123",
"name": "Read",
"input": { "file_path": "/src/main.rs" }
}
}
}
ToolUpdate
Progress update during tool execution.
{
"event": {
"ToolUpdate": {
"tool_use_id": "tool_use_abc123",
"seq": 0,
"message": "Reading file..."
}
}
}
| Field | Type | Description |
|---|---|---|
| tool_use_id | string | Tool call identifier (matches ToolStart.id) |
| seq | u64 | Monotonically increasing sequence number |
| message | string | Progress message |
ToolEnd
Emitted when a tool execution completes.
{
"event": {
"ToolEnd": {
"tool_use_id": "tool_use_abc123",
"status": "Completed",
"result_json": { "content": "fn main() { ... }" },
"is_error": false
}
}
}
ToolEndStatus variants:
| Variant | Description |
|---|---|
| Completed | Tool ran successfully |
| Cancelled | Tool execution was cancelled |
| Denied | Tool was denied by the user |
| Failed | Tool execution failed |
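Clients typically correlate the three tool events by id. The sketch below (track_tools is an illustrative name) keys state on the tool id and orders progress messages by seq; sorting is defensive, since seq is already monotonically increasing per tool:

```python
# Correlate ToolStart / ToolUpdate / ToolEnd events into per-call records.
def track_tools(events):
    calls = {}
    for evt in events:
        if "ToolStart" in evt:
            t = evt["ToolStart"]
            calls[t["id"]] = {"name": t["name"], "updates": [], "status": None}
        elif "ToolUpdate" in evt:
            u = evt["ToolUpdate"]
            calls[u["tool_use_id"]]["updates"].append((u["seq"], u["message"]))
        elif "ToolEnd" in evt:
            e = evt["ToolEnd"]
            calls[e["tool_use_id"]]["status"] = e["status"]
    for call in calls.values():
        call["updates"] = [m for _, m in sorted(call["updates"])]
    return calls

evts = [
    {"ToolStart": {"id": "tool_use_1", "name": "Read",
                   "input": {"file_path": "/src/main.rs"}}},
    {"ToolUpdate": {"tool_use_id": "tool_use_1", "seq": 0, "message": "opening"}},
    {"ToolUpdate": {"tool_use_id": "tool_use_1", "seq": 1, "message": "reading"}},
    {"ToolEnd": {"tool_use_id": "tool_use_1", "status": "Completed",
                 "result_json": {}, "is_error": False}},
]
print(track_tools(evts)["tool_use_1"]["updates"])  # ['opening', 'reading']
```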
Compaction events
CompactStart
Emitted when dialog compaction begins.
{ "event": "CompactStart" }
CompactEnd
Emitted when dialog compaction completes.
{ "event": "CompactEnd" }
Extension events
ExtensionRefreshed
Emitted when skills and subagents are refreshed. This is also sent after SessionStart to deliver the initial set of skills and subagents.
{
"event": {
"ExtensionRefreshed": {
"session_id": "ses_01ARZ...",
"skills": [
{ "name": "commit", "description": "Create a git commit", "scope": "user", "argument_hint": "-m 'message'" }
],
"subagents": [
{ "name": "explore", "description": "Explore the codebase", "scope": "project" }
]
}
}
}
Informational events
UsageUpdate
Token usage statistics for the session.
{ "event": { "UsageUpdate": { "usage": { "input_tokens": 1500, "output_tokens": 300 } } } }
Info
General informational message.
{ "event": { "Info": "Compacting conversation history..." } }
Error
Error message.
{ "event": { "Error": "Authentication failed: invalid API key" } }
Goodbye
Final message before the daemon disconnects. After receiving this, no more events will be sent.
{ "event": "Goodbye" }
Offline mode events
See the OfflineModeEvt section below.
Offline mode types
OfflineModeOp
Operations for managing local/offline models.
| Variant | Fields | Description |
|---|---|---|
| Init | — | Check llama.cpp status and discover local models |
| InstallEngine | — | Download and install llama.cpp |
| UpgradeEngine | — | Upgrade llama.cpp to a newer version |
| SetModelDirectory | path: string | Set the local models directory |
| LoadModel | model: OfflineModel, prefs: ModelPreferences | Load an offline model with preferences |
| AttachServer | port: u16, model_name: string | Attach to a running llama server on a local port |
| StopServer | — | Stop the offline inference server |
| KillLlamaServer | — | Kill any running llama server (owned or external) on default ports |
OfflineModeEvt
Events reporting offline mode status.
| Variant | Fields | Description |
|---|---|---|
| Init | engine_status, system_caps, local_models, verified_models, upgrade_available?, running_servers | Initialization status report |
| InstallProgress | progress: u8, message: string | Engine installation progress (0–100) |
| Installed | path: string | Engine installation complete |
| ModelLoading | model_name: string, file_size_bytes: u64 | Model download in progress |
| ServerReady | port: u16, model_name: string, server_pid?: u32 | Offline server is ready to accept requests |
| LlamaServerKilled | — | A llama server was killed (owned or external) |
| Error | message: string | Offline mode error |
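The examples on this page suggest a consistent variant encoding: field-less variants serialize as bare strings (like "Interrupt" and {"OfflineMode": "Init"}), while variants with fields become single-key objects. The sketch below assumes that convention; whether SetModelDirectory nests its field as {"path": ...} is an inference from the table above, not confirmed by an example:

```python
import json

# Illustrative builder for OfflineMode operations under the assumed
# variant-encoding convention ("op_example" is a placeholder id).
def offline_op(variant, fields=None, op_id="op_example"):
    inner = variant if fields is None else {variant: fields}
    return {"op": {"OfflineMode": inner}, "id": op_id}

print(json.dumps(offline_op("Init")["op"]))
# {"OfflineMode": "Init"}
print(json.dumps(offline_op("SetModelDirectory", {"path": "/models"})["op"]))
```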
Transport
In-process channels
When the client and daemon run in the same process (TUI or headless mode), they communicate via bounded Tokio mpsc channels:
| Channel | Direction | Buffer size |
|---|---|---|
| Op channel | Client → Daemon | 256 messages |
| Event channel | Daemon → Client | 4096 messages |
Stdio transport (JSONL)
For external clients using ante serve (or ante serve --stdio), StdioTransport bridges JSON Lines over stdin/stdout to the internal channel pair:
- stdin → Parse each line as OpMsg → forward to daemon
- daemon events → Serialize as JSON → write to stdout (one line per event)
- EOF on stdin → Automatically sends Op::Shutdown
- Evt::Goodbye received → Transport exits
- JSON parse errors → An Evt::Error is sent back on stdout
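The client side of this framing is one JSON object per line in each direction. A minimal sketch over generic file-like streams (here StringIO stands in for the daemon's stdin/stdout; write_op and read_events are illustrative names):

```python
import io
import json

# Write one OpMsg per line to the daemon's stdin.
def write_op(stream, op, op_id):
    stream.write(json.dumps({"op": op, "id": op_id}) + "\n")

# Yield one decoded EventMsg per non-empty line from the daemon's stdout.
def read_events(stream):
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

out = io.StringIO()
write_op(out, {"UserInput": "hello"}, "op_example")
write_op(out, "Shutdown", "op_example2")

incoming = io.StringIO(
    '{"timestamp": null, "id": "evt_1", "event": "Goodbye", "parent": null}\n')
events = list(read_events(incoming))
print(events[0]["event"])  # Goodbye
```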
WebSocket transport
For networked or browser-based clients using ante serve --ws <ADDR>, WsTransport exchanges the same JSONL protocol over WebSocket frames:
- Each WebSocket connection gets its own daemon instance
- Messages are the same OpMsg and EventMsg JSON objects, sent as text frames
- Client disconnect → Daemon instance shuts down, server accepts next connection
- Op::Shutdown received → Connection and server both exit
- The server loops accepting new connections until a Shutdown is received
Complete flow example
A full session lifecycle from start to shutdown:
Client Daemon
│ │
│─── OpMsg { StartSession(...) } ──────▶│
│◀── EventMsg { SessionStart(...) } ────│
│ │
│─── OpMsg { UserInput("fix bug") } ───▶│
│◀── EventMsg { TurnStart { turn_id } } │
│◀── EventMsg { ThinkingDelta("...") } │
│◀── EventMsg { ThinkingDelta("...") } │
│◀── EventMsg { Thinking("...") } │
│◀── EventMsg { MessageDelta("...") } │
│◀── EventMsg { ToolStart(ToolUse) } │
│◀── EventMsg { TurnPause(Approval) } │
│ │
│─── OpMsg { ApprovalResponse(...) } ──▶│
│◀── EventMsg { ToolUpdate(...) } │
│◀── EventMsg { ToolEnd(...) } │
│◀── EventMsg { MessageDelta("...") } │
│◀── EventMsg { AgentMessage("...") } │
│◀── EventMsg { UsageUpdate(...) } │
│◀── EventMsg { TurnEnd(Completed) } │
│ │
│─── OpMsg { Shutdown } ───────────────▶│
│◀── EventMsg { SessionEnd } │
│◀── EventMsg { Goodbye } │
│ │
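The client side of the lifecycle above reduces to an event loop that dispatches on the event payload until Goodbye. A minimal sketch, driven here by in-memory events in place of a live daemon (run and "step_example" are illustrative):

```python
# Consume EventMsg objects until Goodbye, collecting the agent's output.
def run(event_msgs):
    transcript = []
    for msg in event_msgs:
        evt = msg["event"]
        if evt == "Goodbye":
            break  # daemon sends nothing after this
        if isinstance(evt, dict):
            if "AgentMessage" in evt:
                transcript.append(evt["AgentMessage"])
            elif "TurnEnd" in evt:
                transcript.append(f"[turn {evt['TurnEnd']['status']}]")
    return transcript

feed = [
    {"event": {"TurnStart": {"turn_id": "step_example"}}},
    {"event": {"AgentMessage": "Done."}},
    {"event": {"TurnEnd": {"turn_id": "step_example", "status": "Completed"}}},
    {"event": "Goodbye"},
]
print(run(feed))  # ['Done.', '[turn Completed]']
```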