Protocol Reference

Ante uses a typed message-passing protocol between the client and daemon. Messages are exchanged over bounded async channels (in-process) or as JSON Lines over stdin/stdout or WebSocket frames (external clients).

Wire format

External clients communicate with the daemon using JSON Lines (JSONL) — one JSON object per line over stdin/stdout.

  • Client → Daemon: Send OpMsg objects as JSON lines to stdin
  • Daemon → Client: Receive EventMsg objects as JSON lines from stdout
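
As a sketch, a minimal Python client helper (function names here are illustrative, not part of Ante) would serialize one OpMsg per line for stdin and parse each stdout line as an EventMsg:

```python
import json

def encode_op(op, op_id):
    """Serialize an OpMsg envelope as a single JSON line for the daemon's stdin."""
    return json.dumps({"op": op, "id": op_id}) + "\n"

def decode_event(line):
    """Parse one stdout line into (event, parent) from an EventMsg envelope."""
    msg = json.loads(line)
    return msg["event"], msg.get("parent")

# Build a UserInput operation and parse a sample event line.
line = encode_op({"UserInput": "explain what this project does"},
                 "op_01ARZ3NDEKTSV4RRFFQ69G5FAV")
event, parent = decode_event(
    '{"timestamp":"2025-06-01T12:00:00Z","id":"evt_01ARZ3NDEKTSV4RRFFQ69G5FAV",'
    '"event":{"AgentMessage":"hi"},"parent":null}'
)
```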

OpMsg envelope

Every operation is wrapped in an OpMsg:

{
  "op": { "StartSession": { "model": "claude-sonnet-4-5", "provider": "anthropic", "streaming": true } },
  "id": "op_01ARZ3NDEKTSV4RRFFQ69G5FAV"
}

EventMsg envelope

Every event is wrapped in an EventMsg:

{
  "timestamp": "2025-06-01T12:00:00Z",
  "id": "evt_01ARZ3NDEKTSV4RRFFQ69G5FAV",
  "event": { "AgentMessage": "Here is the result..." },
  "parent": "op_01ARZ3NDEKTSV4RRFFQ69G5FAV"
}

The parent field links an event back to the operation that triggered it. It is null when not applicable.

Message IDs

Every message has a typed Id consisting of a prefix (up to 4 bytes) and a ULID. The string format is {prefix}_{ulid}.

| Prefix | Usage |
|--------|-------|
| op_    | Operations (client → daemon) |
| evt_   | Events (daemon → client) |
| ses_   | Session identifiers |
| step_  | Step identifiers |

Example: op_01J5A3B7C9D0E1F2G3H4J5K6M7
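
A typed id can be minted client-side with any ULID implementation. As an illustrative sketch (hand-rolling the 26-character Crockford base32 encoding rather than depending on a ULID library):

```python
import os
import time

CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def new_id(prefix):
    """Build a {prefix}_{ulid} string: a 48-bit millisecond timestamp followed
    by 80 random bits, encoded as 26 Crockford base32 characters."""
    value = (int(time.time() * 1000) << 80) | int.from_bytes(os.urandom(10), "big")
    chars = []
    for _ in range(26):
        chars.append(CROCKFORD[value & 0x1F])
        value >>= 5
    return prefix + "_" + "".join(reversed(chars))
```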

Operations (Client → Daemon)

StartSession

Initialize a new session with model, provider, and configuration.

{
  "op": {
    "StartSession": {
      "model": "claude-sonnet-4-5",
      "provider": "anthropic",
      "policy": null,
      "streaming": true,
      "system_prompt": null,
      "append_system_prompt": null,
      "allowed_tools": null,
      "disallowed_tools": null,
      "cwd": null,
      "thinking": null
    }
  },
  "id": "op_..."
}

SessionConfig fields:

| Field | Type | Description |
|-------|------|-------------|
| model | string | Model name (e.g. "claude-sonnet-4-5") |
| provider | string | Provider name (e.g. "anthropic", "openai", "gemini") |
| policy | Policy? | Tool approval policy; null uses the default |
| streaming | bool | Enable streaming deltas (MessageDelta, ThinkingDelta) |
| system_prompt | string? | Override the default system prompt entirely |
| append_system_prompt | string? | Append content to the default system prompt |
| allowed_tools | string[]? | Whitelist; only these tools are available |
| disallowed_tools | string[]? | Blacklist; these tools are removed |
| cwd | string? | Working directory; defaults to the daemon's process directory |
| thinking | Thinking? | Thinking level override: "Disabled", "Enabled", "Deep", or "Max" |

UpdateSession

Update the active session without restarting it (e.g. switch models mid-session).

{
  "op": {
    "UpdateSession": {
      "model": { "name": "gpt-5.4", "temperature": 0.2 }
    }
  },
  "id": "op_..."
}

SessionUpdate fields:

| Field | Type | Description |
|-------|------|-------------|
| model | ModelSpec | New model specification to use |

Steer

Provide additional guidance to the agent during an active turn without starting a new one.

{ "op": { "Steer": "focus on the auth module first" }, "id": "op_..." }

UserInput

Submit user text input to the agent.

{ "op": { "UserInput": "explain what this project does" }, "id": "op_..." }

ApprovalResponse

Respond to a tool approval request (sent after receiving a TurnPause event with Approval reason).

{
  "op": {
    "ApprovalResponse": {
      "turn_id": "step_01ARZ...",
      "responses": [
        ["tool_use_abc123", "Accept"],
        ["tool_use_def456", "Skip"]
      ]
    }
  },
  "id": "op_..."
}

ReviewDecision values:

| Decision | Description |
|----------|-------------|
| Accept | Allow this tool call |
| Skip | Skip this tool call |
| AcceptForSession | Allow this tool for the rest of the session |
| Abort | Abort the current task |
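
Putting the two messages together: a sketch of a client handler that answers a TurnPause Approval by accepting every requested tool (the helper name is illustrative):

```python
def approve_all(pause, op_id):
    """Build an ApprovalResponse OpMsg accepting every tool listed in a
    TurnPause Approval payload."""
    approval = pause["reason"]["Approval"]
    responses = [[tool["id"], "Accept"] for tool in approval["tools"]]
    return {
        "op": {"ApprovalResponse": {"turn_id": pause["turn_id"],
                                    "responses": responses}},
        "id": op_id,
    }

# Simulated TurnPause payload (ids shortened for illustration).
pause = {"turn_id": "step_01ARZ...", "reason": {"Approval": {
    "tools": [{"id": "tool_use_abc123", "name": "Bash",
               "input": {"command": "ls -la"}}],
    "message": "Allow running shell command?"}}}
msg = approve_all(pause, "op_...")
```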

SlashCommand

Invoke a skill by name.

{ "op": { "SlashCommand": { "name": "commit", "args": "-m 'fix bug'" } }, "id": "op_..." }

Interrupt

Abort whatever is currently running.

{ "op": "Interrupt", "id": "op_..." }

Shutdown

Request a graceful shutdown.

{ "op": "Shutdown", "id": "op_..." }

OfflineMode

Offline mode operations for local model management. See the OfflineModeOp section under Offline mode types.

{ "op": { "OfflineMode": "Init" }, "id": "op_..." }

Events (Daemon → Client)

Session events

SessionStart

Emitted when a session is initialized. Contains metadata about the active model, provider, session ID, and working directory. Skills and subagents are delivered separately via ExtensionRefreshed.

{
  "event": {
    "SessionStart": {
      "model": { "name": "claude-sonnet-4-5", "max_tokens": 8192 },
      "provider": "Anthropic",
      "session_id": "ses_01ARZ...",
      "cwd": "/home/user/project"
    }
  }
}

SessionInitialized fields:

| Field | Type | Description |
|-------|------|-------------|
| model | ModelSpec | Active model specification |
| provider | ApiProvider | Active provider |
| session_id | Id | Unique session identifier |
| cwd | string | Working directory path |

SessionUpdated

Emitted when the active session is updated in place (e.g. model changed via UpdateSession).

{
  "event": {
    "SessionUpdated": {
      "model": { "name": "gpt-5.4" },
      "provider": "OpenAI",
      "session_id": "ses_01ARZ...",
      "cwd": "/home/user/project"
    }
  }
}

SessionEnd

Emitted when the session terminates.

{ "event": "SessionEnd" }

Turn lifecycle events

TurnStart

Emitted when a new turn begins processing.

{ "event": { "TurnStart": { "turn_id": "step_01ARZ..." } } }

TurnPause

Emitted when a turn is paused waiting for user input (e.g. tool approval).

{
  "event": {
    "TurnPause": {
      "turn_id": "step_01ARZ...",
      "reason": {
        "Approval": {
          "tools": [
            { "id": "tool_use_abc123", "name": "Bash", "input": { "command": "ls -la" } }
          ],
          "message": "Allow running shell command?"
        }
      }
    }
  }
}

TurnPauseReason variants:

| Variant | Fields | Description |
|---------|--------|-------------|
| Approval | tools: ToolUse[], message: string | Waiting for tool approval |

TurnEnd

Emitted when a turn completes.

{ "event": { "TurnEnd": { "turn_id": "step_01ARZ...", "status": "Completed" } } }

TurnEndStatus variants:

| Variant | Fields | Description |
|---------|--------|-------------|
| Completed | | Turn finished successfully |
| Interrupted | reason?: string | Turn was interrupted |
| Error | message: string | Turn ended with an error |

Message streaming events

AgentMessage

Complete agent text response (non-streaming).

{ "event": { "AgentMessage": "The project is a web server that..." } }

Thinking

Complete chain-of-thought block (non-streaming).

{ "event": { "Thinking": "Let me analyze the codebase structure..." } }

MessageDelta

Streaming chunk of the agent's message. Concatenate all deltas to build the full message.

{ "event": { "MessageDelta": "The project" } }

ThinkingDelta

Streaming chunk of the agent's thinking. Concatenate all deltas to build the full thinking block.

{ "event": { "ThinkingDelta": "Let me" } }
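
When streaming is enabled, a client folds these deltas back into full blocks. A minimal accumulator sketch (class name is illustrative):

```python
class StreamAccumulator:
    """Concatenates MessageDelta / ThinkingDelta chunks into full text blocks."""

    def __init__(self):
        self.message = ""
        self.thinking = ""

    def feed(self, event):
        # Each streaming event is a single-key dict, e.g. {"MessageDelta": "..."}.
        if isinstance(event, dict):
            if "MessageDelta" in event:
                self.message += event["MessageDelta"]
            elif "ThinkingDelta" in event:
                self.thinking += event["ThinkingDelta"]

acc = StreamAccumulator()
for ev in [{"ThinkingDelta": "Let me"}, {"ThinkingDelta": " analyze..."},
           {"MessageDelta": "The project"}, {"MessageDelta": " is a web server."}]:
    acc.feed(ev)
```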

Tool events

ToolStart

Emitted when a tool invocation begins.

{
  "event": {
    "ToolStart": {
      "id": "tool_use_abc123",
      "name": "Read",
      "input": { "file_path": "/src/main.rs" }
    }
  }
}

ToolUpdate

Progress update during tool execution.

{
  "event": {
    "ToolUpdate": {
      "tool_use_id": "tool_use_abc123",
      "seq": 0,
      "message": "Reading file..."
    }
  }
}

ToolUpdate fields:

| Field | Type | Description |
|-------|------|-------------|
| tool_use_id | string | Tool call identifier (matches ToolStart.id) |
| seq | u64 | Monotonically increasing sequence number |
| message | string | Progress message |

ToolEnd

Emitted when a tool execution completes.

{
  "event": {
    "ToolEnd": {
      "tool_use_id": "tool_use_abc123",
      "status": "Completed",
      "result_json": { "content": "fn main() { ... }" },
      "is_error": false
    }
  }
}

ToolEndStatus variants:

| Variant | Description |
|---------|-------------|
| Completed | Tool ran successfully |
| Cancelled | Tool execution was cancelled |
| Denied | Tool was denied by the user |
| Failed | Tool execution failed |
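
Because ToolUpdate and ToolEnd refer back to ToolStart via tool_use_id, a client typically keys in-flight tool calls on that id. A sketch of such correlation (function name is illustrative):

```python
def track_tools(events):
    """Correlate ToolStart / ToolUpdate / ToolEnd events by tool_use_id."""
    calls = {}
    for ev in events:
        if "ToolStart" in ev:
            start = ev["ToolStart"]
            calls[start["id"]] = {"name": start["name"], "updates": [], "status": None}
        elif "ToolUpdate" in ev:
            upd = ev["ToolUpdate"]
            calls[upd["tool_use_id"]]["updates"].append(upd["message"])
        elif "ToolEnd" in ev:
            end = ev["ToolEnd"]
            calls[end["tool_use_id"]]["status"] = end["status"]
    return calls

# Simulated lifecycle for one tool call.
calls = track_tools([
    {"ToolStart": {"id": "tool_use_abc123", "name": "Read",
                   "input": {"file_path": "/src/main.rs"}}},
    {"ToolUpdate": {"tool_use_id": "tool_use_abc123", "seq": 0,
                    "message": "Reading file..."}},
    {"ToolEnd": {"tool_use_id": "tool_use_abc123", "status": "Completed",
                 "result_json": {"content": "fn main() { ... }"}, "is_error": False}},
])
```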

Compaction events

CompactStart

Emitted when dialog compaction begins.

{ "event": "CompactStart" }

CompactEnd

Emitted when dialog compaction completes.

{ "event": "CompactEnd" }

Extension events

ExtensionRefreshed

Emitted when skills and subagents are refreshed. This is also sent after SessionStart to deliver the initial set of skills and subagents.

{
  "event": {
    "ExtensionRefreshed": {
      "session_id": "ses_01ARZ...",
      "skills": [
        { "name": "commit", "description": "Create a git commit", "scope": "user", "argument_hint": "-m 'message'" }
      ],
      "subagents": [
        { "name": "explore", "description": "Explore the codebase", "scope": "project" }
      ]
    }
  }
}

Informational events

UsageUpdate

Token usage statistics for the session.

{ "event": { "UsageUpdate": { "usage": { "input_tokens": 1500, "output_tokens": 300 } } } }

Info

General informational message.

{ "event": { "Info": "Compacting conversation history..." } }

Error

Error message.

{ "event": { "Error": "Authentication failed: invalid API key" } }

Goodbye

Final message before the daemon disconnects. After receiving this, no more events will be sent.

{ "event": "Goodbye" }

Offline mode events

See the OfflineModeEvt section below.

Offline mode types

OfflineModeOp

Operations for managing local/offline models.

| Variant | Fields | Description |
|---------|--------|-------------|
| Init | | Check llama.cpp status and discover local models |
| InstallEngine | | Download and install llama.cpp |
| UpgradeEngine | | Upgrade llama.cpp to a newer version |
| SetModelDirectory | path: string | Set the local models directory |
| LoadModel | model: OfflineModel, prefs: ModelPreferences | Load an offline model with preferences |
| AttachServer | port: u16, model_name: string | Attach to a running llama server on a local port |
| StopServer | | Stop the offline inference server |
| KillLlamaServer | | Kill any running llama server (owned or external) on default ports |

OfflineModeEvt

Events reporting offline mode status.

| Variant | Fields | Description |
|---------|--------|-------------|
| Init | engine_status, system_caps, local_models, verified_models, upgrade_available?, running_servers | Initialization status report |
| InstallProgress | progress: u8, message: string | Engine installation progress (0–100) |
| Installed | path: string | Engine installation complete |
| ModelLoading | model_name: string, file_size_bytes: u64 | Model download in progress |
| ServerReady | port: u16, model_name: string, server_pid?: u32 | Offline server is ready to accept requests |
| LlamaServerKilled | | A llama server was killed (owned or external) |
| Error | message: string | Offline mode error |

Transport

In-process channels

When the client and daemon run in the same process (TUI or headless mode), they communicate via bounded Tokio mpsc channels:

| Channel | Direction | Buffer size |
|---------|-----------|-------------|
| Op channel | Client → Daemon | 256 messages |
| Event channel | Daemon → Client | 4096 messages |

Stdio transport (JSONL)

For external clients using ante serve (or ante serve --stdio), StdioTransport bridges JSON Lines over stdin/stdout to the internal channel pair:

  • stdin → Parse each line as OpMsg → forward to daemon
  • daemon events → Serialize as JSON → write to stdout (one line per event)
  • EOF on stdin → Automatically sends Op::Shutdown
  • Evt::Goodbye received → Transport exits
  • JSON parse errors → An Evt::Error is sent back on stdout
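
The read side of this bridge can be exercised against any line-oriented text stream. A sketch that drains events until Goodbye, using io.StringIO to stand in for the daemon's stdout (event payloads below are simulated):

```python
import io
import json

def drain_events(stream):
    """Read EventMsg lines from a daemon stdout stream until Goodbye arrives."""
    events = []
    for line in stream:
        msg = json.loads(line)
        events.append(msg["event"])
        if msg["event"] == "Goodbye":
            break
    return events

# Simulated daemon stdout: a SessionEnd followed by the final Goodbye.
stdout = io.StringIO(
    '{"timestamp":"2025-06-01T12:00:00Z","id":"evt_1","event":"SessionEnd","parent":null}\n'
    '{"timestamp":"2025-06-01T12:00:00Z","id":"evt_2","event":"Goodbye","parent":null}\n'
)
events = drain_events(stdout)
```

In a real client the same function would be pointed at the stdout pipe of a spawned `ante serve` process instead of a StringIO buffer.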

WebSocket transport

For networked or browser-based clients using ante serve --ws <ADDR>, WsTransport exchanges the same JSONL protocol over WebSocket frames:

  • Each WebSocket connection gets its own daemon instance
  • Messages are the same OpMsg and EventMsg JSON objects, sent as text frames
  • Client disconnect → Daemon instance shuts down, server accepts next connection
  • Op::Shutdown received → Connection and server both exit
  • The server loops accepting new connections until a Shutdown is received

Complete flow example

A full session lifecycle from start to shutdown:

Client                                      Daemon
   │                                          │
   │─── OpMsg { StartSession(...) } ─────────▶│
   │◀── EventMsg { SessionStart(...) } ───────│
   │                                          │
   │─── OpMsg { UserInput("fix bug") } ──────▶│
   │◀── EventMsg { TurnStart { turn_id } } ───│
   │◀── EventMsg { ThinkingDelta("...") } ────│
   │◀── EventMsg { ThinkingDelta("...") } ────│
   │◀── EventMsg { Thinking("...") } ─────────│
   │◀── EventMsg { MessageDelta("...") } ─────│
   │◀── EventMsg { ToolStart(ToolUse) } ──────│
   │◀── EventMsg { TurnPause(Approval) } ─────│
   │                                          │
   │─── OpMsg { ApprovalResponse(...) } ─────▶│
   │◀── EventMsg { ToolUpdate(...) } ─────────│
   │◀── EventMsg { ToolEnd(...) } ────────────│
   │◀── EventMsg { MessageDelta("...") } ─────│
   │◀── EventMsg { AgentMessage("...") } ─────│
   │◀── EventMsg { UsageUpdate(...) } ────────│
   │◀── EventMsg { TurnEnd(Completed) } ──────│
   │                                          │
   │─── OpMsg { Shutdown } ──────────────────▶│
   │◀── EventMsg { SessionEnd } ──────────────│
   │◀── EventMsg { Goodbye } ─────────────────│
   │                                          │