Define goals, give it skills, and let your agent handle the rest. Runs on your laptop or across a Kubernetes cluster — with the same config and the safeguards that let you sleep at night.
Everything you need to deploy, manage, and scale autonomous AI agents — with security built into every layer, not bolted on as an afterthought.
Every AI agent runs in an isolated sandbox — no network access, no credential leaks, no escape hatches. Seatbelt, nsjail, bwrap, Docker, or Kubernetes pods with gVisor.
Every piece of external content is tagged at the source. We trace it through the entire pipeline so you always know what's user-generated and what isn't.
Multi-layer scanning catches injection attempts before they reach your LLM. Pattern matching, guardian models, and canary tokens — belt, suspenders, and a backup belt.
Extend ax with third-party providers via the Provider SDK. Integrity-verified, process-isolated, and lockfile-pinned. Because "npm install trust-me" isn't a security strategy.
Persistent, semantic memory with embedding search, LLM-powered extraction, and proactive context recall. Multi-user scoped. Your agent remembers what matters.
Real-time event bus (in-process or NATS) with SSE streaming, OpenTelemetry tracing, and Langfuse integration. Watch your agent think — or plug into your existing stack.
API keys never enter the sandbox. OS keychain integration, credential-injecting proxy, and host-side isolation. Your secrets stay where they belong.
47 swappable providers across LLM, image, memory, scanner, channel, web, browser, credentials, skills, audit, sandbox, scheduler, database, storage, eventbus, and screener.
Drop-in /v1/chat/completions with SSE streaming plus /v1/files/ for persistent artifacts. Point your existing tools at ax and get security for free.
Same framework, same config format. A personal assistant on your machine, or a fleet of agents across Kubernetes. You choose the scale — we handle the rest.
# Your personal agent
profile: standard
models:
default:
- anthropic/claude-sonnet-4-20250514
image:
- openai/gpt-image-1.5
providers:
memory: cortex
sandbox: seatbelt
skills: git
npx ax init# Production Kubernetes deployment
replicaCount: 3
config:
models:
default:
- anthropic/claude-sonnet-4-20250514
- groq/llama-3.3-70b-versatile
providers:
memory: cortex
database: postgresql
sandbox: k8s
eventbus: nats
audit: database
postgresql:
enabled: true
nats:
enabled: true
ax agents don't just answer questions — they break down complex tasks, use tools, check their work, and iterate. Extended thinking models (Anthropic, OpenAI o-series, DeepSeek R1) stream their reasoning in real time so you can watch the gears turn.
Every LLM call, every tool invocation, every decision — logged and queryable. The streaming event bus emits typed events via SSE or NATS. Plug into OpenTelemetry or Langfuse for production-grade traces. When you need to debug, the full picture is right there.
Every piece of ax is a TypeScript interface. Swap Anthropic for OpenAI, SQLite for PostgreSQL, local sandbox for Kubernetes pods. Install third-party plugins with ax plugin add — integrity-verified and process-isolated.
Organize models by task type — default, fast, thinking, coding, image — each with its own fallback chain. The router handles failover with exponential backoff and circuit breakers. Your agent picks the right model for each job automatically.
Install, configure, and start chatting with your agent.
# Install and run
npm install
export ANTHROPIC_API_KEY=your-key-here
npm start
# Or use the CLI
ax configure # interactive setup wizard
ax serve # start the server
ax chat # interactive chat session
ax plugin add @ax/web # install a provider plugin
ax is free and open source under the MIT license. No paid tiers, no gated features. Just a framework you can use, fork, extend with plugins, and build on.