open source

Let your agent cook.
In a fireproof kitchen.

Define goals, give it skills, and let your agent handle the rest. Runs on your laptop or across a Kubernetes cluster — with the same config and the safeguards that let you sleep at night.

$ npx ax init

Security you won't notice until it matters

Everything you need to deploy, manage, and scale autonomous AI agents — with security built into every layer, not bolted on as an afterthought.

Sandboxed Execution

Every AI agent runs in an isolated sandbox — no network access, no credential leaks, no escape hatches. Seatbelt, nsjail, bwrap, Docker, or Kubernetes pods with gVisor.

Taint Tracking

Every piece of external content is tagged at the source. We trace it through the entire pipeline so you always know what's user-generated and what isn't.

Prompt Injection Scanning

Multi-layer scanning catches injection attempts before they reach your LLM. Pattern matching, guardian models, and canary tokens — belt, suspenders, and a backup belt.

Plugin Ecosystem

Extend ax with third-party providers via the Provider SDK. Integrity-verified, process-isolated, and lockfile-pinned. Because "npm install trust-me" isn't a security strategy.

Cortex Memory

Persistent, semantic memory with embedding search, LLM-powered extraction, and proactive context recall. Multi-user scoped. Your agent remembers what matters.

Streaming & Observability

Real-time event bus (in-process or NATS) with SSE streaming, OpenTelemetry tracing, and Langfuse integration. Watch your agent think — or plug into your existing stack.

Secure Credentials

API keys never enter the sandbox. OS keychain integration, credential-injecting proxy, and host-side isolation. Your secrets stay where they belong.

18 Provider Categories

47 swappable providers across LLM, image, memory, scanner, channel, web, browser, credentials, skills, audit, sandbox, scheduler, database, storage, eventbus, and screener.

OpenAI-Compatible API

Drop-in /v1/chat/completions with SSE streaming plus /v1/files/ for persistent artifacts. Point your existing tools at ax and get security for free.

From laptop to cluster

Same framework, same config format. A personal assistant on your machine, or a fleet of agents across Kubernetes. You choose the scale — we handle the rest.

personal
ax.yaml
# Your personal agent
profile: standard

models:
  default:
    - anthropic/claude-sonnet-4-20250514
  image:
    - openai/gpt-image-1.5

providers:
  memory: cortex
  sandbox: seatbelt
  skills: git
  • SQLite — zero infrastructure
  • Local sandbox (seatbelt / nsjail)
  • One command: npx ax init
  • Persistent memory out of the box
enterprise
helm values.yaml
# Production Kubernetes deployment
replicaCount: 3

config:
  models:
    default:
      - anthropic/claude-sonnet-4-20250514
      - groq/llama-3.3-70b-versatile
  providers:
    memory: cortex
    database: postgresql
    sandbox: k8s
    eventbus: nats
    audit: database

postgresql:
  enabled: true
nats:
  enabled: true
  • PostgreSQL + NATS JetStream
  • K8s pods with gVisor isolation
  • Helm chart + Flux CD gitops
  • Network policies & RBAC

Multi-Step Reasoning

ax agents don't just answer questions — they break down complex tasks, use tools, check their work, and iterate. Extended thinking models (Anthropic, OpenAI o-series, DeepSeek R1) stream their reasoning in real time so you can watch the gears turn.

See Everything

Every LLM call, every tool invocation, every decision — logged and queryable. The streaming event bus emits typed events via SSE or NATS. Plug into OpenTelemetry or Langfuse for production-grade traces. When you need to debug, the full picture is right there.

14:23:01llm.startok
14:23:02llm.thinkingstream
14:23:03tool.call: bashok
14:23:04scan.outboundtaint:0.3
14:23:05scan.inboundblocked

Build It Your Way

Every piece of ax is a TypeScript interface. Swap Anthropic for OpenAI, SQLite for PostgreSQL, local sandbox for Kubernetes pods. Install third-party plugins with ax plugin add — integrity-verified and process-isolated.

LLM
Image
Memory
Scanner
Sandbox
Scheduler
Database
EventBus
Storage
Skills
Audit
Plugins

Smart Model Routing

Organize models by task type — default, fast, thinking, coding, image — each with its own fallback chain. The router handles failover with exponential backoff and circuit breakers. Your agent picks the right model for each job automatically.

defaultclaude-sonnet → llama-3.3fallback
fastclaude-haikuscreening
thinkingclaude-opusplanning
codingclaude-sonnetcode gen
imagegpt-image-1.5 → seedreamfallback

Get started in 30 seconds

Install, configure, and start chatting with your agent.

terminal
# Install and run
npm install
export ANTHROPIC_API_KEY=your-key-here
npm start

# Or use the CLI
ax configure          # interactive setup wizard
ax serve              # start the server
ax chat               # interactive chat session
ax plugin add @ax/web # install a provider plugin
0
Provider Categories
0
Implementations
0
Test Files
0
Lines of TypeScript

Built in the open

ax is free and open source under the MIT license. No paid tiers, no gated features. Just a framework you can use, fork, extend with plugins, and build on.