
Day one, it's useful.
Day thirty, it's indispensable.

ax is an open source framework for building AI agents that remember context, adapt to your workflows, and actually get better over time. We're figuring out agentic systems together — come build with us.

$ npx ax init

What your agents can actually do

Not another chatbot wrapper. ax agents learn your context, use tools, work across models, and keep getting better at the things your team does every day.

Persistent Memory

Your agent remembers what matters. Semantic memory with embedding search, LLM-powered extraction, and proactive context recall — so you don't have to re-explain everything every session.
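Under the hood, semantic recall boils down to nearest-neighbor search over embeddings. Here's a toy sketch of the idea (not the real ax API; the names are made up, and a term-frequency vector stands in for a real embedding model so the example is self-contained):

```typescript
// Hypothetical sketch of embedding-based memory recall, NOT ax's real API.
// A real setup would call an embedding model; a toy bag-of-words vector
// stands in here so the example runs on its own.

type Memory = { text: string; vector: Map<string, number> };

// Toy "embedding": term-frequency vector over whitespace tokens.
function embed(text: string): Map<string, number> {
  const v = new Map<string, number>();
  for (const tok of text.toLowerCase().split(/\s+/)) {
    v.set(tok, (v.get(tok) ?? 0) + 1);
  }
  return v;
}

// Cosine similarity between two sparse vectors.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [k, x] of a) { dot += x * (b.get(k) ?? 0); na += x * x; }
  for (const [, y] of b) nb += y * y;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

class MemoryStore {
  private items: Memory[] = [];
  remember(text: string): void {
    this.items.push({ text, vector: embed(text) });
  }
  // Return the top-k stored memories most similar to the query.
  recall(query: string, k = 3): string[] {
    const q = embed(query);
    return [...this.items]
      .sort((a, b) => cosine(q, b.vector) - cosine(q, a.vector))
      .slice(0, k)
      .map((m) => m.text);
  }
}

const store = new MemoryStore();
store.remember("deploys go out every Friday afternoon");
store.remember("the staging database lives in us-east-1");
store.remember("team prefers squash merges");
```

Ask about deploys and the Friday memory surfaces first; that's the shape of proactive recall, just with real embeddings and an LLM doing the extraction.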

Runs Anywhere

Same config on your laptop or across a Kubernetes cluster. Local sandboxing for personal use, K8s pods for your whole team. Scale when you're ready, not before.

Smart Model Routing

Organize models by task — fast, thinking, coding — each with fallback chains. The router picks the right model for each job so you get speed where it matters and power where you need it.

Plugin Ecosystem

Extend ax with community-built providers via the Provider SDK. Give your agent new skills, connect new services, add new models. If you can write TypeScript, you can build a plugin.
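The basic shape of a provider plugin is small. This is an illustrative sketch, not ax's actual Provider SDK (the `SkillProvider`, `registerPlugin`, and `invoke` names here are invented for the example):

```typescript
// Hypothetical sketch of a skill plugin. The interface and function names
// are illustrative, not ax's real Provider SDK.

interface SkillProvider {
  name: string;
  run(input: string): string;
}

const registry = new Map<string, SkillProvider>();

// Plugins register themselves under a name the agent can look up.
function registerPlugin(provider: SkillProvider): void {
  registry.set(provider.name, provider);
}

// A tiny community plugin: upper-cases its input.
registerPlugin({
  name: "shout",
  run(input) { return input.toUpperCase(); },
});

// The runtime resolves skills by name and invokes them.
function invoke(name: string, input: string): string {
  const p = registry.get(name);
  if (!p) throw new Error(`no such skill: ${name}`);
  return p.run(input);
}
```

Write the provider, register it, and the agent can call it by name; publishing it as a package is the rest of the story.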

Always On

Agents that work while you don't. Scheduled tasks, proactive hints, and a heartbeat system that keeps things running. Set it up on Friday, find the results on Monday.

Real-Time Streaming

Watch your agent think. SSE event streaming, OpenTelemetry tracing, Langfuse integration. When something goes sideways, you'll know exactly what happened and why.

Connects Your Stack

Drop-in OpenAI-compatible API, Slack integration, MCP servers, and channels for wherever your team already works. Point your existing tools at ax and they just work.

Swappable Everything

15 provider categories — LLM, memory, channel, database, sandbox, and more. Every piece is a TypeScript interface. Don't like something? Swap it. Want something new? Build it.
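"Every piece is a TypeScript interface" means code written against the contract never notices which backend is behind it. A minimal sketch, with made-up stand-ins for the database category:

```typescript
// Sketch of the swappable-provider idea. The interface and class names are
// hypothetical stand-ins, not ax's real database providers.

interface DatabaseProvider {
  put(key: string, value: string): void;
  get(key: string): string | undefined;
}

// In-memory stand-in for a SQLite-backed provider.
class SqliteLike implements DatabaseProvider {
  private rows = new Map<string, string>();
  put(k: string, v: string) { this.rows.set(k, v); }
  get(k: string) { return this.rows.get(k); }
}

// A second backend with the same contract (imagine PostgreSQL here).
class PostgresLike implements DatabaseProvider {
  private rows: Record<string, string> = {};
  put(k: string, v: string) { this.rows[k] = v; }
  get(k: string) { return this.rows[k]; }
}

// Calling code depends only on the interface, so the swap is invisible.
function smoke(db: DatabaseProvider): string | undefined {
  db.put("greeting", "hello");
  return db.get("greeting");
}
```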

Safe by Default

Sandboxed execution, taint tracking, prompt injection scanning, credential isolation. The guardrails are there so you can let your agents loose without losing sleep.
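To make the taint-tracking idea concrete, here's a stripped-down sketch of the principle (ax's real implementation is certainly more involved; these names are invented for illustration): values from untrusted sources get wrapped, and dangerous sinks refuse them.

```typescript
// Hypothetical sketch of taint tracking, not ax's real implementation.
// Untrusted values are wrapped in a marker type; guarded sinks reject them.

type Tainted = { readonly value: string; readonly tainted: true };

// Anything arriving from outside (web content, user uploads) gets wrapped.
function fromUntrusted(value: string): Tainted {
  return { value, tainted: true };
}

// A guarded sink: plain strings pass, tainted values are refused outright.
function runShell(cmd: string | Tainted): string {
  if (typeof cmd !== "string") {
    throw new Error("refusing to execute tainted input");
  }
  return `ran: ${cmd}`; // stand-in for actual sandboxed execution
}
```

The type system does half the work: a tainted value can't silently decay into a plain string on its way to the shell.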

From side project to team infrastructure

Same framework, same config. Start with a personal agent on your laptop. When the rest of your team wants in, scale to Kubernetes without rewriting anything.

personal — ax.yaml

# Your personal agent
profile: standard

models:
  default:
    - anthropic/claude-sonnet-4-20250514
providers:
  memory: cortex
  sandbox: seatbelt
  skills: git

• SQLite — zero infrastructure
• Runs locally, learns your patterns
• One command: npx ax init
• Persistent memory out of the box
team — helm values.yaml

# Same agent, now for the whole team
replicaCount: 3

config:
  models:
    default:
      - anthropic/claude-sonnet-4-20250514
      - groq/llama-3.3-70b-versatile
  providers:
    memory: cortex
    database: postgresql
    sandbox: k8s
    eventbus: postgres
    audit: database

postgresql:
  enabled: true

• PostgreSQL for shared state
• Multi-user memory scoping
• Helm chart — deploy in minutes
• Agents that learn from the whole team

Thinks, then does

ax agents don't just autocomplete — they break down problems, use tools, check their work, and iterate. Extended thinking models stream their reasoning in real time. It's weirdly satisfying to watch.

Know what's happening

Every decision your agent makes is logged and queryable. Stream events in real time, plug into OpenTelemetry, or just tail the logs. When something goes weird — and it will, this is AI — you'll know exactly where to look.

14:23:01   llm.start        ok
14:23:02   llm.thinking     stream
14:23:03   tool.call: bash  ok
14:23:04   memory.recall    3 hits
14:23:05   task.complete    done
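Consuming that stream programmatically can be as simple as subscribing and filtering. A sketch (the event names mirror the log above, but this emitter API is illustrative, not ax's real one):

```typescript
// Hypothetical sketch of consuming an agent event stream. The event shapes
// echo the log above; the emitter itself is invented for the example.

type AgentEvent = { ts: string; type: string; detail: string };
type Listener = (e: AgentEvent) => void;

class EventStream {
  private listeners: Listener[] = [];
  on(fn: Listener) { this.listeners.push(fn); }
  emit(e: AgentEvent) { for (const fn of this.listeners) fn(e); }
}

const stream = new EventStream();
const toolCalls: AgentEvent[] = [];

// Subscribe once, filter for the events you care about, e.g. tool use.
stream.on((e) => { if (e.type.startsWith("tool.")) toolCalls.push(e); });

stream.emit({ ts: "14:23:01", type: "llm.start", detail: "ok" });
stream.emit({ ts: "14:23:03", type: "tool.call: bash", detail: "ok" });
stream.emit({ ts: "14:23:05", type: "task.complete", detail: "done" });
```

Swap the toy emitter for an SSE connection and the same filter-and-collect pattern gives you a live audit trail.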

Your stack, your rules

Every piece of ax is a TypeScript interface. Swap Anthropic for OpenAI, SQLite for PostgreSQL, local process for Kubernetes pods. Build your own provider, publish it as a plugin, share it with the community. That's kind of the whole point.

LLM
Memory
Channel
Sandbox
Scheduler
Database
EventBus
Storage
MCP
Audit
Auth
Plugins

Right model for the job

Not every task needs your most expensive model. Organize by role — default, fast, thinking, coding — each with its own fallback chain. The router handles failover automatically. Your API bill will thank you.

default    claude-sonnet → llama-3.3   fallback
fast       claude-haiku                triage
thinking   claude-opus                 planning
coding     claude-sonnet               code gen
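Failover down a chain like the table above is a small loop at heart. A sketch under assumed names (the chain shapes mirror the table, but the code is illustrative, not ax's router):

```typescript
// Hypothetical sketch of role-based routing with fallback chains.
// Chain contents mirror the table above; the router itself is invented.

type Role = "default" | "fast" | "thinking" | "coding";

const chains: Record<Role, string[]> = {
  default: ["claude-sonnet", "llama-3.3"],
  fast: ["claude-haiku"],
  thinking: ["claude-opus"],
  coding: ["claude-sonnet"],
};

// Try each model in the role's chain in order; first success wins.
function route(role: Role, call: (model: string) => string): string {
  let lastErr: unknown = new Error("empty chain");
  for (const model of chains[role]) {
    try { return call(model); } catch (err) { lastErr = err; }
  }
  throw lastErr;
}
```

If claude-sonnet errors out, the default role quietly falls through to llama-3.3; your code never sees the failure.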

Try it in 30 seconds

Seriously, that's not marketing. It's three commands.

terminal
# Install and run
npm install
export ANTHROPIC_API_KEY=your-key-here
npm start

# Or use the CLI
ax configure          # walks you through setup
ax serve              # start the server
ax chat               # talk to your agent
ax plugin add @ax/web # give it new skills

Come build with us

ax is MIT-licensed and evolving fast. We're exploring the frontier of agentic systems in the open — new provider categories, better memory, smarter routing. If that sounds fun, jump in.