AZMX AI
v0.14.1 · GitHub, Kubernetes & AWS panels now built in

The AI terminal that reads the room.

A native terminal for macOS, Linux and Windows. It watches your screen, runs your shells, edits your code, and talks to any model you choose — or none at all.

~7 MB on disk · No account · No telemetry · Bring your own key
~/projects/azmx — zsh — 96 × 28

Speaks to the models you already pay for — or runs fully offline

OpenAI · Anthropic · Google · Groq · xAI · Cerebras · OpenAI-compatible · LM Studio (offline)

Context-aware

It reads your screen, not just your commands.

Stuck on an error you can't even describe? Don't write the perfect prompt. Just say what's happening. AZMX AI pulls the live terminal buffer, the open file, and your selection into the conversation — then tells you exactly what broke and where.

You approve the fix. It applies it. The flow never leaves the terminal.

you i'm stuck on a build error, no idea where to start
azmx reading your terminal buffer…
· matched TS2304 · src/modules/tabs/useTabs.ts:148
An import for Tab was dropped in the last commit but is still referenced on line 148. Add it back, or remove the reference?
⏎ accept · ⌘. redo
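The context-gathering step above can be sketched as a small function. Everything here is illustrative, not AZMX's actual internals: the `TerminalContext` shape and `buildPrompt` name are invented to show how a live buffer, open file, and selection might be folded into one prompt.

```typescript
// Hypothetical sketch: bundle live terminal state into one prompt payload.
// Names (TerminalContext, buildPrompt) are illustrative, not AZMX internals.
interface TerminalContext {
  buffer: string;      // last N lines of the visible terminal buffer
  openFile?: string;   // path of the file open in the editor, if any
  selection?: string;  // current editor selection, if any
}

function buildPrompt(userMessage: string, ctx: TerminalContext): string {
  const parts = [`User: ${userMessage}`, `Terminal buffer:\n${ctx.buffer}`];
  if (ctx.openFile) parts.push(`Open file: ${ctx.openFile}`);
  if (ctx.selection) parts.push(`Selection:\n${ctx.selection}`);
  return parts.join("\n\n");
}
```

The point of the shape: the user's message stays short because the surrounding state rides along automatically.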

Always on, never loud

Keeps watch while you work.

Tail the prod log. Watch the type-checker. Notice when the build slows down. Ask once, and AZMX AI remembers — quietly. If nothing's wrong, you won't hear from it. If something is, it surfaces in the smallest possible voice, the first time only.

It reads the room. It doesn't break it.

pane 2 · tail prod.log
14:11:08 200 · 89ms
14:11:14 200 · 110ms
14:11:22 200 · 1.4s
14:11:30 200 · 97ms
azmx · status bar
⌬ watching pane 2
matched: ERROR|FATAL
count: 0
noticed: 1 slow (1.4s)
quiet unless >3/60s
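The "quiet unless >3/60s" rule from the mockup can be sketched as a windowed threshold check. The constants and function name here mirror the mockup, not AZMX's real watcher code:

```typescript
// Hypothetical sketch of the "quiet unless >3/60s" rule: surface an alert
// only when more than three slow responses land inside a 60-second window.
const WINDOW_MS = 60_000; // look-back window
const MAX_SLOW = 3;       // stay quiet at 3 or fewer
const SLOW_MS = 1_000;    // what counts as "slow"

function shouldAlert(
  latenciesMs: number[],
  timestampsMs: number[],
  nowMs: number,
): boolean {
  const recentSlow = latenciesMs.filter(
    (lat, i) => lat > SLOW_MS && nowMs - timestampsMs[i] <= WINDOW_MS,
  );
  return recentSlow.length > MAX_SLOW;
}
```

A single 1.4 s response, as in the pane above, gets noticed but never alerted on; four in a minute would.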

Your data, your call

Your model. Or no model at all.

AZMX AI is bring-your-own-key. OpenAI, Anthropic, Google, Groq, xAI, Cerebras, any OpenAI-compatible endpoint — the agent loop, tools, and per-hunk diffs work the same regardless of provider. Charges hit your account, on your terms. We never see the traffic.

Or point it at LM Studio and never leave the machine. Same agent. Same diffs. Zero round-trips.

Settings → AI
Provider   OpenAI-compatible
· Base URL   http://localhost:1234/v1
· Key       stored in OS keychain
✓ offline · nothing leaves this machine
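"OpenAI-compatible" means one request shape works against any of these backends. A minimal sketch, assuming only the standard chat-completions path; the model name is a placeholder, not a recommendation:

```typescript
// Hypothetical sketch: the same chat-completions request works against a
// cloud provider or a local LM Studio server — only the base URL changes.
interface ChatRequest {
  url: string;
  body: { model: string; messages: { role: string; content: string }[] };
}

function chatRequest(baseUrl: string, model: string, content: string): ChatRequest {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    body: { model, messages: [{ role: "user", content }] },
  };
}

// Swap the base URL to stay on-machine:
const local = chatRequest("http://localhost:1234/v1", "local-model", "hello");
```

Same agent, same diffs — the provider is just a base URL and a key.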

Native, not a browser in a trench coat

Real desktop app. Real PTY. Real keychain.

No Electron tax. No Chromium ghost in the background. AZMX AI uses the OS window, the OS keychain, and the OS shell — a Rust core under a fast React UI, signed and notarized where the OS asks for it.

Bundle: ~7 MB
Platforms: 3
Telemetry: none
Account: none

Everything in one window

A terminal that grew up.

Multi-tab terminal

xterm.js + WebGL, native PTY (zsh/bash/pwsh), split panes, shell integration, true-color, link detection.

Code editor

CodeMirror 6 with vim mode, inline AI autocomplete, and per-hunk AI edit diffs you accept or reject.

File explorer

Material icons, fuzzy ⌘P quick-open, keyboard nav, inline rename, per-file git status.

AI agent & sub-agents

Multi-step agent with approval-gated tools, named sub-agents, voice input, and MCP servers.

Connections

GitHub PRs/issues, Kubernetes pods, AWS EC2, SSH hosts — using your existing auth, never a pasted key.

Project memory

AZMX.md for config + a local memory tree the agent recalls from.

On trust

You'll let it read your files. So let's be honest.

i.

Keys live in the OS keychain

Written through the OS keyring under the service name azmx-ai. Not on disk, not in localStorage, not in logs.

ii.

It won't touch the obvious traps

A built-in deny-list refuses .env*, .ssh/, and credential paths — on both reads and writes.
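A deny-list like this is essentially a handful of path patterns checked before every read or write. A minimal sketch; the exact patterns AZMX ships are not published here, so these are illustrative:

```typescript
// Hypothetical sketch of a deny-list check: refuse any path matching .env*,
// anything under .ssh/, or a credentials directory. Patterns are illustrative.
const DENY: RegExp[] = [
  /(^|\/)\.env[^/]*$/,      // .env, .env.local, .envrc, ...
  /(^|\/)\.ssh(\/|$)/,      // anything under .ssh/
  /(^|\/)credentials(\/|$)/ // e.g. .aws/credentials-style paths
];

function isDenied(path: string): boolean {
  return DENY.some((re) => re.test(path));
}
```

Checking on both reads and writes matters: a read is how a secret leaks into a prompt, a write is how one gets clobbered.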

iii.

It pauses before it acts

Reads auto-run. Writes, deletes, and shell commands always wait for an in-UI approval. You see it before it happens.
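The read/write split above can be sketched as a tiny dispatcher. The `Tool` and `Approver` types are invented for illustration — AZMX's real tool schema isn't shown here:

```typescript
// Hypothetical sketch of the approval gate: read-only tools run immediately,
// anything that mutates state waits on an explicit approval callback.
type Tool = { name: string; mutates: boolean; run: () => string };
type Approver = (tool: Tool) => boolean;

function dispatch(tool: Tool, approve: Approver): string {
  if (tool.mutates && !approve(tool)) {
    return `skipped: ${tool.name} (not approved)`;
  }
  return tool.run(); // reads auto-run; approved mutations proceed
}
```

The gate sits in the dispatcher, not in each tool, so a new tool can't accidentally opt out of it.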

iv.

It doesn't phone home

No analytics, no crash reports, no account. The binary hits one URL on its own: the update manifest. Block it if you like.

Open it up. It's listening.

The download is small. The setup is short. The first useful exchange takes about a minute.