Terminal coding agent with multi-model routing and specialized subagents for complex infrastructure tasks.
| Tier | Price | Includes |
|---|---|---|
| Individual | Pure pass-through LLM pricing with zero markup: pay actual provider API costs (Anthropic, OpenAI, etc.). $5 minimum credit purchase; no subscription, no commitment. | — |
| Team | Same pass-through pricing as Individual, plus shared workspace, threads, and centralized billing. | — |
| Enterprise | Contact sales | — |
## What it does
Amp is a frontier coding agent from Sourcegraph that runs as a CLI and editor extension. It uses multi-model routing across Claude Opus, GPT-5, and fast models, supports three task modes (smart, rush, deep), and connects to Sourcegraph's Librarian for cross-repo code search and context. Threads are saved and shareable.
## Who it's for

Engineering teams that already trust agentic coding (Cursor/Claude Code adopters) and want pass-through pricing without a per-seat subscription. Particularly strong for platform and SRE teams that value cross-repo context: Amp's integration with the Sourcegraph Librarian gives the agent visibility into private repos beyond the working directory.
## How platform engineers use it
Install the Amp CLI (`@sourcegraph/amp` on npm) or the editor extension, then configure GitHub access so the Librarian can index private repos. Issue tasks via the CLI, e.g. "explain how new doc versions are deployed" or "investigate the foo service for recent API changes." Amp picks the right model per task, runs tools (file edits, shell commands, web search), and produces a thread you can share with teammates. Enterprise workspaces add SSO, ZDR for LLM inputs, MCP registry allowlists, and per-user cost controls.
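A minimal sketch of the workflow above. The npm package name comes from the text; the `amp` command name and the prompt-as-argument invocation are assumptions about the CLI's interface, so check the CLI's own help output for the actual syntax:

```shell
# Install the Amp CLI globally (package name as stated above)
npm install -g @sourcegraph/amp

# Issue a task directly from the terminal; Amp routes it to an
# appropriate model and records the run as a shareable thread
# (invocation form is an assumption)
amp "investigate the foo service for recent API changes"
```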
## Strengths

- smart/rush/deep modes; you control cost-vs-quality trade-offs per task

## Limitations
## AI maturity

Genuinely AI-native and unusually opinionated. Sourcegraph dogfoods Amp internally and ships changes weekly; the product is on the bleeding edge of model capabilities and tool-use patterns. The decision to charge pure pass-through cost rather than per-seat is a deliberate bet that frontier model usage will grow fast enough to make subscription pricing irrelevant.