Your API has a new kind of user.
Agent Experience (AX) is the quality of interaction between AI agents and your API platform.
It covers how agents discover your API, how they understand it, how they execute against it and recover from failures, and how humans maintain oversight of what agents do. These five dimensions give API platform teams a shared language for evaluating and improving how well their APIs work with AI agents.
The five dimensions
Distribution
How agents discover, evaluate, and select your API. Covers registries, content strategy, search optimization, and machine-readable discovery surfaces.
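A machine-readable discovery surface can be as simple as a small document an agent fetches before deciding whether your API fits its task. The sketch below is illustrative: the field names, the "acme-payments" API, and the URLs are assumptions, not a published standard.

```python
import json

def build_discovery_document() -> str:
    """Build a minimal discovery document an agent could fetch and evaluate.

    Every field name here is hypothetical; the point is that the summary,
    capabilities, and auth requirements are structured, not buried in prose.
    """
    doc = {
        "name": "acme-payments",                        # hypothetical API
        "description": "Create and manage payments.",   # short, agent-scannable
        "openapi_url": "https://api.example.com/openapi.json",
        "capabilities": ["payments.create", "payments.refund"],
        "auth": {"type": "oauth2", "scopes": ["payments:write"]},
    }
    return json.dumps(doc, indent=2)

print(build_discovery_document())
```

An agent that can read this in one request can evaluate and select your API without crawling human-oriented marketing pages.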
Comprehension
How agents learn what your API does and use it correctly. Covers specifications, tool definitions, conventions, and task-oriented playbooks.
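Tool definitions are one concrete comprehension surface. The sketch below uses the JSON-schema parameter style common to most agent frameworks; the `create_invoice` operation and its fields are invented for illustration. A precise schema lets an agent (or a validator in front of it) catch a malformed call before it hits your API.

```python
# Illustrative tool definition: the description and constraints tell the
# agent what the operation does and what a valid call looks like.
CREATE_INVOICE_TOOL = {
    "name": "create_invoice",  # hypothetical operation
    "description": "Create an invoice for a customer. Amounts are in cents.",
    "parameters": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string", "description": "ID of an existing customer"},
            "amount_cents": {"type": "integer", "minimum": 1},
            "currency": {"type": "string", "enum": ["usd", "eur"]},
        },
        "required": ["customer_id", "amount_cents", "currency"],
    },
}

def missing_required(tool: dict, args: dict) -> list[str]:
    """Report required parameters the agent failed to supply."""
    required = tool["parameters"]["required"]
    return [name for name in required if name not in args]

print(missing_required(CREATE_INVOICE_TOOL, {"customer_id": "cus_123"}))
# → ['amount_cents', 'currency']
```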
Implementation
The execution surfaces and patterns that let agents build and run against your API. Covers SDKs, MCP servers, composability, and interoperability.
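One way to think about execution surfaces: the same operation should be reachable from an SDK and discoverable by an agent runtime, from a single source of truth. The sketch below is a generic registry pattern, not the MCP wire protocol; the `ping` operation and registry shape are assumptions for illustration.

```python
# One registry serving two consumers: SDK callers invoke the function
# directly, while an MCP-style server lists the same operations to agents.
REGISTRY: dict[str, dict] = {}

def operation(name: str, description: str):
    """Register a function so human and agent surfaces stay in sync."""
    def wrap(fn):
        REGISTRY[name] = {"description": description, "fn": fn}
        return fn
    return wrap

@operation("ping", "Liveness check for the hypothetical API.")
def ping() -> str:
    return "pong"

def list_tools() -> list[dict]:
    """The listing an agent-facing server would expose."""
    return [{"name": n, "description": t["description"]} for n, t in REGISTRY.items()]

print(ping(), list_tools())
```

The design choice here is interoperability by construction: adding an operation once makes it available on every surface.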
Autonomy
The signals and design decisions that let agents operate reliably without human intervention. Covers error recovery, state introspection, and context efficiency.
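Error recovery is the most concrete of these signals. The sketch below shows one common pattern, retrying transient failures with exponential backoff while honoring a server-provided Retry-After hint; the response shape is a simplified assumption, not a specific client library's API.

```python
import time

def call_with_retries(call, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry transient failures (HTTP 429 / 5xx), preferring server hints.

    `call` stands in for any API request returning a dict with a "status"
    key and an optional "retry_after" hint in seconds.
    """
    for attempt in range(1, max_attempts + 1):
        response = call()
        transient = response["status"] == 429 or response["status"] >= 500
        if not transient or attempt == max_attempts:
            return response
        # Prefer the server's explicit hint; otherwise back off exponentially.
        sleep(response.get("retry_after", base_delay * 2 ** (attempt - 1)))

# Simulated endpoint: rate-limited, then erroring, then succeeding.
responses = iter([{"status": 429, "retry_after": 0}, {"status": 500}, {"status": 200}])
print(call_with_retries(lambda: next(responses), sleep=lambda s: None))
# → {'status': 200}
```

An API that emits accurate status codes and Retry-After hints lets an agent run this loop unattended instead of escalating every hiccup to a human.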
Governance
The mechanisms that let humans control, observe, and safely evolve agent behavior. Covers scoped auth, approval workflows, traceability, and change management.
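Scoped auth and approval workflows can combine into a single gate in front of every agent tool call. The sketch below is a minimal decision function; the scope names and the sensitive-scope list are illustrative, and a real system would verify a signed token and log each decision for traceability.

```python
# Hypothetical set of scopes whose use should route through a human.
SENSITIVE_SCOPES = {"payments:write", "customers:delete"}

def decide(token_scopes: set[str], required_scope: str) -> str:
    """Deny unscoped calls; route sensitive ones through human approval."""
    if required_scope not in token_scopes:
        return "deny"
    if required_scope in SENSITIVE_SCOPES:
        return "needs_approval"  # hand off to an approval workflow
    return "allow"

print(decide({"payments:read"}, "payments:read"))    # allow
print(decide({"payments:read"}, "payments:write"))   # deny
print(decide({"payments:write"}, "payments:write"))  # needs_approval
```

The three-way outcome matters: a plain allow/deny cannot express "an agent may request this, but a human must sign off."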
A maturity model, not a checklist
These dimensions are not requirements to satisfy. They are lenses for evaluating where your API platform stands today and where to invest next. Most APIs already do well in one or two dimensions without conscious effort.
The dimensions are peers, not a hierarchy. There is no prescribed order. An API with strong governance but weak distribution is not worse than one with the reverse; it simply has different gaps.
The goal is enough capability across the system that agents can reliably fall into the pit of success.
Frequently asked questions
Is AX replacing DX?
No. Agent experience builds on top of developer experience. Good DX remains essential; AX extends it to cover scenarios where the consumer is an AI agent rather than (or in addition to) a human developer. Many AX improvements, like better error messages and clearer schemas, also improve DX.
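The error-message point is easy to make concrete. Below, the same failure is expressed two ways; the field names in the structured version are assumptions in the spirit of machine-readable error formats, not a specific standard.

```python
# An opaque error forces the agent (or developer) to guess.
opaque = {"error": "Bad request"}

# A structured error names the bad field and the rule it broke.
actionable = {
    "error": {
        "code": "invalid_currency",
        "message": "currency must be one of: usd, eur",
        "param": "currency",
    }
}

def can_self_correct(error_body: dict) -> bool:
    """An agent can retry without guessing only if the error names the field."""
    detail = error_body.get("error")
    return isinstance(detail, dict) and "param" in detail and "code" in detail

print(can_self_correct(opaque), can_self_correct(actionable))
# → False True
```

The same structure that lets an agent self-correct also saves a human developer a round trip to the docs.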
Do I need to address all five dimensions?
These dimensions are not a checklist. They are a maturity model. Start wherever your biggest gap is. Most APIs already do well in one or two dimensions without trying. The framework helps you identify where the leverage is for your specific situation.
Is this only relevant for APIs used by LLM-based agents?
The principles apply broadly to any automated consumer of your API, whether that is an LLM-based agent, a code generation tool, a workflow automation platform, or a traditional integration. The more automated the consumer, the more these dimensions matter.
How is this different from just having a good OpenAPI spec?
A good OpenAPI spec covers part of the Comprehension dimension, but AX goes much further. Distribution addresses how agents find your API in the first place. Implementation covers the execution surfaces you offer. Autonomy deals with error recovery and operational resilience. Governance addresses human oversight of agent behavior. A spec is necessary but not sufficient.
Where did this framework come from?
This framework emerged from working with hundreds of API teams at Stainless and observing what separates APIs that work well with AI agents from those that do not. It synthesizes patterns we have seen across API platforms of all sizes.