Agent Experience
Skills
- LLMs
- TypeScript
- React
- WebSocket
- Ruby on Rails
Build Agent Experience for your product: discovery optimization, agent-friendly APIs and CLIs, and security guardrails against abuse.
80% of databases on Neon are created by agents, not humans. 30% of new Supabase signups come from AI builder platforms. 10% of Vercel signups arrive via ChatGPT. The next wave of developer tool growth is driven by AI agents that discover, provision, and operate infrastructure on behalf of developers.
This isn’t about building agents. It’s about building for agents. Agent Experience (AX)—a term coined by Matt Biilmann at Netlify—extends Developer Experience into a world where agents increasingly act on developers’ behalf. We help devtools, infrastructure, and security products become first-class citizens in agentic workflows.
Discovery optimization
Agents don’t watch demo videos, respond to ads, or care about your brand. They search the web, read docs, scan repos—and pick whichever tool has the clearest technical content. If your docs are rendered client-side, gated behind auth, or buried in PDFs, you’re invisible. Mintlify reports that nearly half of docs traffic now comes from AI agents.
Discovery happens two ways. Your buyers—founders, engineering leads, developers—ask LLMs directly: “best WebSocket library for Rails,” “who can help us scale Kubernetes.” And agents acting on their behalf evaluate tools programmatically, parsing docs and benchmarks to make provisioning decisions. Both paths reward the same thing: clear, structured, machine-readable content.
We start with your ICP: who’s searching for tools like yours, what stacks do they use, and what will they ask? We research your target market—engineering blogs, job postings, open source activity, founder profiles—then align your content so LLMs surface your product for those queries. This means structured content formats (llms.txt, per-page .md endpoints, JSON-LD schema, ai.txt) but also the content strategy itself: tech stack mentions calibrated to your market, client stories with quantified outcomes, and descriptions that read naturally to humans but signal clearly to machines.
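The llms.txt format mentioned above is a plain markdown file at your site root: an H1 title, a blockquote summary, and H2 sections of annotated links. A minimal sketch, with hypothetical product name and URLs:

```markdown
# AcmeCable
> Real-time WebSocket infrastructure for Rails applications.

## Docs
- [Getting started](https://acmecable.example/docs/getting-started.md): install, first channel, deploy
- [API reference](https://acmecable.example/docs/api.md): server and client APIs

## Client stories
- [Scaling to 1M connections](https://acmecable.example/cases/scale.md): architecture and outcomes
```

Pointing each link at a per-page .md endpoint gives agents clean markdown instead of rendered HTML.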
We built this methodology for evilmartians.com. We researched 90+ target companies across AI agents, developer tools, infrastructure, and cybersecurity. We aligned every page—services, client stories, blog posts—to the queries their buyers ask. Our llms.txt serves structured content that LLMs consume directly. Every page has a .md version. We wrote the playbook by shipping it ourselves.
Agent-friendly CLIs and APIs
If your product requires a browser to sign up, create resources, or check logs, agents will use a competitor that doesn’t. Supabase, Railway, Vercel, Stripe—the companies winning agent-driven growth all share one pattern: programmatic access from signup to production. We build that layer.
- Programmatic signup and provisioning — account creation, API key generation, resource provisioning, all via CLI. No browser tab required
- CLI-complete workflows — we audit existing CLIs for gaps where agents hit a wall and fall back to “please visit the dashboard”
- Structured output and feedback loops — --json flags, machine-readable error codes, webhooks and health checks that let agents verify and self-correct
- MCP server integrations — expose your product’s capabilities directly to agent workflows so they can invoke your tool without leaving the editor
We proved this on our own projects. AnyCable+ CLI, built with Terminalwire, lets users sign up, create AnyCable instances, manage credentials, and check status—all from the terminal.
Security guardrails
Everything that makes a product agent-friendly is also what bad actors exploit at machine speed. Heroku, GitLab, Travis CI, and Fly.io all killed or restricted free tiers after automated abuse overwhelmed them. Railway spent a third to half of all engineering cycles fighting fraud. Companies that skip fraud prevention end up fighting it reactively, degrading their own product in the process. We build the guardrails that prevent this.
- Agent-aware authentication — token-based auth where every agent action is traceable to a human identity, without browser redirects or CAPTCHAs
- Least-privilege authorization — agent permissions scoped to the minimum required for each operation, enforced as code
- Abuse detection and blocking — WAF configuration, rate limiting, and bot detection tuned for agent traffic patterns
- Free tier verification — identity verification before granting compute access, keeping the acquisition funnel open while filtering out abuse
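Rate limiting for agent traffic usually starts with a per-identity token bucket: steady traffic passes, machine-speed bursts get throttled. A minimal sketch — capacity and refill rate are illustrative, not a recommended production config:

```typescript
// Minimal token-bucket rate limiter, one instance per authenticated identity.
// Parameters are illustrative; real limits are tuned per endpoint and tier.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private refillPerSec: number, now = 0) {
    this.tokens = capacity; // start full
    this.last = now;
  }

  // Returns true if the request is allowed at time `now` (seconds).
  allow(now: number): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(this.capacity, this.tokens + (now - this.last) * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Because every agent action is traceable to a human identity, the bucket key can be the human, not the agent — so one user spawning a hundred agents still gets one quota.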
AI Harnesses
AI coding tools produce generic code by default. An AI harness encodes your architecture, conventions, and constraints so agent output matches what your team would actually write. Here’s what that looks like in practice.
- Layered Rails skills — architecture-aware rules that enforce service objects, query objects, and policy patterns
- Inertia Rails skills — integration-specific constraints for Inertia.js and Rails
- SAST rules tuned for LLM patterns — catching the mistakes agents make most: SQL injection via string interpolation, path traversal, SSRF, hardcoded secrets
- CI guardrails with TestProf — agents generate tests prolifically. Without constraints, they bloat your suite and CI time. TestProf keeps it fast
- Dev Containers for isolated execution — agents build, test, and iterate in sandboxed environments, not on your machine or in production
- CLAUDE.md and AGENTS.md rules — project-level instructions that shape every AI interaction
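A project-level instructions file is just markdown the agent reads before acting. A sketch of what such rules can look like — the specific rules here are illustrative, not a template we prescribe:

```markdown
# AGENTS.md (illustrative sketch)

## Architecture
- Business logic lives in service objects; controllers stay thin.
- Database queries go through query objects; no raw SQL in controllers.

## Security
- Never interpolate user input into SQL strings; use parameterized queries.
- Secrets come from the environment, never from committed files.

## Testing
- New tests must not add fixtures when a factory exists.
- Run the affected spec files before declaring a task done.
```

The same constraints that keep human contributions consistent keep agent output consistent — the difference is that agents actually read the file every time.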
The harness approach works because it meets agents where they are: define constraints upfront, let the agent operate within them. At the SF Bay Area Ruby Meetup, Vladimir Dementyev showed that an agent with a proper harness changed 33 files with zero unprotected actions. Without one, it changed 52 files and left 7 unprotected.
Agent observability
When agents act on behalf of your users, you need visibility into what they’re doing. Traditional APM wasn’t designed for multi-step agent workflows where one user action triggers dozens of LLM calls and branching decisions.
We built AgentPrism, an open source React component library for visualizing AI agent traces. Debug agent behavior without reading raw logs.
Before and after: raw logs versus AgentPrism’s structured trace view.
For real-time AI, we built AnyCable support for LLM streaming, handling token-by-token delivery over WebSockets at scale.
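On the client side, token-by-token delivery reduces to folding a stream of small messages into the final completion. A sketch with a hypothetical message shape — this is not AnyCable’s actual wire format:

```typescript
// Hypothetical streaming message shape; not AnyCable's actual protocol.
type StreamMsg = { type: "token"; text: string } | { type: "done" };

// Fold received messages into the assembled completion so far.
function assemble(messages: StreamMsg[]): { text: string; complete: boolean } {
  let text = "";
  let complete = false;
  for (const m of messages) {
    if (m.type === "token") text += m.text; // append each token as it arrives
    else complete = true; // "done" marks the end of the stream
  }
  return { text, complete };
}
```

The hard part at scale isn’t this fold — it’s delivering those messages in order to many thousands of concurrent WebSocket connections, which is what the AnyCable layer handles.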
