The early validation lesson: designing Quotient’s prompt sandbox


We work with founders who embrace ambition. This was certainly the case when Quotient’s team approached us with their vision: a prompt playground where AI-native developers could write production prompts, test with variables, evaluate outputs, and manage datasets. We knew we were working with builders who were looking into the future. But how to mitigate the risks of time travel?

Indeed, what happens when brilliant, driven founders move fast into uncharted territory? This is the story of what we discovered together about the delicate balance between vision and validation. And the key to this whole story is early user feedback.

Racing towards an ambitious vision

Quotient’s founders had identified a real gap in the AI development workflow. As AI applications become more sophisticated, developers need better prompt engineering tools: something robust, yet more accessible than building everything from scratch.

Working together, we kicked off a focused, two-week design sprint. We built a clickable prototype with variables, output handling, and the core workflow logic in place. The energy was infectious. After all, here was a team tackling a problem that would only become more important as AI adoption accelerated.

Our Figjam collaboration board: early flow sketches, planning artifacts

The prototype looked sharp. The user flow made sense. Most importantly, it embodied Quotient’s vision of bringing software engineering discipline to prompt development. We were truly excited about what we’d built together.


Yet, as we moved from prototype to development planning, we got caught up in the excitement of the vision—and who wouldn’t? The founders’ conviction was compelling. They understood the problem space better than anyone we’d worked with.

We began expanding the concept: standalone dataset features, import/export functionality, advanced playground UI improvements. Each addition felt logical, building toward a comprehensive platform that would serve sophisticated AI development teams.

But in our enthusiasm to support their vision, we had collectively glossed over a crucial step: putting the prototype in front of the people who would actually use it day-to-day.

Reality hits

Eventually, we started sharing the prototype with people in the founders’ networks: academics, enterprise developers, individual consultants. We got lots of thoughtful feedback, but honestly, we kept waiting for someone to say “I need this now!” That moment never came.

Separately from this process, our designer conducted interviews with developers who were newer to prompt engineering. Surprisingly, this group responded positively to the playground concept, although they wanted different features: more flexibility, better onboarding, simpler workflows.

This created an interesting tension: the sophisticated features that excited experienced AI developers weren’t resonating with early adopters, while the early adopters wanted accessibility improvements that might not serve advanced users.

Anonymized Airtable table with user interview data

Shipping into uncertainty

We were all convinced we knew where the market was heading. Looking back, that conviction might have been our blind spot. So we proceeded to build an MVP that included everything we’d envisioned: a playground with prompt runs, datasets, prompt templates, history, settings, and authentication.

Technically speaking, everything worked absolutely beautifully. When it went public, users discovered it through events and word-of-mouth. They explored the features, clicked around… but didn’t stick around.

The market timing challenge became clear: we had built a tool for tomorrow’s AI development teams, but most developers were still figuring out the basics of prompt engineering today.

Eventually, Quotient made the wise decision to pivot. They recognized the market signal and shifted focus, temporarily shelving the playground and dataset features to pursue a direction with clearer product-market fit.

What this collaboration taught us

Working with Quotient’s visionary team gave us a lot to think about, especially around when and how to bring users into the process.

Early validation is everything. If we had tested with target users right after our sprint, we might have simplified the feature set, or decided to wait until the market matured.

Prototypes aren’t product direction—they’re learning tools. We treated our prototype as a step toward delivery rather than as a chance to test our assumptions. The prototype successfully captured the vision, but we should have used it to validate whether that vision matched what developers actually needed right now.

Advisor feedback isn’t user validation. The smart people in our networks gave thoughtful input that helped refine the concept, but they weren’t the developers who would use this tool daily. We needed to distinguish between “this makes sense” and “I would pay for this.”

Market timing is everything. Even the right solution for the right users might not find traction if the ecosystem isn’t ready. Quotient was building for AI-native development teams when most companies were still figuring out their first AI integrations.

The founders who impress us most aren’t the ones who never make risky bets—they’re the ones who test their assumptions quickly, learn from market signals, and adapt without losing sight of their vision. Quotient’s team demonstrated exactly this kind of disciplined innovation.

However, it’s a high-risk path. Target user feedback is one of the tools product teams rely on most to mitigate the risks involved in the development process. Now, when we work on early-stage products, we push for target user conversations as early as feasibly possible.


Irina Nazarova, CEO at Evil Martians

Hire Evil Martians to validate early, test products that users can’t live without, and shape the future with confidence.