Contract shock therapy: the way to API-first documentation bliss

Learn how to build a dedicated API documentation repository that becomes your team’s single source of truth, enabling true contract-first development. We’ll focus on the frontend tech stack approach and demonstrate exactly how I set up a contract-first environment.

TL;DR: Hell yeah! We’re going to write entire Swagger schemas by hand! Cool, right? …Huh? Guys? Where are you going? Come back! I promise it will be fun!

In the previous episode of our API development saga, we dissected the difference between code-first and contract-first approaches. We discussed why your current API development might feel like herding cats and how a contract-first approach can turn that chaos into coordinated parallel development.

But, of course, simply knowing contract-first is better doesn’t automatically make it happen.

You still need to build the actual infrastructure. You need a place where contracts live, breathe, and evolve.

Contract documentation can take many forms: from simple Google Docs with request/response examples to formal OpenAPI specs and GraphQL schemas.

As mentioned, today we’re focusing on the frontend tech stack approach: tooling that will generate OpenAPI specs, Swagger UI documentation, and so on. I’ll show you exactly how I set up a contract-first documentation environment.

The complete working example is available at github.com/mikhin/openapi-modular-docs: feel free to clone it, explore the structure, and adapt it for your own projects! You can also check out the live deployed documentation at martian-hotel-booking-api.vercel.app.

Handwritten specs in a separate repository? Why?

Before we dive into the code, let’s talk about why this setup matters.

Reason one: Essentially, when API documentation lives inside your backend repository, it becomes backend documentation. Backend developers control it. Backend assumptions drive it. Frontend needs get ignored until integration time.

Reason two: Auto-generation amplifies this problem. When you generate docs from code, you get backend documentation dressed up as API specs. The structure follows your database schema. Field names match internal models. Response shapes optimize for server convenience, not client needs.

But here’s where handwritten specs make a difference:

  • Specs get written before implementation, meaning you end up with a contract-first workflow.
  • Both teams have shared ownership, equal access, and responsibility.
  • You get a single source of truth: one place where the API contract lives.
  • Independent evolution: documentation changes don’t require backend deployments.

Every field exists for a reason. You can’t accidentally expose internal implementation details because you have to consciously choose what goes in the contract.

In other words, this kind of manual approach forces intentional design.

Think of it like this: you wouldn’t auto-generate your user interface from database tables, and the same principle applies to API design.

The payoff?

When both teams build in accordance with a human-designed contract, integration actually works.

And critically, this extra upfront effort eliminates weeks of integration chaos later.

Still think separate repos and manual specs are overkill? Go re-read those 3 AM debugging stories and three-week endpoint waits from the previous article. Then tell me if writing by hand seems like the real problem here. Now, let’s dig in.

Table of contents:

  1. Building the foundation
  2. Writing the contract: source files that scale
  3. The auto-refresh system: building a YAML import plugin

Building the foundation

Since we’re describing the frontend way of doing this, we’re going to build this on Vite, React, and TypeScript. First, let’s create a new Vite project.

$ pnpm create vite api-docs --template react-ts
$ cd api-docs
$ pnpm install

Now comes the fun part: integrating Swagger UI. If you want to get up and running quickly, we can use a React component that does most of the work:

$ pnpm add swagger-ui-react

Then, create a simple React root component to render the Swagger UI:

import SwaggerUI from "swagger-ui-react";
import "swagger-ui-react/swagger-ui.css";

export const App = () => <SwaggerUI url="/output.yml" />;

That’s it! This gives you a working Swagger UI that loads your API spec from /output.yml—our compiled OpenAPI file that we’ll build from modular sources. The swagger-ui-react package handles all the configuration and rendering automatically.
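If you started from the Vite react-ts template, src/main.tsx already mounts a root component; just point it at our App. A minimal sketch, assuming the named export shown above:

import { StrictMode } from "react";
import { createRoot } from "react-dom/client";
import { App } from "./App";

// Mount the Swagger UI app into the #root element from index.html
createRoot(document.getElementById("root")!).render(
  <StrictMode>
    <App />
  </StrictMode>,
);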

And now here’s the next question: how do you actually write these API specs without losing your mind?

Writing the contract: source files that scale

Now that we have the Vite foundation, let’s create the actual source files for our API specification.

You could write these in JSON, YAML, or any format that can represent OpenAPI schemas. That said, I’m choosing YAML for this guide because I find it more comfortable for writing and editing: fewer quotes, cleaner nesting, easier to read.

Now, the entire setup we’ll build is based on YAML, but the principles work with any format.

We’ll cover the file structure and organization first, then connect it to the auto-refresh system that watches and compiles everything.

For our example case, we’ll use a booking API for a hotel on Mars. With hotels, bookings, and authentication, this is enough complexity to show real patterns without drowning in business logic.

You can see the complete implementation at github.com/mikhin/openapi-modular-docs and a live preview at martian-hotel-booking-api.vercel.app.

Starting with the basics: your first OpenAPI building blocks

Let’s start simple. Every API spec has a few core ingredients that work together like building blocks. We’ll begin by writing everything in one file to understand the concepts, then see why splitting things up makes sense.

Models are your data shapes. Think of them as defining what your resources look like. Here’s what a hotel looks like when someone wants to create or update one, with just the essential business fields and no system-generated stuff yet:

# Everything starts in input.yml
components:
  schemas:
    HotelUpsert:
      type: object
      required:
        - name
        - location
        - status
      properties:
        name:
          type: string
          minLength: 1
        location:
          type: string
          description: Where on Mars this hotel sits
        status:
          $ref: "#/components/schemas/HotelStatus"

Once you have your base model, you extend it to get the complete picture with system fields:

Hotel:
  allOf:
    - $ref: "#/components/schemas/HotelUpsert"
    - type: object
      required:
        - id
        - createdAt
      properties:
        id:
          type: string
          format: uuid
        createdAt:
          type: string
          format: date-time

This approach shows up everywhere: define your business fields once in the “Upsert” version, then extend with IDs and timestamps. No duplication, easy maintenance.

Notice that status field? Instead of scattering magic strings throughout your API, give your values structure with enums:

HotelStatus:
  type: string
  enum:
    - active
    - maintenance
    - closed

Everyone now uses the same status values. Change the options in one place, they update everywhere.

Building your endpoints

With your data shapes defined, we’ll look at how they’re actually used. Let’s set up JWT token authentication as an example:

# In your main file
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT

Then in your endpoints:

paths:
  /hotels:
    get:
      security:
        - bearerAuth: []

Most endpoints need authentication. The few that don’t (like login) use security: [] to explicitly allow public access.
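For example, a public login endpoint might look like this (a sketch: the exact LoginRequest shape and response are my assumptions, but the repo’s auth/ folder follows the same pattern):

/auth/login:
  post:
    summary: Log in and receive a JWT
    security: []  # explicitly public: no token required
    requestBody:
      required: true
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/LoginRequest"
    responses:
      "200":
        description: Authentication succeeded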

Every resource typically needs two types of endpoints. Collection endpoints handle lists and creation:

paths:
  /hotels:
    get:
      summary: Get all hotels
      security:
        - bearerAuth: []
      parameters:
        - name: page
          in: query
          required: true
          schema:
            type: integer
            minimum: 1
      responses:
        "200":
          description: A paginated list of hotels
          content:
            application/json:
              schema:
                allOf:
                  - $ref: "#/components/schemas/PaginatedResponse"
                  - type: object
                    properties:
                      items:
                        type: array
                        items:
                          $ref: "#/components/schemas/Hotel"

    post:
      summary: Create a new hotel
      security:
        - bearerAuth: []
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/HotelUpsert"

Item endpoints handle specific resources:

/hotels/{id}:
  get:
    summary: Get a hotel by ID
    security:
      - bearerAuth: []
    parameters:
      - name: id
        in: path
        required: true
        schema:
          type: string
          format: uuid

  put:
    summary: Update a hotel
    security:
      - bearerAuth: []
    requestBody:
      required: true
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/HotelUpsert"

  delete:
    summary: Delete a hotel
    security:
      - bearerAuth: []
    responses:
      "204":
        description: Hotel deleted

The flow is consistent: GET for reading, POST for creating, PUT for updating, DELETE for removing.
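One detail these trimmed fragments leave out: OpenAPI requires a responses object on every operation. As a sketch of how the POST above might answer (the exact shape is my assumption; the complete spec lives in the example repo), creation returns the full Hotel, system fields included:

responses:
  "201":
    description: Hotel created
    content:
      application/json:
        schema:
          $ref: "#/components/schemas/Hotel"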

Speaking of lists, they all need pagination. Here’s a reusable base:

PaginatedResponse:
  type: object
  required:
    - totalItems
    - totalPages
    - currentPage
    - pageSize
  properties:
    totalItems:
      type: integer
    totalPages:
      type: integer
    currentPage:
      type: integer
    pageSize:
      type: integer

Every list response extends this base model, giving you consistent pagination across your entire API.
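To make that concrete, here’s roughly what one page of hotels would look like on the wire (illustrative values only):

{
  "totalItems": 42,
  "totalPages": 3,
  "currentPage": 1,
  "pageSize": 20,
  "items": [
    {
      "id": "0b9df076-4c4f-4efb-b0b2-9a8b6d1a2f3c",
      "createdAt": "2025-03-14T09:30:00Z",
      "name": "Olympus Mons Grand",
      "location": "Olympus Mons, Tharsis",
      "status": "active"
    }
  ]
}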

With these building blocks in place, you have everything needed for a working API spec. But as your API grows, keeping everything in one file becomes painful. Let’s fix that.

When one file becomes everyone’s nightmare

These patterns work great when everything lives in one file. But picture this: you start with a single OpenAPI file. Clean, simple, everything in one place. Then fast-forward six months, and you’re looking at 2,000 lines of YAML. The kind of stuff that makes grown developers cry.

Want to add a simple status enum? Time to play “find the needle in the haystack” as you scroll through dozens of unrelated models. Two team members trying to work on different features? Merge conflict city. That one file has become the bottleneck everyone dreads touching.

How to break free from this? Organize by what actually makes sense to humans:

specs/
├── input.yml              # Your main index
├── hotels/                # Everything about hotels lives here
│   ├── enum.HotelStatus.yml
│   ├── model.Hotel.yml
│   ├── model.HotelUpsert.yml
│   └── path.hotels.yml
├── bookings/              # All booking stuff together
│   ├── enum.BookingStatus.yml
│   ├── model.Booking.yml
│   └── path.bookings.yml
└── auth/                  # Authentication in its own corner
    ├── model.LoginRequest.yml
    └── path.auth-login.yml

Now, Sarah can work on the hotels domain while Mike tackles bookings; no stepping on each other’s toes. Need to change how hotel status works? Everything you need is in one folder.

The beauty is in the predictability.

But how can split files be combined back into one spec? We can use our main file as an index, with tools that support import syntax to combine everything into a single output:

components:
  schemas:
    Hotel: !!import/single hotels/model.Hotel.yml
    Booking: !!import/single bookings/model.Booking.yml

paths:
  /hotels: !!import/single hotels/path.hotels.yml
  /bookings: !!import/single bookings/path.bookings.yml

The !!import/single syntax tells the yaml-import library to load each file as a complete definition. This means you can organize by domain, creating a folder for new features, editing one folder for changes—and your API can grow from 10 endpoints to 500 without anyone losing their sanity.

Now you have all these modular YAML files, but they need to become one unified OpenAPI spec that Swagger UI can actually use. Here’s how we automate that process.

The auto-refresh system: building a YAML import plugin

Remember that /output.yml file we referenced earlier? Time to build the system that creates it. We’re going to write a Vite plugin that watches your YAML files, combines them into one spec, validates the result, and auto-refreshes your browser when changes happen.

First, create vite-yaml-import-plugin.ts. We’ll build it up piece by piece below.

The core concept

The plugin does four main things:

  1. Combines multiple YAML files into one OpenAPI spec
  2. Validates the result against OpenAPI schema standards
  3. Serves the compiled spec to your Swagger UI
  4. Watches for changes and auto-refreshes

Let’s break down the key parts:

Setting up the plugin structure
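
Before the interface, pull in the imports the plugin leans on throughout. The exact import shapes here are my assumption, inferred from how each library is used in the fragments below:

import fs from "node:fs";
import path from "node:path";
import type { Plugin } from "vite";
import { read } from "yaml-import";
import { dump, load } from "js-yaml";
import * as OpenapiSchemaValidator from "openapi-schema-validator";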

interface YamlPluginOptions {
  inputFile: string;      // Main entry point (like input.yml)
  outputFile: string;     // Where the compiled spec goes
}

export const yamlImportPlugin = (options: YamlPluginOptions): Plugin => {
  const { inputFile, outputFile } = options;
  return {
    name: "yaml-import-plugin",
    // Plugin hooks go here...
  };
};

This gives us flexibility. Point the plugin at any folder structure, specify your main file, and choose where the output goes.

The YAML processing heart

const processYaml = async (root: string): Promise<string> => {
  try {
    const absoluteInputPath = path.resolve(root, inputFile);

    // This is where the magic happens - yaml-import combines files
    const processedYaml = await read(absoluteInputPath, {
      extensions: [".yml", ".yaml"],
    });

    // Convert back to YAML string for serving
    const yamlString = dump(processedYaml, {
      noRefs: true,
      lineWidth: -1,
      quotingType: '"',
    });

    return yamlString;
  } catch (error) {
    console.error("Error processing YAML:", error);
    throw error;
  }
};

The yaml-import library does the heavy lifting here. It follows the !!import/single syntax in your YAML files and combines everything into one document. Think of it as a bundler for YAML files.
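To picture what that bundling does, here’s a sketch of one schema before and after compilation (exact output formatting depends on the dump options shown above):

# Before: input.yml points at the file
components:
  schemas:
    Hotel: !!import/single hotels/model.Hotel.yml

# After: output.yml has the file's contents inlined
components:
  schemas:
    Hotel:
      allOf:
        - $ref: "#/components/schemas/HotelUpsert"
        # ...plus the system fields we defined earlier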

OpenAPI validation

Combining files is only half the battle. You also need to catch schema errors before they break your Swagger UI. There’s nothing worse than making a small change and discovering your entire documentation is broken because you missed a required field or used the wrong data type.

We’ll use the openapi-schema-validator library to validate our compiled spec against the official OpenAPI 3.0 schema. This catches structural issues, missing required fields, and incorrect data types before they reach your browser.

// Validate the compiled spec
const validator = new OpenapiSchemaValidator.default({ version: 3 });
const data = load(yamlString);
const result = validator.validate(data);

if (result.errors.length > 0) {
  console.log("❌ OpenAPI Schema is invalid");
  result.errors.forEach((error) => {
    console.log(`Path: ${error.instancePath || "/"} (${error.message})`);
  });
  process.exit(1);
} else {
  console.log("✅ OpenAPI Schema is valid");
}

File watching and auto-refresh

We want to edit a YAML file, save it, and immediately see the changes in our browser …without manually refreshing anything. So, in the watch function, we point at our specs directory and tell it to watch recursively for any changes:

fsWatcher = fs.watch(watchDir, { recursive: true }, async (_, filename) => {
  if (filename) {
    const ext = path.extname(filename);
    if (ext === ".yml" || ext === ".yaml") {
      await processYaml(root);
      await writeYamlToFile(root);
      // Tell Vite to reload the browser
      server.ws.send({ type: "full-reload" });
    }
  }
});

Serving the compiled spec

Remember that our Swagger UI is looking for /output.yml? We need to intercept that request and serve our compiled YAML content. Instead of always writing to disk, we can serve directly from memory during development, which is faster and cleaner.

Vite lets us add custom middleware to handle specific routes. We check if the incoming request matches our output file path, and if so, serve the compiled YAML content:

server.middlewares.use(async (req, res, next) => {
  const requestPath = decodeURIComponent(req.url?.split("?")[0] ?? "");
  const cleanOutputPath = outputFile.startsWith("/") ? outputFile : `/${outputFile}`;

  if (requestPath === cleanOutputPath) {
    try {
      const yamlContent = lastProcessedYaml || (await processYaml(root));
      res.statusCode = 200;
      res.setHeader("Content-Type", "text/yaml");
      res.setHeader("Cache-Control", "no-cache");
      return res.end(yamlContent);
    } catch (error) {
      res.statusCode = 500;
      return res.end(JSON.stringify({ error: "Failed to process YAML" }));
    }
  }
  return next();
});

This intercepts requests to /output.yml and serves the compiled content directly from memory. The Cache-Control: no-cache header ensures the browser always fetches the latest version. If anything goes wrong during processing, we return a 500 error instead of serving broken content.

Using the plugin

Now, in your vite.config.ts:

import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import { yamlImportPlugin } from "./vite-yaml-import-plugin";

export default defineConfig({
  plugins: [
    react(),
    yamlImportPlugin({
      inputFile: "input.yml",
      outputFile: "/output.yml",
    }),
  ],
});

This setup compiles everything starting from your input.yml entry file and serves the result at /output.yml, exactly what our Swagger UI expects.
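From here, the standard Vite command is all you need; the plugin compiles the spec on startup and recompiles on every save:

$ pnpm dev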

The result: a development environment where YAML editing feels as smooth as code editing, with validation feedback and instant visual updates.

Putting it all together

You now have a complete contract-first documentation system: modular YAML files that scale with your team, automatic compilation and validation, and instant browser refresh when you make changes.

The key insight is treating API contracts as executable code, not static documentation. When your contract drives both frontend mocks and backend implementation, integration becomes predictable instead of painful.

What you’ve built:

  • Source files organized by business domain
  • Automated validation that catches errors immediately
  • Development workflow where changing a field definition instantly updates your documentation
  • A foundation that works for both small APIs and enterprise systems

Frontend developers can build features using contract-generated mocks while the backend implements the same specification in parallel.
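How you get those mocks is up to you; as one example (my suggestion, not part of this setup), Stoplight’s Prism can serve a mock API straight from the compiled contract:

$ pnpm dlx @stoplight/prism-cli mock output.yml

Point it at wherever your compiled output.yml ends up.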

This system eliminates the “backend isn’t ready” bottleneck we discussed in the first article.

Get your hands dirty and get to it!

Ready to try it yourself? Clone the complete working example at github.com/mikhin/openapi-modular-docs and adapt it for your project.

Start with the Mars hotel structure, then replace it with your own business domains. You can see the live documentation at martian-hotel-booking-api.vercel.app.

Remember this: the hard part isn’t the tooling. It’s getting your team to write contracts before code. But once you’ve experienced development without integration chaos, there’s no going back.

So, what’s next? This infrastructure is just the beginning!

In upcoming articles, we’ll cover how to integrate these contracts into your actual development workflow—generating TypeScript types and API clients for React SPAs, and setting up automatic validation and route generation for Node.js Fastify backends.
