# From React to native web with nanotags: a migration that saved 100 KB

> Most marketing sites ship a SPA framework just to toggle a sidebar. Here's how we migrated an Astro site from React and Ark UI to native Web Components: 100 KB less JavaScript, no functionality lost, and a tiny library called nanotags that makes Custom Elements enjoyable to write.

- Date: 2026-05-06T00:00:00.000Z
- Authors: Pavel Grinchenko, Travis Turner
- Categories: Performance, Open Source
- URL: https://evilmartians.com/chronicles/from-react-to-native-web-with-nanotags-a-migration-that-saved-100kb

---

A while ago, I shipped a marketing site built with Astro, React, and Ark UI. It worked fine, the deadline was tight, and React was the right call at the time. But I always knew I'd come back to it. Months later, I did. The result: **100 KB less JavaScript and no functionality lost**. In this post, I'll share how I migrated a site from React to native Web Components, why that worked better than I expected, and how the patterns I used along the way grew into a small library called **nanotags**. This is also a story about choosing the right tool for the job, when the default tool isn't the one.

React has become the default for everything on the web. Marketing sites, documentation portals, landing pages. Need a website? `npx create-next-app`, add a UI library, deploy. 

But most marketing sites are fundamentally static. You have a few dropdowns, maybe a dialog, some scroll-driven animations. That's it.

So why are we shipping a full SPA framework just to toggle a sidebar?

---

*Shipping a React SPA when a static site would do? We help teams pick the right tool and cut JavaScript bundles without losing features.* [Contact Evil Martians](https://evilmartians.com/contact-us)

---

## The problem with defaults

The default React stack is built for SPAs, not marketing sites. You get a virtual DOM, a component model, and a massive ecosystem. In return, you ship a runtime that needs to be downloaded, parsed, and executed before any of your interactive UI can wake up.

For an app with complex state, forms, and real-time updates, that's a fair trade. For a marketing site whose most demanding interaction is a dialog? It's a tax you don't need to pay.

Take a typical marketing page: a hero with a CTA, a feature grid, a pricing table with a toggle, an FAQ accordion, and a contact form. Maybe four components need any JavaScript at all. Everything else is just HTML and CSS that React still has to mount and reconcile on every navigation.

Frameworks like Astro are already a better fit for these cases. Astro ships zero JavaScript by default, its runtime is minimal, and it doesn't force React on you. But even with Astro, the moment you need interactivity, it's tempting to reach for React components and pull the entire framework back in.

## Lighter framework, or no framework?

When we first built the site, the deadline was strict, the team knew React, and Ark UI gave us accessible primitives out of the box. The choice was easy. But we made it with a plan: Ark UI ships for multiple targets (React, Solid, Vue), so picking it was also a hedge. If we ever wanted to swap frameworks, the UI layer wouldn't have to be rewritten from scratch.

Read more about how Evil Martians created the first version of the site: [Super speed, super quality: lessons from the Aptos network site launch](https://evilmartians.com/chronicles/super-speed-super-quality-lessons-from-the-aptos-network-site-launch)

Months later, after launch, I came back to that plan. The site was running, the dust had settled, and the bundle size had been quietly bothering me since day one. Time to evaluate alternatives.

Svelte and SolidJS were on my list. Both are lighter than React, both have mature ecosystems, both would let us reuse most of our Ark UI markup. The migration would have been straightforward.

But then I asked a different question: why add another framework at all? **The web platform already has a built-in component model**. Custom Elements, standard DOM APIs, regular CSS. What if that's enough?

It was. 

The migration went faster than I expected because there wasn't much to migrate in the first place: a mobile menu, a theme toggle, a dropdown menu, a modal side panel, the main navigation, plus a few smaller pieces. Most of the site was already plain HTML just waiting to be hydrated. The bundle dropped by 100 KB, and the a11y story actually got better (more on that later).

## Why Web Components?

Skipping the framework layer altogether sounded radical at first. Svelte and SolidJS each have their own runtime, scheduler, and reactivity system. Lightweight, sure, but still a layer between your code and the browser. Custom Elements skip that layer entirely.

The argument is about more than bundle size. Frameworks evolve through major versions, breaking API changes, and ecosystem rewrites. Web Components are part of the platform. The Custom Elements API has been stable in every modern browser since 2018. It won't get a major version bump, it won't deprecate `connectedCallback` next year, and it won't ask you to rewrite your components for the next reactivity model.

For a static-first Astro site, this combination is especially clean. Astro renders HTML at build time. Web Components hydrate that HTML on the client with no framework runtime in the middle. The browser parses the markup, upgrades the custom elements, and the rest is just standard DOM, events, and CSS, all things every browser already does on its own.

## What raw Web Components feel like

That's the theory. In practice, writing Web Components by hand hurts. Here's a simple counter:

```js
class MyCounter extends HTMLElement {
  static observedAttributes = ["count"]

  connectedCallback() {
    this.display = this.querySelector("span")
    this.button = this.querySelector("button")
    this.button.addEventListener("click", this.#handleClick)
    this.#render() // sync the display with the current attribute value
  }

  disconnectedCallback() {
    this.button.removeEventListener("click", this.#handleClick)
  }

  attributeChangedCallback(name, old, val) {
    if (name === "count") this.#render()
  }

  #handleClick = () => {
    const next = Number(this.getAttribute("count")) + 1
    this.setAttribute("count", String(next))
  }

  #render() {
    if (!this.display) return
    this.display.textContent = this.getAttribute("count") ?? "0"
  }
}
customElements.define("my-counter", MyCounter)
```

That's a counter. Look at the `#render` method: it has to bail out if `this.display` is missing, because `attributeChangedCallback` can fire before `connectedCallback` runs. That's the kind of detail you only learn after debugging it once.

Now imagine a tabs component with keyboard navigation, roving focus, and ARIA state synced across panels. Every one of those concerns adds another callback, another cleanup, another set of corner cases.

The list of things you handle by hand: declaring `observedAttributes`, parsing every attribute value yourself, removing every listener in `disconnectedCallback`, guarding against early lifecycle calls, and getting zero help from TypeScript along the way. 

Forget to remove a listener? Memory leak. Typo in an attribute name? Silent failure. Read a missing attribute? `null` ends up in the DOM as the string `"null"`.
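
That last pitfall is easy to reproduce in plain JavaScript: `getAttribute` returns `null` for a missing attribute, and a naive string conversion happily renders it.

```javascript
// What happens when a missing attribute reaches the DOM unguarded:
// getAttribute returns null when the attribute is absent.
const raw = null // stands in for this.getAttribute("count") with no attribute set

// Naive conversion writes the literal text "null" into the page:
const naive = String(raw)
console.log(naive) // "null"

// The guard from the counter above: fall back before converting.
const safe = raw ?? "0"
console.log(safe) // "0"
```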

At this point you might ask: why not Lit? Lit is a solid choice if you want a full Web Components framework with template-based rendering, reactive properties through decorators, and a complete component model. But Lit and my library, nanotags, solve different problems. Lit renders templates from state. With Astro, the markup is already on the page when the component upgrades. I didn't need a template engine; I needed a thin layer that wires reactivity and behavior on top of static HTML—and I wanted that reactivity to come from nanostores, the tiny state library by [Andrey Sitnik](https://github.com/ai) that many Astro projects rely on.

After porting a handful of components from React, I had pasted the same scaffolding over and over. The patterns were obvious, the boilerplate was numbing, and I wanted my Web Components to feel as nice to write as a Solid component. So I extracted what I had into a library.

## nanotags: making Web Components enjoyable

[nanotags](https://nanotags.psdcoder.dev/) is a thin wrapper around Custom Elements. It doesn't replace anything; it just removes the tedious parts of writing Web Components. Here's a counter, hydrating the same markup:

```html
<my-counter count="0">
  <span>0</span>
  <button>+1</button>
</my-counter>
```

```js
import { define } from "nanotags"

define("my-counter")
  .withProps(p => ({ count: p.number(0) }))
  .withRefs(r => ({ display: r.one("span"), button: r.one("button") }))
  .setup(ctx => {
    ctx.on(ctx.refs.button, "click", () => {
      ctx.props.$count.set(ctx.props.$count.get() + 1)
    })

    ctx.effect(ctx.props.$count, val => {
      ctx.refs.display.textContent = String(val)
    })
  })
```

That's less code, but more importantly, a fundamentally different experience.

Every prop is a nanostores atom. Change the `count` attribute in HTML, and `ctx.props.$count` updates. Call `.set()` in JavaScript, and the attribute updates. Two-way sync between DOM and state, with no extra wiring.

Refs are typed and validated. `r.one("button")` queries the button, infers its type as `HTMLButtonElement`, and throws at component initialization if the element is missing. No null checks scattered through the code, no TypeScript narrowing dance.

Everything registered through `ctx`—event listeners, effects, bindings—gets cleaned up automatically when the component disconnects. You don't write `disconnectedCallback`, you don't track listeners to remove, and subscriptions can't leak.

## Declarative, type-safe props and refs

Props in nanotags are not just attributes you read with `getAttribute`. They are validated, coerced, and reactive—and you declare them up front, so TypeScript knows their shape without any annotations.

Four built-in validators cover most cases: `string`, `number`, `boolean`, and `oneOf` for enums. They parse raw attribute strings into typed values automatically:

```js
define("my-card")
  .withProps(p => ({
    title: p.string("Untitled"),
    count: p.number(0),
    open:  p.boolean(),
    size:  p.oneOf(["s", "m", "l"]),
  }))
  .setup(ctx => { /* ... */ })
```
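
To make the coercion concrete, here's an illustrative sketch of what a validator does with a raw attribute string. This is not nanotags' internal code, just the general shape: an absent attribute falls back to the default, numbers are parsed, and boolean props treat the attribute's presence as `true`.

```javascript
// Illustrative only: the general shape of attribute coercion,
// not nanotags' actual implementation.
function coerce(kind, raw, fallback) {
  if (raw === null) return fallback // attribute absent -> use the default
  switch (kind) {
    case "number":
      return Number(raw) // "42" -> 42
    case "boolean":
      return true // the attribute being present at all means true
    default:
      return raw // strings pass through unchanged
  }
}

coerce("number", "42", 0) // 42
coerce("boolean", null, false) // false: the attribute isn't present
coerce("string", null, "Untitled") // "Untitled"
```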

Each prop is exposed as a nanostores atom—a small reactive store you can read with `.get()`, update with `.set()`, or subscribe to via `ctx.effect()`. 

And because each validator infers its own type, `ctx.props.$count` is automatically typed as `WritableAtom<number>`—no manual annotations needed. For complex data that doesn't fit into a string attribute, there's `p.json`, which accepts any [Standard Schema](https://standardschema.dev/) compatible validator (Valibot, Zod, ArkType).
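
Standard Schema is just an interface, so you don't strictly need a library: any object with a `~standard.validate` function qualifies. As a sketch (the `positiveNumber` validator here is hypothetical, not from the nanotags docs):

```javascript
// A minimal hand-rolled validator following the Standard Schema interface:
// an object with a "~standard" property exposing version, vendor, and validate.
const positiveNumber = {
  "~standard": {
    version: 1,
    vendor: "example",
    validate(value) {
      return typeof value === "number" && value > 0
        ? { value } // success: return the (possibly transformed) value
        : { issues: [{ message: "expected a positive number" }] }
    },
  },
}

positiveNumber["~standard"].validate(42) // { value: 42 }
positiveNumber["~standard"].validate(-1) // { issues: [...] }
```

Anything shaped like this (Valibot, Zod, ArkType, or your own object) can be passed wherever a Standard Schema validator is expected.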

Refs follow the same declarative pattern. Instead of scattering `querySelector` calls across the component, you declare every element you need up front:

```js
define("x-dropdown")
  .withRefs(r => ({
    trigger: r.one("button"),           // HTMLButtonElement
    menu:    r.one("[role=menu]"),      // Element
    items:   r.many("[role=menuitem]"), // Element[]
  }))
  .setup(ctx => {
    ctx.on(ctx.refs.trigger, "click", () => { /* ... */ })
  })
```

Pass a tag name and you get the matching element type, automatically. The selector is also enforced at runtime: if any declared ref is missing from the DOM, you get an explicit error at component initialization, not a silent null three function calls later.

## Typed events and the builder chain

The same type-aware design runs through the rest of the API. Event listeners are typed against the correct event map for each target. Attach to a `button` and `ctx.on` gives you `HTMLElementEventMap`. Attach to `document` and you get `DocumentEventMap`. Full autocomplete on event names, fully typed event objects in the callback.

Custom events are typed end-to-end too. Augment `HTMLElementEventMap` once with the events your component emits, and every `ctx.emit` and every listener gets the right `detail` shape across the codebase—without casts or runtime checks.

For form controls and stateful elements, `ctx.bind` two-way binds an atom to a DOM property. Set `ctx.bind($name, ctx.refs.input)` and the input value stays in sync with the store: type in the input, the atom updates; call `$name.set(...)`, the input updates. No event handlers, no manual reads, just one line.
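
Under the hood, a two-way binding is just a subscription in each direction. Here's an illustrative sketch with a minimal atom and a plain object standing in for the input (this is not nanostores or nanotags code, only the mechanics):

```javascript
// Minimal stand-ins to show the two directions of a binding.
function atom(initial) {
  let value = initial
  const listeners = new Set()
  return {
    get: () => value,
    set(next) {
      value = next
      listeners.forEach(fn => fn(next))
    },
    subscribe(fn) {
      listeners.add(fn)
      return () => listeners.delete(fn) // unsubscribe, for cleanup
    },
  }
}

const $name = atom("")
const input = { value: "" } // stands in for an HTMLInputElement

// Direction 1: store -> input
$name.subscribe(v => { input.value = v })

// Direction 2: input -> store (a real binding would listen for "input" events)
function onUserTyped(text) {
  input.value = text
  $name.set(text)
}

$name.set("Ada") // input.value is now "Ada"
onUserTyped("Grace") // $name.get() is now "Grace"
```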

The fluent builder chain is what makes all of this possible. Each step (`.withProps`, `.withRefs`, `.withContexts`) carries its type information forward, and by the time you reach `.setup`, the `ctx` argument knows everything: which props exist, what types the refs are, what shape each event carries. You write a component and TypeScript follows along, instead of demanding you spell things out for it.

## Accessibility

DX wins are great, but they don't matter if accessibility regresses. In fact, this was my biggest concern before the migration. The React ecosystem has excellent component libraries like Ark UI, Radix, and React Aria that handle accessibility out of the box: keyboard navigation, focus management, ARIA attributes, screen reader support. You get all of that for free.

In the vanilla Web Components world, I couldn't find anything comparable. So I had to implement it myself, which turned out to be much more approachable than I expected.

The W3C [ARIA Authoring Practices Guide](https://www.w3.org/WAI/ARIA/apg/patterns/) is an incredibly detailed resource. It describes every common pattern (tabs, dialogs, menus, accordions) with expected keyboard behavior, ARIA roles, and states. 

And with modern LLM agents, turning a spec into working code is surprisingly fast. Point the agent at the APG pattern, describe your component's markup, and get a solid first draft you can refine and test.

In nanotags, I packaged these behaviors as **attachments**. An attachment is just a function that takes the setup context and wires up the behavior:

```js
function attachRovingFocus(ctx, container, items) {
  // set initial tabindex, track active index...

  ctx.on(container, "keydown", e => {
    // handle ArrowLeft / ArrowRight / Home / End,
    // update tabindex, move focus
  })

  ctx.on(container, "focusin", e => {
    // sync active index when focus comes from outside
  })
}
```

No classes or inheritance. Because everything goes through `ctx`, all listeners are automatically cleaned up when the component disconnects. Using it in a component is one line:

```js
define("x-tabs")
  .withRefs(r => ({
    tablist: r.one("[role=tablist]"),
    tabs:    r.many("[role=tab]"),
  }))
  .setup(ctx => {
    attachRovingFocus(ctx, ctx.refs.tablist, ctx.refs.tabs)
  })
```

Multiple attachments compose in a single component without any lifecycle juggling; they all share the same `ctx`, all listeners get cleaned up together, all behaviors stay independent. 

So accessibility didn't regress in the migration, and for our component set, the result was as good as what we had with React. Some interactions even improved, because attachments are written for one site, not for every possible usage.

And there is a long-term bonus to staying close to the platform. Some of these behaviors are slowly moving into the browser itself—the [scoped focusgroup](https://open-ui.org/components/scoped-focusgroup.explainer/) proposal, for example, would handle roving focus declaratively through an HTML attribute, with no JavaScript at all. When that ships, replacing an attachment with a native behavior is a small, local edit in your own codebase. 

The same migration is possible inside a component library too, of course, but you have to wait for the maintainers to do it, ship a new version, and align your usage with whatever API they end up with. Owning the behavior means depending less on someone else's release schedule.

## Modularity and bundle size

The whole point was to pay for what you actually use. The core module (component definition, props, refs, effects, event listeners) is under 2.5 KB. Everything else lives in separate entry points that you import only if you need them: shared state between related components (`nanotags/context`, ~400 B), keyed list rendering for dynamic content (`nanotags/render`, ~400 B). If you don't import them, they never touch your bundle.

For comparison, [Lit weighs 6.2 KB](https://bundlejs.com/?q=lit%2Clit%2Fdecorators.js&treeshake=%5B%7Bhtml%2Ccss%2CLitElement%7D%5D%2C%5B%7BcustomElement%2Cproperty%7D%5D), and React plus ReactDOM is [62.8 KB gzipped](https://bundlejs.com/?q=react-dom%4019.2.4%2Fclient%2Creact%4019.2.4&treeshake=%5B%7B+createRoot+%7D%5D%2C%5B%7B+default+as+React+%7D%5D), before you add a single component library on top. These are not apples-to-apples comparisons. React is a full rendering engine with a virtual DOM, a component model, and a massive ecosystem. Lit is a full Web Components framework with template literals and reactive properties. nanotags is a thin layer over Custom Elements. 

> The question isn't which is "better", it's how much of a framework you actually need for the page you're building.

There's no runtime sitting between your code and the browser either. After initialization, it's just the DOM—events fire, attributes change, your handlers run. Astro renders the markup, nanotags wires up the interactivity, the browser does the rest.

> Combined with nanostores (under 1 KB), the total cost of a fully reactive component system on a static site is around 3 KB.

That's less than a single hero image on most marketing pages.

## Final React-ions

React is a great tool. So is Vue, so is Svelte, so is Solid. 

But for a marketing site, a documentation portal, or a landing page, where the majority of the content is static and the interactive parts are simple (a dropdown, a dialog, a few tabs), the web platform already has what you need. Custom Elements, standard DOM APIs, regular CSS. Pair that with Astro for static rendering and nanostores for lightweight reactivity, and you have a stack that's fast, small, and built on standards.

nanotags is the layer that makes this stack pleasant to write. Validated, reactive props. Type-safe refs. Typed events. Automatic cleanup. Reusable a11y attachments. All under 2.5 KB, no Shadow DOM, no template engine, no runtime between your code and the browser.

The API is stable, the [documentation site](https://nanotags.psdcoder.dev/) has interactive examples, and there's an llms.txt for working alongside AI assistants. Source on [GitHub](https://github.com/psd-coder/nanotags).

Next time you start a marketing site, before you reach for React out of habit, try the platform first. You may find it does more than you remember.

---

**Lighter, faster frontends.** Shipping a React SPA when a static site would do? We help teams pick the right tool and cut JavaScript bundles without losing features. [Contact Evil Martians](https://evilmartians.com/contact-us)
