Migrating Whop from PostgreSQL to PlanetScale MySQL with 0 downtime


Hypergrowth forces bold moves. Here’s one such case we helped ship: moving a high-traffic Rails app from PostgreSQL to PlanetScale MySQL, without pausing development, and learning exactly how to bridge two familiar yet very different databases in the process.

Databases are a particularly sticky product decision. This choice, made early on, endures as long as possible since migration can be incredibly painful. We’ll show how Evil Martians helped a fast-moving Rails team pull off a live migration from PostgreSQL to PlanetScale MySQL with no downtime or development freezes.

You’ll read how a dual-database setup kept features shipping while we hunted down every dialect mismatch, precision pitfall, and missing index along the way. Consider this a field guide to surviving (and perhaps even enjoying) the code- and schema-level part of such a migration.

Whop makes the decision to migrate

Whop powers thousands of creators and communities. Eventually, a surge of celebrity traffic pushed their Rails app and primary database to the edge. Big influencers (think millions of subscribers) had started bringing their communities to Whop. As a result, Whop started experiencing downtime because of these intense traffic increases.

Whop’s call: migrate from PostgreSQL to PlanetScale MySQL. That’s not a simple database.yml swap; while these two SQL worlds rhyme, they differ where it matters.

The dual database strategy: keeping development velocity alive

Our first challenge wasn’t technical; it was organizational.

The thing was, Whop couldn’t freeze development for weeks while we migrated their database. New features, bug fixes, and customer requests don’t pause for infrastructure projects. We needed a way to migrate without stopping the team’s momentum.

Our solution: run Rails in dual-database mode.

This is like the database equivalent of a Rails dual boot strategy. With a single environment variable, an entire application (and test suite) can switch between PostgreSQL and MySQL. This allowed Whop’s developers to keep shipping features on PostgreSQL while we worked on MySQL compatibility in parallel.


The implementation centers on a simple on_mysql? helper method:

module Kernel
  # The ENV check happens once at load time; after that, on_mysql? is a cheap,
  # constant method available throughout the app (and the test suite).
  if ENV["ON_MYSQL"] == "true"
    def on_mysql? = true
  else
    def on_mysql? = false
  end
end

This Boolean check appears throughout the codebase wherever database-specific behavior diverges. Rather than littering the code with environment variable checks, on_mysql? provides a clean, readable way to handle database differences:

default: &default
  pool: ...
  connect_timeout: ...
  adapter: <%= on_mysql? ? "trilogy" : "postgresql" %>
<% if on_mysql? %>
  schema_dump: <%= ENV["DUMP_STRUCTURE"] == "1" ? "structure.mysql.sql" : "schema.rb" %>
<% else %>
  encoding: unicode
  prepared_statements: false
  advisory_locks: false
  ...
<% end %>

In this dual-database setup, PostgreSQL mode preserves the original structure.sql and adapter settings.

MySQL mode switches to the trilogy adapter and uses schema.rb by default (or structure.mysql.sql when raw SQL is needed).

This dual database approach reveals just how many subtle differences exist between these databases.

For instance, case-insensitive searches require different operators:

op = on_mysql? ? "LIKE" : "ILIKE"
scope = scope.where("resources.name #{op} ?", "%#{sanitized_query}%")

MySQL’s LIKE is case-insensitive by default, while PostgreSQL needs ILIKE.
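As an aside, for simple cases like this you can sometimes avoid the branch entirely: Arel’s matches renders ILIKE on PostgreSQL and LIKE on MySQL. A minimal sketch (assuming a hypothetical Resource model backing the resources table, with sanitized_query from the snippet above), shown as an option rather than what the production code does:

# Arel#matches is case-insensitive by default: ILIKE on PostgreSQL, LIKE on MySQL.
scope = scope.where(Resource.arel_table[:name].matches("%#{sanitized_query}%"))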

Meanwhile, JSON array operations require completely different syntax:

if on_mysql?
  scope :with_roles, lambda { |roles|
    sanitized_roles = sanitize_roles(Array(roles))
    where("JSON_OVERLAPS(roles, JSON_ARRAY(#{sanitized_roles.join(',')}))")
  }
else
  scope :with_roles, lambda { |roles|
    where("roles @> ARRAY[?]::varchar[]", Array(roles))
  }
end

This dual-database setup became our migration safety net. Every change could be tested against both databases before deployment.

When problems emerged (as they always do), we could debug them without impacting production. Meanwhile, the development team kept shipping features while we systematically addressed compatibility issues.

The schema safari: where data types roam wild

With our dual database infrastructure in place, we started discovering just how deep these differences run. While LIKE and ILIKE are simple enough to remember, the real challenges emerged in the schema layer.

If you assumed DECIMAL without precision works everywhere, or that DISTINCT ON is universal SQL, you’re about to discover PostgreSQL has been spoiling you. Moving to MySQL means confronting fundamental differences—from how NULLs sort to what happens when your partial indexes suddenly don’t exist.

This is the field guide we wish we’d had: the specific gotchas, tested workarounds, and migration patterns that kept Whop’s development velocity intact.

Whether you’re planning a similar move or architecting around database constraints, here’s what actually breaks and how to fix it.

Numeric precision: the decimal awakening

Let’s be honest: PostgreSQL has spoiled us rotten. In Rails migrations, we can casually write t.decimal "price" or t.decimal "balance" without specifying precision and scale, including for monetary values across different currencies (where the order of subunits can vary greatly from one currency to another). PostgreSQL just handles it, with generous limits that feel like unlimited storage space compared to most real-world needs.

Then, MySQL brought us back to reality fast. Those casual t.decimal declarations default to precision 10, scale 0 (yes, zero!), meaning your balance column that happily stored 1234.56 in PostgreSQL will now truncate to 1235 and leave you wondering why your financial calculations are suddenly broken.

The fix is to always specify your precision needs upfront:

t.decimal "balance", precision: 10, scale: 2

Migration survival tip: You’re in the middle of a database migration, so don’t create extra problems for yourself.

Be generous with precision and scale: set precision to 15 and scale to 4 for most monetary fields, or even higher if you’re unsure. Finish the migration successfully first; optimize later. It’s better to have a working system with slightly oversized decimal fields than a broken migration because you tried to be clever with storage.
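In Rails terms, that pre-migration widening could look something like the sketch below (hypothetical table and column names, not Whop’s actual schema):

class WidenMonetaryColumns < ActiveRecord::Migration[7.1]
  def up
    # Generous precision/scale so no currency's subunits get rounded away.
    change_column :payments, :balance, :decimal, precision: 15, scale: 4
  end

  def down
    change_column :payments, :balance, :decimal, precision: 10, scale: 2
  end
end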

Timestamps: precision strikes again

If you thought precision differences were just a decimal problem, timestamps have their own surprise waiting. Both databases support timestamp precision, but they have very different defaults. MySQL defaults to 0 fractional seconds, while PostgreSQL gives you (up to) 6.

If fractional seconds matter to your application (and they often do for audit trails or event ordering), you’ll need to be explicit:

ALTER TABLE users ADD COLUMN created_at TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP(6);
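The same idea as a Rails migration might look like the sketch below (hypothetical table and column names). Recent Rails versions already default new datetime columns to precision 6, so this mostly matters for columns created long ago or outside Rails:

class AddPrecisionToEventTimestamps < ActiveRecord::Migration[7.1]
  def up
    # Keep microseconds instead of MySQL's default of zero fractional seconds.
    change_column :events, :occurred_at, :datetime, precision: 6
  end

  def down
    change_column :events, :occurred_at, :datetime, precision: 0
  end
end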

JSON and the Array exodus

Our JSON story starts simply enough: MySQL has JSON, but no JSONB. Your JSONB columns migrate to plain JSON without drama. But if you’re indexing individual JSON attributes, you’ll hit MySQL’s functional index limitations fast.

Try to create an index on a JSON function and MySQL will say: “Cannot create a functional index on a function that returns a JSON or GEOMETRY value.”

The workaround is generated columns that extract your JSON keys into indexable fields:

ALTER TABLE users ADD COLUMN data_key INT GENERATED ALWAYS AS (data->'$.key') STORED;
CREATE INDEX index_users_on_data_key ON users (data_key);

Arrays present a bigger migration puzzle. PostgreSQL’s native array types simply don’t exist in MySQL—you’ll need to serialize them as JSON and adjust your queries accordingly.

Migration survival tip: When preparing data for migration, generated columns seem like the perfect bridge. Both databases support them, so why not add a generated column in PostgreSQL, migrate it to MySQL, then drop the original array column? Here’s the problem: PostgreSQL only has STORED generated columns, never virtual ones. Adding a stored generated column to a large table triggers a full table rewrite—exactly what you want to avoid during a migration. Instead, add a regular column and compute its values when you see fit.
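In practice, that bridge can be as boring as a plain JSON column plus an unhurried backfill. A sketch with illustrative names (the batch size is arbitrary):

class AddRolesJsonToUsers < ActiveRecord::Migration[7.1]
  def change
    # A regular column: no table rewrite, values filled in at our own pace.
    add_column :users, :roles_json, :json
  end
end

# Later, in a rake task or background job rather than the migration itself:
User.where(roles_json: nil).in_batches(of: 1_000) do |batch|
  batch.each { |user| user.update_column(:roles_json, user.roles) }
end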

Partial indexes: the conditional challenge

PostgreSQL’s partial indexes are those elegant WHERE clauses that let you index just the records you care about. And they don’t exist in MySQL. So, if you’ve been using them as unique constraints on subsets of data, you’re in for a creative problem-solving session.

Non-unique partial indexes are straightforward: drop the condition and index the whole column. But unique partial indexes require a clever workaround that exploits how NULLs behave in unique constraints.

Since all NULLs are considered distinct, you can enforce uniqueness on specific record subsets by making your conditional logic return NULL for excluded records:

t.index "`name`, (nullif((`status` = 'active'),false))",
        name: "index_active_users_on_name_uniq", unique: true

In MySQL, this translates to:

CREATE UNIQUE INDEX index_users_on_name_uniq ON users (name, (nullif((`status` = 'active'),false)));

And here’s how it works in practice:

> INSERT INTO users(name, status) VALUES('John Doe', 'active');
Query OK, 1 row affected (0.002 sec)

> INSERT INTO users(name, status) VALUES('John Doe', 'active');
ERROR 1062 (23000): Duplicate entry 'John Doe-1' for key 'users.index_users_on_name_uniq'

> INSERT INTO users(name, status) VALUES('John Doe', 'inactive');
Query OK, 1 row affected (0.002 sec)

The nullif() function returns NULL when both expressions are equal, otherwise it returns the first expression. This means active users get a non-NULL value (enforcing uniqueness) while inactive users get NULL (allowing duplicates).

Alternative approach: Use generated columns to move the conditional logic into a separate indexed field. This can be cleaner for complex conditions but requires more schema changes.
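One sketch of that alternative, with hypothetical column and index names, using a stored generated column plus an ordinary unique index:

class AddUniqueActiveUserNames < ActiveRecord::Migration[7.1]
  def up
    # 1 for active users, NULL for everyone else; NULLs never collide in a
    # unique index, so only active users are constrained.
    execute <<~SQL
      ALTER TABLE users
        ADD COLUMN active_flag TINYINT
        GENERATED ALWAYS AS (NULLIF(status = 'active', FALSE)) STORED
    SQL
    add_index :users, [:name, :active_flag], unique: true,
              name: "index_users_on_name_and_active_flag"
  end

  def down
    remove_index :users, name: "index_users_on_name_and_active_flag"
    execute "ALTER TABLE users DROP COLUMN active_flag"
  end
end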

The materialized views mirage

If you’ve been leaning on PostgreSQL’s materialized views for complex aggregations or reporting queries, MySQL has some disappointing news. The MySQL FAQ cuts straight to the point:

“Does MySQL have materialized views? No.”

Your materialized views need new homes. You have two main paths forward:

Option 1: Roll your own with separate tables. Create dedicated tables for your materialized data and refresh them via triggers or scheduled jobs.

The scheduled approach is simpler to implement, but full refreshes create a dangerous window where your table is empty between the TRUNCATE and INSERT. If your application tries to read during this window (or worse, caches the empty result), you’ll have bigger problems than a database migration.

You can reduce the risks of this scenario by creating another table, filling it with data, then replacing the old table with the new one, and finally dropping the old table. This atomic swap eliminates the empty-table window that can break your application.
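In practice, the refresh job can build the replacement table off to the side and swap the names in a single statement; MySQL’s RENAME TABLE handles multiple renames atomically. A rough sketch with illustrative table names:

class RefreshDailyRevenueRollup
  def call
    conn = ActiveRecord::Base.connection

    # Build the new snapshot next to the live table.
    conn.execute("DROP TABLE IF EXISTS daily_revenue_rollup_new")
    conn.execute("CREATE TABLE daily_revenue_rollup_new LIKE daily_revenue_rollup")
    conn.execute(<<~SQL)
      INSERT INTO daily_revenue_rollup_new (day, total_cents)
      SELECT DATE(created_at), SUM(amount_cents) FROM payments GROUP BY DATE(created_at)
    SQL

    # Atomic swap: readers see either the old table or the new one, never an empty one.
    conn.execute(<<~SQL)
      RENAME TABLE daily_revenue_rollup TO daily_revenue_rollup_old,
                   daily_revenue_rollup_new TO daily_revenue_rollup
    SQL
    conn.execute("DROP TABLE daily_revenue_rollup_old")
  end
end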

Option 2: Switch to regular views. If your underlying queries are fast enough without materialization, regular views eliminate the refresh complexity entirely. Test this thoroughly. Something that runs quickly on a development database might not scale to production.

The query chronicles: syntactic adventures

The NULL sorting surprise

Just when you think you’ve mapped all the differences, NULL sorting behavior sneaks up on you. This isn’t just about missing syntax. Rather, MySQL and PostgreSQL have fundamentally different opinions about where NULLs belong in sorted results.

The missing syntax: MySQL doesn’t support PostgreSQL’s NULLS FIRST and NULLS LAST clauses.

The bigger problem: Even without explicit NULL positioning, the databases sort them differently by default:

  • PostgreSQL: NULLs sort as if larger than any non-null value (NULLS FIRST for DESC, NULLS LAST for ASC)

    > SELECT id, name, last_activity_on FROM users ORDER BY last_activity_on DESC;
     id |    name     | last_activity_on
    ----+-------------+------------------
      3 | Jane Doe Jr |
      1 | John Doe    | 2025-08-01
      2 | John Doe Jr | 2025-08-01
  • MySQL: NULLs come first for ASC and last for DESC

    > SELECT id, name, last_activity_on FROM users ORDER BY last_activity_on DESC;
    +----+-------------+------------------+
    | id | name        | last_activity_on |
    +----+-------------+------------------+
    | 14 | John Doe    | 2025-08-08       |
    | 15 | John Doe Jr | 2025-08-08       |
    | 16 | Jane Doe    | NULL             |
    +----+-------------+------------------+

The migration-safe solution: If NULL positioning matters to your application, you should already have explicit NULLS FIRST/LAST in your code.

If they’re missing but you need them, don’t add them mid-migration. In either case, use Arel to generate portable SQL:

MyModel.order(MyModel.arel_table[:my_column].desc.nulls_last)

This approach ensures your code migrates to MySQL without breaking existing sort behavior:

-- Forcing PostgreSQL to match MySQL's NULL placement
> SELECT id, name, last_activity_on FROM users ORDER BY last_activity_on DESC NULLS LAST;
 id |    name     | last_activity_on
----+-------------+------------------
  1 | John Doe    | 2025-08-01
  2 | John Doe Jr | 2025-08-01
  3 | Jane Doe Jr |

When you need MySQL to override its default NULL positioning, you can use conditional sorting to achieve the same effect as PostgreSQL’s explicit NULLS FIRST/LAST clauses:

-- Forcing MySQL to put NULLs first (opposite of its default)
> SELECT id, name, last_activity_on FROM users ORDER BY (last_activity_on IS NULL) DESC, last_activity_on ASC;
+----+-------------+------------------+
| id | name        | last_activity_on |
+----+-------------+------------------+
| 16 | Jane Doe    | NULL             |
| 14 | John Doe    | 2025-08-08       |
| 15 | John Doe Jr | 2025-08-08       |
+----+-------------+------------------+
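The same trick from ActiveRecord might look like this (a sketch using the User model and last_activity_on column from the examples above):

# NULLs first on MySQL: sort the "is NULL" flag before the column itself.
User.order(Arel.sql("(last_activity_on IS NULL) DESC"), last_activity_on: :asc)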

DISTINCT ON: a PostgreSQL party trick

PostgreSQL’s DISTINCT ON is one of those features that spoils you. It elegantly solves the “first record per group” problem that comes up constantly in Rails applications. Want the latest order per customer? The highest-scoring post per category? DISTINCT ON handles it cleanly.

But MySQL doesn’t have DISTINCT ON. At all.

So, if you’ve been relying on queries like this in your Rails app:

SELECT DISTINCT ON (category_id) *
FROM items
ORDER BY category_id, qty DESC;

You’ll need to rewrite them using subqueries and window functions. The MySQL equivalent requires more ceremony but achieves the same result:

SELECT *
FROM (
   SELECT items.*, ROW_NUMBER() OVER (PARTITION BY category_id ORDER BY qty DESC) as rn
   FROM items
) subquery
WHERE rn = 1;
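If you’d rather stay in ActiveRecord, one way to express the same rewrite is a subquery passed to from (a sketch, assuming a hypothetical Item model for the items table):

# Rank rows per category, then keep only the top-ranked row from each group.
ranked = Item.select(
  "items.*",
  "ROW_NUMBER() OVER (PARTITION BY category_id ORDER BY qty DESC) AS rn"
)
top_per_category = Item.from(ranked, :items).where("rn = 1")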

The migration postmortem

The Whop team’s PostgreSQL-to-MySQL migration was more than a mere database swap. It represented a masterclass in how different databases shape your application architecture; a lot of features PostgreSQL gives us for free require deliberate engineering decisions in MySQL.

Now, here’s the migration checklist that actually matters:

  • Audit your DISTINCT ON queries first. These will require the most rewriting effort.
  • Map every PostgreSQL array column. JSON migrations need planning, not last-minute discoveries.
  • Inventory your partial indexes. Unique constraints on subsets need creative solutions.
  • Test your decimal precision assumptions. Financial calculations break silently.
  • Document your NULL sorting dependencies. Different defaults create subtle bugs.

Here’s what made this migration successful:

Simply put, the team didn’t fight MySQL’s design philosophy. Instead of trying to replicate PostgreSQL’s feature set, they adapted their application patterns to MySQL’s strengths. Generated columns replaced complex partial indexes. JSON arrays replaced native arrays. Window functions replaced DISTINCT ON.

The real lesson here: Database migrations succeed when you embrace the destination database’s worldview rather than recreating your source database’s behavior.

PostgreSQL and MySQL aren’t just different syntaxes; they represent different approaches to solving data problems. The dual database pattern proved essential here. When you can’t afford downtime and can’t stop development, running parallel database support isn’t just helpful, it’s necessary.

Your migration should start with an honest inventory. Count your JSONB columns, list your array fields, and audit your partial indexes. The technical translation work is predictable; the surprises come from features you forgot you were using.

The end…?

Migrating code and development processes was actually only the first step on the path to PlanetScale. Next came the stages of data migration and replication and preparing to move production without a single interruption—but that’s quite another story, and we hope Whop will tell it someday!

Book a call

Irina Nazarova, CEO at Evil Martians

Reach out to migrate from one DB to another without ruining developer velocity.