Ruby on Whales: Dockerizing Ruby and Rails development

This post introduces the Docker-based development runtime configuration I use for building Ruby on Rails projects, both in an old-school way (by writing it myself) and in a modern way (by instructing LLM agents). This configuration has been helping Evil Martians and many other teams across the globe ship amazing products without worrying about local setup intricacies. Read on to learn all the details, and feel free to use it, share it, and enjoy!
Notice: This article is regularly updated with the best and latest recommendations; for details, take a look at the Changelog.
So, where to start? This has been a pretty long journey: back in the day, I used to develop using Vagrant, but its VMs were a bit too heavy for my 4GB RAM laptop. In 2017, I decided to switch to containers, and that’s how I first started using Docker. But don’t get the impression that this was an instant fix!
I was in search of a configuration that was perfect for myself, my team, and, well, everyone else. Something that was merely good enough would not cut it. It took us quite some time to develop the approach known today as “Ruby on Whales” or simply the “Dip setup” (named after one of its key components; more on that below). Let me introduce the third major edition of this setup.
Originally, this post consisted mostly of annotated code and configuration examples. These days, we’ve shifted the focus towards the whys and the hows (but we’ve still kept the annotated configurations in place).
Ruby on Whales in a nutshell
A reproducible development environment is the key to efficient teamwork. Onboarding new engineers (or letting non-engineers quickly spin up a project), seamlessly rolling out development infrastructure changes (databases, native dependencies, and so on), eliminating “works on my machine” problems—all of that becomes possible if you encourage developers to use a standardized, consistent development environment.
Today, we usually use some form of containerization to create reproducible, isolated development environments. Containers can be launched remotely (Codespaces, Coder, Gitpod) or locally (Docker, Podman). You can put all dependencies into a single container or use a multi-service architecture; the number of ways you can orchestrate all of that is hardly enumerable.
Still, many teams do not invest in reproducible development experience. Why is that? Maybe they just don’t know where to start or which approach to pick? Ruby on Whales aims to answer both questions for Ruby on Rails applications.
Ruby on Whales is a local development configuration that uses Docker as its core and a multi-service architecture (via Compose). The minimal Docker development setup consists of the following components:
- a Docker runtime (e.g., Docker Desktop, OrbStack)
- a development container configuration (Dockerfile) to run Ruby, Node.js, etc.
- a manifest (compose.yml) to define infrastructure dependencies (e.g., databases) and secondary services
With just two files, you can launch a containerized development environment with all dependencies using a single command: docker compose up. Or docker compose run web? Or, maybe, docker compose run --rm -it web?
Here’s the problem: the Docker Compose developer experience. Compose isn’t meant to be a tool for everyday development use. That’s why teams create custom Bash scripts, Makefiles, and the like to hide the details of running development containers. We solve this DX problem by using Dip.
Dip is a thin wrapper over docker compose that provides a switch from an infrastructure-oriented flow to a development-oriented one. The key benefits of using Dip are as follows:
- The ability to define application-specific interactive commands and sub-commands; team-wide (or shared) commands are also supported via modules
- The dip provision command to quickly set up a development environment from scratch
- Support for multiple compose.yml files (including OS-specific configurations)
- Shell and ssh-agent integrations
With Dip in place, to start working on the app locally, you just need to execute a few commands:
# Builds a Docker image if none, runs additional commands
$ dip provision
# Runs a Rails server with the defined dependencies
$ dip rails s
=> Booting Puma
=> Rails 8.1.0 application starting in development
=> Run `bin/rails server --help` for more startup options
[1] Puma starting in cluster mode...
...
[1] - Worker 0 (PID: 9) booted in 0.0s, phase: 0

The dip rails s command is just like bin/rails s: the server is accessible on localhost:3000, and debugging (debugger or binding.irb) just works. Similarly, you can run dip rspec, dip psql, etc.
Thus, the Ruby on Whales setup consists of three components: Docker (Dockerfile), Compose (compose.yml), and Dip (dip.yml).
Why not DevContainers?
DevContainers is the most popular tool for building development containers today. We even have a default configuration with every rails new. So, why not just use it?
DevContainers assume that the entire development environment (including your IDE) runs within a Docker container. Terminal enhancements, extensions, and personal editor preferences must be brought to the container (and synchronized between projects). Even though the spec is editor-agnostic, non-VS Code-based editors either do not support it or have constant issues.
The Dip approach separates the tools for running code (within containers) from those for writing code (editors and terminals). (Truly) any editor, any terminal, your personalized development experience as a code writer. The separation has downsides, of course: dev tools, such as language servers, linters, and IDE extensions, may require special attention to work well with a containerized environment. Our Dip setup aims to address the most common DX issues; however, meeting all needs is hardly possible compared to running an IDE in a container.
In the end, it’s more a matter of taste and choosing tradeoffs. Some good news: you can have both configurations in your project, sharing the same Docker and Compose configurations—Dip is not a monopolist.
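For instance, a minimal devcontainer.json could reuse the Compose file from this setup. Here’s a sketch; the service name, relative path, and workspace folder below are assumptions for illustration, not part of the generated configuration:

```json
{
  "name": "rails-app",
  "dockerComposeFile": ["../.dockerdev/compose.yml"],
  "service": "rails",
  "workspaceFolder": "/app",
  "overrideCommand": true
}
```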
Speaking of writing code, we need to cover one more use case for using containers for development.
Isolating LLMs with Dip
Reproducibility implies isolation: any software installed on your machine can interfere with your development runtime. However, these days, we more often consider isolation from the opposite perspective: a way to prevent unrestricted access to the host system by development software. Yes, I’m talking about AI tools.
Recently, I’ve started using Dip more often, even for smaller Rails projects with no complex dependencies. Why? Because I’m not comfortable running claude --dangerously-skip-permissions all the time. There is something in this “dangerously” prefix that makes me more nervous with every task I delegate to AI, with every skill I install (even when I install Every’s skills).
I stopped worrying (almost) when I added the dip claude command to the Ruby on Whales setup. Claude runs within a container with all the dependencies available (so it can run bin/rails test and bin/rubocop when needed). It can only access the project’s files (all of them or not—it’s your decision via volumes mapping); credentials and settings are stored on a per-project basis in Docker volumes.
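In our setup, that boils down to one more command definition in dip.yml. A sketch; the generated definition may differ (for example, in how the Claude volumes and settings are wired):

```yaml
interaction:
  claude:
    description: Run Claude Code within the app container
    service: rails
    command: claude
```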
Now, let me show how to get started with using Ruby on Whales.
Quick start
Ruby on Whales ships with a generator (or a Rails template) that can help you quickly adopt Docker for development by running a single command (and answering a few questions).
The source code can be found in the evilmartians/ruby-on-whales repository on GitHub.
Let’s see it in action:
An interactive Ruby on Whales installer (2026 edition)
The generator analyzes the project’s configuration to infer sensible defaults, so in most cases you just need to confirm them. However, a deterministic generator cannot account for all project-specific requirements; some manual fine-tuning may be required. That’s why the generator finishes its work by asking Claude Code (if available) to finalize the setup. It is our first experiment with hybrid generators (as we call them)—please, let us know what you think of this idea (or should we just release a skill instead?)
We recommend running the generator via Ruby Bytes, a tool to install Thor-based application templates with a charming interface (i.e., built with Charm Ruby libraries):
rbytes install https://railsbytes.com/script/z5OsoB

The generator creates a .dockerdev/ directory and a dip.yml file in your project. Let’s see what’s in there.
Annotated configuration
In this section, we’ll go through all the configuration files and explain every line. Fasten your seat belts! (Or feel free to skip this section if you only need to get things up and running right now; you can return to unveiling the magic behind the configuration later).
Dockerfile
The Dockerfile defines our Ruby application’s environment. This environment is where we’ll run servers, access the console (rails c), perform tests, do Rake tasks, and otherwise interact with our code in any way as developers:
# syntax=docker/dockerfile:1
ARG RUBY_VERSION
ARG DISTRO_NAME=bookworm
FROM ruby:$RUBY_VERSION-slim-$DISTRO_NAME
ARG DISTRO_NAME
# Common dependencies
RUN \
rm -f /etc/apt/apt.conf.d/docker-clean; \
echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache; \
apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get -yq dist-upgrade && \
DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
build-essential \
gnupg2 \
curl \
less \
git
# Install PostgreSQL dependencies
ARG PG_MAJOR
RUN curl -sSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | \
gpg --dearmor -o /usr/share/keyrings/postgres-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/postgres-archive-keyring.gpg] https://apt.postgresql.org/pub/repos/apt/" \
$DISTRO_NAME-pgdg main $PG_MAJOR | tee /etc/apt/sources.list.d/postgres.list > /dev/null
RUN \
apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get -yq dist-upgrade && \
DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
libpq-dev \
postgresql-client-$PG_MAJOR
# Install NodeJS and Yarn
ARG NODE_MAJOR
RUN \
curl -sL https://deb.nodesource.com/setup_$NODE_MAJOR.x | bash - && \
DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
nodejs
RUN npm install -g yarn
# Application dependencies
# We use an external Aptfile for this, stay tuned
RUN --mount=type=bind,source=Aptfile,target=/tmp/Aptfile \
DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
$(grep -Ev '^\s*#' /tmp/Aptfile | xargs)
# Install Claude CLI
RUN curl -fsSL https://claude.ai/install.sh | bash
ENV PATH=/root/.local/bin:$PATH
ENV IS_SANDBOX=1
# Configure bundler
ENV LANG=C.UTF-8 \
BUNDLE_JOBS=4 \
BUNDLE_RETRY=3
# Store Bundler settings in the project's root
ENV BUNDLE_APP_CONFIG=.bundle
# Uncomment this line if you want to run binstubs without prefixing with `bin/` or `bundle exec`
# ENV PATH /app/bin:$PATH
# Upgrade RubyGems and install the latest Bundler version
RUN gem update --system && \
gem install bundler
# Create a directory for the app code
RUN mkdir -p /app
WORKDIR /app
# Document that we're going to expose port 3000
EXPOSE 3000
# Use Bash as the default command
CMD ["/bin/bash"]

This configuration only contains the essentials, so it can be used as a starting point. Let me illustrate what we’re doing here a bit further. The first three lines might look a bit strange:
ARG RUBY_VERSION
ARG DISTRO_NAME=bookworm
FROM ruby:$RUBY_VERSION-slim-$DISTRO_NAME

Why not just use FROM ruby:3.4.1, or whatever is the stable Ruby version du jour? Well, we’re going this route because we want to make our environment configurable from the outside, using the Dockerfile as a sort of template:
- The exact versions of the runtime dependencies are specified in compose.yml (see below 👇).
- The list of apt-installable dependencies is stored in a separate file (also, see below 👇👇).
Additionally, we parameterize the Debian release (bookworm by default) to make sure we’re adding the correct sources for our other dependencies (such as PostgreSQL).
NOTE: The generator produces a tailored Dockerfile based on your project’s dependencies. The example shown here is the “full-featured” variant with PostgreSQL, Node.js, and all optional features enabled. Your generated output may differ.
Alright, now, note that we declare the argument once again after the FROM statement:
FROM ruby:$RUBY_VERSION-slim-$DISTRO_NAME
ARG DISTRO_NAME

That’s the tricky part of how Dockerfiles work: the args are reset after the FROM statement. For more details, check out this issue.
Moving on, the rest of the file contains the actual build steps. First, we’ll need to manually install some common system dependencies (Git, cURL, etc.), as we’re using the slim base Docker image to reduce the size:
# Common dependencies
RUN \
apt-get update -qq \
&& DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
build-essential \
gnupg2 \
curl \
less \
git

We’ll explain all the details of installing system dependencies below, alongside the application-specific dependencies.
Installing PostgreSQL and NodeJS via apt requires adding their deb package repos to the sources list.
Here’s the PostgreSQL setup (based on the official documentation):
ARG PG_MAJOR
RUN curl -sSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | \
gpg --dearmor -o /usr/share/keyrings/postgres-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/postgres-archive-keyring.gpg] https://apt.postgresql.org/pub/repos/apt/" \
$DISTRO_NAME-pgdg main $PG_MAJOR | tee /etc/apt/sources.list.d/postgres.list > /dev/null
RUN \
apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get -yq dist-upgrade && \
DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
libpq-dev \
postgresql-client-$PG_MAJOR

Since we aren’t expecting anyone to use this Dockerfile without Docker Compose, we don’t provide a default value for the PG_MAJOR argument (the same applies to NODE_MAJOR below).
NOTE: The generator also supports MySQL and SQLite3 as an alternative to PostgreSQL. When MySQL is selected, the Dockerfile installs default-libmysqlclient-dev and the default-mysql-client packages instead.
Also, notice in the code above that the DISTRO_NAME argument we defined at the very beginning of the file comes back into play.
And we repeat our apt-get update ... apt-get install spell again: we want to make sure all the major pieces of our environment are built in isolated layers (this will help us better utilize the Docker cache when performing upgrades).
For NodeJS, we use the NodeSource setup script, which handles adding the repository and GPG key for us:
ARG NODE_MAJOR
RUN \
curl -sL https://deb.nodesource.com/setup_$NODE_MAJOR.x | bash - && \
DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
nodejs

Then, we install Yarn via NPM:

RUN npm install -g yarn

So, why are we adding NodeJS and Yarn in the first place? Rails now defaults to import maps for a Node-less approach, but if your app uses a JS bundler (Vite, esbuild) or has a complex frontend, you’ll need Node.js. We include it in the full-featured example so you don’t have to reconfigure later.
Now it’s time to install the application-specific dependencies:
RUN --mount=type=bind,source=Aptfile,target=/tmp/Aptfile \
DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
$(grep -Ev '^\s*#' /tmp/Aptfile | xargs)

Let’s talk about that Aptfile trick a bit:

RUN --mount=type=bind,source=Aptfile,target=/tmp/Aptfile \
apt-get install \
$(grep -Ev '^\s*#' /tmp/Aptfile | xargs)

Instead of using COPY to add the Aptfile to the image (which creates an extra layer), we use --mount=type=bind to make it available during the build step only. This idea was borrowed from heroku-buildpack-apt.
Here is an example Aptfile:
vim
# Application dependencies
sqlite3
libvips-dev
# Claude
bubblewrap
socat

Vim is always included so we can edit credentials. The sqlite3 and libvips-dev packages are included based on the gems present in the Gemfile.lock. Finally, the Claude CLI dependencies are added when we decide to install it within a container.
Keeping project-specific dependencies in a separate file makes our Dockerfile more universal.
As for DEBIAN_FRONTEND=noninteractive, I kindly ask you to take a look at this answer on Ask Ubuntu.
The --no-install-recommends option helps save some space (and makes our image smaller) by disabling the installation of recommended packages. You can see more about saving disk space here.
That first (fairly cryptic) part of the initial RUN statement serves the same purpose: it tells apt to keep the downloaded package files in a cache directory that can be preserved between builds (instead of deleting them right away), so the resulting Docker layers don’t accumulate garbage. It also greatly speeds up the image build!
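On its own, the keep-cache tweak only stops apt from deleting downloaded packages; to actually reuse them across image rebuilds, it can be paired with BuildKit cache mounts. A sketch, not part of the generated Dockerfile (the mount targets are apt’s standard cache locations):

```dockerfile
# Persist apt's package and lists caches across builds via BuildKit cache mounts.
# "sharing=locked" serializes concurrent builds touching the same cache.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update -qq && \
    DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends curl
```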
Claude CLI
The generator can optionally install Claude Code CLI into the development container:
# Install Claude CLI
RUN curl -fsSL https://claude.ai/install.sh | bash
ENV PATH=/root/.local/bin:$PATH
ENV IS_SANDBOX=1

The IS_SANDBOX=1 environment variable tells Claude CLI that it’s running inside a container, so it can adjust its behavior accordingly. This section is only included when the Claude CLI option is enabled in the generator.
The final part of the Dockerfile is mostly devoted to Bundler:
# Configure bundler
ENV LANG=C.UTF-8 \
BUNDLE_JOBS=4 \
BUNDLE_RETRY=3
# Store Bundler settings in the project's root
ENV BUNDLE_APP_CONFIG=.bundle
# Uncomment this line if you want to run binstubs without prefixing with `bin/` or `bundle exec`
# ENV PATH /app/bin:$PATH
# Upgrade RubyGems and install the latest Bundler version
RUN gem update --system && \
gem install bundler

Using LANG=C.UTF-8 sets the default locale to UTF-8. This is an emotional setting, as otherwise, Ruby would use US-ASCII for strings—and that’d mean waving goodbye to those sweet, sweet emojis! 👋
Setting BUNDLE_APP_CONFIG is required if you’ll use the <root>/.bundle folder to store project-specific Bundler settings (like credentials for private gems): the default Ruby image sets this variable to a global path, so without the override Bundler would ignore the local config.
Optionally, you can add your <root>/bin folder to the PATH in order to run commands without bundle exec. We don’t do this by default, because it could break in a multi-project environment (for instance, when you have local gems or engines in your Rails app).
Previously, we also had to specify the Bundler version (taking advantage of some hacks to make sure it’s picked up by the system). Luckily, since Bundler 2.3.0, we no longer need to manually install the version defined in the Gemfile.lock (BUNDLED_WITH). Instead, to avoid conflicts, Bundler does this for us.
compose.yml
Docker Compose is a tool we can use to orchestrate our containerized environment. It allows us to link containers to each other, and to define persistent volumes and services.
Below is the compose file for developing a typical Rails application with PostgreSQL as the database, and with Sidekiq as the background job processor:
x-app: &app
build:
context: .
args:
RUBY_VERSION: '3.4.1'
PG_MAJOR: '17'
NODE_MAJOR: '22'
image: example-dev:1.0.0
working_dir: ${PWD}
environment: &env
NODE_ENV: ${NODE_ENV:-development}
RAILS_ENV: ${RAILS_ENV:-}
tmpfs:
- /tmp
- ${PWD}/tmp/pids
x-backend: &backend
<<: *app
stdin_open: true
tty: true
volumes:
- ${PWD}:/${PWD}:cached
- bundle:/usr/local/bundle
- rails_cache:/${PWD}/tmp/cache
- assets:/${PWD}/public/assets
- node_modules:/${PWD}/node_modules
- vite_dev:/${PWD}/public/vite-dev
- vite_test:/${PWD}/public/vite-test
- history:/usr/local/hist
- claude:/root/.claude
- ./.claude.json:/root/.claude.json
- ./.psqlrc:/root/.psqlrc:ro
- ./.bashrc:/root/.bashrc:ro
environment: &backend_environment
<<: *env
REDIS_URL: redis://redis:6379/
DATABASE_URL: postgres://postgres:postgres@postgres:5432
MALLOC_ARENA_MAX: 2
WEB_CONCURRENCY: ${WEB_CONCURRENCY:-1}
BOOTSNAP_CACHE_DIR: /usr/local/bundle/_bootsnap
XDG_DATA_HOME: /${PWD}/tmp/cache
YARN_CACHE_FOLDER: /${PWD}/node_modules/.yarn-cache
HISTFILE: /usr/local/hist/.bash_history
PSQL_HISTFILE: /usr/local/hist/.psql_history
IRB_HISTFILE: /usr/local/hist/.irb_history
EDITOR: vi
CLAUDE_CODE_TMPDIR: /root/.claude/__tmp__
depends_on: &backend_depends_on
postgres:
condition: service_healthy
redis:
condition: service_healthy
services:
rails:
<<: *backend
command: bundle exec rails
web:
<<: *backend
command: bundle exec rails server -b 0.0.0.0
ports:
- '3000:3000'
depends_on:
sidekiq:
condition: service_started
# Uncomment these lines to always run Vite dev server
# vite:
# condition: service_started
sidekiq:
<<: *backend
command: bundle exec sidekiq
postgres:
image: postgres:17
volumes:
- .psqlrc:/root/.psqlrc:ro
- postgres:/var/lib/postgresql/data
- history:/usr/local/hist
environment:
PSQL_HISTFILE: /usr/local/hist/.psql_history
POSTGRES_PASSWORD: postgres
ports:
- 5432
healthcheck:
test: pg_isready -U postgres -h 127.0.0.1
interval: 5s
redis:
image: redis:7.4-alpine
volumes:
- redis:/data
ports:
- 6379
healthcheck:
test: redis-cli ping
interval: 1s
timeout: 3s
retries: 30
vite:
<<: *backend
command: ./bin/vite dev
volumes:
- ${PWD}:/${PWD}:cached
- bundle:/usr/local/bundle
- node_modules:/${PWD}/node_modules
- vite_dev:/${PWD}/public/vite-dev
- vite_test:/${PWD}/public/vite-test
environment:
<<: *backend_environment
VITE_RUBY_HOST: 0.0.0.0
ports:
- "3036:3036"
volumes:
bundle:
node_modules:
history:
rails_cache:
postgres:
redis:
assets:
vite_dev:
vite_test:
claude:

We define several services and two extension fields (x-app and x-backend). Extension fields allow us to define common parts of the configuration: we can attach YAML anchors to them and later embed them anywhere in the file.
Since we use Dip, the compose.yml file only acts as a services registry. And that’s why we can put it into the .dockerdev/ folder (and not in the project’s root like when using Docker Compose directly).
NOTE: The generator produces a tailored compose.yml based on your project’s dependencies. The example shown here is the “full-featured” variant. Your generated output will only include the services and volumes relevant to your project.
On that note, let’s go ahead and take a thorough look at each service.
x-app
The main purpose of this extension is to provide all the required information to build our application container (as defined in the Dockerfile above):
x-app: &app
build:
context: .
args:
RUBY_VERSION: '3.4.1'
PG_MAJOR: '17'
NODE_MAJOR: '22'

What is the context? The context directory defines the build context for Docker. This is something like a working directory for the build process—for example, when we execute the COPY command.
As this directory is packaged and sent to the Docker daemon every time an image is built, it’s better to keep it as small as possible. We’re good here, since our context is just the .dockerdev folder.
And, as we mentioned earlier, we’ll specify the exact version of our dependencies using the args as declared in the Dockerfile.
It’s also a good idea to pay attention to the way we tag images:
image: example-dev:1.0.0

One of the benefits of using Docker for development is the ability to automatically synchronize configuration changes across the team. This means the only time you need to upgrade the local image version is when you make changes to it (or to the arguments or files it relies on). Using example-dev:latest is like shooting yourself in the foot.
Keeping an image version also helps work with two different environments without any additional hassle. For example, when working on a long-standing “chore/upgrade-to-ruby-4” branch, you can easily switch to master and use the older image with the older version of Ruby: no need to rebuild anything.
Rule of thumb: Increase the version number in the image tag every time you change the Dockerfile or its arguments (upgrading dependencies, etc.).
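For example, a Ruby upgrade might look like this in compose.yml (the version numbers here are illustrative):

```yaml
x-app: &app
  build:
    context: .
    args:
      RUBY_VERSION: '3.4.2' # dependency upgraded...
  image: example-dev:1.1.0  # ...so the image tag is bumped alongside it
```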
We also set a dynamic working_dir:
working_dir: ${PWD}

This makes the container’s working directory match the host’s project directory, which is essential for the ${PWD}-based volume mounts (see below).
Next, we add some common environment variables (those shared by multiple services, e.g., Rails and Vite):
environment: &env
NODE_ENV: ${NODE_ENV:-development}
RAILS_ENV: ${RAILS_ENV:-}

There are several things going on here, but I’d like to focus on just one: the X=${X:-smth} syntax. It could be translated as “for the X variable within the container, use the host machine’s X env variable if present; otherwise, use the other value”. Thus, we make it possible to run a service in a different environment specified along with a command, e.g., RAILS_ENV=test docker compose up rails.
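Compose borrows this interpolation syntax from POSIX parameter expansion, so you can try it out in any shell:

```shell
# If RAILS_ENV is unset (or empty), fall back to the value after ":-"
unset RAILS_ENV
echo "${RAILS_ENV:-development}" # prints "development"

# If it is set, the variable's own value wins
RAILS_ENV=test
echo "${RAILS_ENV:-development}" # prints "test"
```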
Note that we’re using a dictionary value (NODE_ENV: xxx) and not a list value (- NODE_ENV=xxx) for the environment field. This allows us to re-use the common settings (see below).
We also tell Docker to use tmpfs for the /tmp folder within a container—and also for the tmp/pids folder of our application. This way, we ensure that no server.pid survives a container exit (say goodbye to any “A server is already running” errors):
tmpfs:
- /tmp
- ${PWD}/tmp/pids

x-backend
Alright, so now, we’ve finally reached the most interesting part of this post.
This service defines the shared behavior of all Ruby services.
Let’s talk about the volumes first:
x-backend: &backend
<<: *app
stdin_open: true
tty: true
volumes:
- ${PWD}:/${PWD}:cached
- bundle:/usr/local/bundle
- rails_cache:/${PWD}/tmp/cache
- assets:/${PWD}/public/assets
- node_modules:/${PWD}/node_modules
- vite_dev:/${PWD}/public/vite-dev
- vite_test:/${PWD}/public/vite-test
- history:/usr/local/hist
- claude:/root/.claude
- ./.claude.json:/root/.claude.json
- ./.psqlrc:/root/.psqlrc:ro
- ./.bashrc:/root/.bashrc:ro

Docker Desktop now uses VirtioFS by default, which has largely solved the macOS performance problems that plagued earlier versions. Alternatively, OrbStack provides excellent Docker performance on macOS. We keep :cached for compatibility.
The first item in the volumes list mounts the project directory using ${PWD}—this dynamic path allows the container’s directory structure to mirror the host, which is important for tools like debuggers and LSPs that need consistent paths.
The next line tells our container to use a volume named bundle to store the contents of /usr/local/bundle (this is where gems are stored by default). By doing this, we persist our gem data across runs: all the volumes defined in compose.yml will stay put until we run compose down --volumes.
The following lines put all the generated files into Docker volumes to avoid heavy disk operations on the host machine:
- rails_cache:/${PWD}/tmp/cache
- assets:/${PWD}/public/assets
- node_modules:/${PWD}/node_modules
- vite_dev:/${PWD}/public/vite-dev
- vite_test:/${PWD}/public/vite-test

Use volumes for generated content (assets, bundle, node_modules, and so on) to keep Docker fast.
NOTE: The assets volume stores Propshaft (or Sprockets) compiled assets. The vite_dev and vite_test volumes store Vite build output. If you’re using Webpacker (legacy), replace these with packs and packs-test volumes. For tailwindcss-rails, add something like assets_builds:/${PWD}/app/assets/builds.
The Claude CLI volumes (claude:/root/.claude and .claude.json) persist Claude’s configuration and session data across container restarts. These are only included when the Claude CLI option is enabled.
We’ll then mount different command line tools configuration files and a volume to persist their history:
- history:/usr/local/hist
- ./.psqlrc:/root/.psqlrc:ro
- ./.bashrc:/root/.bashrc:roOh, and why is psql in the Ruby container? That’s because it’s used internally when you run rails dbconsole.
Pressing onward, our .psqlrc file contains the following trick: it lets us specify the path to the history file via the PSQL_HISTFILE env variable, falling back to the default $HOME/.psql_history otherwise:
\set HISTFILE `[[ -z $PSQL_HISTFILE ]] && echo $HOME/.psql_history || echo $PSQL_HISTFILE`

The .bashrc file allows us to add terminal customizations within a container:
alias be="bundle exec"

Alright, let’s talk about the environment variables:
environment: &backend_environment
<<: *env
# ----
# Service discovery
# ----
REDIS_URL: redis://redis:6379/
DATABASE_URL: postgres://postgres:postgres@postgres:5432
# ----
# Application configuration
# ----
MALLOC_ARENA_MAX: 2
WEB_CONCURRENCY: ${WEB_CONCURRENCY:-1}
# -----
# Caches
# -----
BOOTSNAP_CACHE_DIR: /usr/local/bundle/_bootsnap
# This env variable is used by some tools (e.g., RuboCop) to store caches
XDG_DATA_HOME: /${PWD}/tmp/cache
# Puts the Yarn cache into a mounted volume for speed
YARN_CACHE_FOLDER: /${PWD}/node_modules/.yarn-cache
# ----
# Dev tools
# ----
HISTFILE: /usr/local/hist/.bash_history
PSQL_HISTFILE: /usr/local/hist/.psql_history
IRB_HISTFILE: /usr/local/hist/.irb_history
EDITOR: vi
CLAUDE_CODE_TMPDIR: /root/.claude/__tmp__

First of all, we “inherit” variables from the common environment variables (<<: *env).
The first group of variables (DATABASE_URL and REDIS_URL) connect our Ruby application to other services.
The DATABASE_URL variable is supported by Rails (ActiveRecord) out of the box. Some libraries (Sidekiq, AnyCable) also support REDIS_URL, but not all of them: for instance, Action Cable must be explicitly configured.
The second group contains some application-wide settings. For example, we define MALLOC_ARENA_MAX and WEB_CONCURRENCY to help us keep Ruby memory handling in check.
Also, we have the variables responsible for storing caches in Docker volumes (BOOTSNAP_CACHE_DIR, XDG_DATA_HOME, YARN_CACHE_FOLDER).
We use bootsnap to speed up application load time. We store its cache in the same volume as the Bundler data. This is because this cache mostly contains the gem data, and we want to make sure the cache is reset every time we drop the Bundler volume (for instance, during a Ruby version upgrade).
The final group of variables aim to improve the developer experience. HISTFILE: /usr/local/hist/.bash_history is the most significant here: it tells Bash to store its history in the specified location, thus making it persistent. The same goes for PSQL_HISTFILE and IRB_HISTFILE.
NOTE: You need to configure IRB to store history in the specified location. To do that, drop these lines into your .irbrc file:
IRB.conf[:HISTORY_FILE] = ENV["IRB_HISTFILE"] if ENV["IRB_HISTFILE"]

Finally, EDITOR: vi is used, for example, by the rails credentials:edit command to manage credentials files.
And with that, the only lines in this service we’ve yet to cover are:
stdin_open: true
tty: true

These lines make this service interactive, that is, they provide a TTY. We need this, for example, to run the Rails console or Bash within a container.
This is the same as running a Docker container with the -it option.
rails
The rails server is our default backend service. The only thing it overrides is the command to execute:
rails:
<<: *backend
command: bundle exec rails

This service is meant for executing all the commands needed in development (rails db:migrate, rspec, etc.).
web
The web service is meant for launching a web server. It defines the exposed ports and the required dependencies to run the app itself. The sidekiq dependency ensures the background job processor is running alongside the web server. The Vite dev server dependency is commented out by default: usually, you don’t need to run a dev server all the time, and when you need it, you can run dip up vite for that.
vite
The Vite service runs the Vite development server for JS/CSS bundling with hot module replacement. The key setting is VITE_RUBY_HOST: 0.0.0.0, which makes the Vite dev server accessible from outside the container (it runs on localhost by default). The service exposes port 3036.
Health checks
When running common Rails commands such as db:migrate, we want to ensure that the DB is up and ready to accept connections. How can we tell Docker Compose to wait until a dependent service is ready? We can use health checks!
You’ve probably noticed that our depends_on definition isn’t just a list of services:
backend:
# ...
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
postgres:
# ...
healthcheck:
test: pg_isready -U postgres -h 127.0.0.1
interval: 5s
redis:
# ...
healthcheck:
test: redis-cli ping
interval: 1s
timeout: 3s
retries: 30
dip.yml
Finally, we reach the upper layer of our configuration: the Dip configuration file, dip.yml. This file is the primary entrypoint to your development environment; it includes a provisioning script, available commands, and the Docker Compose configuration:
version: '7.1'
# Define default environment variables to pass
# to Docker Compose
# environment:
# RAILS_ENV: development
compose:
files:
- .dockerdev/compose.yml
project_name: example_demo
interaction:
# This command spins up a Rails container with the required dependencies (such as databases),
# and opens a terminal within it.
runner:
description: Open a Bash shell within a Rails container (with dependencies up)
service: rails
command: /bin/bash
# Run a Rails container without any dependent services (useful for non-Rails scripts)
bash:
description: Run an arbitrary script within a container (or open a shell without deps)
service: rails
command: /bin/bash
compose_run_options: [ no-deps ]
# A shortcut to run Bundler commands
bundle:
description: Run Bundler commands
service: rails
command: bundle
compose_run_options: [ no-deps ]
# A shortcut to run RSpec
rspec:
description: Run RSpec commands
service: rails
command: bundle exec rspec
rails:
description: Run Rails commands
service: rails
command: bundle exec rails
subcommands:
s:
description: Run Rails server at http://localhost:3000
service: web
compose:
run_options: [service-ports, use-aliases]
yarn:
description: Run Yarn commands
service: rails
command: yarn
compose_run_options: [ no-deps ]
ruby-lsp:
description: Run Ruby LSP
service: rails
command: bundle exec ruby-lsp
compose_run_options: [ service-ports, no-deps ]
psql:
description: Run Postgres psql console
service: postgres
default_args: example_demo_development
command: psql -h postgres -U postgres
'redis-cli':
description: Run Redis console
service: redis
command: redis-cli -h redis
claude:
description: Run Claude CLI
service: rails
command: claude --dangerously-skip-permissions
provision:
- '[[ "$RESET_DOCKER" == "true" ]] && echo "Re-creating the Docker env from scratch..." && dip compose down --volumes || echo "Re-provisioning the Docker env..."'
- dip compose up -d postgres redis
- (test -f .dockerdev/.claude.json) || (cp .dockerdev/.claude.json.example .dockerdev/.claude.json)
- dip bundle check || dip bundle install
- dip rails db:prepare
- dip rails db:test:prepare
- dip yarn
Let me explain some bits of this in further detail.
First, the compose section:
compose:
files:
- .dockerdev/compose.yml
project_name: example_demo
Here we specify the path to our Compose configuration (.dockerdev/compose.yml). This way, we can run dip from the project root, and the correct configuration will be picked up.
The project_name is important: if we don’t specify it, the name of the folder containing the compose.yml file would be used (“dockerdev”), which could lead to collisions between different projects.
The rails command is also worth some additional attention:
rails:
description: Run Rails commands
service: rails
command: bundle exec rails
subcommands:
s:
description: Run Rails server at http://localhost:3000
service: web
compose:
run_options: [service-ports, use-aliases]
By default, the dip rails command would call bundle exec rails within a Rails container. However, we use the subcommand feature of Dip here to treat dip rails s differently:
- We use the web service, not rails (so, the deps are up).
- We expose the service ports (3000 in our case).
- We also enable network aliases, so other services can access this container via the web hostname.
Under the hood, this will result in the following Docker Compose command:
docker compose run --rm --service-ports --use-aliases web
Note that it uses run, and not up. This difference makes our server terminal-accessible. For example, this means that we can attach a debugger and use it without any problems (with the up command, the terminal is non-interactive).
LSP commands
The ruby-lsp command allows you to run a language server inside the container, making it possible to connect your editor’s LSP client to the containerized environment. The service-ports option ensures the LSP port is exposed to the host:
ruby-lsp:
description: Run Ruby LSP
service: rails
command: bundle exec ruby-lsp
compose_run_options: [ service-ports, no-deps ]
When the Ruby on Whales generator discovers the ruby-lsp gem in your project’s dependencies, it automatically includes this command in the dip.yml file and adds a .dockerdev/ruby-lsp executable file:
#!/bin/bash
cd "$(dirname "$0")/.."
dip ruby-lsp "$@"
The executable above is a simple proxy used by your editor’s LSP extension to manage an LSP server within Docker. For example, when using Zed, you can specify a custom binary path for Ruby LSP like this:
// .zed/settings.json
{
"languages": {
"Ruby": {
"language_servers": ["ruby-lsp", "!solargraph", "!rubocop", "..."]
}
},
"lsp": {
"ruby-lsp": {
"binary": {
"path": ".dockerdev/ruby-lsp",
"arguments": ["stdio"]
}
}
}
}Note that the provided LSP setup in combination with ${PWD}-based paths makes it possible to navigate the application’s source code (via an LSP extension) but not the dependencies that are stored in a Docker volume. Is there a workaround? You can try storing Bundler dependencies in the project’s root (vendor/cache, or .bundle/gems, or whatever), for example (this post provides an example configuration for Neovim).
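One possible shape of that workaround (a sketch with assumed names: /app as the in-container mount point, vendor/bundle as the in-project gem directory):

```yaml
# Store gems inside the project tree instead of a named volume, so a host-side
# editor (and its LSP client) can read the dependencies' sources:
backend:
  environment:
    BUNDLE_PATH: /app/vendor/bundle   # instead of the default /usr/local/bundle
  volumes:
    - .:/app:cached                   # gems now live on the host under vendor/bundle
```

The trade-off is slower gem I/O on bind mounts (especially on macOS) and the loss of the “drop the volume to reset gems” workflow described earlier.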
Claude CLI command
The claude command runs Claude Code CLI inside the container with --dangerously-skip-permissions to allow autonomous operation. This is only included when Claude CLI support is enabled in the generator.
Provisioning
The provision section has been designed for idempotent re-provisioning:
provision:
- '[[ "$RESET_DOCKER" == "true" ]] && echo "Re-creating the Docker env from scratch..." && dip compose down --volumes || echo "Re-provisioning the Docker env..."'
- dip compose up -d postgres redis
- (test -f .dockerdev/.claude.json) || (cp .dockerdev/.claude.json.example .dockerdev/.claude.json)
- dip bundle check || dip bundle install
- dip rails db:prepare
- dip rails db:test:prepare
- dip yarn
The first line checks for the RESET_DOCKER environment variable. When set to "true" (e.g., RESET_DOCKER=true dip provision), it will tear down all volumes and start fresh. Otherwise, it just re-provisions on top of the existing state.
The .claude.json.example spell is used to bootstrap a configuration file for Claude (we keep it in the host system, Git-ignored).
We use bundle check || bundle install instead of always running bundle install — this skips the installation if all gems are already present, making re-provisioning faster.
The db:prepare command is used instead of bin/setup because it handles both creating the database (if it doesn’t exist) and running migrations (if it does). Combined with db:test:prepare, this covers both development and test environments.
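Most of the provision steps above follow the same “check first, act only if needed” guard pattern; here it is in plain shell, with a hypothetical marker file standing in for the expensive step (the real script guards with dip bundle check || dip bundle install):

```shell
# Hypothetical marker file standing in for "gems are installed"
marker=/tmp/row_provision_marker
# The slow step (touch) runs only when the check fails; re-runs are no-ops
test -f "$marker" || touch "$marker"
test -f "$marker" && echo "provisioned"
```

Running this twice produces the same result, which is exactly what makes dip provision safe to re-run at any time.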
Interactive provisioning
For most applications, building an image and setting up a database is not enough to start developing: beyond this, some kind of secrets, credentials, or .env files are required. Here, we’ve managed to use Dip to help new engineers quickly assemble all these scattered parts by providing an interactive provisioning experience.
Let’s consider, for example, that we need to put a .env.development.local file with some secret info and also configure RubyGems to download packages from a private registry (say, Sidekiq Pro). For this, I’ll write the following provision script:
# The command is extracted, so we can use it alone
configure_bundler:
command: |
(test -f .bundle/config && grep -q BUNDLE_ENTERPRISE__CONTRIBSYS__COM .bundle/config) || \
(echo "Sidekiq ent credentials: "; read -r creds; dip bundle config --local enterprise.contribsys.com $creds)
provision:
- (test -f .env.development.local) || (echo "\n\n ⚠️ .env.development.local file is missing\n\n"; exit 1)
- dip compose down --volumes
- dip configure_bundler
- (test -f config/database.yml) || (cp .dockerdev/database.yml.example config/database.yml)
- dip compose up -d postgres redis
- dip bash -c bin/setup
Below you can see a demonstration of this command in action:
An interactive Dip provisioning example
Services vs Docker for development
One more use case for standardizing the development setup is to make it possible to run multiple independent services locally. Let me quickly demonstrate how we do this with Dip. First, you need to dockerize each application (following this post). After that, we need to connect the apps to each other. How can we do this? With the help of Docker Compose external networks.
We add the following line to the dip.yml for each app:
# ...
provision:
# Make sure the named network exists
- docker network inspect my_project > /dev/null 2>&1 || \
docker network create my_project
# ...
Finally, we attach services to this network via aliases in the compose.yml files:
# service A: compose.yml
services:
ruby:
# ...
networks:
default:
project:
aliases:
- project-a
networks:
project:
external: true
name: my_project
# service B: compose.yml
services:
web:
# ...
environment:
# We can access the service A via its alias defined for the external network
SERVICE_URL: http://project-a:3000
networks:
project:
external: true
name: my_project
More dockerization resources
Did you realize that even this comprehensive blog post covers only the basic needs? Yeah, dockerizing a development environment for Ruby on Rails applications can be hard work. The good news is that having a champion on your team who’s eager to polish the dockerized development experience to perfection is enough to make everyone happy. (By the way, you are that champion if you’ve read till this line!)
And here are some additional resources to help you:
- System of a test provides guidance on running system tests fully within Docker
- Vite-alizing Rails tells more about using Vite with Rails and Docker
- Faster RuboCop runs for Rails provides an example of running linters outside of Docker (for simpler integration with IDEs, Git hooks, and such)
Happy sailing in the development environment containerization seas!
Acknowledgements
I would like to thank:
- Sergey Ponomarev for sharing performance tips and helping battle-test the initial dockerization attempts.
- Mikhail Merkushin for his work on Dip.
- Dmitriy Nemykin for helping with the major (v2) upgrade.
- Oliver Klee (Brain Gourmets) for continuous PRs with the configuration improvements and actualization.
Changelog
3.0.0 (2026-02-23)
- Reorganize sections: overview and quick start first, annotated configuration examples next, resources in the end.
- Replaced Webpacker with Vite.
- Added Claude Code and Ruby LSP integration examples.
- Dropped the production Dockerfile example (Rails and deployment platforms provide their own these days).
2.0.3 (2023-09-21)
- Upgrade Node.js installation script.
2.0.2 (2022-11-30)
- Use RUN --mount for caching packages between builds instead of manual cleanup.
2.0.1 (2022-03-22)
- Replace deprecated apt-key with gpg.
2.0.0 (2022-03-02)
- Major upgrade and new chapters.
1.1.4 (2021-10-12)
- Added tmp/pids to tmpfs (to deal with “A server is already running” errors).
1.1.3 (2021-03-30)
- Updated Dockerfile to mitigate MimeMagic licensing issues. See terraforming-rails#35
- Use a dictionary to organize environment variables. See terraforming-rails#6
1.1.2 (2021-02-26)
- Update dependency versions. See terraforming-rails#28
- Allow using comments in the Aptfile. See terraforming-rails#31
- Fix the path to the Aptfile inside the Dockerfile. See terraforming-rails#33
1.1.1 (2020-09-15)
- Use the .dockerdev directory as the build context instead of the project directory. See terraforming-rails#26 for details.
1.1.0 (2019-12-10)
- Change the base Ruby image to slim.
- Specify the Debian release for the Ruby version explicitly and upgrade to buster.
- Use the standard Bundler path (/usr/local/bundle) instead of /bundle.
- Use Docker Compose file format v2.4.
- Add health checks to the postgres and redis services.


