Reusable development containers with Docker Compose and Dip
Run and test your code in multiple Docker environments with minimal effort while keeping Docker Compose files under control. Spend less time wrangling YAML and drop into a container of choice from any host folder with one simple command. See examples for Ruby, Node.js, or Erlang, and adapt them to your stack.
Disclaimer: This article is regularly updated with the latest recommendations; take a look at the Changelog section.
It all started with a new Mac. As a polyglot developer who works on commercial projects and maintains about a dozen popular open source libraries, I need to make sure that Ruby, Node.js, Go, and Erlang can co-exist on my machine with minimal hassle. I also need a way to switch easily between different contexts and runtimes: project A might be locked to Ruby 2.6.6, while library B needs to work on edge (3.0), legacy (2.5), and alternative implementations (say, jruby).
So you need environment managers: rvm, nvm, rbenv, pipenv, virtualenv, asdf, gvm… The list of acronyms just gets longer and longer. And each of them brings another “minor” configuration to your operating system until your echo $PATH doesn’t fit on a screen and a new terminal tab takes 5 seconds to load.
Challenge accepted
Instead of dragging everything onto my host operating system, why can’t I just run whichever versions of whatever in isolation? Containers are great for that, and our team has been using dockerized environments for complex projects that require multiple services since the dawn of Docker.
So, I only installed Git, Docker, and Dip on my new computer to see how productive I can be with a barebones system setup.
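For reference, the host setup really is that small: Dip itself is distributed as a Ruby gem, so on macOS the system Ruby is enough to install it (check the project’s README for other installation options, such as a Homebrew formula):

# Git and Docker come from their usual installers.
# Dip is a small Ruby gem; installation options may vary, see its README.
gem install dip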
Dipping into Docker
Docker and Docker Compose are great tools, but they can quickly lead to configuration fatigue.
Also, there are just so many terminal commands one can memorize without constantly consulting the documentation. Of course, you can set up aliases for everything, but how is that fundamentally different from polluting your .bashrc or .zshrc with all the extra configuration? Dragging a full-featured Docker-for-development configuration with Dockerfile-s and docker-compose.yml-s into every project, no matter how big or small it is, also sounds like overkill.
Is there a better way? Luckily, my colleague Misha Merkushin has developed a great tool called Dip that stands for Docker Interaction Process. It allows you to create a separate configuration file called dip.yml that controls how you run different services from your docker-compose configuration. Think of it as a list of configurable shortcuts that allow you to do:
$ dip ruby:latest
# Instead of:
$ docker-compose -f ~/dev/shared/docker-compose.yml run --rm ruby-latest bash
…to land in a Linux environment configured to run the edge version of Ruby, with your source code folder from the host already mounted inside the container.
Two YAMLs instead of twenty
One of Dip’s coolest features is that it’s going to look for that fancy dip.yml that defines Docker Compose shortcuts everywhere up the file tree, starting from your PWD. That means you can have one dip.yml and one docker-compose.yml in your home folder and store all your common configurations there, without having to copy-paste boilerplate YAMLs from project to project. Let’s try it. First, create the files:
cd $HOME
touch dip.yml
mkdir .dip && touch .dip/global-compose.yml
Then put this inside dip.yml:
# ~/dip.yml
version: "5.0"

compose:
  files:
    - ./.dip/global-compose.yml
  project_name: shared_dip_env

interaction:
  ruby: &ruby
    description: Open Ruby service terminal
    service: ruby
    command: /bin/bash
    subcommands:
      server:
        description: Open Ruby service terminal with ports exposed (9292 -> 19292, 3000 -> 13000, 8080 -> 18080)
        compose:
          run_options: [service-ports]

  jruby:
    <<: *ruby
    service: jruby

  "ruby:latest":
    <<: *ruby
    service: ruby-latest

  psql:
    description: Run psql console
    service: postgres
    command: psql -h postgres -U postgres

  createdb:
    description: Run PostgreSQL createdb command
    service: postgres
    command: createdb -h postgres -U postgres

  "redis-cli":
    description: Run Redis console
    service: redis
    command: redis-cli -h redis
Think of every key inside the interaction mapping as an alias that replaces docker-compose flags. The service sub-key defines which Docker Compose service to run, and command is the argument that you would normally pass to docker-compose run. Now let’s see that mighty Docker Compose file!
# ~/.dip/global-compose.yml
version: "2.4"

services:
  # Current stable Ruby
  ruby: &ruby
    command: bash
    image: ruby:2.7
    volumes:
      # That's all the magic!
      - ${PWD}:/${PWD}:cached
      - bundler_data:/usr/local/bundle
      - history:/usr/local/hist
      # I also mount different configuration files
      # for better DX
      - ./.bashrc:/root/.bashrc:ro
      - ./.irbrc:/root/.irbrc:ro
      - ./.pryrc:/root/.pryrc:ro
    environment:
      DATABASE_URL: postgres://postgres:postgres@postgres:5432
      REDIS_URL: redis://redis:6379/
      HISTFILE: /usr/local/hist/.bash_history
      LANG: C.UTF-8
      PROMPT_DIRTRIM: 2
      PS1: '[\W]\! '
      # Plays nice with gemfiles/*.gemfile files for CI
      BUNDLE_GEMFILE: ${BUNDLE_GEMFILE:-Gemfile}
    # And that's the second part of the spell
    working_dir: ${PWD}
    # Specify frequently used ports to expose (9292 for Puma, 3000 for Rails).
    # Use `dip ruby server` to run a container with ports exposed.
    # Note that we "prefix" the ports with "1", so 9292 will be available at 19292 on the host machine.
    ports:
      - 19292:9292
      - 13000:3000
      - 18080:8080
    tmpfs:
      - /tmp

  # Alternative Ruby
  jruby:
    <<: *ruby
    image: jruby:latest
    volumes:
      - ${PWD}:/${PWD}:cached
      - bundler_jruby:/usr/local/bundle
      - history:/usr/local/hist
      - ./.bashrc:/root/.bashrc:ro
      - ./.irbrc:/root/.irbrc:ro
      - ./.pryrc:/root/.pryrc:ro

  # Edge Ruby
  ruby-latest:
    <<: *ruby
    image: rubocophq/ruby-snapshot:latest
    volumes:
      - ${PWD}:/${PWD}:cached
      - bundler_data_edge:/usr/local/bundle
      - history:/usr/local/hist
      - ./.bashrc:/root/.bashrc:ro
      - ./.irbrc:/root/.irbrc:ro
      - ./.pryrc:/root/.pryrc:ro

  # Current flavor of PostgreSQL
  postgres:
    image: postgres:11.7
    volumes:
      - history:/usr/local/hist
      - ./.psqlrc:/root/.psqlrc:ro
      - postgres:/var/lib/postgresql/data
    environment:
      PSQL_HISTFILE: /usr/local/hist/.psql_history
      POSTGRES_PASSWORD: postgres
      PGPASSWORD: postgres
    ports:
      - 5432

  # Current flavor of Redis
  redis:
    image: redis:5-alpine
    volumes:
      - redis:/data
    ports:
      - 6379
    healthcheck:
      test: redis-cli ping
      interval: 1s
      timeout: 3s
      retries: 30

# Volumes to avoid rebuilding dependencies every time you run your projects!
volumes:
  postgres:
  redis:
  bundler_data:
  bundler_jruby:
  bundler_data_edge:
  history:
Whenever you start using Docker volumes, you face the unavoidable mind melt of coming up with the right paths on the host. Strangely, online examples and tutorials often miss a sacred piece of Unix knowledge that allows you to stop traversing the file tree in your head once and for all: the PWD environment variable that always evaluates to… yes, you’re right, the current working directory.

Armed with this knowledge, you can store your docker-compose.yml wherever you want (and not just in the project root) and be sure that the ${PWD}:/${PWD}:cached spell will mount your current folder inside the container, no matter what WORKDIR instruction you have in your Dockerfile (and you might not even have access to that Dockerfile if you are using base images as I do in my example).
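To make that concrete, here is a small sketch with a hypothetical project path:

# On the host:
cd ~/dev/my_gem     # PWD is /Users/me/dev/my_gem
dip ruby            # start the ruby service from the shared compose file

# Inside the container, the same absolute path is mounted and used as the
# working directory, so you land right in your project:
pwd                 # => /Users/me/dev/my_gem
ls                  # => your project files, straight from the host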
Using multiple services in a shared Docker Compose file means I can develop libraries that depend on PostgreSQL or Redis: all I need is to use the DATABASE_URL and REDIS_URL environment variables in my code. For example:
# Launch PostgreSQL in the background
dip up -d postgres
# Create a database. createdb is a shortcut defined in `dip.yml`.
dip createdb my_library_db
# Run psql
dip psql
# And, for example, run tests
# `dip ruby` already runs bash, so just provide `-c` as an extra argument
dip ruby -c "bundle exec rspec"
Databases “live” within the same Docker network as other containers since we’re using the same docker-compose.yml.
If you want to run a web server, such as Puma, you can use the server subcommand to expose ports to the host system:
$ dip ruby server -c "bundle exec puma"
Puma starting in single mode...
...
* Listening on http://0.0.0.0:9292
The web server will be accessible at http://localhost:19292 (we configured port-forwarding to “prefix” host ports with “1”).
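A quick way to verify the forwarding from the host (assuming curl is available and the server from the snippet above is still running):

# 9292 inside the container is published as 19292 on the host
curl -I http://localhost:19292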
Free bonus: integration with VS Code
If you’re a VS Code user and want to use the power of IntelliSense, you can combine this approach with Remote Containers: just run dip up -d ruby and attach to a running container!
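As a rough sketch of the flow (the exact container name depends on Docker Compose’s naming conventions, so treat it as an assumption):

# Start the service in the background...
dip up -d ruby
# ...then, in VS Code, run "Remote-Containers: Attach to Running Container..."
# and pick the container derived from the shared project name,
# e.g. something like shared_dip_env_ruby_1.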
Not just Ruby: Node.js example with Docsify
Let’s take a look at an example beyond Ruby: running Docsify documentation servers.
Docsify is a JavaScript/Node.js documentation site generator. I use it for all of my open-source projects. It requires Node.js and the docsify-cli package to be installed. But we promised not to install anything besides Docker, remember? Let’s pack it into a container!
First, we declare a base Node service in our ~/.dip/global-compose.yml:
# ~/.dip/global-compose.yml
services:
  # ...
  node: &node
    image: node:14
    volumes:
      - ${PWD}:/${PWD}:cached
      # Where to store global packages
      - npm_data:${NPM_CONFIG_PREFIX}
      - history:/usr/local/hist
      - ./.bashrc:/root/.bashrc:ro
    environment:
      NPM_CONFIG_PREFIX: ${NPM_CONFIG_PREFIX}
      HISTFILE: /usr/local/hist/.bash_history
      PROMPT_DIRTRIM: 2
      PS1: '[\W]\! '
    working_dir: ${PWD}
    tmpfs:
      - /tmp
It’s recommended to keep global dependencies in a non-root user directory. Also, we want to make sure we “cache” these packages by putting them into a volume.
We can define the env var (NPM_CONFIG_PREFIX) in the Dip config:
# dip.yml
environment:
  NPM_CONFIG_PREFIX: /home/node/.npm-global
Since we want to run a Docsify server to access a documentation website, we need to expose ports. Let’s define a separate service for that and also define a command to run a server:
services:
  # ...
  node: &node # ...

  docsify:
    <<: *node
    working_dir: ${NPM_CONFIG_PREFIX}/bin
    command: docsify serve ${PWD}/docs -p 5000 --livereload-port 55729
    ports:
      - 5000:5000
      - 55729:55729
To install the docsify-cli package globally, we should run the following command:
dip compose run node npm i docsify-cli -g
We can simplify the command a bit if we define the node shortcut in our dip.yml:
# ~/dip.yml
interaction:
  # ...
  node:
    description: Open Node service terminal
    service: node
Now we can type fewer characters: dip node npm i docsify-cli -g
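If you want to double-check that the prefix is picked up inside the container, something like this should print the path configured in dip.yml:

dip node npm config get prefix   # => /home/node/.npm-global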
To run a Docsify server, we just need to invoke dip up docsify in the project’s folder.
Erlang example: Keeping build artifacts
The final example I’d like to share is from the world of compiled languages—let’s talk some Erlang!
As before, we define a service in our ~/.dip/global-compose.yml and the corresponding shortcut in the dip.yml:
# ~/.dip/global-compose.yml
services:
  # ...
  erlang: &erlang
    image: erlang:23
    volumes:
      - ${PWD}:/${PWD}:cached
      - rebar_cache:/rebar_data
      - history:/usr/local/hist
      - ./.bashrc:/root/.bashrc:ro
    environment:
      REBAR_CACHE_DIR: /rebar_data/.cache
      REBAR_GLOBAL_CONFIG_DIR: /rebar_data/.config
      REBAR_BASE_DIR: /rebar_data/.project-cache${PWD}
      HISTFILE: /usr/local/hist/.bash_history
      PROMPT_DIRTRIM: 2
      PS1: '[\W]\! '
    working_dir: ${PWD}
    tmpfs:
      - /tmp
# ~/dip.yml
interaction:
  # ...
  erl:
    description: Open Erlang service terminal
    service: erlang
    command: /bin/bash
We can also use the PWD trick from above to store dependencies and build files:
REBAR_BASE_DIR: /rebar_data/.project-cache${PWD}
That changes the default _build location to the one within the mounted volume (and ${PWD} ensures we have no collisions with other projects). It helps us speed up compilation by not writing to the host (which is especially useful for macOS users).
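To see what that expands to, here is a quick sketch with a hypothetical project path (note that rebar_cache, like npm_data above, is a named volume and also has to be declared under the top-level volumes: key of the compose file):

# Suppose the project lives at /Users/me/dev/my_app on the host.
# Inside the container, rebar3 then uses
#   REBAR_BASE_DIR=/rebar_data/.project-cache/Users/me/dev/my_app
# so build artifacts land in the rebar_cache volume instead of ./_build.
dip erl -c "rebar3 compile"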
The final trick: Multiple compose files
If your global-compose.yml gets too fat, you can break it up into several files and group your services by their nature. Dip will take care of going through all of them to find the right service whenever you run your dip commands.
# dip.yml
compose:
  files:
    - ./.dip/docker-compose.base.yml
    - ./.dip/docker-compose.databases.yml
    - ./.dip/docker-compose.ruby.yml
    - ./.dip/docker-compose.node.yml
    - ./.dip/docker-compose.erlang.yml
  project_name: shared_dip_env
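The shortcuts defined in the interaction section keep working unchanged as long as each service can still be found in one of the listed files. If your version of dip supports it, dip ls is a handy way to confirm which shortcuts were picked up after the split:

# List the commands dip resolved from dip.yml
# (availability of `ls` may depend on your dip version)
dip ls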
That’s it! The full example setup can be found in this gist. Feel free to use it and share your feedback or more examples by tweeting at us.
P.S. I have to admit that my initial plan of not installing anything on a local machine failed: I gave up and ran brew install ruby to run irb quickly. I don’t use it that often though.
P.P.S. Recently, I got access to GitHub Codespaces. I still haven’t figured out all the details, but it looks like it could become my first choice for library development in the future, and keeping the environment setup on a local machine will not be necessary anymore, except for situations when you have to work offline (do we ever?).
Changelog
1.1.0 (2021-02-01)
- Added dip ruby server subcommand.