MRSK: hot deployment tool to watch—or a total game changer?
The world of deployment and container management tools has seen a bright new contender enter the fold, and the hype train is chugging along at full speed. So, will MRSK change the game and make Docker container deployment dead simple? Let’s find out together! In this article, the SRE pros at Evil Martians attempt an objective analysis of the promises, applications, and potential of MRSK as well as its potential pitfalls.
TL;DR: If you want to know the results of our analysis, which projects will benefit the most from starting or moving to MRSK, who should switch to MRSK, and who should not—click here. Otherwise, read on!
To kick things off, a brief survey and history: Docker turned 10 in 2023, and it initially addressed many issues crucial for shipping applications: portability, consistency, and isolation. Yet, we still lacked a solution to manage Docker containers at scale. Thus, new instruments started to appear: Kubernetes was born and became the “planet scale” open source solution in the room. Meanwhile, popular cloud providers fought for users with their own services, like Amazon ECS and Google Cloud Run. And the Ansible, Chef, and Puppet communities published a number of public roles and recipes of varying quality and complexity that allowed deploying containers across a number of servers. Others, like Nomad and Docker Swarm, turned out to be a bit niche, but are still mature, alive, and kicking.
But all these tools have their limitations (more on those later), and, well, there’s always room for a shining new star in the sky. And here it is: MRSK.
MRSK aims to make container deployment as seamless and easy for a newcomer as possible. The underlying concept behind this tool is heavily indebted to old school Capistrano, a fact which its author readily highlights.
MRSK was specifically designed to be an imperative tool, which doesn’t hide complex logic under the hood. Accordingly, it’s easier to adopt MRSK from scratch, rather than, say, a declarative container orchestrator with state reconciliation logic. Thus, you’re required to read less documentation and write less configuration to kick off an app’s first deployment.
But, before we dive deep into the benefits of MRSK, we have to ask: do we really need another deployment tool? Perhaps one of the best ways to explore that question is by pointing out some of the gaps in preexisting solutions.
A deployment tool “vibe check”
So, on that note, let’s examine the various deployment tools and platforms, many of which you might already be familiar with, as well as the situations where they might hit some snags.
First, Heroku, Fly.io, and Render are excellent platforms that offer a pleasant developer experience. In fact, we generally do recommend these options instead of anything else because you can save a lot of time and effort upon initial project release (and for quite a lot of startups, they might be fine for years and years). But, as your project grows, your costs can start to skyrocket, or its technical requirements and limitations can begin to evolve beyond the platform.
Up next, Kubernetes (K8s). It’s the Evil Martian SRE team’s go-to recommendation for larger-scale projects which don’t fit the above solutions. It’s a brilliant container orchestration platform with a huge community and ecosystem around it. To make things even better, all the popular cloud providers (not only AWS and Google Cloud, but also DigitalOcean and many others) have a managed K8s solution. Still, it must be said, this is a complex system that can’t just be learned overnight. (Which is a point for MRSK and its simplicity, for sure!)
Nomad is also more suitable for experienced teams: it’s still a bit too complex and has a small community.
Mesos has a significantly steeper learning curve than even Kubernetes and this is coupled with a limited ecosystem.
Ansible, Chef, and Puppet are universal configuration management tools, and thus, are not tailored specifically for building and deploying containers.
In terms of complexity, Docker Swarm is simpler than many of the other contenders, but it’s not even close to offering the configurability of the full-scale container orchestrators. It still has to be managed itself, it doesn’t have a robust community, and it lacks mass adoption in public clouds. Ultimately, it’s a niche solution.
And, last but not least, Amazon ECS and Google Cloud Run are popular options heavily promoted by their respective vendors. That said, they impose strict vendor lock-in, necessitate vendor-specific knowledge, require additional (and complex or expensive) tooling, and they aren’t particularly affordable, either.
The benefits of MRSK
And that brings us back to MRSK and its benefits.
It’s simple and minimalistic. MRSK’s current attractiveness emerged mostly as a result of its simplicity. This single-purpose tool doesn’t require vast knowledge—a couple of virtual servers (Digital Ocean, AWS EC2, Linode, etc.) and a load balancer from your favorite cloud provider, and you’re set. There’s no need to manage the tool itself, and a lower barrier to entry for budding infrastructure experts to start containerized deployments means a lot for the evolution of the industry in general.
There’s no vendor lock-in. MRSK is a standalone CLI tool. You can easily run it from anywhere: a Docker container, your local machine, or a CI/CD workflow. It basically has one strict requirement: you must provide it with the servers it will use to deploy.
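And that is genuinely about all the configuration it asks for up front. A minimal sketch (every name, address, and credential below is a placeholder, not a default):

```yaml
# config/deploy.yml — a minimal sketch; every value here is a placeholder
service: myapp                # used to label the containers MRSK manages
image: myorg/myapp            # the image to build and push
servers:
  - 192.168.0.1               # hosts MRSK deploys to over SSH
  - 192.168.0.2
registry:
  username: myorg
  password:
    - MRSK_REGISTRY_PASSWORD  # read from the deployer's environment, not the file
```

With something like this in place, `mrsk setup` prepares the servers, and `mrsk deploy` builds, pushes, and runs the app.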
It’s a Ruby gem. Since MRSK was written in Ruby, you can easily fork and customize it at your convenience.
It offers YAML configuration merging to spin up a copy of your applications with ease. MRSK has this concept of “destinations”. We define the main configuration in one file, and then, can instantly create a number of additional ones, which only need to consist of the differences between those configs and the main setup. This is an excellent way to spin up an instant review app or a full-scale staging environment, migrate to a different place, or just create an additional installation of your project somewhere across the world, in case you need that.
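As a sketch of how destinations work (the host and values below are placeholders), a staging destination only lists its deltas from the main config:

```yaml
# config/deploy.staging.yml — merged on top of config/deploy.yml
servers:
  - 10.0.0.5          # a separate staging host (placeholder)
env:
  clear:
    RAILS_ENV: staging
```

Running `mrsk deploy -d staging` then deploys with the merged configuration.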
There are app health checks prior to actual app spin up. This is simple, helpful, and ensures you won’t just suddenly end up without a working application. (We personally recall how, previously, a similar Kubernetes feature changed our lives.)
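The health check is configurable; a fragment like the following (path and port are example values) tells MRSK what to probe before switching traffic to a new container:

```yaml
# config/deploy.yml (fragment) — path and port are example values
healthcheck:
  path: /up    # should return 200 when the app is ready to serve traffic
  port: 3000   # container port to probe
```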
MRSK provides an easy log-grepping tool if you don’t have log aggregation yet. For a loaded production application, a dedicated log aggregation service is obviously a must, but when you’re starting with a small setup, this can still be extremely useful.
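As an illustrative sketch (flag names follow the MRSK README at the time of writing; verify against your installed version with `mrsk app logs --help`):

```shell
# Grep recent logs from all app containers for errors
mrsk app logs --since 15m --grep 'ERROR'
```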
Fast and easy rollback. Enough said: in general, it’s easy to quickly roll back containers, and this is also the case with MRSK.
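A rollback amounts to booting an already-present container for a previous version. As a sketch (the version below is a placeholder; MRSK identifies versions by the git SHA of the deployed image):

```shell
# See which app containers (and versions) are still on the servers...
mrsk app containers
# ...then roll back to one of them
mrsk rollback 3c0ffee
```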
ENV file templating. MRSK uses the default Rails credentials tooling for environment variable and secret templating.
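In the deploy configuration itself, environment variables are split into clear and secret ones; a sketch (the variable names are examples):

```yaml
# config/deploy.yml (fragment) — variable names are examples
env:
  clear:
    RAILS_LOG_TO_STDOUT: "1"  # passed to containers as plain values
  secret:
    - RAILS_MASTER_KEY        # looked up in the deployer's environment at deploy time
    - DATABASE_PASSWORD       # never written into the config file itself
```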
A deploy lock mechanism. Want to be certain no-one is deploying at the same time as you, or just want to freeze deployments to initiate a maintenance window? With MRSK, you can.
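As a sketch (subcommand names per the MRSK README; check `mrsk lock --help` for your version):

```shell
# Take the deploy lock with a message explaining why...
mrsk lock acquire -m "Maintenance window until 18:00 UTC"
# ...and release it when you're done
mrsk lock release
```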
Running additional containers beside the actual app. A typical application may need to rely on additional services, like cache servers, databases, search and indexing services, and more. MRSK has a configuration section called “accessories”, which allows you to deploy those kinds of things, too. They will be deployed to designated servers, and they won’t be restarted each time you run `mrsk deploy`.
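A sketch of an accessory definition (the host, image, and volume path are placeholders):

```yaml
# config/deploy.yml (fragment) — host and image are placeholders
accessories:
  db:
    image: postgres:15
    host: 192.168.0.3                    # pinned to a designated server
    port: 5432
    env:
      secret:
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data    # persist data across container restarts
```

Accessories have their own lifecycle, managed with commands along the lines of `mrsk accessory reboot db`.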
Finally, MRSK allows us to easily execute four key application management tasks:
- On servers: run a command across all of them at once (e.g. to update a package)…
- …or run a command on the primary server only.
- In interactive sessions, you can run commands in a separate container: for example, running a custom Rails task to fix a problem caused by a bug. (In this case, it’s better to get a separate, isolated environment which won’t be affected by the app’s main container restarting, and which won’t interfere with the app process.)
- …or, to run commands in a currently running session (perhaps to catch that one, user-hated and slippery bug that evaded your QAs).
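In command form, those tasks look roughly like this (command and flag names follow the MRSK README at the time of writing and may differ in your version; verify with `mrsk app exec --help`):

```shell
# Run a one-off command in app containers on every server
mrsk app exec 'ruby -v'

# Run a command on the primary server only
mrsk app exec --primary 'bin/rails db:migrate'

# Interactive session in a fresh, isolated container
mrsk app exec -i 'bin/rails console'

# Attach to the currently running app container instead
mrsk app exec -i --reuse 'bin/rails console'
```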
The other side: points to consider
There’s no doubt that MRSK is rapidly increasing in popularity (in fact, Evil Martians has already received customer requests to help elevate infrastructures with MRSK). And the tool certainly has a ton of promise and benefits worth considering, but still, like with any new tool, we should do our homework.
37signals and DHH are currently in the process of migrating their applications from public clouds to bare-metal servers using MRSK as their primary container deployment tool; this is a much-hyped development, especially considering how much money they plan to save in the process. They’ve publicized this fact themselves, sharing a ton of articles describing the process and the achievements they’ve made along the way, like the ability to cut down deployment times to a fraction of what they were, implement custom super-fast VM provisioning for their needs with Chef as the main management tool, and to simplify their infrastructure configuration overall. 37signals certainly put forth a lot of effort accomplishing all of this.
MRSK is just one part of their stack, though, isn’t it? We all have unique infrastructures to work with, and we need to think about MRSK in the context of each of our unique projects. We need to be careful not to amalgamate their particular private infrastructure and expertise with MRSK itself. On this note, let’s quickly review some points you should consider before making a final decision about using MRSK or another solution.
First of all, any emerging tool needs to be treated with care, as there can be teething troubles involved in the process. Second, MRSK is designed for specific tasks and scenarios, and these may not include your infrastructure, requirement restrictions, or edge cases.
It pays to keep in mind that one of the responsibilities of a good SRE engineer is to meticulously investigate the benefits and downsides of any new tool, and only after due diligence is done, try to fit it to your particular situation—so, let’s do it.
Deployment process times
MRSK doesn’t offer any new secret tools to speed up deployment times; it simply delivers on two key promises:
- You’re always going to deploy to a server (virtual or bare metal) that you expect to be up and running.
- MRSK stays out of the way, allowing for a fast deployment process.
Under the hood, it relies on the same Docker build process, so deployment speed depends on: the speed of the server the build runs on, the structure (and quality) of your Dockerfile, the size of your app, Docker layer caching, how fast the network is between the machine where you run `mrsk deploy`, the build server, the Docker registry, and the app nodes themselves, and, finally, the startup speed of your app.
To achieve a faster deployment, you’ll have to pay attention to each of those steps, and these are applicable to any infrastructure configuration you can imagine.
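To illustrate just one of those levers, Dockerfile structure determines how much of a build Docker can cache. The sketch below (a generic Rails-ish example; the base image and paths are assumptions) copies the dependency manifests before the app code, so that code-only changes don’t invalidate the slow dependency-install layer:

```dockerfile
# Example only: ordering layers so dependency installation stays cached
FROM ruby:3.2-slim
WORKDIR /app

# Copy only the dependency manifests first...
COPY Gemfile Gemfile.lock ./
# ...so this slow step is re-run only when the Gemfile changes
RUN bundle install

# Copying the app code last means code edits rebuild only from here on
COPY . .

CMD ["bin/rails", "server", "-b", "0.0.0.0"]
```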
Again, MRSK stays out of the way, leaving it up to you, for better and for worse.
Migration to bare-metal servers
A lot of hype surrounds MRSK and the ability it affords to get off the cloud and onto bare metal servers.
MRSK is an excellent tool to untie your hands and allow you to deploy your early-stage app to any kind of server (be it an AWS EC2 instance, Digital Ocean droplet, a dedicated bare metal server you rented, or your Raspberry Pi).
But it isn’t a tool that facilitates migration from cloud to bare-metal servers. It has no additional tooling to aid you with bare-metal server management or solving bare-metal server problems. For that, you need an expert, or a team of experts, for your project.
And that brings us to the next point.
Managing the underlying infrastructure
Setting up a fleet of bare metal servers is a non-obvious task which requires specialized knowledge, preparation, planning, and configuration management long before the point where `mrsk deploy` is run.
Let’s say you wish to simplify things for yourself and choose to use virtual nodes instead. You still need to manage your cloud: provision nodes, configure network rules, and set up load balancers.
Further, a typical project consists of multiple components. It generally requires databases, a backup system, load balancers, an access management system, monitoring and log aggregation servers (if New Relic and its rivals are out of scope), and build servers and image registry servers (if you’re aiming for the ultimate deployment time reduction).
In short, managing all of these requires relevant experience.
To illustrate, while adding a PostgreSQL container to the accessories in your MRSK config can be an excellent solution for a demo or a review app, you can’t rely on this in production without a significant configuration effort.
Thus, it may still be wise to rely on managed database services modern clouds provide.
Monitoring and log aggregation
Each time you spin up a new infrastructure, simple or complex, you have to continuously ensure its operational status and stability at any given moment. Thus, your team should have a transparent monitoring solution of the entire setup and an easy way to gather and view log files across all instances of your application. Without those, issues can go undetected until they cause significant damage or downtime, which can be costly in terms of lost productivity, revenue, and reputation.
MRSK provides you with a neat log-grepping tool (we wish we had that back in the days of Capistrano!), which may be just enough for starters. But that’s it—so you’ll need a monitoring solution of your own.
For a small team with a limited number of servers, a free New Relic setup is an excellent option. It can cover your monitoring needs, and even provide you with a better log aggregation service. But it can be pricey the moment you step over the free-tier plan. In that case, to save money, you may need to consider running your own monitoring solution, like Prometheus with Grafana and ELK/Loki.
Who will benefit the most from MRSK?
MRSK is primarily promoted as a simple tool that’s easy to use and may help less-experienced teams enter the world of container-based infrastructures without relying solely on proprietary services; nevertheless, beginners should still take care here. On the flip side, teams who already have enough expertise to face all the technical challenges and tasks we’ve outlined above could make a weighed, well-calculated decision to move to MRSK.
So, should you switch?
- If you don’t want to go deep into the depths of admin work or deployment, and if Heroku, Fly.io, Render, or the like fits your case and you’re happy: stick with that solution.
- If you don’t like public clouds and prefer VPS or bare metal hosting: try MRSK.
- If you already have K8s and it’s comfortable for you, or if you have a very advanced setup: it’s advisable to leave things be for now.
- If your application has regular usage spikes and requires frequent manual or auto-scaling of the infrastructure: consider using Kubernetes, if you’re not already doing so.
- If you don’t like Kubernetes or find it too difficult: definitely investigate MRSK, it may fit your case.
MRSK is a new tool, and it has strong potential to grow into something bigger. That means new features, carving out its own niche, and pushing the community forward by helping “young” projects organize their own containerized infrastructures.
We hope we were able to shed some light on MRSK’s pros and cons—and if you have any infrastructure concerns, Evil Martians’ dedicated SRE team is on standby to help you sort out the various pros and cons that come with each option:
- We can consult on whether it’s worth migrating to bare-metal servers, and if so, when and how to do it.
- Our experienced Kubernetes pros are always ready to jump in and assist with any related issues.
- We can also help you understand if MRSK is the right choice for your project.
- If MRSK is the choice for you, our SRE experts will help make it happen.
- And, in general, we’re here to help with any of your project’s needs!
Don’t hesitate, reach out to us for an extraterrestrial consultation!