Martian Kubernetes Kit: a smooth-sailing toolkit from our SRE team
We’ve been using Kubernetes since before it was a “thing”, and as of 2023, we believe it’s still underutilized. In fact, it’s the best (and basically the only real at-scale) solution for orchestrating Docker containers, or containers in general, once you’ve outgrown services like Heroku or Fly.io! That’s a bold claim, but it’s backed by our years of SRE experience. In this post, we’ll expand on that, and we’ll introduce a Kubernetes toolkit we already use and support for our clients, one that simplifies Kubernetes while highlighting its benefits.
This post was written with some specific folks in mind. That is, for the people who are able to make infrastructure and platform decisions on a project: CTOs, VPs of Engineering, Senior and Lead Engineers. Stay with us as we share the great efforts we’ve undertaken to make adopting Kubernetes easy and affordable for our clients, transforming it from a potential headache into just another tool in our toolbox, applicable for many cases.
We’ll also share our quickstart guide to determine if it’s worth moving your project to Kubernetes.
(Plus, in our next article we’ll delve deeper into the technical specifications of our toolkit.)
Well, why Kubernetes?
As a team that’s been using it since before its first full version release, we know that Kubernetes is the most advanced container orchestration solution out there: it has a vast ecosystem, it’s extremely flexible, and it can cover a huge number of edge cases.
Let’s speak from SRE experience that dates back nearly to our company’s founding:
- We’ve had clients running on AWS ECS managed by CloudFormation who needed custom changes to their architecture, and on Kubernetes, they could’ve saved tens of thousands of dollars when introducing those changes, have done so faster, and with better overall infrastructure and monitoring to boot. More on that just a bit later!
- Additionally, we’ve seen clients run 20–30 full enterprise-level preview apps simultaneously—and trust us, this is way less expensive and way more seamless if you’ve got the right Kubernetes setup.
- Or, take a product that has specific peak times and fluctuation loads—this is a case we’ve seen. In their situation and many others, Kubernetes offers a much more configurable, accommodating, (and often cheaper) solution, and migration would’ve been quite effortless with our toolkit. While it’s not the only option, moving to Kubernetes would’ve been particularly beneficial for them.
- We’ve also had clients with complex, scalable applications with a ton of moving parts, designed to be deployed on-premise for their own customers. The keys here are “complex” and “scalable”: we can simply ask them to spin up a default Kubernetes cluster and make the app work there. Kubernetes offers a far more predictable, uniform environment, and you’ll definitely have your manifests and Helm charts ready by the time you ship your application.
And there’s more to brag about: Kubernetes is really scalable and extendable. You can use it to host a relatively simple application (architecture-wise), or go all out and implement whatever custom requirements you have.
Sure, there are other container orchestrators and tools for deploying containers at scale, but Kubernetes is basically the only solution that orchestrates containers to a serious degree without being vendor-locked, and it’s readily available as a managed service on all modern clouds.
Essentially, Kubernetes is what modern cloud providers have been missing to make them really usable and comfortable.
And of course, you can run it on your own hardware, too!
Kubernetes, minus the sweat
We’ve helped a lot of projects, and over the years, we noticed that, as we were preparing our clients for the real world, we were actually doing a lot of similar things with Kubernetes for them.
Naturally, at some point, the thought pops into one’s head: wouldn’t it be nice if all of the clients we support referred to some core distribution for our Kubernetes configuration?
What if we had all the basic elements of infrastructure there, and then we kept their specific customizations in our client’s personal repos?
Well, we did it! Enter the configuration package that we’ve started to call the Martian Kubernetes Kit.
Martian Kubernetes Kit
So here’s the thing: with Kubernetes, even on managed solutions like AWS EKS, you get a bare, empty Kubernetes cluster. Nothing is there: no proper monitoring, no real log aggregation, no nothing. But we wanted a way for clients to get the full Kubernetes experience out of the box. That’s where Martian Kubernetes Kit comes in: it’s a turnkey solution that’s designed for them to start using Kubernetes smoothly, and it’s also constantly evolving to match modern demands.
So what does the Martian Kubernetes Kit, a proven, tested, and reliable solution, involve?
Common sense GitOps
At Evil Martians, we believe in infrastructure as code, which becomes even more crucial over time as your project evolves. Basically, if you’re not tracking your infrastructure components and the changes being made to them—you don’t know what infrastructure you have right now.
Thus, we opted to use Terragrunt/Terraform to configure the Cloud Provider (we’re focusing on AWS and GCP for now) and ArgoCD to rule the Kubernetes cluster contents itself.
The choice of Terraform was pretty obvious: it’s the most popular and widespread tool for managing infrastructure. Plus, outside engineers are far more likely to understand this config and tweak it with no issues.
Further, ArgoCD is a modern, open source continuous delivery tool with both essential and handy features like proper rollback management, and a UI that developers can navigate more easily. Also, to ensure we can upgrade the entire configuration when needed, we use ArgoCD to manage the whole set of components on a Kubernetes cluster.
Now, if you were doing all this with Kubernetes out of the box, you’d have to manually configure this stuff yourself; with Martian Kubernetes Kit, not so.
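To make this concrete, here’s a minimal sketch of what an ArgoCD Application manifest looks like. The repository URL, path, and application name are placeholders for illustration, not our actual configuration:

```yaml
# A minimal ArgoCD Application: it tells ArgoCD to keep the cluster
# in sync with the manifests stored in a Git repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app             # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infra.git  # placeholder repo
    path: apps/my-app
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With automated sync enabled, any change merged into the repository gets applied to the cluster, and any manual drift is reverted, so Git stays the single source of truth.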
Economic open source monitoring
Do you remember when we said that one of our clients could’ve saved tens of thousands of dollars with some custom changes? In their case, a significant part of that cost could’ve been cut just by using a different monitoring solution.
Yes, we now have services like DataDog, and if your business model permits it—use it, it’s amazing. But have you seen those memes where the cost of DataDog can be significantly larger than the cost of the rest of your infrastructure?
Or, in terms of monitoring and usability, take AWS CloudWatch. Yes, it’s out there and available, but it’s so cumbersome to configure and use that we have yet to encounter a single client who relies on it.
Yet, literally every component prepared to run on Kubernetes provides metrics in a Prometheus format, and so, the Kubernetes ecosystem provides Martian Kubernetes Kit with a great monitoring solution: Prometheus + Grafana.
The amount of data even a default Prometheus setup can gather from a cluster is already outstanding compared to what people usually view with AWS CloudWatch—and we’ve pushed that further still.
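As an illustration of how little glue this ecosystem needs (this is not our exact setup): with the Prometheus Operator, a single ServiceMonitor resource is enough to start scraping an application’s metrics. The label selector and port name below are placeholders:

```yaml
# Tells the Prometheus Operator to scrape any Service labeled
# app: my-app on its "metrics" port every 30 seconds.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: my-app        # hypothetical app label
  endpoints:
    - port: metrics      # named Service port exposing /metrics
      path: /metrics
      interval: 30s
```

Because most Kubernetes-ready components already expose Prometheus metrics, this kind of declaration is usually all that stands between a fresh deployment and a populated Grafana dashboard.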
Cheap log aggregation
Besides the above, you also need log aggregation. Our toolkit includes this by default, with the nice and accessible Grafana Loki. The interface for exploring logs conveniently lives in the same Grafana instance as the monitoring and metrics views. And, as a bonus, it usually turns out to be cheaper than its counterparts, even the open source options.
Easy preview apps
Longing for an easy way to spin up a preview app like Heroku does? Well, our setup is prepared exactly for that. Just label a desired pull request on GitHub and you’ll get a copy of your application running for your team to work with. Close the pull request or remove its label and that copy is gone. ArgoCD, which we use to manage the Kubernetes cluster’s content, will take care of things.
Earlier we mentioned that we have clients with up to 30 full enterprise-level preview apps running at the same time. Having this option is actually a huge factor when getting development to a proper speed—and a ton of our preparations were dedicated to making this feature as adoptable as possible for our clients.
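Our exact implementation is a topic for the follow-up article, but the label-driven flow can be sketched with ArgoCD’s ApplicationSet pull request generator. The organization, repository, and label name below are illustrative placeholders:

```yaml
# Creates (and deletes) a preview Application for every open GitHub
# pull request that carries the "preview" label.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: preview-apps
  namespace: argocd
spec:
  generators:
    - pullRequest:
        github:
          owner: example-org       # placeholder organization
          repo: example-app        # placeholder repository
          labels:
            - preview              # only labeled PRs get a preview app
        requeueAfterSeconds: 300   # how often to re-check open PRs
  template:
    metadata:
      name: "preview-{{number}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/example-app.git
        targetRevision: "{{head_sha}}"   # deploy the PR's commit
        path: deploy
      destination:
        server: https://kubernetes.default.svc
        namespace: "preview-{{number}}"
```

When the label is removed or the pull request is closed, the generated Application disappears, and ArgoCD prunes the preview environment along with it.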
Predictable cluster upgrades
We’ve had clients with outdated infrastructures that were difficult (or impossible) for them to keep up to date, and they lost a lot of time keeping up with this chore. But upon moving to Martian Kubernetes Kit, this problem is no more. Kubernetes and its ecosystem are evolving at a rapid pace, and we’re ready for it. By ensuring that Martian Kubernetes Kit is up-to-date, we’re confident that we can deliver 2–3 cluster upgrades per year for all our clients who use it.
The lightspeed-fast development of the Kubernetes ecosystem was one of the motivating factors behind our creation of the Martian Kubernetes Kit. We needed to be prepared, and with this solution—we are.
Transferability in its DNA
Another thing: even though this is just for our clients, everything we bring to our Martian Kubernetes Kit configuration is an open source tool. No reinventing the wheel.
We also kept in mind that, at any point in time, we may need to pass the project torch to someone else. So we built our toolkit in such a way that any other engineer could pick it up and continue to support the infrastructure efficiently.
In other words: we absolutely did not want to create our own vendor lock-in!
More under the hood
And that’s not all: the Martian Kubernetes Kit also has flexible secret management integration, CI/CD workflows prepared for numerous scenarios, certificate management, and quality-of-life tweaks throughout, like convenient scripts and Helm chart templates.
Martian Kubernetes Kit in action
Let’s go deeper and look at a real case study of a client: this company was hosting their product on AWS ECS managed with CloudFormation. They wanted to add AnyCable (with metrics), fix their deployment process (which is extremely hard and unwieldy to do with bare CloudFormation), plus get alerts about their database and their deployment process. All of that is included in our configuration out of the box.
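For instance, database alerts of the kind mentioned above can be declared as a PrometheusRule resource. The metric names here come from a typical postgres_exporter setup, and the expression and threshold are illustrative, not the client’s actual rules:

```yaml
# Fires when PostgreSQL connection usage stays above 90% for 5 minutes.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: database-alerts
  namespace: monitoring
spec:
  groups:
    - name: postgres
      rules:
        - alert: PostgresConnectionsHigh
          expr: sum(pg_stat_activity_count) / pg_settings_max_connections > 0.9
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "PostgreSQL is close to its connection limit"
```

Rules like this live in Git alongside the rest of the configuration, so alerting changes go through the same review and rollback process as everything else.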
Another client needed a scalable app, but even further, they knew nothing about Kubernetes before us. Now they’re using their cluster with no problems—even without us supporting them on a daily basis! We just provide periodic updates and the rare fix.
And, in general, most of the clients we’ve seen do lack capacity to keep their infrastructure up-to-date; this is always left for the last moment when there is already some problem. Our solution makes it easier to manage multiple clusters, keeping updates nice and predictable.
As promised, up next, our quickstart guide: is Kubernetes right for you?
Should we stay or should we go?
We know that not all projects are fated to use Kubernetes, and we don’t just put it everywhere. But in the cases where it does fit, you can see big benefits. It’s a considered choice.
Frankly speaking, the major sign that a Kubernetes-switch is a good move is that you need something more than you have right now. Maybe that’s better autoscaling, lower infrastructure costs, fewer limitations, the ability to better integrate microservices or third party products (like our own AnyCable or imgproxy), a custom database—whatever!
Sure, you can probably achieve the same custom dream with your current setup, but in many cases, the cost of doing so will be far larger as you try to bend a particular platform to your newfound desire. You seriously have to weigh the long-term cost/benefit tradeoff of going down such a path. Kubernetes is on the table.
The next steps
Stay tuned! We’ll be sharing an article that will delve deeper into the technical decisions we made while creating the Martian Kubernetes Kit. You’ll basically see a full-scale example to help decide how you want to manage your Kubernetes infrastructure.
We’ll walk you through the whole journey:
- Spinning up the cloud infrastructure and organizing your code
- Properly applying the GitOps approach to the cluster
- The essential components that bare Kubernetes clusters lack (and why you need them to be happy)
- Integrating your application deployment process into our newly-created GitOps flow
- Finally, how to immediately move to having automated preview instances
We know that the Martian Kubernetes Kit proves that Kubernetes is not some impossible quantum drive science—it’s very much within your grasp to move to it.
If your curiosity is already piqued, feel free to learn more about our SRE services and reach out to us now. We’ve been using Kubernetes since before it was what it is today, and we’re ready to use that experience to help you decide if it’s the right time for your project to adopt it—our team is on standby! Get in touch with us!