How to leverage the benefits of trunk-based development using Kubernetes
This article shows why a business might need to adopt Kubernetes, outlines an approximate adoption roadmap, and describes the benefits such a project can provide.
You don’t simply adopt Kubernetes; you adopt it to solve a real pain point
One of the most widespread issues hampering business competitiveness today is slow software delivery. For example, here is what a conventional Git-based product development and deployment process looks like:
- The master repository lives on GitHub; new features are developed in separate branches, which are merged into a staging branch before each release
- The QA team then tests the code on the staging environment and, if the tests pass, the Ops engineers merge it into the master branch and deploy the new app version to production
- The whole procedure takes 3 to 7 hours depending on the app’s complexity, so there are one or two releases a day at the very best
- If a bug escapes testing and reaches production, the app crashes, so everything must be rolled back to the previous version
- The QA team then has to comb through the code to find the bug that caused the crash and ask the developers to fix it
- The developers, who have already started writing new code on top of the new app version, must pause or discard that work, as the bug fix might make it irrelevant
- Let’s assume you use AWS Elastic Beanstalk or Google App Engine to host your software delivery pipeline. In this case, provisioning the whole testing + staging + release pipeline takes around 90 minutes, plus another 45 to roll back if production crashes after the release.
Here is what the business will want to achieve in such a case:
- split the monolithic app into microservices to ease and speed up development
- deploy new features faster and more frequently
- roll back faster, if need be
- update the development process to minimize the need for rollbacks and make it more efficient
First things first: the process defines the tools
The conventional Git-based software delivery model requires creating a dynamic runtime environment to test every code branch. Using Elastic Beanstalk or Google App Engine for this purpose is slow and expensive. The ideal software delivery pipeline components must be:
- deployed automatically (preferably as containers)
- identical to the production environment
- run on spot instances to maximize cost-efficiency
Microservices meet all of these requirements, but using them demands a complete change of development paradigm. In the trunk-based development model, each feature gets its own short-lived branch that can be merged into master at any time, regardless of other features, and deployed separately at any time. This is why different microservices can be developed and updated independently of one another while the product remains functional at all times.
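The branch mechanics described above can be sketched with plain Git commands; the repository and branch names below are hypothetical:

```shell
# Trunk-based flow in miniature: a short-lived feature branch is merged
# into the trunk (master) as soon as it is ready, then deleted.
set -e
workdir=$(mktemp -d) && cd "$workdir"
git init -q repo && cd repo
git config user.email "dev@example.com"   # local identity so commits succeed
git config user.name  "Dev"
git checkout -q -b master                 # make the trunk branch explicit
git commit -q --allow-empty -m "initial commit"
git checkout -q -b feature/search         # branch lives for hours, not weeks
git commit -q --allow-empty -m "add search endpoint"
git checkout -q master
git merge -q --no-ff feature/search -m "merge feature/search"
git branch -q -d feature/search           # no long-lived branches remain
git log --oneline                         # the trunk now holds all the work
```

Because every feature merges into the trunk independently, any commit on master is a deployment candidate, and a rollback is just a redeploy of the previous tag.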
Standard development and Trunk-Based Development
More deployments for the deployment throne…
The trunk-based software development process adds updates to the master branch one after another and records the sequence, which is essential for finding the faulty code that broke the production environment. Rollbacks also become much smaller and faster this way. However, changing the deployment approach alone does not solve all the problems. Let’s assume your new deployments now take 30 minutes and rollbacks take 15, so you can deploy 4 or 5 times a day. That is better than before, yet not as good as we would want.
In addition, Elastic Beanstalk was not designed to support a microservice app architecture. To use microservices efficiently, you will inevitably need Docker containers and a way to orchestrate them. Docker-compose is also very convenient for local software development.
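As a rough sketch, a docker-compose file for local development might look like this (the service names, image tags, and credentials are made up for illustration):

```yaml
version: "3.8"
services:
  api:                      # hypothetical application service
    build: ./services/api
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
```

The same container images can later be pushed to a registry and deployed to a cluster, which keeps local and production environments close to identical.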
There are various container configuration management systems around:
- AWS ECS
- Docker Swarm
- Apache Mesos/Marathon
- Nomad
- Kubernetes
To be honest, Kubernetes is the hands-down winner here, and for good reason. Kubernetes is an open-source project originally developed by Google; it boasts a very strong community, in-depth documentation, and a rapid pace of platform development. Docker itself now ships with built-in Kubernetes support alongside Docker Swarm. Amazon ECS is good if you are not afraid of AWS vendor lock-in; Apache Mesos/Marathon was popular several years ago, but it lacks the same passionate community and is very different from all the other systems. HashiCorp Nomad is an interesting solution, yet it is designed to work with other HashiCorp products that you don’t necessarily need. In addition, Nomad namespaces are a paid feature, while Kubernetes lets you have as many as you want for free.
Kubernetes is hard to master, yes. It has a steep learning curve, no doubt, but all the major cloud vendors provide Kubernetes-as-a-Service. Depending on the size of your company, the complexity of your operations, and the resources devoted to the task, you can expect to fully move your software delivery pipelines to Kubernetes in under a year. For the sake of this article, we will assume you dedicate two mid-level DevOps engineers to the project part-time.
Starting to work with K8S
It is best to run a pilot and polish your workflows before an all-out transition of your software delivery to Kubernetes. Start small: use AWS EKS, Google GKE, or even kops if you wish. Document everything you do, both for future reference and to ease the inevitable cluster troubleshooting.
The most important things to test and master during a pilot are cert-manager, cluster-autoscaler, and integration with Prometheus and Grafana, Jenkins, Ansible, HashiCorp Vault, and other tools. You will grasp the best practices of rolling updates (where the production environment is updated pod by pod, without any interruption of the end-user experience) and learn how best to configure DNS and networking for your project. We promise, your journey to learn Kubernetes will not be dull.
AWS spot instances are the most cost-effective way to run your pilot Kubernetes cluster. Use the AWS Spot Instance Advisor to select the most appropriate instance types for your goals, and use kube-spot-termination-notice-handler to handle spot termination notices. Once you move your trunk-based development workflow to a Kubernetes cluster, you will see that instead of 3 to 7 hours to test and release a new product version, dynamic environment provisioning shrinks to 1 or 2 hours per pull request at most. These environments can be created simply by adding the appropriate webhooks to your GitHub repos: every pull request then spins up an independent dynamic environment in its own namespace for testing a separate feature. The Kubernetes Dashboard will help your developers debug their code if need be.
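As an illustration, the CI job triggered by such a webhook could create a namespace per pull request with a manifest along these lines (the name and labels are hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: pr-1234                         # one namespace per pull request
  labels:
    app.kubernetes.io/managed-by: ci
    preview/pull-request: "1234"        # lets CI find and clean it up later
```

All resources for that feature branch are then deployed into pr-1234, and the whole namespace is deleted once the pull request is closed.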
There are clusters for staging and clusters for production
Your pilot K8S cluster ecosystem will look like this:
- several master nodes in different AWS availability zones
- kops and Kubernetes 1.x
- the staging cluster runs on spot instances
- Amazon VPC with private subnet and a bastion host for security
- Prometheus + Grafana for gathering and visualizing metrics
- Datadog-agent for APM
- Dex + dex-k8s-authenticator to give your developers authenticated cluster access for fixing bugs in real time
Infrastructure as Code is another prerequisite for getting the most out of trunk-based development with Kubernetes. It ensures, for example, that the Nginx Ingress controller and Datadog agent versions are identical across your whole software delivery pipeline.
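One way to pin those versions in code is a helmfile; this sketch assumes a helmfile-based setup, and the chart versions shown are examples:

```yaml
repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx
  - name: datadog
    url: https://helm.datadoghq.com
releases:
  - name: ingress-nginx            # same pinned version on every cluster
    namespace: ingress-nginx
    chart: ingress-nginx/ingress-nginx
    version: 4.10.0                # example version, pin your own
  - name: datadog
    namespace: monitoring
    chart: datadog/datadog
    version: 3.60.0                # example version
```

Because the versions live in the repository, upgrading a controller becomes an ordinary reviewed pull request instead of a manual change on each cluster.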
Why else would you make such a long journey? It’s time to migrate your production to Kubernetes
Below is an approximate structure of your GitHub monorepo (you do develop your apps the Google way, don’t you?)
The root Jenkinsfile maps each microservice name to the GitHub directory containing its code. When a developer merges a pull request into master, a tag is created on GitHub and Jenkins deploys the required environment according to the preconfigured Jenkinsfile.
The helm/ directory contains Helm charts, each with two separate values files: one for staging and one for production. Should you need to deploy large numbers of Helm charts to a staging server, try Skaffold or any other tool of your choosing.
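The staging and production values files typically differ only in sizing and hostnames; the keys below are common chart values and hypothetical for your chart:

```yaml
# values-staging.yaml (hypothetical chart values)
replicaCount: 1
resources:
  requests:
    cpu: 100m
    memory: 128Mi
ingress:
  host: myapp.staging.example.com

# values-production.yaml (kept as a separate file)
replicaCount: 3
resources:
  requests:
    cpu: 500m
    memory: 512Mi
ingress:
  host: myapp.example.com
```

The pipeline then picks the right file, e.g. `helm upgrade --install myapp helm/myapp -f helm/myapp/values-staging.yaml`.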
In accordance with the twelve-factor app philosophy, each separate microservice in the production environment writes its logs to stdout, reads its secrets from Vault, and has a basic set of alerts (such as checking the number of active pods, the rate of 5xx errors, and Ingress controller latency).
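That basic alert set could be expressed as Prometheus alerting rules; the thresholds and the `checkout` deployment name below are illustrative:

```yaml
groups:
  - name: microservice-basics
    rules:
      - alert: TooFewPods
        # assumes kube-state-metrics is installed
        expr: kube_deployment_status_replicas_available{deployment="checkout"} < 2
        for: 5m
        labels:
          severity: warning
      - alert: High5xxRate
        # assumes the Nginx Ingress controller exposes its metrics
        expr: sum(rate(nginx_ingress_controller_requests{status=~"5.."}[5m])) > 1
        for: 10m
        labels:
          severity: critical
```

Shipping these rules with each chart means a new microservice arrives in production already covered by the baseline alerts.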
This approach finally lets you split the monolithic app into microservices and move from Elastic Beanstalk to a fully operational Kubernetes cluster.
Breaking the monolith into microservices. The Vigeland Park in Oslo
By using AWS CloudFront as a CDN, you can run canary deployments throughout the migration. Move your monolithic app one module at a time and test the results of each transition. This way, you will polish the process in just a few iterations and monitor the platform, workload, and other metrics before finally moving 100% of your workloads to your brand-new Kubernetes cluster.
Wrapping it up: Kubernetes provides immense benefits for your business
Once the after-migration dust settles, you will witness something along these lines:
- Average release time shortened from 90 to 30 minutes
- Number of daily releases growing from 1–2 to 15 (or even more!)
- Rollback time shortened from 45 minutes to 1–2 minutes (as simple as restarting the previous product version)
- New microservices can be easily added and released to the production environment
- Kubernetes provides centralized monitoring, logging, and secret management, and lets you manage your infrastructure as code
This is an approximate roadmap, and it can differ in your particular case, but the results will be just as impressive as described above. Should you have any more questions, the IT Svit team will be glad to answer them!