Why should you switch to Docker and Kubernetes?

7 min read

If you have not given Docker and Kubernetes a chance yet, please give them a try. They will save you sleepless nights and many of the production issues that come with running an application at scale. You won't have to worry about your infrastructure being under- or over-utilized, or about spending late nights and weekends on production deployment change windows. You won't have to be dragged back to work every time something breaks in production. You won't have to panic when someone announces a new micro-service for the next release, or when a developer insists a feature works in his development environment but not in production. And you won't have to dread the post-release nightmare of issues that eat up your weekends.

Kubernetes and Docker radically simplify the task of building, deploying, and maintaining distributed systems. They came to life at giants like Google and carry decades of engineering maturity and experience. You won't regret using them.

It is said that any problem in computing can be solved by adding a level of abstraction and indirection. The history of computing clearly showcases the abstractions added generation after generation, from programming languages to run-time engines, from virtualization to cloud infrastructure. All of these exist to make the development and management of complex applications easy and scalable, so that teams can focus on what matters for the business. In the current era of modern application development, one of the most vital abstractions is the container, together with orchestration engines like Kubernetes. The Cloud Native Computing Foundation and the amazing, thriving community around it are living proof of this new-age abstraction. Today, Docker and Kubernetes are used in small to large production-grade applications in almost every industry, from simple micro-service applications to large machine learning clusters.

Docker is an open-source container run-time that unlocks the potential of your organization by giving developers and IT the freedom to build, manage, and secure business-critical applications without fear of technology or infrastructure lock-in. Kubernetes is an open-source orchestrator for deploying containerized applications. Together they power cloud-native applications everywhere, from a cluster of Raspberry Pi computers to the large cross-region clusters of internet-scale companies.

But, what is driving this containerization culture and how can your team benefit from it?

Modern applications are nothing but a number of small services collaborating in real time to serve customers. These services stack upon each other to offer the holistic whole we call an application. When we open our favorite food delivery or taxi-hailing app, a ton of small services work together to give us a unified user experience. And in the current era of computing, they have to be available 24x7 and are expected to deliver the same performance all the time, irrespective of the number of users or the peak holiday season. In computing terms, these are our reliable, fault-tolerant, scalable, resilient, reactive distributed systems. Let's see how abstractions like Docker containers and Kubernetes help build them.

Today's competitive business trends demand far more delivery velocity than in the past, both in the number of features that can be shipped at a time and in the time it takes to ship them.

Gone are the days when a few hours of downtime for an application upgrade was practically acceptable. Even the simplest of applications, like office suites, are moving away from traditional delivery to an always-online presence with new features rolling out every other day. Delivery teams are shrinking, and so are release cycles. Across all industries, the critical difference between competitors boils down to agility: how fast one can innovate and deliver. Docker containers and the Kubernetes engine provide the tooling necessary to build immutable, declarative, self-healing infrastructure that radically shortens delivery time and increases the resiliency and reliability of services.

Docker and Kubernetes strongly adhere to immutable infrastructure, unlike traditional methods of deployment where the production state is largely unpredictable with respect to the underlying services. Instead, declarative templates describe a production instance that can be deployed, replicated, cloned, and distributed at any time, in any place. With the advent of cloud platforms, creating, distributing, and scaling such infrastructure is now a matter of a couple of minutes. Because the infrastructure is immutable and declared in templates, its state is always predictable. You can declare hundreds of services, their version numbers, network perimeters, replica counts, resource limits, configuration, and so on with a few lines of expressions. Pushing a new release, or rolling back to an earlier one, is then just a matter of applying a different template file to the cluster. The whole DevOps pipeline, including source control, testing, deployment, scaling, and self-healing, can be expressed in the same templates. With immutable infrastructure, you do not have to spend countless hours ensuring system reliability or keep a sysadmin always on their toes for any misfortune: the templates are enough to instruct Kubernetes to restore and self-heal the state of the system after a failure or catastrophic event.
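To make this concrete, here is a minimal sketch of such a declarative template, a Kubernetes Deployment manifest. The service name and image are hypothetical; only the fields shown are needed to declare a versioned, replicated service:

```yaml
# deployment.yaml -- hypothetical service; name and image are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3                      # desired replica count, declared up front
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.4.2  # pinned, immutable image tag
          ports:
            - containerPort: 8080
```

Applying it with `kubectl apply -f deployment.yaml` converges the cluster to this declared state; pushing a new release means changing the image tag and applying again, and `kubectl rollout undo deployment/orders-service` rolls back to the previous revision.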

Existing business trends demand the ability to scale up the performance and availability of a system in a matter of hours, if not minutes, to serve new customers. It is equally desirable to be able to expand the product into new avenues, with new features and bigger teams, in a matter of days.

These scalability demands led to the culture of micro-services: decoupling product functionality into small chunks that work with each other to offer the end-to-end experience. This architectural movement allowed smaller teams to own individual services and deliver faster, without cross-team communication noise. It also made it possible to identify the slower parts of an application and scale them individually, without impacting other deployments. However, it came with a cost: deploying, discovering, delivering, monitoring, and distributing a large number of small services is non-trivial. Docker and Kubernetes help by decoupling services into individual containers that are orchestrated, discovered, and connected through abstractions like load balancers and API services. They also allow individual services to be scaled by dynamically allocating more resources to them. Kubernetes declarative templates and Docker's immutable containers make scaling a matter of minutes: you simply apply a new configuration. Cloud infrastructure further enhances this by integrating seamlessly with Kubernetes to commission, decommission, and optimize resource consumption in a couple of minutes.
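The load-balancer abstraction mentioned above is the Kubernetes Service. A sketch, with a hypothetical service name: the Service gives the pods a stable DNS name and virtual IP, and routes traffic to whichever replicas carry the matching label, however many there are at the moment:

```yaml
# service.yaml -- stable endpoint in front of a scalable set of pods
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders-service     # routes to any pod carrying this label
  ports:
    - port: 80              # port clients connect to
      targetPort: 8080      # port the container listens on
```

Scaling a single service independently is then one declarative change, e.g. `kubectl scale deployment/orders-service --replicas=10`, or editing the `replicas` field and re-applying the manifest; clients of the Service notice nothing.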

Kubernetes provides numerous abstractions for isolating the operating environments of service teams and Dev/QA/Stage infrastructure, aggregating smaller service endpoints behind service abstractions, co-locating cohesive services into single entities, and providing discovery, configuration, and reliable load balancing on top of them. Scalability is also helped by the reliability the Kubernetes engine has to offer: deployment and management no longer require a dedicated sysadmin and operations team each time, and a small team can handle the delivery of hundreds of services and products. A number of cloud vendors, including Amazon, Azure, Google, and DigitalOcean, have also started to offer managed Kubernetes services with financially backed SLAs, further increasing agility and scale.
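The environment-isolation abstraction is the namespace. A sketch of how Dev/QA/Stage separation can look, the environment names being illustrative: each namespace gets its own copies of the services, isolated from the others on the same cluster:

```yaml
# namespaces.yaml -- one isolated environment per team or stage
apiVersion: v1
kind: Namespace
metadata:
  name: qa
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

The same application manifests can then be deployed unchanged into each environment, e.g. `kubectl apply -f deployment.yaml -n qa`, so several versions of the stack can run side by side on shared infrastructure.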

In the current era, vendor lock-in means a loss to the business. Every effort is made to squeeze out every ounce of recurring cost and to partner with the cheapest reliable cloud vendor. The cloud ecosystem is also focused on offering pre-built services for every industry: machine learning engines, knowledge graphs, chat ecosystems, graph and database systems, and more. It is no longer acceptable to pay more, or settle for less, in the competitive cloud landscape.

Historically, application delivery started in dedicated enterprise data centers, where operations and the procurement of new infrastructure took weeks or months. It then moved to PaaS cloud services, which offered faster, more agile delivery but also meant developers had to learn each cloud's proprietary ways, with no option to move away. Docker containers and Kubernetes create an immutable, portable abstraction of the infrastructure and the corresponding services. This portable infrastructure does not require a new learning curve for each ecosystem; it takes a couple of hours to move from on-premise to a cloud, or from one cloud to another.

Almost all major cloud vendors offer a managed Kubernetes service today, along with an ecosystem for distributing and deploying containers. With the continued community effort of the Cloud Native Computing Foundation, vendors keep adding native support for cloud services that remain portable while efficiently utilizing each ecosystem.

Existing business demands agility and efficiency on every front. It is no longer acceptable to spend months on capacity estimation, carefully identifying optimized allocation paths. With the increased demand for agility and scale, it is also not uncommon for services to have unpredictable resource demands, with some over-utilized while others sit under-utilized.

Traditionally, capacity planning was a daunting task. Infrastructure was usually procured after rigorous benchmarks and careful examination of resource needs. Any expected business growth had to be planned months in advance in order to assess needs and procure new hardware, and the result was usually under-utilization or over-utilization of resources. Containers and Kubernetes provide tools to declaratively automate the distribution of applications across a cluster, ensuring an optimal level of utilization at all times. The efficiency gains come not only from recurring infrastructure costs but also from the operational cost of energy spent creating new infrastructure. The Kubernetes declarative API allows a new demo environment, performance cluster, or acceptance-testing environment to be created in a couple of minutes, and it is trivial to run multiple QA environments with different versions at the same time on the same infrastructure.
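The mechanism behind this automated distribution is per-container resource declarations, which the scheduler uses to pack workloads onto nodes. A sketch of the relevant fragment of a container spec, with illustrative values:

```yaml
# fragment of a pod/container spec -- values are illustrative
resources:
  requests:          # what the scheduler reserves when placing the pod
    cpu: "250m"      # a quarter of a CPU core
    memory: 128Mi
  limits:            # hard caps the container may not exceed
    cpu: "500m"
    memory: 256Mi
```

Because every container states what it needs up front, Kubernetes can bin-pack many services onto the same nodes instead of leaving each one its own over-provisioned machine, which is where the utilization gains come from.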

Containers and Kubernetes are the new-age tools that radically change the way applications are built, distributed, and deployed. They raise the velocity of delivery, the agility, and the scale of business operations while drastically lowering recurring costs. They are the biggest tools in the arsenal today for being future-ready.