Why Use Kubernetes for Your Container Management?
Kubernetes is the most commonly used open source orchestration framework for containers. It’s used for automating deployment, scaling, and management of application containers and works with a range of container tools, including Docker.
But what makes Kubernetes so popular? And would it be useful to you? Let’s look at some of the benefits of Kubernetes.
A Kubernetes service is an abstraction that routes client requests for an application to a pod, itself an abstraction that groups one or more containers. A service fronts an application running in a set of replicated pods, while the replication level of those pods is managed independently of the service by a deployment (or, in older setups, a replication controller or replica set).
Labels match a service to its pods: a client request sent to the service is routed to a pod whose labels match the service's selector. In a distributed cluster, the pods for an application typically run on different nodes or servers. By default, a service is exposed only at an IP address internal to the cluster; this is the ClusterIP service type.
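As a minimal sketch, here is what such a service might look like, assuming the application's pods carry a hypothetical label `app: my-app` and listen on port 8080:

```yaml
# Hypothetical ClusterIP Service: routes requests on port 80
# to any pod whose labels match the selector below.
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: ClusterIP      # the default: reachable only inside the cluster
  selector:
    app: my-app        # labels match the service to its pods
  ports:
    - port: 80         # port the service exposes
      targetPort: 8080 # port the container listens on in each pod
```

Clients inside the cluster can then reach the application at `my-app-service:80`, and Kubernetes spreads the requests across the matching pods.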
A NodePort type of service exposes the service externally at a static port on each node in the cluster, in addition to the internal cluster IP. A LoadBalancer type of service goes one step further and exposes the service at an external load balancer's DNS name, in addition to the node ports and the cluster IP. A request made to the load balancer is thus routed to the application via a node and then the cluster IP.
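A LoadBalancer variant of the same hypothetical service differs only in its `type` (and, optionally, a pinned node port):

```yaml
# Hypothetical LoadBalancer Service: exposed at an external load
# balancer, at a port on every node, and at the internal cluster IP.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # optional; auto-assigned from 30000-32767 if omitted
```

Note that LoadBalancer services depend on the cloud provider (or an add-on such as MetalLB) to actually provision the external load balancer.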
For storage, Kubernetes makes use of volumes. These are abstractions associated with storage media, such as a directory on a host system, an AWS EBS volume, or a Git repository. A volume is mounted at a configured path in each container in a Kubernetes pod.
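A sketch of a pod that mounts a host directory into its container (the pod name, image, and paths here are placeholders):

```yaml
# Hypothetical pod mounting a hostPath volume into one container.
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html  # path inside the container
  volumes:
    - name: data
      hostPath:
        path: /var/data                     # directory on the host node
```

Other volume types, such as `awsElasticBlockStore` or a PersistentVolumeClaim, plug into the same `volumes`/`volumeMounts` structure.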
Kubernetes uses rolling updates for updating a running application to a newer version of a Docker image or a completely different Docker image. Instead of shutting down for maintenance, the application continues to serve client requests while being updated.
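The rolling-update behavior is configured on the deployment. In this sketch (names and image tags are illustrative), changing the image tag triggers a rolling update in which at most one pod is taken down at a time:

```yaml
# Hypothetical Deployment configured for rolling updates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the update
      maxSurge: 1         # at most one extra pod above the desired count
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.25  # bump this tag to roll out a new version
```

Because pods are replaced incrementally, the remaining replicas keep serving client traffic throughout the update.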
Kubernetes provides a flexible model for allocating computing resources, such as CPU and memory, based on configurable requests and limits. A request is the amount of a resource that a container is guaranteed, and a limit is the maximum amount of that resource the container may consume. Resource quotas may also be created to limit the total use of resources in a specific namespace.
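A sketch of both mechanisms, with hypothetical names and illustrative values:

```yaml
# Hypothetical pod with per-container resource requests and limits.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: web
      image: nginx
      resources:
        requests:
          cpu: 250m        # guaranteed a quarter of a CPU core
          memory: 128Mi
        limits:
          cpu: 500m        # throttled above half a core
          memory: 256Mi    # killed if it exceeds this
---
# Hypothetical ResourceQuota capping a namespace's total usage.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a        # placeholder namespace
spec:
  hard:
    requests.cpu: "4"
    limits.memory: 8Gi
```

The scheduler uses requests to decide which node has room for a pod, while limits are enforced at runtime on the node.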
Because application load fluctuates, a horizontal pod autoscaler can be used to provision extra pods when needed. A horizontal pod autoscaler is configured with a minimum and maximum number of pods within which to scale.
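A sketch of an autoscaler targeting the hypothetical `my-app` deployment, scaling between 2 and 10 pods based on average CPU utilization:

```yaml
# Hypothetical HorizontalPodAutoscaler for the my-app Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add pods when average CPU exceeds 70%
```

CPU-based autoscaling requires that the target containers declare CPU requests, since utilization is measured as a percentage of the requested amount.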
Kubernetes is also a natural fit for the microservices architecture, which overcomes many shortcomings of the monolithic architecture. While a single standalone service is not uncommon, services typically depend on one another. A classic Kubernetes example is a WordPress service that stores its data in a MySQL database service.
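Services discover each other by name through cluster DNS. In this fragment of a hypothetical WordPress deployment, the database host is simply the name of a MySQL service assumed to exist in the same namespace:

```yaml
# Fragment of a hypothetical WordPress pod spec: it reaches the
# database through the DNS name of a Service called "mysql".
containers:
  - name: wordpress
    image: wordpress
    env:
      - name: WORDPRESS_DB_HOST
        value: mysql   # resolves to the MySQL Service's cluster IP
```

Because the dependency is expressed as a service name rather than a pod IP, the MySQL pods can be rescheduled or scaled without reconfiguring WordPress.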
Kubernetes was originally developed by Google and is now maintained by the Cloud Native Computing Foundation, so it has reliable and trusted support. And because it’s open source, you can freely use Kubernetes to scale your container orchestration and achieve a more manageable application infrastructure.