Kubernetes (K8s) is an open-source container orchestration platform, originally developed by Google, that automates the deployment, scaling, scheduling, and operation of application containers across clusters of machines. Kubernetes is widely used for large-scale web hosting and is an approachable, extensible way to run production software. It supports both automation and declarative configuration.
The main Kubernetes control-plane (master) components are:
- API Server
- Controller Manager
- Scheduler
- etcd (the cluster's key-value store)
The smallest unit in the Kubernetes object model is the Pod, the basic building block, which represents a single instance of an application.
Pods run on nodes, and a Kubernetes cluster needs at least one node.
The kubelet is a small agent that runs on every node and ensures the containers described in Pod specs are running. kube-proxy provides Kubernetes networking services on each node.
The container runtime is the underlying engine that actually runs the containers inside Pods.
Its other main components include persistent storage, Services, Ingress controllers, load balancers, DaemonSets, and Deployments.
Let’s understand the key features of Kubernetes.
The Deployment object in Kubernetes has built-in rollout and rollback support. It rolls out changes to update the running version of your app, but can also roll back those changes if anything goes wrong, all the while monitoring your app's health and keeping downtime low. During a rollout, Kubernetes checks the health of the new Pods to decide whether the rollout can proceed. If the new version isn't stable or functioning as expected, automated rollback works as an escape hatch. You can also manually undo a rollout and inspect the full rollout history.
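As a minimal sketch, a Deployment's rollout behavior is configured in its spec. The app name `web` and the image are hypothetical; the `strategy` and `revisionHistoryLimit` fields are what drive rolling updates and rollbacks:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical app name
spec:
  replicas: 3
  revisionHistoryLimit: 10   # how many old ReplicaSets to keep for rollbacks
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one Pod down during a rollout
      maxSurge: 1            # at most one extra Pod above the desired count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # changing this image triggers a new rollout
```

You can then watch or reverse a rollout with `kubectl rollout status deployment/web`, `kubectl rollout history deployment/web`, and `kubectl rollout undo deployment/web`.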
One of the best features of Kubernetes is self-healing: if a Pod or container goes down, Kubernetes starts a new one according to the restart policy. Unhealthy containers are detected and restored to the desired state. Node failures are also monitored, and workloads on failed nodes are rescheduled to bring the cluster back to the desired state. Although this self-correction happens mainly at the application layer, external tools let you monitor the health of components and infrastructure as well. The benefit is that the app keeps running without glitches or interruptions.
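A simple way to see the restart policy in action is a Pod whose container deliberately exits. This is an illustrative sketch (the Pod name and command are made up); with `restartPolicy: Always`, the kubelet restarts the container each time it fails:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: self-heal-demo
spec:
  restartPolicy: Always        # kubelet restarts the container whenever it exits
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 30; exit 1"]   # fails after 30s; gets restarted
```

For whole-Pod recovery (e.g. after a node failure), you would run the container under a Deployment instead, so its controller recreates lost Pods.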
Custom controllers are used with Custom Resources, which are extensions of the Kubernetes API not available in a default installation. A built-in controller tracks built-in resources, while a custom controller manages custom resources. If a programmer creates a custom resource that should do something, a custom controller is usually needed as well. When you introduce your own API into a project using a custom resource managed by a custom controller, you have created a true declarative API. Without a custom controller, a custom resource simply acts as a structured storage component. Custom controllers let you accomplish tasks you couldn't with the standard ones.
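A custom resource is introduced with a CustomResourceDefinition (CRD). Here is a minimal sketch defining a hypothetical `Backup` kind in a made-up `example.com` API group; a custom controller would watch objects of this kind and act on their `spec`:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com    # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string   # e.g. a cron expression the custom controller reads
```

Once applied, `kubectl get backups` works like any built-in resource, even before a controller exists.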
Autoscaling can be driven by multiple built-in and custom metrics. The Horizontal Pod Autoscaler (HPA) automatically scales the number of Pods in a replication controller, Deployment, ReplicaSet, or StatefulSet based on observed CPU utilization or other metrics. Related mechanisms handle the other dimensions: the Cluster Autoscaler adds new nodes to a cluster, and the Vertical Pod Autoscaler raises or lowers the resources assigned to individual Pods. This is especially helpful for scaling with traffic and within infrastructure limits. Workloads are scaled automatically, but the programmer must specify which metrics determine the number of Pods needed and the thresholds at which Pods are added or removed. Kubernetes then continuously monitors the chosen metrics and scales accordingly.
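A minimal HPA sketch, assuming a Deployment named `web` exists (the name and thresholds are illustrative), shows where those metrics and thresholds are declared:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # add Pods when average CPU exceeds 70%
```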
Kubernetes has an excellent probe mechanism. It performs liveness and readiness probes to make operations more reliable and resilient.
With a liveness probe, Kubernetes checks whether your app is alive or dead, and uses the result to decide when to restart a container. If the liveness probe fails, the container is handled according to its restart policy.
A readiness probe checks whether the app is ready to process and serve requests; Kubernetes only routes traffic to containers that report ready.
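Both probes are declared on the container spec. In this sketch the `/healthz` and `/ready` endpoints are hypothetical paths the app would have to expose:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    livenessProbe:             # failure restarts the container
      httpGet:
        path: /healthz         # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:            # failure removes the Pod from Service endpoints
      httpGet:
        path: /ready           # hypothetical readiness endpoint
        port: 80
      periodSeconds: 5
```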
Service discovery is the process of determining how to connect to a service. Pods require network connectivity to run reliably across clusters. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them. You can use labels and selectors for finer control, for managing massive clusters, and for easy identification.
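The Service object ties this together: a label selector picks the backend Pods, and the Service gets a stable virtual IP and DNS name. The names and ports here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web            # matches Pods carrying this label
  ports:
  - port: 80            # port on the Service's stable virtual IP / DNS name
    targetPort: 8080    # container port traffic is forwarded to
```

Inside the cluster, clients can then reach the Pods at `web-svc` (or its fully qualified DNS name) without knowing individual Pod IPs.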
This feature is helpful in distributing traffic. kube-proxy manages the virtual IPs used by Services and distributes load by picking a backend Pod, either at random or round-robin. More complex load distribution can be managed by Ingress, handled by a specialised controller running in the cluster, which can also take into account routing rules and vendor-specific parameters.
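An Ingress sketch, assuming a Service named `web-svc` and a hypothetical hostname, shows the kind of host- and path-based rules an Ingress controller evaluates:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.com           # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc       # Service the controller routes traffic to
            port:
              number: 80
```

Note that an Ingress controller (e.g. one deployed in the cluster) must be present for these rules to take effect.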
Feature gates are sets of key=value pairs that turn features on and off. A feature can be in one of three stages. Alpha means it may be buggy and is disabled by default; Beta means the feature is better tested and typically enabled by default; General Availability (GA) means the feature is always enabled and no longer needs a gate. Feature gates in Kubernetes help you disable and enable features as your requirements dictate.
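Gates can be set per component, for example via the `--feature-gates=SomeFeature=true` flag on control-plane binaries, or in a kubelet configuration file. A sketch of the latter (the gate name is only an example; the available gates depend on your Kubernetes version):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true   # example gate; availability varies by version
```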
Kubernetes storage may be backed by the cloud, but it is managed separately from compute and can be configured for large application databases, speed, and agility. Persistent volumes and related APIs allow Kubernetes to manage stateful applications and persistent storage.
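Applications typically request storage through a PersistentVolumeClaim, which the cluster satisfies from a StorageClass. The size and class name below are illustrative and depend on the cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce              # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi            # amount of storage requested
  storageClassName: standard   # hypothetical StorageClass; depends on the cluster
```

A Pod then mounts the claim by name under `spec.volumes`, decoupling the app from the underlying storage provider.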
Helm is the package manager for Kubernetes. It uses a packaging format called charts to help with deploying microservices and testing apps, and it is tremendously helpful for managing containerized applications through install, upgrade, and uninstall operations. Programmers can use preconfigured charts for easy deployment.
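Every chart starts with a `Chart.yaml` metadata file. A minimal sketch, with a made-up chart name:

```yaml
apiVersion: v2          # Helm 3 chart API
name: web-chart         # hypothetical chart name
description: A minimal chart skeleton
type: application
version: 0.1.0          # version of the chart itself
appVersion: "1.0.0"     # version of the app being packaged
```

With the chart's templates filled in, `helm install web ./web-chart` deploys it, `helm upgrade` applies changes, and `helm rollback` reverts to an earlier release.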
Apart from Google, some of the top companies that use Kubernetes include Slack, Shopify, and the makers of Pokemon Go. It is an ideal tool for companies that depend on predictable, fast, agile deployments. It takes fewer manual hours to accomplish tasks and helps teams efficiently run performant apps.