
Understanding Core Kubernetes Concepts & Components



‘Another way of doing cloud computing’ is how Kubernetes was described when it first came into the picture back in 2014. Until then, software was largely built as monolithic codebases, which made even the smallest change a challenging task for developers. Then came microservices - the practice of writing code in smaller, independent pieces, each running in its own virtual machine (VM). The next step was containerization, with its principle of ‘write once and run anywhere.’

Figure: Evolution of the software deployment process

Packaged with application code and its dependencies, containers are platform agnostic and far more lightweight than VMs. With the rise of containers came Kubernetes - a way to automate the deployment and operation of containerized software. Today, Kubernetes accounts for a massive shift in how applications are built and run across industries. With a tool this pivotal to the way the cloud functions, it pays to know the key components of Kubernetes and how they work with each other. That’s the focus of this post.

What is Kubernetes?

Kubernetes is an open-source orchestration tool for managing containerized applications. It automates the end-to-end process of creating, deploying, and operating containerized workloads and services. Kubernetes - also known as K8s, for the eight letters between ‘K’ and ‘s’ - is portable, extensible, and has extensive community support.

Kubernetes is a Greek word meaning helmsman or pilot.

In simpler terms, Kubernetes lets you group containers into clusters and manages those clusters efficiently for you. It removes the manual processes involved in container management, and it can operate across different environments: on-premises, public cloud, private cloud, or hybrid. The meteoric rise in its adoption is owed to its ability to scale rapidly - a necessity for cloud-native applications.

Significance of Kubernetes

Although containerization has simplified software development, it brings two fundamental challenges.

Scalability: You can deploy thousands of containers to run your software, but every new container adds operational complexity, making it harder to scale.

Reliability: To keep the app stable, you must manage containers efficiently with no downtime, and a new container must be ready to replace a failed one. Streamlining such a process manually is impractical.

Kubernetes solves these issues with automation and ensures your software's scalability, availability, and reliability. Eliminating manual tasks also lets you make better use of your time and workforce.

But how exactly does Kubernetes make life easier?

Why is Kubernetes important?

With Kubernetes, developers can focus on building applications without worrying about the environments they will run on. It achieves this by automating the majority of operations through simple built-in commands. It also runs health checks, restarting workloads that have stopped and terminating failed containers.

Before we dive into Kubernetes further, let’s quickly understand some of its key objects.

Pods: A Pod is a group of one or more containers with shared storage and network resources, together with a specification for how to run those containers. It is the smallest deployable computing unit in Kubernetes (a minimal manifest sketch follows this list).

Node: A Node is a worker machine, either a physical machine or a VM, that hosts a set of Pods.

Services: A Service is an abstraction over a group of Pods that work together, along with a policy for how to access them.

Namespaces: Namespaces divide cluster resources into non-overlapping groups so that multiple users or teams can work on the same cluster.

ConfigMaps and Secrets: ConfigMaps store non-confidential configuration data, while Secrets hold sensitive details such as passwords and tokens.
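
As a minimal sketch of what these objects look like in practice, here is a Pod manifest; the name, namespace, and image are purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # illustrative name
  namespace: default
spec:
  containers:
    - name: hello
      image: nginx:1.25  # illustrative image and tag

Applying this manifest (for example with kubectl apply -f) asks the cluster to run a single nginx container as one Pod.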

The key features of Kubernetes are listed below:

Service discovery and load balancing

Service discovery is the practice of enabling dynamic communication between services. It allows containers to be exposed as a network service via a DNS name and HTTP endpoints, without having to worry about the status of individual Pods. Pods can be created and destroyed as needed without affecting application stability. Using the same principle, Kubernetes also manages the traffic directed to containers: through load balancing, it ensures that Pods receive no more requests than they can handle.
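
A minimal sketch of a Service that selects Pods by label and load-balances traffic across them; the name, label, and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: web              # illustrative name
spec:
  selector:
    app: web             # traffic is routed to Pods carrying this label
  ports:
    - port: 80           # port exposed by the Service
      targetPort: 8080   # port the selected containers listen on

Inside the cluster, this Service is reachable at a stable DNS name such as web.default.svc.cluster.local, regardless of which Pods are currently backing it.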

Storage orchestration

Storage orchestration is the capability of Kubernetes to automate storage management while you focus on building your software. It manages the underlying storage infrastructure end to end, including provisioning storage and attaching it to Pods and containers. This feature helps you scale your applications reliably.
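
As a hedged sketch, an application typically requests storage declaratively through a PersistentVolumeClaim; the name and size below are illustrative, and the cluster's default StorageClass is assumed:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim       # illustrative name
spec:
  accessModes:
    - ReadWriteOnce      # volume can be mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi      # illustrative size

A Pod then mounts the claimed storage by referencing data-claim under its volumes section, and Kubernetes takes care of provisioning and attaching the backing volume.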

Automated rollouts and rollbacks

In Kubernetes, you declare your desired state, and container orchestration happens to match it. Whenever your developers make changes to the deployment template - configuration variables, labels, or code alterations - the system automatically rolls them out to reach the desired state. If a rollout leaves the application in a bad state, Kubernetes can roll the change back to the previous revision.
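
A minimal sketch of a Deployment configured for rolling updates; the name, labels, image, and replica count are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra Pod during a rollout
      maxUnavailable: 0      # keep all replicas serving while updating
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # changing this tag triggers a new rollout

Editing the image tag (or any field in the Pod template) and re-applying the manifest is enough to trigger a controlled rollout.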

Automatic bin packing

Kubernetes places your containers on nodes based on their resource requirements, such as CPU and RAM. It optimizes resource use by fitting containers onto nodes according to these requirements and other constraints, without compromising availability.
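
The scheduler's bin packing works from the resource requests and limits you declare on each container; here is a small sketch with illustrative names and values:

apiVersion: v1
kind: Pod
metadata:
  name: batch-worker           # illustrative name
spec:
  containers:
    - name: worker
      image: busybox:1.36      # illustrative image
      command: ["sleep", "3600"]
      resources:
        requests:              # what the scheduler uses to pick a node
          cpu: "250m"
          memory: "128Mi"
        limits:                # hard ceiling enforced at runtime
          cpu: "500m"
          memory: "256Mi"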

Self-healing

An application can comprise thousands of containers, which makes monitoring and managing their health difficult, and even a single failed container can make the app unstable. To avoid such issues, Kubernetes automates health checks and replaces a failed container with a new one. If a node on which containers are running malfunctions, the containers are rescheduled onto another healthy node.
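
Self-healing at the container level is typically driven by probes. A minimal sketch with a liveness probe, assuming an illustrative app that answers HTTP on port 80:

apiVersion: v1
kind: Pod
metadata:
  name: probed-app             # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25        # illustrative image
      livenessProbe:
        httpGet:
          path: /              # endpoint assumed to return 200 when healthy
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10      # kubelet restarts the container if this check keeps failing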

Secret and configuration management

Kubernetes lets you store sensitive data such as passwords, authorization tokens, and SSH keys as namespaced Secret objects, defined in YAML or JSON. This allows you to add secrets and application configuration to your deployments easily and securely.
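
A minimal Secret sketch; the name and value are illustrative, and stringData is used so the value can be written in plain text in the manifest:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # illustrative name
type: Opaque
stringData:
  password: s3cr3t       # illustrative value; stored base64-encoded in etcd

A container can then consume the value through an environment variable (env[].valueFrom.secretKeyRef) or by mounting the Secret as a volume; ConfigMaps are consumed the same way for non-sensitive settings.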

What are Kubernetes components?

Control Plane

The control plane manages the lifecycle of containers by instructing nodes on when and how to run workloads. It is essentially an orchestration layer responsible for maintaining the desired state of the Kubernetes cluster, using configuration and state information.

Figure: The Kubernetes control plane


It consists of the following five components:

  • Kube-apiserver

The Kubernetes API server is the control unit for your Kubernetes ecosystem. It receives REST requests to create, read, update, or delete API objects, validates them, and persists the resulting state in etcd. kube-apiserver handles both internal and external requests and scales horizontally by running additional instances.

  • Etcd

etcd is a consistent, distributed key-value store that holds the cluster state: data such as Pods, namespaces, API objects, and other service discovery details. In Kubernetes, only the API server reads from and writes to etcd directly. Because etcd supports watches, the API server is notified whenever configuration or state stored in etcd changes.

  • Kube-scheduler

Kube-scheduler is responsible for spotting newly created Pods that have no assigned node and finding an ideal node for each one. It matches Pods to nodes based on resource requests and available capacity, along with other specifications such as policy constraints.

  • Kube-controller-manager

The Kubernetes controller manager is a daemon that keeps the cluster state consistent by embedding the core control loops - non-terminating loops that watch the cluster's shared state. Through controllers such as the node controller, job controller, and service account controller, it moves the current state toward the desired state.

  • Cloud-controller-manager

This controller manager links your cluster to your cloud provider's API so that the cloud-specific parts of the desired state can be implemented. Like the API server, it can be scaled horizontally. It also separates the components that interact with the cloud platform from those that only interact with the cluster.

Nodes

Nodes are computing machines - either bare-metal servers or VMs in a cloud - that host and run your containerized applications. The control plane manages nodes, and a single node can carry one or more Pods. Each node offers a set of resources and services, and based on these resources the control plane schedules and deploys Pods across the Kubernetes cluster.

Nodes take instructions from the control plane on how to execute the workloads assigned to them. Each node comprises the three parts listed below:

  • Kubelet

Kubelet is an agent running on each node that passes instructions from the control plane to the container runtime. It ensures that Pods are created, terminated, or updated according to directions from the API server, and it monitors the health and status of Pods. Kubelet works from PodSpec objects - Pod specifications in YAML or JSON format - to carry out its instructions.

  • Kube-proxy

Kube-proxy is a communication bridge between Pods and nodes that maintains network rules. As a network proxy, it provides Kubernetes networking services to facilitate data transmission between the host, Pods, and the outside world. It can forward TCP, UDP, and SCTP streams across the cluster.

  • Container-runtime

The container runtime is the software layer that actually executes containerized applications on each node. A container engine such as Docker or rkt enables Kubernetes to run containers by pulling and launching container images. The runtime is what runs the Pods scheduled onto a node; to do so, it must comply with the Open Container Initiative, a set of standards and specifications for container technology.

Addons

Addons are tools or functionalities that extend the capabilities of Kubernetes using its own resources, such as DaemonSets and Deployments. They live in the kube-system namespace. While many addons exist, each serving a particular purpose, four common ones are described here:

  • DNS

Cluster DNS is a mandatory addon for all Kubernetes clusters; it manages DNS records for Services. You can set up the DNS system using the CoreDNS or Kube-DNS addons, which schedule DNS Pods and a Service with a static IP on the cluster.

  • Web UI


Figure: Weave GitOps UI


The Web UI is a dashboard for managing your Kubernetes clusters and troubleshooting the clusters or the applications running on them.

  • Container resource monitoring

This addon helps you monitor containers and register performance metrics in a database that can be accessed through the dashboard.

  • Cluster-level logging

This logging addon helps you understand issues within your application and take steps to debug them. Cluster-level logging saves container logs in a central store where you can search or browse them.

What are the types of Kubernetes Services?

As discussed earlier, a Service is an abstraction over a group of Pods together with a policy for how to access them. The need for Services arises from the fact that Pods are not permanent: they can be created and terminated dynamically during deployment, so the IP address assigned to an individual Pod is not a reliable endpoint. This is where Services help - they give you a stable way to reach Pods, selected by labels.

There are multiple Service types for getting traffic to the right Pods in a Kubernetes cluster. Let’s take a look.

  • ClusterIP

ClusterIP, the default Service type in Kubernetes, exposes the Service on a cluster-internal IP address, so applications can reach it without the Pods being exposed to external traffic; the Service is only accessible from within the cluster. You get this by assigning a ClusterIP (or letting one be assigned) when creating the Service.

  • NodePort

In this type, the Service is exposed on a specific port of every Node in the cluster, and traffic sent to that port is forwarded to the Service (a sketch follows this list).

  • LoadBalancer

A LoadBalancer exposes a Kubernetes Service externally, making it available over the internet through a single IP address. External users use that address to reach the Service, which routes traffic to the relevant Pods.

  • ExternalName

ExternalName is a Service type without selectors; instead of routing to Pods, it maps the Service to an external DNS name by returning a CNAME record.
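
A minimal NodePort sketch; the name, label, and ports are illustrative (nodePort must fall in the cluster's NodePort range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport     # illustrative name
spec:
  type: NodePort
  selector:
    app: web             # label of the Pods to expose
  ports:
    - port: 80           # cluster-internal Service port
      targetPort: 8080   # port the containers listen on
      nodePort: 30080    # illustrative port opened on every node

Traffic sent to any node's IP on port 30080 is then forwarded to the selected Pods.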

What are the types of Kubernetes Pods?

  • ReplicaSet

A ReplicaSet keeps your application available by maintaining a stable set of replica Pods running at all times. It creates new Pods from a Pod template based on the fields you specify, such as a selector that identifies the Pods it can acquire and the number of replicas to maintain.

  • Deployment

A Deployment manages the application lifecycle: you declare the desired state, and the Deployment controller works to change the current state into the desired state at a controlled rate.

  • DaemonSet

Figure: DaemonSet in Kubernetes


A DaemonSet is a workload resource that ensures all (or selected) Nodes run a copy of its Pod. Whenever a node is added to the cluster, a Pod is added to it; when a node is removed from the cluster, its Pod is removed as well.

  • StatefulSet

A StatefulSet is a workload API object that manages Pods created from the same spec but that are not interchangeable. It deploys and scales Pods while ensuring that each one keeps a unique, persistent identity.

  • Job and Cronjob

A Job creates one or more Pods and retries their execution until a specified number of them terminate successfully. The Job tracks every successful completion, and once the given count is reached, the Job is complete. The Pods created during execution are cleaned up when the Job is deleted. A Job that runs repeatedly on a schedule is called a CronJob (a minimal sketch follows).
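
A minimal CronJob sketch; the name, schedule, image, and command are illustrative:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report   # illustrative name
spec:
  schedule: "0 2 * * *"  # standard cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: busybox:1.36   # illustrative image
              command: ["sh", "-c", "echo generating report"]

Each time the schedule fires, the CronJob creates a Job from this template, and the Job in turn runs the Pod to completion.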

Conclusion

With its growing popularity and adoption, Kubernetes is becoming the go-to approach for deploying containerized applications. While it has made the transition to cloud environments smooth, operating it can become tedious, and its steep learning curve is often a challenge for developers. That is why GitOps emerged as a discipline for managing Kubernetes without getting lost in its complexities.

GitOps enables you to manage Kubernetes through open-source tools. The quickest and easiest way to do GitOps is through Weave GitOps and Weave GitOps Enterprise, an end-to-end GitOps platform powered by the popular CNCF project, Flux. This continuous operations tool enables you to deploy and manage Kubernetes clusters and applications from a single management console.

Get started with Weave GitOps or contact us to talk to one of our experts.
