Need of container orchestration tools
With the rise of microservices, the use of container technology has grown alongside it, because containers provide a lightweight host for small, independent applications such as microservices. As a result, a single application can easily span hundreds or thousands of containers. Managing all of these containers with hand-written scripts or self-made tools is difficult, so we use container orchestration tools such as Docker Swarm, Kubernetes, Red Hat OpenShift, and Amazon ECS.
In this blog, we will briefly discuss Kubernetes.
What is Kubernetes?
Kubernetes is a container orchestration tool used to automate the deployment, scaling, and management of containerized applications. It can manage Docker containers as well as containers from other runtimes. Kubernetes is also popularly known as K8s. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).
The main goal of Kubernetes is to provide a unified platform for managing containerized applications across different cloud providers and on-premises data centers. It achieves this by providing a set of APIs and tools for deploying, scaling, and managing containers.
Features of Kubernetes:
Automated deployment and scaling of containerized applications.
Load balancing and service discovery across containers.
High availability.
Scalability: the cluster can flexibly adapt to increasing or decreasing load.
Self-healing: failed containers are automatically restarted or replaced with new ones.
Disaster recovery.
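Several of these features, such as scaling and self-healing, are driven by the desired state you declare for your application. The sketch below expresses a minimal Deployment spec as a plain Python dict for illustration; in practice this would be written as YAML and applied with kubectl. The app name, image, and replica count are hypothetical placeholders.

```python
# A minimal Deployment spec, expressed as a Python dict for illustration.
# In real usage this is a YAML manifest; names and values here are made up.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "my-app"},
    "spec": {
        # Kubernetes keeps exactly this many pods running (scaling + self-healing)
        "replicas": 3,
        "selector": {"matchLabels": {"app": "my-app"}},
        "template": {
            "metadata": {"labels": {"app": "my-app"}},
            "spec": {
                "containers": [{
                    "name": "my-app",
                    "image": "nginx:1.25",
                    # A failing liveness probe causes the container to be restarted
                    "livenessProbe": {"httpGet": {"path": "/", "port": 80}},
                }]
            },
        },
    },
}

print(deployment["spec"]["replicas"])  # → 3, the desired pod count
```

The key idea is that you declare *what* you want (three healthy replicas), and Kubernetes continuously works to make the actual state match.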
Kubernetes Architecture
At a high level, a Kubernetes cluster consists of two types of nodes:
Master nodes
Worker nodes
Master node:
The master node, also known as the control plane, is responsible for managing and coordinating the overall Kubernetes cluster. It consists of several components:
a. Kubernetes API server: This component is the primary control plane component that exposes the Kubernetes API, which is used to manage and control the Kubernetes cluster.
b. etcd: This is a distributed key-value store that is used to store the configuration data and state of the Kubernetes cluster.
c. Kubernetes controller manager: This component manages the Kubernetes controllers, which are responsible for managing the state of various resources in the cluster.
d. Kubernetes scheduler: This component is responsible for scheduling workloads to run on the worker nodes.
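The controllers managed by the controller manager all follow the same pattern: compare the desired state (stored in etcd and read via the API server) with the actual state, and take steps to converge. The following is a toy, self-contained sketch of that reconcile loop; the function and pod names are illustrative, not part of any real Kubernetes API.

```python
# Toy sketch of a controller's reconcile loop: converge actual state
# (the list of running pods) toward desired state (a replica count).
# Pod names and the function itself are illustrative only.

def reconcile(desired_replicas: int, running_pods: list) -> list:
    """Return an updated pod list matching the desired replica count."""
    pods = list(running_pods)
    while len(pods) < desired_replicas:
        pods.append(f"pod-{len(pods)}")  # start a missing replica
    while len(pods) > desired_replicas:
        pods.pop()                       # remove a surplus replica
    return pods

pods = reconcile(3, ["pod-0"])  # scale up from 1 to 3 replicas
pods = reconcile(2, pods)       # scale down from 3 to 2 replicas
print(pods)                     # → ['pod-0', 'pod-1']
```

Real controllers run this loop continuously, which is why a deleted pod reappears on its own: the next reconciliation notices the gap and recreates it.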
Worker nodes:
The worker nodes are the machines in the Kubernetes cluster where the containerized applications are deployed and run. Each worker node runs several components, including:
a. kubelet: This is the primary node agent that runs on each worker node and is responsible for managing and running containers on the node.
b. kube-proxy (Kubernetes proxy): This component is responsible for managing network traffic to and from the containers running on the node.
c. Container runtime: This is the software that runs the containerized applications on the node. Kubernetes supports a variety of container runtimes, including Docker, containerd, and CRI-O.
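To build intuition for what kube-proxy does conceptually, the sketch below distributes incoming requests across a Service's pod endpoints using simple round-robin. This is only one possible strategy (real kube-proxy modes such as iptables or IPVS behave differently in detail), and the endpoint addresses are made up.

```python
# Conceptual sketch of traffic distribution across a Service's endpoints.
# Round-robin is used here for simplicity; addresses are hypothetical.
from itertools import cycle

endpoints = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
next_endpoint = cycle(endpoints)

def route(request_id: int) -> str:
    """Pick the next backend endpoint for an incoming request."""
    return next(next_endpoint)

print([route(i) for i in range(4)])
# → ['10.0.0.1:8080', '10.0.0.2:8080', '10.0.0.3:8080', '10.0.0.1:8080']
```

The important point is that clients talk to one stable Service address while traffic is spread across whatever pods currently back it.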
Key Concepts in Kubernetes
Before we dive into the practical aspects of Kubernetes, let's first get familiar with some of the key concepts:
Pods: The smallest deployable unit in Kubernetes. A pod represents a single instance of a running process and can hold one or more tightly coupled containers.
Services: An abstraction that defines a set of pods and how to access them.
ReplicaSets: Used to ensure that a specified number of replicas of a pod are running at all times.
Deployments: A higher-level abstraction that manages ReplicaSets and allows rolling updates and rollbacks.
Nodes: The physical or virtual machines that make up the Kubernetes cluster and on which the workloads run.
Cluster: A collection of nodes that run containerized applications and are managed by Kubernetes.
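These concepts fit together through labels: a Service does not list pods by name, it selects them by matching labels. The sketch below models that matching in plain Python; the names and labels are hypothetical, chosen only to show the mechanism.

```python
# Illustrative sketch of how a Service finds its pods: the Service's
# label selector is matched against each pod's labels. All names are made up.

service_selector = {"app": "my-app"}

pods = [
    {"name": "my-app-1", "labels": {"app": "my-app"}},
    {"name": "my-app-2", "labels": {"app": "my-app"}},
    {"name": "other-1",  "labels": {"app": "other"}},
]

def select(selector: dict, pods: list) -> list:
    """Return names of pods whose labels contain every selector key/value."""
    return [p["name"] for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

print(select(service_selector, pods))  # → ['my-app-1', 'my-app-2']
```

This is also why ReplicaSets and Deployments work seamlessly with Services: new pods created during scaling or rolling updates carry the same labels, so the Service picks them up automatically.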