Kubernetes Cluster Architecture: Control Plane, Nodes, and Networking

A Kubernetes cluster is a highly structured environment divided into two primary logical roles: the Control Plane, which makes global decisions about the cluster, and the Nodes, where the actual application workloads reside. Understanding how these components interact is essential for managing a production-grade environment.

Control Plane: Brain of the Cluster

The Control Plane node (formerly referred to as the "master node") coordinates all cluster operations. It monitors the state of the system, handles pod scheduling, and serves as the primary gateway for administrative tasks.

Core Components

  1. API Server: The central communication hub. It exposes a RESTful interface (supporting GET, POST, PUT, etc.) and is the only component that communicates directly with the cluster store.

  2. etcd (Cluster Store): A consistent, highly available key-value store used to persist the state of all Kubernetes objects. The API Server is stateless; etcd provides the "memory" for the system.

  3. Scheduler: Responsible for watching the API Server for newly created Pods that have no assigned node. It selects the best node based on resource requirements (CPU, RAM), administrative policies, and constraints like Pod Affinity (keeping pods together) or Anti-Affinity (spreading pods apart).

  4. Controller Manager: Executes the control loops that maintain the desired state. For example, the ReplicaSet Controller ensures the correct number of pods are running by requesting the API Server to create or delete instances as needed.
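The Scheduler's inputs described above (resource requests, affinity rules) are declared directly in the Pod spec. A minimal sketch, assuming a hypothetical "app: web" label; the values are illustrative, not prescriptive:

```yaml
# Illustrative Pod spec fragment: the Scheduler evaluates the resource
# requests when picking a node, and the podAntiAffinity rule asks it to
# avoid nodes already running a Pod labeled app=web.
apiVersion: v1
kind: Pod
metadata:
  name: web-0            # hypothetical name
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname   # spread across distinct nodes
  containers:
    - name: web
      image: nginx:1.25
      resources:
        requests:
          cpu: "100m"      # what the Scheduler reserves on the chosen node
          memory: "128Mi"
```

The `requiredDuringScheduling...` form is a hard constraint; a `preferred...` variant exists when spreading is desirable but not mandatory.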

Administrative Interaction: kubectl

While not technically a Control Plane component itself, kubectl is the primary command-line tool administrators use to interact with the API Server. It retrieves cluster information and submits the declarative manifests that define your deployments.


Worker Nodes: Execution Layer

Nodes are the physical or virtual machines that provide the compute capacity for the cluster. They are responsible for running your application containers and maintaining local network reachability.

Core Node Components

  • Kubelet: An agent that runs on every node in the cluster (including the Control Plane). It monitors the API Server for Pods assigned to its node and manages their lifecycle, including executing health probes.

  • Kube-proxy: Implements the network rules on nodes. It handles the Service abstraction—routing traffic to the correct Pods and performing load balancing—typically via iptables or IPVS rules.

  • Container Runtime: The software responsible for pulling images and running containers. Kubernetes uses the Container Runtime Interface (CRI) to support various runtimes. While Docker was previously the standard, modern clusters utilize containerd or other CRI-compliant runtimes.
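The health probes mentioned above are configured per container and executed by the Kubelet on the Pod's node. A minimal sketch with a hypothetical Pod name; probe paths and timings are illustrative:

```yaml
# Illustrative Pod spec: the Kubelet on the assigned node performs this
# HTTP liveness probe and restarts the container if the probe fails.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app       # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /         # endpoint the Kubelet polls
          port: 80
        initialDelaySeconds: 5   # grace period after container start
        periodSeconds: 10        # probe interval
```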


Operational Workflow: From Deployment to Self-Healing

When you submit a deployment manifest via kubectl, the following sequence occurs:

  1. Submission: The API Server persists the deployment in etcd.

  2. Creation: The Controller Manager's deployment control loop sees the new Deployment and creates a ReplicaSet, which in turn asks the API Server to create the required number of Pod objects.

  3. Scheduling: The Scheduler identifies the new Pods, evaluates node resources, and assigns them to specific worker nodes.

  4. Execution: The Kubelets on those nodes detect the assignment, pull the necessary container images via the runtime, and start the Pods.
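A deployment manifest that would trigger the four steps above can be sketched as follows (names and image are illustrative):

```yaml
# Illustrative Deployment: "replicas: 3" is the desired state that the
# ReplicaSet controller reconciles against the actual Pod count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Applying this manifest with kubectl writes the Deployment to etcd via the API Server, after which the control loops and the Scheduler take over without further operator involvement.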

Self-Healing in Action: If a node fails, the Control Plane loses contact with that node's Kubelet. The Controller Manager detects that the actual number of running Pods has dropped below the desired count. It triggers the creation of new Pods, which the Scheduler then places on the remaining healthy nodes, restoring the desired state.


Kubernetes Networking Fundamentals

Kubernetes networking operates on a "flat" model where every Pod receives a unique IP address. To facilitate this, the system follows two strict rules:

  1. Pods can communicate with all other Pods across all nodes without Network Address Translation (NAT).
  2. Agents on a node (like the Kubelet) can communicate with all Pods on that same node.

Core Networking Scenarios

  • Intra-Pod Communication: Containers within the same Pod share a network namespace and communicate via localhost.

  • Inter-Pod (Same Node): Pods communicate via a software bridge on the node using their unique IP addresses.

  • Inter-Pod (Cross-Node): Communication requires Layer 2 or Layer 3 reachability between nodes. In environments where you do not control the underlying network, an Overlay Network is used to create a virtual layer 3 network across all nodes.

  • External Access: Managed by Services and implemented via the Kube-proxy, which routes external traffic to the internal Pod IPs.
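The intra-Pod scenario above can be sketched as a two-container Pod; since both containers share one network namespace, the sidecar reaches the server on localhost. Names and images are illustrative:

```yaml
# Illustrative multi-container Pod: both containers share a single
# network namespace, so "localhost:80" in the sidecar reaches nginx.
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo   # hypothetical name
spec:
  containers:
    - name: server
      image: nginx:1.25
    - name: sidecar
      image: curlimages/curl:8.8.0
      # Periodically polls the sibling container over the shared loopback.
      command: ["sh", "-c", "while true; do curl -s http://localhost:80/ >/dev/null; sleep 10; done"]
```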

Cluster Add-on Pods

Some essential services are provided by specialized pods called Add-ons.

  • DNS (CoreDNS): Provides service discovery within the cluster, allowing Pods to find each other via DNS names rather than volatile IP addresses.
  • Ingress Controllers: Advanced Layer 7 load balancers used for content-based routing.
  • Dashboard: A web-based UI for cluster administration.
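The DNS-based discovery CoreDNS provides is keyed off Service objects. A minimal sketch, assuming a hypothetical "backend" Service in the default namespace:

```yaml
# Illustrative Service: CoreDNS makes this reachable as
# backend.default.svc.cluster.local, regardless of Pod IP churn.
apiVersion: v1
kind: Service
metadata:
  name: backend          # hypothetical name
spec:
  selector:
    app: backend         # Pods carrying this label receive the traffic
  ports:
    - port: 80           # port the Service exposes
      targetPort: 8080   # port the backend containers listen on
```

Client Pods can then simply connect to http://backend (within the same namespace) and let DNS and kube-proxy resolve and route the request.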
