Day 1: Understanding How Kubernetes (and AKS) Works
Today I learned the core idea behind Kubernetes and how it helps run and
manage containers at scale.
Instead of manually running containers on individual machines, Kubernetes gives
us a central brain that decides where containers should run, how
many should run, and what happens when something fails.
This is the foundation for understanding Azure Kubernetes Service (AKS)
later.
What Problem Kubernetes Solves
Containers (like Docker containers) are easy to run on a single machine, but
things get complicated when:
- I have many containers
- I have many servers (VMs or physical machines)
- Some machines may run out of CPU or memory
- Some machines or containers may crash
Kubernetes exists to orchestrate containers:
- Decide where containers should run
- Keep them running
- Automatically recover from failures
Kubernetes Cluster: Nodes and Containers
At the lowest level, Kubernetes runs on a cluster.
What is a Cluster?
A Kubernetes cluster is made up of:
- Multiple nodes
- Each node is either:
- A virtual machine (VM), or
- A physical server
Example:
- Node 1
- Node 2
- Node 3
Each node provides:
- CPU
- Memory
- Network
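Jumping ahead a little: once kubectl (introduced below) is connected to a cluster, you can inspect what a node provides. A quick sketch, assuming a node named node-1 exists:
kubectl describe node node-1{codeBox}
The output includes Capacity and Allocatable sections listing the node's CPU and memory.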
Container Runtime on Each Node
To run containers, every node must have a container runtime installed.
Examples of container runtimes:
- Docker
- containerd
- CRI-O
The container runtime is responsible for:
- Pulling container images
- Starting containers
- Stopping containers
Without Kubernetes, I could SSH into a node and run:
docker run my-container{codeBox}
But this creates problems:
- I may choose a node without enough resources
- I don’t want to manually manage 100 nodes and 200 containers
The Control Plane: The Brain of Kubernetes
To solve these problems, Kubernetes introduces another layer called the Control Plane (historically also called the master).
What is the Control Plane?
The control plane:
- Knows all nodes
- Knows all containers
- Decides where containers should run
- Ensures the cluster matches the desired configuration
I never talk directly to nodes.
I always talk to the control plane.
How Developers Interact: kubectl and YAML
As a developer, I interact with Kubernetes using kubectl.
kubectl
- Command-line interface (CLI)
- Used to send requests to the control plane
Example command:
kubectl apply -f app.yaml{codeBox}
YAML Manifest Files
The YAML file contains:
- Container image name
- Number of replicas (how many containers)
- Ports to expose
- Service type (LoadBalancer, NodePort, etc.)
- How components connect to each other
This file describes the desired state of my application.
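A minimal sketch of what app.yaml could look like, assuming a hypothetical image my-app:1.0 that listens on port 8080 (the names and ports here are illustrative, not from my actual notes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5                  # number of identical containers (pods) to run
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0    # container image name
          ports:
            - containerPort: 8080   # port the app listens on{codeBox}
Running kubectl apply -f app.yaml hands this desired state to the control plane, which then works to make the cluster match it.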
Control Plane Components Explained Simply
Once I run kubectl apply, the request goes through several control plane
components:
1. API Server
- Entry point of Kubernetes
- Receives all requests from kubectl
- Validates and processes them
2. etcd
- Key-value database
- Stores the entire cluster state
- Keeps information like:
- Which containers are running
- Configuration of applications
3. Scheduler
- Decides which node should run each container
- Looks at:
- CPU availability
- Memory availability
4. Controller Manager
- Runs control loops that continuously compare the actual state with the desired state
- Triggers corrective actions when they drift apart (for example, replacing failed containers)
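On a self-managed cluster (for example, one built with kubeadm), most of these components actually run as pods you can list; in AKS you won't see them, because Azure runs the control plane for you:
kubectl get pods -n kube-system{codeBox}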
kubelet: The Agent on Each Node
Every node runs a component called kubelet.
The kubelet:
- Receives instructions from the control plane
- Talks to the container runtime
- Starts and stops containers on the node
Think of kubelet as:
The local Kubernetes agent on each node
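Unlike the control plane components, kubelet is not a pod. On a self-managed node it typically runs as a system service, so (assuming a systemd-based node, e.g. one set up with kubeadm) you could check it with:
systemctl status kubelet{codeBox}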
Example: Deploying 5 Containers
If I request 5 containers:
- Node 1 might run 2 containers
- Node 2 might run 1 container
- Node 3 might run 2 containers
Kubernetes decides this based on available resources.
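The -o wide flag on kubectl get pods shows exactly which node each container ended up on:
kubectl get pods -o wide{codeBox}
The NODE column reveals the scheduler's placement decisions.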
Desired State Configuration (Self-Healing)
One of the most important concepts I learned today is Desired State Configuration.
Example:
- Desired state: 5 containers running
If:
- Node 2 crashes
- Container 3 stops
Kubernetes will:
- Detect the failure
- Reschedule the container on another node
- Restore the system back to 5 running containers
This is how Kubernetes provides self-healing.
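An easy way to watch self-healing in action, assuming a Deployment like the hypothetical my-app above is running:
# list the pods and pick any one belonging to the Deployment
kubectl get pods
# delete it; the controller manager notices fewer replicas than desired
kubectl delete pod <pod-name>
# watch a replacement pod get scheduled and started
kubectl get pods -w{codeBox}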
How Users Access the Application
Users do not access the control plane.
Instead:
- Users send requests to the application
- Traffic goes through a Load Balancer
- The load balancer distributes traffic across containers
Example:
- Request 1 → Container 1
- Request 2 → Container 4
- Request 3 → Container 2
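In Kubernetes terms, this entry point is usually a Service of type LoadBalancer. A minimal sketch, reusing the hypothetical my-app labels from the earlier manifest:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer      # asks the cloud provider for an external load balancer
  selector:
    app: my-app           # traffic is spread across all pods with this label
  ports:
    - port: 80            # port the load balancer exposes
      targetPort: 8080    # port the containers listen on{codeBox}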
Load Balancer and Cloud Providers
Load balancers are usually:
- Not internal Kubernetes components
- Provided by the cloud provider
Kubernetes can run:
- On-premises
- On cloud platforms like:
- Azure (AKS)
- AWS (EKS)
- Google Cloud (GKE)
Cloud providers offer managed services:
- Load Balancers
- Managed Disks
- Storage with high availability and backups
Cloud Controller Manager
Kubernetes uses a component called the Cloud Controller Manager to interact with the cloud.
It can:
- Request a load balancer
- Provision managed disks
- Attach cloud resources to the cluster
In AKS, this interaction happens automatically with Azure services.
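As a preview, creating a managed cluster in AKS takes only a couple of Azure CLI commands (the resource group and cluster names below are illustrative):
# create a 3-node managed cluster; Azure hosts the control plane for you
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 3 --generate-ssh-keys
# merge the cluster credentials into your local kubeconfig so kubectl works
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster{codeBox}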
Architecture
kubectl -> YAML (Desired State Configuration)

Control Plane:
- API Server (entry point)
- etcd (cluster state storage)
- Scheduler (pod placement)
- Controller Manager (state reconciliation)

Nodes:
Node 1:
- kubelet
- container runtime
- containers (pod1, pod2)
Node 2:
- kubelet
- container runtime
- containers (pod3, pod4)

Traffic Flow:
Users -> Load Balancer -> Kubernetes Service -> Pods on Nodes{alertInfo}
Key Takeaways from Day 1
- Kubernetes orchestrates containers across multiple nodes
- Nodes run containers using a container runtime
- The control plane decides where containers run
- kubectl + YAML define the desired state
- Kubernetes automatically heals failures
- Cloud providers extend Kubernetes with managed services
- AKS is Azure’s managed Kubernetes implementation

