Day 2: Understanding AKS Networking – Kubenet vs Azure CNI
Today I learned about networking in Azure Kubernetes Service (AKS),
specifically networking plugins and how they control IP address
assignment and communication between pods, nodes, and services.
This topic is critical because networking decisions directly impact:
- Scalability
- IP address consumption
- Performance
- Feature availability in AKS
What Are Networking Plugins in AKS?
In Kubernetes, a networking plugin defines:
- How pods get IP addresses
- How pods communicate with each other
- How pods communicate with resources outside the cluster
In AKS, networking is implemented using CNI (Container Network Interface)
plugins.
Networking Plugin Options in AKS
AKS supports multiple networking modes:
- Kubenet (default)
- Azure CNI
- Azure CNI Overlay
- Bring Your Own CNI (e.g., Cilium)
Important Note About the Azure Portal
In the Azure Portal, when creating an AKS cluster, I only see:
- Kubenet
- Azure CNI
This does not mean these are the only options.
Other networking configurations (like Azure CNI Overlay or custom CNIs) are
available when using:
- Azure CLI
- Terraform
- ARM/Bicep templates
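For example, with the Azure CLI the network plugin is picked at cluster creation time. A minimal sketch (the resource group and cluster names are placeholders I made up):

az aks create \
  --resource-group rg-aks-lab \
  --name aks-demo \
  --node-count 3 \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --generate-ssh-keys
# --network-plugin kubenet  -> Kubenet
# --network-plugin azure    -> Azure CNI (add --network-plugin-mode overlay for Azure CNI Overlay)
# --network-plugin none     -> Bring Your Own CNI (you install e.g. Cilium yourself)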
Kubenet: The Default AKS Networking Plugin
High-Level Idea
With Kubenet:
- Nodes get IP addresses from the Azure VNet subnet
- Pods get IP addresses from a separate internal Pod CIDR
- Pods use NAT (Network Address Translation) to communicate outside their node
This design minimizes IP address usage in the VNet.
IP Address Assignment in Kubenet
Node IP Addresses
- Each node receives one IP address from the subnet
- Example subnet range:
  - 10.224.0.0/16
Running kubectl get nodes -o wide shows node IPs from this subnet.
Pod IP Addresses
Pods do not get IPs from the subnet.
Instead:
- Pods get IPs from a Pod CIDR
- Example Pod CIDR:
  - 10.244.0.0/16
Running kubectl get pods -A -o wide shows pod IPs from this CIDR.
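The configured CIDRs can also be read back from the cluster itself. A quick check, assuming the placeholder names from the sketch above:

az aks show \
  --resource-group rg-aks-lab \
  --name aks-demo \
  --query networkProfile \
  --output json
# Shows networkPlugin, podCidr, serviceCidr and dnsServiceIp for the cluster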
System Pods vs Application Pods
System Pods (kube-system namespace)
Some system pods reuse the node’s IP address because they run on the node’s host network (hostNetwork: true).
This is:
- Valid
- Expected
- Configured intentionally
Examples:
- kube-proxy
- some networking components
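One way to confirm this is to list which kube-system pods run on the host network. A small sketch using kubectl custom columns:

kubectl get pods -n kube-system \
  -o custom-columns=NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork,POD_IP:.status.podIP
# Pods with HOSTNETWORK=true (e.g. kube-proxy) report the node IP as their pod IP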
Application Pods (User Workloads)
Pods I create myself (e.g., NGINX deployment) behave differently.
Example:
- Deployment with 10 NGINX replicas
- All pods get IPs from the Pod CIDR
- None of them use the node IP
This is the standard Kubernetes behavior.
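To reproduce this, a quick test deployment (the image and names are just an example):

kubectl create deployment nginx --image=nginx --replicas=10
kubectl get pods -l app=nginx -o wide
# Every replica gets an IP from the Pod CIDR (10.244.x.x here), never the node IP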
Pod CIDR Allocation per Node
Even though the Pod CIDR is cluster-wide, it is subdivided per node.
Example:
- Cluster Pod CIDR: 10.244.0.0/16
- Node 1 gets: 10.244.0.0/24
- Node 2 gets: 10.244.1.0/24
- Node 3 gets: 10.244.2.0/24
Each node can only assign pod IPs from its own /24 range.
This explains why:
- Pod IPs are not sequential
- Pods on different nodes appear to “jump” in IP ranges
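The per-node allocation is visible on the node objects themselves. For example:

kubectl get nodes \
  -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR
# Each node shows its own /24 slice of the cluster Pod CIDR (e.g. 10.244.0.0/24, 10.244.1.0/24, ...)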
Service IP Addresses in Kubernetes
Pods are usually accessed through Services.
Service IPs:
- Come from a Service CIDR
- Example:
  - 10.0.0.0/16
Important:
- Service CIDR is independent of:
  - Subnet CIDR
  - Pod CIDR
- This behavior is the same for all networking plugins
Services exist on an internal Kubernetes virtual network.
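For example, exposing the NGINX deployment from earlier makes this visible (names reused from the sketch above):

kubectl expose deployment nginx --port=80
kubectl get svc nginx
# The CLUSTER-IP comes from the Service CIDR (10.0.x.x here), not from the subnet or Pod CIDR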
How Pods Communicate Across Nodes in Kubenet
Same Node Communication
- Pods communicate directly
- No routing complexity
Different Node Communication
- Traffic goes through:
  - User Defined Routes (UDR)
  - NAT
  - IP forwarding
Pods do not directly access the VNet subnet.
Instead:
- Pod traffic is NATed to the node IP
- Route table directs traffic to the correct node
- Target node delivers traffic to the destination pod
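A simple way to see this working is to call a pod from a throwaway test pod. A sketch (the pod IP 10.244.2.7 is a made-up example; pick a real one from kubectl get pods -o wide on a different node):

kubectl run nettest --image=busybox --rm -it --restart=Never \
  -- wget -qO- http://10.244.2.7
# The request is NATed to the node IP, routed via the route table, and answered by the remote pod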
Route Tables in Kubenet
AKS automatically creates a route table when using Kubenet.
For each node:
- A route is created
- The route maps:
  - Pod CIDR for that node
  - Next hop = node’s IP address
This is why:
- Route tables are required for Kubenet
- IP forwarding must be enabled
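The managed route table lives in the node resource group (the MC_... group AKS creates). A sketch for inspecting it (the resource group and route table names below are placeholders):

az network route-table list \
  --resource-group MC_rg-aks-lab_aks-demo_westeurope \
  --output table
az network route-table route list \
  --resource-group MC_rg-aks-lab_aks-demo_westeurope \
  --route-table-name aks-agentpool-12345678-routetable \
  --output table
# One route per node: address prefix = that node's Pod CIDR, next hop = that node's IP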
Kubenet Limitations
1. Route Table Limit
Azure supports a maximum of 400 routes per route table.
Because:
- Each node requires one route
This means:
- Maximum cluster size ≈ 400 nodes
This is sufficient for most workloads, but not all.
2. Additional Network Hop
Kubenet adds:
- One extra hop
- Minor latency
The latency is:
- Very small (milliseconds or less)
- But noticeable for extremely chatty applications
3. Feature Limitations
Kubenet does not support:
- AKS Virtual Nodes
- Windows node pools
- Azure Network Policies
However:
- Calico Network Policies are supported and commonly used
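Calico is enabled when the cluster is created. A minimal sketch (same placeholder names as above):

az aks create \
  --resource-group rg-aks-lab \
  --name aks-demo \
  --network-plugin kubenet \
  --network-policy calico \
  --generate-ssh-keys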
Kubenet Summary (Recap)
- Nodes get IPs from the subnet
- Pods get IPs from a separate Pod CIDR
- Services get IPs from Service CIDR
- NAT is used for pod communication
- Route tables manage pod-to-pod traffic
- Efficient IP usage
- Some AKS features are not supported
Key Takeaways from Day 2
- Networking plugins control pod and service IP behavior
- Kubenet minimizes IP usage but has scaling limits
- Pod CIDRs are split per node
- Services always use an internal Kubernetes CIDR
- Kubenet relies on routing + NAT for connectivity