Day 2: Understanding AKS Networking – Kubenet vs Azure CNI
Today I learned about networking in Azure Kubernetes Service (AKS),
specifically networking plugins and how they control IP address
assignment and communication between pods, nodes, and services.
This topic is critical because networking decisions directly impact:
- Scalability
- IP address consumption
- Performance
- Feature availability in AKS
What Are Networking Plugins in AKS?
In Kubernetes, a networking plugin defines:
- How pods get IP addresses
- How pods communicate with each other
- How pods communicate with resources outside the cluster
In AKS, networking is implemented using CNI (Container Network Interface)
plugins.
Networking Plugin Options in AKS
AKS supports multiple networking modes:
- Kubenet (default)
- Azure CNI
- Azure CNI Overlay
- Bring Your Own CNI (e.g., Cilium)
Important Note About the Azure Portal
In the Azure Portal, when creating an AKS cluster, I only see:
- Kubenet
- Azure CNI
This does not mean these are the only options.
Other networking configurations (like Azure CNI Overlay or custom CNIs) are
available when using:
- Azure CLI
- Terraform
- ARM/Bicep templates
Kubenet: The Default AKS Networking Plugin
High-Level Idea
With Kubenet:
- Nodes get IP addresses from the Azure VNet subnet
- Pods get IP addresses from a separate internal Pod CIDR
- Pods use NAT (Network Address Translation) to communicate outside their node
This design minimizes IP address usage in the VNet.
IP Address Assignment in Kubenet
Node IP Addresses
- Each node receives one IP address from the subnet
- Example subnet range: 10.224.0.0/16
Running:
kubectl get nodes -o wide
shows node IPs from this subnet.
Pod IP Addresses
Pods do not get IPs from the subnet.
Instead:
- Pods get IPs from a Pod CIDR
- Example Pod CIDR: 10.244.0.0/16
Running:
kubectl get pods -A -o wide
shows pod IPs from this CIDR.
System Pods vs Application Pods
System Pods (kube-system namespace)
Some system pods run on the host network and therefore reuse the node’s IP address.
This is:
- Valid
- Expected
- Configured intentionally
Examples:
- kube-proxy
- some networking components
Application Pods (User Workloads)
Pods I create myself (e.g., NGINX deployment) behave differently.
Example:
- Deployment with 10 NGINX replicas
- All pods get IPs from the Pod CIDR
- None of them use the node IP
This is the standard Kubernetes behavior.
Pod CIDR Allocation per Node
Even though the Pod CIDR is cluster-wide, it is subdivided per node.
Example:
- Cluster Pod CIDR: 10.244.0.0/16
- Node 1 gets: 10.244.0.0/24
- Node 2 gets: 10.244.1.0/24
- Node 3 gets: 10.244.2.0/24
Each node can only assign pod IPs from its own /24 range.
This explains why:
- Pod IPs are not sequential
- Pods on different nodes appear to “jump” in IP ranges
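The per-node subdivision described above can be reproduced with Python's standard ipaddress module. This is only a sketch using the example values from this post; a real cluster assigns the blocks itself:

```python
import ipaddress

# Cluster-wide Pod CIDR used in the examples above
pod_cidr = ipaddress.ip_network("10.244.0.0/16")

# Kubenet hands each node its own /24 slice of the Pod CIDR
per_node = list(pod_cidr.subnets(new_prefix=24))

for i, block in enumerate(per_node[:3], start=1):
    print(f"Node {i}: {block}")
# Node 1: 10.244.0.0/24
# Node 2: 10.244.1.0/24
# Node 3: 10.244.2.0/24

# A /16 splits into 256 such /24 blocks
print(len(per_node))  # 256
```

This also makes the "jumping" IPs obvious: a pod on node 2 can only draw from 10.244.1.0/24, regardless of what node 1 has already allocated.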
Service IP Addresses in Kubernetes
Pods are usually accessed through Services.
Service IPs:
- Come from a Service CIDR
- Example: 10.0.0.0/16
Important:
- Service CIDR is independent of:
- Subnet CIDR
- Pod CIDR
- This behavior is the same for all networking plugins
Services exist on an internal Kubernetes virtual network.
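A quick sanity check, using the three example ranges from this post, that the subnet, Pod, and Service CIDRs never overlap (overlapping ranges here would cause routing conflicts):

```python
import ipaddress

subnet_cidr  = ipaddress.ip_network("10.224.0.0/16")  # VNet subnet (nodes)
pod_cidr     = ipaddress.ip_network("10.244.0.0/16")  # Pod CIDR
service_cidr = ipaddress.ip_network("10.0.0.0/16")    # Service CIDR

ranges = {"subnet": subnet_cidr, "pods": pod_cidr, "services": service_cidr}
names = list(ranges)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        # overlaps() is True if the two networks share any address
        print(f"{a} vs {b}: overlap = {ranges[a].overlaps(ranges[b])}")
# All three pairs print overlap = False
```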
How Pods Communicate Across Nodes in Kubenet
Same Node Communication
- Pods communicate directly
- No routing complexity
Different Node Communication
- Traffic goes through:
- User Defined Routes (UDR)
- NAT
- IP forwarding
Pods do not directly access the VNet subnet.
Instead:
- Pod traffic is NATed to the node IP
- Route table directs traffic to the correct node
- Target node delivers traffic to the destination pod
Route Tables in Kubenet
AKS automatically creates a route table when using Kubenet.
For each node:
- A route is created
- The route maps:
- Pod CIDR for that node
- Next hop = node’s IP address
This is why:
- Route tables are required for Kubenet
- IP forwarding must be enabled
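The route-table logic above can be sketched in a few lines of Python. The node IPs below are hypothetical example values, not something AKS guarantees:

```python
import ipaddress

# Hypothetical node IPs from the VNet subnet, each with its assigned Pod CIDR
nodes = {
    "10.224.0.4": "10.244.0.0/24",
    "10.224.0.5": "10.244.1.0/24",
    "10.224.0.6": "10.244.2.0/24",
}

# The Azure route table maps each node's Pod CIDR to that node's IP (next hop)
routes = {pod_range: node_ip for node_ip, pod_range in nodes.items()}

def next_hop(pod_ip: str) -> str:
    """Find which node a pod IP routes to, as the route table would."""
    addr = ipaddress.ip_address(pod_ip)
    for cidr, hop in routes.items():
        if addr in ipaddress.ip_network(cidr):
            return hop
    raise LookupError(f"no route for {pod_ip}")

print(next_hop("10.244.1.17"))  # -> 10.224.0.5
```

One route per node is exactly why the 400-route limit (next section) caps cluster size.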
Kubenet Limitations
1. Route Table Limit
Azure supports a maximum of 400 routes per route table.
Because each node requires one route, the maximum cluster size is roughly 400 nodes.
This is sufficient for most workloads, but not all.
2. Additional Network Hop
Kubenet adds:
- One extra hop
- Minor latency
The latency is:
- Very small (milliseconds or less)
- But noticeable for extremely chatty applications
3. Feature Limitations
Kubenet does not support:
- AKS Virtual Nodes
- Windows node pools
- Azure Network Policies
However:
- Calico Network Policies are supported and commonly used
Hands-On: Kubenet in Practice
1. Environment Setup: Create the Resource Group
az group create --name rg-aks-cni --location westeurope
Every Azure resource needs a home. This command creates a logical container (rg-aks-cni) in West Europe to hold your cluster resources, ensuring they can be managed and deleted together later.
Create the AKS Cluster with Kubenet
az aks create -g rg-aks-cni -n aks-cni --network-plugin kubenet
This provisions the actual Kubernetes cluster. The key flag here is --network-plugin kubenet. Unlike Azure CNI, Kubenet uses a simpler networking model where pods receive IPs from a logical range (NAT'd by the node) rather than consuming IP addresses directly from your Azure Virtual Network.
2. Verifying Network Configuration
Check Node PodCIDR
kubectl get node -o jsonpath='{.items[*].spec.podCIDR}'
With the cluster running, this command verifies that Kubenet has successfully assigned a specific range of IP addresses (CIDR block) to your nodes. Any pod scheduled on a specific node will receive an IP from that node's assigned range.
3. Deploying Workloads
Create Nginx Deployment
kubectl create deployment nginx --image=nginx --replicas=10
To test the networking, you need active traffic. This spins up 10 replicas of an Nginx web server. Using 10 replicas increases the likelihood that pods will be distributed across multiple nodes, effectively testing the network routing between them.
Inspect Pod IP Assignments
kubectl get pods -o wide
This lists your running pods along with their assigned IP addresses and the Node they are running on. In a Kubenet cluster, you should see that the Pod IPs correspond to the podCIDR ranges you identified in the previous step.
4. Service Layer Inspection
Check Service CIDR
kubectl get svc -A
This command displays the ClusterIPs for all services. It confirms that your Service Network (virtual IPs used for internal load balancing) is distinct from your Pod Network, ensuring no IP conflicts between the two layers.
5. Cluster Health & Overview
Full System Inspection
- kubectl get nodes -o wide
- kubectl get pods -A -o wide
- kubectl get svc -A
These commands provide a bird's-eye view of the entire cluster state. They allow you to simultaneously validate that:
- Nodes are Ready and have the correct internal IPs.
- System pods (like CoreDNS and metrics-server) are running correctly alongside your Nginx application.
- All network services are active and properly exposed.
Kubenet Summary (Recap)
- Nodes get IPs from the subnet
- Pods get IPs from a separate Pod CIDR
- Services get IPs from Service CIDR
- NAT is used for pod communication
- Route tables manage pod-to-pod traffic
- Efficient IP usage
- Some AKS features are not supported
Key Takeaways from Day 2
- Networking plugins control pod and service IP behavior
- Kubenet minimizes IP usage but has scaling limits
- Pod CIDRs are split per node
- Services always use an internal Kubernetes CIDR
- Kubenet relies on routing + NAT for connectivity