19: Azure CNI & Networking
Objective
Understand the networking models available in Azure Kubernetes Service (AKS), learn the differences between kubenet, Azure CNI, Azure CNI Overlay, and Azure CNI with Cilium, and explore the networking configuration of your training cluster.
Theory
Networking Models in AKS
AKS supports several networking models, each with different trade-offs for IP management, scalability, and feature support.
kubenet (Basic)
- Nodes get an IP from the Azure VNet subnet; Pods get IPs from a separate, internal address space
- Pods communicate across nodes via User-Defined Routes (UDRs) managed by AKS
- Limited to 400 nodes per cluster
- No direct VNet connectivity for Pods (requires NAT)
- Deprecated — retiring on 31 March 2028. Migrate to Azure CNI Overlay before that date
Azure CNI (Traditional)
- Every Pod gets an IP address directly from the VNet subnet
- Full VNet integration — Pods are directly routable from other VNet resources
- Limitation: IP address consumption is high — you must pre-allocate enough IPs in the subnet for all Pods
- Subnet size limits the maximum number of Pods in the cluster
- Formula for required subnet IPs (nodes also draw their IP from the same subnet):
(number of nodes) x (max pods per node) + (number of nodes) + reserved IPs
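As a worked example of this sizing formula (the node count, max-pods value, and the 5 Azure-reserved IPs per subnet are illustrative assumptions, not values from this cluster):

```python
import math

nodes = 50       # assumed node count for illustration
max_pods = 30    # assumed --max-pods setting per node
reserved = 5     # Azure reserves 5 IP addresses in every subnet

# In traditional Azure CNI, both Pods and nodes draw IPs from the VNet subnet
ips_needed = nodes * max_pods + nodes + reserved

# Smallest subnet prefix that provides at least that many addresses
prefix = 32 - math.ceil(math.log2(ips_needed))
print(ips_needed, f"/{prefix}")  # 1555 /21
```

Even this modest 50-node cluster needs a /21, which is why traditional Azure CNI exhausts VNet address space quickly.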
Azure CNI Overlay (Recommended)
- Nodes get IPs from the VNet subnet, but Pods get IPs from a separate Pod CIDR (default: 10.244.0.0/16)
- Pod IPs are not routable from the VNet; NAT is used for outbound traffic
- Highly scalable — Pod IP space is independent of the VNet subnet size
- Supports clusters of up to 5,000 nodes and 250 pods per node
- Recommended for most new AKS clusters
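To make the IP-space independence concrete, here is a small Python sketch using the default overlay Pod CIDR and an assumed node subnet of 10.224.0.0/16 (both values are illustrative):

```python
import ipaddress

pod_cidr = ipaddress.ip_network("10.244.0.0/16")     # default overlay Pod CIDR
node_subnet = ipaddress.ip_network("10.224.0.0/16")  # assumed VNet node subnet

# The overlay Pod CIDR is managed entirely outside the VNet address space
print(pod_cidr.overlaps(node_subnet))  # False
print(pod_cidr.num_addresses)          # 65536 pod IPs, regardless of subnet size
```

Because the two ranges never overlap, growing the cluster's Pod count does not consume a single additional VNet address.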
Azure CNI with Cilium
- Uses eBPF-based dataplane powered by Cilium
- Replaces kube-proxy entirely — no iptables rules for Service routing
- Provides advanced features: network policies, observability, load balancing at kernel level
- Combined with Azure CNI Overlay for IP management
- Used in this training cluster
Networking Model Comparison
graph TB
subgraph kubenet["kubenet (Basic)"]
direction TB
KN_Node["Node<br/>IP: 10.224.0.4<br/>(VNet)"]
KN_Pod1["Pod<br/>IP: 10.244.0.5<br/>(internal)"]
KN_Pod2["Pod<br/>IP: 10.244.0.6<br/>(internal)"]
KN_Node --> KN_Pod1
KN_Node --> KN_Pod2
KN_Note["Pods use NAT<br/>UDR for cross-node<br/>Max 400 nodes"]
end
subgraph cni["Azure CNI (Traditional)"]
direction TB
CNI_Node["Node<br/>IP: 10.224.0.4<br/>(VNet)"]
CNI_Pod1["Pod<br/>IP: 10.224.0.10<br/>(VNet)"]
CNI_Pod2["Pod<br/>IP: 10.224.0.11<br/>(VNet)"]
CNI_Node --> CNI_Pod1
CNI_Node --> CNI_Pod2
CNI_Note["Pods consume VNet IPs<br/>Limited by subnet size"]
end
subgraph overlay["Azure CNI Overlay"]
direction TB
OV_Node["Node<br/>IP: 10.224.0.4<br/>(VNet)"]
OV_Pod1["Pod<br/>IP: 10.244.0.5<br/>(Pod CIDR)"]
OV_Pod2["Pod<br/>IP: 10.244.0.6<br/>(Pod CIDR)"]
OV_Node --> OV_Pod1
OV_Node --> OV_Pod2
OV_Note["Pods use separate CIDR<br/>Scalable, recommended"]
end
style kubenet fill:#ffcdd2,stroke:#c62828,stroke-width:1px,stroke-dasharray: 5 5
style cni fill:#fff9c4,stroke:#f9a825,stroke-width:1px
style overlay fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px
Private Clusters
In production environments, you may encounter private AKS clusters:
- The Kubernetes API server is accessible only via a private endpoint within the VNet
- No public IP is assigned to the API server
- Management access requires being on the VNet (via VPN, ExpressRoute, or a jump box)
- Provides an additional layer of security for sensitive workloads
Note: Our training cluster uses a public API endpoint for convenience. Production clusters should evaluate private clusters based on security requirements.
Practical Tasks
Note: This is primarily a theory and exploration exercise. There are no deployments to create — you will examine the existing cluster configuration.
Task 1: Examine Cluster Networking
Check the node IP addresses — these come from the VNet subnet:
kubectl get nodes -o wide
Look at the INTERNAL-IP column. These IPs belong to the Azure VNet subnet (e.g., 10.224.0.x).
Now check the Pod IP addresses:
kubectl get pods -n student-XX -o wide
Compare the Pod IPs with the node IPs. If the cluster uses Azure CNI Overlay, Pod IPs will be from a different range (e.g., 10.244.x.x) than the node IPs (e.g., 10.224.0.x). This confirms that Pods use the separate Pod CIDR, not the VNet subnet.
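The comparison you do by eye in the kubectl output can also be sketched in Python; the CIDRs below are assumptions matching the examples above, not values read from the cluster:

```python
import ipaddress

POD_CIDR = ipaddress.ip_network("10.244.0.0/16")     # assumed overlay Pod CIDR
NODE_SUBNET = ipaddress.ip_network("10.224.0.0/16")  # assumed VNet node subnet

def classify(ip: str) -> str:
    """Say which range an IP from `kubectl get ... -o wide` belongs to."""
    addr = ipaddress.ip_address(ip)
    if addr in POD_CIDR:
        return "Pod CIDR (overlay)"
    if addr in NODE_SUBNET:
        return "VNet node subnet"
    return "other"

print(classify("10.244.0.5"))  # Pod CIDR (overlay)
print(classify("10.224.0.4"))  # VNet node subnet
```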
Task 2: Verify Cilium Is Running
Check if Cilium pods are running in the kube-system namespace:
kubectl get pods -n kube-system -l k8s-app=cilium
You should see Cilium agent pods running on each node. These replace kube-proxy and handle all network policy enforcement and Service routing using eBPF.
Also check if kube-proxy is absent (replaced by Cilium):
kubectl get pods -n kube-system -l component=kube-proxy
With the Cilium dataplane, you should see no kube-proxy pods; Cilium handles this function instead.
Task 3: Inspect Network Configuration
Try to inspect network-related ConfigMaps:
kubectl get configmap -n kube-system azure-cni-networkmonitor -o yaml
Note: This ConfigMap may not be present in all configurations. If it does not exist, proceed with the next command.
Check node annotations for networking details:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations}{"\n"}{end}'
Check the cluster’s Pod CIDR configuration:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
This shows the Pod CIDR range assigned to each node — these should be subnets of the overlay Pod CIDR (e.g., 10.244.0.0/24, 10.244.1.0/24).
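To confirm the relationship described above, `ipaddress.subnet_of` can verify that each per-node range sits inside the overlay Pod CIDR (the per-node values here are assumed examples, not output from this cluster):

```python
import ipaddress

overlay = ipaddress.ip_network("10.244.0.0/16")  # default overlay Pod CIDR

# Example .spec.podCIDR values as reported per node (assumed for illustration)
node_cidrs = ["10.244.0.0/24", "10.244.1.0/24"]

for cidr in node_cidrs:
    print(cidr, ipaddress.ip_network(cidr).subnet_of(overlay))  # both True
```

A /24 per node yields up to 254 Pod addresses, consistent with the 250-pods-per-node ceiling.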
Common Problems
| Problem | Possible Cause | Solution |
|---|---|---|
| Pod IPs are in the same range as node IPs | Cluster uses Azure CNI (traditional), not Overlay | This is expected for non-Overlay clusters |
| Cilium pods not found | Cluster does not use Cilium dataplane | Check with instructor — cluster may use a different network policy engine |
| azure-cni-networkmonitor ConfigMap not found | Not all AKS configurations create this ConfigMap | Use node annotations and Pod CIDR inspection instead |
| Pods cannot reach external services | Network policy or NSG blocking egress | Check NetworkPolicies and Azure NSG rules |
Best Practices
- Use Azure CNI Overlay for new clusters — It provides the best balance of scalability and simplicity without consuming VNet IP addresses for Pods.
- Enable Cilium dataplane — eBPF-based networking provides better performance and richer network policy support compared to iptables-based alternatives.
- Plan your IP address space — Even with Overlay, plan VNet subnets for nodes, Services, and any other Azure resources carefully.
- Consider private clusters for production — Restricting API server access to private endpoints reduces the attack surface significantly.
- Understand your networking model before deploying — The networking model cannot be changed after cluster creation.
Summary
In this exercise you learned:
- The four networking models available in AKS: kubenet, Azure CNI, Azure CNI Overlay, and Azure CNI with Cilium
- How Azure CNI Overlay separates Pod IPs from VNet IPs using a dedicated Pod CIDR
- How Cilium replaces kube-proxy with an eBPF-based dataplane
- How to inspect your cluster’s networking configuration using kubectl
- The concept of private clusters and when to use them
- Why Azure CNI Overlay with Cilium is the recommended configuration for new AKS clusters