04: Multi-Container Pods
Objective
Understand when and why to use multi-container Pods. Learn the sidecar pattern and init container pattern by building Pods that use shared volumes and container dependencies.
Theory
Why Multi-Container Pods?
A Pod can contain more than one container. All containers in a Pod share:
- The same network namespace (localhost communication, same IP)
- The same storage volumes (via volume mounts)
- The same lifecycle (scheduled together on the same node)
Multi-container Pods are used when containers are tightly coupled and need to work together as a single unit. If containers can run independently, they should be in separate Pods.
Common Multi-Container Patterns
| Pattern | Description | Example |
|---|---|---|
| Sidecar | A helper container that extends the main container | Log shipper, monitoring agent, proxy |
| Ambassador | A proxy container that handles external communication | Database proxy, API gateway |
| Adapter | A container that transforms output from the main container | Log format converter, metrics adapter |
Native Sidecar Containers (Kubernetes 1.29+)
Starting with Kubernetes 1.29 (supported in AKS), Kubernetes has native sidecar support. Native sidecars are defined as init containers with restartPolicy: Always. They:
- Start before the main containers (like init containers)
- Run alongside main containers for the Pod’s lifetime (like sidecars)
- Shut down after main containers exit (proper lifecycle ordering)
This eliminates the old workaround of using regular containers as sidecars, which had no lifecycle guarantees. For example, the Istio service mesh add-on uses native sidecars by default on AKS 1.33+.
Note: This exercise uses the traditional sidecar pattern (regular container) which remains valid and widely used. Native sidecars are the direction Kubernetes is heading for production sidecar workloads.
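For reference, a minimal sketch of the native form, assuming the same nginx/log-shipper pairing used later in this exercise (the Pod name and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-native-sidecar     # illustrative name
spec:
  initContainers:
    - name: log-shipper        # runs for the Pod's whole lifetime
      image: busybox:1.36
      restartPolicy: Always    # this field is what makes it a native sidecar
      command: ["sh", "-c", "touch /logs/access.log && tail -f /logs/access.log"]
      volumeMounts:
        - name: log-volume
          mountPath: /logs
  containers:
    - name: nginx
      image: nginx:1.27
      volumeMounts:
        - name: log-volume
          mountPath: /var/log/nginx
  volumes:
    - name: log-volume
      emptyDir: {}
```

Because the sidecar is an init container with `restartPolicy: Always`, it is guaranteed to be running before `nginx` starts and is terminated after `nginx` exits.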
Init Containers
Init containers run before the main (app) containers start. They are used for setup tasks:
- Wait for a dependency (database, external service) to be ready
- Populate shared volumes with configuration or data
- Run database migrations
Key differences from regular containers:
- Init containers run sequentially, one at a time
- Each init container must complete successfully before the next one starts
- If an init container fails, Kubernetes restarts it (according to the Pod restart policy)
- Init containers do not support readiness probes (they are not long-running)
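The sequential behavior can be sketched with two init containers. In this hypothetical Pod, `init-a` must exit successfully before `init-b` starts, and the `app` container starts only after both complete:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-order-demo        # illustrative name
spec:
  initContainers:
    - name: init-a             # runs first and must exit 0
      image: busybox:1.36
      command: ["sh", "-c", "echo step 1; sleep 2"]
    - name: init-b             # starts only after init-a succeeds
      image: busybox:1.36
      command: ["sh", "-c", "echo step 2; sleep 2"]
  containers:
    - name: app                # starts only after all init containers succeed
      image: nginx:1.27
```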
Sidecar Pattern Diagram
```mermaid
graph TB
    subgraph Pod["Pod: web-with-logging"]
        direction LR
        subgraph main["Main Container<br/>nginx"]
            nginx_proc["nginx process<br/>writes to /var/log/nginx"]
        end
        subgraph sidecar["Sidecar Container<br/>log-shipper"]
            tail_proc["busybox tail<br/>reads from /var/log/nginx"]
        end
        subgraph vol["Shared Volume<br/>emptyDir: log-volume"]
            logs["/var/log/nginx"]
        end
        main -->|writes logs| vol
        sidecar -->|reads logs| vol
    end
    style Pod fill:#e1f5fe,stroke:#0288d1,stroke-width:2px
    style main fill:#e8f5e9,stroke:#388e3c,stroke-width:1px
    style sidecar fill:#fff3e0,stroke:#f57c00,stroke-width:1px
    style vol fill:#f3e5f5,stroke:#7b1fa2,stroke-width:1px
```
Practical Task 1: Sidecar Pattern — Shared Logging
In this task you will create a Pod with two containers:
- nginx — serves HTTP requests and writes access logs to a shared volume
- log-shipper (busybox) — reads the access log from the shared volume and outputs it to stdout
Both containers mount the same emptyDir volume at different paths.
Step 1: Create the Manifest
Create a file called `sidecar-pod.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logging
  namespace: student-XX   # Replace XX with your student number
  labels:
    app: web-with-logging
    team: teamXX
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      ports:
        - containerPort: 80
          name: http
      volumeMounts:
        - name: log-volume
          mountPath: /var/log/nginx
    - name: log-shipper
      image: busybox:1.36
      command:
        - /bin/sh
        - -c
        # touch first so tail does not fail if nginx has not created the file yet
        - "touch /logs/access.log && tail -f /logs/access.log"
      volumeMounts:
        - name: log-volume
          mountPath: /logs
  volumes:
    - name: log-volume
      emptyDir: {}
```
Step 2: Deploy and Verify

```bash
kubectl apply -f sidecar-pod.yaml
kubectl get pod web-with-logging -n student-XX
```

Wait until READY shows 2/2:

```
NAME               READY   STATUS    RESTARTS   AGE
web-with-logging   2/2     Running   0          20s
```
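Instead of polling `kubectl get`, you can block until the Pod reports ready (a convenience, not required for the task):

```bash
# Block until both containers are running and ready, up to 60 seconds
kubectl wait --for=condition=Ready pod/web-with-logging -n student-XX --timeout=60s
```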
Step 3: Generate Traffic
Port-forward to the Pod and make some requests:

```bash
kubectl port-forward pod/web-with-logging 8080:80 -n student-XX
```

In another terminal:

```bash
curl http://localhost:8080
curl http://localhost:8080
curl http://localhost:8080
```
Step 4: View Logs from the Sidecar

```bash
kubectl logs web-with-logging -c log-shipper -n student-XX
```

You should see the access log entries from nginx, proving that the sidecar reads from the shared volume.
You can also view logs from the main container:

```bash
kubectl logs web-with-logging -c nginx -n student-XX
```
Step 5: Explore the Shared Volume
Exec into the nginx container and check the log directory:

```bash
kubectl exec -it web-with-logging -c nginx -n student-XX -- ls -la /var/log/nginx
```

Exec into the sidecar and check its view of the same volume:

```bash
kubectl exec -it web-with-logging -c log-shipper -n student-XX -- ls -la /logs
```

Both containers see the same files, confirming that the shared emptyDir volume works.
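To see the sharing in both directions, you can write a file from one container and read it from the other; you can also use the shared network namespace to reach nginx over localhost from the sidecar (the `probe` filename is just an example):

```bash
# Write a file from the sidecar into the shared volume...
kubectl exec web-with-logging -c log-shipper -n student-XX -- sh -c 'echo hello > /logs/probe'
# ...and read it back through the nginx container's mount of the same volume
kubectl exec web-with-logging -c nginx -n student-XX -- cat /var/log/nginx/probe

# Containers in a Pod share one network namespace: the sidecar reaches nginx on localhost
kubectl exec web-with-logging -c log-shipper -n student-XX -- wget -qO- http://localhost:80
```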
Step 6: Clean Up

```bash
kubectl delete pod web-with-logging -n student-XX
```
Practical Task 2: Init Container — Wait for a Service
In this task you will create a Pod with an init container that waits for a Kubernetes Service to become available before the main container starts.
This simulates a real-world scenario where your application needs a database or external service to be ready before it can start.
Step 1: Create the Manifest
Create a file called `init-container-pod.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
  namespace: student-XX   # Replace XX with your student number
  labels:
    app: app-with-init
    team: teamXX
spec:
  initContainers:
    - name: wait-for-service
      image: busybox:1.36
      command:
        - /bin/sh
        - -c
        - |
          echo "Waiting for mydb service to be resolvable..."
          until nslookup mydb.student-XX.svc.cluster.local; do
            echo "mydb not ready yet, retrying in 2 seconds..."
            sleep 2
          done
          echo "mydb is available! Starting main container."
  containers:
    - name: app
      image: <ACR_NAME>.azurecr.io/kuard:1
      ports:
        - containerPort: 8080
          name: http
```

Note: Replace `<ACR_NAME>` with the actual ACR name and `XX` with your student number.
Step 2: Deploy and Observe

```bash
kubectl apply -f init-container-pod.yaml
kubectl get pod app-with-init -n student-XX
```

The Pod should be stuck in Init:0/1 status because the mydb service does not exist yet:

```
NAME            READY   STATUS     RESTARTS   AGE
app-with-init   0/1     Init:0/1   0          10s
```

Check what the init container is doing:

```bash
kubectl logs app-with-init -c wait-for-service -n student-XX
```
Step 3: Create the Service
Now create the service that the init container is waiting for:

```bash
kubectl create service clusterip mydb --tcp=3306:3306 -n student-XX
```
Step 4: Watch the Pod Start

```bash
kubectl get pod app-with-init -n student-XX --watch
```

After a few seconds, the init container should complete and the main container will start:

```
NAME            READY   STATUS            RESTARTS   AGE
app-with-init   0/1     Init:0/1          0          30s
app-with-init   0/1     PodInitializing   0          35s
app-with-init   1/1     Running           0          36s
```

Press Ctrl+C to stop watching.
Step 5: Clean Up

```bash
kubectl delete pod app-with-init -n student-XX
kubectl delete service mydb -n student-XX
```
Useful Commands
| Command | Description |
|---|---|
| `kubectl logs <pod> -c <container> -n student-XX` | View logs for a specific container |
| `kubectl logs <pod> --all-containers -n student-XX` | View logs from all containers |
| `kubectl exec -it <pod> -c <container> -n student-XX -- <cmd>` | Execute a command in a specific container |
| `kubectl describe pod <pod> -n student-XX` | See init container status, events, volume mounts |
| `kubectl get pod <pod> -n student-XX -o jsonpath='{.status.initContainerStatuses}'` | Check init container status programmatically |
Common Problems
| Problem | Possible Cause | Solution |
|---|---|---|
| Pod stuck in `Init:0/1` | Init container dependency not available | Check init container logs; create the missing service or resource |
| Sidecar shows no logs | Log file not created yet by the main container | Generate some traffic; `tail -f` follows the file once it exists (a `touch` before `tail` avoids startup races) |
| `READY` shows `1/2` | One container is crashing | Check logs for the failing container with the `-c` flag |
| Volume mount path conflict | Two containers writing to the same file | Use different subdirectories or coordinate file access |
Best Practices
- Use init containers for preconditions — Do not add wait/retry logic to your main application. Use init containers to handle dependencies.
- Keep sidecars lightweight — Sidecar containers add resource overhead. Use minimal images like `busybox` for simple tasks.
- Shared volumes should be `emptyDir` — For temporary data shared between containers in the same Pod, `emptyDir` is the simplest choice. It is deleted when the Pod is removed.
- Name your containers clearly — When a Pod has multiple containers, clear names make it easier to target the right one with `kubectl logs -c` and `kubectl exec -c`.
- Consider resource requests for all containers — Every container in the Pod consumes resources. Set requests and limits for sidecars too.
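The last point can be sketched as a fragment of the `containers` list from Task 1; the request and limit values here are illustrative, not a sizing recommendation:

```yaml
# Illustrative resource settings for a lightweight log-shipper sidecar
- name: log-shipper
  image: busybox:1.36
  resources:
    requests:      # what the scheduler reserves for this container
      cpu: 10m
      memory: 16Mi
    limits:        # hard cap enforced at runtime
      cpu: 50m
      memory: 32Mi
```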
Summary
In this exercise you learned:
- When and why to use multi-container Pods
- The sidecar pattern: a helper container that reads from a shared volume
- How `emptyDir` volumes enable data sharing between containers in the same Pod
- The init container pattern: running setup tasks before the main container starts
- How to view logs and exec into specific containers in a multi-container Pod
Review Questions
- What resources do containers in the same Pod share?
- When should you use multiple containers in a single Pod vs. separate Pods?
- What happens if an init container fails?
- What is the difference between an init container and a sidecar container?
- What type of volume is best suited for sharing temporary data between containers in the same Pod?