12: Deployments
Objective
Learn how to create and manage Kubernetes Deployments, understand the Deployment-ReplicaSet-Pod hierarchy, scale applications, and build a full-config Deployment that integrates ConfigMaps, Secrets, resource limits, health probes, and volumes into a single manifest.
Theory
What is a Deployment?
A Deployment is a Kubernetes controller that manages the desired state of your application. Instead of creating Pods directly, you describe the desired state in a Deployment, and the Deployment controller ensures that state is maintained.
The hierarchy is:
- Deployment — Declares the desired state (image, replicas, strategy).
- ReplicaSet — Created and managed by the Deployment. Ensures the correct number of Pod replicas are running.
- Pods — Created and managed by the ReplicaSet. Run the actual containers.
Key Benefits
| Feature | Description |
|---|---|
| Self-healing | If a Pod crashes or is deleted, the ReplicaSet automatically creates a replacement |
| Declarative updates | Change the Deployment spec and Kubernetes handles the rollout |
| Scaling | Change the replica count to scale up or down |
| Rollback | Revert to a previous version if something goes wrong |
| Rolling updates | Update Pods gradually without downtime |
Deployment Hierarchy with Self-Healing
```mermaid
graph TB
    D["Deployment<br/>nginx-deployment<br/>replicas: 3"]
    RS["ReplicaSet<br/>nginx-deployment-7d4f8b6c9<br/>manages 3 Pods"]
    P1["Pod 1<br/>Running"]
    P2["Pod 2<br/>Running"]
    P3["Pod 3<br/>Running"]
    P3X["Pod 3<br/>Crashed"]
    P3N["Pod 3 (new)<br/>Created automatically"]
    D --> RS
    RS --> P1
    RS --> P2
    RS --> P3
    P3 -. "Pod crashes" .-> P3X
    P3X -. "ReplicaSet detects<br/>and creates new Pod" .-> P3N
    style D fill:#e1f5fe,stroke:#0288d1,stroke-width:2px
    style RS fill:#fff3e0,stroke:#f57c00,stroke-width:2px
    style P1 fill:#c8e6c9,stroke:#388e3c,stroke-width:1px
    style P2 fill:#c8e6c9,stroke:#388e3c,stroke-width:1px
    style P3 fill:#c8e6c9,stroke:#388e3c,stroke-width:1px
    style P3X fill:#ffcdd2,stroke:#c62828,stroke-width:1px
    style P3N fill:#c8e6c9,stroke:#388e3c,stroke-width:2px
```
Practical Tasks
Task 1: Create a Simple Deployment
Create a basic nginx Deployment with 1 replica and observe how Kubernetes creates the Pod.
Create a file called deployment-nginx.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: student-XX
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.26
          ports:
            - containerPort: 80
```
Deploy and watch the Pod creation in real time:
```shell
kubectl apply -f deployment-nginx.yaml
kubectl get pods -n student-XX -w
```
You should see the Pod transition through phases:
```
NAME                               READY   STATUS              RESTARTS   AGE
nginx-deployment-7d4f8b6c9-abc12   0/1     ContainerCreating   0          2s
nginx-deployment-7d4f8b6c9-abc12   1/1     Running             0          5s
```
Press Ctrl+C to stop watching.
Inspect the created resources:
```shell
kubectl get deployment nginx-deployment -n student-XX
kubectl get replicaset -n student-XX
kubectl get pods -n student-XX
```
Notice the naming chain: Deployment name -> ReplicaSet name (with hash) -> Pod name (with random suffix).
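The naming chain reflects real ownership links: every ReplicaSet carries an `ownerReferences` entry pointing at its Deployment, and every Pod points at its ReplicaSet. A quick way to see this directly (a sketch, assuming the Deployment from this task is still running):

```shell
# Print each ReplicaSet together with the controller that owns it.
# ownerReferences records the parent object's kind and name.
kubectl get replicaset -n student-XX \
  -o jsonpath='{range .items[*]}{.metadata.name}{" <- owned by "}{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}{end}'

# Same for the Pods: each one is owned by the ReplicaSet, not the Deployment.
kubectl get pods -n student-XX -l app=nginx \
  -o jsonpath='{range .items[*]}{.metadata.name}{" <- owned by "}{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}{end}'
```

This is also why deleting a Pod triggers self-healing: the ReplicaSet, as the owner, notices the missing child and recreates it.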
Test self-healing: Delete the Pod manually and watch Kubernetes recreate it:
```shell
kubectl delete pod -l app=nginx -n student-XX
kubectl get pods -n student-XX -w
```
Task 2: Scaling
Scale the Deployment to 3 replicas by updating the YAML.
Edit deployment-nginx.yaml and change replicas: 1 to replicas: 3, then apply:
```shell
kubectl apply -f deployment-nginx.yaml
kubectl get pods -n student-XX -l app=nginx
```
You should see 3 Pods:
```
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-7d4f8b6c9-abc12   1/1     Running   0          5m
nginx-deployment-7d4f8b6c9-def34   1/1     Running   0          10s
nginx-deployment-7d4f8b6c9-ghi56   1/1     Running   0          10s
```
Now scale back down to 1 replica using the imperative command:
```shell
kubectl scale deployment nginx-deployment --replicas=1 -n student-XX
kubectl get pods -n student-XX -l app=nginx
```
Only 1 Pod should remain.
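Be aware that `kubectl scale` changes only the live object in the cluster; your local YAML still says `replicas: 3`, and that value will win the next time you run `kubectl apply -f deployment-nginx.yaml`. You can confirm what the live object currently declares without touching the file:

```shell
# Read the replica count from the live Deployment object,
# which may now differ from the value in your local YAML.
kubectl get deployment nginx-deployment -n student-XX \
  -o jsonpath='{.spec.replicas}{"\n"}'
```

In practice, prefer one source of truth: either keep the YAML authoritative and scale by editing and re-applying it, or be deliberate about imperative changes.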
Clean up:
```shell
kubectl delete deployment nginx-deployment -n student-XX
```
Task 3: Full-Config Deployment (Challenge)
This is a test exercise — you will build a complete Deployment manifest yourself, integrating everything you have learned so far. No copy-pasting — use the knowledge from exercises 06-11 to write the YAML from scratch.
Architecture Overview
```mermaid
graph TB
    CM["ConfigMap: app-config-XX<br/>app.properties, config.json, ENVIRONMENT"]
    SEC["Secret: app-secrets-XX<br/>DB_PASSWORD"]
    subgraph DEP["Deployment: kuard-deployment-XX (2 replicas)"]
        subgraph POD["Pod"]
            CONT["Container: kuard"]
            ENV["env:<br/>ENVIRONMENT (from ConfigMap)<br/>DB_PASSWORD (from Secret)"]
            VOL["/config<br/>(mounted ConfigMap)"]
            RES["resources:<br/>CPU: 100m, Memory: 64Mi"]
            LP["livenessProbe: /healthy"]
            RP["readinessProbe: /ready"]
        end
    end
    CM -->|"envFrom / configMapKeyRef"| ENV
    CM -->|"volumeMounts"| VOL
    SEC -->|"secretKeyRef"| ENV
    style DEP fill:#e3f2fd,stroke:#1976d2
    style POD fill:#f0f4ff
    style CM fill:#fff3e0,stroke:#f57c00
    style SEC fill:#fce4ec,stroke:#e91e63
```
Step 1: Create configuration files
```shell
# Create app.properties
cat > app.properties << EOF
environment=production
database.url=postgres://db:5432
api.key=123456789
EOF

# Create config.json
cat > config.json << EOF
{
  "database": {
    "host": "db.example.com",
    "port": 5432
  },
  "cache": {
    "enabled": true,
    "ttl": 300
  }
}
EOF
```
Step 2: Write the YAML manifest yourself
Create a file `full-config-XX.yaml` containing all three resources separated by `---`:

- ConfigMap named `app-config-XX`:
  - Include files `app.properties` and `config.json` (use `--from-file` with `--dry-run=client -o yaml` to generate, or write inline)
  - Add a key `ENVIRONMENT` with value `production`
- Secret named `app-secrets-XX`:
  - Type: `Opaque`
  - Contains key `DB_PASSWORD` with value `admin123`
- Deployment named `kuard-deployment-XX`:
  - 2 replicas
  - Image: `<ACR_NAME>.azurecr.io/kuard:1`
  - QoS Class: Guaranteed (requests = limits)
    - CPU: 100m
    - Memory: 64Mi
  - Health checks:
    - Liveness probe: HTTP GET endpoint `/healthy`
    - Readiness probe: HTTP GET endpoint `/ready`
    - Both on port 8080
    - initialDelaySeconds: 5, periodSeconds: 10
  - Environment variables:
    - `ENVIRONMENT` from ConfigMap `app-config-XX`
    - `DB_PASSWORD` from Secret `app-secrets-XX`
  - ConfigMap `app-config-XX` mounted as a volume at `/config`

Hint: You can put all three resources in one file separated by `---`. Look back at exercises 07 (ConfigMap), 08 (Secrets), 10 (Resources), and 11 (Probes) if you need to refresh the syntax.
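If you take the `--dry-run` route for the ConfigMap and Secret, the generator commands look roughly like this (a sketch only; the Deployment you still write by hand and append after another `---`):

```shell
# Generate a ConfigMap manifest from the files created in Step 1
# plus a literal key, without creating anything in the cluster.
kubectl create configmap app-config-XX \
  --from-file=app.properties \
  --from-file=config.json \
  --from-literal=ENVIRONMENT=production \
  --dry-run=client -o yaml > full-config-XX.yaml

echo "---" >> full-config-XX.yaml

# Generate the Opaque Secret manifest the same way and append it.
kubectl create secret generic app-secrets-XX \
  --from-literal=DB_PASSWORD=admin123 \
  --dry-run=client -o yaml >> full-config-XX.yaml
```

Remember to replace `XX` with your student number, and note that `kubectl create secret generic` produces type `Opaque` by default.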
Step 3: Deploy and verify
```shell
kubectl apply -f full-config-XX.yaml
```
Port-forward to the kuard UI:
```shell
kubectl port-forward deployment/kuard-deployment-XX 8080:8080 -n student-XX
```
Open http://localhost:8080 and check:

- Server Env tab — `ENVIRONMENT` and `DB_PASSWORD` should be visible
- File System Browser tab — `/config/` should contain `app.properties` and `config.json`
- Liveness Probe tab — status should show healthy
- Readiness Probe tab — status should show ready
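You can run similar checks from the command line instead of the UI. Note the assumption here: these commands need basic utilities (`ls`, `env`) inside the container, which your ACR-built kuard image may or may not include; if `kubectl exec` fails with "executable not found", fall back to the kuard web UI.

```shell
# List the mounted ConfigMap files inside a Pod of the Deployment
# (kubectl exec deployment/<name> picks one of its Pods).
kubectl exec -n student-XX deployment/kuard-deployment-XX -- ls /config

# Confirm both environment variables are set in the container.
kubectl exec -n student-XX deployment/kuard-deployment-XX -- env | grep -E 'ENVIRONMENT|DB_PASSWORD'
```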
Success Criteria
- Deployment runs 2 replicas (both `1/1 Running`)
- QoS Class is `Guaranteed` (verify: `kubectl get pod -l app=kuard-deployment-XX -o yaml | grep qosClass`)
- Both config files are visible in `/config`
- Both environment variables are set correctly
- Health checks pass
Clean Up
```shell
kubectl delete -f full-config-XX.yaml
```
Useful Commands
| Command | Description |
|---|---|
| `kubectl get deployment -n student-XX` | List all Deployments |
| `kubectl describe deployment <name> -n student-XX` | Show Deployment details and events |
| `kubectl get replicaset -n student-XX` | List all ReplicaSets |
| `kubectl scale deployment <name> --replicas=N -n student-XX` | Imperatively scale a Deployment |
| `kubectl rollout status deployment <name> -n student-XX` | Watch rollout progress |
| `kubectl rollout history deployment <name> -n student-XX` | View rollout history |
Common Problems
| Problem | Possible Cause | Solution |
|---|---|---|
| Pods stuck in `Pending` | Insufficient cluster resources for the requested CPU/memory | Reduce resource requests or check node capacity |
| Pods in `CrashLoopBackOff` | Application error, misconfigured probes, or missing ConfigMap/Secret | Check logs with `kubectl logs` and events with `kubectl describe pod` |
| Only some Pods are `READY` | Readiness probe failing on some replicas | Check probe configuration and application health on each Pod |
| ConfigMap/Secret changes not reflected | Pods use cached values from when they started | Restart the Deployment: `kubectl rollout restart deployment <name>` |
| `selector` does not match `template.metadata.labels` | Mismatched labels between Deployment selector and Pod template | Ensure `spec.selector.matchLabels` matches `spec.template.metadata.labels` exactly |
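For the `CrashLoopBackOff` case in particular, a typical debugging flow looks like this (`<pod-name>` is a placeholder for one of your failing Pods):

```shell
# Events often reveal probe failures or missing ConfigMap/Secret references.
kubectl describe pod <pod-name> -n student-XX

# Logs of the current container attempt...
kubectl logs <pod-name> -n student-XX

# ...and of the previous, crashed attempt, which usually holds the real error.
kubectl logs <pod-name> -n student-XX --previous
```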
Best Practices
- Never create bare Pods in production — Always use a Deployment (or another controller) so that Pods are self-healing and manageable.
- Use labels consistently — The `selector.matchLabels` must match the Pod template labels. Use a consistent labeling scheme across all resources.
- Keep all related resources in one file — Using `---` separators, you can define ConfigMap, Secret, and Deployment in a single file for easier management.
- Set resource requests and limits — Ensures Guaranteed QoS and predictable scheduling.
- Configure liveness and readiness probes — Makes your application self-healing and prevents traffic from reaching unhealthy Pods.
- Use `kubectl apply` instead of `kubectl create` — `apply` is declarative and idempotent, allowing you to update resources by re-applying the same file.
Summary
In this exercise you learned:
- The Deployment -> ReplicaSet -> Pod hierarchy and how self-healing works
- How to create a simple Deployment and watch Pod creation
- How to scale a Deployment both declaratively (YAML) and imperatively (`kubectl scale`)
- How to build a full-config Deployment that integrates ConfigMap, Secret, resource limits, probes, and volume mounts
- How to verify the complete configuration using kuard's web interface
- That Deployments are the standard way to run stateless applications in production