Workload Controllers Quiz
Name the three nested spec levels in a Deployment manifest and what each controls.

There are three nested spec levels:

Deployment spec (top-level): Controls the Deployment lifecycle
- replicas: How many pods to maintain
- strategy: Update strategy (RollingUpdate, Recreate)
- selector: Which pods this Deployment manages

Pod template spec (under template.spec): Defines pod configuration
- volumes: Storage volumes
- restartPolicy: Container restart behavior
- securityContext: Pod-level security settings

Container spec (under template.spec.containers): Individual container settings
- image: Container image to run
- ports: Exposed ports
- resources: CPU/memory requests and limits
- probes: Liveness and readiness probes
Did you get it right?
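The three levels map onto a manifest like this sketch (the name, labels, image, and resource values are illustrative, not from the quiz):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example            # hypothetical name
spec:                      # 1. Deployment spec: controls the Deployment lifecycle
  replicas: 3
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:                  # 2. Pod template spec: pod-level configuration
      restartPolicy: Always
      containers:
      - name: web          # 3. Container spec: per-container settings
        image: nginx:1.25  # assumed image for illustration
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
```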
maxSurge: 1 and maxUnavailable: 0, which statements are TRUE?maxSurge: 1 allows 1 extra pod temporarily. maxUnavailable: 0 ensures no pods go unavailable, maintaining full availability. New pods must be ready before old ones are removed. Both ReplicaSets exist during the update. Option 3 is false - new pods are created first.maxSurge: 1 allows 1 extra pod temporarily. maxUnavailable: 0 ensures no pods go unavailable, maintaining full availability. New pods must be ready before old ones are removed. Both ReplicaSets exist during the update. Option 3 is false - new pods are created first.nginx:1.21?apiVersion: apps/v1
kind: Deployment
metadata:
name: web-app
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
template:
spec:
containers:
- name: web
image: nginx:1.22maxSurge: 0 (no extra pods) and maxUnavailable: 1 (one can be down), Kubernetes terminates 1 old pod first, then creates 1 new pod with the updated image. This continues sequentially, causing brief moments with only 2 pods available.maxSurge: 0 (no extra pods) and maxUnavailable: 1 (one can be down), Kubernetes terminates 1 old pod first, then creates 1 new pod with the updated image. This continues sequentially, causing brief moments with only 2 pods available.pod-template-hash label is manually added by users to track different versions of pods.pod-template-hash label is automatically generated by the Deployment controller. It’s a hash of the pod template specification and is used to uniquely identify ReplicaSets and prevent selector conflicts during rolling updates.pod-template-hash label is automatically generated by the Deployment controller. It’s a hash of the pod template specification and is used to uniquely identify ReplicaSets and prevent selector conflicts during rolling updates.revisionHistoryLimit parameter controls how many old _____ are retained for rollback capability.revisionHistoryLimit (default: 10) specifies how many old ReplicaSets to keep. These scaled-to-zero ReplicaSets allow you to rollback to previous versions. Setting it to 0 would prevent rollbacks.revisionHistoryLimit (default: 10) specifies how many old ReplicaSets to keep. These scaled-to-zero ReplicaSets allow you to rollback to previous versions. Setting it to 0 would prevent rollbacks.minReadySeconds: 30 on a Deployment. A new pod passes its readiness probe at t=10s but crashes at t=25s. What happens?minReadySeconds requires a pod to stay ready for the specified duration before being considered available. If the pod crashes during this period, it’s NOT considered available, and the rolling update pauses to prevent rolling out a bad version.minReadySeconds requires a pod to stay ready for the specified duration before being considered available. 
If the pod crashes during this period, it’s NOT considered available, and the rolling update pauses to prevent rolling out a bad version.strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: _____
maxUnavailable: 0maxSurge: 3 (or 100%) to allow all 3 new pods to be created alongside the 3 old pods. Combined with maxUnavailable: 0, this ensures all old pods remain until all new pods are ready.maxSurge: 3 (or 100%) to allow all 3 new pods to be created alongside the 3 old pods. Combined with maxUnavailable: 0, this ensures all old pods remain until all new pods are ready.database has 3 replicas. You scale it to 5 replicas. What happens?kubectl scale statefulset database --replicas=5database-3 is created first. Only after it becomes Ready will database-4 be created. This ordered scaling ensures each pod is stable before proceeding.database-3 is created first. Only after it becomes Ready will database-4 be created. This ordered scaling ensures each pod is stable before proceeding.clusterIP: None) is required for StatefulSets. It creates DNS entries like <pod-name>.<service-name>.<namespace>.svc.cluster.local, providing stable network identities for each pod.clusterIP: None) is required for StatefulSets. It creates DNS entries like <pod-name>.<service-name>.<namespace>.svc.cluster.local, providing stable network identities for each pod.volumeClaimTemplates:
- Automatically creates a unique PersistentVolumeClaim for each pod in the StatefulSet
- Each PVC is bound to a specific pod ordinal (e.g., data-mysql-0, data-mysql-1)
- PVCs persist even if pods are deleted - when a pod is recreated, it reattaches to the same PVC
- Scaling up creates new PVCs; scaling down does NOT delete PVCs (manual cleanup required)
vs. Regular PersistentVolumeClaim:
- Regular PVC is manually created and all pods reference the same PVC by name
- With ReadWriteOnce (RWO), only one pod can mount it at a time
- With ReadWriteMany (RWX), all pods share the same storage (not isolated)
- volumeClaimTemplates provide per-pod isolated persistent storage
- Essential for stateful applications where each instance needs its own data
Did you get it right?
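A minimal sketch of per-pod storage via volumeClaimTemplates (the mysql name, image, and storage size are assumptions for illustration):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql          # headless service providing stable DNS names
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0      # assumed image
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:       # creates one PVC per pod: data-mysql-0, data-mysql-1, data-mysql-2
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Each pod gets its own isolated PVC; deleting a pod and letting the controller recreate it reattaches the same claim.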
Why doesn’t a DaemonSet have a replicas field?

A DaemonSet has no replicas field - the number of nodes determines the pod count. If a node is added, a new pod is automatically created on it.

A cluster has 5 nodes, 2 of which are labeled accelerator=nvidia. How many pods does this DaemonSet create?

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gpu-plugin
spec:
  selector:
    matchLabels:
      app: gpu
  template:
    metadata:
      labels:
        app: gpu
    spec:
      nodeSelector:
        accelerator: nvidia
      containers:
      - name: plugin
        image: gpu-plugin:1.0

The nodeSelector restricts the DaemonSet to only nodes with the accelerator=nvidia label. Even though there are 5 total nodes, only 2 match the selector, so only 2 pods are created.

How does restartPolicy differ between Jobs and Deployments?

Jobs use restartPolicy: Never or OnFailure, while Deployments use Always.

Which fields are valid in a Job spec?

completions (total pods needed), parallelism (concurrent pods), backoffLimit (retry count), and ttlSecondsAfterFinished (cleanup time). There is no replicas field - that’s for Deployments/ReplicaSets.

A Job has completions: 10 and parallelism: 3. How does it execute?

spec:
  completions: 10
  parallelism: 3
  template:
    spec:
      containers:
      - name: worker
        image: processor:1.0
      restartPolicy: OnFailure

Up to 3 pods run concurrently; as each pod completes successfully, a new one is started until 10 successful completions are reached.

For Jobs, which restartPolicy values are allowed?

Jobs allow restartPolicy: Never (don’t restart, create new pod on failure) or OnFailure (restart container on failure). The Always policy is NOT allowed for Jobs because they are meant to run to completion, not continuously.

CronJob concurrencyPolicy options:
Allow (default)
- Multiple job instances can run simultaneously
- Use when: Jobs are independent and can overlap safely
- Example: Hourly log analysis where overlapping is fine
Forbid
- Skip the new job if the previous one hasn’t finished
- Use when: Jobs must not overlap (e.g., exclusive resource access)
- Example: Database backup that locks tables
Replace
- Cancel the running job and start the new one
- Use when: Only the latest execution matters
- Example: Generating a “current status” report where old jobs are obsolete
Did you get it right?
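For example, a backup job that must never overlap with a previous run could set Forbid (the name, schedule, and image here are illustrative assumptions):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup             # hypothetical name
spec:
  schedule: "0 3 * * *"       # daily at 3:00 AM
  concurrencyPolicy: Forbid   # skip this run if the previous job is still active
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: backup:1.0 # assumed image
```

With Forbid, a backup that overruns its schedule simply causes the next scheduled run to be skipped rather than starting a second, competing backup.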
Fill in the schedule so the backup runs daily at 2:30 AM:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: daily-backup
spec:
  schedule: "_____"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: backup:1.0

The cron format is minute hour day month day-of-week. For 2:30 AM daily: 30 (minute) 2 (hour) * (any day) * (any month) * (any day of week) = 30 2 * * *.

How can you temporarily stop all pods of a Deployment without deleting it?

Scaling to zero (kubectl scale deploy <name> --replicas=0) stops all pods while preserving the Deployment configuration. You can later scale back up. rollout pause stops updates but doesn’t stop pods. suspend is for CronJobs, not Deployments.

Which controllers create long-running pods, and which create pods that run to completion?

Deployments, StatefulSets, and DaemonSets create long-running pods (restartPolicy: Always). Jobs and CronJobs create pods that run to completion with restartPolicy: Never or OnFailure.

Deployment web-app is currently at revision 5. What does this command do?

kubectl rollout undo deployment web-app --to-revision=3

The --to-revision=3 flag explicitly rolls back to revision 3, regardless of the current revision. Without this flag, kubectl rollout undo would rollback to the immediate previous revision (4 in this case).

Why does a Deployment keep old, scaled-down ReplicaSets?

They are retained (up to the revisionHistoryLimit count) to enable rollbacks. This allows quick restoration to previous versions.

Fill in the blank: the progressDeadlineSeconds parameter sets a timeout for deployment progress. The default value is _____ seconds.

The default progressDeadlineSeconds is 600 seconds (10 minutes). If a deployment doesn’t make progress within this time (new pod ready, old pod terminated, or status change), it’s marked as failed.

How do rolling updates work for DaemonSets?

DaemonSets support updateStrategy.type: RollingUpdate and updateStrategy.rollingUpdate.maxUnavailable, which specifies how many nodes can update simultaneously. However, unlike Deployments, DaemonSets don’t support canary deployments.

What does the Deployment selector field do?

The selector field uses label matching to identify which pods the Deployment manages. It must match the labels in template.metadata.labels. This allows the Deployment controller to track and manage the correct pods.

When should you use each controller?

Deployment:
- Use for: Stateless applications (web apps, APIs, microservices)
- Characteristics: Pods are interchangeable, ephemeral storage, parallel deployment
- Example: REST API server, web frontend, stateless processing service
StatefulSet:
- Use for: Stateful applications requiring stable identity and stable storage
- Characteristics: Ordered pods, stable network IDs, stable per-pod storage (each pod always reattaches to its specific PVC)
- Example: Databases (MySQL, PostgreSQL), message brokers (Kafka, RabbitMQ), distributed systems (Zookeeper, etcd)
DaemonSet:
- Use for: Cluster-wide services that must run on every node (or selected nodes)
- Characteristics: One pod per node, automatic scaling with cluster
- Example: Log collectors (Fluentd), monitoring agents (Prometheus Node Exporter), network plugins (Calico), storage daemons
Did you get it right?
A Job has backoffLimit: 3. The pod fails 4 times. What happens?

spec:
  backoffLimit: 3
  template:
    spec:
      containers:
      - name: worker
        image: processor:1.0
      restartPolicy: Never

backoffLimit: 3 means the Job will retry up to 3 times after the initial failure. After 4 total attempts (1 initial + 3 retries) all fail, the Job is marked as Failed and stops creating new pods.

Which parameters control rolling update speed?

minReadySeconds (stability wait), progressDeadlineSeconds (timeout), maxSurge (creation rate), and maxUnavailable (termination rate). revisionHistoryLimit only affects cleanup, not update speed.

Summarize what each controller provides:

ReplicaSet: Maintains pod count and self-healing (but no update strategy)
Deployment: Provides rolling updates, rollbacks, and zero-downtime deployments
StatefulSet: Manages stateful apps with stable identity, ordered deployment, and stable storage bindings
DaemonSet: Ensures node-level coverage for cluster-wide services
Did you get it right?