Pod Lifecycle and Scheduling Quiz
Question 1
A Pod is stuck in the Pending phase with nodeName=worker-1 set. What is the most likely cause?
Correct!
When nodeName is set but the Pod is still Pending, it means the scheduler has already assigned the node, but the kubelet is likely pulling images, creating containers, or running init containers.
Incorrect
When nodeName is set but the Pod is still Pending, it means the scheduler has already assigned the node, but the kubelet is likely pulling images, creating containers, or running init containers.
Check the pod lifecycle diagram - what happens after the scheduler assigns a node?
Question 2
Which of the following will cause a container to restart when restartPolicy: Always is set?
Correct!
restartPolicy: Always restarts containers on ANY termination (exit code 0 or non-zero), crashes, and liveness probe failures. Node resource exhaustion would cause pod eviction, not container restart.
Incorrect
restartPolicy: Always restarts containers on ANY termination (exit code 0 or non-zero), crashes, and liveness probe failures. Node resource exhaustion would cause pod eviction, not container restart.
Remember that ‘Always’ means restart on ANY termination, not just failures.
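A minimal sketch of a pod whose container restarts on any exit (the name, image, and probe path are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: always-restart-demo   # hypothetical name
spec:
  restartPolicy: Always        # restart on ANY termination (exit 0 or non-zero)
  containers:
  - name: app
    image: nginx
    livenessProbe:             # a failing liveness probe also triggers a restart
      httpGet:
        path: /healthz
        port: 80
      failureThreshold: 3
```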
Question 3
The scheduler’s filtering phase removes nodes that have insufficient resources, while the scoring phase ranks the remaining nodes.
Correct!
The filtering phase eliminates unsuitable nodes (insufficient resources, taint mismatches, etc.), and then the scoring phase ranks the remaining candidates to select the best one.
Incorrect
The filtering phase eliminates unsuitable nodes (insufficient resources, taint mismatches, etc.), and then the scoring phase ranks the remaining candidates to select the best one.
Think about the two-phase approach: eliminate first, then rank.
Question 4
Arrange the scheduling process steps in the correct order:
Drag to arrange from first to last
Scheduler watches for unscheduled pods
Pod created (nodeName=null)
Filtering phase (remove unsuitable nodes)
Scoring phase (rank remaining nodes)
Select highest-scored node
Bind pod to node
Correct!
The scheduler follows this exact sequence: watch for unscheduled pods → filter unsuitable nodes → score remaining nodes → select best → bind to node.
Incorrect
The scheduler follows this exact sequence: watch for unscheduled pods → filter unsuitable nodes → score remaining nodes → select best → bind to node.
Question 5
Given this configuration, what will happen if you try to schedule a pod without the required toleration?
# Node taint:
kubectl taint nodes worker-1 gpu=nvidia:NoSchedule
# Pod spec (no tolerations):
apiVersion: v1
kind: Pod
metadata:
  name: regular-app
spec:
  containers:
  - name: app
    image: nginx
Correct!
The NoSchedule taint effect prevents pods without matching tolerations from being scheduled on the node. The pod will remain Pending until it can find a suitable node or gets a matching toleration.
Incorrect
The NoSchedule taint effect prevents pods without matching tolerations from being scheduled on the node. The pod will remain Pending until it can find a suitable node or gets a matching toleration.
NoSchedule affects new pod scheduling, not running pods.
Question 6
What is the key difference between nodeSelector and nodeAffinity?
Correct!
nodeAffinity provides more expressiveness with required (hard) and preferred (soft) constraints, plus operators like In, NotIn, Exists, Gt, and Lt. nodeSelector only supports exact label matching with AND logic.
Incorrect
nodeAffinity provides more expressiveness with required (hard) and preferred (soft) constraints, plus operators like In, NotIn, Exists, Gt, and Lt. nodeSelector only supports exact label matching with AND logic.
Think about flexibility - which one offers ‘preferred’ scheduling?
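The difference can be sketched side by side (the disktype label and its values are assumptions):

```yaml
# nodeSelector: exact label match only (hard constraint, AND logic)
spec:
  nodeSelector:
    disktype: ssd
---
# nodeAffinity: richer operators, plus soft (preferred) variants
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard constraint
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In          # also NotIn, Exists, DoesNotExist, Gt, Lt
            values: ["ssd", "nvme"]
```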
Question 7
In pod affinity rules, the topologyKey field defines the scope of co-location. What is the topologyKey value to ensure pods are scheduled on the same physical node?
Correct!
The kubernetes.io/hostname label uniquely identifies each node, so using it as a topologyKey ensures pods are co-located on the exact same physical node.
Incorrect
The kubernetes.io/hostname label uniquely identifies each node, so using it as a topologyKey ensures pods are co-located on the exact same physical node.
It’s a built-in Kubernetes label that identifies individual nodes.
Question 8
Complete the PodDisruptionBudget to ensure at least 3 pods remain available during voluntary disruptions:
Fill in the missing field name and value
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  _____: 3
  selector:
    matchLabels:
      app: web
Correct!
The minAvailable field specifies the minimum number of pods that must remain running during voluntary disruptions. Alternatively, you could use maxUnavailable to specify the maximum number that can be down.
Incorrect
The minAvailable field specifies the minimum number of pods that must remain running during voluntary disruptions. Alternatively, you could use maxUnavailable to specify the maximum number that can be down.
Question 9
Which scenarios are considered VOLUNTARY disruptions that PodDisruptionBudgets protect against?
Correct!
PDBs protect against human-initiated or automated voluntary disruptions like draining nodes, rolling updates, and manual deletions. They do NOT protect against involuntary disruptions like hardware failures, resource exhaustion, or kernel panics.
Incorrect
PDBs protect against human-initiated or automated voluntary disruptions like draining nodes, rolling updates, and manual deletions. They do NOT protect against involuntary disruptions like hardware failures, resource exhaustion, or kernel panics.
Think ‘planned’ vs ‘unexpected’ - PDBs protect against planned operations.
Question 10
What is Pod Priority and how does it differ from Preemption?
Pod Priority assigns importance levels to pods using PriorityClass, determining scheduling order when resources are available.
Preemption is the action of evicting lower-priority pods to make room for higher-priority pods when the cluster is at capacity.
Key difference:
- Priority = scheduling order (which pod goes first)
- Preemption = resource reclamation (whether to evict others)
You can have priority WITHOUT preemption by setting preemptionPolicy: Never.
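A sketch of such a PriorityClass (the name, value, and description are assumptions):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-no-preempt    # hypothetical name
value: 100000              # higher value = scheduled first when resources allow
preemptionPolicy: Never    # will not evict lower-priority pods to make room
globalDefault: false
description: "High scheduling priority, but never preempts running pods."
```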
Question 11
A pod has resource requests and limits defined, but the values are not equal (requests < limits). What QoS class will it be assigned?
Correct!
This is Burstable QoS because the pod has resource requests and limits defined, but they are not equal. Guaranteed requires requests == limits for all resources. BestEffort has no resources defined.
Incorrect
This is Burstable QoS because the pod has resource requests and limits defined, but they are not equal. Guaranteed requires requests == limits for all resources. BestEffort has no resources defined.
Look for whether requests equal limits or not.
Question 12
What is the correct eviction order when a node experiences memory pressure?
Node memory pressure detected!
Running pods:
- Pod A: BestEffort (no resources defined)
- Pod B: Burstable (using more than requests)
- Pod C: Guaranteed (requests == limits)
- Pod D: Burstable (within requests)
Correct!
Eviction order under resource pressure: 1) BestEffort first, 2) Burstable pods exceeding requests, 3) Burstable pods within requests, 4) Guaranteed pods last (only as last resort).
Incorrect
Eviction order under resource pressure: 1) BestEffort first, 2) Burstable pods exceeding requests, 3) Burstable pods within requests, 4) Guaranteed pods last (only as last resort).
Best effort gets evicted first, guaranteed last.
Question 13
The preStop hook executes AFTER the SIGTERM signal is sent to the container.
Correct!
False! The preStop hook executes BEFORE SIGTERM. Flow: Pod deleted → preStop runs (blocking) → SIGTERM sent → wait for graceful shutdown → SIGKILL if needed.
Incorrect
False! The preStop hook executes BEFORE SIGTERM. Flow: Pod deleted → preStop runs (blocking) → SIGTERM sent → wait for graceful shutdown → SIGKILL if needed.
The name ‘preStop’ gives a clue about when it runs.
Question 14
You set terminationGracePeriodSeconds: 60 and your preStop hook runs for 45 seconds. How much time does your application have to handle SIGTERM and shut down gracefully?
Correct!
The terminationGracePeriodSeconds is a TOTAL budget that includes preStop execution time. If preStop takes 45s, only 15s remain for the application to handle SIGTERM before SIGKILL is sent at the 60s mark.
Incorrect
The terminationGracePeriodSeconds is a TOTAL budget that includes preStop execution time. If preStop takes 45s, only 15s remain for the application to handle SIGTERM before SIGKILL is sent at the 60s mark.
It’s a total budget, not separate time allowances.
Question 15
What exit code indicates that a container was forcefully killed with SIGKILL?
Correct!
Exit code 137 = 128 + 9 (SIGKILL signal number). When a process is terminated by a signal, the exit code is 128 plus the signal number. SIGKILL (exit code 137) indicates forceful termination, often because the container didn’t exit within terminationGracePeriodSeconds. For reference, SIGTERM has exit code 143 (128 + 15).
Incorrect
Exit code 137 = 128 + 9 (SIGKILL signal number). When a process is terminated by a signal, the exit code is 128 plus the signal number. SIGKILL (exit code 137) indicates forceful termination, often because the container didn’t exit within terminationGracePeriodSeconds. For reference, SIGTERM has exit code 143 (128 + 15).
It’s 128 plus the signal number for SIGKILL (9).
Question 16
What happens when you apply this topology spread constraint with 3 replicas across 3 zones?
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: web
Correct!
With maxSkew: 1 and 3 replicas across 3 zones, pods will be evenly distributed (1 per zone) because the maximum difference between any two zones cannot exceed 1. This achieves perfect balance: 1-0=1, satisfying the constraint.
Incorrect
With maxSkew: 1 and 3 replicas across 3 zones, pods will be evenly distributed (1 per zone) because the maximum difference between any two zones cannot exceed 1. This achieves perfect balance: 1-0=1, satisfying the constraint.
Calculate: what’s the most even distribution possible with maxSkew=1?
Question 17
Which taint effects will cause existing pods WITHOUT matching tolerations to be evicted?
Correct!
Only NoExecute evicts existing pods without matching tolerations. NoSchedule and PreferNoSchedule only affect new pod scheduling, allowing existing pods to continue running. ScheduleAnyway is not a valid taint effect.
Incorrect
Only NoExecute evicts existing pods without matching tolerations. NoSchedule and PreferNoSchedule only affect new pod scheduling, allowing existing pods to continue running. ScheduleAnyway is not a valid taint effect.
The word ‘Execute’ relates to running pods, not just scheduling.
Question 18
Explain the difference between Pod Affinity and Pod Anti-Affinity, with use cases.
Pod Affinity:
- Attracts pods to nodes where certain other pods are running
- Purpose: Co-locate related pods
- Use case: Schedule app pod near its cache pod (reduce network latency)
Pod Anti-Affinity:
- Repels pods from nodes where certain other pods are running
- Purpose: Separate pods for high availability
- Use case: Spread replicas across different nodes/zones (avoid single point of failure)
topologyKey determines scope:
- kubernetes.io/hostname = same/different node
- topology.kubernetes.io/zone = same/different availability zone
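The high-availability use case can be sketched with anti-affinity spreading replicas across nodes (the app label is an assumption):

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web                            # repel pods carrying this label
        topologyKey: kubernetes.io/hostname     # never two such pods on one node
```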
Question 19
A deployment has 5 replicas with a PodDisruptionBudget of minAvailable: 3. You run kubectl drain node-1 which has 3 of the 5 pods. What happens?
Correct!
The drain operation will respect the PDB by evicting pods gradually. It can evict up to 2 pods (5-3=2) while ensuring at least 3 remain available. As evicted pods reschedule on other nodes, the drain can continue until all pods are moved.
Incorrect
The drain operation will respect the PDB by evicting pods gradually. It can evict up to 2 pods (5-3=2) while ensuring at least 3 remain available. As evicted pods reschedule on other nodes, the drain can continue until all pods are moved.
PDBs ensure minimum availability - drain must work around this constraint.
Question 20
Complete the node affinity rule to PREFER (soft constraint) SSD nodes:
Fill in the missing field name for soft preferences
affinity:
  nodeAffinity:
    _____:
    - weight: 100
      preference:
        matchExpressions:
        - key: disktype
          operator: In
          values:
          - ssd
Correct!
preferredDuringSchedulingIgnoredDuringExecution creates a soft constraint - the scheduler tries to match but will schedule anyway if impossible. Compare with requiredDuringSchedulingIgnoredDuringExecution for hard constraints.
Incorrect
preferredDuringSchedulingIgnoredDuringExecution creates a soft constraint - the scheduler tries to match but will schedule anyway if impossible. Compare with requiredDuringSchedulingIgnoredDuringExecution for hard constraints.
Question 21
When using nodeName to assign a pod to a specific node, which of the following is TRUE?
Correct!
When nodeName is set, the scheduler is completely bypassed. No resource checks, no taint validation, no affinity evaluation - the pod is directly assigned to the specified node. This can lead to scheduling failures if the node doesn’t exist or can’t run the pod.
Incorrect
When nodeName is set, the scheduler is completely bypassed. No resource checks, no taint validation, no affinity evaluation - the pod is directly assigned to the specified node. This can lead to scheduling failures if the node doesn’t exist or can’t run the pod.
Think about the trade-off: speed vs safety checks.
Question 22
Setting preemptionPolicy: Never on a PriorityClass means the pod will have low priority and can be preempted by others.
Correct!
False! preemptionPolicy: Never means this pod will NOT preempt (evict) other lower-priority pods, even if it has high priority. The pod still benefits from priority for queue ordering, but won’t kick out running workloads.
Incorrect
False! preemptionPolicy: Never means this pod will NOT preempt (evict) other lower-priority pods, even if it has high priority. The pod still benefits from priority for queue ordering, but won’t kick out running workloads.
The policy controls what the pod CAN DO, not what can be done TO it.
Question 23
Which of the following are valid node affinity operators?
Correct!
Valid node affinity operators are: In, NotIn, Exists, DoesNotExist, Gt (greater than), and Lt (less than). Contains and Equals are not valid operators.
Incorrect
Valid node affinity operators are: In, NotIn, Exists, DoesNotExist, Gt (greater than), and Lt (less than). Contains and Equals are not valid operators.
Think about set operations and numeric comparisons.
Question 24
A pod with this configuration is deleted. What happens at t=30s if the preStop hook is still running?
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 60"]
Correct!
At t=30s, the terminationGracePeriodSeconds budget is exhausted, so SIGKILL is sent immediately, even if preStop is still running. This is why preStop duration MUST be less than the total grace period.
Incorrect
At t=30s, the terminationGracePeriodSeconds budget is exhausted, so SIGKILL is sent immediately, even if preStop is still running. This is why preStop duration MUST be less than the total grace period.
The grace period is a hard limit - it’s not extended for hooks.
Question 25
What is the difference between Topology Spread Constraints and Pod Anti-Affinity?
Pod Anti-Affinity:
- Relationship-based: “Keep pods with label X away from pods with label Y”
- Can be hard (required) or soft (preferred with weights)
- Works across any topology domain defined by topologyKey (node, zone, region, etc.)
- Good for: High availability (spread replicas), workload isolation
Topology Spread Constraints:
- Distribution-based: “Spread pods evenly within a max skew of N”
- Fine-grained control via maxSkew (1, 2, 3…) for balanced distribution
- Focuses on balanced distribution across topology domains
- Good for: Even load distribution, multi-zone deployments
Key Difference:
- Anti-affinity: “Don’t schedule near pods with label X” (defines relationships)
- Topology spread: “Balance across domains with max skew N” (defines distribution)
Example:
- Anti-affinity with topologyKey: kubernetes.io/hostname: “Never 2 replicas on same node”
- Topology spread with maxSkew: 1: “Across 3 zones, distribution can be 2-2-1 but not 3-1-1”
Question 26
What is the default value for terminationGracePeriodSeconds if not specified?
Correct!
The default terminationGracePeriodSeconds is 30 seconds if not explicitly specified. This gives pods 30 seconds total for preStop hooks and graceful shutdown before SIGKILL.
Incorrect
The default terminationGracePeriodSeconds is 30 seconds if not explicitly specified. This gives pods 30 seconds total for preStop hooks and graceful shutdown before SIGKILL.
It’s the most commonly seen value in Kubernetes documentation.
Question 27
What is the scheduling phase called where unsuitable nodes are eliminated before scoring?
Correct!
The filtering phase (also called predicate phase) eliminates nodes that cannot run the pod due to insufficient resources, taint mismatches, node selector conflicts, or affinity violations. The remaining nodes proceed to the scoring phase.
Incorrect
The filtering phase (also called predicate phase) eliminates nodes that cannot run the pod due to insufficient resources, taint mismatches, node selector conflicts, or affinity violations. The remaining nodes proceed to the scoring phase.
It’s about removing/eliminating unsuitable options.
Question 28
A Pod’s QoS class can be changed after the Pod is created by updating its resource requests and limits.
Correct!
False! QoS class is determined at Pod creation based on resource definitions and cannot be changed afterward. You must delete and recreate the Pod with different resource specifications to change its QoS class.
Incorrect
False! QoS class is determined at Pod creation based on resource definitions and cannot be changed afterward. You must delete and recreate the Pod with different resource specifications to change its QoS class.
Think about immutability - many pod specs cannot be changed after creation.
Question 29
Which built-in Kubernetes taint is automatically applied when a node becomes unready?
Correct!
Kubernetes automatically applies node.kubernetes.io/not-ready:NoExecute when a node becomes unready. The NoExecute effect means both new pods cannot schedule AND existing pods without tolerations are evicted.
Incorrect
Kubernetes automatically applies node.kubernetes.io/not-ready:NoExecute when a node becomes unready. The NoExecute effect means both new pods cannot schedule AND existing pods without tolerations are evicted.
NoExecute is used because running pods should be moved off unhealthy nodes.
Question 30
Complete the toleration to allow scheduling on nodes tainted with ANY value for the ‘gpu’ key:
Fill in the operator type
tolerations:
- key: "gpu"
  operator: "_____"
  effect: "NoSchedule"
Correct!
The Exists operator tolerates any value for the specified key. This is useful when you want to tolerate a taint regardless of its value. Compare with Equal which requires an exact value match.
Incorrect
The Exists operator tolerates any value for the specified key. This is useful when you want to tolerate a taint regardless of its value. Compare with Equal which requires an exact value match.