Services Quiz
Background Synchronization (NOT queried during traffic):
Endpoints work as a background synchronization mechanism:
Setup Phase: Endpoints Controller watches pods matching Service selector → Updates Endpoints object → kube-proxy watches Endpoints → Programs iptables/IPVS rules
Active Traffic: Request hits Service IP → Kernel networking stack → Pre-programmed iptables/IPVS rules → Direct routing to Pod IP
Key Point: Endpoints are NOT queried during active traffic. They update iptables/IPVS rules asynchronously in the background.
Benefits:
- Kernel-level routing (no API calls)
- No control plane bottleneck
- Sub-millisecond, in-kernel routing decisions
- Traffic continues even if API server is slow
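The objects kube-proxy watches can be inspected directly. A typical EndpointSlice backing a Service might look roughly like this (the name, label value, and IP are illustrative, not from the source):

```yaml
# Illustrative EndpointSlice (name, Service name, and IPs are hypothetical)
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: backend-api-abc12                      # controller-generated suffix
  labels:
    kubernetes.io/service-name: backend-api    # links the slice to its Service
addressType: IPv4
endpoints:
- addresses:
  - 10.244.1.5                                 # Pod IP
  conditions:
    ready: true                                # only Ready pods receive traffic
ports:
- port: 8080                                   # matches the Service's targetPort
  protocol: TCP
```

kube-proxy consumes objects like this in the background and turns them into iptables/IPVS rules; nothing reads them on the data path.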
Did you get it right?
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 8080
Use kubectl port-forward svc/backend-api 8080:80 to create a local tunnel, then access it via localhost:8080.
Did you get it right?
Service DNS names follow the pattern <service-name>.<namespace>.svc.cluster.local. The svc component identifies it as a Service resource. Example: api.default.svc.cluster.local.
Did you get it right?
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: _____
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
The NodePort type exposes the Service on each node's IP at a static port (30080 in this case), allowing external access via any node's IP address.
Did you get it right?
LoadBalancer Traffic Flow:
Client (external)
↓
Cloud Load Balancer (AWS ELB, GCP LB, Azure LB)
↓
Node IP:NodePort (LB distributes across nodes)
↓
iptables/IPVS rules (programmed by kube-proxy)
↓
DNAT: NodePort → Pod IP (e.g., 10.244.1.5:8080)
↓
Pod
Key Points:
- LoadBalancer type automatically creates a NodePort
- Cloud provider’s LB uses NodePorts as backend targets
- kube-proxy programs iptables/IPVS rules
- ClusterIP is also created but data plane traffic flows directly from NodePort to Pod via DNAT
- Only works with cloud providers (AWS, GCP, Azure)
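A minimal manifest producing the flow above might look like this (the name and selector are illustrative, not from the source):

```yaml
# Sketch of a LoadBalancer Service (name and selector are hypothetical)
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80          # port exposed by the cloud load balancer
    targetPort: 8080  # port the pods listen on
# Kubernetes also allocates a NodePort automatically;
# the cloud provider's LB uses that NodePort as its backend target.
```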
Did you get it right?
A headless Service sets clusterIP: None, which means no ClusterIP is allocated. DNS queries return individual Pod IPs instead of a single Service IP, allowing direct pod-to-pod connectivity without kube-proxy load balancing.
Did you get it right?
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
  - port: 3306
With a StatefulSet, each pod gets a stable DNS name: mysql-0.mysql.default.svc.cluster.local, mysql-1.mysql.default.svc.cluster.local, etc. This enables direct access to individual pods, while the Service name itself still resolves to all pod IPs.
Did you get it right?
ClusterIP (default):
- Internal cluster communication only
- Use for: Microservices, internal APIs, backends
- Access: Within cluster via DNS or ClusterIP
NodePort:
- Exposes Service on each node’s IP at a static port (30000-32767)
- Use for: Development/testing, direct node access
- Access: External via <NodeIP>:<NodePort>
- Production: ❌ Use LoadBalancer or Ingress instead
LoadBalancer:
- Creates external cloud load balancer
- Use for: Production external access (cloud only)
- Access: External via cloud LB IP
Headless (clusterIP: None):
- Returns pod IPs directly, no load balancing
- Use for: StatefulSets, direct pod access, custom load balancing
- Access: DNS returns individual pod IPs
ExternalName:
- DNS alias (CNAME) to external service
- Use for: Accessing external databases/APIs, migration scenarios
- Access: Returns CNAME, no proxying
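An ExternalName Service is just a DNS record; a sketch (the Service name and hostname are illustrative, not from the source):

```yaml
# Sketch of an ExternalName Service (name and hostname are hypothetical)
apiVersion: v1
kind: Service
metadata:
  name: legacy-db
spec:
  type: ExternalName
  externalName: db.example.com   # returned as a CNAME; no selector, no proxying
```

In-cluster clients resolve legacy-db.<namespace>.svc.cluster.local and get the CNAME; traffic never passes through kube-proxy.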
Did you get it right?
apiVersion: v1
kind: Service
metadata:
  name: external-database
spec:
  ports:
  - port: 3306
    targetPort: 3306
Because this Service defines no selector, Kubernetes does not create Endpoints for it automatically; they must be supplied manually to point at the external database.
Did you get it right?
sessionAffinity: ClientIP ensures that requests from the same client IP address are consistently routed to the same pod. This creates session stickiness based on the client's source IP, useful for stateful connections or in-memory caching.
Did you get it right?
apiVersion: v1
kind: Service
metadata:
  name: stateful-app
spec:
  sessionAffinity: _____
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: _____
  selector:
    app: stateful-app
  ports:
  - port: 80
The blanks are sessionAffinity: ClientIP and timeoutSeconds: 10800 (3 hours = 10800 seconds). This ensures requests from the same client IP go to the same pod for 3 hours.
Did you get it right?
With sessionAffinity: None, traffic distribution depends on the kube-proxy mode: iptables (the default) uses random selection, while IPVS mode uses round-robin by default. No session stickiness is maintained.
Did you get it right?
externalTrafficPolicy: Cluster (default):
- External traffic can land on any node (even if no matching pods)
- Kubernetes then routes to any matching pod in the cluster
- Traffic is distributed across all matching Pods cluster-wide
- ❌ Source IP is lost (SNAT applied)
- ❌ Extra network hop possible (traffic may cross nodes)
- ✅ Better load distribution
externalTrafficPolicy: Local:
- External traffic is sent only to nodes with matching Pods
- Traffic stays on the receiving node (no cross-node routing)
- ✅ Source IP preserved (no SNAT)
- ✅ No extra network hops
- ❌ Uneven distribution if pods spread unevenly across nodes
Use Cases:
- Local: When you need source IP (logging, security) or want to avoid cross-node traffic
- Cluster: When even distribution is more important than preserving source IP
Note: With LoadBalancer + Local, health checks ensure only nodes with pods receive traffic.
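Setting the policy is a one-line change on the Service; a sketch (the name and selector are illustrative, not from the source):

```yaml
# Sketch: preserving client source IP on a LoadBalancer Service
# (name and selector are hypothetical)
apiVersion: v1
kind: Service
metadata:
  name: ingress-lb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # no SNAT; traffic stays on the receiving node
  selector:
    app: ingress
  ports:
  - port: 443
    targetPort: 8443
# Kubernetes also allocates spec.healthCheckNodePort automatically;
# the cloud LB probes it and skips nodes without local pods.
```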
Did you get it right?
With externalTrafficPolicy: Local, Kubernetes configures health checks so that nodes without local pods are marked as unhealthy, causing the cloud load balancer to exclude them from the backend pool. This ensures traffic only goes to nodes with running pods.
Did you get it right?
A Service port definition has four fields: port (the Service's exposed port), targetPort (the port on the Pod that traffic is forwarded to), nodePort (the node's static port for NodePort/LoadBalancer types), and protocol (TCP/UDP/SCTP).
Traffic flow: Client → Service's port → Service forwards to Pod's targetPort → Container
Key distinction: containerPort is defined in the pod spec (where the container listens), while targetPort in the Service spec points to that container port.
Did you get it right?
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: production
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: api-server
From another namespace, use api.production or the full FQDN api.production.svc.cluster.local. This is Kubernetes' DNS search domain behavior.
Did you get it right?
kubectl port-forward svc/my-service 8080:80 creates a tunnel from your local machine's port 8080 to the Service's port 80. You can then access it via localhost:8080. This is useful for debugging ClusterIP Services without exposing them externally.
Did you get it right?
Pod to Service Traffic Lifecycle:
1. Pod Created: Pod scheduled with labels matching Service selector
2. Networking Setup: kubelet/CNI assigns Pod IP
3. Containers Start: Containers start, Pod is NotReady
4. Pod Becomes Ready: Readiness probe succeeds (Ready=True)
5. Endpoints Controller Detection: Controller detects Ready pod matching selector
6. Endpoints Update: EndpointSlices updated with Pod IP:port
7. kube-proxy Watch: kube-proxy receives Endpoint updates
8. Rules Programming: kube-proxy programs iptables/IPVS rules
9. Traffic Inclusion: Traffic now routes to the new pod (typically within 1-2 seconds)
Reverse for deletion: Pod Deleted → Endpoints Controller removes IP → Endpoints updated → kube-proxy removes rules → Traffic stops
Key Points:
- Pods must be Ready before being added to Endpoints
- Asynchronous background synchronization
- Active traffic uses pre-programmed kernel rules, not real-time API queries
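The readiness gate in the lifecycle above comes from the pod spec; a container fragment might look like this (image, port, and probe path are illustrative, not from the source):

```yaml
# Pod template fragment: readiness gates EndpointSlice membership
# (image, port, and path are hypothetical)
containers:
- name: app
  image: example/app:1.0
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 5   # until this probe succeeds, the pod stays NotReady
                       # and its IP is excluded from EndpointSlices
```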
Did you get it right?
kubectl apply -f loadbalancer-service.yaml
kubectl get svc my-service
Without a cloud provider, the EXTERNAL-IP column shows <pending> indefinitely since there's no controller to provision an external load balancer. The Service still works as a NodePort internally.