Networking Quiz
Service ClusterIPs are allocated from the range set by the --service-cluster-ip-range flag. This range must not overlap with the Pod CIDR or Node CIDR.

Node 1
├─ Pod CIDR: 10.244.1.0/24
├─ Pod A: 10.244.1.5
└─ Pod B: 10.244.1.6
Pod A → Pod B

CoreDNS provides DNS-based service discovery within the cluster.
It watches the API server for service creation/changes and automatically creates DNS records. When a pod makes a DNS query, CoreDNS resolves service names to their ClusterIPs, enabling pods to communicate using service names instead of IPs.
DNS Format: <service-name>.<namespace>.svc.<cluster-domain>
Example: my-service.default.svc.cluster.local → 10.96.100.50
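The naming scheme can be expressed as a small helper (an illustrative sketch only; the function name is made up, and the defaults assume the standard cluster.local domain):

```python
# Sketch: build the in-cluster DNS name for a Service, following the
# <service-name>.<namespace>.svc.<cluster-domain> format described above.

def service_fqdn(service: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    """Return the fully qualified in-cluster DNS name for a Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("my-service"))
# my-service.default.svc.cluster.local
```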
Did you get it right?
# /etc/resolv.conf inside a pod
nameserver _____
search default.svc.cluster.local
search svc.cluster.local
search cluster.local
options ndots:5

In <service-name>.<namespace>.svc.<cluster-domain>, what is the default value for cluster-domain?

Cluster Configuration:
Pod CIDR: 10.244.0.0/16
Service CIDR: 10.96.0.0/12
Node CIDR: 192.168.0.0/24
Node 1: 192.168.0.10
├─ Pod CIDR: 10.244.1.0/24
└─ Pod 1: 10.244.1.5
Services:
└─ my-service: 10.96.100.50

A veth (virtual ethernet) pair is like a virtual ethernet cable with two ends. Traffic sent into one end appears on the other.
In Kubernetes:
- One end is placed in the pod’s network namespace (appears as eth0)
- The other end is in the host namespace (appears as vethXXXX)
Purpose:
- Connects isolated pod network namespace to host network
- Allows pods to send and receive traffic via the node
- Serves as the basic building block for pod networking
- The host end is managed by the CNI plugin (via a bridge, IP routing, or eBPF depending on the implementation).
Analogy: Like a network cable connecting two separate computers, except both ‘computers’ are on the same physical machine.
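The same wiring can be reproduced by hand with iproute2 (requires root; the namespace name, interface names, and addresses below are illustrative, borrowed from the Node 1 example, not what a CNI plugin would actually pick):

```shell
# Create a network namespace to stand in for a pod
ip netns add pod1

# Create the veth pair: one end stays on the host, the peer is
# placed directly into the "pod" namespace as eth0
ip link add veth-pod1 type veth peer name eth0 netns pod1

# Bring up the host end
ip link set veth-pod1 up

# Give the pod end an address from the node's Pod CIDR and bring it up
ip netns exec pod1 ip addr add 10.244.1.5/24 dev eth0
ip netns exec pod1 ip link set eth0 up
ip netns exec pod1 ip link set lo up
```

From inside the namespace, eth0 now behaves like a normal interface; whatever it sends appears on veth-pod1 on the host, where the CNI plugin's bridge, routes, or eBPF programs take over.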
Did you get it right?
A NetworkPolicy with the configuration shown will apply to which traffic?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - _____:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

The podSelector field in the ingress.from section specifies which pods are allowed to send traffic. This NetworkPolicy allows ingress only from pods with the label app=frontend. Note this is different from the top-level podSelector, which defines which pods the policy applies to (app=backend).

Non-overlapping CIDRs are critical for proper routing and avoiding conflicts.
Why they must be separate:
- Routing ambiguity: If ranges overlap, the network stack can’t determine whether a destination IP belongs to a pod, service, or node
- Service ClusterIP uniqueness: Service IPs must be in a distinct range so kube-proxy can create specific iptables rules
- Pod connectivity: The CNI plugin needs to route pod traffic correctly without conflicting with node infrastructure traffic
Example of bad config:
❌ Pod CIDR: 10.0.0.0/16
❌ Node CIDR: 10.0.0.0/24 ← Overlap!

Correct config:
✅ Pod CIDR: 10.244.0.0/16
✅ Service CIDR: 10.96.0.0/12
✅ Node CIDR: 192.168.0.0/24

Did you get it right?
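The overlap rule can be checked mechanically. A minimal sketch using Python's standard ipaddress module (the function name is made up; the ranges are the good and bad configs from above):

```python
import ipaddress

def overlapping_pairs(cidrs: dict) -> list:
    """Return every pair of named networks whose address ranges overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in cidrs.items()}
    names = list(nets)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]

good = {"pod": "10.244.0.0/16", "service": "10.96.0.0/12", "node": "192.168.0.0/24"}
bad = {"pod": "10.0.0.0/16", "node": "10.0.0.0/24"}

print(overlapping_pairs(good))  # []
print(overlapping_pairs(bad))   # [('pod', 'node')]
```

Note that 10.96.0.0/12 spans 10.96.0.0–10.111.255.255, so it clears the 10.244.0.0/16 pod range; the bad config fails because 10.0.0.0/24 sits entirely inside 10.0.0.0/16.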
The ndots:5 option controls when the resolver treats a name as fully qualified. If a name has 5 or more dots, it’s queried as-is first before trying the search domains. If it has fewer than 5 dots, the search domains are tried first (e.g., ‘my-service’ becomes ‘my-service.default.svc.cluster.local’). This is why short service names work within Kubernetes.
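The ordering rule can be sketched as follows (a simplified model of the resolver's search-list behavior, not the actual resolver code; the search list and ndots value come from the pod resolv.conf example above):

```python
# Search domains and ndots as seen in a pod's /etc/resolv.conf
SEARCH = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
NDOTS = 5

def query_order(name: str) -> list:
    """Return the order in which candidate DNS names are tried."""
    if name.endswith("."):              # trailing dot: explicit FQDN, tried as-is only
        return [name]
    absolute = [name]
    relative = [f"{name}.{d}" for d in SEARCH]
    # >= ndots dots: the name is tried as-is first, then the search domains;
    # fewer dots: the search domains are tried first.
    if name.count(".") >= NDOTS:
        return absolute + relative
    return relative + absolute

print(query_order("my-service")[0])
# my-service.default.svc.cluster.local
```

This is why a bare `my-service` resolves inside the cluster: the first candidate tried is already the full service FQDN.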