NetworkPolicy is a namespaced Kubernetes resource (API group networking.k8s.io/v1) that controls pod-to-pod and pod-to-external traffic at L3/L4 using label selectors, namespace selectors, and CIDR blocks -- but only works if your CNI plugin supports it. Apply and inspect policies with kubectl apply -f network-policy.yaml and kubectl get networkpolicy -n <namespace>. Policies are namespace-scoped: a policy in namespace foo does not affect namespace bar, so you must create policies per namespace. [src1] ipBlock rules apply to external (non-pod) traffic only; pod IPs are ephemeral, so do not use ipBlock to target pods. [src1]

| Pattern | podSelector | policyTypes | Ingress Rules | Egress Rules | Effect |
|---|---|---|---|---|---|
| Default deny all ingress | {} (all pods) | [Ingress] | [] (empty) | -- | Block all inbound traffic namespace-wide |
| Default deny all egress | {} (all pods) | [Egress] | -- | [] (empty) | Block all outbound traffic namespace-wide |
| Default deny both | {} (all pods) | [Ingress, Egress] | [] (empty) | [] (empty) | Block all traffic in both directions |
| Allow all ingress | {} (all pods) | [Ingress] | [{from: [podSelector: {}]}] | -- | Override deny: allow all inbound |
| Allow specific pod ingress | matchLabels: {app: db} | [Ingress] | from: [{podSelector: {app: api}}] | -- | Only pods labeled app=api can reach app=db |
| Allow cross-namespace | matchLabels: {app: db} | [Ingress] | from: [{namespaceSelector: {team: frontend}}] | -- | Pods in namespaces labeled team=frontend can reach db |
| Allow specific port | matchLabels: {app: db} | [Ingress] | from: [...] ports: [{port: 5432, protocol: TCP}] | -- | Only TCP/5432 allowed |
| Allow egress to CIDR | matchLabels: {app: api} | [Egress] | -- | to: [{ipBlock: {cidr: 10.0.0.0/8}}] | Outbound only to 10.0.0.0/8 |
| Allow DNS egress | {} (all pods) | [Egress] | -- | to: [] ports: [{port: 53, protocol: UDP}, {port: 53, protocol: TCP}] | Allow DNS lookups for all pods |
| Deny external egress | {} (all pods) | [Egress] | -- | to: [{podSelector: {}}, {namespaceSelector: {}}] | Only cluster-internal egress allowed |
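As a concrete instance of the table above, the "Allow all ingress" row can be written out as a full manifest. This is a sketch; the namespace name is a placeholder:

```yaml
# allow-all-ingress.yaml -- overrides a default-deny for every pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
  namespace: production    # placeholder namespace
spec:
  podSelector: {}          # applies to all pods in this namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}      # any pod in this namespace may connect
```

Note that an empty podSelector in `from` matches pods in the policy's own namespace; cross-namespace traffic still requires a namespaceSelector.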
START: What traffic control do you need?
 |
 +-- Want to block ALL traffic by default?
 |    +-- YES --> Apply "default deny all" policy to namespace (see Step 1)
 |    |           Then add specific allow policies per service
 |    +-- NO
 |         |
 |         v
 +-- Want to restrict INBOUND traffic to specific pods?
 |    +-- YES --> Use podSelector + ingress rules
 |    |    +-- From same namespace only? --> Use podSelector in "from"
 |    |    +-- From another namespace?   --> Use namespaceSelector in "from"
 |    |    +-- From external IPs?        --> Use ipBlock in "from"
 |    +-- NO
 |         |
 |         v
 +-- Want to restrict OUTBOUND traffic from specific pods?
 |    +-- YES --> Use podSelector + egress rules
 |    |    +-- IMPORTANT: Always allow DNS (port 53 UDP/TCP) in egress
 |    +-- NO
 |         |
 |         v
 +-- Want to combine ingress + egress?
      +-- YES --> Set policyTypes: [Ingress, Egress] with both rule sets
      +-- NO  --> No NetworkPolicy needed (default: all traffic allowed)
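The last branch of the tree (combining ingress and egress in one policy) can be sketched as follows. This is an illustrative example, not a prescribed layout; the app, frontend, and database labels are placeholders:

```yaml
# combined-ingress-egress.yaml -- one policy, both directions (sketch)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-ingress-and-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - ports:                 # DNS, per the IMPORTANT note in the tree
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
```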
Before writing any policy, confirm your cluster's CNI plugin enforces NetworkPolicy. Applying policies to a cluster without enforcement gives a false sense of security. [src1]
# Check which CNI is installed
kubectl get pods -n kube-system -l k8s-app -o wide | grep -E 'calico|cilium|weave|antrea'
Verify: If you see calico-node, cilium, weave-net, or antrea-agent pods running in kube-system, NetworkPolicy enforcement is active.
Start with a deny-all baseline. This ensures no pod can communicate unless explicitly permitted. [src2] [src6]
# default-deny-all.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Verify: kubectl get networkpolicy -n production -- should show default-deny-all.
After applying default-deny egress, pods cannot resolve DNS. This breaks virtually everything. Add a DNS allow policy immediately. [src3]
# allow-dns.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to: []
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
Verify: kubectl exec -n production <pod> -- nslookup kubernetes.default -- should resolve successfully.
Now explicitly open the traffic paths your application requires. [src1] [src4]
# allow-frontend-to-api.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
          tier: web
    ports:
    - protocol: TCP
      port: 8080
Verify: kubectl exec -n production <frontend-pod> -- curl -s -o /dev/null -w "%{http_code}" http://api:8080/health -- should return 200.
Confirm that traffic NOT explicitly allowed is blocked. [src4]
# From a pod that should NOT have access:
kubectl exec -n production <unauthorized-pod> -- curl --connect-timeout 3 http://api:8080/health
# Expected: connection timeout (exit code 28)
Verify: The command should time out after 3 seconds.
Cross-namespace policies require labeled namespaces. Kubernetes 1.21+ automatically labels namespaces with kubernetes.io/metadata.name. For older clusters, add labels manually. [src1]
# Label namespaces for policy selectors
kubectl label namespace monitoring team=monitoring
kubectl label namespace logging team=logging
# Verify labels
kubectl get namespace --show-labels | grep -E 'monitoring|logging'
Verify: kubectl get namespace monitoring -o jsonpath='{.metadata.labels}' -- should include team: monitoring.
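With the team labels applied above, a policy can reference them directly. A sketch using the team=monitoring label just created, assuming Prometheus pods carry a hypothetical app=prometheus label and scrape on port 9090:

```yaml
# allow-from-monitoring-team.yaml -- uses the namespace label applied above (sketch)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring-team
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: monitoring   # label applied in the step above
      podSelector:           # same list item = AND: only prometheus pods there
        matchLabels:
          app: prometheus    # placeholder label
    ports:
    - protocol: TCP
      port: 9090
```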
# Input: A namespace where you want zero-trust networking
# Output: All pod traffic blocked; only explicitly allowed flows work
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production  # Change to your namespace
spec:
  podSelector: {}        # {} = select ALL pods in this namespace
  policyTypes:
  - Ingress              # Block all inbound
  - Egress               # Block all outbound
# Input: Backend pods that should only accept traffic from frontend on port 8080
# Output: Only frontend pods can reach backend on TCP/8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
      tier: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend  # Only pods with this label
    ports:
    - protocol: TCP
      port: 8080         # Only this port
# Input: Prometheus in 'monitoring' namespace needs to scrape all pods
# Output: Pods in production allow ingress from monitoring namespace on metrics port
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scrape
  namespace: production
spec:
  podSelector: {}  # All pods in production
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
    ports:
    - protocol: TCP
      port: 9090
# Input: API pods that should only reach a specific external service + DNS
# Output: Egress limited to 10.0.0.0/8 (cluster), external API, and DNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-restricted-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Egress
  egress:
  # Rule 1: Allow DNS
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  # Rule 2: Allow cluster-internal traffic
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8
  # Rule 3: Allow specific external API
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24
    ports:
    - protocol: TCP
      port: 443
# --- OR logic: two SEPARATE list items ---
# Allows from: (any pod in namespace labeled team=frontend)
# OR (any pod labeled role=client in THIS namespace)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: or-logic-example
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:   # First list item
        matchLabels:
          team: frontend
    - podSelector:         # Second list item (OR)
        matchLabels:
          role: client
---
# --- AND logic: SINGLE list item with both selectors ---
# Allows from: pods labeled role=client
# AND in namespaces labeled team=frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: and-logic-example
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:   # Single list item
        matchLabels:
          team: frontend
      podSelector:         # Same list item (AND)
        matchLabels:
          role: client
# BAD -- Policy is accepted but NEVER enforced (Flannel alone)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
# Result: All traffic still flows freely. Zero security benefit.
# GOOD -- Verify CNI first, then apply
kubectl get ds -n kube-system | grep -E 'calico|cilium|weave|antrea'
# If no result: install a policy-enforcing CNI before deploying NetworkPolicies
# BAD -- Blocks ALL egress including DNS; every pod loses name resolution
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
# Result: api pod cannot resolve 'database' hostname
# GOOD -- Allow DNS + specific service egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress-with-dns
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
# BAD -- Intended: only frontend pods in staging namespace
# Actual: ALL pods in staging OR ALL frontend pods in ANY namespace
ingress:
- from:
  - namespaceSelector:   # List item 1 (OR)
      matchLabels:
        env: staging
  - podSelector:         # List item 2 (OR) -- NOT AND!
      matchLabels:
        app: frontend
# GOOD -- Single list item = AND logic
ingress:
- from:
  - namespaceSelector:   # Same list item = AND
      matchLabels:
        env: staging
    podSelector:         # Both must match
      matchLabels:
        app: frontend
# BAD -- Pod IPs are ephemeral; this breaks on reschedule
ingress:
- from:
  - ipBlock:
      cidr: 10.244.1.5/32  # This pod IP will change!
# GOOD -- Labels are stable across pod restarts
ingress:
- from:
  - podSelector:
      matchLabels:
        app: api  # Labels survive reschedules
- Check CNI enforcement with kubectl get ds -n kube-system | grep -E 'calico|cilium|weave|antrea'. [src1]
- Inspect how a policy was parsed with kubectl describe networkpolicy. [src4]
- Check namespace labels with kubectl get namespace --show-labels and use kubernetes.io/metadata.name (auto-applied in K8s 1.21+). [src1]
- Omitting policyTypes when you only have egress rules causes Kubernetes to infer [Ingress] only; your egress rules are silently ignored. Fix: always explicitly set policyTypes. [src1]
- Pods with hostNetwork: true bypass NetworkPolicy entirely. Fix: use node-level firewall rules (iptables, nftables) for host-networked pods. [src1]
- An empty podSelector: {} selects ALL pods; namespaceSelector: {} selects ALL namespaces. Fix: always verify selector specificity with kubectl describe networkpolicy. [src6]
# List all network policies in a namespace
kubectl get networkpolicy -n <namespace>
# Describe a specific policy (shows parsed selectors and rules)
kubectl describe networkpolicy <policy-name> -n <namespace>
# Test connectivity between pods
kubectl exec -n <namespace> <source-pod> -- \
curl --connect-timeout 3 -s -o /dev/null -w "%{http_code}" http://<target-service>:<port>/
# Test DNS resolution (verify DNS egress works)
kubectl exec -n <namespace> <pod> -- nslookup kubernetes.default.svc.cluster.local
# Debug with Calico: check Felix logs for policy decisions
kubectl logs -n kube-system -l k8s-app=calico-node --tail=50 | grep -i policy
# Debug with Cilium: use Hubble for real-time flow observation
kubectl exec -n kube-system <cilium-pod> -- hubble observe --namespace <namespace> --verdict DROPPED
# Verify CNI plugin is running and healthy
kubectl get pods -n kube-system -l k8s-app --field-selector=status.phase!=Running
# Export all policies in a namespace as YAML (for review)
kubectl get networkpolicy -n <namespace> -o yaml
# Test with a temporary debug pod
kubectl run nettest --image=nicolaka/netshoot --rm -it --restart=Never -n <namespace> -- \
curl --connect-timeout 3 <target-service>:<port>
| Version | Status | Breaking Changes | Migration Notes |
|---|---|---|---|
| networking.k8s.io/v1 | Stable (GA) since K8s 1.7 | None | Current API; use this version |
| extensions/v1beta1 | Removed in K8s 1.16 | API group changed | Change apiVersion to networking.k8s.io/v1 |
| K8s 1.21+ | Feature | kubernetes.io/metadata.name auto-label | Simplifies cross-namespace namespaceSelector |
| K8s 1.25+ | Feature | endPort field GA | Allows port ranges: port: 32000, endPort: 32100 |
| AdminNetworkPolicy | Alpha (K8s 1.27+) | New cluster-scoped API | Future: admin-enforced policies that override namespace policies |
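The endPort field from the table above expresses a contiguous port range in a single rule (GA in K8s 1.25+). A sketch with placeholder labels:

```yaml
# port-range sketch -- requires K8s 1.25+ for GA endPort support
ingress:
- from:
  - podSelector:
      matchLabels:
        app: client      # placeholder label
  ports:
  - protocol: TCP
    port: 32000
    endPort: 32100       # allows the whole TCP range 32000-32100
```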
| Use When | Don't Use When | Use Instead |
|---|---|---|
| You need L3/L4 pod traffic isolation | You need L7 (HTTP path/header) filtering | Cilium L7 Policy, Istio AuthorizationPolicy |
| Implementing zero-trust within a cluster | You need cluster-wide default policies | Calico GlobalNetworkPolicy, Cilium ClusterNetworkPolicy |
| Restricting pod egress to specific CIDRs | You need to block traffic by DNS name | Cilium DNS-aware policies |
| Multi-tenant namespace isolation | You need to rate-limit traffic | Service mesh or CNI-specific rate limiting |
| Compliance requires microsegmentation | You only need external ingress routing | Kubernetes Ingress / Gateway API |
| Your CNI supports NetworkPolicy | Your cluster uses Flannel without a policy plugin | Install Calico or Cilium alongside Flannel |
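For comparison, the L7 filtering named in the first row ("Use Instead") lives in CNI-specific CRDs, not in NetworkPolicy. A sketch of a CiliumNetworkPolicy restricting HTTP methods and paths, assuming Cilium is installed; labels and names are placeholders, and the exact schema should be checked against the Cilium docs for your version:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-l7-allow-healthz   # hypothetical policy name
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api                 # placeholder label
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend          # placeholder label
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:                  # L7 rule: only GET /healthz is allowed
        - method: GET
          path: "/healthz"
```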