Kubernetes Network Policies

Type: Software Reference · Confidence: 0.94 · Sources: 7 · Verified: 2026-02-27 · Freshness: 2026-02-27

TL;DR

NetworkPolicies give you L3/L4 allowlist-based traffic control between pods. Apply a default-deny policy per namespace, immediately re-allow DNS egress, then add narrowly scoped allow policies per service. None of it matters unless your CNI plugin (Calico, Cilium, Weave, Antrea) actually enforces NetworkPolicy.

Constraints

- Policies are additive allowlists: there is no "deny" rule. Once any policy selects a pod, everything not explicitly allowed is blocked.
- Namespace-scoped: a NetworkPolicy only selects pods in its own namespace; cross-namespace rules need namespaceSelector.
- L3/L4 only: no HTTP path/header or DNS-name filtering (see When to Use / When Not to Use).
- Enforcement depends entirely on the CNI plugin; the API server accepts policies even when nothing enforces them.

Quick Reference

| Pattern | podSelector | policyTypes | Ingress Rules | Egress Rules | Effect |
|---|---|---|---|---|---|
| Default deny all ingress | {} (all pods) | [Ingress] | [] (empty) | -- | Block all inbound traffic namespace-wide |
| Default deny all egress | {} (all pods) | [Egress] | -- | [] (empty) | Block all outbound traffic namespace-wide |
| Default deny both | {} (all pods) | [Ingress, Egress] | [] (empty) | [] (empty) | Block all traffic in both directions |
| Allow all ingress | {} (all pods) | [Ingress] | [{from: [podSelector: {}]}] | -- | Override deny: allow all inbound |
| Allow specific pod ingress | matchLabels: {app: db} | [Ingress] | from: [{podSelector: {app: api}}] | -- | Only pods labeled app=api can reach app=db |
| Allow cross-namespace | matchLabels: {app: db} | [Ingress] | from: [{namespaceSelector: {team: frontend}}] | -- | Pods in namespaces labeled team=frontend can reach db |
| Allow specific port | matchLabels: {app: db} | [Ingress] | from: [...] ports: [{port: 5432, protocol: TCP}] | -- | Only TCP/5432 allowed |
| Allow egress to CIDR | matchLabels: {app: api} | [Egress] | -- | to: [{ipBlock: {cidr: 10.0.0.0/8}}] | Outbound only to 10.0.0.0/8 |
| Allow DNS egress | {} (all pods) | [Egress] | -- | to: [] ports: [{port: 53, protocol: UDP}, {port: 53, protocol: TCP}] | Allow DNS lookups for all pods |
| Deny external egress | {} (all pods) | [Egress] | -- | to: [{podSelector: {}}, {namespaceSelector: {}}] | Only cluster-internal egress allowed |
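The "Allow all ingress" pattern from the table, written as a complete manifest (the `production` namespace is assumed for illustration):

```yaml
# allow-all-ingress.yaml -- override a default-deny for every pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
  namespace: production        # assumed namespace
spec:
  podSelector: {}              # select ALL pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}      # any pod in THIS namespace
```

Note the scope: `podSelector: {}` inside `from` matches every pod in the same namespace only. A completely empty rule (`ingress: [{}]`) is broader, additionally allowing traffic from other namespaces and external sources.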

Decision Tree

START: What traffic control do you need?
|
+-- Want to block ALL traffic by default?
|   +-- YES --> Apply "default deny all" policy to namespace (see Step 1)
|   |           Then add specific allow policies per service
|   +-- NO  |
|           v
+-- Want to restrict INBOUND traffic to specific pods?
|   +-- YES --> Use podSelector + ingress rules
|   |   +-- From same namespace only? --> Use podSelector in "from"
|   |   +-- From another namespace?   --> Use namespaceSelector in "from"
|   |   +-- From external IPs?        --> Use ipBlock in "from"
|   +-- NO  |
|           v
+-- Want to restrict OUTBOUND traffic from specific pods?
|   +-- YES --> Use podSelector + egress rules
|   |   +-- IMPORTANT: Always allow DNS (port 53 UDP/TCP) in egress
|   +-- NO  |
|           v
+-- Want to combine ingress + egress?
    +-- YES --> Set policyTypes: [Ingress, Egress] with both rule sets
    +-- NO  --> No NetworkPolicy needed (default: all traffic allowed)
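The "from external IPs" branch above uses ipBlock, which also supports an `except` carve-out inside the allowed range. A sketch, with all CIDRs, names, and ports illustrative:

```yaml
# allow-office-range.yaml -- admit a corporate range, excluding one subnet
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-office-range
  namespace: production        # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: api                 # assumed target pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.0.0/16
            except:
              - 192.168.100.0/24   # carve-out: blocked despite the parent CIDR
      ports:
        - protocol: TCP
          port: 443
```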

Step-by-Step Guide

1. Verify your CNI supports NetworkPolicy

Before writing any policy, confirm your cluster's CNI plugin enforces NetworkPolicy. Applying policies to a cluster without enforcement gives a false sense of security. [src1]

# Check which CNI is installed
kubectl get pods -n kube-system -l k8s-app -o wide | grep -E 'calico|cilium|weave|antrea'

Verify: If you see calico-node, cilium, weave-net, or antrea-agent pods running in kube-system, your CNI supports NetworkPolicy enforcement. If none appear, policies will be accepted by the API server but never enforced.

2. Apply a default-deny policy to your namespace

Start with a deny-all baseline. This ensures no pod can communicate unless explicitly permitted. [src2] [src6]

# default-deny-all.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

Verify: kubectl get networkpolicy -n production -- should show default-deny-all.

3. Allow DNS egress for all pods

After applying default-deny egress, pods cannot resolve DNS. This breaks virtually everything. Add a DNS allow policy immediately. [src3]

# allow-dns.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to: []
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

Verify: kubectl exec -n production <pod> -- nslookup kubernetes.default -- should resolve successfully.

4. Create allow policies for your services

Now explicitly open the traffic paths your application requires. [src1] [src4]

# allow-frontend-to-api.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
      tier: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
              tier: web
      ports:
        - protocol: TCP
          port: 8080

Verify: kubectl exec -n production <frontend-pod> -- curl -s -o /dev/null -w "%{http_code}" http://api:8080/health -- should return 200.

5. Test blocked traffic

Confirm that traffic NOT explicitly allowed is blocked. [src4]

# From a pod that should NOT have access:
kubectl exec -n production <unauthorized-pod> -- curl --connect-timeout 3 http://api:8080/health
# Expected: connection timeout (exit code 28)

Verify: The command should time out after 3 seconds.

6. Label your namespaces for cross-namespace policies

Cross-namespace policies require labeled namespaces. Kubernetes 1.21+ automatically labels namespaces with kubernetes.io/metadata.name. For older clusters, add labels manually. [src1]

# Label namespaces for policy selectors
kubectl label namespace monitoring team=monitoring
kubectl label namespace logging team=logging

# Verify labels
kubectl get namespace --show-labels | grep -E 'monitoring|logging'

Verify: kubectl get namespace monitoring -o jsonpath='{.metadata.labels}' -- should include team: monitoring.

Code Examples

YAML: Default deny all traffic in a namespace

# Input:  A namespace where you want zero-trust networking
# Output: All pod traffic blocked; only explicitly allowed flows work

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production        # Change to your namespace
spec:
  podSelector: {}              # {} = select ALL pods in this namespace
  policyTypes:
    - Ingress                  # Block all inbound
    - Egress                   # Block all outbound

YAML: Allow ingress from specific pods on specific ports

# Input:  Backend pods that should only accept traffic from frontend on port 8080
# Output: Only frontend pods can reach backend on TCP/8080

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
      tier: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # Only pods with this label
      ports:
        - protocol: TCP
          port: 8080           # Only this port

YAML: Allow cross-namespace monitoring access

# Input:  Prometheus in 'monitoring' namespace needs to scrape all pods
# Output: Pods in production allow ingress from monitoring namespace on metrics port

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scrape
  namespace: production
spec:
  podSelector: {}              # All pods in production
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
      ports:
        - protocol: TCP
          port: 9090

YAML: Restrict egress to specific CIDRs and DNS

# Input:  API pods that should only reach a specific external service + DNS
# Output: Egress limited to 10.0.0.0/8 (cluster), external API, and DNS

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-restricted-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    # Rule 1: Allow DNS
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # Rule 2: Allow cluster-internal traffic
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8
    # Rule 3: Allow specific external API
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24
      ports:
        - protocol: TCP
          port: 443

YAML: AND vs OR selector logic (critical distinction)

# --- OR logic: two SEPARATE list items ---
# Allows from: (any pod in namespace labeled team=frontend)
#           OR  (any pod labeled role=client in THIS namespace)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: or-logic-example
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:          # First list item
            matchLabels:
              team: frontend
        - podSelector:                # Second list item (OR)
            matchLabels:
              role: client
---
# --- AND logic: SINGLE list item with both selectors ---
# Allows from: pods labeled role=client
#          AND  in namespaces labeled team=frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: and-logic-example
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:          # Single list item
            matchLabels:
              team: frontend
          podSelector:                # Same list item (AND)
            matchLabels:
              role: client

Anti-Patterns

Wrong: Applying NetworkPolicy without a supporting CNI

# BAD -- Policy is accepted but NEVER enforced (Flannel alone)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
# Result: All traffic still flows freely. Zero security benefit.

Correct: Verify CNI enforcement before relying on policies

# GOOD -- Verify CNI first, then apply
kubectl get ds -n kube-system | grep -E 'calico|cilium|weave|antrea'
# If no result: install a policy-enforcing CNI before deploying NetworkPolicies

Wrong: Blocking egress without allowing DNS

# BAD -- Blocks ALL egress including DNS; every pod loses name resolution
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
# Result: api pod cannot resolve 'database' hostname

Correct: Always include DNS in egress policies

# GOOD -- Allow DNS + specific service egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress-with-dns
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432

Wrong: Confusing AND vs OR selectors (YAML indentation trap)

# BAD -- Intended: only frontend pods in namespaces labeled env=staging
# Actual: ALL pods in env=staging namespaces OR frontend pods in THIS namespace
  ingress:
    - from:
        - namespaceSelector:         # List item 1 (OR)
            matchLabels:
              env: staging
        - podSelector:               # List item 2 (OR) -- NOT AND!
            matchLabels:
              app: frontend

Correct: Use single list item for AND logic

# GOOD -- Single list item = AND logic
  ingress:
    - from:
        - namespaceSelector:         # Same list item = AND
            matchLabels:
              env: staging
          podSelector:               # Both must match
            matchLabels:
              app: frontend

Wrong: Using ipBlock to target pods

# BAD -- Pod IPs are ephemeral; this breaks on reschedule
  ingress:
    - from:
        - ipBlock:
            cidr: 10.244.1.5/32     # This pod IP will change!

Correct: Use label selectors for pod-to-pod policies

# GOOD -- Labels are stable across pod restarts
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api              # Labels survive reschedules

Common Pitfalls

Diagnostic Commands

# List all network policies in a namespace
kubectl get networkpolicy -n <namespace>

# Describe a specific policy (shows parsed selectors and rules)
kubectl describe networkpolicy <policy-name> -n <namespace>

# Test connectivity between pods
kubectl exec -n <namespace> <source-pod> -- \
  curl --connect-timeout 3 -s -o /dev/null -w "%{http_code}" http://<target-service>:<port>/

# Test DNS resolution (verify DNS egress works)
kubectl exec -n <namespace> <pod> -- nslookup kubernetes.default.svc.cluster.local

# Debug with Calico: check Felix logs for policy decisions
kubectl logs -n kube-system -l k8s-app=calico-node --tail=50 | grep -i policy

# Debug with Cilium: use Hubble for real-time flow observation
kubectl exec -n kube-system <cilium-pod> -- hubble observe --namespace <namespace> --verdict DROPPED

# Verify CNI plugin health: list labeled kube-system pods that are NOT Running
# (empty output = all healthy)
kubectl get pods -n kube-system -l k8s-app --field-selector=status.phase!=Running

# Export all policies in a namespace as YAML (for review)
kubectl get networkpolicy -n <namespace> -o yaml

# Test with a temporary debug pod
kubectl run nettest --image=nicolaka/netshoot --rm -it --restart=Never -n <namespace> -- \
  curl --connect-timeout 3 <target-service>:<port>

Version History & Compatibility

| Version | Status | Changes | Migration Notes |
|---|---|---|---|
| networking.k8s.io/v1 | Stable (GA) since K8s 1.7 | None | Current API; use this version |
| extensions/v1beta1 | Removed in K8s 1.16 | API group changed | Change apiVersion to networking.k8s.io/v1 |
| K8s 1.21+ | Feature | kubernetes.io/metadata.name auto-label | Simplifies cross-namespace namespaceSelector |
| K8s 1.25+ | Feature | endPort field GA | Allows port ranges: port: 32000, endPort: 32100 |
| AdminNetworkPolicy | Alpha (K8s 1.27+) | New cluster-scoped API | Future: admin-enforced policies that override namespace policies |
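The endPort range feature (GA in 1.25) from the table above can be sketched as follows; the names, labels, and port range are illustrative:

```yaml
# allow-port-range.yaml -- endPort matches a contiguous port range (K8s 1.25+)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-range
  namespace: production        # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: media               # assumed target pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: gateway     # assumed client pods
      ports:
        - protocol: TCP
          port: 32000
          endPort: 32100       # allows TCP 32000 through 32100 inclusive
```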

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|---|---|---|
| You need L3/L4 pod traffic isolation | You need L7 (HTTP path/header) filtering | Cilium L7 Policy, Istio AuthorizationPolicy |
| Implementing zero-trust within a cluster | You need cluster-wide default policies | Calico GlobalNetworkPolicy, Cilium ClusterNetworkPolicy |
| Restricting pod egress to specific CIDRs | You need to block traffic by DNS name | Cilium DNS-aware policies |
| Multi-tenant namespace isolation | You need to rate-limit traffic | Service mesh or CNI-specific rate limiting |
| Compliance requires microsegmentation | You only need external ingress routing | Kubernetes Ingress / Gateway API |
| Your CNI supports NetworkPolicy | Your cluster uses Flannel without a policy plugin | Install Calico or Cilium alongside Flannel |
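For the L7 cases the table routes to Cilium, the equivalent rule lives in Cilium's own CRD rather than core NetworkPolicy. A hedged sketch (labels, port, and path regex are illustrative, and your Cilium version may differ):

```yaml
# api-l7-allow.yaml -- CiliumNetworkPolicy with an HTTP-level rule
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-l7-allow
  namespace: production        # assumed namespace
spec:
  endpointSelector:
    matchLabels:
      app: api                 # assumed target pods
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend      # assumed client pods
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:              # L7 filtering: core NetworkPolicy cannot do this
              - method: "GET"
                path: "/api/.*"   # regex path match
```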

Important Caveats

Related Units