Kubernetes Deployment + Service + Ingress Reference

Type: Software Reference · Confidence: 0.94 · Sources: 7 · Verified: 2026-02-27 · Freshness: 2026-02-27

TL;DR

Constraints

Quick Reference

Resource Type Overview

| Resource | API Version | Purpose | Key Spec Fields | Namespaced |
|---|---|---|---|---|
| Deployment | apps/v1 | Manages Pod replicas with rolling updates | replicas, selector, template, strategy | Yes |
| ReplicaSet | apps/v1 | Ensures N identical Pods run (managed by Deployment) | replicas, selector, template | Yes |
| Service | v1 | Stable network endpoint for a set of Pods | type, selector, ports, clusterIP | Yes |
| Ingress | networking.k8s.io/v1 | HTTP/HTTPS routing from external clients to Services | ingressClassName, rules, tls | Yes |
| IngressClass | networking.k8s.io/v1 | Defines which controller handles an Ingress | controller, parameters | No |
| ConfigMap | v1 | Non-sensitive configuration data for Pods | data, binaryData | Yes |
| Secret | v1 | Sensitive data (passwords, TLS certs, tokens) | data, type | Yes |
| HorizontalPodAutoscaler | autoscaling/v2 | Auto-scales Deployment replicas by metrics | minReplicas, maxReplicas, metrics | Yes |
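The HorizontalPodAutoscaler row above can be illustrated with a minimal autoscaling/v2 manifest. This is a sketch, not a tuned configuration: it assumes the myapp Deployment from the Step-by-Step Guide, a running metrics-server, and an illustrative 70% CPU target.

```yaml
# hpa.yaml -- illustrative sketch; assumes the myapp Deployment exists
# and metrics-server is installed (required for Resource metrics)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:           # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # assumed threshold; tune per workload
```

Note that the HPA owns `replicas` once active, so the Deployment's own `replicas` field should be left unmanaged (or removed) to avoid the two controllers fighting.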

Service Type Comparison

| Type | Scope | Port Range | Load Balancing | Cost | Use Case |
|---|---|---|---|---|---|
| ClusterIP (default) | Internal only | Any | kube-proxy (iptables/IPVS) | Free | Microservice-to-microservice communication |
| NodePort | External via node IP | 30000-32767 | None (client hits a single node) | Free | Dev/testing, on-prem without LB |
| LoadBalancer | External via cloud LB | Any | Cloud provider LB | $$$ per LB | Production single-service exposure |
| ExternalName | DNS alias (CNAME) | N/A | N/A | Free | Proxy to external services (e.g., RDS) |
| Headless (clusterIP: None) | Internal DNS only | Any | None (direct Pod IPs) | Free | StatefulSets, service discovery |
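The headless row is worth a concrete sketch, since it is the one type with no virtual IP at all: a DNS lookup of the Service name returns the individual Pod IPs directly. The `db` names and port below are hypothetical.

```yaml
# Headless Service sketch -- names are hypothetical.
# clusterIP: None skips the virtual IP; DNS for db.<ns>.svc.cluster.local
# resolves to the matching Pod IPs, letting clients pick a specific peer.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None        # this is what makes the Service headless
  selector:
    app: db
  ports:
  - port: 5432
    targetPort: 5432
```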

Ingress Path Types

| pathType | Matching Behavior | Example Path | Matches /foo? | Matches /foo/bar? |
|---|---|---|---|---|
| Exact | Case-sensitive exact match | /foo | Yes | No |
| Prefix | Path prefix match (segment boundary) | /foo | Yes | Yes |
| ImplementationSpecific | Controller-dependent | /foo | Depends | Depends |
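The Exact/Prefix distinction is easiest to see side by side in a rules fragment. The backend Service names here are hypothetical:

```yaml
# Ingress rules fragment -- illustrative only; auth/docs are hypothetical
paths:
- path: /login          # Exact: matches /login, not /login/ or /login/reset
  pathType: Exact
  backend:
    service: { name: auth, port: { number: 80 } }
- path: /docs           # Prefix: matches /docs, /docs/, /docs/api
  pathType: Prefix
  backend:
    service: { name: docs, port: { number: 80 } }
```

Prefix matching respects segment boundaries: `/docs` matches `/docs/api` but not `/docsearch`.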

Decision Tree

START: How should external traffic reach your application?
│
├── Internal microservice only (no external traffic)?
│   ├── YES → Use Service type: ClusterIP (default)
│   └── NO  ↓
│
├── Need to expose a single TCP/UDP service (not HTTP)?
│   ├── YES → Cloud cluster? → LoadBalancer. On-prem? → NodePort
│   └── NO  ↓
│
├── Need HTTP/HTTPS routing with host/path rules?
│   ├── YES → New project? → Gateway API. Existing? → Ingress
│   └── NO  ↓
│
├── Need TLS termination?
│   ├── YES → Add tls section + cert-manager annotation
│   └── NO  → Ingress with HTTP-only rules
│
└── Which Ingress controller?
    ├── General purpose → NGINX Ingress Controller
    ├── Auto-discovery, Let's Encrypt → Traefik
    ├── High performance → HAProxy
    └── Cloud native → AWS ALB / GCE Ingress

Step-by-Step Guide

1. Create a Deployment

A Deployment declares the desired state for your application Pods, including the container image, replicas, and update strategy. [src1]

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.2.3
        ports:
        - containerPort: 8080
        resources:
          requests: { cpu: 100m, memory: 128Mi }
          limits: { cpu: 500m, memory: 256Mi }
        readinessProbe:
          httpGet: { path: /healthz, port: 8080 }
          initialDelaySeconds: 5
          periodSeconds: 10

Verify: kubectl apply -f deployment.yaml && kubectl rollout status deployment/myapp → deployment "myapp" successfully rolled out

2. Create a Service

A Service provides a stable ClusterIP and DNS name for the set of Pods matched by the selector. [src2]

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 8080

Verify: kubectl get svc myapp → Shows ClusterIP assigned. kubectl get endpoints myapp → Shows Pod IPs.

3. Create an Ingress

An Ingress defines HTTP routing rules that map external hostnames and paths to backend Services. Requires an Ingress controller. [src3]

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts: [myapp.example.com]
    secretName: myapp-tls
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port: { number: 80 }

Verify: kubectl get ingress myapp → Shows HOSTS and an ADDRESS once the controller has reconciled it.

4. Configure cert-manager for automatic TLS

cert-manager watches for Ingress resources with the appropriate annotation and automatically provisions TLS certificates. [src5]

# cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx

Verify: kubectl get certificate -A → READY column shows True.

5. Perform a rolling update

Update the container image to trigger a zero-downtime rolling update. [src1]

kubectl set image deployment/myapp myapp=myapp:1.3.0
kubectl rollout status deployment/myapp
# Roll back if needed:
kubectl rollout undo deployment/myapp

Verify: kubectl rollout history deployment/myapp → Shows revision history.

Code Examples

YAML: Complete Deployment (pair with the Service and Ingress from the Step-by-Step Guide)

# Input:  Container image myapp:1.2.3 listening on port 8080
# Output: Three probed, resource-limited replicas fronted by the myapp Service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels: { app: myapp }
spec:
  replicas: 3
  selector: { matchLabels: { app: myapp } }
  template:
    metadata: { labels: { app: myapp } }
    spec:
      containers:
      - name: myapp
        image: myapp:1.2.3
        ports: [{ containerPort: 8080 }]
        resources:
          requests: { cpu: 100m, memory: 128Mi }
          limits: { cpu: 500m, memory: 256Mi }
        readinessProbe:
          httpGet: { path: /healthz, port: 8080 }

YAML: NodePort Service (on-prem / no cloud LB)

apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  selector: { app: myapp }
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080

YAML: LoadBalancer Service (cloud environments)

apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector: { app: myapp }
  ports:
  - name: https
    port: 443
    targetPort: 8080

YAML: Multi-service Ingress with path-based routing

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-service
  annotations:
    # rewrite-target applies to EVERY path in this Ingress, so each
    # path needs capture groups; $2 is the part after the prefix
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api(/|$)(.*)      # /api/users -> backend sees /users
        pathType: ImplementationSpecific
        backend:
          service: { name: api, port: { number: 80 } }
      - path: /()(.*)            # empty $1 so $2 is the full path; catch-all
        pathType: ImplementationSpecific
        backend:
          service: { name: frontend, port: { number: 80 } }

Anti-Patterns

Wrong: Missing resource requests and limits

# BAD -- no resource requests/limits; Pod can OOMKill others
spec:
  containers:
  - name: myapp
    image: myapp:1.2.3
    # No resources block -- BestEffort QoS

Correct: Always set resource requests and limits

# GOOD -- explicit resource boundaries
spec:
  containers:
  - name: myapp
    image: myapp:1.2.3
    resources:
      requests: { cpu: 100m, memory: 128Mi }
      limits: { cpu: 500m, memory: 256Mi }

Wrong: Using :latest image tag

# BAD -- :latest is mutable; rollback impossible
spec:
  containers:
  - name: myapp
    image: myapp:latest

Correct: Pin image to specific version

# GOOD -- immutable version tag; rollback works
spec:
  containers:
  - name: myapp
    image: myapp:1.2.3

Wrong: No readiness probe with rolling update

# BAD -- traffic sent to Pods before they're ready
spec:
  containers:
  - name: myapp
    image: myapp:1.2.3
    # No readinessProbe

Correct: Configure readiness probe

# GOOD -- Pod receives traffic only after probe succeeds
spec:
  containers:
  - name: myapp
    image: myapp:1.2.3
    readinessProbe:
      httpGet: { path: /healthz, port: 8080 }
      initialDelaySeconds: 5
      periodSeconds: 10

Wrong: Selector mismatch between Deployment and Service

# BAD -- "my-app" != "myapp"; no traffic reaches Pods
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080

Correct: Ensure labels match exactly

# GOOD -- selector matches Pod template labels
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080

Wrong: Ingress without ingressClassName

# BAD -- wrong or no controller may handle it
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: { name: myapp, port: { number: 80 } }

Correct: Always specify ingressClassName

# GOOD -- explicit controller selection
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: { name: myapp, port: { number: 80 } }

Common Pitfalls

Diagnostic Commands

# Check Deployment rollout status
kubectl rollout status deployment/myapp

# View rollout history
kubectl rollout history deployment/myapp

# Check Service endpoints (empty = selector mismatch)
kubectl get endpoints myapp

# Verify Ingress has ADDRESS (empty = no controller)
kubectl get ingress myapp

# Debug Pod issues
kubectl describe pod -l app=myapp
kubectl logs -l app=myapp --tail=50

# Check installed Ingress controllers
kubectl get ingressclass

# Test connectivity from inside cluster
kubectl run curl --image=curlimages/curl --rm -it -- curl http://myapp.default.svc.cluster.local

# View resource usage (requires metrics-server)
kubectl top pods -l app=myapp

# Check events for failures
kubectl get events --sort-by=.metadata.creationTimestamp

Version History & Compatibility

| API/Resource | Version | Status | Key Change |
|---|---|---|---|
| Deployment (apps/v1) | k8s 1.9+ | Stable (GA) | apps/v1beta2 removed in 1.16 |
| Service (v1) | k8s 1.0+ | Stable (GA) | Dual-stack IPv4/IPv6 GA in 1.23 |
| Ingress (networking.k8s.io/v1) | k8s 1.19+ | Frozen (GA) | extensions/v1beta1 removed in 1.22; API frozen |
| IngressClass | k8s 1.18+ | Stable (GA) | Required to specify controller |
| Gateway API (v1) | k8s 1.26+ | Stable (GA) | HTTPRoute, Gateway; successor to Ingress |
| Gateway API (v1alpha2) | k8s 1.24+ | Experimental | TCPRoute, TLSRoute, GRPCRoute |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|---|---|---|
| Deploying stateless HTTP/HTTPS apps | Running stateful workloads (DBs, queues) | StatefulSet + Headless Service |
| Need rolling updates with zero downtime | Need batch processing or scheduled jobs | Job / CronJob |
| Routing external HTTP traffic by host/path | Exposing non-HTTP TCP/UDP externally | LoadBalancer Service or Gateway API TCPRoute |
| Existing cluster already uses Ingress | Starting a new greenfield project (2026+) | Gateway API (HTTPRoute + Gateway) |
| Single-team, simple routing rules | Need cross-team networking role separation | Gateway API (role-oriented model) |
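For the Gateway API alternative named above, a minimal HTTPRoute sketch is shown below. The Gateway named shared-gw is an assumption: it would be provisioned separately (typically by a platform team), which is exactly the role separation the table refers to.

```yaml
# Gateway API sketch -- rough equivalent of the single-host Ingress
# in the Step-by-Step Guide; shared-gw is a hypothetical Gateway
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
spec:
  parentRefs:
  - name: shared-gw          # Gateway owned by the platform team
  hostnames: [myapp.example.com]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: myapp            # Service created in step 2
      port: 80
```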

Important Caveats

Related Units