kubectl apply -f deployment.yaml -f service.yaml -f ingress.yaml

| Resource | API Version | Purpose | Key Spec Fields | Namespaced |
|---|---|---|---|---|
| Deployment | apps/v1 | Manages Pod replicas with rolling updates | replicas, selector, template, strategy | Yes |
| ReplicaSet | apps/v1 | Ensures N identical Pods run (managed by Deployment) | replicas, selector, template | Yes |
| Service | v1 | Stable network endpoint for a set of Pods | type, selector, ports, clusterIP | Yes |
| Ingress | networking.k8s.io/v1 | HTTP/HTTPS routing from external to Services | ingressClassName, rules, tls | Yes |
| IngressClass | networking.k8s.io/v1 | Defines which controller handles an Ingress | controller, parameters | No |
| ConfigMap | v1 | Non-sensitive configuration data for Pods | data, binaryData | Yes |
| Secret | v1 | Sensitive data (passwords, TLS certs, tokens) | data, type | Yes |
| HorizontalPodAutoscaler | autoscaling/v2 | Auto-scales Deployment replicas by metrics | minReplicas, maxReplicas, metrics | Yes |
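The HPA row above, as a minimal manifest sketch targeting the `myapp` Deployment used throughout this guide. The 3–10 replica range and 70% CPU target are illustrative assumptions, and resource metrics require metrics-server to be installed:

```yaml
# hpa.yaml -- illustrative sketch, not from the source examples
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:          # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # assumed target; tune per workload
```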
| Type | Scope | Port Range | Load Balancing | Cost | Use Case |
|---|---|---|---|---|---|
| ClusterIP (default) | Internal only | Any | kube-proxy (iptables/IPVS) | Free | Microservice-to-microservice communication |
| NodePort | External via node IP | 30000-32767 | kube-proxy (from the receiving node to matching Pods) | Free | Dev/testing, on-prem without LB |
| LoadBalancer | External via cloud LB | Any | Cloud provider LB | $$$ per LB | Production single-service exposure |
| ExternalName | DNS alias (CNAME) | N/A | N/A | Free | DNS alias for external services (e.g., RDS) |
| Headless (clusterIP: None) | Internal DNS only | Any | None (direct Pod IPs) | Free | StatefulSets, service discovery |
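The headless row above as a manifest sketch: setting `clusterIP: None` makes cluster DNS return the individual Pod IPs instead of a single virtual IP. The name and port here are assumptions:

```yaml
# headless-service.yaml -- illustrative sketch
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless   # hypothetical name
spec:
  clusterIP: None        # headless: DNS resolves to individual Pod IPs
  selector:
    app: myapp
  ports:
  - port: 8080
```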
| pathType | Matching Behavior | Example Path | Matches /foo? | Matches /foo/bar? |
|---|---|---|---|---|
| Exact | Case-sensitive exact match | /foo | Yes | No |
| Prefix | Path prefix match (segment boundary) | /foo | Yes | Yes |
| ImplementationSpecific | Controller-dependent | /foo | Depends | Depends |
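The pathType behaviors above, shown as a hedged fragment of an Ingress `spec.rules[].http` block; the `auth` and `api` Service names are hypothetical:

```yaml
paths:
- path: /login           # Exact: matches /login only, not /login/ or /login/x
  pathType: Exact
  backend:
    service: { name: auth, port: { number: 80 } }
- path: /api             # Prefix: matches /api and /api/v1, but not /apiv2
  pathType: Prefix
  backend:
    service: { name: api, port: { number: 80 } }
```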
START: How should external traffic reach your application?
│
├── Internal microservice only (no external traffic)?
│   ├── YES → Use Service type: ClusterIP (default)
│   └── NO ↓
│
├── Need to expose a single TCP/UDP service (not HTTP)?
│   ├── YES → Cloud cluster? → LoadBalancer. On-prem? → NodePort
│   └── NO ↓
│
├── Need HTTP/HTTPS routing with host/path rules?
│   ├── YES → New project? → Gateway API. Existing? → Ingress
│   └── NO ↓
│
├── Need TLS termination?
│   ├── YES → Add tls section + cert-manager annotation
│   └── NO → Ingress with HTTP-only rules
│
└── Which Ingress controller?
    ├── General purpose → NGINX Ingress Controller
    ├── Auto-discovery, Let's Encrypt → Traefik
    ├── High performance → HAProxy
    └── Cloud native → AWS ALB / GCE Ingress
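For the Gateway API branch above, a minimal sketch, assuming a GatewayClass named `nginx` is already installed in the cluster; resource names are illustrative:

```yaml
# gateway.yaml -- illustrative Gateway API sketch
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  gatewayClassName: nginx   # assumed installed GatewayClass
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
spec:
  parentRefs:
  - name: myapp-gateway     # attaches this route to the Gateway above
  hostnames: [myapp.example.com]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: myapp           # backend Service
      port: 80
```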
A Deployment declares the desired state for your application Pods, including the container image, replicas, and update strategy. [src1]
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.2.3
        ports:
        - containerPort: 8080
        resources:
          requests: { cpu: 100m, memory: 128Mi }
          limits: { cpu: 500m, memory: 256Mi }
        readinessProbe:
          httpGet: { path: /healthz, port: 8080 }
          initialDelaySeconds: 5
          periodSeconds: 10
Verify: kubectl apply -f deployment.yaml && kubectl rollout status deployment/myapp → deployment "myapp" successfully rolled out
A Service provides a stable ClusterIP and DNS name for the set of Pods matched by the selector. [src2]
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 8080
Verify: kubectl get svc myapp → Shows ClusterIP assigned. kubectl get endpoints myapp → Shows Pod IPs.
An Ingress defines HTTP routing rules that map external hostnames and paths to backend Services. Requires an Ingress controller. [src3]
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts: [myapp.example.com]
    secretName: myapp-tls
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port: { number: 80 }
Verify: kubectl get ingress myapp → Shows ADDRESS and HOST.
cert-manager watches for Ingress resources with the appropriate annotation and automatically provisions TLS certificates. [src5]
# cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
Verify: kubectl get certificate -A → Ready: True.
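For illustration only: cert-manager generates a Certificate resource like the following from the annotated Ingress automatically; you would write it by hand only when not using Ingress annotations. The names match the Ingress example above:

```yaml
# certificate.yaml -- what cert-manager creates behind the scenes (sketch)
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-tls
spec:
  secretName: myapp-tls        # TLS Secret referenced by the Ingress
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - myapp.example.com
```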
Update the container image to trigger a zero-downtime rolling update. [src1]
kubectl set image deployment/myapp myapp=myapp:1.3.0
kubectl rollout status deployment/myapp
# Roll back if needed:
kubectl rollout undo deployment/myapp
Verify: kubectl rollout history deployment/myapp → Shows revision history.
# Input: Container image myapp:1.2.3 on port 8080
# Output: App accessible at https://myapp.example.com
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels: { app: myapp }
spec:
  replicas: 3
  selector: { matchLabels: { app: myapp } }
  template:
    metadata: { labels: { app: myapp } }
    spec:
      containers:
      - name: myapp
        image: myapp:1.2.3
        ports: [{ containerPort: 8080 }]
        resources:
          requests: { cpu: 100m, memory: 128Mi }
          limits: { cpu: 500m, memory: 256Mi }
        readinessProbe:
          httpGet: { path: /healthz, port: 8080 }
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  selector: { app: myapp }
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector: { app: myapp }
  ports:
  - name: https
    port: 443
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-service
  annotations:
    # Caution: a capture-group rewrite-target applies to every path in this Ingress
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: { name: frontend, port: { number: 80 } }
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service: { name: api, port: { number: 80 } }
# BAD -- no resource requests/limits; Pod can OOMKill others
spec:
  containers:
  - name: myapp
    image: myapp:1.2.3
    # No resources block -- BestEffort QoS
# GOOD -- explicit resource boundaries
spec:
  containers:
  - name: myapp
    image: myapp:1.2.3
    resources:
      requests: { cpu: 100m, memory: 128Mi }
      limits: { cpu: 500m, memory: 256Mi }
# BAD -- :latest is mutable; rollback impossible
spec:
  containers:
  - name: myapp
    image: myapp:latest
# GOOD -- immutable version tag; rollback works
spec:
  containers:
  - name: myapp
    image: myapp:1.2.3
# BAD -- traffic sent to Pods before they're ready
spec:
  containers:
  - name: myapp
    image: myapp:1.2.3
    # No readinessProbe
# GOOD -- Pod receives traffic only after probe succeeds
spec:
  containers:
  - name: myapp
    image: myapp:1.2.3
    readinessProbe:
      httpGet: { path: /healthz, port: 8080 }
      initialDelaySeconds: 5
      periodSeconds: 10
# BAD -- "my-app" != "myapp"; no traffic reaches Pods
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
# GOOD -- selector matches Pod template labels
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
# BAD -- wrong or no controller may handle it
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: { name: myapp, port: { number: 80 } }
# GOOD -- explicit controller selection
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: { name: myapp, port: { number: 80 } }
kubectl get endpoints myapp to verify Pod IPs are listed. [src2]
kubectl get ingressclass. [src3]
kubectl describe certificate. [src5]
kubectl get svc --all-namespaces. [src2]
kubectl describe pod <name> for exact error. [src4]
kubectl top pod to check usage, set limits 20-50% above peak. [src4]
# Check Deployment rollout status
kubectl rollout status deployment/myapp
# View rollout history
kubectl rollout history deployment/myapp
# Check Service endpoints (empty = selector mismatch)
kubectl get endpoints myapp
# Verify Ingress has ADDRESS (empty = no controller)
kubectl get ingress myapp
# Debug Pod issues
kubectl describe pod -l app=myapp
kubectl logs -l app=myapp --tail=50
# Check installed Ingress controllers
kubectl get ingressclass
# Test connectivity from inside cluster
kubectl run curl --image=curlimages/curl --rm -it -- curl http://myapp.default.svc.cluster.local
# View resource usage (requires metrics-server)
kubectl top pods -l app=myapp
# Check events for failures
kubectl get events --sort-by=.metadata.creationTimestamp
| API/Resource | Version | Status | Key Change |
|---|---|---|---|
| Deployment (apps/v1) | k8s 1.9+ | Stable (GA) | apps/v1beta2 removed in 1.16 |
| Service (v1) | k8s 1.0+ | Stable (GA) | Dual-stack IPv4/IPv6 GA in 1.23 |
| Ingress (networking.k8s.io/v1) | k8s 1.19+ | Frozen (GA) | extensions/v1beta1 removed in 1.22; API frozen |
| IngressClass | k8s 1.18+ | Stable (GA) | Required to specify controller |
| Gateway API (v1) | k8s 1.26+ | Stable (GA) | HTTPRoute, Gateway; successor to Ingress |
| Gateway API (v1alpha2) | k8s 1.24+ | Experimental | TCPRoute, TLSRoute, UDPRoute (GRPCRoute graduated to v1 in Gateway API v1.1) |
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Deploying stateless HTTP/HTTPS apps | Running stateful workloads (DBs, queues) | StatefulSet + Headless Service |
| Need rolling updates with zero downtime | Need batch processing or scheduled jobs | Job / CronJob |
| Routing external HTTP traffic by host/path | Exposing non-HTTP TCP/UDP externally | LoadBalancer Service or Gateway API TCPRoute |
| Existing cluster already uses Ingress | Starting a new greenfield project (2026+) | Gateway API (HTTPRoute + Gateway) |
| Single-team, simple routing rules | Need cross-team networking role separation | Gateway API (role-oriented model) |