Use kompose convert (v1.34+) to auto-generate Kubernetes manifests, then refine with health checks, resource limits, secrets, and NetworkPolicies — expect 70-80% auto-conversion.

kompose convert -f docker-compose.yml

Key caveats before converting:

- Kompose skips build: directives — all images must be pre-built and pushed to a container registry before conversion [src1, src2]
- No readiness probes or resource limits are generated — add resources.requests and resources.limits yourself; a single pod without limits can consume all node resources [src3]
- depends_on has no Kubernetes equivalent — use init containers, readiness probes, or application-level connection retry logic [src6]

| Docker Compose | Kubernetes Equivalent | Notes |
|---|---|---|
| services: | Deployment + Service | Each compose service becomes one Deployment + one Service [src1] |
| image: | containers[].image | Direct mapping in pod spec [src1] |
| build: | (skipped by Kompose) | Pre-build and push to registry (Docker Hub, ECR, GCR, GHCR) [src6] |
| ports: | Service (ClusterIP/LoadBalancer) | Or use Gateway API HTTPRoute [src1, src8] |
| volumes: (named) | PersistentVolumeClaim | Kompose creates PVCs automatically [src1, src2] |
| volumes: (bind mount) | ConfigMap or hostPath | ConfigMap for config files; avoid hostPath in production [src6] |
| environment: | env: or ConfigMap | Sensitive values must use K8s Secret [src3] |
| env_file: | ConfigMap from file | kubectl create configmap --from-env-file=.env [src1] |
| depends_on: | (no equivalent) | Use init containers or readiness probes [src6] |
| restart: always | restartPolicy: Always | Default in Deployments [src7] |
| networks: | K8s networking (flat) + NetworkPolicy | All pods reachable by service name; use NetworkPolicy to restrict [src7] |
| deploy.replicas: | spec.replicas: | Direct mapping; add HPA for auto-scaling [src2] |
| deploy.resources: | resources.requests/limits | CPU and memory [src3] |
| healthcheck: | liveness/readiness/startupProbe | Add manually — Kompose may not convert [src6] |
| secrets: | Secret | kubectl create secret generic or Sealed Secrets for GitOps [src3] |
START
├── Is this for local development only?
│ ├── YES → Keep Docker Compose (K8s is overkill) [src3]
│ └── NO → Continue ↓
├── How many services?
│ ├── 1-3 → Kompose convert [src1, src2]
│ ├── 4-10 → Kompose + Helm charts or Kustomize [src4]
│ └── 10+ → Full Helm chart architecture with subcharts, or Move2Kube [src4, src5]
├── Do you need auto-scaling?
│ ├── YES → Kubernetes HPA [src7]
│ └── NO → Fixed replicas
├── Where will K8s run?
│ ├── Local → Minikube, kind, or k3d [src1]
│ ├── Cloud → EKS, GKE, or AKS [src3]
│ └── Self-hosted → kubeadm or k3s
├── Environment-specific configs?
│ ├── YES (simple overlays) → Kustomize [src3]
│ ├── YES (complex templating) → Helm values.yaml per env [src4]
│ └── NO → Plain manifests
├── Need advanced source analysis?
│ ├── YES → Move2Kube (plan → transform) [src5]
│ └── NO → Kompose is sufficient
└── DEFAULT → kompose convert → refine → Helm/Kustomize → deploy
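The decision tree above can be condensed into a small helper; the thresholds and tool names come from the tree itself, while the function and parameter names are illustrative:

```python
def recommend_migration_path(local_dev_only: bool, service_count: int,
                             advanced_source_analysis: bool = False) -> str:
    """Map the decision-tree questions to a tooling recommendation."""
    if local_dev_only:
        return "Keep Docker Compose"  # K8s is overkill for local-only work
    if advanced_source_analysis:
        return "Move2Kube (plan -> transform)"  # deeper Compose + Dockerfile + source analysis
    if service_count <= 3:
        return "kompose convert"
    if service_count <= 10:
        return "Kompose + Helm charts or Kustomize"
    return "Full Helm chart architecture with subcharts, or Move2Kube"

print(recommend_migration_path(local_dev_only=False, service_count=2))
# -> kompose convert
```

The remaining branches (runtime target, environment overlays, auto-scaling) are orthogonal choices layered on top of whichever conversion path this returns.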
# Install Kompose (v1.38 as of Feb 2026)
# macOS:
brew install kompose
# Linux:
curl -L https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-amd64 -o kompose
chmod +x kompose && sudo mv kompose /usr/local/bin/
# Convert
kompose convert -f docker-compose.yml
# Output: deployment.yaml, service.yaml, pvc.yaml per service
# Alternative: convert directly to Helm chart
kompose convert -f docker-compose.yml --chart
Verify: ls *.yaml — one Deployment + Service per compose service. [src1, src2]
Kompose skips build: directives — all images must be in a registry. [src6]
# Build and push each service image
docker build -t myregistry/web:1.0 ./web && docker push myregistry/web:1.0
docker build -t myregistry/api:1.0 ./api && docker push myregistry/api:1.0
# Update generated YAML image references
sed -i 's|web:latest|myregistry/web:1.0|g' web-deployment.yaml
sed -i 's|api:latest|myregistry/api:1.0|g' api-deployment.yaml
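If you prefer not to hand-edit manifests with sed, the same image-reference rewrite can be scripted. A minimal sketch — the registry names and tags mirror the example above and are assumptions; it does plain text substitution anchored on the image: key, so review the diff before applying:

```python
import re
from pathlib import Path

# Map of image references kompose emitted -> registry-qualified tags (example values).
IMAGE_MAP = {
    "web:latest": "myregistry/web:1.0",
    "api:latest": "myregistry/api:1.0",
}

def retag_images(manifest_text: str, image_map: dict[str, str]) -> str:
    """Rewrite 'image:' lines only, leaving the rest of the manifest untouched."""
    for old, new in image_map.items():
        # Anchor on the 'image:' key so matching strings elsewhere are not touched.
        manifest_text = re.sub(
            rf"(image:\s*){re.escape(old)}\b", rf"\g<1>{new}", manifest_text
        )
    return manifest_text

# Apply to every kompose-generated deployment in the current directory.
for path in Path(".").glob("*-deployment.yaml"):
    path.write_text(retag_images(path.read_text(), IMAGE_MAP))
```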
Kompose output lacks production essentials — add manually. [src3, src6]
# Add to each container spec in deployment.yaml
containers:
- name: web
  image: myregistry/web:1.0
  ports:
  - containerPort: 8080
  startupProbe:
    httpGet:
      path: /health
      port: 8080
    failureThreshold: 30
    periodSeconds: 2
  livenessProbe:
    httpGet:
      path: /health
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 30
  readinessProbe:
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  resources:
    requests:
      cpu: "100m"
      memory: "128Mi"
    limits:
      cpu: "500m"
      memory: "512Mi"
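Applying that boilerplate across many generated Deployments can be scripted. A minimal sketch, operating on already-parsed manifest dicts (YAML loading and dumping, e.g. via PyYAML, is assumed and omitted; the /health and /ready paths and the default values are the same assumptions as above):

```python
import copy

# Defaults mirroring the hand-written probe/resource blocks above (assumed values).
PROBE_DEFAULTS = {
    "startupProbe":   {"httpGet": {"path": "/health"}, "failureThreshold": 30, "periodSeconds": 2},
    "livenessProbe":  {"httpGet": {"path": "/health"}, "initialDelaySeconds": 10, "periodSeconds": 30},
    "readinessProbe": {"httpGet": {"path": "/ready"},  "initialDelaySeconds": 5,  "periodSeconds": 10},
}
RESOURCE_DEFAULTS = {
    "requests": {"cpu": "100m", "memory": "128Mi"},
    "limits":   {"cpu": "500m", "memory": "512Mi"},
}

def harden_deployment(manifest: dict) -> dict:
    """Fill in missing probes and resources on every container of a Deployment."""
    if manifest.get("kind") != "Deployment":
        return manifest
    for container in manifest["spec"]["template"]["spec"]["containers"]:
        # Probe the container's own first declared port, defaulting to 8080.
        port = container.get("ports", [{}])[0].get("containerPort", 8080)
        for name, probe in PROBE_DEFAULTS.items():
            if name not in container:
                p = copy.deepcopy(probe)
                p["httpGet"]["port"] = port
                container[name] = p
        container.setdefault("resources", copy.deepcopy(RESOURCE_DEFAULTS))
    return manifest
```

Existing probes and resources blocks are left untouched, so the function is safe to run repeatedly over a manifest directory.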
# Non-sensitive → ConfigMap
kubectl create configmap app-config \
  --from-literal=NODE_ENV=production \
  --from-literal=LOG_LEVEL=info
# Sensitive → Secret
kubectl create secret generic app-secrets \
  --from-literal=DATABASE_URL='postgres://user:pass@db:5432/myapp' \
  --from-literal=API_KEY='sk-xxx'
# For GitOps: use Sealed Secrets to safely commit encrypted secrets
kubeseal --format yaml < secret.yaml > sealed-secret.yaml
# Reference in Deployment
envFrom:
- configMapRef:
    name: app-config
- secretRef:
    name: app-secrets
Gateway API is the recommended approach for new clusters (K8s 1.31+). [src7, src8]
# Option A: Gateway API (recommended for new deployments)
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: my-gateway
  hostnames: ["myapp.com"]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web
      port: 8080
# Option B: Legacy Ingress (still widely supported)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts: [myapp.com]
    secretName: myapp-tls
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
Compose networks: isolation does not carry over — by default all K8s pods can communicate. [src7]
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-web-only
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
    ports:
    - port: 3000
# Option A: Helm chart
helm create myapp
# Replace templates/ with refined manifests
# Parameterize values in values.yaml
helm install myapp ./myapp -f values-production.yaml
helm upgrade myapp ./myapp -f values-production.yaml
# Option B: Kustomize overlays (built into kubectl)
kubectl apply -k overlays/production/
Full script: python-programmatic-kompose-conversion-with-valida.py (33 lines)
# Input: docker-compose.yml path
# Output: Validated Kubernetes manifests
import subprocess
import yaml
from pathlib import Path
def convert_and_validate(compose_file: str, output_dir: str = "./k8s") -> list[str]:
    """Convert docker-compose.yml to K8s manifests and validate."""
    Path(output_dir).mkdir(exist_ok=True)
    result = subprocess.run(
        ["kompose", "convert", "-f", compose_file, "--out", output_dir],
        capture_output=True, text=True
    )
    if result.returncode != 0:
        raise RuntimeError(f"Kompose failed: {result.stderr}")
    manifests = list(Path(output_dir).glob("*.yaml"))
    issues = []
    for f in manifests:
        doc = yaml.safe_load(f.read_text())
        kind = doc.get("kind", "Unknown")
        if kind == "Deployment":
            containers = doc["spec"]["template"]["spec"]["containers"]
            for c in containers:
                if "livenessProbe" not in c:
                    issues.append(f"{f.name}: {c['name']} missing livenessProbe")
                if "resources" not in c:
                    issues.append(f"{f.name}: {c['name']} missing resource limits")
    if issues:
        print("Production readiness issues:")
        for i in issues:
            print(f"  - {i}")
    return [str(f) for f in manifests]
Full script: bash-end-to-end-migration-script.sh (27 lines)
#!/bin/bash
# Input: docker-compose.yml
# Output: Deployed Kubernetes application
set -euo pipefail
COMPOSE_FILE="${1:-docker-compose.yml}"
NAMESPACE="${2:-default}"
OUTPUT_DIR="./k8s-manifests"
echo "=== Docker Compose to Kubernetes Migration ==="
mkdir -p "$OUTPUT_DIR"
kompose convert -f "$COMPOSE_FILE" --out "$OUTPUT_DIR"
for f in "$OUTPUT_DIR"/*.yaml; do
  kubectl apply --dry-run=client -f "$f" 2>&1 | grep -v "configured" || true
done
kubectl apply -f "$OUTPUT_DIR/" -n "$NAMESPACE"
for deployment in $(kubectl get deployments -n "$NAMESPACE" -o name); do
  echo "Waiting for $deployment..."
  kubectl rollout status "$deployment" -n "$NAMESPACE" --timeout=120s
done
echo "=== Migration complete ==="
kubectl get all -n "$NAMESPACE"
For larger projects with multiple Dockerfiles and source code, Move2Kube (CNCF Konveyor) provides a more comprehensive migration path. [src5]
# Install Move2Kube (v0.3.15+)
curl -L https://github.com/konveyor/move2kube/releases/latest/download/move2kube-linux-amd64 -o move2kube
chmod +x move2kube && sudo mv move2kube /usr/local/bin/
# Step 1: Plan — analyzes Compose + Dockerfiles + source
move2kube plan -s ./my-compose-project
# Step 2: Transform — interactive Q&A, generates K8s manifests + Helm + CI/CD
move2kube transform -s ./my-compose-project
# Output: deploy/yamls/, deploy/cicd/, deploy/scripts/
# BAD — hostPath ties pods to specific nodes and breaks scaling
volumes:
- name: data
  hostPath:
    path: /data/myapp
# GOOD — PVCs are portable and work with cloud storage [src1]
volumes:
- name: data
  persistentVolumeClaim:
    claimName: myapp-data
# BAD — secrets visible in kubectl describe
env:
- name: DATABASE_PASSWORD
  value: "my-secret-password"
# GOOD — secrets stored securely [src3]
env:
- name: DATABASE_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-secrets
      key: password
# BAD — Kubernetes has no depends_on
# Compose: depends_on: [db, redis]
# This causes startup crashes when dependencies aren't ready
# GOOD — init container waits for DB before app starts [src6]
initContainers:
- name: wait-for-db
  image: busybox:1.36
  command: ['sh', '-c', 'until nc -z db-service 5432; do sleep 2; done']
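The other depends_on replacement mentioned earlier is application-level retry logic. A minimal sketch of connection retry with exponential backoff — the connect callable, attempt count, and timing values are illustrative, not from the original:

```python
import time

def connect_with_retry(connect, attempts: int = 5, base_delay: float = 0.5):
    """Call connect() until it succeeds, backing off exponentially between tries.

    'connect' is any zero-argument callable that raises on failure, e.g. a
    database client's connect function. This replaces compose-style depends_on
    ordering with tolerance for a dependency that is not ready yet.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except OSError as exc:  # connection refused, DNS not resolvable yet, etc.
            if attempt == attempts - 1:
                raise  # out of retries; let the pod crash and restart
            delay = base_delay * (2 ** attempt)
            print(f"dependency not ready ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
```

Combined with restartPolicy: Always, even an exhausted retry loop recovers: the container exits, Kubernetes restarts it, and the retry cycle begins again.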
# BAD — pod can consume all node resources and trigger OOM kills
containers:
- name: api
  image: myregistry/api:1.0
  # No resources block = unlimited
# GOOD — bounded resource usage, enables HPA [src3]
containers:
- name: api
  image: myregistry/api:1.0
  resources:
    requests:
      cpu: "100m"
      memory: "128Mi"
    limits:
      cpu: "500m"
      memory: "512Mi"
- build: directives skipped: images must be pre-built and in a registry. Fix: build, tag, push before conversion. [src6]
- No health checks generated: add readinessProbe, livenessProbe, and startupProbe to every container. [src3, src6]
- depends_on not converted: K8s doesn't guarantee startup order. Fix: init containers or connection retry logic. [src6]
- No resource limits generated: add resources.requests and resources.limits. [src3]
- networks: doesn't create K8s NetworkPolicies. Fix: implement NetworkPolicies to restrict pod-to-pod traffic. [src7]

# Validate compose file before conversion
docker compose config -f docker-compose.yml
# Convert and see what Kompose generates (stdout preview)
kompose convert -f docker-compose.yml --stdout
# Dry-run deployment (validates manifests without applying)
kubectl apply --dry-run=client -f k8s-manifests/
# Check pod status after deployment
kubectl get pods -o wide
kubectl describe pod <pod-name>
kubectl logs <pod-name> --previous # logs from crashed container
# Verify services and endpoints
kubectl get svc
kubectl get endpoints
# Test service connectivity
kubectl run test --rm -it --image=busybox:1.36 -- wget -qO- http://web-service:8080/health
# Check Gateway API routes
kubectl get httproutes
kubectl describe httproute app-route
# Verify NetworkPolicies
kubectl get networkpolicies
kubectl describe networkpolicy api-allow-web-only
# Check resource usage vs limits
kubectl top pods
kubectl top nodes
| Tool | Version | Status | Key Changes |
|---|---|---|---|
| Kompose | 1.38 (Jan 2026) | Current | Compose Spec v3 full support, Helm chart output, ARM64 [src2] |
| Kubernetes | 1.35 (Dec 2025) | Current | Timbernetes release, continued Gateway API improvements [src7] |
| Kubernetes | 1.33 (Apr 2025) | Supported | Sidecar containers GA, native init container restartPolicy:Always [src7] |
| Helm | 3.16+ | Current | OCI registry support, improved schema validation [src4] |
| Docker Compose | v2 (Go) | Current | docker compose (space) replaces docker-compose (hyphen) |
| Move2Kube | 0.3.15 (Mar 2025) | Current | Enhanced Compose parsing, Helm + Kustomize output [src5] |
| Gateway API | v1.4 (Nov 2025) | Current | External auth filter, BackendTLSPolicy stable [src8] |
| Migrate When | Don't Migrate When | Use Instead |
|---|---|---|
| Production workloads need scaling | Local development only | Keep Docker Compose |
| Multiple environments (staging, prod) | Team < 3 engineers, single-server | Docker Compose + systemd |
| Need rolling updates, self-healing | Budget doesn't cover K8s ops | Railway, Fly.io, or Render |
| Cloud-managed K8s available (EKS/GKE/AKS) | Single stateless container | Cloud Run, App Runner, Fargate |
| Need service mesh or mTLS | No compliance/multi-tenancy needs | Docker Compose + Traefik |
| GitOps workflow required | Prototype or proof-of-concept | Docker Compose |
Kubernetes 1.33+ supports restartPolicy: Always on init containers (native sidecars) for proper sidecar lifecycle management. [src7]