Run Kafka in Docker with docker compose up -d and a properly configured KAFKA_ADVERTISED_LISTENERS for external client access. The two most common mistakes are a misconfigured KAFKA_ADVERTISED_LISTENERS (clients get an unreachable internal hostname) and forgetting KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 in single-broker setups. For KRaft mode, generate a CLUSTER_ID with kafka-storage random-uuid.

| Service | Image | Ports | Volumes | Key Env |
|---|---|---|---|---|
| Kafka (KRaft) | confluentinc/cp-kafka:7.7.1 | 9092:9092, 29092:29092 | kafka-data:/var/lib/kafka/data | KAFKA_PROCESS_ROLES=broker,controller |
| Kafka (KRaft) | apache/kafka:3.9.1 | 9092:9092 | kafka-data:/opt/kafka/data | KAFKA_NODE_ID=1 |
| Kafka (ZooKeeper) | confluentinc/cp-kafka:7.4.4 | 9092:9092, 29092:29092 | kafka-data:/var/lib/kafka/data | KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 |
| ZooKeeper | confluentinc/cp-zookeeper:7.4.4 | 2181:2181 | zk-data:/var/lib/zookeeper/data | ZOOKEEPER_CLIENT_PORT=2181 |
| Schema Registry | confluentinc/cp-schema-registry:7.7.1 | 8081:8081 | -- | SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:29092 |
| Kafka UI | ghcr.io/kafbat/kafka-ui:latest | 8080:8080 | -- | KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:29092 |
| Kafka Connect | confluentinc/cp-kafka-connect:7.7.1 | 8083:8083 | connect-plugins:/usr/share/java | CONNECT_BOOTSTRAP_SERVERS=kafka:29092 |
| REST Proxy | confluentinc/cp-kafka-rest:7.7.1 | 8082:8082 | -- | KAFKA_REST_BOOTSTRAP_SERVERS=kafka:29092 |
| Component | Prefix | Example Property | Environment Variable |
|---|---|---|---|
| Kafka broker | KAFKA_ | advertised.listeners | KAFKA_ADVERTISED_LISTENERS |
| ZooKeeper | ZOOKEEPER_ | client.port | ZOOKEEPER_CLIENT_PORT |
| Schema Registry | SCHEMA_REGISTRY_ | kafkastore.bootstrap.servers | SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS |
| Kafka Connect | CONNECT_ | bootstrap.servers | CONNECT_BOOTSTRAP_SERVERS |
| REST Proxy | KAFKA_REST_ | bootstrap.servers | KAFKA_REST_BOOTSTRAP_SERVERS |
Rule: Prefix + uppercase + replace . with _. [src1]
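The rule is mechanical, so it can be sanity-checked with a tiny helper (an illustrative sketch; `to_env_var` is not part of any image, and note that some images use special encodings for properties containing dashes or underscores):

```python
def to_env_var(prefix: str, prop: str) -> str:
    """Map a Kafka-style property name to the Docker image's env-var form:
    prepend the component prefix, uppercase, and replace '.' with '_'."""
    return prefix + prop.upper().replace(".", "_")

print(to_env_var("KAFKA_", "advertised.listeners"))
# KAFKA_ADVERTISED_LISTENERS
print(to_env_var("SCHEMA_REGISTRY_", "kafkastore.bootstrap.servers"))
# SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
```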
START: Which Kafka deployment do you need?
├── Kafka version 4.0+?
│ ├── YES → Use KRaft mode only (ZooKeeper removed)
│ └── NO ↓
├── Kafka 3.3 - 3.9?
│ ├── YES → KRaft recommended (production-ready since 3.3.1)
│ │ Use ZooKeeper only if migrating existing cluster
│ └── NO ↓
├── Kafka 2.x or 3.0-3.2?
│ ├── YES → Use ZooKeeper mode (KRaft not production-ready)
│ └── NO ↓
└── Need Schema Registry / Connect / UI?
├── Dev/testing → Add services to same compose file
├── Staging/prod → Separate compose files or Kubernetes
└── Schema Registry → Always co-deploy with Kafka broker
This is the standard setup for Kafka 3.3+ and the only option for Kafka 4.0+. KRaft eliminates ZooKeeper entirely. [src2] [src3]
# docker-compose.yml -- Kafka KRaft mode (no ZooKeeper)
services:
kafka:
image: confluentinc/cp-kafka:7.7.1
hostname: kafka
container_name: kafka
ports:
- "9092:9092"
- "29092:29092"
environment:
KAFKA_NODE_ID: 1
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
KAFKA_LISTENERS: PLAINTEXT://kafka:29092,CONTROLLER://kafka:9093,PLAINTEXT_HOST://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    volumes:
      - kafka-data:/var/lib/kafka/data
volumes:
kafka-data:
Verify: docker compose up -d && docker compose exec kafka kafka-topics --bootstrap-server localhost:9092 --list → empty list (no error)
Use only for Kafka 2.x or migrating existing ZooKeeper-based clusters. [src1] [src5]
# docker-compose.yml -- Kafka with ZooKeeper (legacy)
services:
zookeeper:
image: confluentinc/cp-zookeeper:7.4.4
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
kafka:
image: confluentinc/cp-kafka:7.4.4
depends_on:
- zookeeper
ports:
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Verify: docker compose up -d && docker compose exec kafka kafka-topics --bootstrap-server localhost:9092 --list → empty list (no error)
Schema Registry manages Avro, Protobuf, and JSON schemas for Kafka topics. [src1]
schema-registry:
image: confluentinc/cp-schema-registry:7.7.1
depends_on:
- kafka
ports:
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:29092
SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
Verify: curl -s http://localhost:8081/subjects → []
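Subjects follow the default TopicNameStrategy: value schemas register under `<topic>-value` and key schemas under `<topic>-key`, and the registration body wraps the schema as an escaped JSON string. A minimal sketch of how those pieces are built (helper names are illustrative, not part of the Schema Registry API):

```python
import json

def subject_for(topic: str, is_key: bool = False) -> str:
    # Default TopicNameStrategy: "<topic>-key" or "<topic>-value"
    return f"{topic}-{'key' if is_key else 'value'}"

def register_payload(schema: dict) -> str:
    # Schema Registry expects {"schema": "<schema as an escaped JSON string>"}
    return json.dumps({"schema": json.dumps(schema)})

print(subject_for("test-topic"))
# test-topic-value
print(register_payload({"type": "string"}))
```

POSTing that payload to http://localhost:8081/subjects/test-topic-value/versions with Content-Type application/vnd.schemaregistry.v1+json registers the schema.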
Kafka UI provides a web dashboard for topics, consumers, messages, and cluster monitoring. [src7]
kafka-ui:
image: ghcr.io/kafbat/kafka-ui:latest
depends_on:
- kafka
ports:
- "8080:8080"
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092
KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schema-registry:8081
Verify: Open http://localhost:8080 → Kafka UI dashboard with cluster info
Create topics and test with the built-in CLI tools. [src2]
# Create a topic
docker compose exec kafka kafka-topics \
--bootstrap-server localhost:9092 \
--create --topic test-topic --partitions 3 --replication-factor 1
# Produce messages
echo "hello world" | docker compose exec -T kafka kafka-console-producer \
--bootstrap-server localhost:9092 --topic test-topic
# Consume messages
docker compose exec kafka kafka-console-consumer \
--bootstrap-server localhost:9092 --topic test-topic \
--from-beginning --max-messages 1
Verify: Consumer outputs hello world
Full script: kraft-full-stack.yml (72 lines)
# docker-compose.yml -- Full Kafka dev stack (KRaft mode)
# Services: Kafka broker, Schema Registry, Kafka UI
# Usage: docker compose up -d
# See full script for complete configuration
services:
kafka:
image: apache/kafka:3.9.1
container_name: kafka
ports:
- "9092:9092"
environment:
KAFKA_NODE_ID: 1
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
Full script: kraft-multi-broker.yml (95 lines)
# 3-broker KRaft cluster for staging/testing
# Generate CLUSTER_ID: docker compose exec kafka-1 kafka-storage random-uuid
# See full script for complete 3-broker configuration
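The quorum wiring for three brokers looks roughly like this (an illustrative fragment assuming service names kafka-1/2/3; each broker gets a unique KAFKA_NODE_ID, while the voter list and CLUSTER_ID must be identical on all three):

```yaml
# Per-broker environment for a 3-node KRaft quorum (broker 1 shown)
environment:
  KAFKA_NODE_ID: 1                    # 2 and 3 on the other brokers
  KAFKA_PROCESS_ROLES: broker,controller
  KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka-1:9093,2@kafka-2:9093,3@kafka-3:9093
  KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3   # 3 brokers, so factor 3 is safe
  CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk          # same value on all three brokers
```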
from confluent_kafka import Producer, Consumer # confluent-kafka==2.6.1
import json
# Producer
producer = Producer({'bootstrap.servers': 'localhost:9092'})
producer.produce('test-topic', key='key1', value=json.dumps({'msg': 'hello'}))
producer.flush()
# Consumer
consumer = Consumer({
'bootstrap.servers': 'localhost:9092',
'group.id': 'test-group',
'auto.offset.reset': 'earliest'
})
consumer.subscribe(['test-topic'])
msg = consumer.poll(timeout=10.0)
if msg and not msg.error():
print(f"Received: {msg.value().decode('utf-8')}")
consumer.close()
# BAD -- other containers cannot reach "localhost"
environment:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
# GOOD -- separate listeners for Docker network and host machine
environment:
KAFKA_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
# BAD -- default replication factor is 3, single broker only has 1
environment:
KAFKA_BROKER_ID: 1
# Missing: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
# GOOD -- match replication factor to broker count
environment:
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
# BAD -- cannot use both modes simultaneously
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_PROCESS_ROLES: broker,controller
# GOOD (KRaft)
environment:
KAFKA_NODE_ID: 1
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
# BAD -- Schema Registry starts before Kafka is ready
services:
schema-registry:
image: confluentinc/cp-schema-registry:7.7.1
# No depends_on, no health check
# GOOD -- wait for Kafka to be healthy
services:
kafka:
healthcheck:
test: kafka-topics --bootstrap-server localhost:9092 --list
interval: 10s
timeout: 5s
retries: 5
schema-registry:
depends_on:
kafka:
condition: service_healthy
Common failure modes and fixes:

- Clients on the host cannot connect because the broker advertises an internal hostname instead of localhost:9092. Fix: Add a PLAINTEXT_HOST://localhost:9092 listener for host access. [src4]
- Single-broker startup fails creating internal topics (default replication factor is 3). Fix: Set KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 and KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1. [src1]
- Data disappears after container restart. Fix: Mount a named volume, e.g. volumes: - kafka-data:/var/lib/kafka/data. [src2]
- Schema Registry or Connect crash-loops because Kafka is not ready yet. Fix: Use depends_on with condition: service_healthy and add a health check to Kafka. [src5]
- ZooKeeper ensemble nodes conflict over ZOOKEEPER_SERVER_ID. Fix: Assign unique IDs (1, 2, 3) to each node. [src1]
- Port 9092 is already in use on the host. Fix: Remap the host port, e.g. ports: - "19092:9092". [src4]
- Multi-broker KRaft cluster fails to form because brokers use different CLUSTER_ID values. Fix: Generate one UUID with kafka-storage random-uuid and use it for all brokers. [src3]
- Commands fail under docker-compose (v1, hyphenated), which is deprecated. Fix: Use docker compose (v2, space-separated). [src2]

# Check if Kafka broker is running and responsive
docker compose exec kafka kafka-broker-api-versions --bootstrap-server localhost:9092
# List all topics
docker compose exec kafka kafka-topics --bootstrap-server localhost:9092 --list
# Describe a specific topic (partitions, replicas, ISR)
docker compose exec kafka kafka-topics --bootstrap-server localhost:9092 --describe --topic test-topic
# Check consumer group lag
docker compose exec kafka kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group my-group
# View broker logs for errors
docker compose logs kafka --tail 50
# Test Schema Registry connectivity
curl -s http://localhost:8081/subjects | python3 -m json.tool
# Check ZooKeeper status (ZooKeeper mode only)
docker compose exec zookeeper bash -c 'echo ruok | nc localhost 2181'
# Check Kafka container resource usage
docker stats kafka --no-stream
# Verify advertised listeners from client perspective
docker compose exec kafka kafka-configs --bootstrap-server localhost:9092 --entity-type brokers --entity-name 1 --describe --all | grep advertised
| Kafka Version | ZooKeeper Support | KRaft Status | Docker Image | Notes |
|---|---|---|---|---|
| 4.0+ | Removed | Required (only mode) | apache/kafka:4.0.0 | ZooKeeper code fully removed |
| 3.9.x | Deprecated | Production-ready | apache/kafka:3.9.1, cp-kafka:7.8.x | Last release with ZK support |
| 3.7-3.8 | Deprecated | Production-ready | apache/kafka:3.8.0, cp-kafka:7.7.1 | KRaft recommended for new clusters |
| 3.3-3.6 | Supported | Production-ready | cp-kafka:7.3-7.6 | KRaft GA since 3.3.1 |
| 3.0-3.2 | Supported | Preview | cp-kafka:7.0-7.2 | KRaft not recommended for production |
| 2.x | Required | Not available | cp-kafka:6.x | Must use ZooKeeper |
Confluent Platform to Kafka version mapping: CP 7.7.x = Kafka 3.7.x, CP 7.8.x = Kafka 3.8.x. [src6]
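Since CP 7.x tracks Kafka 3.x minor-for-minor, the mapping can be computed mechanically (a sketch based on the mapping above; `kafka_version_for_cp` is illustrative, and the rule only holds for the 7.x line):

```python
def kafka_version_for_cp(cp_version: str) -> str:
    """Map a Confluent Platform 7.x version to its bundled Kafka minor,
    e.g. CP 7.7.x -> Kafka 3.7 (same minor version)."""
    major, minor = cp_version.split(".")[:2]
    if major != "7":
        raise ValueError("minor-for-minor rule only holds for CP 7.x")
    return f"3.{minor}"

print(kafka_version_for_cp("7.7.1"))
# 3.7
```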
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Local development and testing | Production with 99.99% SLA requirements | Managed Kafka (Confluent Cloud, AWS MSK) |
| CI/CD pipeline integration testing | Multi-datacenter replication needed | Kubernetes + Strimzi/Confluent Operator |
| Learning Kafka architecture | Need auto-scaling based on load | Serverless Kafka (Confluent, Redpanda) |
| Prototyping event-driven architectures | Security compliance requires audit logging | Enterprise Confluent Platform |
| Docker-based staging environments | Team lacks Docker/container expertise | Native Kafka install or managed service |
The apache/kafka image is fully Apache 2.0 licensed. The wurstmeister/kafka image is unmaintained since 2023; migrate to confluentinc/cp-kafka or apache/kafka. Schema Registry stores all schemas in a Kafka topic (_schemas). Losing this topic loses all schema history. Back up the topic or use Schema Registry export.