Docker Compose Kafka + Zookeeper: Complete Setup Reference

Type: Software Reference Confidence: 0.94 Sources: 7 Verified: 2026-02-27 Freshness: 2026-02-27

TL;DR

Constraints

Quick Reference

Service Configuration Summary

| Service | Image | Ports | Volumes | Key Env |
|---|---|---|---|---|
| Kafka (KRaft) | confluentinc/cp-kafka:7.7.1 | 9092:9092, 29092:29092 | kafka-data:/var/lib/kafka/data | KAFKA_PROCESS_ROLES=broker,controller |
| Kafka (KRaft) | apache/kafka:3.9.1 | 9092:9092 | kafka-data:/opt/kafka/data | KAFKA_NODE_ID=1 |
| Kafka (ZooKeeper) | confluentinc/cp-kafka:7.4.4 | 9092:9092, 29092:29092 | kafka-data:/var/lib/kafka/data | KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 |
| ZooKeeper | confluentinc/cp-zookeeper:7.4.4 | 2181:2181 | zk-data:/var/lib/zookeeper/data | ZOOKEEPER_CLIENT_PORT=2181 |
| Schema Registry | confluentinc/cp-schema-registry:7.7.1 | 8081:8081 | -- | SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:29092 |
| Kafka UI | ghcr.io/kafbat/kafka-ui:latest | 8080:8080 | -- | KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:29092 |
| Kafka Connect | confluentinc/cp-kafka-connect:7.7.1 | 8083:8083 | connect-plugins:/usr/share/java | CONNECT_BOOTSTRAP_SERVERS=kafka:29092 |
| REST Proxy | confluentinc/cp-kafka-rest:7.7.1 | 8082:8082 | -- | KAFKA_REST_BOOTSTRAP_SERVERS=kafka:29092 |

Environment Variable Naming Convention (Confluent images)

| Component | Prefix | Example Property | Environment Variable |
|---|---|---|---|
| Kafka broker | KAFKA_ | advertised.listeners | KAFKA_ADVERTISED_LISTENERS |
| ZooKeeper | ZOOKEEPER_ | client.port | ZOOKEEPER_CLIENT_PORT |
| Schema Registry | SCHEMA_REGISTRY_ | kafkastore.bootstrap.servers | SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS |
| Kafka Connect | CONNECT_ | bootstrap.servers | CONNECT_BOOTSTRAP_SERVERS |
| REST Proxy | KAFKA_REST_ | bootstrap.servers | KAFKA_REST_BOOTSTRAP_SERVERS |

Rule: Prefix + uppercase + replace . with _. [src1]
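The naming rule is mechanical, so it can be expressed as a one-line transform. A minimal sketch (the `to_env_var` helper name is illustrative, not part of any Kafka or Confluent tooling):

```python
def to_env_var(prefix: str, prop: str) -> str:
    """Apply the Confluent image rule: prefix + uppercase + '.' -> '_'."""
    return prefix + prop.upper().replace(".", "_")

# Spot-check against the table above
print(to_env_var("KAFKA_", "advertised.listeners"))  # KAFKA_ADVERTISED_LISTENERS
print(to_env_var("CONNECT_", "bootstrap.servers"))   # CONNECT_BOOTSTRAP_SERVERS
```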

Decision Tree

START: Which Kafka deployment do you need?
├── Kafka version 4.0+?
│   ├── YES → Use KRaft mode only (ZooKeeper removed)
│   └── NO ↓
├── Kafka 3.3 - 3.9?
│   ├── YES → KRaft recommended (production-ready since 3.3.1)
│   │           Use ZooKeeper only if migrating existing cluster
│   └── NO ↓
├── Kafka 2.x or 3.0-3.2?
│   ├── YES → Use ZooKeeper mode (KRaft not production-ready)
│   └── NO ↓
└── Need Schema Registry / Connect / UI?
    ├── Dev/testing → Add services to same compose file
    ├── Staging/prod → Separate compose files or Kubernetes
    └── Schema Registry → Always co-deploy with Kafka broker
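The version branch of the tree can be encoded directly. A sketch only (`kafka_mode` is an illustrative helper, not a real tool):

```python
def kafka_mode(version: str) -> str:
    """Return the deployment mode the tree above suggests for a Kafka version."""
    major, minor = (int(part) for part in version.split(".")[:2])
    if major >= 4:
        return "kraft"       # ZooKeeper code removed in 4.0
    if major == 3 and minor >= 3:
        return "kraft"       # production-ready since 3.3.1
    return "zookeeper"       # 2.x and 3.0-3.2

print(kafka_mode("3.9.1"))  # kraft
print(kafka_mode("3.2.0"))  # zookeeper
```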

Step-by-Step Guide

1. Create KRaft mode docker-compose.yml (recommended)

This is the standard setup for Kafka 3.3+ and the only option for Kafka 4.0+. KRaft eliminates ZooKeeper entirely. [src2] [src3]

# docker-compose.yml -- Kafka KRaft mode (no ZooKeeper)
services:
  kafka:
    image: confluentinc/cp-kafka:7.7.1
    hostname: kafka
    container_name: kafka
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
      KAFKA_LISTENERS: PLAINTEXT://kafka:29092,CONTROLLER://kafka:9093,PLAINTEXT_HOST://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    volumes:
      - kafka-data:/var/lib/kafka/data
volumes:
  kafka-data:

Verify: docker compose up -d && docker compose exec kafka kafka-topics --bootstrap-server localhost:9092 --list → empty list (no error)

2. Create ZooKeeper mode docker-compose.yml (legacy)

Use only for Kafka 2.x or when migrating an existing ZooKeeper-based cluster. [src1] [src5]

# docker-compose.yml -- Kafka with ZooKeeper (legacy)
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.4
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:7.4.4
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

Verify: docker compose up -d && docker compose exec kafka kafka-topics --bootstrap-server localhost:9092 --list → empty list (no error)

3. Add Schema Registry

Schema Registry manages Avro, Protobuf, and JSON schemas for Kafka topics. [src1]

  schema-registry:
    image: confluentinc/cp-schema-registry:7.7.1
    depends_on:
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:29092
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

Verify: curl -s http://localhost:8081/subjects → []
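Registering a schema is a single POST to the registry's REST API. A sketch of building the request body in Python (the `Message` schema and `test-topic-value` subject are illustrative; the `<topic>-value` subject name follows the registry's default TopicNameStrategy):

```python
import json

# Avro value schema for test-topic (illustrative field names)
avro_schema = {
    "type": "record",
    "name": "Message",
    "fields": [{"name": "msg", "type": "string"}],
}

# The registry expects the schema itself as a JSON-escaped string under "schema"
payload = json.dumps({"schema": json.dumps(avro_schema)})
print(payload)

# POST it to /subjects/<topic>-value/versions, e.g.:
#   curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
#     --data '<payload>' http://localhost:8081/subjects/test-topic-value/versions
```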

4. Add Kafka UI

Kafka UI provides a web dashboard for topics, consumers, messages, and cluster monitoring. [src7]

  kafka-ui:
    image: ghcr.io/kafbat/kafka-ui:latest
    depends_on:
      - kafka
    ports:
      - "8080:8080"
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092
      KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schema-registry:8081

Verify: Open http://localhost:8080 → Kafka UI dashboard with cluster info

5. Create topics and test messaging

Create topics and test with the built-in CLI tools. [src2]

# Create a topic
docker compose exec kafka kafka-topics \
  --bootstrap-server localhost:9092 \
  --create --topic test-topic --partitions 3 --replication-factor 1

# Produce messages
echo "hello world" | docker compose exec -T kafka kafka-console-producer \
  --bootstrap-server localhost:9092 --topic test-topic

# Consume messages
docker compose exec kafka kafka-console-consumer \
  --bootstrap-server localhost:9092 --topic test-topic \
  --from-beginning --max-messages 1

Verify: Consumer outputs hello world

Code Examples

YAML: Full Stack KRaft + Schema Registry + Kafka UI

Full script: kraft-full-stack.yml (72 lines)

# docker-compose.yml -- Full Kafka dev stack (KRaft mode)
# Services: Kafka broker, Schema Registry, Kafka UI
# Usage: docker compose up -d
# See full script for complete configuration

YAML: Official Apache Image Minimal KRaft

services:
  kafka:
    image: apache/kafka:3.9.1
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk

YAML: Multi-Broker KRaft Cluster (3 brokers)

Full script: kraft-multi-broker.yml (95 lines)

# 3-broker KRaft cluster for staging/testing
# Generate CLUSTER_ID: docker compose exec kafka-1 kafka-storage random-uuid
# See full script for complete 3-broker configuration

Python: Produce and Consume with confluent-kafka

from confluent_kafka import Producer, Consumer  # confluent-kafka==2.6.1
import json

# Producer
producer = Producer({'bootstrap.servers': 'localhost:9092'})
producer.produce('test-topic', key='key1', value=json.dumps({'msg': 'hello'}))
producer.flush()

# Consumer
consumer = Consumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'test-group',
    'auto.offset.reset': 'earliest'
})
consumer.subscribe(['test-topic'])
msg = consumer.poll(timeout=10.0)
if msg and not msg.error():
    print(f"Received: {msg.value().decode('utf-8')}")
consumer.close()
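Kafka stores only bytes, so producer and consumer must agree on a serialization convention. A small sketch that factors the JSON convention from the example above into a pair of helpers (`to_kafka_value`/`from_kafka_value` are illustrative names, not confluent-kafka APIs):

```python
import json
from typing import Any

def to_kafka_value(obj: Any) -> bytes:
    """Serialize a Python object to the UTF-8 JSON bytes the broker stores."""
    return json.dumps(obj).encode("utf-8")

def from_kafka_value(raw: bytes) -> Any:
    """Invert to_kafka_value, mirroring what msg.value() hands back."""
    return json.loads(raw.decode("utf-8"))

# Round trip: what producer.produce(value=...) sends is what the consumer decodes
payload = to_kafka_value({"msg": "hello"})
assert from_kafka_value(payload) == {"msg": "hello"}
```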

Anti-Patterns

Wrong: Using localhost as advertised listener inside Docker network

# BAD -- other containers cannot reach "localhost"
environment:
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092

Correct: Dual listeners for internal and external access

# GOOD -- separate listeners for Docker network and host machine
environment:
  KAFKA_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://0.0.0.0:9092
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
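The dual-listener split works because a Kafka client first contacts a bootstrap address, then reconnects to whatever address the broker advertises for the listener it arrived on. A toy Python model of that lookup (not real client code) shows why each client type must receive an address it can actually resolve:

```python
# Advertised addresses from the GOOD config above, keyed by listener name
ADVERTISED = {
    "PLAINTEXT": "kafka:29092",          # resolvable only on the Docker network
    "PLAINTEXT_HOST": "localhost:9092",  # reachable only from the host machine
}

def broker_address(listener: str) -> str:
    """Address the broker tells a client to reconnect to after bootstrap."""
    return ADVERTISED[listener]

# A container bootstrapping via kafka:29092 gets a Docker-network name back...
print(broker_address("PLAINTEXT"))       # kafka:29092
# ...while a host client on localhost:9092 gets an address the host can reach
print(broker_address("PLAINTEXT_HOST"))  # localhost:9092
```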

Wrong: Default replication factor on single broker

# BAD -- default replication factor is 3, single broker only has 1
environment:
  KAFKA_BROKER_ID: 1
  # Missing: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

Correct: Set replication factor to 1 for dev

# GOOD -- match replication factor to broker count
environment:
  KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
  KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
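A quick way to catch this before startup is to lint the environment block against the broker count. A sketch, assuming Kafka's stock defaults for these three settings (3, 3, and 2 respectively); `check_single_broker_env` is an illustrative helper:

```python
# Kafka broker defaults for these settings all assume >= 2-3 brokers
DEFAULTS = {
    "KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR": 3,
    "KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR": 3,
    "KAFKA_TRANSACTION_STATE_LOG_MIN_ISR": 2,
}

def check_single_broker_env(env: dict, broker_count: int = 1) -> list:
    """List settings whose explicit or default value exceeds the broker count."""
    problems = []
    for key, default in DEFAULTS.items():
        value = int(env.get(key, default))
        if value > broker_count:
            problems.append(f"{key}={value} exceeds {broker_count} broker(s)")
    return problems

print(check_single_broker_env({}))                          # all three defaults fail
print(check_single_broker_env({k: "1" for k in DEFAULTS}))  # []
```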

Wrong: Mixing KRaft and ZooKeeper config

# BAD -- cannot use both modes simultaneously
environment:
  KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  KAFKA_PROCESS_ROLES: broker,controller

Correct: Choose one mode

# GOOD (KRaft)
environment:
  KAFKA_NODE_ID: 1
  KAFKA_PROCESS_ROLES: broker,controller
  KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
  CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk

Wrong: No health check or dependency ordering

# BAD -- Schema Registry starts before Kafka is ready
services:
  schema-registry:
    image: confluentinc/cp-schema-registry:7.7.1
    # No depends_on, no health check

Correct: Use depends_on with health checks

# GOOD -- wait for Kafka to be healthy
services:
  kafka:
    healthcheck:
      test: kafka-topics --bootstrap-server localhost:9092 --list
      interval: 10s
      timeout: 5s
      retries: 5
  schema-registry:
    depends_on:
      kafka:
        condition: service_healthy

Common Pitfalls

Diagnostic Commands

# Check if Kafka broker is running and responsive
docker compose exec kafka kafka-broker-api-versions --bootstrap-server localhost:9092

# List all topics
docker compose exec kafka kafka-topics --bootstrap-server localhost:9092 --list

# Describe a specific topic (partitions, replicas, ISR)
docker compose exec kafka kafka-topics --bootstrap-server localhost:9092 --describe --topic test-topic

# Check consumer group lag
docker compose exec kafka kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group my-group

# View broker logs for errors
docker compose logs kafka --tail 50

# Test Schema Registry connectivity
curl -s http://localhost:8081/subjects | python3 -m json.tool

# Check ZooKeeper status from the host via the published port (ZooKeeper mode only;
# "ruok" must be allowed by 4lw.commands.whitelist)
echo ruok | nc localhost 2181

# Check Kafka container resource usage
docker stats kafka --no-stream

# Verify advertised listeners from client perspective
docker compose exec kafka kafka-configs --bootstrap-server localhost:9092 --entity-type brokers --entity-name 1 --describe --all | grep advertised
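A first-pass liveness check from the host needs nothing Kafka-specific: just confirm each published port accepts TCP connections. A sketch assuming the default port mappings from the compose files above:

```python
import socket

# Host-facing ports from the compose files above (adjust if you remapped them)
PORTS = {"kafka": 9092, "schema-registry": 8081, "kafka-ui": 8080}

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within timeout seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in PORTS.items():
    state = "open" if port_open("localhost", port) else "closed"
    print(f"{name:16} localhost:{port} -> {state}")
```

An open port only proves the listener is up, not that the broker is healthy; follow up with `kafka-broker-api-versions` from the list above.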

Version History & Compatibility

| Kafka Version | ZooKeeper Support | KRaft Status | Docker Image | Notes |
|---|---|---|---|---|
| 4.0+ | Removed | Required (only mode) | apache/kafka:4.0.0 | ZooKeeper code fully removed |
| 3.9.x | Deprecated | Production-ready | apache/kafka:3.9.1, cp-kafka:7.9.x | Last release line with ZooKeeper support |
| 3.7-3.8 | Deprecated | Production-ready | apache/kafka:3.8.0, cp-kafka:7.7.1 | KRaft recommended for new clusters |
| 3.3-3.6 | Supported | Production-ready | cp-kafka:7.3-7.6 | KRaft GA since 3.3.1 |
| 3.0-3.2 | Supported | Preview | cp-kafka:7.0-7.2 | KRaft not recommended for production |
| 2.x | Required | Not available | cp-kafka:6.x | Must use ZooKeeper |

Confluent Platform to Kafka version mapping: CP 7.7.x = Kafka 3.7.x, CP 7.8.x = Kafka 3.8.x. [src6]
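The CP 7.y → Kafka 3.y pattern stated above can be captured in a tiny helper. A sketch that encodes only the mapping given here, not every CP release (`cp_to_kafka` is an illustrative name):

```python
def cp_to_kafka(cp_version: str) -> str:
    """Map a Confluent Platform 7.y release line to its bundled Kafka 3.y line."""
    major, minor = cp_version.split(".")[:2]
    if major != "7":
        raise ValueError(f"mapping only covers CP 7.x, got {cp_version!r}")
    return f"3.{minor}"

print(cp_to_kafka("7.7.1"))  # 3.7
print(cp_to_kafka("7.8.0"))  # 3.8
```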

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|---|---|---|
| Local development and testing | Production with 99.99% SLA requirements | Managed Kafka (Confluent Cloud, AWS MSK) |
| CI/CD pipeline integration testing | Multi-datacenter replication needed | Kubernetes + Strimzi/Confluent Operator |
| Learning Kafka architecture | Need auto-scaling based on load | Serverless Kafka (Confluent, Redpanda) |
| Prototyping event-driven architectures | Security compliance requires audit logging | Enterprise Confluent Platform |
| Docker-based staging environments | Team lacks Docker/container expertise | Native Kafka install or managed service |

Important Caveats

Related Units