Set vm.max_map_count=262144 on the host, then run docker compose up -d with a properly configured docker-compose.yml pinning all services to the same 8.17.0 (or latest 8.x) tag. Forgetting ELASTIC_PASSWORD and bootstrap user passwords causes startup failures and locked-out clusters. All images are pulled from docker.elastic.co.

- Set vm.max_map_count=262144 on the Docker host before starting Elasticsearch -- the container will crash without it
- Pin an explicit version; do not use a latest Docker tag -- Elastic does not publish a latest tag
- Set ES_JAVA_OPTS -Xms and -Xmx to equal values, max 50% of available RAM, never exceeding 31 GB
- Do not use the logstash_system user for pipeline output -- it lacks index write permissions; create a dedicated logstash_internal user
- Set ELASTIC_PASSWORD and configure user passwords

| Service | Image | Ports | Volumes | Key Env |
|---|---|---|---|---|
| Elasticsearch | docker.elastic.co/elasticsearch/elasticsearch:8.17.0 | 9200:9200 (API), 9300:9300 (transport) | es-data:/usr/share/elasticsearch/data | discovery.type=single-node, ES_JAVA_OPTS=-Xms512m -Xmx512m (50% of the 1 GB limit), ELASTIC_PASSWORD, xpack.security.enabled=true |
| Logstash | docker.elastic.co/logstash/logstash:8.17.0 | 5044:5044 (Beats), 5000:5000/tcp (TCP input), 9600:9600 (monitoring) | ./logstash/pipeline:/usr/share/logstash/pipeline:ro, ./logstash/config/logstash.yml | LS_JAVA_OPTS=-Xms256m -Xmx256m, LOGSTASH_INTERNAL_PASSWORD |
| Kibana | docker.elastic.co/kibana/kibana:8.17.0 | 5601:5601 | ./kibana/config/kibana.yml | KIBANA_SYSTEM_PASSWORD, ELASTICSEARCH_HOSTS=https://elasticsearch:9200 |
| Setup (init) | docker.elastic.co/elasticsearch/elasticsearch:8.17.0 | none | es-certs:/usr/share/elasticsearch/config/certs | ELASTIC_PASSWORD, KIBANA_PASSWORD, LOGSTASH_PASSWORD |
| Variable | Default | Purpose |
|---|---|---|
| ELASTIC_VERSION | 8.17.0 | Stack version for all images |
| ELASTIC_PASSWORD | changeme | Superuser password |
| KIBANA_SYSTEM_PASSWORD | changeme | Kibana service account password |
| LOGSTASH_INTERNAL_PASSWORD | changeme | Logstash pipeline output password |
| ES_MEM_LIMIT | 1073741824 (1 GB) | Elasticsearch container memory limit |
| KB_MEM_LIMIT | 1073741824 (1 GB) | Kibana container memory limit |
| LS_MEM_LIMIT | 1073741824 (1 GB) | Logstash container memory limit |
START: What is your deployment scenario?
├── Development/testing on a single machine?
│   ├── YES → Use single-node mode (discovery.type=single-node)
│   │   ├── Need quick prototyping?
│   │   │   ├── YES → Disable security (xpack.security.enabled=false) -- DEV ONLY
│   │   │   └── NO → Keep security on, set ELASTIC_PASSWORD
│   └── NO (production) →
│       ├── Data volume < 100 GB/day?
│       │   ├── YES → Single-node ES is sufficient, enable security + TLS
│       │   └── NO → Multi-node cluster (3+ ES nodes)
│       │       ├── Need high availability?
│       │       │   ├── YES → 3 master-eligible + 2 data nodes minimum
│       │       │   └── NO → 3 combined master+data nodes
│       └── Need TLS between all components?
│           ├── YES → Use setup container with elasticsearch-certutil (see Step 2)
│           └── NO → Basic authentication only (passwords, no TLS)
Elasticsearch requires elevated mmap limits. This MUST be done on the Docker host, not inside the container. [src2]
# Linux: set vm.max_map_count (required for Elasticsearch)
sudo sysctl -w vm.max_map_count=262144
# Make persistent across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
# Windows (WSL2): run in PowerShell as admin
wsl -d docker-desktop sh -c "sysctl -w vm.max_map_count=262144"
Verify: sysctl vm.max_map_count → expected: vm.max_map_count = 262144
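The check above can be scripted for repeatability; a small preflight sketch (Linux-only, reads /proc directly, message wording is an assumption):

```shell
# Preflight: verify the host meets the Elasticsearch mmap minimum
required=262144
current=$(cat /proc/sys/vm/max_map_count 2>/dev/null || echo 0)
if [ "$current" -ge "$required" ]; then
  echo "vm.max_map_count OK ($current)"
else
  echo "vm.max_map_count too low ($current), run: sudo sysctl -w vm.max_map_count=$required"
fi
```

Dropping this at the top of a bootstrap script fails fast instead of letting the Elasticsearch container crash-loop.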
Set up the directory structure with configuration files for each service. [src4]
mkdir -p docker-elk/{elasticsearch,logstash/pipeline,logstash/config,kibana/config}
cat > docker-elk/.env << 'EOF'
ELASTIC_VERSION=8.17.0
ELASTIC_PASSWORD=changeme
KIBANA_SYSTEM_PASSWORD=changeme
LOGSTASH_INTERNAL_PASSWORD=changeme
ES_MEM_LIMIT=1073741824
KB_MEM_LIMIT=1073741824
LS_MEM_LIMIT=1073741824
CLUSTER_NAME=docker-elk
LICENSE=basic
EOF
Verify: cat docker-elk/.env → all variables should be set
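Since every password defaults to changeme, a quick guard (a sketch; the .env path is assumed relative to the working directory) catches leftover defaults before the stack is promoted beyond development:

```shell
# Refuse to proceed if any default "changeme" password is still set
env_file="docker-elk/.env"
if [ -f "$env_file" ] && grep -q "changeme" "$env_file"; then
  echo "WARNING: default passwords found in $env_file -- change them before production"
else
  echo "no default passwords detected"
fi
```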
This is the core configuration file that defines all ELK services. [src1]
Full script: docker-compose.yml (89 lines)
Verify: docker compose config → validates the Compose file without starting services
The Logstash pipeline defines input sources, processing filters, and the Elasticsearch output. [src3]
Full script: logstash.conf (44 lines)
Verify: docker compose exec logstash logstash --config.test_and_exit → validates pipeline syntax
Configure Kibana to connect to Elasticsearch with authentication. [src5]
# kibana/config/kibana.yml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]  # use https:// if TLS was configured via the setup container
elasticsearch.username: "kibana_system"
elasticsearch.password: "${KIBANA_SYSTEM_PASSWORD}"
monitoring.ui.container.elasticsearch.enabled: true
Verify: curl -s http://localhost:5601/api/status | jq .status.overall.level → "available"
Bootstrap the stack by starting Elasticsearch first, then initializing service account passwords. [src4]
cd docker-elk
docker compose up -d

# Wait for Elasticsearch to be healthy
until curl -s -u elastic:changeme http://localhost:9200/_cluster/health | grep -q '"status"'; do
  echo "Waiting for Elasticsearch..."; sleep 5
done

# Set kibana_system password
curl -s -X POST -u elastic:changeme \
  http://localhost:9200/_security/user/kibana_system/_password \
  -H "Content-Type: application/json" \
  -d '{"password":"changeme"}'
Verify: docker compose ps → all containers should show "healthy" or "running"
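The Logstash output shown later authenticates as logstash_internal, which does not exist by default. A sketch of creating the role and user via the security API (the role name, index patterns, and cluster privileges are assumptions consistent with the pitfall notes in this guide):

```shell
# Create a logstash_writer role with index write privileges (names assumed)
curl -s -X POST -u elastic:changeme \
  http://localhost:9200/_security/role/logstash_writer \
  -H "Content-Type: application/json" \
  -d '{
    "cluster": ["manage_index_templates", "monitor"],
    "indices": [{
      "names": ["logstash-*", "logs-*"],
      "privileges": ["write", "create_index", "manage"]
    }]
  }'

# Create the logstash_internal user bound to that role
curl -s -X POST -u elastic:changeme \
  http://localhost:9200/_security/user/logstash_internal \
  -H "Content-Type: application/json" \
  -d '{"password": "changeme", "roles": ["logstash_writer"]}'
```

Run this after Elasticsearch is healthy but before starting Logstash, or the pipeline output will fail authentication.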
Full script: docker-compose.yml (89 lines)
# docker-compose.yml -- ELK Stack 8.x single-node development
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - xpack.security.enabled=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - es-data:/usr/share/elasticsearch/data
    ports:
      - "127.0.0.1:9200:9200"
      - "127.0.0.1:9300:9300"
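The excerpt above omits startup ordering. A healthcheck plus depends_on condition (a sketch; the interval and retry values are assumptions) lets Compose gate the other services on a responsive Elasticsearch instead of relying only on the manual wait loop:

```yaml
services:
  elasticsearch:
    healthcheck:
      test: ["CMD-SHELL", "curl -s -u elastic:${ELASTIC_PASSWORD} http://localhost:9200/_cluster/health | grep -q status"]
      interval: 10s
      timeout: 10s
      retries: 12
  kibana:
    depends_on:
      elasticsearch:
        condition: service_healthy
  logstash:
    depends_on:
      elasticsearch:
        condition: service_healthy
```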
Full script: logstash.conf (44 lines)
# logstash/pipeline/logstash.conf
input {
  beats { port => 5044 }
  tcp { port => 5000 codec => json_lines }
  syslog { port => 5140 }
}
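The full 44-line logstash.conf also carries a filter stage between input and output; a minimal sketch (the level and timestamp field names are assumptions about the incoming events, not part of the shipped pipeline):

```
filter {
  # Parse an ISO8601 timestamp field into @timestamp (field name assumed)
  date {
    match => ["timestamp", "ISO8601"]
    target => "@timestamp"
  }
  # Normalize the log level for keyword aggregation (no-op if absent)
  mutate { uppercase => ["level"] }
}
```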
curl -s -X PUT -u elastic:changeme \
  http://localhost:9200/_index_template/logs-template \
  -H "Content-Type: application/json" \
  -d '{
    "index_patterns": ["logs-*"],
    "template": {
      "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0,
        "index.lifecycle.name": "logs-policy"
      },
      "mappings": {
        "properties": {
          "@timestamp": { "type": "date" },
          "message": { "type": "text" },
          "level": { "type": "keyword" },
          "service": { "type": "keyword" }
        }
      }
    }
  }'
curl -s -X PUT -u elastic:changeme \
  http://localhost:9200/_ilm/policy/logs-policy \
  -H "Content-Type: application/json" \
  -d '{
    "policy": {
      "phases": {
        "hot": { "actions": { "rollover": { "max_primary_shard_size": "50gb", "max_age": "30d" } } },
        "warm": { "min_age": "30d", "actions": { "shrink": { "number_of_shards": 1 } } },
        "delete": { "min_age": "90d", "actions": { "delete": {} } }
      }
    }
  }'
# BAD -- data lost when container restarts
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.17.0
    # No volumes defined -- all indices lost on restart

# GOOD -- data survives container restarts and upgrades
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.17.0
    volumes:
      - es-data:/usr/share/elasticsearch/data

volumes:
  es-data:
    driver: local
# BAD -- logstash_system is monitoring-only, cannot write indices
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "logstash_system"
    password => "${LOGSTASH_PASSWORD}"
  }
}

# GOOD -- logstash_internal has logstash_writer role with index write
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "logstash_internal"
    password => "${LOGSTASH_INTERNAL_PASSWORD}"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
# BAD -- mixing 8.17.0 and 8.15.0 causes version incompatibility
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.17.0
  logstash:
    image: docker.elastic.co/logstash/logstash:8.15.0
  kibana:
    image: docker.elastic.co/kibana/kibana:8.16.0

# GOOD -- all services pinned via .env variable
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
  logstash:
    image: docker.elastic.co/logstash/logstash:${ELASTIC_VERSION}
  kibana:
    image: docker.elastic.co/kibana/kibana:${ELASTIC_VERSION}
# .env: ELASTIC_VERSION=8.17.0

# BAD -- Xms != Xmx causes heap resizing overhead and GC pauses
environment:
  - "ES_JAVA_OPTS=-Xms256m -Xmx2g"

# GOOD -- equal Xms/Xmx eliminates heap resizing, 50% of container memory
environment:
  - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
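The 50% rule can be computed mechanically. A sketch deriving ES_JAVA_OPTS from a memory limit in bytes (the 31 GB cap keeps the JVM under the compressed object pointers threshold; the variable names are illustrative):

```shell
# Derive -Xms/-Xmx as half the container memory limit, capped below 31 GB
mem_limit_bytes=1073741824          # example: ES_MEM_LIMIT from .env (1 GB)
heap_mb=$(( mem_limit_bytes / 2 / 1024 / 1024 ))
cap_mb=31744                        # 31 GB in MB
if [ "$heap_mb" -gt "$cap_mb" ]; then heap_mb=$cap_mb; fi
echo "ES_JAVA_OPTS=-Xms${heap_mb}m -Xmx${heap_mb}m"
# → ES_JAVA_OPTS=-Xms512m -Xmx512m
```

For the default 1 GB limit this reproduces the 512m values used in the docker-compose.yml excerpt.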
| Symptom | Fix |
|---|---|
| max virtual memory areas vm.max_map_count [65530] is too low | sudo sysctl -w vm.max_map_count=262144 on the Docker host [src2] |
| Elasticsearch container killed (out of memory) | Set mem_limit to at least 1 GB and ES_JAVA_OPTS to 50% of that [src2] |
| Kibana cannot authenticate: kibana_system password not set | Use the _security/user/kibana_system/_password API after ES starts [src4] |
| Logstash cannot write to indices | Create a logstash_writer role with write, create_index, and manage privileges [src3] |
| Elasticsearch API exposed beyond the host | Bind published ports to localhost: 127.0.0.1:9200:9200 [src1] |
| Permission denied on /usr/share/elasticsearch/data | Ensure the volume directory is owned by UID 1000 (chown -R 1000:1000 ./es-data) [src2] |
| Pipeline edits not picked up | Set config.reload.automatic: true in logstash.yml [src3] |
| Kibana/Logstash start before Elasticsearch is ready | Use depends_on with a health check on Elasticsearch and wait [src6] |

# Check Elasticsearch cluster health
curl -s -u elastic:changeme http://localhost:9200/_cluster/health?pretty
# Check Elasticsearch node stats (memory, disk, CPU)
curl -s -u elastic:changeme http://localhost:9200/_nodes/stats?pretty | jq '.nodes[].os'
# Check Logstash pipeline status
curl -s http://localhost:9600/_node/stats/pipelines?pretty
# Verify Logstash can reach Elasticsearch
docker compose exec logstash curl -s -u logstash_internal:changeme http://elasticsearch:9200
# Check Kibana status
curl -s http://localhost:5601/api/status | jq '.status.overall'
# View Elasticsearch container logs
docker compose logs elasticsearch --tail=50
# View Logstash pipeline errors
docker compose logs logstash --tail=50 | grep -i error
# Check vm.max_map_count on the host
sysctl vm.max_map_count
# List all Elasticsearch indices
curl -s -u elastic:changeme http://localhost:9200/_cat/indices?v
# Test Logstash TCP input
echo '{"message":"test log","level":"info"}' | nc localhost 5000
# Check Elasticsearch disk usage
curl -s -u elastic:changeme http://localhost:9200/_cat/allocation?v
| Version | Status | Breaking Changes | Migration Notes |
|---|---|---|---|
| 8.17.x | Current (Jan 2025) | None | Recommended for new deployments |
| 8.15.x | Supported | None | Standard upgrade path |
| 8.0.x | Supported (baseline) | Security enabled by default, TLS auto-configured | Requires password bootstrap on first start |
| 7.17.x | Maintenance | N/A (last 7.x) | Upgrade to 8.x: enable security, update env vars, re-index if needed |
| 7.x → 8.x | Migration | xpack.security.enabled=true by default, enrollment tokens | Run upgrade assistant in Kibana 7.17 first |
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Centralized log aggregation for <50 GB/day | Ingesting >1 TB/day requiring dedicated hardware | Bare-metal Elastic cluster with Ansible/Terraform |
| Development and testing of log pipelines | Need a managed service with SLA | Elastic Cloud, AWS OpenSearch Service |
| Self-hosted observability on a single server | Only need metrics (no logs) | Prometheus + Grafana stack |
| Air-gapped or on-premises deployment | Need real-time streaming analytics | Apache Kafka + Flink + Elasticsearch |
| Prototyping dashboards before production | Need APM and distributed tracing only | Jaeger or Zipkin with Elasticsearch backend |
- Generate TLS certificates with the elasticsearch-certutil tool in a setup container
- The elastic superuser should NOT be used in production applications -- create dedicated users with minimum required privileges
- Use docker compose down (not kill) for clean shutdown