Docker Compose Reference: MinIO (S3-Compatible Object Storage)

Type: Software Reference | Confidence: 0.92 | Sources: 7 | Verified: 2026-02-27 | Freshness: 2026-02-27

TL;DR

Constraints

Quick Reference

Service Configuration Summary

| Service | Image | Ports | Volumes | Key Env |
|---|---|---|---|---|
| minio (standalone) | quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z | 9000:9000, 9001:9001 | minio_data:/data | MINIO_ROOT_USER, MINIO_ROOT_PASSWORD |
| minio1-4 (distributed) | quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z | (internal) | data{n}-1:/data1, data{n}-2:/data2 | MINIO_ROOT_USER, MINIO_ROOT_PASSWORD |
| nginx (load balancer) | nginx:1.27-alpine | 9000:9000, 9001:9001 | ./nginx.conf:/etc/nginx/nginx.conf:ro | -- |

Environment Variables

| Variable | Required | Default | Description |
|---|---|---|---|
| MINIO_ROOT_USER | Yes | minioadmin | Root access key (admin username) |
| MINIO_ROOT_PASSWORD | Yes | minioadmin | Root secret key (min 8 chars) |
| MINIO_ROOT_USER_FILE | No | -- | Docker secret path for access key |
| MINIO_ROOT_PASSWORD_FILE | No | -- | Docker secret path for secret key |
| MINIO_BROWSER | No | on | Enable/disable web console |
| MINIO_BROWSER_REDIRECT_URL | No | -- | External URL for console (behind proxy) |
| MINIO_SERVER_URL | No | -- | External URL for S3 API (behind proxy) |
| MINIO_NOTIFY_WEBHOOK_ENABLE_<target> | No | off | Enable webhook notifications |
| MINIO_NOTIFY_WEBHOOK_ENDPOINT_<target> | No | -- | Webhook receiver URL |
| MINIO_DOMAIN | No | -- | Enable virtual-hosted-style bucket access |
| MINIO_REGION_NAME | No | us-east-1 | Region name for S3 compatibility |

Key Endpoints

| Endpoint | Port | Purpose |
|---|---|---|
| http://localhost:9000 | 9000 | S3 API (GET/PUT/DELETE objects, list buckets) |
| http://localhost:9001 | 9001 | MinIO Console (web UI) |
| http://localhost:9000/minio/health/live | 9000 | Liveness health check |
| http://localhost:9000/minio/health/cluster | 9000 | Cluster health check (distributed mode) |

Decision Tree

START: What deployment mode do you need?
├── Development/testing with single node?
│   ├── YES → Use Standalone docker-compose.yml (Step 1)
│   └── NO ↓
├── Production with data redundancy?
│   ├── YES → Use Distributed docker-compose.yml with 4 nodes + nginx (Step 2)
│   └── NO ↓
├── Need bucket event notifications?
│   ├── YES → Add webhook env vars + mc event configuration (Step 5)
│   └── NO ↓
├── Need TLS encryption?
│   ├── YES → Mount certs to /root/.minio/certs/ or terminate TLS at reverse proxy
│   └── NO ↓
└── DEFAULT → Start with Standalone, migrate to Distributed when needed

Step-by-Step Guide

1. Deploy MinIO standalone (development)

Create a minimal docker-compose.yml for local development and testing. This runs a single MinIO server with persistent storage. [src2]

# docker-compose.yml -- MinIO Standalone (development only)
services:
  minio:
    image: quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z
    container_name: minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin123
    command: server /data --console-address ":9001"
    volumes:
      - minio_data:/data
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  minio_data:

Verify: curl -s http://localhost:9000/minio/health/live → returns HTTP 200

2. Deploy MinIO distributed (production)

The official 4-node distributed setup uses erasure coding for data protection. Each node contributes 2 drives, forming an 8-drive erasure set that, with the default EC:4 parity, tolerates up to 4 drive failures for reads (3 for writes). [src1, src3]
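The failure-tolerance numbers follow from MinIO's documented quorum rules. A small illustrative helper (not a MinIO API, just the arithmetic):

```python
def quorums(drives: int, parity: int) -> tuple[int, int]:
    """Read/write quorum for one erasure set, per MinIO's documented rules.

    Reads need only the data shards (drives - parity). Writes need the
    same, except when parity == drives/2, where write quorum is bumped
    to parity + 1 to avoid split-brain between two equal halves.
    """
    read_quorum = drives - parity
    if parity == drives // 2:
        write_quorum = parity + 1
    else:
        write_quorum = drives - parity
    return read_quorum, write_quorum


# 4 nodes x 2 drives = 8-drive erasure set; default parity for 8+ drives is EC:4
read_q, write_q = quorums(drives=8, parity=4)
print(read_q, write_q)  # reads need 4 drives, writes need 5
```

So the 8-drive set stays readable with 4 drives lost, but stops accepting writes after the 4th failure (8 - 5 = 3 tolerated for writes).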

# docker-compose.yml -- MinIO Distributed (4 nodes, erasure coding)
x-minio-common: &minio-common
  image: quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z
  command: server --console-address ":9001" http://minio{1...4}/data{1...2}
  expose:
    - "9000"
    - "9001"
  environment:
    MINIO_ROOT_USER: minioadmin
    MINIO_ROOT_PASSWORD: strongpassword123
  healthcheck:
    test: ["CMD", "mc", "ready", "local"]
    interval: 5s
    timeout: 5s
    retries: 5

services:
  minio1:
    <<: *minio-common
    hostname: minio1
    volumes:
      - data1-1:/data1
      - data1-2:/data2
  minio2:
    <<: *minio-common
    hostname: minio2
    volumes:
      - data2-1:/data1
      - data2-2:/data2
  minio3:
    <<: *minio-common
    hostname: minio3
    volumes:
      - data3-1:/data1
      - data3-2:/data2
  minio4:
    <<: *minio-common
    hostname: minio4
    volumes:
      - data4-1:/data1
      - data4-2:/data2
  nginx:
    image: nginx:1.27-alpine
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4

Verify: curl -s http://localhost:9000/minio/health/cluster → returns HTTP 200
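The compose file above mounts ./nginx.conf but does not show it. A minimal sketch that balances both the S3 API and the console across the four nodes (illustrative; adapt timeouts, TLS, and headers to your environment, and see the fuller version shipped in MinIO's official orchestration examples):

```nginx
# nginx.conf -- minimal load balancer for the 4-node MinIO cluster
events {
    worker_connections 1024;
}
http {
    upstream minio_s3 {
        least_conn;
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
    }
    upstream minio_console {
        least_conn;
        server minio1:9001;
        server minio2:9001;
        server minio3:9001;
        server minio4:9001;
    }
    server {
        listen 9000;
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            client_max_body_size 0;   # do not cap object upload size
            proxy_pass http://minio_s3;
        }
    }
    server {
        listen 9001;
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header Upgrade $http_upgrade;   # console uses websockets
            proxy_set_header Connection "upgrade";
            proxy_pass http://minio_console;
        }
    }
}
```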

3. Configure MinIO Client (mc)

Install and configure the MinIO Client to manage buckets, users, and policies. [src5]

# Configure alias pointing to your MinIO instance
mc alias set myminio http://localhost:9000 minioadmin minioadmin123

# Create a bucket
mc mb myminio/my-bucket

# Upload a file
mc cp myfile.txt myminio/my-bucket/

# Set bucket policy to public read
mc anonymous set download myminio/my-bucket

Verify: mc admin info myminio → shows server status, disks, uptime

4. Create users and apply bucket policies

Set up access control with users and JSON policies for fine-grained permissions. [src5]

# Create a new user
mc admin user add myminio appuser appuser-secret-key

# Apply a custom policy and attach to user
mc admin policy create myminio app-readwrite /tmp/app-policy.json
mc admin policy attach myminio app-readwrite --user appuser

Verify: mc admin user info myminio appuser → shows attached policy
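Step 4 references /tmp/app-policy.json without showing its contents. A minimal sketch granting read/write on a single bucket (the bucket name my-bucket is illustrative; MinIO policies use the standard S3/IAM policy grammar):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::my-bucket/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-bucket"]
    }
  ]
}
```

Note that object-level actions target `my-bucket/*` while `s3:ListBucket` targets the bucket ARN itself; mixing these up is a common reason a policy silently fails.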

5. Configure bucket event notifications

Set up webhook notifications for object events (create, delete, access). [src7]

# Add webhook target to MinIO
mc admin config set myminio notify_webhook:primary \
  endpoint="http://webhook-receiver:8080/events"

# Restart and enable events on a bucket
mc admin service restart myminio
mc event add myminio/my-bucket arn:minio:sqs::primary:webhook \
  --event put,get,delete

Verify: Upload a file and check webhook receiver logs for the event payload.
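The webhook receiver gets one JSON document per event, in the S3 event notification format. A stdlib-only sketch of pulling out the useful fields; the sample payload below is hand-written for illustration, not captured from a live server:

```python
import json

# Illustrative MinIO webhook payload (S3 event notification format)
payload = json.dumps({
    "EventName": "s3:ObjectCreated:Put",
    "Key": "my-bucket/remote-file.txt",
    "Records": [{
        "eventName": "s3:ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "my-bucket"},
            "object": {"key": "remote-file.txt", "size": 1024},
        },
    }],
})


def parse_minio_event(body: str) -> list[dict]:
    """Extract (event, bucket, key) triples from a webhook POST body."""
    event = json.loads(body)
    return [
        {
            "event": rec["eventName"],
            "bucket": rec["s3"]["bucket"]["name"],
            "key": rec["s3"]["object"]["key"],
        }
        for rec in event.get("Records", [])
    ]


for item in parse_minio_event(payload):
    print(item)
```

In a real receiver this function would sit behind the HTTP handler for the endpoint configured in `MINIO_NOTIFY_WEBHOOK_ENDPOINT_<target>`.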

Code Examples

Python (boto3): Connect to MinIO and Upload Files

import boto3  # pip install boto3>=1.35.0
from botocore.config import Config

# Input:  MinIO endpoint, credentials, file to upload
# Output: Uploaded object in MinIO bucket

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin123",
    config=Config(signature_version="s3v4"),
    region_name="us-east-1",
)

# Create bucket (ignore if exists)
try:
    s3.create_bucket(Bucket="my-bucket")
except s3.exceptions.BucketAlreadyOwnedByYou:
    pass

# Upload file
s3.upload_file("local-file.txt", "my-bucket", "remote-file.txt")

JavaScript (AWS SDK v3): Upload and Download Objects

// npm install @aws-sdk/client-s3@^3.500.0
const { S3Client, PutObjectCommand, GetObjectCommand }
  = require("@aws-sdk/client-s3");

// Input:  MinIO endpoint, credentials, file buffer
// Output: Object stored/retrieved from MinIO bucket

const s3 = new S3Client({
  endpoint: "http://localhost:9000",
  region: "us-east-1",
  credentials: {
    accessKeyId: "minioadmin",
    secretAccessKey: "minioadmin123",
  },
  forcePathStyle: true,  // Required for MinIO
});

Go (minio-go): Bucket Operations

// go get github.com/minio/minio-go/v7@latest
package main

import (
    "context"
    "log"
    "github.com/minio/minio-go/v7"
    "github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
    client, err := minio.New("localhost:9000", &minio.Options{
        Creds:  credentials.NewStaticV4("minioadmin", "minioadmin123", ""),
        Secure: false, // plain HTTP for local development
    })
    if err != nil {
        log.Fatalln(err)
    }
    ctx := context.Background()
    if err := client.MakeBucket(ctx, "my-bucket", minio.MakeBucketOptions{}); err != nil {
        log.Fatalln(err) // e.g. bucket already exists or server unreachable
    }
}

Anti-Patterns

Wrong: Using default credentials in production

# BAD -- default credentials are publicly known
services:
  minio:
    image: quay.io/minio/minio
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin

Correct: Use strong credentials via .env or Docker secrets

# GOOD -- credentials from .env file, never committed to VCS
services:
  minio:
    image: quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}

Wrong: Missing --console-address flag

# BAD -- console binds to random port, inaccessible
command: server /data
ports:
  - "9000:9000"
  - "9001:9001"

Correct: Explicitly set console address

# GOOD -- console reliably available on port 9001
command: server /data --console-address ":9001"
ports:
  - "9000:9000"
  - "9001:9001"

Wrong: Using :latest tag

# BAD -- :latest is mutable, can break on update
image: quay.io/minio/minio:latest

Correct: Pin to specific release

# GOOD -- immutable tag, reproducible deployments
image: quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z

Wrong: Using standalone for production data

# BAD -- no redundancy, single point of failure
command: server /data --console-address ":9001"

Correct: Distributed mode for production

# GOOD -- 4-node cluster with erasure coding
command: server --console-address ":9001" http://minio{1...4}/data{1...2}

Common Pitfalls

Diagnostic Commands

# Check MinIO is running and healthy
curl -sf http://localhost:9000/minio/health/live && echo "OK" || echo "FAIL"

# Check cluster health (distributed mode)
curl -sf http://localhost:9000/minio/health/cluster && echo "OK" || echo "FAIL"

# View MinIO server info (disk, uptime, version)
mc admin info myminio

# Check container logs for errors
docker compose logs minio --tail=50

# Verify bucket exists and list contents
mc ls myminio/my-bucket/

# Check disk usage per bucket
mc du myminio/my-bucket/

# View configured event notifications
mc event ls myminio/my-bucket

# Check MinIO server configuration
mc admin config get myminio

Version History & Compatibility

| Release | Status | Breaking Changes | Migration Notes |
|---|---|---|---|
| RELEASE.2025-10-15 | Security fix | None | Fixes privilege escalation CVE; apply immediately |
| RELEASE.2025-09-06 | Recommended | None | Stable release, recommended baseline |
| RELEASE.2025-05-24 | Breaking | Admin console removed from CE; boringcrypto removed | Use mc CLI or separate object-browser; use GOFIPS env |
| RELEASE.2024-xx | Previous | None | Standard upgrade path; data format compatible |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|---|---|---|
| Local S3-compatible development/testing | Need managed cloud storage with SLA | AWS S3, GCS, Azure Blob |
| Self-hosted object storage for CI/CD artifacts | Storing <1 GB in a simple project | Docker volumes or bind mounts |
| S3 API-compatible backup target | Need full data lake with analytics | Delta Lake, Apache Iceberg on S3 |
| Air-gapped or on-premise deployments | Need POSIX filesystem semantics | NFS, GlusterFS, or Ceph |

Important Caveats

Related Units