- Quick start: docker compose up -d with the quay.io/minio/minio image and the server /data --console-address ":9001" command.
- Omitting --console-address ":9001" causes the console to bind to a random port.
- Never use :latest in production; pin an immutable release tag.
- The admin console was removed from Community Edition; manage via the mc CLI or the separate object-browser image.

| Service | Image | Ports | Volumes | Key Env |
|---|---|---|---|---|
| minio (standalone) | quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z | 9000:9000, 9001:9001 | minio_data:/data | MINIO_ROOT_USER, MINIO_ROOT_PASSWORD |
| minio1-4 (distributed) | quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z | (internal) | data{n}-1:/data1, data{n}-2:/data2 | MINIO_ROOT_USER, MINIO_ROOT_PASSWORD |
| nginx (load balancer) | nginx:1.27-alpine | 9000:9000, 9001:9001 | ./nginx.conf:/etc/nginx/nginx.conf:ro | -- |
| Variable | Required | Default | Description |
|---|---|---|---|
| MINIO_ROOT_USER | Yes | minioadmin | Root access key (admin username) |
| MINIO_ROOT_PASSWORD | Yes | minioadmin | Root secret key (min 8 chars) |
| MINIO_ROOT_USER_FILE | No | -- | Docker secret path for access key |
| MINIO_ROOT_PASSWORD_FILE | No | -- | Docker secret path for secret key |
| MINIO_BROWSER | No | on | Enable/disable web console |
| MINIO_BROWSER_REDIRECT_URL | No | -- | External URL for console (behind proxy) |
| MINIO_SERVER_URL | No | -- | External URL for S3 API (behind proxy) |
| MINIO_NOTIFY_WEBHOOK_ENABLE_<target> | No | off | Enable webhook notifications |
| MINIO_NOTIFY_WEBHOOK_ENDPOINT_<target> | No | -- | Webhook receiver URL |
| MINIO_DOMAIN | No | -- | Enable virtual-hosted-style bucket access |
| MINIO_REGION_NAME | No | us-east-1 | Region name for S3 compatibility |
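The *_FILE variables can be wired to Docker secrets so credentials never appear in the compose file or environment. A minimal sketch, assuming secrets stored in local files (the secret names and paths are illustrative):

```yaml
services:
  minio:
    image: quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z
    command: server /data --console-address ":9001"
    environment:
      # MinIO reads the credential values from these file paths
      MINIO_ROOT_USER_FILE: /run/secrets/minio_root_user
      MINIO_ROOT_PASSWORD_FILE: /run/secrets/minio_root_password
    secrets:
      - minio_root_user
      - minio_root_password

secrets:
  minio_root_user:
    file: ./secrets/minio_root_user.txt    # illustrative path, keep out of VCS
  minio_root_password:
    file: ./secrets/minio_root_password.txt
```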
| Endpoint | Port | Purpose |
|---|---|---|
| http://localhost:9000 | 9000 | S3 API (GET/PUT/DELETE objects, list buckets) |
| http://localhost:9001 | 9001 | MinIO Console (web UI) |
| http://localhost:9000/minio/health/live | 9000 | Liveness health check |
| http://localhost:9000/minio/health/cluster | 9000 | Cluster health check (distributed mode) |
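The health endpoints above can be probed from scripts without any SDK. A stdlib-only sketch (function names are illustrative, the URLs match the table):

```python
import urllib.request
import urllib.error

def health_url(host="localhost", port=9000, cluster=False):
    """Build a MinIO health endpoint URL (cluster=True for distributed mode)."""
    path = "cluster" if cluster else "live"
    return f"http://{host}:{port}/minio/health/{path}"

def is_healthy(url, timeout=2):
    """Return True if the endpoint answers HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print(health_url())  # http://localhost:9000/minio/health/live
    print(is_healthy(health_url()))
```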
START: What deployment mode do you need?
├── Development/testing with single node?
│ ├── YES → Use Standalone docker-compose.yml (Step 1)
│ └── NO ↓
├── Production with data redundancy?
│ ├── YES → Use Distributed docker-compose.yml with 4 nodes + nginx (Step 2)
│ └── NO ↓
├── Need bucket event notifications?
│ ├── YES → Add webhook env vars + mc event configuration (Step 5)
│ └── NO ↓
├── Need TLS encryption?
│ ├── YES → Mount certs to /root/.minio/certs/ or terminate TLS at reverse proxy
│ └── NO ↓
└── DEFAULT → Start with Standalone, migrate to Distributed when needed
Create a minimal docker-compose.yml for local development and testing. This runs a single MinIO server with persistent storage. [src2]
# docker-compose.yml -- MinIO Standalone (development only)
services:
  minio:
    image: quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z
    container_name: minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin123
    command: server /data --console-address ":9001"
    volumes:
      - minio_data:/data
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 10s  # server needs a few seconds before mc ready succeeds
volumes:
  minio_data:
Verify: curl -s http://localhost:9000/minio/health/live → returns HTTP 200
The official 4-node distributed setup uses erasure coding for data protection. Each node has 2 drives, creating an 8-drive erasure set that tolerates up to 4 drive failures. [src1, src3]
# docker-compose.yml -- MinIO Distributed (4 nodes, erasure coding)
x-minio-common: &minio-common
  image: quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z
  command: server --console-address ":9001" http://minio{1...4}/data{1...2}
  expose:
    - "9000"
    - "9001"
  environment:
    MINIO_ROOT_USER: minioadmin
    MINIO_ROOT_PASSWORD: strongpassword123
  healthcheck:
    test: ["CMD", "mc", "ready", "local"]
    interval: 5s
    timeout: 5s
    retries: 5

services:
  minio1:
    <<: *minio-common
    hostname: minio1
    volumes:
      - data1-1:/data1
      - data1-2:/data2
  minio2:
    <<: *minio-common
    hostname: minio2
    volumes:
      - data2-1:/data1
      - data2-2:/data2
  minio3:
    <<: *minio-common
    hostname: minio3
    volumes:
      - data3-1:/data1
      - data3-2:/data2
  minio4:
    <<: *minio-common
    hostname: minio4
    volumes:
      - data4-1:/data1
      - data4-2:/data2
  nginx:
    image: nginx:1.27-alpine
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4

# Named volumes must be declared, or compose fails to start
volumes:
  data1-1:
  data1-2:
  data2-1:
  data2-2:
  data3-1:
  data3-2:
  data4-1:
  data4-2:
Verify: curl -s http://localhost:9000/minio/health/cluster → returns HTTP 200
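The compose file mounts ./nginx.conf, which is not shown above. A minimal load-balancing sketch, assuming the upstream servers match the minio1-4 service hostnames (directives beyond that are illustrative; the console upstream needs websocket headers):

```nginx
events {}

http {
    upstream minio_s3 {
        least_conn;
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
    }
    upstream minio_console {
        least_conn;
        server minio1:9001;
        server minio2:9001;
        server minio3:9001;
        server minio4:9001;
    }
    server {
        listen 9000;
        location / {
            proxy_set_header Host $http_host;
            proxy_pass http://minio_s3;
        }
    }
    server {
        listen 9001;
        location / {
            proxy_set_header Host $http_host;
            # Console uses websockets; upgrade headers are required
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_pass http://minio_console;
        }
    }
}
```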
Install and configure the MinIO Client to manage buckets, users, and policies. [src5]
# Configure alias pointing to your MinIO instance
mc alias set myminio http://localhost:9000 minioadmin minioadmin123
# Create a bucket
mc mb myminio/my-bucket
# Upload a file
mc cp myfile.txt myminio/my-bucket/
# Set bucket policy to public read
mc anonymous set download myminio/my-bucket
Verify: mc admin info myminio → shows server status, disks, uptime
Set up access control with users and JSON policies for fine-grained permissions. [src5]
# Create a new user
mc admin user add myminio appuser appuser-secret-key
# Apply a custom policy and attach to user
mc admin policy create myminio app-readwrite /tmp/app-policy.json
mc admin policy attach myminio app-readwrite --user appuser
Verify: mc admin user info myminio appuser → shows attached policy
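The /tmp/app-policy.json referenced above is not shown. A plausible read-write policy scoped to a single bucket (the bucket name and action list are illustrative; adjust to your needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::my-bucket/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-bucket"]
    }
  ]
}
```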
Set up webhook notifications for object events (create, delete, access). [src7]
# Add webhook target to MinIO
mc admin config set myminio notify_webhook:primary \
endpoint="http://webhook-receiver:8080/events"
# Restart and enable events on a bucket
mc admin service restart myminio
mc event add myminio/my-bucket arn:minio:sqs::primary:webhook \
--event put,get,delete
Verify: Upload a file and check webhook receiver logs for the event payload.
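The webhook receiver gets an S3-style JSON event for each notification. A stdlib-only sketch of extracting the useful fields — the sample payload follows the S3 event record format, but treat the exact field names as an assumption and verify against your receiver's logs:

```python
import json

# Trimmed example of a webhook payload (S3 event record format; assumption)
SAMPLE_EVENT = json.dumps({
    "EventName": "s3:ObjectCreated:Put",
    "Key": "my-bucket/remote-file.txt",
    "Records": [{
        "eventName": "s3:ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "my-bucket"},
            "object": {"key": "remote-file.txt", "size": 11},
        },
    }],
})

def summarize_event(body):
    """Extract (event, bucket, key) triples from a webhook payload body."""
    payload = json.loads(body)
    out = []
    for rec in payload.get("Records", []):
        s3 = rec.get("s3", {})
        out.append((
            rec.get("eventName", ""),
            s3.get("bucket", {}).get("name", ""),
            s3.get("object", {}).get("key", ""),
        ))
    return out

print(summarize_event(SAMPLE_EVENT))
# [('s3:ObjectCreated:Put', 'my-bucket', 'remote-file.txt')]
```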
import boto3  # pip install boto3>=1.35.0
from botocore.config import Config

# Input: MinIO endpoint, credentials, file to upload
# Output: Uploaded object in MinIO bucket
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin123",
    config=Config(signature_version="s3v4"),
    region_name="us-east-1",
)

# Create bucket (ignore if exists)
try:
    s3.create_bucket(Bucket="my-bucket")
except s3.exceptions.BucketAlreadyOwnedByYou:
    pass

# Upload file
s3.upload_file("local-file.txt", "my-bucket", "remote-file.txt")
// npm install @aws-sdk/client-s3@^3.500.0
const { S3Client, PutObjectCommand, GetObjectCommand } =
  require("@aws-sdk/client-s3");

// Input: MinIO endpoint, credentials, file buffer
// Output: Object stored/retrieved from MinIO bucket
const s3 = new S3Client({
  endpoint: "http://localhost:9000",
  region: "us-east-1",
  credentials: {
    accessKeyId: "minioadmin",
    secretAccessKey: "minioadmin123",
  },
  forcePathStyle: true, // Required for MinIO
});

// Upload an object
(async () => {
  await s3.send(new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "remote-file.txt",
    Body: "hello from MinIO",
  }));
})();
// go get github.com/minio/minio-go/v7@latest
package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("localhost:9000", &minio.Options{
		Creds:  credentials.NewStaticV4("minioadmin", "minioadmin123", ""),
		Secure: false,
	})
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	if err := client.MakeBucket(ctx, "my-bucket", minio.MakeBucketOptions{}); err != nil {
		log.Fatal(err)
	}
}
# BAD -- default credentials are publicly known
services:
  minio:
    image: quay.io/minio/minio
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin

# GOOD -- credentials from .env file, never committed to VCS
services:
  minio:
    image: quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
# BAD -- console binds to random port, inaccessible
command: server /data
ports:
  - "9000:9000"
  - "9001:9001"

# GOOD -- console reliably available on port 9001
command: server /data --console-address ":9001"
ports:
  - "9000:9000"
  - "9001:9001"
# BAD -- :latest is mutable, can break on update
image: quay.io/minio/minio:latest
# GOOD -- immutable tag, reproducible deployments
image: quay.io/minio/minio:RELEASE.2025-09-06T17-38-46Z
# BAD -- no redundancy, single point of failure
command: server /data --console-address ":9001"
# GOOD -- 4-node cluster with erasure coding
command: server --console-address ":9001" http://minio{1...4}/data{1...2}
- Error "bind: address already in use" on startup. Fix: Map to a different host port with "9002:9000". [src2]
- Permission denied writing to the data directory. Fix: chmod 777 /path/to/data on the host or use --user $(id -u):$(id -g). [src6]
- Console unreachable: omitting --console-address ":9001" makes the console bind to a random port. Fix: Always include the flag in the command. [src4]
- SDK connection errors. Fix: Verify endpoint_url uses port 9000 (S3 API), not 9001 (console). [src2]
- Distributed nodes cannot reach each other. Fix: Verify hostname values match the server command. [src1]
- JavaScript SDK bucket lookups fail. Fix: Set forcePathStyle: true. [src6]
- Healthcheck flaps because mc ready local fails during initialization. Fix: Increase start_period to 10s or more. [src3]
# Check MinIO is running and healthy
curl -sf http://localhost:9000/minio/health/live && echo "OK" || echo "FAIL"
# Check cluster health (distributed mode)
curl -sf http://localhost:9000/minio/health/cluster && echo "OK" || echo "FAIL"
# View MinIO server info (disk, uptime, version)
mc admin info myminio
# Check container logs for errors
docker compose logs minio --tail=50
# Verify bucket exists and list contents
mc ls myminio/my-bucket/
# Check disk usage per bucket
mc du myminio/my-bucket/
# View configured event notifications
mc event ls myminio/my-bucket
# Check MinIO server configuration
mc admin config get myminio
| Release | Status | Breaking Changes | Migration Notes |
|---|---|---|---|
| RELEASE.2025-10-15 | Security fix | None | Fixes privilege escalation CVE -- apply immediately |
| RELEASE.2025-09-06 | Recommended | None | Stable release, recommended baseline |
| RELEASE.2025-05-24 | Breaking | Admin console removed from CE; boringcrypto removed | Use mc CLI or separate object-browser; use GOFIPS env |
| RELEASE.2024-xx | Previous | None | Standard upgrade path; data format compatible |
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Local S3-compatible development/testing | Need managed cloud storage with SLA | AWS S3, GCS, Azure Blob |
| Self-hosted object storage for CI/CD artifacts | Storing <1 GB in a simple project | Docker volumes or bind mounts |
| S3 API-compatible backup target | Need full data lake with analytics | Delta Lake, Apache Iceberg on S3 |
| Air-gapped or on-premise deployments | Need POSIX filesystem semantics | NFS, GlusterFS, or Ceph |
- The web admin console was removed from Community Edition: manage via the mc CLI, a separate minio/object-browser container, or the commercial AIStor edition
- docker compose down -v destroys volumes -- never use the -v flag unless you intend to destroy all stored data
- /minio/health/live returns 200 even during quorum loss in distributed mode -- use /minio/health/cluster for full cluster checks
- Behind a reverse proxy, set MINIO_SERVER_URL and MINIO_BROWSER_REDIRECT_URL to prevent redirect loops and presigned URL mismatches
- MINIO_ROOT_USER and MINIO_ROOT_PASSWORD cannot be changed after initial setup without data migration