Docker Compose Orchestration
A comprehensive skill for orchestrating multi-container applications using Docker Compose. This skill enables rapid development, deployment, and management of containerized applications with service definitions, networking strategies, volume management, health checks, and production-ready configurations.
When to Use This Skill
Use this skill when:
- Building multi-container applications (microservices, full-stack apps)
- Setting up development environments with databases, caching, and services
- Orchestrating frontend, backend, and database services together
- Managing service dependencies and startup order
- Configuring networks and inter-service communication
- Implementing persistent storage with volumes
- Deploying applications to development, staging, or production
- Creating reproducible development environments
- Managing application lifecycle (start, stop, rebuild, scale)
- Monitoring application health and implementing health checks
- Migrating from single containers to multi-service architectures
- Testing distributed systems locally
Core Concepts
Docker Compose Philosophy
Docker Compose simplifies multi-container application management through:
- Declarative Configuration: Define entire application stacks in YAML
- Service Abstraction: Each component is a service with its own configuration
- Automatic Networking: Services can communicate by name automatically
- Volume Management: Persistent data and shared storage across containers
- Environment Isolation: Each project gets its own network namespace
- Reproducibility: Same configuration works across all environments
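As a minimal sketch of these ideas (service, image, and variable names are illustrative, not from a real project): two declaratively defined services share the project's default network, `app` reaches the database simply by the service name `db`, and a named volume keeps data reproducible across restarts.

```yaml
services:
  app:
    image: myapp:latest                             # Declarative: desired state, not steps
    environment:
      - DATABASE_URL=postgresql://db:5432/myapp     # "db" resolves via Compose's built-in DNS
    depends_on:
      - db
  db:
    image: postgres:15-alpine
    volumes:
      - db-data:/var/lib/postgresql/data            # Named volume survives container recreation

volumes:
  db-data:
```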
Key Docker Compose Entities
- Services: Individual containers and their configurations
- Networks: Communication channels between services
- Volumes: Persistent storage and data sharing
- Configs: Non-sensitive configuration files
- Secrets: Sensitive data (passwords, API keys)
- Projects: Collection of services under a single namespace
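Configs and secrets are the least exercised of these entities in the examples that follow, so here is a hedged sketch of declaring both from local files (all file names and entity names are assumptions for illustration):

```yaml
services:
  web:
    image: nginx:alpine
    configs:
      - source: nginx-conf
        target: /etc/nginx/nginx.conf   # Mounted into the container at this path
    secrets:
      - api-key                         # Exposed at /run/secrets/api-key

configs:
  nginx-conf:
    file: ./nginx.conf                  # Hypothetical local file

secrets:
  api-key:
    file: ./secrets/api-key.txt         # Hypothetical local file
```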
Compose File Structure
```yaml
version: "3.8"   # Compose file format version

services:        # Define containers
  service-name:
    # Service configuration

networks:        # Define custom networks
  network-name:
    # Network configuration

volumes:         # Define named volumes
  volume-name:
    # Volume configuration

configs:         # Application configs (optional)
  config-name:
    # Config source

secrets:         # Sensitive data (optional)
  secret-name:
    # Secret source
```
Service Definition Patterns
Basic Service Definition
```yaml
services:
  web:
    image: nginx:alpine                  # Use existing image
    container_name: my-web               # Custom container name
    restart: unless-stopped              # Restart policy
    ports:
      - "80:80"                          # Host:Container port mapping
    environment:
      - ENV_VAR=value                    # Environment variables
    volumes:
      - ./html:/usr/share/nginx/html     # Volume mount
    networks:
      - frontend                         # Connect to network
```
Build-Based Service
```yaml
services:
  app:
    build:
      context: ./app             # Build context directory
      dockerfile: Dockerfile     # Custom Dockerfile
      args:                      # Build arguments
        NODE_ENV: development
      target: development        # Multi-stage build target
    image: myapp:latest          # Tag resulting image
    ports:
      - "3000:3000"
```
Service with Dependencies
```yaml
services:
  web:
    image: nginx
    depends_on:
      db:
        condition: service_healthy   # Wait for health check
      redis:
        condition: service_started   # Wait for start only
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
  redis:
    image: redis:alpine
```
Service with Advanced Configuration
```yaml
services:
  backend:
    build: ./backend
    command: npm run dev           # Override default command
    working_dir: /app              # Set working directory
    user: "1000:1000"              # Run as specific user
    hostname: api-server           # Custom hostname
    domainname: example.com        # Domain name
    env_file:
      - .env                       # Load env from file
      - .env.local
    environment:
      DATABASE_URL: "postgresql://db:5432/myapp"
      REDIS_URL: "redis://cache:6379"
    volumes:
      - ./backend:/app             # Source code mount
      - /app/node_modules          # Preserve node_modules
      - app-data:/data             # Named volume
    ports:
      - "3000:3000"                # Application port
      - "9229:9229"                # Debug port
    expose:
      - "8080"                     # Expose to other services only
    networks:
      - backend
      - frontend
    labels:
      - "com.example.description=Backend API"
      - "com.example.version=1.0"
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
```
Multi-Container Application Patterns
Pattern 1: Full-Stack Web Application
Scenario: React frontend + Node.js backend + PostgreSQL database
```yaml
version: "3.8"
services:
  # Frontend React Application
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      target: development
    ports:
      - "3000:3000"
    volumes:
      - ./frontend/src:/app/src
      - /app/node_modules
    environment:
      - REACT_APP_API_URL=http://localhost:4000/api
      - CHOKIDAR_USEPOLLING=true   # For hot reload
    networks:
      - frontend
    depends_on:
      - backend

  # Backend Node.js API
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - "4000:4000"
      - "9229:9229"                # Debugger
    volumes:
      - ./backend:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
      - REDIS_URL=redis://cache:6379
      - JWT_SECRET=dev-secret
    env_file:
      - ./backend/.env.local
    networks:
      - frontend
      - backend
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    command: npm run dev

  # PostgreSQL Database
  db:
    image: postgres:15-alpine
    container_name: postgres-db
    restart: unless-stopped
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=myapp
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis Cache
  cache:
    image: redis:7-alpine
    container_name: redis-cache
    restart: unless-stopped
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - backend
    command: redis-server --appendonly yes
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 5

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

volumes:
  postgres-data:
    driver: local
  redis-data:
    driver: local
```
Pattern 2: Microservices Architecture
Scenario: Multiple services with reverse proxy and service discovery
```yaml
version: "3.8"
services:
  # NGINX Reverse Proxy
  proxy:
    image: nginx:alpine
    container_name: reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./ssl:/etc/nginx/ssl:ro
    networks:
      - public
    depends_on:
      - auth-service
      - user-service
      - order-service
    restart: unless-stopped

  # Authentication Service
  auth-service:
    build: ./services/auth
    container_name: auth-service
    expose:
      - "8001"
    environment:
      - SERVICE_NAME=auth
      - DATABASE_URL=postgresql://db:5432/auth_db
      - JWT_SECRET=${JWT_SECRET}
    networks:
      - public
      - internal
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8001/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # User Service
  user-service:
    build: ./services/user
    container_name: user-service
    expose:
      - "8002"
    environment:
      - SERVICE_NAME=user
      - DATABASE_URL=postgresql://db:5432/user_db
      - AUTH_SERVICE_URL=http://auth-service:8001
    networks:
      - public
      - internal
    depends_on:
      - auth-service
      - db
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8002/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Order Service
  order-service:
    build: ./services/order
    container_name: order-service
    expose:
      - "8003"
    environment:
      - SERVICE_NAME=order
      - DATABASE_URL=postgresql://db:5432/order_db
      - USER_SERVICE_URL=http://user-service:8002
      - RABBITMQ_URL=amqp://rabbitmq:5672
    networks:
      - public
      - internal
    depends_on:
      - user-service
      - db
      - rabbitmq
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8003/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Shared PostgreSQL Database
  db:
    image: postgres:15-alpine
    container_name: postgres-db
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./database/init-multi-db.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # RabbitMQ Message Broker
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: rabbitmq
    ports:
      - "5672:5672"     # AMQP
      - "15672:15672"   # Management UI
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_PASSWORD}
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq
    networks:
      - internal
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "ping"]
      interval: 30s
      timeout: 10s
      retries: 5

networks:
  public:
    driver: bridge
  internal:
    driver: bridge
    internal: true      # No external access

volumes:
  postgres-data:
  rabbitmq-data:
```
Pattern 3: Development Environment with Hot Reload
Scenario: Development setup with live code reloading and debugging
```yaml
version: "3.8"
services:
  # Development Frontend
  frontend-dev:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
      - "9222:9222"      # Chrome DevTools
    volumes:
      - ./frontend:/app
      - /app/node_modules
      - /app/.next       # Next.js build cache
    environment:
      - NODE_ENV=development
      - WATCHPACK_POLLING=true
      - NEXT_PUBLIC_API_URL=http://localhost:4000
    networks:
      - dev-network
    stdin_open: true
    tty: true
    command: npm run dev

  # Development Backend
  backend-dev:
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    ports:
      - "4000:4000"
      - "9229:9229"      # Node.js debugger
    volumes:
      - ./backend:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
      - DEBUG=app:*
      - DATABASE_URL=postgresql://postgres:dev@db:5432/dev_db
    networks:
      - dev-network
    depends_on:
      - db
      - mailhog
    command: npm run dev:debug

  # PostgreSQL with pgAdmin
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_PASSWORD=dev
      - POSTGRES_DB=dev_db
    ports:
      - "5432:5432"
    volumes:
      - dev-db-data:/var/lib/postgresql/data
    networks:
      - dev-network

  pgadmin:
    image: dpage/pgadmin4:latest
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@dev.local
      - PGADMIN_DEFAULT_PASSWORD=admin
    ports:
      - "5050:80"
    networks:
      - dev-network
    depends_on:
      - db

  # MailHog for Email Testing
  mailhog:
    image: mailhog/mailhog:latest
    ports:
      - "1025:1025"      # SMTP
      - "8025:8025"      # Web UI
    networks:
      - dev-network

networks:
  dev-network:
    driver: bridge

volumes:
  dev-db-data:
```
Networking Strategies
Default Bridge Network
```yaml
services:
  web:
    image: nginx
    # Automatically connected to default network
  app:
    image: myapp
    # Can communicate with 'web' via service name
```
Custom Bridge Networks
```yaml
version: "3.8"
services:
  frontend:
    image: react-app
    networks:
      - public
  backend:
    image: api-server
    networks:
      - public    # Accessible from frontend
      - private   # Accessible from database
  database:
    image: postgres
    networks:
      - private   # Isolated from frontend
networks:
  public:
    driver: bridge
  private:
    driver: bridge
    internal: true   # No internet access
```
Network Aliases
```yaml
services:
  api:
    image: api-server
    networks:
      backend:
        aliases:
          - api-server
          - api.internal
          - api-v1.internal
networks:
  backend:
    driver: bridge
```
Host Network Mode
```yaml
services:
  app:
    image: myapp
    network_mode: "host"   # Use host network stack
    # No port mapping needed; uses host ports directly
```
Custom Network Configuration
```yaml
networks:
  custom-network:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: br-custom
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
          gateway: 172.28.0.1
    labels:
      - "com.example.description=Custom network"
```
Volume Management
Named Volumes
```yaml
version: "3.8"
services:
  db:
    image: postgres:15
    volumes:
      - postgres-data:/var/lib/postgresql/data   # Named volume
  backup:
    image: postgres:15
    volumes:
      - postgres-data:/data:ro                   # Read-only mount of the same volume
    # A shell is required for the redirection to work; the dump goes to /tmp,
    # since the named volume is mounted read-only here
    command: sh -c "pg_dump -h db -U postgres > /tmp/dump.sql"
volumes:
  postgres-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /path/on/host
```
Bind Mounts
```yaml
services:
  web:
    image: nginx
    volumes:
      # Relative path bind mount
      - ./html:/usr/share/nginx/html
      # Absolute path bind mount
      - /var/log/nginx:/var/log/nginx
      # Read-only bind mount
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
```
tmpfs Mounts (In-Memory)
```yaml
services:
  app:
    image: myapp
    tmpfs:
      - /tmp
      - /run
    # Or with options:
    volumes:
      - type: tmpfs
        target: /app/cache
        tmpfs:
          size: 1000000000   # 1GB
```
Volume Sharing Between Services
```yaml
services:
  app:
    image: myapp
    volumes:
      - shared-data:/data
  worker:
    image: worker
    volumes:
      - shared-data:/data
  backup:
    image: backup-tool
    volumes:
      - shared-data:/backup:ro
volumes:
  shared-data:
```
Advanced Volume Configuration
```yaml
volumes:
  data:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=10.40.0.199,nolock,soft,rw"
      device: ":/docker/example"
  cache:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
      o: "size=100m,uid=1000"
  external-volume:
    external: true             # Volume created outside Compose
    name: my-existing-volume
```
Health Checks
HTTP Health Check
```yaml
services:
  web:
    image: nginx
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
```
Database Health Check
```yaml
services:
  postgres:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
  mysql:
    image: mysql:8
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 3
  mongodb:
    image: mongo:6
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
```
Application Health Check
```yaml
services:
  app:
    build: ./app
    healthcheck:
      test: ["CMD", "node", "healthcheck.js"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
  api:
    build: ./api
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
```
Complex Health Checks
```yaml
services:
  redis:
    image: redis:alpine
    healthcheck:
      test: |
        sh -c '
          redis-cli ping | grep PONG &&
          redis-cli --raw incr ping | grep 1
        '
      interval: 10s
      timeout: 3s
      retries: 5
```
Development vs Production Configurations
Base Configuration (compose.yaml)
```yaml
version: "3.8"
services:
  web:
    image: myapp:latest
    environment:
      - NODE_ENV=production
    networks:
      - app-network
  db:
    image: postgres:15-alpine
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
```
Development Override (compose.override.yaml)
```yaml
# Automatically merged with compose.yaml in development
version: "3.8"
services:
  web:
    build:
      context: .
      target: development
    volumes:
      - ./src:/app/src        # Live code reload
      - /app/node_modules
    ports:
      - "3000:3000"           # Expose for local access
      - "9229:9229"           # Debugger port
    environment:
      - NODE_ENV=development
      - DEBUG=*
    command: npm run dev
  db:
    ports:
      - "5432:5432"           # Expose for local tools
    environment:
      - POSTGRES_PASSWORD=dev
    volumes:
      - ./init-dev.sql:/docker-entrypoint-initdb.d/init.sql
```
Production Configuration (compose.prod.yaml)
```yaml
version: "3.8"
services:
  web:
    image: myapp:${VERSION:-latest}
    restart: always
    environment:
      - NODE_ENV=production
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      rollback_config:
        parallelism: 1
        delay: 5s
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"

  db:
    image: postgres:15-alpine
    restart: always
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password
    volumes:
      - postgres-data:/var/lib/postgresql/data
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G

  # Production additions
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/prod.conf:/etc/nginx/nginx.conf:ro
      - ssl-certs:/etc/nginx/ssl:ro
    restart: always
    depends_on:
      - web

secrets:
  db_password:
    external: true

volumes:
  postgres-data:
    driver: local
  ssl-certs:
    external: true
```
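The `myapp:${VERSION:-latest}` reference above relies on Compose variable substitution: values come from the shell environment or a `.env` file in the project directory, and the `:-` form supplies a default when the variable is unset or empty. As a sketch, an accompanying `.env` file might look like this (the values are placeholders, not from the original document):

```shell
# .env — read automatically by docker compose from the project directory
VERSION=1.4.2
# Placeholder only; prefer Docker secrets for real credentials
DB_PASSWORD=change-me
```

With this file in place, `docker compose config` shows the resolved values before anything is deployed.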
Staging Configuration (compose.staging.yaml)
```yaml
version: "3.8"
services:
  web:
    image: myapp:staging-${VERSION:-latest}
    restart: unless-stopped
    environment:
      - NODE_ENV=staging
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: '1'
          memory: 1G
  db:
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - staging-db-data:/var/lib/postgresql/data
volumes:
  staging-db-data:
```
Essential Docker Compose Commands
Project Management
```bash
# Start services
docker compose up                    # Foreground
docker compose up -d                 # Detached (background)
docker compose up --build            # Rebuild images
docker compose up --force-recreate   # Recreate containers
docker compose up --scale web=3      # Scale service to 3 instances

# Stop services
docker compose stop                  # Stop containers
docker compose down                  # Stop and remove containers/networks
docker compose down -v               # Also remove volumes
docker compose down --rmi all        # Also remove images

# Restart services
docker compose restart               # Restart all services
docker compose restart web           # Restart specific service
```
Service Management
```bash
# Build services
docker compose build                 # Build all services
docker compose build web             # Build specific service
docker compose build --no-cache      # Build without cache
docker compose build --pull          # Pull latest base images

# View services
docker compose ps                    # List containers
docker compose ps -a                 # Include stopped containers
docker compose top                   # Display running processes
docker compose images                # List images

# Logs
docker compose logs                  # View all logs
docker compose logs -f               # Follow logs
docker compose logs web              # Service-specific logs
docker compose logs --tail=100 web   # Last 100 lines
```
Execution and Debugging
```bash
# Execute commands
docker compose exec web sh            # Interactive shell
docker compose exec web npm test      # Run command
docker compose exec -u root web sh    # Run as root

# Run one-off commands
docker compose run web npm install    # Run command in a new container
docker compose run --rm web test      # Remove container afterwards
docker compose run --no-deps web sh   # Don't start dependencies
```
Configuration Management
配置管理
bash
bash
Multiple compose files
多Compose文件
docker compose -f compose.yaml -f compose.prod.yaml up
docker compose -f compose.yaml -f compose.prod.yaml up
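To sketch how merging works, a hypothetical compose.prod.yaml (passed via the second -f flag) would typically override only production-relevant keys; later files replace scalar values and append to lists. Service names and commands below are illustrative, not taken from this guide's examples:

```yaml
# compose.prod.yaml — hypothetical override merged on top of compose.yaml
services:
  web:
    # replace the dev command with a production server (illustrative)
    command: gunicorn myapp.wsgi:application --bind 0.0.0.0:8000
    restart: unless-stopped
    environment:
      - DEBUG=0
```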
Environment-specific deployment
环境特定部署
docker compose --env-file .env.prod up
docker compose -p myproject up # Custom project name
docker compose --env-file .env.prod up
docker compose -p myproject up # 自定义项目名称
Configuration validation
配置验证
docker compose config # Validate and view config
docker compose config --quiet # Only validation
docker compose config --services # List services
docker compose config --volumes # List volumes
docker compose config # 验证并查看配置
docker compose config --quiet # 仅验证配置
docker compose config --services # 列出服务
docker compose config --volumes # 列出卷
16+ Compose Examples
16+ Compose 示例
Example 1: NGINX + PHP + MySQL (LAMP Stack)
示例1:NGINX + PHP + MySQL(LAMP栈)
yaml
version: "3.8"
services:
nginx:
image: nginx:alpine
ports:
- "80:80"
volumes:
- ./public:/var/www/html
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
networks:
- lamp
depends_on:
- php
php:
build:
context: ./php
dockerfile: Dockerfile
volumes:
- ./public:/var/www/html
networks:
- lamp
depends_on:
- mysql
mysql:
image: mysql:8.0
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: myapp
MYSQL_USER: user
MYSQL_PASSWORD: password
volumes:
- mysql-data:/var/lib/mysql
networks:
- lamp
networks:
lamp:
volumes:
  mysql-data:
Example 2: Django + PostgreSQL + Redis + Celery
示例2:Django + PostgreSQL + Redis + Celery
yaml
version: "3.8"
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
environment:
- DATABASE_URL=postgresql://postgres:postgres@db:5432/django_db
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
db:
image: postgres:15-alpine
environment:
POSTGRES_DB: django_db
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
volumes:
- postgres-data:/var/lib/postgresql/data
redis:
image: redis:alpine
volumes:
- redis-data:/data
celery:
build: .
command: celery -A myproject worker -l info
volumes:
- .:/code
environment:
- DATABASE_URL=postgresql://postgres:postgres@db:5432/django_db
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
celery-beat:
build: .
command: celery -A myproject beat -l info
volumes:
- .:/code
environment:
- DATABASE_URL=postgresql://postgres:postgres@db:5432/django_db
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
volumes:
postgres-data:
  redis-data:
Example 3: React + Node.js + MongoDB + NGINX
示例3:React + Node.js + MongoDB + NGINX
yaml
version: "3.8"
services:
frontend:
build:
context: ./frontend
args:
REACT_APP_API_URL: http://localhost/api
volumes:
- ./frontend:/app
- /app/node_modules
environment:
- CHOKIDAR_USEPOLLING=true
networks:
- app-network
backend:
build: ./backend
ports:
- "5000:5000"
volumes:
- ./backend:/app
- /app/node_modules
environment:
- MONGODB_URI=mongodb://mongo:27017/myapp
- JWT_SECRET=dev-secret
depends_on:
- mongo
networks:
- app-network
mongo:
image: mongo:6
ports:
- "27017:27017"
volumes:
- mongo-data:/data/db
- mongo-config:/data/configdb
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=secret
networks:
- app-network
nginx:
image: nginx:alpine
ports:
- "80:80"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- frontend
- backend
networks:
- app-network
networks:
app-network:
driver: bridge
volumes:
mongo-data:
  mongo-config:
Example 4: Spring Boot + MySQL + Adminer
示例4:Spring Boot + MySQL + Adminer
yaml
version: "3.8"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
environment:
- SPRING_DATASOURCE_URL=jdbc:mysql://db:3306/springdb?useSSL=false
- SPRING_DATASOURCE_USERNAME=root
- SPRING_DATASOURCE_PASSWORD=secret
- SPRING_JPA_HIBERNATE_DDL_AUTO=update
depends_on:
db:
condition: service_healthy
networks:
- spring-network
db:
image: mysql:8.0
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: springdb
volumes:
- mysql-data:/var/lib/mysql
networks:
- spring-network
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
interval: 10s
timeout: 5s
retries: 5
adminer:
image: adminer:latest
ports:
- "8081:8080"
environment:
ADMINER_DEFAULT_SERVER: db
networks:
- spring-network
networks:
spring-network:
volumes:
  mysql-data:
Example 5: WordPress + MySQL + phpMyAdmin
示例5:WordPress + MySQL + phpMyAdmin
yaml
version: "3.8"
services:
wordpress:
image: wordpress:latest
ports:
- "8000:80"
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
WORDPRESS_DB_NAME: wordpress
volumes:
- wordpress-data:/var/www/html
depends_on:
- db
networks:
- wordpress-network
db:
image: mysql:8.0
environment:
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
MYSQL_ROOT_PASSWORD: rootpassword
volumes:
- db-data:/var/lib/mysql
networks:
- wordpress-network
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- "8080:80"
environment:
PMA_HOST: db
PMA_USER: root
PMA_PASSWORD: rootpassword
depends_on:
- db
networks:
- wordpress-network
networks:
wordpress-network:
volumes:
wordpress-data:
  db-data:
Example 6: Elasticsearch + Kibana + Logstash (ELK Stack)
示例6:Elasticsearch + Kibana + Logstash(ELK栈)
yaml
version: "3.8"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:8.10.0
container_name: elasticsearch
environment:
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- xpack.security.enabled=false
ports:
- "9200:9200"
- "9300:9300"
volumes:
- elasticsearch-data:/usr/share/elasticsearch/data
networks:
- elk
logstash:
image: docker.elastic.co/logstash/logstash:8.10.0
container_name: logstash
volumes:
- ./logstash/pipeline:/usr/share/logstash/pipeline:ro
- ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
ports:
- "5000:5000"
- "9600:9600"
environment:
LS_JAVA_OPTS: "-Xmx256m -Xms256m"
networks:
- elk
depends_on:
- elasticsearch
kibana:
image: docker.elastic.co/kibana/kibana:8.10.0
container_name: kibana
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200
ELASTICSEARCH_HOSTS: http://elasticsearch:9200
networks:
- elk
depends_on:
- elasticsearch
networks:
elk:
driver: bridge
volumes:
  elasticsearch-data:
Example 7: GitLab + GitLab Runner
示例7:GitLab + GitLab Runner
yaml
version: "3.8"
services:
gitlab:
image: gitlab/gitlab-ce:latest
container_name: gitlab
restart: unless-stopped
hostname: gitlab.local
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://gitlab.local'
gitlab_rails['gitlab_shell_ssh_port'] = 2222
ports:
- "80:80"
- "443:443"
- "2222:22"
volumes:
- gitlab-config:/etc/gitlab
- gitlab-logs:/var/log/gitlab
- gitlab-data:/var/opt/gitlab
networks:
- gitlab-network
gitlab-runner:
image: gitlab/gitlab-runner:latest
container_name: gitlab-runner
restart: unless-stopped
volumes:
- gitlab-runner-config:/etc/gitlab-runner
- /var/run/docker.sock:/var/run/docker.sock
networks:
- gitlab-network
depends_on:
- gitlab
networks:
gitlab-network:
volumes:
gitlab-config:
gitlab-logs:
gitlab-data:
  gitlab-runner-config:
Example 8: Jenkins + Docker-in-Docker
示例8:Jenkins + Docker-in-Docker
yaml
version: "3.8"
services:
jenkins:
image: jenkins/jenkins:lts
container_name: jenkins
user: root
ports:
- "8080:8080"
- "50000:50000"
volumes:
- jenkins-data:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock
- /usr/bin/docker:/usr/bin/docker
environment:
- JAVA_OPTS=-Djenkins.install.runSetupWizard=false
networks:
- jenkins-network
jenkins-agent:
image: jenkins/inbound-agent:latest
container_name: jenkins-agent
environment:
- JENKINS_URL=http://jenkins:8080
- JENKINS_AGENT_NAME=agent1
- JENKINS_SECRET=${AGENT_SECRET}
- JENKINS_AGENT_WORKDIR=/home/jenkins/agent
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- jenkins-network
depends_on:
- jenkins
networks:
jenkins-network:
volumes:
  jenkins-data:
Example 9: Prometheus + Grafana + Node Exporter
示例9:Prometheus + Grafana + Node Exporter
yaml
version: "3.8"
services:
prometheus:
image: prom/prometheus:latest
container_name: prometheus
ports:
- "9090:9090"
volumes:
- ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
- prometheus-data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
networks:
- monitoring
grafana:
image: grafana/grafana:latest
container_name: grafana
ports:
- "3000:3000"
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=admin
- GF_INSTALL_PLUGINS=grafana-piechart-panel
volumes:
- grafana-data:/var/lib/grafana
- ./grafana/provisioning:/etc/grafana/provisioning:ro
networks:
- monitoring
depends_on:
- prometheus
node-exporter:
image: prom/node-exporter:latest
container_name: node-exporter
ports:
- "9100:9100"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
networks:
- monitoring
networks:
monitoring:
volumes:
prometheus-data:
  grafana-data:
Example 10: RabbitMQ + Multiple Consumers
示例10:RabbitMQ + 多消费者
yaml
version: "3.8"
services:
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: rabbitmq
ports:
- "5672:5672" # AMQP
- "15672:15672" # Management UI
environment:
RABBITMQ_DEFAULT_USER: admin
RABBITMQ_DEFAULT_PASS: secret
volumes:
- rabbitmq-data:/var/lib/rabbitmq
- ./rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro
networks:
- messaging
healthcheck:
test: ["CMD", "rabbitmq-diagnostics", "ping"]
interval: 30s
timeout: 10s
retries: 5
producer:
build: ./services/producer
environment:
RABBITMQ_URL: amqp://admin:secret@rabbitmq:5672
depends_on:
rabbitmq:
condition: service_healthy
networks:
- messaging
consumer-1:
build: ./services/consumer
environment:
RABBITMQ_URL: amqp://admin:secret@rabbitmq:5672
WORKER_ID: 1
depends_on:
rabbitmq:
condition: service_healthy
networks:
- messaging
deploy:
replicas: 3
consumer-2:
build: ./services/consumer
environment:
RABBITMQ_URL: amqp://admin:secret@rabbitmq:5672
WORKER_ID: 2
depends_on:
rabbitmq:
condition: service_healthy
networks:
- messaging
networks:
messaging:
volumes:
  rabbitmq-data:
Example 11: Traefik Reverse Proxy
示例11:Traefik反向代理
yaml
version: "3.8"
services:
traefik:
image: traefik:v2.10
container_name: traefik
command:
- --api.insecure=true
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
ports:
- "80:80"
- "443:443"
- "8080:8080" # Traefik dashboard
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./traefik/traefik.yml:/etc/traefik/traefik.yml:ro
- ./traefik/dynamic:/etc/traefik/dynamic:ro
networks:
- traefik-network
whoami:
image: traefik/whoami
labels:
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`whoami.local`)"
- "traefik.http.routers.whoami.entrypoints=web"
networks:
- traefik-network
app:
image: nginx:alpine
labels:
- "traefik.enable=true"
- "traefik.http.routers.app.rule=Host(`app.local`)"
- "traefik.http.routers.app.entrypoints=web"
- "traefik.http.services.app.loadbalancer.server.port=80"
networks:
- traefik-network
networks:
traefik-network:
    driver: bridge
Example 12: MinIO + PostgreSQL Backup
示例12:MinIO + PostgreSQL备份
yaml
version: "3.8"
services:
minio:
image: minio/minio:latest
container_name: minio
command: server /data --console-address ":9001"
ports:
- "9000:9000"
- "9001:9001"
environment:
MINIO_ROOT_USER: minioadmin
MINIO_ROOT_PASSWORD: minioadmin
volumes:
- minio-data:/data
networks:
- storage
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
postgres:
image: postgres:15-alpine
environment:
POSTGRES_DB: myapp
POSTGRES_USER: postgres
POSTGRES_PASSWORD: secret
volumes:
- postgres-data:/var/lib/postgresql/data
networks:
- storage
backup:
image: postgres:15-alpine
environment:
POSTGRES_HOST: postgres
POSTGRES_DB: myapp
POSTGRES_USER: postgres
POSTGRES_PASSWORD: secret
MINIO_ENDPOINT: minio:9000
MINIO_ACCESS_KEY: minioadmin
MINIO_SECRET_KEY: minioadmin
volumes:
- ./scripts/backup.sh:/backup.sh:ro
entrypoint: ["/bin/sh", "/backup.sh"]
depends_on:
- postgres
- minio
networks:
- storage
networks:
storage:
volumes:
minio-data:
  postgres-data:
Example 13: Apache Kafka + Zookeeper
示例13:Apache Kafka + Zookeeper
yaml
version: "3.8"
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
container_name: zookeeper
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
ports:
- "2181:2181"
volumes:
- zookeeper-data:/var/lib/zookeeper/data
- zookeeper-logs:/var/lib/zookeeper/log
networks:
- kafka-network
kafka:
image: confluentinc/cp-kafka:latest
container_name: kafka
depends_on:
- zookeeper
ports:
- "9092:9092"
- "29092:29092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
volumes:
- kafka-data:/var/lib/kafka/data
networks:
- kafka-network
kafka-ui:
image: provectuslabs/kafka-ui:latest
container_name: kafka-ui
depends_on:
- kafka
ports:
- "8080:8080"
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
KAFKA_CLUSTERS_0_ZOOKEEPER: zookeeper:2181
networks:
- kafka-network
networks:
kafka-network:
volumes:
zookeeper-data:
zookeeper-logs:
  kafka-data:
Example 14: Keycloak + PostgreSQL (Identity & Access Management)
示例14:Keycloak + PostgreSQL(身份与访问管理)
yaml
version: "3.8"
services:
postgres:
image: postgres:15-alpine
container_name: keycloak-db
environment:
POSTGRES_DB: keycloak
POSTGRES_USER: keycloak
POSTGRES_PASSWORD: password
volumes:
- postgres-data:/var/lib/postgresql/data
networks:
- keycloak-network
keycloak:
image: quay.io/keycloak/keycloak:latest
container_name: keycloak
environment:
KC_DB: postgres
KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
KC_DB_USERNAME: keycloak
KC_DB_PASSWORD: password
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: admin
command: start-dev
ports:
- "8080:8080"
depends_on:
- postgres
networks:
- keycloak-network
networks:
keycloak-network:
volumes:
  postgres-data:
Example 15: Portainer (Docker Management UI)
示例15:Portainer(Docker管理UI)
yaml
version: "3.8"
services:
portainer:
image: portainer/portainer-ce:latest
container_name: portainer
restart: unless-stopped
ports:
- "9000:9000"
- "8000:8000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- portainer-data:/data
networks:
- portainer-network
networks:
portainer-network:
volumes:
  portainer-data:
Example 16: SonarQube + PostgreSQL (Code Quality)
示例16:SonarQube + PostgreSQL(代码质量)
yaml
version: "3.8"
services:
sonarqube:
image: sonarqube:community
container_name: sonarqube
depends_on:
- db
environment:
SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonar
SONAR_JDBC_USERNAME: sonar
SONAR_JDBC_PASSWORD: sonar
volumes:
- sonarqube-conf:/opt/sonarqube/conf
- sonarqube-data:/opt/sonarqube/data
- sonarqube-logs:/opt/sonarqube/logs
- sonarqube-extensions:/opt/sonarqube/extensions
ports:
- "9000:9000"
networks:
- sonarqube-network
db:
image: postgres:15-alpine
container_name: sonarqube-db
environment:
POSTGRES_USER: sonar
POSTGRES_PASSWORD: sonar
POSTGRES_DB: sonar
volumes:
- postgresql-data:/var/lib/postgresql/data
networks:
- sonarqube-network
networks:
sonarqube-network:
volumes:
sonarqube-conf:
sonarqube-data:
sonarqube-logs:
sonarqube-extensions:
  postgresql-data:
Best Practices
最佳实践
Service Configuration
服务配置
- Use Specific Image Tags: Avoid latest in production
- Health Checks: Always define health checks for critical services
- Resource Limits: Set CPU and memory limits in production
- Restart Policies: Use appropriate restart policies
- Environment Variables: Use .env files for sensitive data
- Named Volumes: Use named volumes for data persistence
- Network Isolation: Separate frontend/backend networks
- Logging Configuration: Set up proper log rotation
- 使用特定镜像标签:生产环境避免使用 latest 标签
- 健康检查:为关键服务定义健康检查
- 资源限制:生产环境设置CPU和内存限制
- 重启策略:使用合适的重启策略
- 环境变量:使用 .env 文件存储敏感数据
- 命名卷:使用命名卷实现数据持久化
- 网络隔离:分离前端/后端网络
- 日志配置:设置合理的日志轮转策略
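Several of these practices can be sketched in a single service definition. The image name, endpoint, and limit values below are illustrative assumptions, not recommendations for any specific workload:

```yaml
services:
  api:
    image: myorg/api:1.4.2          # pinned tag rather than latest (name is illustrative)
    restart: unless-stopped
    env_file: .env                   # keep credentials out of the compose file
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]  # assumed endpoint
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
    logging:
      driver: json-file
      options:
        max-size: "10m"              # rotate logs at 10 MB
        max-file: "3"                # keep at most 3 rotated files
```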
Development Workflow
开发工作流
- Hot Reload: Mount source code as volumes for live updates
- Debug Ports: Expose debugger ports in development
- Override Files: Use compose.override.yaml for local config
- Build Caching: Structure Dockerfiles for efficient caching
- Separate Concerns: One process per container
- Service Naming: Use descriptive, consistent service names
- 热重载:挂载源码卷实现实时更新
- 调试端口:开发环境暴露调试端口
- 覆盖文件:使用 compose.override.yaml 进行本地配置
- 构建缓存:优化Dockerfile结构以提升构建缓存效率
- 关注点分离:每个容器运行一个进程
- 服务命名:使用清晰、一致的服务名称
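A minimal compose.override.yaml tying these ideas together might look like the following; docker compose up picks it up automatically, and the service name, paths, and debugger port are illustrative assumptions:

```yaml
# compose.override.yaml — local development overrides, merged automatically
services:
  web:
    volumes:
      - ./src:/app/src              # bind mount source for hot reload (paths assumed)
    ports:
      - "9229:9229"                 # Node.js inspector port, illustrative
    environment:
      - NODE_ENV=development
```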
Security
安全
- Secrets Management: Use Docker secrets or external secret managers
- Non-Root Users: Run containers as non-root users
- Read-Only Filesystems: Mount volumes as read-only when possible
- Network Segmentation: Use multiple networks for isolation
- Environment Isolation: Never commit sensitive .env files
- Image Scanning: Scan images for vulnerabilities
- Minimal Base Images: Use Alpine or distroless images
- 密钥管理:使用Docker密钥或外部密钥管理器
- 非root用户:以非root用户运行容器
- 只读文件系统:尽可能以只读方式挂载卷
- 网络分段:使用多网络实现隔离
- 环境隔离:切勿提交敏感的 .env 文件
- 镜像扫描:扫描镜像中的漏洞
- 轻量基础镜像:使用Alpine或无发行版镜像
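As a sketch of the hardening options above in one service (image name, UID, and paths are illustrative assumptions):

```yaml
services:
  api:
    image: myorg/api:1.4.2          # illustrative image
    user: "1000:1000"               # run as a non-root UID:GID
    read_only: true                 # read-only root filesystem
    tmpfs:
      - /tmp                        # writable scratch space where needed
    volumes:
      - ./config:/app/config:ro     # mount configuration read-only
    networks:
      - backend
networks:
  backend:
    internal: true                  # containers on this network cannot reach outside
```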
Production Deployment
生产部署
- Image Versioning: Tag images with semantic versions
- Rolling Updates: Configure gradual rollout strategies
- Monitoring: Integrate with monitoring solutions
- Backup Strategy: Implement automated backups
- High Availability: Deploy replicas of critical services
- Load Balancing: Use reverse proxies for load distribution
- Configuration Management: Externalize configuration
- Disaster Recovery: Test backup and restore procedures
- 镜像版本化:使用语义化版本标记镜像
- 滚动更新:配置渐进式发布策略
- 监控:集成监控解决方案
- 备份策略:实现自动化备份
- 高可用:部署关键服务的副本
- 负载均衡:使用反向代理实现负载分发
- 配置管理:外部化配置
- 灾难恢复:测试备份与恢复流程
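Rolling updates and replicas can be expressed under the deploy key, though note that update_config and restart_policy are fully honored by Docker Swarm (docker stack deploy); plain docker compose reads only a subset of deploy. The values below are an illustrative sketch:

```yaml
services:
  api:
    image: myorg/api:1.4.2          # semantic version tag, illustrative
    deploy:
      replicas: 3                   # run multiple copies of the critical service
      update_config:
        parallelism: 1              # update one replica at a time
        delay: 10s
        order: start-first          # start the new replica before stopping the old
      restart_policy:
        condition: on-failure
```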
Troubleshooting
故障排查
Common Issues
常见问题
Services can't communicate
- Check network configuration
- Verify service names are correct
- Ensure services are on same network
- Check firewall rules
Volumes not persisting
- Verify named volumes are defined
- Check volume mount paths
- Ensure proper permissions
- Review Docker volume driver
Services failing health checks
- Increase start_period
- Verify health check command
- Check service logs
- Ensure dependencies are ready
Port conflicts
- Check for existing services on ports
- Use different host ports
- Review port mapping syntax
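When a host port is taken, remapping only the host side resolves the conflict; the container port stays the same. A sketch with illustrative values:

```yaml
services:
  web:
    image: nginx
    ports:
      - "8080:80"              # host 8080 -> container 80 (host:container order)
      - "127.0.0.1:8443:443"   # bind to localhost only, avoiding a host-wide conflict
```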
Build failures
- Clear build cache: docker compose build --no-cache
- Check Dockerfile syntax
- Verify build context
- Review build arguments
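When verifying the build context and arguments, it helps to write them out explicitly. The paths and arguments below are illustrative assumptions:

```yaml
services:
  app:
    build:
      context: ./services/backend   # directory sent to the builder; COPY paths must be inside it
      dockerfile: Dockerfile        # resolved relative to the context
      args:
        NODE_ENV: production        # must match an ARG declared in the Dockerfile
```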
Debugging Commands
```bash
# View detailed container information
docker compose ps -a
docker compose logs -f service-name
docker inspect container-name

# Execute commands in running containers
docker compose exec service-name sh
docker compose exec service-name env

# Check network connectivity
docker compose exec service-name ping other-service
docker compose exec service-name netstat -tulpn

# Review configuration
docker compose config
docker compose config --services
docker compose config --volumes

# Clean up resources
docker compose down -v
docker system prune -a --volumes
```
Advanced Usage
Multi-Stage Builds for Optimization
```yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: production   # Dockerfile uses multi-stage builds
```

```dockerfile
# Development stage
FROM node:18-alpine AS development
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]

# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
Environment-Specific Deployments
```bash
# Development
docker compose up

# Staging
docker compose -f compose.yaml -f compose.staging.yaml up

# Production
docker compose -f compose.yaml -f compose.prod.yaml up -d

# With environment file
docker compose --env-file .env.prod -f compose.yaml -f compose.prod.yaml up -d
```
Scaling Services
```bash
# Scale specific service
docker compose up -d --scale worker=5

# Scale multiple services
docker compose up -d --scale worker=5 --scale consumer=3
```
Conditional Service Activation with Profiles
```yaml
services:
  web:
    image: nginx
    # Always starts
  debug:
    image: debug-tools
    profiles:
      - debug   # Only starts with --profile debug
  test:
    build: .
    profiles:
      - test    # Only starts with --profile test
```

```bash
# Start with debug profile
docker compose --profile debug up

# Start with multiple profiles
docker compose --profile debug --profile test up
```
Quick Reference
Essential Commands
```bash
# Start and manage
docker compose up -d                 # Start detached
docker compose down                  # Stop and remove
docker compose restart               # Restart all
docker compose stop                  # Stop without removing

# Build and pull
docker compose build                 # Build all images
docker compose pull                  # Pull all images
docker compose build --no-cache      # Clean build

# View and monitor
docker compose ps                    # List containers
docker compose logs -f               # Follow logs
docker compose top                   # Running processes
docker compose events                # Real-time events

# Execute and debug
docker compose exec service sh       # Interactive shell
docker compose run --rm service cmd  # One-off command
```
File Structure
```
project/
├── compose.yaml              # Base configuration
├── compose.override.yaml     # Local overrides (auto-loaded)
├── compose.prod.yaml         # Production config
├── compose.staging.yaml      # Staging config
├── .env                      # Default environment
├── .env.prod                 # Production environment
├── services/
│   ├── frontend/
│   │   ├── Dockerfile
│   │   └── src/
│   ├── backend/
│   │   ├── Dockerfile
│   │   └── src/
│   └── worker/
│       ├── Dockerfile
│       └── src/
└── docker/
    ├── nginx/
    │   └── nginx.conf
    └── scripts/
        └── init.sql
```

Resources
- Docker Compose Documentation: https://docs.docker.com/compose/
- Compose File Specification: https://docs.docker.com/compose/compose-file/
- Docker Hub: https://hub.docker.com/
- Awesome Compose Examples: https://github.com/docker/awesome-compose
- Docker Compose GitHub: https://github.com/docker/compose
- Best Practices Guide: https://docs.docker.com/develop/dev-best-practices/
Skill Version: 1.0.0
Last Updated: October 2025
Skill Category: DevOps, Container Orchestration, Application Deployment
Compatible With: Docker Compose v3.8+, Docker Engine 20.10+