docker-ros2-development

Docker-Based ROS2 Development Skill


When to Use This Skill

  • Writing Dockerfiles for ROS2 workspaces with colcon builds
  • Setting up docker-compose for multi-container robotic systems
  • Debugging DDS discovery failures between containers (CycloneDDS, FastDDS)
  • Configuring GPU passthrough with NVIDIA Container Toolkit for perception nodes
  • Forwarding X11 or Wayland displays for rviz2 and rqt tools
  • Managing USB device passthrough for cameras, LiDARs, and serial devices
  • Building CI/CD pipelines with Docker-based ROS2 builds and test runners
  • Creating devcontainer configurations for VS Code with ROS2 extensions
  • Optimizing Docker layer caching for colcon workspace builds
  • Designing dev-vs-deploy container strategies with multi-stage builds

ROS2 Docker Image Hierarchy

Official OSRF images follow a layered hierarchy. Always choose the smallest base that satisfies dependencies.
┌──────────────────────────────────────────────────────────────────┐
│  ros:<distro>-desktop-full  (~3.5 GB)                            │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │  ros:<distro>-desktop  (~2.8 GB)                           │  │
│  │  ┌──────────────────────────────────────────────────────┐  │  │
│  │  │  ros:<distro>-perception  (~2.2 GB)                  │  │  │
│  │  │  ┌────────────────────────────────────────────────┐  │  │  │
│  │  │  │  ros:<distro>-ros-base  (~1.1 GB)              │  │  │  │
│  │  │  │  ┌──────────────────────────────────────────┐  │  │  │  │
│  │  │  │  │  ros:<distro>-ros-core  (~700 MB)        │  │  │  │  │
│  │  │  │  └──────────────────────────────────────────┘  │  │  │  │
│  │  │  └────────────────────────────────────────────────┘  │  │  │
│  │  └──────────────────────────────────────────────────────┘  │  │
│  └────────────────────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────────────────────┘
| Image Tag | Base OS | Size | Contents | Use Case |
|---|---|---|---|---|
| `ros:humble-ros-core` | Ubuntu 22.04 | ~700 MB | rclcpp, rclpy, rosout, launch | Minimal runtime for single nodes |
| `ros:humble-ros-base` | Ubuntu 22.04 | ~1.1 GB | ros-core + common_interfaces, rosbag2 | Most production deployments |
| `ros:humble-perception` | Ubuntu 22.04 | ~2.2 GB | ros-base + image_transport, cv_bridge, PCL | Camera/lidar perception pipelines |
| `ros:humble-desktop` | Ubuntu 22.04 | ~2.8 GB | perception + rviz2, rqt, demos | Development with GUI tools |
| `ros:jazzy-ros-core` | Ubuntu 24.04 | ~750 MB | rclcpp, rclpy, rosout, launch | Minimal runtime (Jazzy/Noble) |
| `ros:jazzy-ros-base` | Ubuntu 24.04 | ~1.2 GB | ros-core + common_interfaces, rosbag2 | Production deployments (Jazzy) |

Multi-Stage Dockerfiles for ROS2

Dev Stage

The development stage includes build tools, debuggers, and editor support for interactive use.
```dockerfile
FROM ros:humble-desktop AS dev
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake gdb python3-pip \
    python3-colcon-common-extensions python3-rosdep \
    ros-humble-ament-lint-auto ros-humble-ament-cmake-pytest \
    ccache \
    && rm -rf /var/lib/apt/lists/*
ENV CCACHE_DIR=/ccache
ENV CC="ccache gcc"
ENV CXX="ccache g++"
```

Build Stage

Copies only `src/` and `package.xml` files to maximize cache hits during dependency resolution.

```dockerfile
FROM ros:humble-ros-base AS build
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3-colcon-common-extensions python3-rosdep \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /ros2_ws
```

```dockerfile
# Copy package manifests first for dependency caching
COPY src/my_pkg/package.xml src/my_pkg/package.xml
RUN . /opt/ros/humble/setup.sh && apt-get update && \
    rosdep install --from-paths src --ignore-src -r -y && \
    rm -rf /var/lib/apt/lists/*

# Source changes invalidate only this layer and below
COPY src/ src/
RUN . /opt/ros/humble/setup.sh && \
    colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release \
    --event-handlers console_direct+
```

Runtime Stage

Contains only the built install space and runtime dependencies. No compilers, no source code.
```dockerfile
FROM ros:humble-ros-core AS runtime
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3-yaml ros-humble-rmw-cyclonedds-cpp \
    && rm -rf /var/lib/apt/lists/*
COPY --from=build /ros2_ws/install /ros2_ws/install
RUN groupadd -r rosuser && useradd -r -g rosuser -m rosuser
USER rosuser
COPY ros_entrypoint.sh /ros_entrypoint.sh
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]
```

Full Multi-Stage Dockerfile

```dockerfile
# syntax=docker/dockerfile:1
#
# Usage:
#   docker build --target dev     -t my_robot:dev .
#   docker build --target runtime -t my_robot:latest .

ARG ROS_DISTRO=humble
ARG BASE_IMAGE=ros:${ROS_DISTRO}-ros-base

# Stage 1: Dependency base — install apt and rosdep deps
FROM ${BASE_IMAGE} AS deps
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3-colcon-common-extensions python3-rosdep \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /ros2_ws

# Copy only package.xml files for rosdep resolution (maximizes cache reuse)
COPY src/my_robot_bringup/package.xml src/my_robot_bringup/package.xml
COPY src/my_robot_perception/package.xml src/my_robot_perception/package.xml
COPY src/my_robot_msgs/package.xml src/my_robot_msgs/package.xml
COPY src/my_robot_navigation/package.xml src/my_robot_navigation/package.xml
RUN . /opt/ros/${ROS_DISTRO}/setup.sh && \
    apt-get update && \
    rosdep install --from-paths src --ignore-src -r -y && \
    rm -rf /var/lib/apt/lists/*

# Stage 2: Development — full dev environment
FROM deps AS dev
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential gdb valgrind ccache python3-pip python3-pytest \
    ros-${ROS_DISTRO}-ament-lint-auto \
    ros-${ROS_DISTRO}-launch-testing-ament-cmake \
    ros-${ROS_DISTRO}-rviz2 ros-${ROS_DISTRO}-rqt-graph \
    && rm -rf /var/lib/apt/lists/*
ENV CCACHE_DIR=/ccache CC="ccache gcc" CXX="ccache g++"
COPY src/ src/
COPY ros_entrypoint.sh /ros_entrypoint.sh
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]

# Stage 3: Build — compile workspace
FROM deps AS build
COPY src/ src/
RUN . /opt/ros/${ROS_DISTRO}/setup.sh && \
    colcon build \
      --cmake-args -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF \
      --event-handlers console_direct+ \
      --parallel-workers $(nproc)

# Stage 4: Runtime — minimal production image
FROM ros:${ROS_DISTRO}-ros-core AS runtime
ARG ROS_DISTRO=humble
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3-yaml ros-${ROS_DISTRO}-rmw-cyclonedds-cpp \
    && rm -rf /var/lib/apt/lists/*
COPY --from=build /ros2_ws/install /ros2_ws/install
RUN groupadd -r rosuser && useradd -r -g rosuser -m -s /bin/bash rosuser
USER rosuser
ENV RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
COPY ros_entrypoint.sh /ros_entrypoint.sh
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["ros2", "launch", "my_robot_bringup", "robot.launch.py"]
```

The entrypoint script both dev and runtime stages use:

```bash
#!/bin/bash
set -e
source /opt/ros/${ROS_DISTRO}/setup.bash
if [ -f /ros2_ws/install/setup.bash ]; then
    source /ros2_ws/install/setup.bash
fi
exec "$@"
```


Docker Compose for Multi-Container ROS2 Systems

Basic Multi-Container Setup

Each ROS2 subsystem runs in its own container with process isolation, independent scaling, and per-service resource limits.
```yaml
# docker-compose.yml
version: "3.8"

x-ros-common: &ros-common
  environment:
    - ROS_DOMAIN_ID=${ROS_DOMAIN_ID:-0}
    - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
    - CYCLONEDDS_URI=file:///cyclonedds.xml
  volumes:
    - ./config/cyclonedds.xml:/cyclonedds.xml:ro
    - /dev/shm:/dev/shm
  network_mode: host
  restart: unless-stopped

services:
  rosbridge:
    <<: *ros-common
    image: my_robot:latest
    command: ros2 launch rosbridge_server rosbridge_websocket_launch.xml port:=9090

  perception:
    <<: *ros-common
    image: my_robot_perception:latest
    command: ros2 launch my_robot_perception perception.launch.py
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    devices:
      - /dev/video0:/dev/video0    # USB camera passthrough

  navigation:
    <<: *ros-common
    image: my_robot_navigation:latest
    command: >
      ros2 launch my_robot_navigation navigation.launch.py
      use_sim_time:=false map:=/maps/warehouse.yaml
    volumes:
      - ./maps:/maps:ro

  driver:
    <<: *ros-common
    image: my_robot_driver:latest
    command: ros2 launch my_robot_driver driver.launch.py
    devices:
      - /dev/ttyUSB0:/dev/ttyUSB0  # Serial motor controller
      - /dev/ttyACM0:/dev/ttyACM0  # IMU over USB-serial
    group_add:
      - dialout
```

Service Dependencies with Health Checks

```yaml
services:
  driver:
    <<: *ros-common
    image: my_robot_driver:latest
    healthcheck:
      test: ["CMD", "bash", "-c",
             "source /opt/ros/humble/setup.bash && ros2 topic list | grep -q /joint_states"]
      interval: 5s
      timeout: 10s
      retries: 5
      start_period: 15s

  navigation:
    <<: *ros-common
    image: my_robot_navigation:latest
    depends_on:
      driver:
        condition: service_healthy         # Wait for driver topics

  perception:
    <<: *ros-common
    image: my_robot_perception:latest
    depends_on:
      driver:
        condition: service_healthy         # Camera driver must be ready
```

Profiles for Dev vs Deploy

```yaml
services:
  driver:
    <<: *ros-common
    image: my_robot_driver:latest
    command: ros2 launch my_robot_driver driver.launch.py

  rviz:
    <<: *ros-common
    profiles: ["dev"]
    image: my_robot:dev
    command: ros2 run rviz2 rviz2 -d /rviz/config.rviz
    environment:
      - DISPLAY=${DISPLAY}
      - QT_X11_NO_MITSHM=1
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:rw

  rosbag_record:
    <<: *ros-common
    profiles: ["dev"]
    image: my_robot:dev
    command: ros2 bag record -a --storage sqlite3 --max-bag-duration 300 -o /bags/session
    volumes:
      - ./bags:/bags

  watchdog:
    <<: *ros-common
    profiles: ["deploy"]
    image: my_robot:latest
    command: ros2 launch my_robot_bringup watchdog.launch.py
    restart: always
```

```bash
docker compose --profile dev up          # Dev tools (rviz, rosbag)
docker compose --profile deploy up -d    # Production (watchdog, no GUI)
```

DDS Discovery Across Containers

CycloneDDS XML Config for Unicast Across Containers

When containers use bridge networking (no multicast), configure explicit unicast peer lists.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- cyclonedds.xml -->
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain>
    <General>
      <Interfaces>
        <NetworkInterface autodetermine="true" priority="default"/>
      </Interfaces>
      <AllowMulticast>false</AllowMulticast>
    </General>
    <Discovery>
      <!-- Peer list uses docker-compose service names as hostnames -->
      <Peers>
        <Peer address="perception"/>
        <Peer address="navigation"/>
        <Peer address="driver"/>
        <Peer address="rosbridge"/>
      </Peers>
      <ParticipantIndex>auto</ParticipantIndex>
      <MaxAutoParticipantIndex>120</MaxAutoParticipantIndex>
    </Discovery>
    <Internal>
      <SocketReceiveBufferSize min="10MB"/>
    </Internal>
  </Domain>
</CycloneDDS>
```

FastDDS XML Config

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- fastdds.xml -->
<dds xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
  <profiles>
    <participant profile_name="docker_participant" is_default_profile="true">
      <rtps>
        <builtin>
          <discovery_config>
            <discoveryProtocol>SIMPLE</discoveryProtocol>
            <leaseDuration><sec>10</sec></leaseDuration>
          </discovery_config>
          <initialPeersList>
            <locator>
              <udpv4><address>perception</address><port>7412</port></udpv4>
            </locator>
            <locator>
              <udpv4><address>navigation</address><port>7412</port></udpv4>
            </locator>
            <locator>
              <udpv4><address>driver</address><port>7412</port></udpv4>
            </locator>
          </initialPeersList>
        </builtin>
      </rtps>
    </participant>
  </profiles>
</dds>
```

Mount and activate in compose:

CycloneDDS

```yaml
environment:
  - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
  - CYCLONEDDS_URI=file:///cyclonedds.xml
volumes:
  - ./config/cyclonedds.xml:/cyclonedds.xml:ro
```

FastDDS

```yaml
environment:
  - RMW_IMPLEMENTATION=rmw_fastrtps_cpp
  - FASTRTPS_DEFAULT_PROFILES_FILE=/fastdds.xml
volumes:
  - ./config/fastdds.xml:/fastdds.xml:ro
```
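The port `7412` in the FastDDS peer locators is not arbitrary: DDSI-RTPS derives its well-known UDP ports from the domain ID and participant index (base PB=7400, domain gain DG=250, participant gain PG=2, offsets d0=0, d1=10, d2=1, d3=11 per the DDSI-RTPS specification). A small helper (the function name is ours) to compute which ports a given domain and participant will use:

```shell
# Compute DDSI-RTPS well-known UDP ports for a domain ID and participant index.
# Formulas: discovery multicast = PB + DG*d + d0
#           discovery unicast   = PB + DG*d + d1 + PG*p
#           user multicast      = PB + DG*d + d2
#           user unicast        = PB + DG*d + d3 + PG*p
rtps_ports() {
    d=$1; p=$2
    PB=7400; DG=250; PG=2; D0=0; D1=10; D2=1; D3=11
    echo "discovery-multicast: $((PB + DG*d + D0))"
    echo "discovery-unicast: $((PB + DG*d + D1 + PG*p))"
    echo "user-multicast: $((PB + DG*d + D2))"
    echo "user-unicast: $((PB + DG*d + D3 + PG*p))"
}

# Domain 0, participant index 1 -> discovery unicast 7400 + 10 + 2 = 7412
rtps_ports 0 1
```

A plausible reading of the config above: `7412` targets the discovery unicast port of participant index 1 on domain 0. Raise the port (or list several) if a container hosts more participants.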

Shared Memory Transport in Docker

DDS shared memory (zero-copy) requires `/dev/shm` sharing between containers. This provides the highest throughput for large messages (images, point clouds).

```yaml
services:
  perception:
    shm_size: "512m"                    # Default 64 MB is too small for image topics
    volumes:
      - /dev/shm:/dev/shm              # Share host shm for inter-container zero-copy
```

```xml
<!-- Enable shared memory in CycloneDDS -->
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain>
    <SharedMemory>
      <Enable>true</Enable>
    </SharedMemory>
  </Domain>
</CycloneDDS>
```
Constraints: all communicating containers must share `/dev/shm` or use `ipc: host`. Use `--ipc=shareable` on one container and `--ipc=container:<name>` on others for scoped sharing.

Networking Modes and ROS2 Implications

网络模式与ROS2的影响

Host Networking

主机网络

```yaml
services:
  my_node:
    network_mode: host      # Shares host network namespace; DDS multicast works natively
```

Bridge Networking (Default)

```yaml
services:
  my_node:
    networks: [ros_net]
networks:
  ros_net:
    driver: bridge          # DDS multicast blocked; requires unicast peer config
```

Macvlan Networking

```yaml
networks:
  ros_macvlan:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
services:
  my_node:
    networks:
      ros_macvlan:
        ipv4_address: 192.168.1.50   # Real LAN IP; DDS multicast works natively
```

Decision Table

| Factor | Host | Bridge | Macvlan |
|---|---|---|---|
| DDS discovery | Works natively | Needs unicast peers | Works natively |
| Network isolation | None | Full isolation | LAN-level isolation |
| Port conflicts | Yes (host ports) | No (mapped ports) | No (unique IPs) |
| Performance | Native | Slight overhead | Near-native |
| Multi-host support | No | With overlay networks | Yes (same LAN) |
| When to use | Dev, single host | CI/CD, multi-tenant | Multi-robot on LAN |

GPU Passthrough for Perception

感知任务的GPU透传

NVIDIA Container Toolkit Setup

NVIDIA Container Toolkit设置

```bash
# Install NVIDIA Container Toolkit on the host
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Compose Config with deploy.resources

带deploy.resources的Compose配置

```yaml
services:
  perception:
    image: my_robot_perception:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1                    # Number of GPUs (or "all")
              capabilities: [gpu]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
    shm_size: "1g"                        # Large shm for GPU<->CPU transfers
```

For Dockerfiles that need CUDA, start from the NVIDIA base image and install ROS2 on top:

```dockerfile
FROM nvidia/cuda:12.2.0-cudnn8-runtime-ubuntu22.04 AS perception-base
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl gnupg2 lsb-release \
    && curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key \
       -o /usr/share/keyrings/ros-archive-keyring.gpg \
    && echo "deb [arch=$(dpkg --print-architecture) \
       signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] \
       http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" \
       > /etc/apt/sources.list.d/ros2.list \
    && apt-get update && apt-get install -y --no-install-recommends \
       ros-humble-ros-base ros-humble-cv-bridge ros-humble-image-transport \
    && rm -rf /var/lib/apt/lists/*
```

Verification

验证

```bash
docker compose exec perception bash -c '
  nvidia-smi
  python3 -c "import torch; print(f\"CUDA available: {torch.cuda.is_available()}\")"
'
```

Display Forwarding

显示转发

X11 Forwarding

X11转发

```yaml
services:
  rviz:
    image: my_robot:dev
    command: ros2 run rviz2 rviz2
    environment:
      - DISPLAY=${DISPLAY:-:0}                   # Forward host display
      - QT_X11_NO_MITSHM=1                       # Disable MIT-SHM (crashes in Docker)
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:rw          # X11 socket
      - ${HOME}/.Xauthority:/root/.Xauthority:ro  # Auth cookie
    network_mode: host
```

```bash
# Allow local Docker containers to access the X server
xhost +local:docker

# More secure variant: grant access only to your own user
xhost +SI:localuser:$(whoami)
```

Wayland Forwarding

Wayland转发

```yaml
services:
  rviz:
    image: my_robot:dev
    command: ros2 run rviz2 rviz2
    environment:
      - WAYLAND_DISPLAY=${WAYLAND_DISPLAY:-wayland-0}
      - XDG_RUNTIME_DIR=/run/user/1000
      - QT_QPA_PLATFORM=wayland
    volumes:
      - ${XDG_RUNTIME_DIR}/${WAYLAND_DISPLAY}:/run/user/1000/${WAYLAND_DISPLAY}:rw
```
yaml
services:
  rviz:
    image: my_robot:dev
    command: ros2 run rviz2 rviz2
    environment:
      - WAYLAND_DISPLAY=${WAYLAND_DISPLAY:-wayland-0}
      - XDG_RUNTIME_DIR=/run/user/1000
      - QT_QPA_PLATFORM=wayland
    volumes:
      - ${XDG_RUNTIME_DIR}/${WAYLAND_DISPLAY}:/run/user/1000/${WAYLAND_DISPLAY}:rw
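Whether the container should ask Qt for Wayland, X11, or no display at all depends on what the host actually exposes. A minimal entrypoint helper sketch that picks QT_QPA_PLATFORM from the standard display variables (the wayland → xcb → offscreen fallback order here is an assumption, not a ROS convention):

```shell
#!/usr/bin/env bash
# Sketch: choose Qt's platform plugin from the display server the host
# actually exposes. The fallback order is an assumption; adjust as needed.
detect_qt_platform() {
  if [ -n "${WAYLAND_DISPLAY:-}" ]; then
    echo "wayland"       # host runs a Wayland compositor
  elif [ -n "${DISPLAY:-}" ]; then
    echo "xcb"           # X11 (or XWayland) socket available
  else
    echo "offscreen"     # headless: let Qt render without a display
  fi
}

export QT_QPA_PLATFORM="$(detect_qt_platform)"
```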

Headless Rendering

无头渲染

For CI/CD or remote machines without a physical display:

对于CI/CD或无物理显示的远程机器:

Run rviz2 headless with Xvfb for screenshot capture or testing

使用Xvfb无头运行rviz2以捕获截图或测试

bash
docker run --rm my_robot:dev bash -c '
  apt-get update && apt-get install -y xvfb mesa-utils
  Xvfb :99 -screen 0 1920x1080x24 &
  export DISPLAY=:99
  source /opt/ros/humble/setup.bash
  ros2 run rviz2 rviz2 -d /config/test.rviz --screenshot /output/frame.png
'
bash
docker run --rm my_robot:dev bash -c '
  apt-get update && apt-get install -y xvfb mesa-utils
  Xvfb :99 -screen 0 1920x1080x24 &
  export DISPLAY=:99
  source /opt/ros/humble/setup.bash
  ros2 run rviz2 rviz2 -d /config/test.rviz --screenshot /output/frame.png
'

Volume Mounts and Workspace Overlays

卷挂载与工作区覆盖

Source Mounts for Dev

开发用源码挂载

Mount only src/ during development. Let colcon write build/, install/, and log/ inside named volumes to avoid bind mount performance issues.
yaml
开发期间仅挂载src/。让colcon在命名卷中写入build/、install/和log/,以避免绑定挂载的性能问题。
yaml

BAD: mounting entire workspace — build artifacts on bind mount are slow

错误:挂载整个工作区 — 绑定挂载上的构建产物速度慢

volumes:
  - ./my_ros2_ws:/ros2_ws

volumes:
  - ./my_ros2_ws:/ros2_ws

GOOD: mount only source, use named volumes for build artifacts

正确:仅挂载源码,对构建产物使用命名卷

services:
  dev:
    image: my_robot:dev
    volumes:
      - ./src:/ros2_ws/src:rw           # Source code (bind mount)
      - build_vol:/ros2_ws/build        # Build artifacts (named volume)
      - install_vol:/ros2_ws/install    # Install space (named volume)
      - log_vol:/ros2_ws/log            # Log output (named volume)
    working_dir: /ros2_ws

volumes:
  build_vol:
  install_vol:
  log_vol:
services:
  dev:
    image: my_robot:dev
    volumes:
      - ./src:/ros2_ws/src:rw           # 源代码(绑定挂载)
      - build_vol:/ros2_ws/build        # 构建产物(命名卷)
      - install_vol:/ros2_ws/install    # 安装空间(命名卷)
      - log_vol:/ros2_ws/log            # 日志输出(命名卷)
    working_dir: /ros2_ws

volumes:
  build_vol:
  install_vol:
  log_vol:

ccache Caching

ccache缓存

Persist ccache across container rebuilds for faster C++ compilation:
yaml
services:
  dev:
    volumes:
      - ccache_vol:/ccache
    environment:
      - CCACHE_DIR=/ccache
      - CCACHE_MAXSIZE=2G
      - CC=ccache gcc
      - CXX=ccache g++
volumes:
  ccache_vol:
在容器重建之间持久化ccache,以加快C++编译:
yaml
services:
  dev:
    volumes:
      - ccache_vol:/ccache
    environment:
      - CCACHE_DIR=/ccache
      - CCACHE_MAXSIZE=2G
      - CC=ccache gcc
      - CXX=ccache g++
volumes:
  ccache_vol:

ROS Workspace Overlay in Docker

Docker中的ROS工作区覆盖

Keep upstream packages cached and only rebuild custom packages:
dockerfile
缓存上游包,仅重新构建自定义包:
dockerfile

Stage 1: upstream dependencies (rarely changes)

阶段1:上游依赖(很少更改)

FROM ros:humble-ros-base AS upstream
RUN apt-get update && apt-get install -y --no-install-recommends \
       ros-humble-nav2-bringup ros-humble-slam-toolbox \
       ros-humble-robot-localization \
    && rm -rf /var/lib/apt/lists/*
FROM ros:humble-ros-base AS upstream
RUN apt-get update && apt-get install -y --no-install-recommends \
       ros-humble-nav2-bringup ros-humble-slam-toolbox \
       ros-humble-robot-localization \
    && rm -rf /var/lib/apt/lists/*

Stage 2: custom packages overlay on top

阶段2:自定义包覆盖在上游之上

FROM upstream AS workspace
WORKDIR /ros2_ws
COPY src/ src/
RUN . /opt/ros/humble/setup.sh && colcon build --symlink-install
FROM upstream AS workspace
WORKDIR /ros2_ws
COPY src/ src/
RUN . /opt/ros/humble/setup.sh && colcon build --symlink-install

install/setup.bash automatically sources /opt/ros/humble as underlay

install/setup.bash会自动将/opt/ros/humble作为底层工作区


USB Device Passthrough

USB设备透传

Cameras and Serial Devices

摄像头与串口设备

yaml
services:
  camera_driver:
    image: my_robot_driver:latest
    devices:
      - /dev/video0:/dev/video0        # USB camera (V4L2)
      - /dev/video1:/dev/video1
    group_add:
      - video                          # Access /dev/videoN without root

  motor_driver:
    image: my_robot_driver:latest
    devices:
      - /dev/ttyUSB0:/dev/ttyUSB0      # USB-serial motor controller
      - /dev/ttyACM0:/dev/ttyACM0      # Arduino/Teensy
    group_add:
      - dialout                        # Access serial ports without root
yaml
services:
  camera_driver:
    image: my_robot_driver:latest
    devices:
      - /dev/video0:/dev/video0        # USB摄像头(V4L2)
      - /dev/video1:/dev/video1
    group_add:
      - video                          # 无需root即可访问/dev/videoN

  motor_driver:
    image: my_robot_driver:latest
    devices:
      - /dev/ttyUSB0:/dev/ttyUSB0      # USB转串口电机控制器
      - /dev/ttyACM0:/dev/ttyACM0      # Arduino/Teensy
    group_add:
      - dialout                        # 无需root即可访问串口

Udev Rules Inside Containers

容器内的Udev规则

Create stable device symlinks on the host so container paths remain consistent regardless of USB enumeration order.
bash
在主机上创建稳定的设备符号链接,使容器路径在USB枚举顺序变化时保持一致。
bash

/etc/udev/rules.d/99-robot-devices.rules (host-side)

/etc/udev/rules.d/99-robot-devices.rules(主机端)

SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="robot/motor_controller"
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="robot/lidar"
SUBSYSTEM=="video4linux", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="0825", SYMLINK+="robot/camera"

bash
sudo udevadm control --reload-rules && sudo udevadm trigger

yaml
services:
  driver:
    devices:
      - /dev/robot/motor_controller:/dev/ttyMOTOR   # Stable symlink
      - /dev/robot/lidar:/dev/ttyLIDAR
      - /dev/robot/camera:/dev/video0
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="robot/motor_controller"
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="robot/lidar"
SUBSYSTEM=="video4linux", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="0825", SYMLINK+="robot/camera"

bash
sudo udevadm control --reload-rules && sudo udevadm trigger

yaml
services:
  driver:
    devices:
      - /dev/robot/motor_controller:/dev/ttyMOTOR   # 稳定符号链接
      - /dev/robot/lidar:/dev/ttyLIDAR
      - /dev/robot/camera:/dev/video0
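The idVendor/idProduct values for a rule come from `udevadm info -a -n <device>` on the host. A sketch that extracts the IDs from that output and emits a rule line; the heredoc-style sample below stands in for the real command's output, and the symlink name is illustrative:

```shell
#!/usr/bin/env bash
# Sketch: turn `udevadm info -a -n /dev/ttyUSB0` output into a udev rule.
# The sample text stands in for real udevadm output on a host with hardware.
sample='    ATTRS{idVendor}=="0403"
    ATTRS{idProduct}=="6001"'

# Pull the first vendor/product ID pair out of the attribute dump.
id_vendor=$(printf '%s\n' "$sample"  | sed -n 's/.*idVendor}=="\([0-9a-fA-F]*\)".*/\1/p'  | head -n1)
id_product=$(printf '%s\n' "$sample" | sed -n 's/.*idProduct}=="\([0-9a-fA-F]*\)".*/\1/p' | head -n1)

rule=$(printf 'SUBSYSTEM=="tty", ATTRS{idVendor}=="%s", ATTRS{idProduct}=="%s", SYMLINK+="robot/motor_controller"' \
  "$id_vendor" "$id_product")
echo "$rule"
```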

Dynamic Device Attachment

动态设备挂载

For devices plugged in after the container starts:
yaml
services:
  driver:
    # Option 1: privileged (use only when necessary)
    privileged: true
    volumes:
      - /dev:/dev

    # Option 2: cgroup device rules (more secure)
    # device_cgroup_rules:
    #   - 'c 188:* rmw'                  # USB-serial (major 188)
    #   - 'c 81:* rmw'                   # Video devices (major 81)
对于容器启动后插入的设备:
yaml
services:
  driver:
    # 选项1:特权模式(仅在必要时使用)
    privileged: true
    volumes:
      - /dev:/dev

    # 选项2:cgroup设备规则(更安全)
    # device_cgroup_rules:
    #   - 'c 188:* rmw'                  # USB转串口(主设备号188)
    #   - 'c 81:* rmw'                   # 视频设备(主设备号81)

Dev Container Configuration

开发容器配置

json
// .devcontainer/devcontainer.json
{
  "name": "ROS2 Humble Dev",
  "build": {
    "dockerfile": "../Dockerfile",
    "target": "dev",
    "args": { "ROS_DISTRO": "humble" }
  },
  "runArgs": [
    "--network=host", "--ipc=host", "--pid=host",
    "--privileged", "--gpus", "all",
    "-e", "DISPLAY=${localEnv:DISPLAY}",
    "-e", "QT_X11_NO_MITSHM=1",
    "-v", "/tmp/.X11-unix:/tmp/.X11-unix:rw",
    "-v", "/dev:/dev"
  ],
  "workspaceMount": "source=${localWorkspaceFolder},target=/ros2_ws/src,type=bind",
  "workspaceFolder": "/ros2_ws",
  "mounts": [
    "source=ros2-build-vol,target=/ros2_ws/build,type=volume",
    "source=ros2-install-vol,target=/ros2_ws/install,type=volume",
    "source=ros2-log-vol,target=/ros2_ws/log,type=volume",
    "source=ros2-ccache-vol,target=/ccache,type=volume"
  ],
  "containerEnv": {
    "ROS_DISTRO": "humble",
    "RMW_IMPLEMENTATION": "rmw_cyclonedds_cpp",
    "CCACHE_DIR": "/ccache",
    "RCUTILS_COLORIZED_OUTPUT": "1"
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-iot.vscode-ros",
        "ms-vscode.cpptools",
        "ms-python.python",
        "ms-vscode.cmake-tools",
        "smilerobotics.urdf",
        "redhat.vscode-xml",
        "redhat.vscode-yaml"
      ],
      "settings": {
        "ros.distro": "humble",
        "python.defaultInterpreterPath": "/usr/bin/python3",
        "C_Cpp.default.compileCommands": "/ros2_ws/build/compile_commands.json",
        "cmake.configureOnOpen": false
      }
    }
  },
  "postCreateCommand": "sudo apt-get update && rosdep update && rosdep install --from-paths src --ignore-src -r -y",
  "remoteUser": "rosuser"
}
json
// .devcontainer/devcontainer.json
{
  "name": "ROS2 Humble开发环境",
  "build": {
    "dockerfile": "../Dockerfile",
    "target": "dev",
    "args": { "ROS_DISTRO": "humble" }
  },
  "runArgs": [
    "--network=host", "--ipc=host", "--pid=host",
    "--privileged", "--gpus", "all",
    "-e", "DISPLAY=${localEnv:DISPLAY}",
    "-e", "QT_X11_NO_MITSHM=1",
    "-v", "/tmp/.X11-unix:/tmp/.X11-unix:rw",
    "-v", "/dev:/dev"
  ],
  "workspaceMount": "source=${localWorkspaceFolder},target=/ros2_ws/src,type=bind",
  "workspaceFolder": "/ros2_ws",
  "mounts": [
    "source=ros2-build-vol,target=/ros2_ws/build,type=volume",
    "source=ros2-install-vol,target=/ros2_ws/install,type=volume",
    "source=ros2-log-vol,target=/ros2_ws/log,type=volume",
    "source=ros2-ccache-vol,target=/ccache,type=volume"
  ],
  "containerEnv": {
    "ROS_DISTRO": "humble",
    "RMW_IMPLEMENTATION": "rmw_cyclonedds_cpp",
    "CCACHE_DIR": "/ccache",
    "RCUTILS_COLORIZED_OUTPUT": "1"
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-iot.vscode-ros",
        "ms-vscode.cpptools",
        "ms-python.python",
        "ms-vscode.cmake-tools",
        "smilerobotics.urdf",
        "redhat.vscode-xml",
        "redhat.vscode-yaml"
      ],
      "settings": {
        "ros.distro": "humble",
        "python.defaultInterpreterPath": "/usr/bin/python3",
        "C_Cpp.default.compileCommands": "/ros2_ws/build/compile_commands.json",
        "cmake.configureOnOpen": false
      }
    }
  },
  "postCreateCommand": "sudo apt-get update && rosdep update && rosdep install --from-paths src --ignore-src -r -y",
  "remoteUser": "rosuser"
}

CI/CD with Docker

基于Docker的CI/CD

GitHub Actions Workflow

GitHub Actions工作流

yaml
yaml

.github/workflows/ros2-docker-ci.yml

.github/workflows/ros2-docker-ci.yml

name: ROS2 Docker CI

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-test:
    runs-on: ubuntu-22.04
    strategy:
      fail-fast: false
      matrix:
        ros_distro: [humble, jazzy]
        include:
          - ros_distro: humble
            ubuntu: "22.04"
          - ros_distro: jazzy
            ubuntu: "24.04"
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Cache Docker layers
        uses: actions/cache@v4
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ matrix.ros_distro }}-${{ hashFiles('src/**/package.xml') }}
          restore-keys: ${{ runner.os }}-buildx-${{ matrix.ros_distro }}-

      - name: Build and test
        run: |
          docker build --target dev \
            --build-arg ROS_DISTRO=${{ matrix.ros_distro }} \
            -t test-image:${{ matrix.ros_distro }} .
          docker run --rm test-image:${{ matrix.ros_distro }} bash -c '
            source /opt/ros/${{ matrix.ros_distro }}/setup.bash &&
            cd /ros2_ws &&
            colcon build --cmake-args -DBUILD_TESTING=ON &&
            colcon test --event-handlers console_direct+ &&
            colcon test-result --verbose'

      - name: Push runtime image
        if: github.ref == 'refs/heads/main'
        uses: docker/build-push-action@v5
        with:
          context: .
          target: runtime
          build-args: ROS_DISTRO=${{ matrix.ros_distro }}
          tags: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ matrix.ros_distro }}-latest
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ matrix.ros_distro }}-${{ github.sha }}
          push: true
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max

      - name: Rotate cache
        run: rm -rf /tmp/.buildx-cache && mv /tmp/.buildx-cache-new /tmp/.buildx-cache
name: ROS2 Docker CI

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-test:
    runs-on: ubuntu-22.04
    strategy:
      fail-fast: false
      matrix:
        ros_distro: [humble, jazzy]
        include:
          - ros_distro: humble
            ubuntu: "22.04"
          - ros_distro: jazzy
            ubuntu: "24.04"
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: 缓存Docker层
        uses: actions/cache@v4
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ matrix.ros_distro }}-${{ hashFiles('src/**/package.xml') }}
          restore-keys: ${{ runner.os }}-buildx-${{ matrix.ros_distro }}-

      - name: 构建并测试
        run: |
          docker build --target dev \
            --build-arg ROS_DISTRO=${{ matrix.ros_distro }} \
            -t test-image:${{ matrix.ros_distro }} .
          docker run --rm test-image:${{ matrix.ros_distro }} bash -c '
            source /opt/ros/${{ matrix.ros_distro }}/setup.bash &&
            cd /ros2_ws &&
            colcon build --cmake-args -DBUILD_TESTING=ON &&
            colcon test --event-handlers console_direct+ &&
            colcon test-result --verbose'

      - name: 推送运行时镜像
        if: github.ref == 'refs/heads/main'
        uses: docker/build-push-action@v5
        with:
          context: .
          target: runtime
          build-args: ROS_DISTRO=${{ matrix.ros_distro }}
          tags: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ matrix.ros_distro }}-latest
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ matrix.ros_distro }}-${{ github.sha }}
          push: true
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max

      - name: 轮换缓存
        run: rm -rf /tmp/.buildx-cache && mv /tmp/.buildx-cache-new /tmp/.buildx-cache

Layer Caching

层缓存

Order Dockerfile instructions from least-frequently-changed to most-frequently-changed:
1. Base image         (ros:humble-ros-base)     — changes on distro upgrade
2. System apt packages                           — changes on new dependency
3. rosdep install     (from package.xml)         — changes on new ROS dep
4. COPY src/ src/                                — changes on every code edit
5. colcon build                                  — rebuilds on source change
将Dockerfile指令按从最少更改到最多更改的顺序排列:
1. 基础镜像         (ros:humble-ros-base)     — 发行版升级时更改
2. 系统apt包                           — 添加新依赖时更改
3. rosdep安装     (来自package.xml)         — 添加新ROS依赖时更改
4. COPY src/ src/                                — 每次代码编辑时更改
5. colcon构建                                  — 源代码更改时重新构建
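The five tiers above map onto a single Dockerfile roughly as follows. This is a sketch: the package name my_pkg, the ccache install, and the exact rosdep invocation are illustrative, not prescribed by the text.

```dockerfile
# 1. Base image — changes only on distro upgrade
FROM ros:humble-ros-base

# 2. System packages — changes when a new tool is needed
RUN apt-get update && apt-get install -y --no-install-recommends \
       python3-colcon-common-extensions ccache \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /ros2_ws

# 3. rosdep dependencies — invalidated only by package.xml edits
COPY src/my_pkg/package.xml src/my_pkg/package.xml
RUN . /opt/ros/humble/setup.sh \
    && apt-get update && rosdep update \
    && rosdep install --from-paths src --ignore-src -r -y \
    && rm -rf /var/lib/apt/lists/*

# 4. Source — invalidated by every code edit
COPY src/ src/

# 5. Build — reruns whenever the source layer changes
RUN . /opt/ros/humble/setup.sh && colcon build
```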

Build Matrix Across Distros

跨发行版的构建矩阵

yaml
strategy:
  matrix:
    ros_distro: [humble, iron, jazzy, rolling]
    rmw: [rmw_cyclonedds_cpp, rmw_fastrtps_cpp]
    exclude:
      - ros_distro: iron           # Iron EOL — skip
        rmw: rmw_fastrtps_cpp
yaml
strategy:
  matrix:
    ros_distro: [humble, iron, jazzy, rolling]
    rmw: [rmw_cyclonedds_cpp, rmw_fastrtps_cpp]
    exclude:
      - ros_distro: iron           # Iron已终止支持 — 跳过
        rmw: rmw_fastrtps_cpp

Docker ROS2 Anti-Patterns

Docker ROS2反模式

1. Running Everything in One Container

1. 所有内容运行在一个容器中

Problem: Putting perception, navigation, planning, and drivers in a single container defeats the purpose of containerization. A crash in one subsystem takes down everything.
Fix: Split into one service per subsystem. Use docker-compose to orchestrate.
yaml
问题: 将感知、导航、规划和驱动放在单个容器中违背了容器化的目的。一个子系统崩溃会导致所有服务停止。
修复: 拆分为每个子系统一个服务。使用docker-compose进行编排。
yaml

BAD: monolithic container

错误:单体容器

services:
  robot:
    image: my_robot:latest
    command: ros2 launch my_robot everything.launch.py

services:
  robot:
    image: my_robot:latest
    command: ros2 launch my_robot everything.launch.py

GOOD: one container per subsystem

正确:每个子系统一个容器

services:
  perception:
    image: my_robot_perception:latest
    command: ros2 launch my_robot_perception perception.launch.py
  navigation:
    image: my_robot_navigation:latest
    command: ros2 launch my_robot_navigation navigation.launch.py
  driver:
    image: my_robot_driver:latest
    command: ros2 launch my_robot_driver driver.launch.py

services:
  perception:
    image: my_robot_perception:latest
    command: ros2 launch my_robot_perception perception.launch.py
  navigation:
    image: my_robot_navigation:latest
    command: ros2 launch my_robot_navigation navigation.launch.py
  driver:
    image: my_robot_driver:latest
    command: ros2 launch my_robot_driver driver.launch.py

2. Using Bridge Networking Without DDS Config

2. 使用桥接网络但未配置DDS

Problem: DDS uses multicast for discovery by default. Docker bridge networks do not forward multicast. Nodes in different containers will not discover each other.
Fix: Use network_mode: host or configure DDS unicast peers explicitly.
yaml
问题: DDS默认使用多播进行发现。Docker桥接网络不转发多播。不同容器中的节点无法互相发现。
修复: 使用network_mode: host,或显式配置DDS单播对等节点。
yaml

BAD: bridge network with no DDS config

错误:桥接网络但无DDS配置

services:
  node_a:
    networks: [ros_net]
  node_b:
    networks: [ros_net]

services:
  node_a:
    networks: [ros_net]
  node_b:
    networks: [ros_net]

GOOD: host networking (simplest)

正确:主机网络(最简单)

services:
  node_a:
    network_mode: host
  node_b:
    network_mode: host

services:
  node_a:
    network_mode: host
  node_b:
    network_mode: host

GOOD: bridge with CycloneDDS unicast peers

正确:桥接网络加CycloneDDS单播对等节点

services:
  node_a:
    networks: [ros_net]
    environment:
      - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
      - CYCLONEDDS_URI=file:///cyclonedds.xml
    volumes:
      - ./cyclonedds.xml:/cyclonedds.xml:ro

services:
  node_a:
    networks: [ros_net]
    environment:
      - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
      - CYCLONEDDS_URI=file:///cyclonedds.xml
    volumes:
      - ./cyclonedds.xml:/cyclonedds.xml:ro
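The mounted cyclonedds.xml needs multicast disabled and the peer containers listed explicitly. A minimal sketch; the peer addresses node_a and node_b are assumed to be compose service names resolvable via Docker's embedded DNS on a user-defined bridge network:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- cyclonedds.xml: minimal sketch for bridge networking.
     Peer addresses are assumed to be resolvable service names. -->
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain Id="any">
    <General>
      <AllowMulticast>false</AllowMulticast>
    </General>
    <Discovery>
      <ParticipantIndex>auto</ParticipantIndex>
      <Peers>
        <Peer Address="node_a"/>
        <Peer Address="node_b"/>
      </Peers>
    </Discovery>
  </Domain>
</CycloneDDS>
```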

3. Building Packages in the Runtime Image

3. 在运行时镜像中构建包

Problem: Installing compilers and build tools in the runtime image bloats it by 1-2 GB and increases attack surface.
Fix: Use multi-stage builds. Compile in a build stage, copy only the install space to runtime.
dockerfile
问题: 在运行时镜像中安装编译器和构建工具会使镜像膨胀1-2 GB,并增加攻击面。
修复: 使用多阶段构建。在构建阶段编译,仅将安装空间复制到运行时镜像。
dockerfile

BAD: build tools in runtime image (2.5 GB)

错误:运行时镜像包含构建工具(2.5 GB)

FROM ros:humble-ros-base
RUN apt-get update && apt-get install -y build-essential python3-colcon-common-extensions
COPY src/ /ros2_ws/src/
RUN cd /ros2_ws && colcon build
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

FROM ros:humble-ros-base
RUN apt-get update && apt-get install -y build-essential python3-colcon-common-extensions
COPY src/ /ros2_ws/src/
RUN cd /ros2_ws && colcon build
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

GOOD: multi-stage build (800 MB)

正确:多阶段构建(800 MB)

FROM ros:humble-ros-base AS build
RUN apt-get update && apt-get install -y python3-colcon-common-extensions
COPY src/ /ros2_ws/src/
RUN cd /ros2_ws && . /opt/ros/humble/setup.sh && colcon build

FROM ros:humble-ros-core AS runtime
COPY --from=build /ros2_ws/install /ros2_ws/install
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

FROM ros:humble-ros-base AS build
RUN apt-get update && apt-get install -y python3-colcon-common-extensions
COPY src/ /ros2_ws/src/
RUN cd /ros2_ws && . /opt/ros/humble/setup.sh && colcon build

FROM ros:humble-ros-core AS runtime
COPY --from=build /ros2_ws/install /ros2_ws/install
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

4. Mounting the Entire Workspace as a Volume

4. 将整个工作区挂载为卷

Problem: Mounting the full workspace means colcon writes build/, install/, and log/ to a bind mount. On macOS/Windows Docker Desktop, bind mount I/O is 10-50x slower. Builds that take 2 minutes take 20+ minutes.
Fix: Mount only src/ as a bind mount. Use named volumes for build artifacts.
yaml
问题: 挂载完整工作区意味着colcon将build/、install/和log/写入绑定挂载。在macOS/Windows Docker Desktop上,绑定挂载I/O速度慢10-50倍。原本2分钟的构建需要20+分钟。
修复: 仅将src/挂载为绑定挂载。对构建产物使用命名卷。
yaml

BAD:

错误:

volumes:
  - ./my_ros2_ws:/ros2_ws

volumes:
  - ./my_ros2_ws:/ros2_ws

GOOD:

正确:

volumes:
  - ./my_ros2_ws/src:/ros2_ws/src
  - build_vol:/ros2_ws/build
  - install_vol:/ros2_ws/install
  - log_vol:/ros2_ws/log

volumes:
  - ./my_ros2_ws/src:/ros2_ws/src
  - build_vol:/ros2_ws/build
  - install_vol:/ros2_ws/install
  - log_vol:/ros2_ws/log

5. Running Containers as Root

5. 以Root用户运行容器

Problem: Running as root inside containers is a security risk. If compromised, the attacker has root access to mounted volumes and devices.
Fix: Create a non-root user with appropriate group membership.
dockerfile
问题: 在容器中以Root用户运行存在安全风险。如果容器被攻陷,攻击者将拥有挂载卷和设备的Root权限。
修复: 创建具有适当组成员身份的非Root用户。
dockerfile

BAD:

错误:

FROM ros:humble-ros-base
COPY --from=build /ros2_ws/install /ros2_ws/install
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

FROM ros:humble-ros-base
COPY --from=build /ros2_ws/install /ros2_ws/install
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

GOOD:

正确:

FROM ros:humble-ros-base
RUN groupadd -r rosuser && \
    useradd -r -g rosuser -G video,dialout -m -s /bin/bash rosuser
COPY --from=build --chown=rosuser:rosuser /ros2_ws/install /ros2_ws/install
USER rosuser
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

FROM ros:humble-ros-base
RUN groupadd -r rosuser && \
    useradd -r -g rosuser -G video,dialout -m -s /bin/bash rosuser
COPY --from=build --chown=rosuser:rosuser /ros2_ws/install /ros2_ws/install
USER rosuser
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

6. Ignoring Layer Cache Order in Dockerfile

6. Dockerfile中层缓存顺序错误

Problem: Placing COPY src/ . before rosdep install means every source change invalidates the dependency cache. All apt packages are re-downloaded on every build.
Fix: Copy only package.xml files first, install dependencies, then copy source.
dockerfile
问题: 将COPY src/ .放在rosdep install之前,意味着每次源代码更改都会使依赖缓存失效。每次构建都会重新下载所有apt包。
修复: 先仅复制package.xml文件,安装依赖,然后复制源代码。
dockerfile

BAD: source copy before rosdep

错误:源代码复制在rosdep之前

COPY src/ /ros2_ws/src/
RUN rosdep install --from-paths src --ignore-src -r -y
RUN colcon build

COPY src/ /ros2_ws/src/
RUN rosdep install --from-paths src --ignore-src -r -y
RUN colcon build

GOOD: package.xml first, then rosdep, then source

正确:先复制package.xml,然后rosdep,再复制源代码

COPY src/my_pkg/package.xml /ros2_ws/src/my_pkg/package.xml
RUN . /opt/ros/humble/setup.sh && rosdep install --from-paths src --ignore-src -r -y
COPY src/ /ros2_ws/src/
RUN . /opt/ros/humble/setup.sh && colcon build

COPY src/my_pkg/package.xml /ros2_ws/src/my_pkg/package.xml
RUN . /opt/ros/humble/setup.sh && rosdep install --from-paths src --ignore-src -r -y
COPY src/ /ros2_ws/src/
RUN . /opt/ros/humble/setup.sh && colcon build
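With many packages, copying each package.xml by hand does not scale; the manifests can be staged into a directory (preserving layout) that the Dockerfile COPYs before the full source tree. A sketch with illustrative paths; the fake workspace created here stands in for a real src/:

```shell
#!/usr/bin/env bash
# Sketch: stage only package.xml files, preserving directory layout, so a
# Dockerfile can COPY them ahead of the full source tree and keep the
# rosdep layer cacheable. Package names and paths are illustrative.
set -e
ws=$(mktemp -d); staged=$(mktemp -d)
mkdir -p "$ws/src/my_pkg" "$ws/src/nested/other_pkg"
echo '<package/>'   > "$ws/src/my_pkg/package.xml"
echo '<package/>'   > "$ws/src/nested/other_pkg/package.xml"
echo 'int main(){}' > "$ws/src/my_pkg/main.cpp"

# Copy manifests only, keeping relative paths (GNU cp --parents).
(cd "$ws/src" && find . -name package.xml -exec cp --parents {} "$staged" \;)

find "$staged" -name package.xml | sort
```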

7. Hardcoding ROS_DOMAIN_ID

7. 硬编码ROS_DOMAIN_ID

Problem: Hardcoding ROS_DOMAIN_ID=42 causes conflicts when multiple robots or developers share a network. Two robots with the same domain ID will cross-talk.
Fix: Use environment variables with defaults. Set the domain ID at deploy time.
yaml
问题: 硬编码ROS_DOMAIN_ID=42会在多个机器人或开发者共享网络时导致冲突。两个具有相同域ID的机器人会互相干扰。
修复: 使用带默认值的环境变量。在部署时设置域ID。
yaml

BAD:

错误:

environment:
  - ROS_DOMAIN_ID=42

environment:
  - ROS_DOMAIN_ID=42

GOOD:

正确:

environment:
  - ROS_DOMAIN_ID=${ROS_DOMAIN_ID:-0}

bash
ROS_DOMAIN_ID=1 docker compose up -d    # Robot 1
ROS_DOMAIN_ID=2 docker compose up -d    # Robot 2

environment:
  - ROS_DOMAIN_ID=${ROS_DOMAIN_ID:-0}

bash
ROS_DOMAIN_ID=1 docker compose up -d    # 机器人1
ROS_DOMAIN_ID=2 docker compose up -d    # 机器人2
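A startup guard can reject bad values before compose brings anything up. A sketch, assuming the ROS 2 documented range of 0-101 as safe on all platforms (higher IDs can collide with ephemeral ports on some OSes):

```shell
#!/usr/bin/env bash
# Sketch: refuse to start with an out-of-range ROS_DOMAIN_ID.
# The 0-101 bound follows the ROS 2 domain ID documentation.
valid_domain_id() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # empty or non-numeric
  esac
  [ "$1" -le 101 ]
}

if ! valid_domain_id "${ROS_DOMAIN_ID:-0}"; then
  echo "Invalid ROS_DOMAIN_ID: '${ROS_DOMAIN_ID:-}'" >&2
  exit 1
fi
```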

8. Forgetting to Source setup.bash in Entrypoint

8. 入口点中忘记source setup.bash

Problem: The CMD runs ros2 launch ... but the shell has not sourced the ROS2 setup files. It fails with ros2: command not found.
Fix: Use an entrypoint script that sources the underlay and overlay before executing the command.
dockerfile
问题: CMD运行ros2 launch ...,但shell未source ROS2设置文件,会失败并提示ros2: command not found。
修复: 使用入口点脚本,在执行命令前source底层和覆盖层的setup文件。
dockerfile

BAD: no sourcing

错误:未source

FROM ros:humble-ros-core
COPY --from=build /ros2_ws/install /ros2_ws/install
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

FROM ros:humble-ros-core
COPY --from=build /ros2_ws/install /ros2_ws/install
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

GOOD: entrypoint script handles sourcing

正确:入口点脚本处理source

FROM ros:humble-ros-core
COPY --from=build /ros2_ws/install /ros2_ws/install
COPY ros_entrypoint.sh /ros_entrypoint.sh
RUN chmod +x /ros_entrypoint.sh
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

FROM ros:humble-ros-core
COPY --from=build /ros2_ws/install /ros2_ws/install
COPY ros_entrypoint.sh /ros_entrypoint.sh
RUN chmod +x /ros_entrypoint.sh
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

ros_entrypoint.sh

ros_entrypoint.sh

bash
#!/bin/bash
set -e
source /opt/ros/${ROS_DISTRO}/setup.bash
if [ -f /ros2_ws/install/setup.bash ]; then
  source /ros2_ws/install/setup.bash
fi
exec "$@"

bash
#!/bin/bash
set -e
source /opt/ros/${ROS_DISTRO}/setup.bash
if [ -f /ros2_ws/install/setup.bash ]; then
  source /ros2_ws/install/setup.bash
fi
exec "$@"

Docker ROS2 Deployment Checklist

Docker ROS2部署检查清单

  1. Multi-stage build -- Separate dev, build, and runtime stages so production images contain no compilers or build tools
  2. Non-root user -- Create a dedicated rosuser with only the group memberships needed (video, dialout) instead of privileged mode
  3. DDS configuration -- Ship a cyclonedds.xml or fastdds.xml with explicit peer lists if not using host networking
  4. Health checks -- Every service has a health check that verifies the node is running and publishing on expected topics
  5. Resource limits -- Set mem_limit, cpus, and GPU reservations for each service to prevent resource starvation
  6. Restart policy -- Use restart: unless-stopped for all services and restart: always for critical drivers and watchdogs
  7. Log management -- Configure the Docker logging driver (json-file with max-size/max-file) to prevent disk exhaustion from ROS2 log output
  8. Environment injection -- Use .env files or orchestrator secrets for ROS_DOMAIN_ID, DDS config paths, and device mappings rather than hardcoding
  9. Shared memory -- Set shm_size to at least 256 MB (512 MB for image topics) and configure DDS shared-memory transport for high-bandwidth topics
  10. Device stability -- Use udev rules on the host to create stable /dev/robot/* symlinks and reference those in compose device mappings
  11. Image versioning -- Tag images with both latest and a commit SHA or semantic version; never deploy an unversioned latest to production
  12. Backup and rollback -- Keep the previous image version available so a failed deployment can be rolled back by reverting the image tag
  1. 多阶段构建 -- 分离开发、构建和运行时阶段,使生产镜像不包含编译器或构建工具
  2. 非Root用户 -- 创建专用的rosuser,仅赋予所需的组成员身份(video、dialout),而非使用特权模式
  3. DDS配置 -- 如果不使用主机网络,附带包含显式对等节点列表的cyclonedds.xml或fastdds.xml
  4. 健康检查 -- 每个服务都有健康检查,验证节点是否运行并在预期主题上发布
  5. 资源限制 -- 为每个服务设置mem_limit、cpus和GPU预留,防止资源耗尽
  6. 重启策略 -- 所有服务使用restart: unless-stopped,关键驱动和看门狗使用restart: always
  7. 日志管理 -- 配置Docker日志驱动(带max-size/max-file的json-file),防止ROS2日志输出耗尽磁盘空间
  8. 环境注入 -- 使用.env文件或编排器密钥设置ROS_DOMAIN_ID、DDS配置路径和设备映射,而非硬编码
  9. 共享内存 -- 将shm_size设置为至少256 MB(图像主题设为512 MB),并为高带宽主题配置DDS共享内存传输
  10. 设备稳定性 -- 在主机上使用udev规则创建稳定的/dev/robot/*符号链接,并在compose设备映射中引用这些链接
  11. 镜像版本控制 -- 为镜像同时打上latest和提交SHA或语义版本标签;永远不要将未版本化的latest部署到生产环境
  12. 备份与回滚 -- 保留上一个镜像版本,以便部署失败时可通过恢复镜像标签进行回滚
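Checklist items 4-7 can be expressed per service in compose. A sketch, assuming a Humble image and a node named my_driver_node (both placeholders; adapt the grep to the node your service actually runs):

```yaml
services:
  driver:
    image: my_robot_driver:latest
    restart: unless-stopped            # item 6: restart policy
    mem_limit: 512m                    # item 5: resource limits
    cpus: 1.0
    healthcheck:                       # item 4: verify the node is up
      test: ["CMD", "bash", "-c",
             "source /opt/ros/humble/setup.bash && ros2 node list | grep -q my_driver_node"]
      interval: 30s
      timeout: 10s
      retries: 3
    logging:                           # item 7: cap log growth
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```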