docker-ros2-development
Docker-Based ROS2 Development Skill

When to Use This Skill
- Writing Dockerfiles for ROS2 workspaces with colcon builds
- Setting up docker-compose for multi-container robotic systems
- Debugging DDS discovery failures between containers (CycloneDDS, FastDDS)
- Configuring GPU passthrough with NVIDIA Container Toolkit for perception nodes
- Forwarding X11 or Wayland displays for rviz2 and rqt tools
- Managing USB device passthrough for cameras, LiDARs, and serial devices
- Building CI/CD pipelines with Docker-based ROS2 builds and test runners
- Creating devcontainer configurations for VS Code with ROS2 extensions
- Optimizing Docker layer caching for colcon workspace builds
- Designing dev-vs-deploy container strategies with multi-stage builds
ROS2 Docker Image Hierarchy
ROS2 Docker镜像层级
Official OSRF images follow a layered hierarchy. Always choose the smallest base that satisfies dependencies.
```
┌──────────────────────────────────────────────────────────────────┐
│ ros:<distro>-desktop-full (~3.5 GB)                              │
│ ┌──────────────────────────────────────────────────────────────┐ │
│ │ ros:<distro>-desktop (~2.8 GB)                               │ │
│ │ ┌──────────────────────────────────────────────────────────┐ │ │
│ │ │ ros:<distro>-perception (~2.2 GB)                        │ │ │
│ │ │ ┌──────────────────────────────────────────────────────┐ │ │ │
│ │ │ │ ros:<distro>-ros-base (~1.1 GB)                      │ │ │ │
│ │ │ │ ┌──────────────────────────────────────────────────┐ │ │ │ │
│ │ │ │ │ ros:<distro>-ros-core (~700 MB)                  │ │ │ │ │
│ │ │ │ └──────────────────────────────────────────────────┘ │ │ │ │
│ │ │ └──────────────────────────────────────────────────────┘ │ │ │
│ │ └──────────────────────────────────────────────────────────┘ │ │
│ └──────────────────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────────┘
```

| Image Tag | Base OS | Size | Contents | Use Case |
|---|---|---|---|---|
| `ros:humble-ros-core` | Ubuntu 22.04 | ~700 MB | rclcpp, rclpy, rosout, launch | Minimal runtime for single nodes |
| `ros:humble-ros-base` | Ubuntu 22.04 | ~1.1 GB | ros-core + common_interfaces, rosbag2 | Most production deployments |
| `ros:humble-perception` | Ubuntu 22.04 | ~2.2 GB | ros-base + image_transport, cv_bridge, PCL | Camera/lidar perception pipelines |
| `ros:humble-desktop` | Ubuntu 22.04 | ~2.8 GB | perception + rviz2, rqt, demos | Development with GUI tools |
| `ros:jazzy-ros-core` | Ubuntu 24.04 | ~750 MB | rclcpp, rclpy, rosout, launch | Minimal runtime (Jazzy/Noble) |
| `ros:jazzy-ros-base` | Ubuntu 24.04 | ~1.2 GB | ros-core + common_interfaces, rosbag2 | Production deployments (Jazzy) |
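Choosing a base then reduces to a set-cover lookup over the table above. A small illustrative sketch (feature sets abridged from the table; this is a hypothetical helper, not an official tool):

```python
# Illustrative sketch: encode the OSRF image hierarchy and pick the
# smallest base whose feature set covers the required dependencies.
IMAGES = [
    # (tag, size in MB, features provided)
    ("ros:humble-ros-core",   700,  {"rclcpp", "rclpy", "launch"}),
    ("ros:humble-ros-base",   1100, {"rclcpp", "rclpy", "launch",
                                     "common_interfaces", "rosbag2"}),
    ("ros:humble-perception", 2200, {"rclcpp", "rclpy", "launch",
                                     "common_interfaces", "rosbag2",
                                     "image_transport", "cv_bridge", "pcl"}),
    ("ros:humble-desktop",    2800, {"rclcpp", "rclpy", "launch",
                                     "common_interfaces", "rosbag2",
                                     "image_transport", "cv_bridge", "pcl",
                                     "rviz2", "rqt"}),
]

def smallest_base(required: set) -> str:
    """Return the smallest image tag whose feature set covers `required`."""
    for tag, _size, features in sorted(IMAGES, key=lambda row: row[1]):
        if required <= features:
            return tag
    raise ValueError(f"no official base provides {required}")

print(smallest_base({"rclcpp", "launch"}))      # ros:humble-ros-core
print(smallest_base({"cv_bridge", "rosbag2"}))  # ros:humble-perception
```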
Multi-Stage Dockerfiles for ROS2
用于ROS2的多阶段Dockerfile
Dev Stage
开发阶段
The development stage includes build tools, debuggers, and editor support for interactive use.
```dockerfile
FROM ros:humble-desktop AS dev
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake gdb python3-pip \
    python3-colcon-common-extensions python3-rosdep \
    ros-humble-ament-lint-auto ros-humble-ament-cmake-pytest \
    ccache \
    && rm -rf /var/lib/apt/lists/*
ENV CCACHE_DIR=/ccache
ENV CC="ccache gcc"
ENV CXX="ccache g++"
```

Build Stage
Copy only the `package.xml` manifests under `src/` first, so the dependency-resolution layers stay cached until a manifest actually changes.

```dockerfile
FROM ros:humble-ros-base AS build
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3-colcon-common-extensions python3-rosdep \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /ros2_ws

# Copy package manifests first for dependency caching
COPY src/my_pkg/package.xml src/my_pkg/package.xml
RUN . /opt/ros/humble/setup.sh && apt-get update && \
    rosdep install --from-paths src --ignore-src -r -y && \
    rm -rf /var/lib/apt/lists/*

# Source changes invalidate only this layer and below
COPY src/ src/
RUN . /opt/ros/humble/setup.sh && \
    colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release \
    --event-handlers console_direct+
```

Runtime Stage
Contains only the built install space and runtime dependencies. No compilers, no source code.
```dockerfile
FROM ros:humble-ros-core AS runtime
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3-yaml ros-humble-rmw-cyclonedds-cpp \
    && rm -rf /var/lib/apt/lists/*
COPY --from=build /ros2_ws/install /ros2_ws/install
RUN groupadd -r rosuser && useradd -r -g rosuser -m rosuser
USER rosuser
COPY ros_entrypoint.sh /ros_entrypoint.sh
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]
```

Full Multi-Stage Dockerfile
```dockerfile
# syntax=docker/dockerfile:1
# Usage:
#   docker build --target dev -t my_robot:dev .
#   docker build --target runtime -t my_robot:latest .

ARG ROS_DISTRO=humble
ARG BASE_IMAGE=ros:${ROS_DISTRO}-ros-base

# Stage 1: Dependency base — install apt and rosdep deps
FROM ${BASE_IMAGE} AS deps
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3-colcon-common-extensions python3-rosdep \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /ros2_ws

# Copy only package.xml files for rosdep resolution (maximizes cache reuse)
COPY src/my_robot_bringup/package.xml src/my_robot_bringup/package.xml
COPY src/my_robot_perception/package.xml src/my_robot_perception/package.xml
COPY src/my_robot_msgs/package.xml src/my_robot_msgs/package.xml
COPY src/my_robot_navigation/package.xml src/my_robot_navigation/package.xml
RUN . /opt/ros/${ROS_DISTRO}/setup.sh && \
    apt-get update && \
    rosdep install --from-paths src --ignore-src -r -y && \
    rm -rf /var/lib/apt/lists/*

# Stage 2: Development — full dev environment
FROM deps AS dev
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential gdb valgrind ccache python3-pip python3-pytest \
    ros-${ROS_DISTRO}-ament-lint-auto \
    ros-${ROS_DISTRO}-launch-testing-ament-cmake \
    ros-${ROS_DISTRO}-rviz2 ros-${ROS_DISTRO}-rqt-graph \
    && rm -rf /var/lib/apt/lists/*
ENV CCACHE_DIR=/ccache CC="ccache gcc" CXX="ccache g++"
COPY src/ src/
COPY ros_entrypoint.sh /ros_entrypoint.sh
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]

# Stage 3: Build — compile workspace
FROM deps AS build
COPY src/ src/
RUN . /opt/ros/${ROS_DISTRO}/setup.sh && \
    colcon build \
    --cmake-args -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF \
    --event-handlers console_direct+ \
    --parallel-workers $(nproc)

# Stage 4: Runtime — minimal production image
FROM ros:${ROS_DISTRO}-ros-core AS runtime
ARG ROS_DISTRO=humble
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3-yaml ros-${ROS_DISTRO}-rmw-cyclonedds-cpp \
    && rm -rf /var/lib/apt/lists/*
COPY --from=build /ros2_ws/install /ros2_ws/install
RUN groupadd -r rosuser && useradd -r -g rosuser -m -s /bin/bash rosuser
USER rosuser
ENV RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
COPY ros_entrypoint.sh /ros_entrypoint.sh
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["ros2", "launch", "my_robot_bringup", "robot.launch.py"]
```

The entrypoint script that both the dev and runtime stages use:

```bash
#!/bin/bash
set -e
source /opt/ros/${ROS_DISTRO}/setup.bash
if [ -f /ros2_ws/install/setup.bash ]; then
    source /ros2_ws/install/setup.bash
fi
exec "$@"
```

Docker Compose for Multi-Container ROS2 Systems
Basic Multi-Container Setup
Each ROS2 subsystem runs in its own container with process isolation, independent scaling, and per-service resource limits.
```yaml
# docker-compose.yml
version: "3.8"

x-ros-common: &ros-common
  environment:
    - ROS_DOMAIN_ID=${ROS_DOMAIN_ID:-0}
    - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
    - CYCLONEDDS_URI=file:///cyclonedds.xml
  volumes:
    - ./config/cyclonedds.xml:/cyclonedds.xml:ro
    - /dev/shm:/dev/shm
  network_mode: host
  restart: unless-stopped

services:
  rosbridge:
    <<: *ros-common
    image: my_robot:latest
    command: ros2 launch rosbridge_server rosbridge_websocket_launch.xml port:=9090

  perception:
    <<: *ros-common
    image: my_robot_perception:latest
    command: ros2 launch my_robot_perception perception.launch.py
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    devices:
      - /dev/video0:/dev/video0  # USB camera passthrough

  navigation:
    <<: *ros-common
    image: my_robot_navigation:latest
    command: >
      ros2 launch my_robot_navigation navigation.launch.py
      use_sim_time:=false map:=/maps/warehouse.yaml
    volumes:
      - ./maps:/maps:ro

  driver:
    <<: *ros-common
    image: my_robot_driver:latest
    command: ros2 launch my_robot_driver driver.launch.py
    devices:
      - /dev/ttyUSB0:/dev/ttyUSB0  # Serial motor controller
      - /dev/ttyACM0:/dev/ttyACM0  # IMU over USB-serial
    group_add:
      - dialout
```

Service Dependencies with Health Checks
```yaml
services:
  driver:
    <<: *ros-common
    image: my_robot_driver:latest
    healthcheck:
      test: ["CMD", "bash", "-c",
             "source /opt/ros/humble/setup.bash && ros2 topic list | grep -q /joint_states"]
      interval: 5s
      timeout: 10s
      retries: 5
      start_period: 15s

  navigation:
    <<: *ros-common
    image: my_robot_navigation:latest
    depends_on:
      driver:
        condition: service_healthy  # Wait for driver topics

  perception:
    <<: *ros-common
    image: my_robot_perception:latest
    depends_on:
      driver:
        condition: service_healthy  # Camera driver must be ready
```

Profiles for Dev vs Deploy
```yaml
services:
  driver:
    <<: *ros-common
    image: my_robot_driver:latest
    command: ros2 launch my_robot_driver driver.launch.py

  rviz:
    <<: *ros-common
    profiles: ["dev"]
    image: my_robot:dev
    command: ros2 run rviz2 rviz2 -d /rviz/config.rviz
    environment:
      - DISPLAY=${DISPLAY}
      - QT_X11_NO_MITSHM=1
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:rw

  rosbag_record:
    <<: *ros-common
    profiles: ["dev"]
    image: my_robot:dev
    command: ros2 bag record -a --storage sqlite3 --max-bag-duration 300 -o /bags/session
    volumes:
      - ./bags:/bags

  watchdog:
    <<: *ros-common
    profiles: ["deploy"]
    image: my_robot:latest
    command: ros2 launch my_robot_bringup watchdog.launch.py
    restart: always
```

```bash
docker compose --profile dev up         # Dev tools (rviz, rosbag)
docker compose --profile deploy up -d   # Production (watchdog, no GUI)
```

DDS Discovery Across Containers
CycloneDDS XML Config for Unicast Across Containers
容器间单播的CycloneDDS XML配置
When containers use bridge networking (no multicast), configure explicit unicast peer lists.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- cyclonedds.xml -->
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain>
    <General>
      <Interfaces>
        <NetworkInterface autodetermine="true" priority="default"/>
      </Interfaces>
      <AllowMulticast>false</AllowMulticast>
    </General>
    <Discovery>
      <!-- Peer list uses docker-compose service names as hostnames -->
      <Peers>
        <Peer address="perception"/>
        <Peer address="navigation"/>
        <Peer address="driver"/>
        <Peer address="rosbridge"/>
      </Peers>
      <ParticipantIndex>auto</ParticipantIndex>
      <MaxAutoParticipantIndex>120</MaxAutoParticipantIndex>
    </Discovery>
    <Internal>
      <SocketReceiveBufferSize min="10MB"/>
    </Internal>
  </Domain>
</CycloneDDS>
```

FastDDS XML Config
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- fastdds.xml -->
<dds xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
  <profiles>
    <participant profile_name="docker_participant" is_default_profile="true">
      <rtps>
        <builtin>
          <discovery_config>
            <discoveryProtocol>SIMPLE</discoveryProtocol>
            <leaseDuration><sec>10</sec></leaseDuration>
          </discovery_config>
          <initialPeersList>
            <locator>
              <udpv4><address>perception</address><port>7412</port></udpv4>
            </locator>
            <locator>
              <udpv4><address>navigation</address><port>7412</port></udpv4>
            </locator>
            <locator>
              <udpv4><address>driver</address><port>7412</port></udpv4>
            </locator>
          </initialPeersList>
        </builtin>
      </rtps>
    </participant>
  </profiles>
</dds>
```

Mount and activate in compose:

```yaml
# CycloneDDS
environment:
  - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
  - CYCLONEDDS_URI=file:///cyclonedds.xml
volumes:
  - ./config/cyclonedds.xml:/cyclonedds.xml:ro

# FastDDS
environment:
  - RMW_IMPLEMENTATION=rmw_fastrtps_cpp
  - FASTRTPS_DEFAULT_PROFILES_FILE=/fastdds.xml
volumes:
  - ./config/fastdds.xml:/fastdds.xml:ro
```
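The fixed port 7412 in the peer locators is not arbitrary: it follows the DDSI-RTPS well-known port formula. A quick calculator using the spec's default tuning parameters (choosing participant index 1 here is an assumption for the example):

```python
# DDSI-RTPS well-known port calculator, using the spec's default tuning
# parameters: PB=7400, DG=250, PG=2, d0=0, d1=10, d2=1, d3=11.
PB, DG, PG = 7400, 250, 2
D0, D1, D2, D3 = 0, 10, 1, 11

def rtps_ports(domain_id: int, participant_id: int) -> dict:
    """Well-known RTPS ports for one participant in one DDS domain."""
    base = PB + DG * domain_id
    return {
        "discovery_multicast": base + D0,
        "discovery_unicast":   base + D1 + PG * participant_id,
        "user_multicast":      base + D2,
        "user_unicast":        base + D3 + PG * participant_id,
    }

# Domain 0, participant index 1 -> discovery unicast 7412, the port
# pinned in the initialPeersList above.
print(rtps_ports(0, 1))
print(rtps_ports(42, 0))  # ROS_DOMAIN_ID=42 shifts all ports by 10500
```

This is also why `ROS_DOMAIN_ID` must match across containers: a mismatched domain moves every port by a multiple of 250 and discovery silently fails.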
Shared Memory Transport in Docker
DDS shared memory (zero-copy) transport requires sharing `/dev/shm` between containers. It provides the highest throughput for large messages (images, point clouds).

```yaml
services:
  perception:
    shm_size: "512m"          # Default 64 MB is too small for image topics
    volumes:
      - /dev/shm:/dev/shm     # Share host shm for inter-container zero-copy
```

```xml
<!-- Enable shared memory in CycloneDDS -->
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain>
    <SharedMemory>
      <Enable>true</Enable>
    </SharedMemory>
  </Domain>
</CycloneDDS>
```

Constraints: all communicating containers must share `/dev/shm` or use `ipc: host`. Use `--ipc=shareable` on one container and `--ipc=container:<name>` on the others for scoped sharing.
Networking Modes and ROS2 Implications
Host Networking
```yaml
services:
  my_node:
    network_mode: host  # Shares host network namespace; DDS multicast works natively
```

Bridge Networking (Default)
```yaml
services:
  my_node:
    networks: [ros_net]

networks:
  ros_net:
    driver: bridge  # DDS multicast blocked; requires unicast peer config
```

Macvlan Networking
```yaml
networks:
  ros_macvlan:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1

services:
  my_node:
    networks:
      ros_macvlan:
        ipv4_address: 192.168.1.50  # Real LAN IP; DDS multicast works natively
```

Decision Table
| Factor | Host | Bridge | Macvlan |
|---|---|---|---|
| DDS discovery | Works natively | Needs unicast peers | Works natively |
| Network isolation | None | Full isolation | LAN-level isolation |
| Port conflicts | Yes (host ports) | No (mapped ports) | No (unique IPs) |
| Performance | Native | Slight overhead | Near-native |
| Multi-host support | No | With overlay networks | Yes (same LAN) |
| When to use | Dev, single host | CI/CD, multi-tenant | Multi-robot on LAN |
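For scripting a deployment generator, the table can be collapsed into a lookup. This is a hypothetical helper, not part of Docker or ROS2 tooling:

```python
# Illustrative lookup distilled from the decision table above.
def pick_network_mode(single_host: bool, needs_isolation: bool,
                      multi_robot_lan: bool) -> str:
    """Map deployment constraints to a Docker network mode."""
    if multi_robot_lan:
        return "macvlan"   # real LAN IPs, native DDS multicast
    if needs_isolation:
        return "bridge"    # isolated; configure unicast DDS peers
    if single_host:
        return "host"      # simplest; DDS multicast just works
    return "bridge"

print(pick_network_mode(single_host=True, needs_isolation=False,
                        multi_robot_lan=False))
```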
GPU Passthrough for Perception
NVIDIA Container Toolkit Setup
```bash
# Install NVIDIA Container Toolkit on the host
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Compose Config with deploy.resources
```yaml
services:
  perception:
    image: my_robot_perception:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1              # Number of GPUs (or "all")
              capabilities: [gpu]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
    shm_size: "1g"                  # Large shm for GPU<->CPU transfers
```

For Dockerfiles that need CUDA, start from an NVIDIA base image and install ROS2 on top:

```dockerfile
FROM nvidia/cuda:12.2.0-cudnn8-runtime-ubuntu22.04 AS perception-base
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl gnupg2 lsb-release \
    && curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key \
    -o /usr/share/keyrings/ros-archive-keyring.gpg \
    && echo "deb [arch=$(dpkg --print-architecture) \
    signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] \
    http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" \
    > /etc/apt/sources.list.d/ros2.list \
    && apt-get update && apt-get install -y --no-install-recommends \
    ros-humble-ros-base ros-humble-cv-bridge ros-humble-image-transport \
    && rm -rf /var/lib/apt/lists/*
```

Verification
```bash
docker compose exec perception bash -c '
  nvidia-smi
  python3 -c "import torch; print(f\"CUDA available: {torch.cuda.is_available()}\")"
'
```

Display Forwarding
X11 Forwarding
X11转发
yaml
services:
rviz:
image: my_robot:dev
command: ros2 run rviz2 rviz2
environment:
- DISPLAY=${DISPLAY:-:0} # Forward host display
- QT_X11_NO_MITSHM=1 # Disable MIT-SHM (crashes in Docker)
volumes:
- /tmp/.X11-unix:/tmp/.X11-unix:rw # X11 socket
- ${HOME}/.Xauthority:/root/.Xauthority:ro # Auth cookie
    network_mode: host
```
```bash
# Allow local Docker containers to access the X server
xhost +local:docker
```

More secure variant:

```bash
xhost +SI:localuser:$(whoami)
```

Wayland Forwarding
```yaml
services:
rviz:
image: my_robot:dev
command: ros2 run rviz2 rviz2
environment:
- WAYLAND_DISPLAY=${WAYLAND_DISPLAY:-wayland-0}
- XDG_RUNTIME_DIR=/run/user/1000
- QT_QPA_PLATFORM=wayland
volumes:
      - ${XDG_RUNTIME_DIR}/${WAYLAND_DISPLAY}:/run/user/1000/${WAYLAND_DISPLAY}:rw
```
Headless Rendering
For CI/CD or remote machines without a physical display:
```bash
# Run rviz2 headless with Xvfb for screenshot capture or testing
docker run --rm my_robot:dev bash -c '
apt-get update && apt-get install -y xvfb mesa-utils &&
Xvfb :99 -screen 0 1920x1080x24 &
export DISPLAY=:99
source /opt/ros/humble/setup.bash
ros2 run rviz2 rviz2 -d /config/test.rviz --screenshot /output/frame.png
'
```

Volume Mounts and Workspace Overlays
Source Mounts for Dev
Mount only `src/` during development. Let colcon write `build/`, `install/`, and `log/` inside named volumes to avoid bind mount performance issues.

```yaml
# BAD: mounting entire workspace — build artifacts on bind mount are slow
volumes:
  - ./my_ros2_ws:/ros2_ws

# GOOD: mount only source, use named volumes for build artifacts
services:
dev:
image: my_robot:dev
volumes:
- ./src:/ros2_ws/src:rw # Source code (bind mount)
- build_vol:/ros2_ws/build # Build artifacts (named volume)
- install_vol:/ros2_ws/install # Install space (named volume)
- log_vol:/ros2_ws/log # Log output (named volume)
working_dir: /ros2_ws
volumes:
build_vol:
install_vol:
log_vol:
```

ccache Caching
Persist ccache across container rebuilds for faster C++ compilation:
```yaml
services:
dev:
volumes:
- ccache_vol:/ccache
environment:
- CCACHE_DIR=/ccache
- CCACHE_MAXSIZE=2G
- CC=ccache gcc
- CXX=ccache g++
volumes:
  ccache_vol:
```
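The compose file only points at the cache volume; the dev image itself needs ccache installed for the `CC`/`CXX` settings to take effect. A minimal sketch of the matching dev stage (stage and package names are assumptions, not from the original):

```dockerfile
# Hypothetical dev stage: install ccache so the mounted /ccache volume is used
FROM ros:humble-ros-base AS dev
RUN apt-get update && apt-get install -y --no-install-recommends \
        ccache build-essential python3-colcon-common-extensions \
    && rm -rf /var/lib/apt/lists/*
# Same values the compose file injects; baked in here as a fallback
ENV CCACHE_DIR=/ccache \
    CC="ccache gcc" \
    CXX="ccache g++"
```

Inside the container, `ccache -s` shows hit rates, and `ccache -z` resets the counters before a benchmark build.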
ROS Workspace Overlay in Docker
Keep upstream packages cached and only rebuild custom packages:
```dockerfile
# Stage 1: upstream dependencies (rarely changes)
FROM ros:humble-ros-base AS upstream
RUN apt-get update && apt-get install -y --no-install-recommends \
    ros-humble-nav2-bringup ros-humble-slam-toolbox \
    ros-humble-robot-localization \
    && rm -rf /var/lib/apt/lists/*
```
```dockerfile
# Stage 2: custom packages overlay on top
FROM upstream AS workspace
WORKDIR /ros2_ws
COPY src/ src/
RUN . /opt/ros/humble/setup.sh && colcon build --symlink-install
# install/setup.bash automatically sources /opt/ros/humble as underlay
```

USB Device Passthrough
Cameras and Serial Devices
```yaml
services:
camera_driver:
image: my_robot_driver:latest
devices:
- /dev/video0:/dev/video0 # USB camera (V4L2)
- /dev/video1:/dev/video1
group_add:
- video # Access /dev/videoN without root
motor_driver:
image: my_robot_driver:latest
devices:
- /dev/ttyUSB0:/dev/ttyUSB0 # USB-serial motor controller
- /dev/ttyACM0:/dev/ttyACM0 # Arduino/Teensy
group_add:
      - dialout          # Access serial ports without root
```

Udev Rules Inside Containers
Create stable device symlinks on the host so container paths remain consistent regardless of USB enumeration order.
```bash
# /etc/udev/rules.d/99-robot-devices.rules (host-side)
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="robot/motor_controller"
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="robot/lidar"
SUBSYSTEM=="video4linux", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="0825", SYMLINK+="robot/camera"
```

```bash
sudo udevadm control --reload-rules && sudo udevadm trigger
```

```yaml
services:
  driver:
    devices:
      - /dev/robot/motor_controller:/dev/ttyMOTOR   # Stable symlink
      - /dev/robot/lidar:/dev/ttyLIDAR
      - /dev/robot/camera:/dev/video0
```

Dynamic Device Attachment
For devices plugged in after the container starts:
```yaml
services:
driver:
# Option 1: privileged (use only when necessary)
privileged: true
volumes:
- /dev:/dev
# Option 2: cgroup device rules (more secure)
# device_cgroup_rules:
# - 'c 188:* rmw' # USB-serial (major 188)
      # - 'c 81:* rmw' # Video devices (major 81)
```
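The major numbers in `device_cgroup_rules` can be read off the device node itself with `stat` (GNU coreutils prints them in hex via `%t`/`%T`). Shown here against `/dev/null` (char 1:3) so it runs anywhere; on the robot you would point it at `/dev/ttyUSB0` or a `/dev/robot/*` symlink:

```shell
# Major/minor of a device node; use the major in a device_cgroup_rules entry
stat -c 'type=%F major=0x%t minor=0x%T' /dev/null
```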
Dev Container Configuration
```json
// .devcontainer/devcontainer.json
{
"name": "ROS2 Humble Dev",
"build": {
"dockerfile": "../Dockerfile",
"target": "dev",
"args": { "ROS_DISTRO": "humble" }
},
"runArgs": [
"--network=host", "--ipc=host", "--pid=host",
"--privileged", "--gpus", "all",
"-e", "DISPLAY=${localEnv:DISPLAY}",
"-e", "QT_X11_NO_MITSHM=1",
"-v", "/tmp/.X11-unix:/tmp/.X11-unix:rw",
"-v", "/dev:/dev"
],
"workspaceMount": "source=${localWorkspaceFolder},target=/ros2_ws/src,type=bind",
"workspaceFolder": "/ros2_ws",
"mounts": [
"source=ros2-build-vol,target=/ros2_ws/build,type=volume",
"source=ros2-install-vol,target=/ros2_ws/install,type=volume",
"source=ros2-log-vol,target=/ros2_ws/log,type=volume",
"source=ros2-ccache-vol,target=/ccache,type=volume"
],
"containerEnv": {
"ROS_DISTRO": "humble",
"RMW_IMPLEMENTATION": "rmw_cyclonedds_cpp",
"CCACHE_DIR": "/ccache",
"RCUTILS_COLORIZED_OUTPUT": "1"
},
"customizations": {
"vscode": {
"extensions": [
"ms-iot.vscode-ros",
"ms-vscode.cpptools",
"ms-python.python",
"ms-vscode.cmake-tools",
"smilerobotics.urdf",
"redhat.vscode-xml",
"redhat.vscode-yaml"
],
"settings": {
"ros.distro": "humble",
"python.defaultInterpreterPath": "/usr/bin/python3",
"C_Cpp.default.compileCommands": "/ros2_ws/build/compile_commands.json",
"cmake.configureOnOpen": false
}
}
},
"postCreateCommand": "sudo apt-get update && rosdep update && rosdep install --from-paths src --ignore-src -r -y",
"remoteUser": "rosuser"
}
```

CI/CD with Docker
GitHub Actions Workflow
```yaml
# .github/workflows/ros2-docker-ci.yml
name: ROS2 Docker CI
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-test:
runs-on: ubuntu-22.04
strategy:
fail-fast: false
matrix:
ros_distro: [humble, jazzy]
include:
- ros_distro: humble
ubuntu: "22.04"
- ros_distro: jazzy
ubuntu: "24.04"
steps:
- uses: actions/checkout@v4
- uses: docker/setup-buildx-action@v3
- uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Cache Docker layers
uses: actions/cache@v4
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ matrix.ros_distro }}-${{ hashFiles('src/**/package.xml') }}
restore-keys: ${{ runner.os }}-buildx-${{ matrix.ros_distro }}-
- name: Build and test
run: |
docker build --target dev \
--build-arg ROS_DISTRO=${{ matrix.ros_distro }} \
-t test-image:${{ matrix.ros_distro }} .
docker run --rm test-image:${{ matrix.ros_distro }} bash -c '
source /opt/ros/${{ matrix.ros_distro }}/setup.bash &&
cd /ros2_ws &&
colcon build --cmake-args -DBUILD_TESTING=ON &&
colcon test --event-handlers console_direct+ &&
colcon test-result --verbose'
- name: Push runtime image
if: github.ref == 'refs/heads/main'
uses: docker/build-push-action@v5
with:
context: .
target: runtime
build-args: ROS_DISTRO=${{ matrix.ros_distro }}
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ matrix.ros_distro }}-latest
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ matrix.ros_distro }}-${{ github.sha }}
push: true
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
- name: Rotate cache
        run: rm -rf /tmp/.buildx-cache && mv /tmp/.buildx-cache-new /tmp/.buildx-cache
```

Layer Caching
Order Dockerfile instructions from least-frequently-changed to most-frequently-changed:
1. Base image (ros:humble-ros-base) — changes on distro upgrade
2. System apt packages — changes on new dependency
3. rosdep install (from package.xml) — changes on new ROS dep
4. COPY src/ src/ — changes on every code edit
5. colcon build — rebuilds on source change
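The five layers above can be sketched as one Dockerfile; the package and path names are illustrative, not from the original:

```dockerfile
# 1. base image: changes only on distro upgrade
FROM ros:humble-ros-base
# 2. system apt packages: changes on new dependency
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3-colcon-common-extensions \
    && rm -rf /var/lib/apt/lists/*
# 3. ROS deps from package.xml: cached until a package.xml changes
COPY src/my_pkg/package.xml /ros2_ws/src/my_pkg/package.xml
RUN apt-get update && rosdep update \
    && rosdep install --from-paths /ros2_ws/src --ignore-src -r -y \
    && rm -rf /var/lib/apt/lists/*
# 4. source: invalidated on every code edit
COPY src/ /ros2_ws/src/
# 5. build: rebuilds on source change
RUN cd /ros2_ws && . /opt/ros/humble/setup.sh && colcon build
```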
Build Matrix Across Distros
```yaml
strategy:
matrix:
ros_distro: [humble, iron, jazzy, rolling]
rmw: [rmw_cyclonedds_cpp, rmw_fastrtps_cpp]
exclude:
- ros_distro: iron # Iron EOL — skip
      rmw: rmw_fastrtps_cpp
```

Docker ROS2 Anti-Patterns
1. Running Everything in One Container
Problem: Putting perception, navigation, planning, and drivers in a single container defeats the purpose of containerization. A crash in one subsystem takes down everything.
Fix: Split into one service per subsystem. Use docker-compose to orchestrate.
```yaml
# BAD: monolithic container
services:
robot:
image: my_robot:latest
command: ros2 launch my_robot everything.launch.py

# GOOD: one container per subsystem
services:
perception:
image: my_robot_perception:latest
command: ros2 launch my_robot_perception perception.launch.py
navigation:
image: my_robot_navigation:latest
command: ros2 launch my_robot_navigation navigation.launch.py
driver:
image: my_robot_driver:latest
command: ros2 launch my_robot_driver driver.launch.py
```

2. Using Bridge Networking Without DDS Config
Problem: DDS uses multicast for discovery by default. Docker bridge networks do not forward multicast. Nodes in different containers will not discover each other.
Fix: Use `network_mode: host` or configure DDS unicast peers explicitly.

```yaml
# BAD: bridge network with no DDS config
services:
node_a:
networks: [ros_net]
node_b:
networks: [ros_net]

# GOOD: host networking (simplest)
services:
node_a:
network_mode: host
node_b:
network_mode: host

# GOOD: bridge with CycloneDDS unicast peers
services:
node_a:
networks: [ros_net]
environment:
- RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
- CYCLONEDDS_URI=file:///cyclonedds.xml
volumes:
- ./cyclonedds.xml:/cyclonedds.xml:ro
```
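A minimal sketch of the referenced `cyclonedds.xml` is below. Element names follow the Cyclone DDS configuration schema; the peer addresses (here the compose service names) and the wildcard domain id are assumptions you would adapt to your deployment:

```xml
<!-- cyclonedds.xml: disable multicast, list unicast peers explicitly -->
<CycloneDDS>
  <Domain Id="any">
    <General>
      <AllowMulticast>false</AllowMulticast>
    </General>
    <Discovery>
      <ParticipantIndex>auto</ParticipantIndex>
      <Peers>
        <Peer address="node_a"/>
        <Peer address="node_b"/>
      </Peers>
    </Discovery>
  </Domain>
</CycloneDDS>
```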
3. Building Packages in the Runtime Image
Problem: Installing compilers and build tools in the runtime image bloats it by 1-2 GB and increases attack surface.
Fix: Use multi-stage builds. Compile in a build stage, copy only the install space to runtime.
```dockerfile
# BAD: build tools in runtime image (2.5 GB)
FROM ros:humble-ros-base
RUN apt-get update && apt-get install -y build-essential python3-colcon-common-extensions
COPY src/ /ros2_ws/src/
RUN cd /ros2_ws && colcon build
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

# GOOD: multi-stage build (800 MB)
FROM ros:humble-ros-base AS build
RUN apt-get update && apt-get install -y python3-colcon-common-extensions
COPY src/ /ros2_ws/src/
RUN cd /ros2_ws && . /opt/ros/humble/setup.sh && colcon build
FROM ros:humble-ros-core AS runtime
COPY --from=build /ros2_ws/install /ros2_ws/install
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]
```

4. Mounting the Entire Workspace as a Volume
Problem: Mounting the full workspace means colcon writes `build/`, `install/`, and `log/` to a bind mount. On macOS/Windows Docker Desktop, bind mount I/O is 10-50x slower. Builds that take 2 minutes take 20+ minutes.
Fix: Mount only `src/` as a bind mount. Use named volumes for build artifacts.

```yaml
# BAD:
volumes:
- ./my_ros2_ws:/ros2_ws
volumes:
- ./my_ros2_ws:/ros2_ws
GOOD:
正确:
volumes:
- ./my_ros2_ws/src:/ros2_ws/src
- build_vol:/ros2_ws/build
- install_vol:/ros2_ws/install
- log_vol:/ros2_ws/log
```

5. Running Containers as Root
Problem: Running as root inside containers is a security risk. If compromised, the attacker has root access to mounted volumes and devices.
Fix: Create a non-root user with appropriate group membership.
```dockerfile
# BAD:
FROM ros:humble-ros-base
COPY --from=build /ros2_ws/install /ros2_ws/install
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

# GOOD:
FROM ros:humble-ros-base
RUN groupadd -r rosuser && \
    useradd -r -g rosuser -G video,dialout -m -s /bin/bash rosuser
COPY --from=build --chown=rosuser:rosuser /ros2_ws/install /ros2_ws/install
USER rosuser
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]
```

6. Ignoring Layer Cache Order in Dockerfile
Problem: Placing `COPY src/ .` before `rosdep install` means every source change invalidates the dependency cache. All apt packages are re-downloaded on every build.
Fix: Copy only `package.xml` files first, install dependencies, then copy source.

```dockerfile
# BAD: source copy before rosdep
COPY src/ /ros2_ws/src/
RUN rosdep install --from-paths src --ignore-src -r -y
RUN colcon build

# GOOD: package.xml first, then rosdep, then source
COPY src/my_pkg/package.xml /ros2_ws/src/my_pkg/package.xml
RUN . /opt/ros/humble/setup.sh && rosdep install --from-paths src --ignore-src -r -y
COPY src/ /ros2_ws/src/
RUN . /opt/ros/humble/setup.sh && colcon build
```

7. Hardcoding ROS_DOMAIN_ID
Problem: Hardcoding `ROS_DOMAIN_ID=42` causes conflicts when multiple robots or developers share a network. Two robots with the same domain ID will cross-talk.
Fix: Use environment variables with defaults. Set the domain ID at deploy time.

```yaml
# BAD:
environment:
- ROS_DOMAIN_ID=42

# GOOD:
environment:
- ROS_DOMAIN_ID=${ROS_DOMAIN_ID:-0}
```

```bash
ROS_DOMAIN_ID=1 docker compose up -d   # Robot 1
ROS_DOMAIN_ID=2 docker compose up -d   # Robot 2
```
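Compose interpolates `${VAR:-default}` with shell-like semantics, so the fallback behaves the same way it does in plain shell. A quick illustration of the default in action:

```shell
# ${ROS_DOMAIN_ID:-0} falls back to 0 when the variable is unset
unset ROS_DOMAIN_ID
echo "domain=${ROS_DOMAIN_ID:-0}"
# ...and uses the provided value otherwise
ROS_DOMAIN_ID=7
echo "domain=${ROS_DOMAIN_ID:-0}"
```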
8. Forgetting to Source setup.bash in Entrypoint
Problem: The `CMD` runs `ros2 launch ...` but the shell has not sourced the ROS2 setup files; it fails with `ros2: command not found`.
Fix: Use an entrypoint script that sources the underlay and overlay before executing the command.

```dockerfile
# BAD: no sourcing
FROM ros:humble-ros-core
COPY --from=build /ros2_ws/install /ros2_ws/install
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]

# GOOD: entrypoint script handles sourcing
FROM ros:humble-ros-core
COPY --from=build /ros2_ws/install /ros2_ws/install
COPY ros_entrypoint.sh /ros_entrypoint.sh
RUN chmod +x /ros_entrypoint.sh
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["ros2", "launch", "my_pkg", "bringup.launch.py"]
```

```bash
#!/bin/bash
# ros_entrypoint.sh
set -e
source /opt/ros/${ROS_DISTRO}/setup.bash
if [ -f /ros2_ws/install/setup.bash ]; then
source /ros2_ws/install/setup.bash
fi
exec "$@"
```

Docker ROS2 Deployment Checklist
- Multi-stage build -- Separate dev, build, and runtime stages so production images contain no compilers or build tools
- Non-root user -- Create a dedicated `rosuser` with only the group memberships needed (video, dialout) instead of privileged mode
- DDS configuration -- Ship a `cyclonedds.xml` or `fastdds.xml` with explicit peer lists if not using host networking
- Health checks -- Every service has a health check that verifies the node is running and publishing on expected topics
- Resource limits -- Set `mem_limit`, `cpus`, and GPU reservations for each service to prevent resource starvation
- Restart policy -- Use `restart: unless-stopped` for all services and `restart: always` for critical drivers and watchdogs
- Log management -- Configure the Docker logging driver (json-file with max-size/max-file) to prevent disk exhaustion from ROS2 log output
- Environment injection -- Use `.env` files or orchestrator secrets for `ROS_DOMAIN_ID`, DDS config paths, and device mappings rather than hardcoding
- Shared memory -- Set `shm_size` to at least 256 MB (512 MB for image topics) and configure DDS shared memory transport for high-bandwidth topics
- Device stability -- Use udev rules on the host to create stable `/dev/robot/*` symlinks and reference those in compose device mappings
- Image versioning -- Tag images with both `latest` and a commit SHA or semantic version; never deploy unversioned `latest` to production
- Backup and rollback -- Keep the previous image version available so a failed deployment can be rolled back by reverting the image tag