This repository serves as educational material for advanced robotics engineering courses, focusing on real-world implementation of Model Predictive Control (MPC) and Deep Reinforcement Learning (DRL) in ROS 2 environments.
It provides hands-on examples of two fundamental paradigms in modern robotics:
- Optimization-based Control: Model Predictive Control for real-time trajectory planning and control
- End-to-End Learning: Deep Reinforcement Learning for policy learning in simulated environments
Both approaches are implemented using ROS 2 (Robot Operating System 2) with practical examples that can be extended for research and industrial applications.
Repository Structure:
```
Robot-Learning-Lab-Examples/
├── 01_Optimization/                     # Model Predictive Control Project
│   ├── material/                        # Visual aids and documentation
│   ├── ros2_ws/                         # ROS 2 workspace
│   │   ├── notebook/                    # Jupyter notebooks for MPC theory
│   │   └── src/
│   │       ├── gazebo_ros2/             # Gazebo simulation wrapper
│   │       ├── mpc_local_costmap/       # Local costmap configuration
│   │       └── mpc_local_planner/       # MPC controller implementation
│   ├── docker_install.md                # Docker setup instructions
│   ├── INSTALL.md                       # Installation guide
│   └── README.md                        # Detailed MPC documentation
│
├── 02_End-to-End_Learning/              # Deep Reinforcement Learning Project
│   ├── material/                        # Visual aids and documentation
│   ├── ros2_ws/                         # ROS 2 workspace
│   │   └── src/
│   │       ├── car_racing/              # CarRacing-v2 environment
│   │       ├── cartpole/                # CartPole-v1 environment
│   │       ├── lunar_lander/            # LunarLander-v2 environment
│   │       └── reinforcement_learning/  # Core RL framework
│   │           ├── rl_core/             # Base RL components
│   │           ├── rl_msgs/             # ROS 2 message definitions
│   │           └── rl_trainers/         # Training algorithms
│   ├── docker_install.md                # Docker setup instructions
│   ├── INSTALL.md                       # Installation guide
│   └── README.md                        # Detailed DRL documentation
│
└── LICENSE
```
MPC Lab (01_Optimization):
Learning Objectives:
- Understand optimization principles in robotics (localization, planning, control)
- Implement real-time MPC for mobile robot navigation
- Integrate with ROS 2 navigation stack (Nav2)
- Handle constraints and cost functions in trajectory optimization
Key Components:
- MPC local planner with configurable cost functions
- Integration with TurtleBot3 in Gazebo simulation
- Real-time optimization using PyTorch (see the sketch below)
- Local costmap integration for obstacle avoidance
Prerequisites: ROS 2 (Humble/Jazzy), PyTorch, PyPose
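To make the cost-function and real-time-optimization ideas concrete, here is a minimal receding-horizon step written with plain PyTorch gradient descent. Everything in it (the unicycle model, the weights, and names such as `rollout` and `mpc_step`) is an illustrative assumption rather than the `mpc_local_planner` API; in the same spirit, an obstacle term derived from the local costmap would simply be one more summand in the objective.

```python
import torch

def rollout(state, controls, dt=0.1):
    """Integrate a simple unicycle model (x, y, theta) over a control sequence."""
    x, y, th = state
    states = []
    for v, w in controls:  # each control is (linear velocity, angular velocity)
        x = x + v * torch.cos(th) * dt
        y = y + v * torch.sin(th) * dt
        th = th + w * dt
        states.append(torch.stack([x, y, th]))
    return torch.stack(states)

def mpc_step(state, goal, horizon=20, iters=50):
    """One receding-horizon step: optimize a control sequence, apply its first element."""
    controls = torch.zeros(horizon, 2, requires_grad=True)
    opt = torch.optim.Adam([controls], lr=0.1)
    for _ in range(iters):
        opt.zero_grad()
        traj = rollout(state, controls)
        goal_cost = ((traj[:, :2] - goal) ** 2).sum()  # quadratic tracking cost
        effort_cost = 0.01 * (controls ** 2).sum()     # control-effort penalty
        (goal_cost + effort_cost).backward()
        opt.step()
        with torch.no_grad():
            controls.clamp_(-1.0, 1.0)  # box constraints on (v, omega)
    return controls[0].detach()

first_control = mpc_step(torch.tensor([0.0, 0.0, 0.0]), torch.tensor([1.0, 1.0]))
```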
DRL Lab (02_End-to-End_Learning):
Learning Objectives:
- Implement DRL algorithms (PPO, SAC) for robotics applications
- Design custom environments using Gymnasium (a minimal skeleton follows this list)
- Understand sim-to-real transfer considerations
- Apply vectorized training for efficient policy learning
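For the custom-environment objective, here is a minimal Gymnasium environment skeleton. The class and its dynamics (`Toy1DEnv`, a 1-D point mass) are hypothetical placeholders, not code from this repository; the point is the reset/step contract and the space definitions that any plug-in environment must provide.

```python
import gymnasium as gym
import numpy as np

class Toy1DEnv(gym.Env):
    """Hypothetical 1-D point-mass environment illustrating the Gymnasium API."""

    def __init__(self):
        # Observation: position in [-10, 10]; action: a bounded displacement.
        self.observation_space = gym.spaces.Box(-10.0, 10.0, shape=(1,), dtype=np.float32)
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self._pos = np.zeros(1, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)  # seeds self.np_random
        self._pos = self.np_random.uniform(-5.0, 5.0, size=1).astype(np.float32)
        return self._pos.copy(), {}

    def step(self, action):
        self._pos += np.asarray(action, dtype=np.float32)
        reward = -float(abs(self._pos[0]))          # densest reward at the origin
        terminated = bool(abs(self._pos[0]) < 0.1)  # goal reached
        truncated = False                           # no time limit in this sketch
        return self._pos.copy(), reward, terminated, truncated, {}
```

Anything that implements this contract can, in principle, be dropped into a trainer that expects a Gymnasium environment.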
Key Components:
- Modular RL framework with plug-in environments
- Three benchmark environments: CarRacing, CartPole, LunarLander
- GPU-accelerated training with vectorized environments (see the vectorized sketch below)
- ROS 2 integration for real robot deployment
Prerequisites: ROS 2 (Humble), PyTorch with CUDA, Gymnasium
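To give a feel for vectorized training, the snippet below steps several CartPole instances in lock-step through Gymnasium's vector API, with random actions standing in for a learned PPO/SAC policy. It illustrates the batching idea only; the repository's actual training loops live in `rl_trainers`.

```python
import gymnasium as gym

# Eight CartPole-v1 copies stepped synchronously; a policy network would
# map the batched observations to a batch of actions instead of sampling.
envs = gym.vector.SyncVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(8)])
obs, info = envs.reset(seed=42)
for _ in range(100):
    actions = envs.action_space.sample()  # one action per sub-environment
    obs, rewards, terminations, truncations, infos = envs.step(actions)
    # Finished episodes are reset automatically, so collection never stalls.
envs.close()
```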
System Requirements:
- Ubuntu 22.04+ with ROS 2 Humble or Jazzy
- Docker and NVIDIA Container Toolkit (recommended)
- CUDA-compatible GPU (for the DRL examples; see the verification snippet below)
Installation:
- Docker (Recommended): Follow the `docker_install.md` and `nvidia_container_toolkit_install.md` guides in each lab directory
- Local Installation: Follow the `INSTALL.md` guides in each lab directory
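Before starting the DRL lab, it is worth confirming that PyTorch can actually see the GPU:

```python
import torch

# Prints True plus the device name when CUDA, the driver, and PyTorch agree.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```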
Quick Start:
MPC Lab:
```bash
cd 01_Optimization/ros2_ws
colcon build --symlink-install
source install/setup.bash
ros2 launch mpc_local_planner navigation_launch.py
```

DRL Lab:
```bash
cd 02_End-to-End_Learning/ros2_ws
colcon build --symlink-install
source install/setup.bash
ros2 launch car_racing car_racing.launch.py
```

This repository is designed for master's-level robotics engineering courses, providing:
- Theoretical Foundation: Mathematical background with practical implementation
- Industry Relevance: Real-world applicable examples using standard robotics frameworks
- Research Readiness: Extensible codebase suitable for thesis projects
- Best Practices: Professional software engineering practices in robotics