
Example solutions for the Robot Learning Lab projects. Implementations are based on ROS 2 and PyTorch.


TW-Robotics/Robot-Learning-Lab-Examples


Robot Learning Lab Examples

This repository serves as educational material for advanced robotics engineering courses, focusing on real-world implementation of Model Predictive Control (MPC) and Deep Reinforcement Learning (DRL) in ROS 2 environments.

Overview

This repository provides hands-on examples of two fundamental paradigms in modern robotics:

  1. Optimization-based Control: Model Predictive Control for real-time trajectory planning and control
  2. End-to-End Learning: Deep Reinforcement Learning for policy learning in simulated environments

Both approaches are implemented using ROS 2 (Robot Operating System 2) with practical examples that can be extended for research and industrial applications.

Repository Structure

Robot-Learning-Lab-Examples/
├── 01_Optimization/                   # Model Predictive Control Project
│   ├── material/                      # Visual aids and documentation
│   ├── ros2_ws/                       # ROS 2 workspace
│   │   ├── notebook/                  # Jupyter notebooks for MPC theory
│   │   └── src/
│   │       ├── gazebo_ros2/           # Gazebo simulation wrapper
│   │       ├── mpc_local_costmap/     # Local costmap configuration
│   │       └── mpc_local_planner/     # MPC controller implementation
│   ├── docker_install.md              # Docker setup instructions
│   ├── INSTALL.md                     # Installation guide
│   └── README.md                      # Detailed MPC documentation
│
├── 02_End-to-End_Learning/            # Deep Reinforcement Learning Project
│   ├── material/                      # Visual aids and documentation
│   ├── ros2_ws/                       # ROS 2 workspace
│   │   └── src/
│   │       ├── car_racing/            # CarRacing-v2 environment
│   │       ├── cartpole/              # CartPole-v1 environment
│   │       ├── lunar_lander/          # LunarLander-v2 environment
│   │       └── reinforcement_learning/# Core RL framework
│   │           ├── rl_core/           # Base RL components
│   │           ├── rl_msgs/           # ROS 2 message definitions
│   │           └── rl_trainers/       # Training algorithms
│   ├── docker_install.md              # Docker setup instructions
│   ├── INSTALL.md                     # Installation guide
│   └── README.md                      # Detailed DRL documentation
│
└── LICENSE

Lab Contents

01_Optimization: Model Predictive Control

Learning Objectives:

  • Understand optimization principles in robotics (localization, planning, control)
  • Implement real-time MPC for mobile robot navigation
  • Integrate with ROS 2 navigation stack (Nav2)
  • Handle constraints and cost functions in trajectory optimization
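Concretely, these objectives come down to repeatedly solving a finite-horizon optimal control problem at each control step. The formulation below is the standard one; the weights Q and R, the dynamics f, and the input bounds are generic placeholders, not the lab's exact parameters:

```latex
\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1}
    \left( \|x_k - x_k^{\mathrm{ref}}\|_Q^2 + \|u_k\|_R^2 \right)
\quad \text{s.t.} \quad
x_{k+1} = f(x_k, u_k), \quad
u_{\min} \le u_k \le u_{\max}, \quad
x_0 = x_{\text{current}}
```

Only the first optimized control u_0 is applied before the problem is re-solved from the new state (receding horizon).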

Key Components:

  • MPC local planner with configurable cost functions
  • Integration with TurtleBot3 in Gazebo simulation
  • Real-time optimization using PyTorch
  • Local costmap integration for obstacle avoidance
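As a sketch of the idea behind these components (not the repository's actual `mpc_local_planner` implementation), the following minimal receding-horizon step optimizes a control sequence for a unicycle model using PyTorch autograd. The dynamics model, horizon length, cost weights, and goal are illustrative assumptions:

```python
import torch

def rollout(x0, controls, dt=0.1):
    """Integrate unicycle dynamics (x, y, theta) over the horizon."""
    states, x = [x0], x0
    for u in controls:  # u = (linear velocity v, angular velocity omega)
        x = x + dt * torch.stack((u[0] * torch.cos(x[2]),
                                  u[0] * torch.sin(x[2]),
                                  u[1]))
        states.append(x)
    return torch.stack(states)

def mpc_step(x0, goal, horizon=20, iters=50):
    """One receding-horizon step: optimize the sequence, return the first control."""
    u = torch.zeros(horizon, 2, requires_grad=True)
    opt = torch.optim.Adam([u], lr=0.1)
    for _ in range(iters):
        opt.zero_grad()
        traj = rollout(x0, u)
        goal_cost = ((traj[:, :2] - goal) ** 2).sum()   # track the goal position
        effort_cost = 1e-2 * (u ** 2).sum()             # penalize control effort
        (goal_cost + effort_cost).backward()
        opt.step()
        # box constraints on u could be enforced here, e.g. with torch.clamp
    return u[0].detach()  # apply only the first control, then re-solve

u0 = mpc_step(torch.tensor([0.0, 0.0, 0.0]), torch.tensor([1.0, 1.0]))
```

Obstacle terms from the local costmap would enter as additional penalty terms in the same loss, which is what makes the cost function configurable.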

Prerequisites: ROS 2 (Humble/Jazzy), PyTorch, PyPose

02_End-to-End_Learning: Deep Reinforcement Learning

Learning Objectives:

  • Implement DRL algorithms (PPO, SAC) for robotics applications
  • Design custom environments using Gymnasium
  • Understand sim-to-real transfer considerations
  • Apply vectorized training for efficient policy learning

Key Components:

  • Modular RL framework with plug-in environments
  • Three benchmark environments: CarRacing, CartPole, LunarLander
  • GPU-accelerated training with vectorized environments
  • ROS 2 integration for real robot deployment

Prerequisites: ROS 2 (Humble), PyTorch with CUDA, Gymnasium

Quick Start

Prerequisites

  • Ubuntu 22.04+ with ROS 2 Humble or Jazzy
  • Docker and NVIDIA Container Toolkit (recommended)
  • CUDA-compatible GPU (for DRL examples)

Installation Options

  1. Docker (Recommended): Follow the docker_install.md and nvidia_container_toolkit_install.md guides in each lab directory
  2. Local Installation: Follow the INSTALL.md guides in each lab directory

Running Examples

MPC Lab:

cd 01_Optimization/ros2_ws
colcon build --symlink-install
source install/setup.bash
ros2 launch mpc_local_planner navigation_launch.py

DRL Lab:

cd 02_End-to-End_Learning/ros2_ws
colcon build --symlink-install
source install/setup.bash
ros2 launch car_racing car_racing.launch.py

Educational Context

This repository is designed for master's-level robotics engineering courses, providing:

  • Theoretical Foundation: Mathematical background with practical implementation
  • Industry Relevance: Real-world applicable examples using standard robotics frameworks
  • Research Readiness: Extensible codebase suitable for thesis projects
  • Best Practices: Professional software engineering practices in robotics
