· Paper · Code · Webpage · Hugging Face
This repository contains the PyTorch implementation of "BodyGen: Advancing Towards Efficient Embodiment Co-Design." (ICLR 2025, Spotlight). Here is our poster.
Let's start with Python 3.9. We recommend creating a conda env:
conda create -n BodyGen python=3.9 mesalib glew glfw -c conda-forge -y
conda activate BodyGen
Install mujoco-py following the instructions here. For MuJoCo, you can use the script below:
#!/bin/bash
sudo apt-get update && sudo apt-get install -y wget tar libosmesa6-dev libglew-dev libgl1-mesa-glx libglfw3 patchelf cmake
sudo ln -s /usr/lib/x86_64-linux-gnu/libGL.so.1 /usr/lib/x86_64-linux-gnu/libGL.so
# Set up MuJoCo
USER_DIR=$HOME
wget -c "https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz"
mkdir -p $USER_DIR/.mujoco
cp mujoco210-linux-x86_64.tar.gz $USER_DIR/mujoco.tar.gz
rm mujoco210-linux-x86_64.tar.gz
tar -zxvf $USER_DIR/mujoco.tar.gz -C $USER_DIR/.mujoco
# Update environment variables
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$USER_DIR/.mujoco/mujoco210/bin" >> ~/.bashrc
echo "export MUJOCO_PY_MUJOCO_PATH=$USER_DIR/.mujoco/mujoco210" >> ~/.bashrc
source ~/.bashrc
Set the following environment variable to avoid problems with multiprocess trajectory sampling:
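To confirm the MuJoCo files landed where mujoco-py expects them, a quick sanity check may help (the `mujoco_check` helper below is ours, not part of the repo):

```shell
# Check that the MuJoCo directory from the setup script above exists.
mujoco_check() {
    local dir="${MUJOCO_PY_MUJOCO_PATH:-$HOME/.mujoco/mujoco210}"
    if [ -d "$dir/bin" ]; then
        echo "ok: $dir"
    else
        echo "missing: $dir (re-run the setup script above)"
    fi
}
mujoco_check
```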
export OMP_NUM_THREADS=1
(Optional) For macOS users, you can follow README_FOR_MAC.md to install MuJoCo on M1/M2/M3 Macs, which is helpful for visualizing embodied agents.
pip install -r requirements.txt
Note that you may have to follow the setup instructions at https://pytorch.org/ to install PyTorch on your own machine.
Please refer to this website for more visualization results. If you want to visualize the pretrained models, please refer to the following section.
We also provide pretrained models in BodyGen/pretrained_models for visualization.
- You can download the pretrained models from Google Drive.
- Once the pretrained_models.zip file is downloaded, unzip it under the BodyGen folder of this repo:
unzip pretrained_models.zip
After you unzip the file, an example directory hierarchy is:
assets/
design_opt/
...
pretrained_models/
|-- cheetah/
|-- crawler/
|-- ...
|-- walker-regular/
...
scripts/
Our pretrained models are also available on Hugging Face. You can download them and place the pretrained_models folder under the root directory (./BodyGen) of this repo.
If you have a GUI display, you can run the following command to visualize the pretrained model:
python design_opt/eval.py --train_dir <path_of_model_folder>
For example, you can use pretrained_models to visualize the cheetah agent co-designed by BodyGen:
python design_opt/eval.py --train_dir /Path/to/BodyGen/pretrained_models/cheetah
For Linux users, if you encounter the error ERROR: GLEW initalization error: Missing GL version, try the following commands:
sudo apt-get install -y libglew-dev
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so
Press S to slow the agent down, and F to speed it up.
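If you have downloaded several pretrained models, a small wrapper (ours, not part of the repo) can print the eval command for each model folder; review the list, or pipe it to bash to run them all:

```shell
# Print one eval command per pretrained model folder, e.g.
#   bodygen_eval_cmds | bash
bodygen_eval_cmds() {
    local root="${1:-pretrained_models}"
    local d
    for d in "$root"/*/; do
        [ -d "$d" ] || continue
        echo "python design_opt/eval.py --train_dir $d"
    done
}
bodygen_eval_cmds
```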
cd BodyGen
chmod 777 scripts/Run_BodyGen.sh
./scripts/Run_BodyGen.sh
Use the script scripts/Run_BodyGen.sh, which contains preconfigured arguments and hyperparameters for all the experiments in the paper. Experiments use Hydra to manage configuration.
Visualizing results requires wandb: set your project name via the project key in BodyGen/design_opt/conf/config.yaml.
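If you prefer not to sync to a wandb account while testing, wandb also supports offline logging via an environment variable (this is a standard wandb feature, not something specific to this repo):

```shell
# Store wandb runs locally instead of uploading them; they can be
# uploaded later with `wandb sync`.
export WANDB_MODE=offline
```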
As an example of how to run, this runs BodyGen on the crawler environment:
EXPT="crawler"
OMP_NUM_THREADS=1 python -m design_opt.train -m cfg=$EXPT group=$EXPT
Replace crawler with one of {crawler, terraincrosser, cheetah, swimmer, glider-regular, glider-medium, glider-hard, walker-regular, walker-medium, walker-hard} to train in other environments.
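To queue up every environment from the paper, a small helper (ours, not part of the repo) can print the training command for each one; drop the echo, or pipe the output to bash, to actually launch the runs:

```shell
# Print the training command for each of the ten environments.
bodygen_train_cmds() {
    local envs="crawler terraincrosser cheetah swimmer glider-regular glider-medium glider-hard walker-regular walker-medium walker-hard"
    local e
    for e in $envs; do
        echo "OMP_NUM_THREADS=1 python -m design_opt.train -m cfg=$e group=$e"
    done
}
bodygen_train_cmds
```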
- OMP_NUM_THREADS=1 is essential to prevent CPU ops from hanging.
- The environment is selected with the cfg= flag; each value corresponds to a YAML file in BodyGen/design_opt/cfg. See that folder for the list of available experiments.
- Other hyperparameters are explained in BodyGen/design_opt/conf/config.yaml and in our paper.
- Our initial code is based on Transform2Act. Thanks for their great work and the discussions with the authors.
- We also refer to Neural Graph Evolution (NGE) and One-Policy-to-Control-Them-All during our implementation. Thanks for their interesting work.
- The initial designs are based on Transform2Act, OpenAI Gym, and DeepMind dm_control. All the algorithms are evaluated with the same sets of initial designs.
- The backend engine is based on MuJoCo, and we plan to bump to the latest version of the official MuJoCo bindings (>3.0.1) in the future.
If you find our work useful in your research, please consider citing:
@inproceedings{
lu2025bodygen,
title={BodyGen: Advancing Towards Efficient Embodiment Co-Design},
author={Haofei Lu and Zhe Wu and Junliang Xing and Jianshu Li and Ruoyu Li and Zhe Li and Yuanchun Shi},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=cTR17xl89h}
}
Please see the license for further details.

