AetherSim is a reinforcement learning (RL) training and visualization platform built with PyQt. Developed as a graduation thesis project, it aims to provide an intuitive, modular, and interactive interface for robot control training, simulation monitoring, and model evaluation.
AetherSim bridges the gap between reinforcement learning algorithms and robot simulation environments, making it easier for researchers and students to run training experiments and visualize results efficiently.
- 🧩 Task Creation – Define custom reinforcement learning environments and parameters.
- ⚙️ Parameter Configuration – Adjust RL hyperparameters such as learning rate, reward structure, and exploration rate through the GUI.
- 📈 Training Monitoring – Real-time visualization of training progress, loss curves, and performance metrics.
- 🤖 Model Evaluation – Load and compare different trained models, view performance statistics and behavior.
- 💾 Model & Log Management – Automatically save model weights, training logs, and replay data.
- 🧮 Data Analysis Tools – Integrated plotting and analytics tools for interpreting agent performance.
- 🪟 PyQt Frontend – Clean, interactive, and responsive GUI for seamless experiment management.
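The hyperparameters listed above (learning rate, reward structure, exploration rate) are typically bundled into a single configuration object that the GUI edits and the trainer consumes. A minimal sketch, assuming a hypothetical `TrainConfig` that is illustrative only and not AetherSim's actual API:

```python
from dataclasses import dataclass, asdict

# Hypothetical bundle of the hyperparameters the GUI might expose;
# field names and defaults are assumptions, not AetherSim's real config.
@dataclass
class TrainConfig:
    algo: str = "ppo"
    learning_rate: float = 3e-4
    gamma: float = 0.99           # reward discount factor
    exploration_rate: float = 0.1
    total_timesteps: int = 100_000

    def to_dict(self) -> dict:
        """Serialize for logging or saving alongside model checkpoints."""
        return asdict(self)

# The GUI would override only the fields the user changes.
cfg = TrainConfig(learning_rate=1e-3)
print(cfg.to_dict())
```

Keeping the config as a plain dataclass makes it trivial to serialize into the training logs, so every saved model carries the exact settings that produced it.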
| Category | Tools / Libraries |
|---|---|
| GUI Framework | PyQt6 |
| RL Algorithms | PyTorch, Stable-Baselines3, custom agents |
| Simulation Environment | Gym / Gazebo / Custom Sim |
| Backend | Python 3.10+ |
| Data Visualization | Matplotlib, Seaborn, Plotly |
| OS Compatibility | Ubuntu 20.04 / Windows 11 |
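A `requirements.txt` consistent with the stack above might look like the following; this is an illustrative, unpinned sketch, not the project's actual file:

```text
PyQt6
torch
stable-baselines3
gym
matplotlib
seaborn
plotly
```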
```
AetherSim/
│
├── README.md
├── requirements.txt
├── setup.py
│
├── src/
│   ├── main.py
│   │
│   ├── gui/
│   │   ├── __init__.py
│   │   ├── main_window.py
│   │   ├── task_manager.py
│   │   ├── training_monitor.py
│   │   └── model_evaluator.py
│   │
│   ├── rl/
│   │   ├── __init__.py
│   │   ├── dqn.py
│   │   ├── ppo.py
│   │   ├── sac.py
│   │   └── trainer.py
│   │
│   ├── env/
│   │   ├── __init__.py
│   │   ├── gazebo_interface.py
│   │   ├── pybullet_interface.py
│   │   └── robot_models.py
│   │
│   ├── data/
│   │   ├── __init__.py
│   │   ├── logger.py
│   │   ├── analyzer.py
│   │   └── visualizer.py
│   │
│   ├── utils/
│   │   ├── __init__.py
│   │   ├── config.py
│   │   └── helpers.py
│   │
│   └── cpp_modules/
│       ├── __init__.py
│       └── high_speed_control/
│
├── tests/
│   ├── test_rl.py
│   ├── test_gui.py
│   └── test_env.py
│
├── assets/
│   ├── icons/
│   └── ui_files/
│
└── logs/
```
```bash
git clone https://github.com/HongyiHao-SXIT/AetherSim.git
cd AetherSim
conda create -n aethersim python=3.10 -y
conda activate aethersim
pip install -r requirements.txt
python src/main.py
```

Once started, the PyQt interface will appear, allowing you to:
- Create a new RL training task
- Configure hyperparameters
- Monitor training performance in real time
- Save and load models
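The save/load step above can be sketched with plain `pickle`; the `Policy` class below is a hypothetical stand-in for a trained model, not AetherSim's actual checkpoint format:

```python
import os
import pickle
import tempfile

# Hypothetical minimal policy; a real model would hold network weights.
class Policy:
    def __init__(self, weights):
        self.weights = weights

    def act(self, obs):
        # Toy linear policy: weighted sum of the observation vector.
        return sum(w * o for w, o in zip(self.weights, obs))

policy = Policy(weights=[0.5, -0.2, 1.0])

# Save a checkpoint, as the GUI would at the end of a training run.
ckpt = os.path.join(tempfile.mkdtemp(), "policy.pkl")
with open(ckpt, "wb") as f:
    pickle.dump(policy, f)

# Later, reload it for the Model Evaluation panel.
with open(ckpt, "rb") as f:
    restored = pickle.load(f)

print(restored.act([1.0, 1.0, 1.0]))
```

In practice a PyTorch model would be saved with `torch.save` on its `state_dict` instead, but the round trip (train, checkpoint, reload, evaluate) is the same.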
In a typical setup, AetherSim can train a quadruped robot (robot dog) for safe obstacle avoidance using reinforcement learning. It also supports integrating existing frameworks such as ABS, ASAP, and YOLO to improve perception and control.
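For obstacle avoidance, the reward typically trades forward progress against proximity to obstacles. A minimal sketch with assumed weights and thresholds, illustrative only and not AetherSim's actual reward:

```python
# Illustrative reward shaping for quadruped obstacle avoidance; the
# safe distance and penalty weight are assumptions, not project values.
def avoidance_reward(forward_progress: float,
                     min_obstacle_dist: float,
                     safe_dist: float = 0.5,
                     penalty_weight: float = 2.0) -> float:
    """Reward forward motion, penalize closeness to the nearest obstacle.

    forward_progress: meters advanced this step.
    min_obstacle_dist: distance (m) to the closest obstacle.
    """
    if min_obstacle_dist < safe_dist:
        # Quadratic penalty grows as the robot nears the obstacle.
        penalty = ((safe_dist - min_obstacle_dist) / safe_dist) ** 2
    else:
        penalty = 0.0
    return forward_progress - penalty_weight * penalty

print(avoidance_reward(0.1, 1.0))   # clear path: reward is positive
print(avoidance_reward(0.1, 0.1))   # near obstacle: heavily penalized
```

Tuning `safe_dist` and `penalty_weight` trades off caution against speed; this is exactly the kind of knob the parameter-configuration panel is meant to expose.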
- ✅ Integration with ROS 2 Foxy for real robot control.
- ✅ Support for multi-agent RL.
- ✅ Expanded environment library for diverse simulation tasks.
- ✅ Cloud training and result-sharing features.
Hongyi Hao
Graduation Project (2025)
Reinforcement Learning & PyQt Integrated Training Platform
📧 Lanyi_adict@outlook.com
🌐 [HongyiHao-SXIT](https://github.com/HongyiHao-SXIT)
This project is open-source under the MIT License.