
🧠 AetherSim

🚀 Introduction

AetherSim is a reinforcement learning (RL) integrated training and visualization platform built with PyQt. Developed as a graduation project, it aims to provide an intuitive, modular, and interactive interface for robot control training, simulation monitoring, and model evaluation.

AetherSim bridges the gap between reinforcement learning algorithms and robot simulation environments, making it easier for researchers and students to run training experiments and visualize results efficiently.


🎯 Features

  • 🧩 Task Creation – Define custom reinforcement learning environments and parameters.
  • ⚙️ Parameter Configuration – Adjust RL hyperparameters such as learning rate, reward structure, and exploration rate through the GUI.
  • 📈 Training Monitoring – Real-time visualization of training progress, loss curves, and performance metrics.
  • 🤖 Model Evaluation – Load and compare different trained models, view performance statistics and behavior.
  • 💾 Model & Log Management – Automatically save model weights, training logs, and replay data.
  • 🧮 Data Analysis Tools – Integrated plotting and analytics tools for interpreting agent performance.
  • 🪟 PyQt Frontend – Clean, interactive, and responsive GUI for seamless experiment management.
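As a rough illustration of the parameter-configuration idea, a task's hyperparameters could be gathered into a single dictionary and sanity-checked before training. The key names and ranges below are hypothetical assumptions for the sketch, not AetherSim's actual config schema:

```python
# Hypothetical hyperparameter set for one training task.
# Keys and values are illustrative; AetherSim's real schema may differ.
default_config = {
    "algorithm": "PPO",
    "learning_rate": 3e-4,      # optimizer step size
    "gamma": 0.99,              # discount factor for future rewards
    "exploration_rate": 0.1,    # epsilon for epsilon-greedy exploration
    "reward_weights": {         # relative weight of each reward term
        "progress": 1.0,
        "collision": -5.0,
        "energy": -0.01,
    },
    "total_timesteps": 1_000_000,
}

def validate_config(cfg: dict) -> dict:
    """Clamp a few values into sane ranges before training starts."""
    cfg = dict(cfg)
    cfg["learning_rate"] = max(1e-6, min(cfg["learning_rate"], 1.0))
    cfg["gamma"] = max(0.0, min(cfg["gamma"], 1.0))
    return cfg
```

Validating on load keeps a typo in the GUI (say, a learning rate of 5.0) from silently breaking a long training run.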

🧠 Core Technologies

| Category | Tools / Libraries |
| --- | --- |
| GUI Framework | PyQt6 |
| RL Algorithms | PyTorch, Stable-Baselines3, custom agents |
| Simulation Environment | Gym / Gazebo / Custom Sim |
| Backend | Python 3.10+ |
| Data Visualization | Matplotlib, Seaborn, Plotly |
| OS Compatibility | Ubuntu 20.04 / Windows 11 |
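The Gym-style interface these libraries share can be sketched without importing any of them. This toy one-dimensional environment is a hypothetical stand-in (a real environment would subclass `gymnasium.Env`), but it shows the `reset`/`step` contract that Stable-Baselines3 and custom agents both rely on:

```python
import random

class Toy1DEnv:
    """Minimal Gym-style environment: move left/right toward a goal at x=5.
    Illustrative only; not part of AetherSim's codebase."""

    def __init__(self, goal: int = 5, max_steps: int = 20):
        self.goal, self.max_steps = goal, max_steps

    def reset(self, seed=None):
        if seed is not None:
            random.seed(seed)
        self.x, self.t = 0, 0
        return self.x, {}          # observation, info dict

    def step(self, action: int):
        self.x += 1 if action == 1 else -1
        self.t += 1
        terminated = self.x == self.goal          # reached the goal
        truncated = self.t >= self.max_steps      # ran out of time
        reward = 1.0 if terminated else -0.01     # small per-step penalty
        return self.x, reward, terminated, truncated, {}
```

A policy that always picks action 1 reaches the goal in five steps; the same loop shape (reset, then step until terminated or truncated) drives every training backend listed above.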

🧩 Architecture Overview

AetherSim/
│
├── README.md
├── requirements.txt
├── setup.py 
│
├── src/
│   ├── main.py
│   │
│   ├── gui/
│   │   ├── __init__.py
│   │   ├── main_window.py
│   │   ├── task_manager.py
│   │   ├── training_monitor.py
│   │   └── model_evaluator.py
│   │
│   ├── rl/
│   │   ├── __init__.py
│   │   ├── dqn.py
│   │   ├── ppo.py
│   │   ├── sac.py
│   │   └── trainer.py
│   │
│   ├── env/
│   │   ├── __init__.py
│   │   ├── gazebo_interface.py
│   │   ├── pybullet_interface.py
│   │   └── robot_models.py
│   │
│   ├── data/
│   │   ├── __init__.py
│   │   ├── logger.py
│   │   ├── analyzer.py
│   │   └── visualizer.py
│   │
│   ├── utils/
│   │   ├── __init__.py
│   │   ├── config.py
│   │   └── helpers.py
│   │
│   └── cpp_modules/
│       ├── __init__.py
│       └── high_speed_control/
│
├── tests/
│   ├── test_rl.py
│   ├── test_gui.py
│   └── test_env.py
│
├── assets/
│   ├── icons/
│   └── ui_files/
│
└── logs/


⚡ Installation

1. Clone the repository

git clone https://github.com/HongyiHao-SXIT/AetherSim.git
cd AetherSim

2. Create a virtual environment

conda create -n aethersim python=3.10 -y
conda activate aethersim

3. Install dependencies

pip install -r requirements.txt

🧩 Run the Platform

python src/main.py

Once started, the PyQt interface appears, allowing you to:

  • Create a new RL training task
  • Configure hyperparameters
  • Monitor training performance in real time
  • Save and load models
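The curves shown in the training monitor ultimately come from per-episode logs. A minimal logger along these lines is sketched below; the class name and CSV layout are assumptions for illustration, not the actual API of `src/data/logger.py`:

```python
import csv
import io

class TrainingLogger:
    """Append one CSV row per training episode; illustrative sketch only."""

    def __init__(self, stream):
        self.writer = csv.DictWriter(stream,
                                     fieldnames=["episode", "reward", "loss"])
        self.writer.writeheader()

    def log(self, episode: int, reward: float, loss: float):
        self.writer.writerow({"episode": episode,
                              "reward": reward,
                              "loss": loss})

# In-memory demo; a real run would open a file under logs/ instead.
buf = io.StringIO()
logger = TrainingLogger(buf)
for ep in range(3):
    logger.log(ep, reward=ep * 1.5, loss=1.0 / (ep + 1))
```

Writing plain CSV keeps the logs trivially consumable by the plotting stack (Matplotlib, Seaborn, Plotly) listed under Core Technologies.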

🧪 Example Use Case

In a typical setup, AetherSim can train a quadruped robot (robot dog) for safe obstacle avoidance using reinforcement learning. It also supports integrating existing frameworks such as ABS, ASAP, and YOLO to improve perception and control.
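One common way to express such an obstacle-avoidance objective is a shaped reward: reward forward progress, penalize proximity to obstacles, and heavily penalize collisions. The function and all coefficients below are illustrative assumptions, not values taken from AetherSim:

```python
def obstacle_avoidance_reward(progress: float,
                              min_obstacle_dist: float,
                              collided: bool,
                              safe_dist: float = 0.5) -> float:
    """Shaped reward for a quadruped; all coefficients are illustrative."""
    reward = 1.0 * progress                  # reward forward progress
    if min_obstacle_dist < safe_dist:        # penalty inside the safety margin
        reward -= 2.0 * (safe_dist - min_obstacle_dist)
    if collided:
        reward -= 10.0                       # hard collision penalty
    return reward
```

Making the collision penalty dominate the progress term is what pushes the learned policy toward *safe* avoidance rather than reckless speed.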


📚 Future Work

  • Integration with ROS2 Foxy for real robot control.
  • Support for multi-agent RL.
  • Expanded environment library for diverse simulation tasks.
  • Cloud training and result-sharing features.

👨‍💻 Author

Hongyi Hao
Graduation Project (2025): Reinforcement Learning & PyQt Integrated Training Platform
📧 Lanyi_adict@outlook.com
🌐 github.com/HongyiHao-SXIT


🧾 License

This project is open-source under the MIT License.
