This project implements a reinforcement learning-based robot navigation system that enables autonomous navigation in complex environments with obstacles. The work was done as part of my Master's thesis submission.
The code uses IRSim to train reinforcement learning policies (with Stable-Baselines3) that can be deployed both in the simulator and on real hardware. We have deployed a policy on the physical QBot platform; the ROS-based hardware implementation can be found in the ros-deployment/ folder.
The system enables a robot to navigate from a start position to a goal while avoiding obstacles. The reinforcement learning agent learns to output linear and angular velocities based on laser scan observations. The trained policy demonstrates robust navigation behavior in both simulated and real-world environments.
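For intuition, the policy's action (a linear velocity v and an angular velocity ω) moves a differential-drive robot according to standard unicycle kinematics. The sketch below is illustrative and not taken from this codebase:

```python
import math

def step_unicycle(x, y, theta, v, omega, dt):
    """Advance a unicycle-model robot pose by one timestep.

    x, y     -- position (m)
    theta    -- heading (rad)
    v, omega -- linear (m/s) and angular (rad/s) velocity commands
    dt       -- timestep (s)
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    # Keep the heading wrapped to [-pi, pi)
    theta = (theta + math.pi) % (2 * math.pi) - math.pi
    return x, y, theta

# Driving straight along +x for one second at 0.5 m/s:
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = step_unicycle(*pose, v=0.5, omega=0.0, dt=0.1)
# pose is now roughly (0.5, 0.0, 0.0)
```

The simulator integrates these commands each step; the RL agent only has to choose good (v, ω) pairs from the laser scan.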
- Clone the repository:
git clone https://github.com/harshmahesheka/rl-nav
cd rl-nav
- Install the required dependencies (the code was tested with Python 3.10):
pip install -r requirements.txt
To train a new model:
python train.py --num-envs 7 \
--total-timesteps 200000 \
--model-path models/td3_robot_nav_model \
--tensorboard-log ./td3_robot_nav_tensorboard/ \
--eval-episodes 10 \
--render
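The flags above suggest train.py parses its options roughly as follows. This is a hypothetical reconstruction for reference; the actual argument names match the command above, but the defaults and help text in the repository may differ:

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of train.py's CLI; defaults may differ.
    p = argparse.ArgumentParser(description="Train a TD3 navigation policy")
    p.add_argument("--num-envs", type=int, default=1,
                   help="number of parallel simulation environments")
    p.add_argument("--total-timesteps", type=int, default=200_000,
                   help="total environment steps to train for")
    p.add_argument("--model-path", default="models/td3_robot_nav_model",
                   help="where to save the trained model")
    p.add_argument("--tensorboard-log", default="./td3_robot_nav_tensorboard/")
    p.add_argument("--eval-episodes", type=int, default=10)
    p.add_argument("--render", action="store_true",
                   help="render the simulator during training")
    return p

args = build_parser().parse_args(
    ["--num-envs", "7", "--total-timesteps", "200000", "--render"]
)
```

Running more parallel environments (`--num-envs`) speeds up data collection at the cost of memory, while `--total-timesteps` trades training time for policy quality.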
To evaluate a trained model:
python run.py --model-path models/your_model.zip \
--num-episodes 10
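Conceptually, evaluation rolls out the trained policy for a fixed number of episodes and averages the episodic return. Below is a minimal sketch of that loop; the stub policy and environment are stand-ins for the Stable-Baselines3 model and the IRSim wrapper, not code from this repository:

```python
def evaluate(policy, env, num_episodes=10):
    """Average episodic return of a deterministic policy.

    `policy(obs) -> action` and `env` (with reset()/step()) are
    stand-ins for the loaded model and simulation environment.
    """
    returns = []
    for _ in range(num_episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done = env.step(policy(obs))
            total += reward
        returns.append(total)
    return sum(returns) / len(returns)

# Tiny stub environment: episodes last 3 steps, reward 1.0 per step.
class StubEnv:
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, action):
        self.t += 1
        return 0.0, 1.0, self.t >= 3

mean_return = evaluate(lambda obs: 0.0, StubEnv(), num_episodes=5)
# mean_return == 3.0 for this stub
```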
- train.py: Contains the custom Gym environment for training
- sim.py: Core simulation environment wrapper
- run.py: Training and evaluation scripts
- models/: Directory for storing trained models
- robot_world.yaml: World configuration file
- ros-deployment/: Package for deploying trained policies on a physical robot
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
- Stable-Baselines3 for the RL algorithms
- Gym for the environment interface
- IRSim for the simulation environment
- DRL-IRSim for code inspiration