
semantic-RL-inspection

This repository contains the source code for the paper Semantically-driven Deep Reinforcement Learning for Inspection Path Planning. The accompanying video is available at the following link.


Installation

  1. Install Isaac Gym and Aerial Gym Simulator

    Follow the installation instructions provided in the respective repositories.

    ⚠️ Important Note: Change to Argument Parser in Isaac Gym's gymutil.py

    Before installing the Aerial Gym Simulator, you must modify your Isaac Gym installation. Isaac Gym's argument parser rejects command-line arguments it does not recognize, which interferes with the additional arguments required by other learning frameworks. To resolve this, modify line 337 of the gymutil.py file located in the isaacgym folder (a short demonstration of the effect follows these installation steps).

    Change the following line:

    args = parser.parse_args()

    to:

    args, _ = parser.parse_known_args()
  2. Set up the environment

    Once the installation is successful, activate the aerialgym environment:

    cd ~/workspaces/ && conda activate aerialgym
  3. Clone this repository

    Clone the repository by running the following command:

    git clone git@github.com:ntnu-arl/semantic-RL-inspection.git
  4. Install Semantic-RL-Inspection

    Navigate to the cloned repository and install it using the following command:

    cd ~/workspaces/semantic-RL-inspection/
    pip install -e .
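
As noted in step 1, the reason for the parse_known_args change is that argparse's parse_args() aborts on any argument it does not recognize, while parse_known_args() returns the unrecognized ones for other frameworks to consume. A minimal, self-contained sketch (the flag names are illustrative only, not taken from this repository):

import argparse

# Simulate Isaac Gym's parser seeing an extra flag added by an RL framework.
parser = argparse.ArgumentParser()
parser.add_argument("--num_envs", type=int, default=512)

cli = ["--num_envs", "16", "--experiment", "testExperiment"]

# parser.parse_args(cli) would abort with "unrecognized arguments: --experiment testExperiment"
args, unknown = parser.parse_known_args(cli)
print(args.num_envs)  # 16
print(unknown)        # ['--experiment', 'testExperiment'], left for the RL framework to parse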

Running the Examples

The standalone examples, along with a pre-trained RL policy, can be found in the examples directory. The ready-to-use policy from the paper Semantically-driven Deep Reinforcement Learning for Inspection Path Planning is available under examples/pre-trained_network. To evaluate its performance, follow the steps below.

Single Semantic Example

This example demonstrates policy inference in a room-like environment of the kind also used during training, but without any obstacles: only a single semantic object (Emerald Green) is present. To run this example, execute the following commands:

cd ~/workspaces/semantic-RL-inspection/examples/
conda activate aerialgym
bash semantic_example.sh

You should now see the trained policy in action, performing an inspection of the specified semantic object in an obstacle-free environment:

(Video: singleSemantic_Example.mp4)

Single Semantic with Obstacles Example

In this example, the policy runs in the same room-like environment as before, but with 4 obstacles (Tyrian Purple) added alongside the semantic object (Emerald Green). To change the number of obstacles, adjust the configuration in src/config/env/env_object_config.py by changing the value in the following class:

class obstacle_asset_params(asset_state_params):
    num_assets = 4
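
For instance, to double the obstacle count, change the value as follows (8 is purely illustrative; higher values increase GPU memory use):

class obstacle_asset_params(asset_state_params):
    num_assets = 8  # number of obstacle assets to spawn (illustrative value)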

To run this example, execute the following commands:

cd ~/workspaces/semantic-RL-inspection/examples/
conda activate aerialgym
bash semantic_and_obstacles_example.sh

You should now see the trained policy in action, inspecting the semantic object of interest (Emerald Green) while navigating around the 4 obstacles (Tyrian Purple) in the environment:

(Video: semanticAndobstacles_Example1.mp4)

The default viewer is set to follow the agent. To disable this feature and inspect other parts of the environment, press F on your keyboard. After doing so, you will be able to observe the trained policy in action across 16 environments, each containing different semantic objects (Emerald Green) and 4 obstacles (Tyrian Purple):

(Video: semanticAndobstacles_Example2.mp4)

RL Training

Running Training

To train your first semantic-aware inspection policy, use the following commands, which start training with the settings introduced in Semantically-driven Deep Reinforcement Learning for Inspection Path Planning:

conda activate aerialgym
cd ~/workspaces/
python -m rl_training.train_semanticRLinspection --env=inspection_task --train_for_env_steps=100000000 --experiment=testExperiment

By default, the number of environments is set to 512. If your GPU cannot handle this many parallel environments, reduce it by adjusting the num_envs parameter in src/config/task/inspection_task_config.py:

num_envs = 512
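
The command-line conventions above (--train_for_env_steps, --experiment) match those of the Sample Factory framework. Assuming the training stack follows those conventions, you can typically monitor training progress with TensorBoard, for example:

conda activate aerialgym
tensorboard --logdir ./train_dir

The ./train_dir location is an assumption, not confirmed by this repository; check the console output at training startup for the actual summary directory.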

Loading Trained Models

To load a trained checkpoint and perform only inference (no training), follow the steps below; a note on where checkpoints are typically stored follows the list:

  1. For clear visualization (and to avoid rendering overhead), reduce the number of environments (e.g., to 16) and enable the viewer by modifying src/config/task/inspection_task_config.py:

    From:

    num_envs = 512
    use_warp = True
    headless = True

    To:

    num_envs = 16
    use_warp = True
    headless = False
  2. For a better view during inference, consider excluding the top wall of the room-like environments by modifying the src/config/env/env_with_semantic_and_obstacles.py file:

    "top_wall": False, # excluding top wall
  3. Finally, execute the inference script with the following command:

    conda activate aerialgym
    cd ~/workspaces/
    python -m rl_training.enjoy_semanticRLinspection --env=inspection_task --experiment=testExperiment

    The default viewer is set to follow the agent. To disable this feature and inspect other parts of the environment, press F on your keyboard.
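
To locate the saved weights manually, Sample-Factory-style trainers typically write checkpoints under the experiment's folder inside ./train_dir, for example:

ls ./train_dir/testExperiment/

This layout is an assumption based on the framework conventions noted above; the training console output reports the exact checkpoint path.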

Citing

If you reference our work in your research, please cite the following paper:

G. Malczyk, M. Kulkarni and K. Alexis, "Semantically-driven Deep Reinforcement Learning for Inspection Path Planning," accepted for publication in IEEE Robotics and Automation Letters, 2025.

@article{malczyk2025semantically,
  title={Semantically-Driven Deep Reinforcement Learning for Inspection Path Planning},
  author={Malczyk, Grzegorz and Kulkarni, Mihir and Alexis, Kostas},
  journal={IEEE Robotics and Automation Letters},
  year={2025},
  publisher={IEEE}
}

Contact

For inquiries, feel free to reach out to the authors.

This research was conducted at the Autonomous Robots Lab, Norwegian University of Science and Technology (NTNU).

For more information, visit our website.

Acknowledgements

This material was supported by the Research Council of Norway under Award NO-338694.

Additionally, this repository incorporates code and helper scripts from the Aerial Gym Simulator.
