
🚦 LLM-Assisted Light (LA-Light)


Official implementation of LLM-Assisted Light: Leveraging Large Language Model Capabilities for Human-Mimetic Traffic Signal Control in Complex Urban Environments.

📢 Latest News

  • [July 2025] Introducing VLMLight: Our next-generation framework featuring image-based traffic signal control using Vision-Language Models (VLMs) for enhanced scene understanding and real-time decision-making.
  • [August 2023] We have migrated the simulation platform used in this project from Aiolos to TransSimHub (TSHub). We would like to express our sincere gratitude to our colleagues at SenseTime, @KanYuheng (阚宇衡), @MaZian (马子安), and @XuChengcheng (徐承成) (in alphabetical order) for their valuable contributions. The development of TransSimHub (TSHub) is a continuation of the work done on Aiolos.

🧩 Core Framework

Five-stage hybrid decision-making for human-AI collaborative traffic control (a minimal sketch follows the list):

  1. Task Planning: LLM defines traffic management role
  2. Tool Selection: Dynamically invokes perception & decision tools
  3. Environment Interaction: Real-time traffic data collection
  4. Data Analysis: Decision unit generates control strategies
  5. Execution Feedback: Implements decisions with explainable justifications
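
For intuition, one control step through these five stages can be summarized as below. This is a minimal Python sketch: the function, method, and tool names are illustrative assumptions, not this repository's actual API.

# Minimal sketch of one LA-Light control step; every name below is an
# illustrative assumption, not the actual API of this repository.
def la_light_step(llm, toolbox, env):
    # 1. Task Planning: prime the LLM with its traffic-management role.
    plan = llm.plan(role="traffic signal controller", goal="reduce delay")
    # 2. Tool Selection: the LLM picks which perception/decision tools to call.
    selected = llm.select_tools(plan, available=toolbox.names())
    # 3. Environment Interaction: the chosen tools query the live simulation.
    observations = {name: toolbox.invoke(name, env) for name in selected}
    # 4. Data Analysis: the decision unit turns observations into a strategy.
    action, rationale = llm.decide(plan, observations)
    # 5. Execution Feedback: apply the decision and keep its explanation.
    env.set_phase(action)
    return action, rationale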

🧪 Quick Validation

🛠️ Installation

Install TransSimHub:

git clone https://github.com/Traffic-Alpha/TransSimHub.git
cd TransSimHub
pip install -e ".[all]"
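
A quick way to confirm the install succeeded (assuming the package's import name is tshub, which may differ depending on the TransSimHub version):

python -c "import tshub; print('TransSimHub OK')"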

🤖 RL Model Training & Evaluation

For training and evaluating the RL model, refer to TSCRL. Use the following command to start training:

python train_rl_agent.py

The RL Result directory contains the trained models and training results. Evaluate a trained model's performance with:

python eval_rl_agent.py
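
For reference, evaluating an RL traffic-signal agent typically reduces to the loop below. This is a generic sketch assuming a stable-baselines3 agent and a Gymnasium-style environment; the agent class, checkpoint path, and environment here are assumptions, not this repository's actual code (see TSCRL for the real setup).

# Generic RL evaluation loop; the stable-baselines3/Gymnasium usage is an
# assumption for illustration, not this repository's actual code.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")          # stand-in for the TSHub traffic env
model = PPO.load("models/rl_agent")    # hypothetical checkpoint path

obs, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += float(reward)
    done = terminated or truncated
print(f"episode return: {total_reward}")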

🧠 Pure LLM Inference

To use the LLM directly for inference, without invoking any tools, run the following script. Here --env_name selects the intersection scenario, --phase_num sets the number of signal phases, and --detector_break simulates a failed detector on the given movement:

python llm.py --env_name '3way' --phase_num 3 --detector_break 'E0--s'
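
Conceptually, tool-free inference is a single prompt/response exchange per control step. The sketch below is an illustrative assumption: the OpenAI client usage, prompt wording, and parsing are ours, not taken from this repository's llm.py.

# Minimal tool-free LLM decision step; the OpenAI client usage and prompt
# are illustrative assumptions, not taken from this repository's llm.py.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def choose_phase(state_description: str, phase_num: int) -> int:
    prompt = (
        "You control a traffic signal. Current intersection state:\n"
        f"{state_description}\n"
        f"Reply with a single phase index in [0, {phase_num - 1}]."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return int(resp.choices[0].message.content.strip())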

🔀 LA-Light Joint Decision-Making

To test LA-Light, run the following script. This test randomly generates congestion on edge E1 and disables the sensor on the E2--s movement:

python llm_rl.py --env_name '4way' --phase_num 4 --edge_block 'E1' --detector_break 'E2--s'

The following video shows the effect of running this test. Each decision made by LA-Light involves multiple tool invocations, with intermediate reasoning over each tool's results, culminating in a final decision and explanation.

LLM_for_TSC_README.webm

Due to the video length limit, we captured only part of the first decision-making process, including the following (a sketch of such tool calls appears after the list):

  • Action 1: Obtaining the intersection layout, the number of lanes, and lane functions (turn left, go straight, or turn right) for each edge.
  • Action 3: Obtaining the occupancy of each edge. The straight movement on -E3 shows a higher occupancy rate, consistent with the simulation. At this point, LA-Light can use tools to obtain real-time road-network information.
  • Final Decision and Explanation: Based on a series of results, LA-Light provides the final decision and explanation.
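
The tool calls referenced above can be thought of as plain functions over the simulation state. The sketch below uses hypothetical names and return shapes to illustrate the pattern; it is not this repository's actual tool interface.

# Hypothetical perception tools in the spirit of Actions 1 and 3; the names
# and data shapes are illustrative assumptions, not this repository's API.
def get_intersection_layout(state: dict) -> dict:
    # Action 1: lanes per edge and their movements (left/straight/right).
    return {edge: info["lane_movements"] for edge, info in state["edges"].items()}

def get_edge_occupancy(state: dict) -> dict:
    # Action 3: per-edge occupancy, so congested approaches (e.g. -E3) stand out.
    return {edge: info["occupancy"] for edge, info in state["edges"].items()}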

🎥 Scenario Demos

scenario1.mp4

Examples of LA-Light Utilizing Tools to Control Traffic Signals (Normal Scenario)

scenario_2.mp4

Examples of LA-Light Utilizing Tools to Control Traffic Signals (Emergency Vehicle (EMV) Scenario)

📜 Citation

If you find this work useful, please cite our paper:

@article{wang2024llm,
  title={LLM-Assisted Light: Leveraging Large Language Model Capabilities for Human-Mimetic Traffic Signal Control in Complex Urban Environments},
  author={Wang, Maonan and Pang, Aoyu and Kan, Yuheng and Pun, Man-On and Chen, Chung Shue and Huang, Bo},
  journal={arXiv preprint arXiv:2403.08337},
  year={2024}
}

🤝 Open-Source Foundations

This project stands on the shoulders of open-source giants, notably TransSimHub (TSHub).

📮 Contact

If you have any questions, please open an issue on GitHub.
