
Continuous Locomotive Crowd Behavior Generation

Inhwan Bae · Junoh Lee · Hae-Gon Jeon
CVPR 2025

Project Page · CVPR Paper · Source Code · 3D Toolkit · Related Works



Generating realistic, continuous crowd behaviors with learned dynamics.


(Left) Time-varying behavior changes; (Right) Real2Sim evaluation in New York City.
More video examples are available on our project page!


Summary: A crowd emitter diffusion model and a state-switching crowd simulator for populating input scene images and generating lifelong crowd trajectories.


🏢🚶‍♂️ Crowd Behavior Generation Benchmark 🏃‍♀️🏠


  • Repurposed Trajectory Datasets: A new benchmark that reuses existing real-world human trajectory datasets, adapting them for crowd trajectory generation.
  • Image-Only Input: Removes the dependency on observed trajectories; a single input image is all that is required to fully populate the scene with crowds.
  • Lifelong Simulation: Generates continuous trajectories where people dynamically enter and exit the scene, replicating the ever-changing real-world crowd dynamics.
  • Two-Tier Evaluation: Assesses performance on both scene-level realism (e.g., density, frequency, coverage, and population metrics) and agent-level accuracy (e.g., kinematics, DTW, diversity, and collision rate); an agent-level example is sketched after this list.
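
As a concrete illustration of the agent-level side, the snippet below sketches a simple collision-rate metric: the fraction of agents that come within a fixed distance of another agent at any frame. The array layout, threshold, and function name are assumptions for illustration only, not the benchmark's exact implementation.

import numpy as np

def collision_rate(trajectories, threshold=0.2):
    """Fraction of agents that come within `threshold` meters of another agent.

    trajectories: array of shape (num_agents, num_frames, 2); NaN entries
    mark frames in which an agent is not present in the scene.
    """
    num_agents = trajectories.shape[0]
    collided = np.zeros(num_agents, dtype=bool)
    for i in range(num_agents):
        for j in range(i + 1, num_agents):
            # Pairwise distance at every frame where both agents are present.
            dist = np.linalg.norm(trajectories[i] - trajectories[j], axis=-1)
            valid = ~np.isnan(dist)
            if valid.any() and dist[valid].min() < threshold:
                collided[i] = collided[j] = True
    return collided.mean()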

🚵 CrowdES Framework 🚵


  • Crowd Emitter: A diffusion-based model that iteratively “emits” new agents by sampling when and where they appear on spatial layouts.
  • Crowd Simulator: A state-switching system that generates continuous trajectories, with agents dynamically switching behavior modes; a sketch of how the emitter and simulator interact follows this list.
  • Controllability & Flexibility: Users can override or customize scene-level and agent-level parameters at runtime.
  • Sim2Real & Real2Sim Capability: The framework can bridge synthetic and real-world scenarios for interdisciplinary research.
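
At a high level, the two components alternate in a rolling loop. The sketch below illustrates this interaction with hypothetical class and method names (sample_appearances, step, exited); it is not the actual CrowdES API.

# Illustrative pseudocode of the emitter/simulator loop; all names are hypothetical.
def generate_crowd(scene_image, emitter, simulator, num_frames):
    active = []    # agents currently in the scene
    finished = []  # completed trajectories
    for t in range(num_frames):
        # Crowd emitter: sample when and where new agents appear on the spatial layout.
        active.extend(emitter.sample_appearances(scene_image, t, active))
        # Crowd simulator: advance each agent one step, switching behavior states as needed.
        for agent in active:
            simulator.step(agent, active, scene_image)
        # Retire agents that have left the scene and keep their trajectories.
        finished.extend(a for a in active if a.exited)
        active = [a for a in active if not a.exited]
    return finished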

🔥 Model Training

Setup

Environment
All models were trained and tested on Ubuntu 20.04 with Python 3.10, PyTorch 2.2.2, and CUDA 12.1. You can install all dependencies with the following command:

pip install -r requirements.txt
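
After installation, a quick sanity check (a minimal sketch, assuming a CUDA-capable GPU) confirms that the expected PyTorch and CUDA builds are in use:

import torch

print(torch.__version__)          # expected: 2.2.2
print(torch.version.cuda)         # expected: 12.1
print(torch.cuda.is_available())  # should be True if the GPU driver is set up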

Dataset
Preprocessed ETH, UCY, SDD and EDIN datasets are released in this repository.

Note

If you want to preprocess the datasets yourself, please download the raw datasets and run the following command:

python utils/preprocess_dataset.py --model_config <path_to_model_config>

# Example
python utils/preprocess_dataset.py --model_config ./configs/model/CrowdES_eth.yaml
python utils/preprocess_dataset.py --model_config ./configs/model/CrowdES_hotel.yaml
python utils/preprocess_dataset.py --model_config ./configs/model/CrowdES_univ.yaml
python utils/preprocess_dataset.py --model_config ./configs/model/CrowdES_zara1.yaml
python utils/preprocess_dataset.py --model_config ./configs/model/CrowdES_zara2.yaml
python utils/preprocess_dataset.py --model_config ./configs/model/CrowdES_sdd.yaml
python utils/preprocess_dataset.py --model_config ./configs/model/CrowdES_gcs.yaml
python utils/preprocess_dataset.py --model_config ./configs/model/CrowdES_edin.yaml
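
To preprocess every dataset in one pass, a small helper like the following (an illustrative sketch, not part of the repository) can loop over the same configs:

import subprocess

DATASETS = ['eth', 'hotel', 'univ', 'zara1', 'zara2', 'sdd', 'gcs', 'edin']
for name in DATASETS:
    config = f'./configs/model/CrowdES_{name}.yaml'
    # Equivalent to running the preprocessing command above once per dataset.
    subprocess.run(['python', 'utils/preprocess_dataset.py', '--model_config', config], check=True)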

Train Crowd Emitter Model

To train the CrowdES crowd emitter model, use the following commands (pretraining first, then the main training stage):

python trainval.py --model_train emitter_pre --model_config <path_to_model_config>
python trainval.py --model_train emitter --model_config <path_to_model_config>

# Example
python trainval.py --model_train emitter_pre --model_config ./configs/model/CrowdES_eth.yaml
python trainval.py --model_train emitter --model_config ./configs/model/CrowdES_eth.yaml

Train Crowd Simulator Model

To train the CrowdES crowd simulator model, you can use the following command:

python trainval.py --model_train simulator --model_config <path_to_model_config>

# Example
python trainval.py --model_train simulator --model_config ./configs/model/CrowdES_eth.yaml

📊 Model Evaluation

Pretrained Models

We provide pretrained models in the release section.

Evaluate CrowdES

To evaluate the CrowdES model, you can use the following command:

python trainval.py --test --model_config <path_to_model_config>

# Example
python trainval.py --test --model_config ./configs/model/CrowdES_eth.yaml

🚀 Model Inference

Export Generated Trajectories

To export the generated trajectories from the CrowdES model, you can use the following command:

python trainval.py --export --model_config <path_to_model_config>

# Example
python trainval.py --export --model_config ./configs/model/CrowdES_eth.yaml

Tip

You can also customize the hyperparameters for exporting the generated trajectories by modifying the CrowdES/evaluate_export_generated_traj.py file. Here are the default settings:

# Global settings
TRIALS = 1 # Number of trials for each scene
SCENARIO_LENGTH = 30 * 60 * 10 # Scenario length in frames (30fps * 60s * 10min), if None, use the length of the scene
POSTFIX = 'crowdes' # Postfix for the generated scenario files
EXPORT_SPATIAL_LAYOUT = True # Export predicted spatial layout
EXPORT_SOCIALGAN_DATA = True # Export to text file for trajectory prediction model training
EXPORT_VIDEO = True # Export video for visualization
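
When EXPORT_SOCIALGAN_DATA is enabled, trajectories are written to plain-text files for trajectory-prediction training. Assuming the common SocialGAN-style layout of one observation per row (frame ID, pedestrian ID, x, y), they can be loaded with a sketch like this (the file path is a placeholder):

import numpy as np

# Assumed SocialGAN-style layout: columns are [frame_id, pedestrian_id, x, y].
data = np.loadtxt('path/to/exported_trajectories.txt')

# Group rows into one (frame, x, y) trajectory per pedestrian.
trajectories = {int(pid): data[data[:, 1] == pid][:, [0, 2, 3]]
                for pid in np.unique(data[:, 1])}
print(f'{len(trajectories)} pedestrians loaded')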

Run CrowdES with Custom Input

To evaluate the CrowdES model with a custom input image, you can use the following command:

python trainval.py --synthetic --model_config <path_to_model_config>

# Example
python trainval.py --synthetic --model_config ./configs/model/CrowdES_eth.yaml

Tip

You can also customize the hyperparameters by modifying the CrowdES/evaluate_synthetic.py file. Here are the default settings:

# Global settings
TRIALS = 1 # Number of trials for each scene
SCENE_LIST = ['synth_scurve',] # List of scenes to use for inference
SCENARIO_LENGTH = 30 * 60 * 10 # Scenario length in frames (30fps * 60s * 10min)
POSTFIX = 'crowdes-synthetic' # Postfix for generated files
EXPORT_VIDEO = True # Export video for visualization


🌏 3D Visualization

To visualize the generated crowd behaviors in 3D, we provide a visualization toolkit based on the CARLA simulator. Please follow the instructions in the 3D_Visualization_Toolkit/README file to set up the environment and visualize the results.


📖 Citation

If you find this code useful for your research, please cite our trajectory prediction papers :)

🏢🚶‍♂️ CrowdES (CVPR'25) 🏃‍♀️🏠 | 💭 VLMTrajectory (TPAMI) 💭 | 💬 LMTrajectory (CVPR'24) 🗨️ | 1️⃣ SingularTrajectory (CVPR'24) 1️⃣ | 🌌 EigenTrajectory (ICCV'23) 🌌 | 🚩 Graph‑TERN (AAAI'23) 🚩 | 🧑‍🤝‍🧑 GP‑Graph (ECCV'22) 🧑‍🤝‍🧑 | 🎲 NPSN (CVPR'22) 🎲 | 🧶 DMRGCN (AAAI'21) 🧶

@inproceedings{bae2025crowdes,
  title={Continuous Locomotive Crowd Behavior Generation},
  author={Bae, Inhwan and Lee, Junoh and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
More Information (Click to expand)
@article{bae2025vlmtrajectory,
  title={Social Reasoning-Aware Trajectory Prediction via Multimodal Language Model},
  author={Bae, Inhwan and Lee, Junoh and Jeon, Hae-Gon},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2025}
}

@inproceedings{bae2024lmtrajectory,
  title={Can Language Beat Numerical Regression? Language-Based Multimodal Trajectory Prediction},
  author={Bae, Inhwan and Lee, Junoh and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

@inproceedings{bae2024singulartrajectory,
  title={SingularTrajectory: Universal Trajectory Predictor Using Diffusion Model},
  author={Bae, Inhwan and Park, Young-Jae and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

@inproceedings{bae2023eigentrajectory,
  title={EigenTrajectory: Low-Rank Descriptors for Multi-Modal Trajectory Forecasting},
  author={Bae, Inhwan and Oh, Jean and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}

@article{bae2023graphtern,
  title={A Set of Control Points Conditioned Pedestrian Trajectory Prediction},
  author={Bae, Inhwan and Jeon, Hae-Gon},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2023}
}

@inproceedings{bae2022gpgraph,
  title={Learning Pedestrian Group Representations for Multi-modal Trajectory Prediction},
  author={Bae, Inhwan and Park, Jin-Hwi and Jeon, Hae-Gon},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2022}
}

@inproceedings{bae2022npsn,
  title={Non-Probability Sampling Network for Stochastic Human Trajectory Prediction},
  author={Bae, Inhwan and Park, Jin-Hwi and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

@article{bae2021dmrgcn,
  title={Disentangled Multi-Relational Graph Convolutional Network for Pedestrian Trajectory Prediction},
  author={Bae, Inhwan and Jeon, Hae-Gon},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2021}
}