
Team: Markov Mayhem (Winner!)

MOASEI AAMAS-2025 Competition

Track #3: Wildfire (Both Agent and Task Openness)

Members

University of Utah, Utah, USA

  • Varun Raveendra
  • Seongil Heo
  • Yanxi Lin

Repository Structure

The structure of the repository is described below:

.
├── competition_configs                 # Environment configurations
│   └── wildfire
├── free_range_zoo                      # Package source
│   ├── envs                            #    Environment implementations
│   ├── utils                           #    Converters / environment abstract classes
│   └── wrappers                        #    Model wrappers and utilities
├── tests                               # Tests
│   ├── free_range_zoo                  #    Tests for the free_range_zoo package
│   │   ├── envs                        #       Environment utilities
│   │   └── utils                       #       All package utilities
│   ├── profiles                        #    Environment performance profiles
│   └── utils                           #    Testing utilities
├── experiments                         # Experiments (**ours**)
│   ├── core.py                         #    Core class definitions (Graph, Actor, Critic, Network)
│   ├── evaluation.py                   #    Evaluation script
│   ├── quick_start.py                  #    Quick start guide and example scripts
│   ├── test.py                         #    Test scripts for the baseline models
│   ├── train_a2c.py                    #    Training script for the A2C model
│   ├── train_gnn.py                    #    Training script for the PL model
│   └── utils.py                        #    Utility functions
├── LICENSE                             # License file
├── poetry.lock                         # Poetry lock file
├── pyproject.toml                      # Package dependencies and package definition
├── README.md                           # Project documentation
└── setup.cfg                           # Setup configuration
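
The experiments/ directory holds our contribution. As a hypothetical sketch only, the class names listed in the tree comment for core.py would be imported like this (their constructors and methods are defined in experiments/core.py and are not shown here):

# Hypothetical sketch: class names are taken from the tree comment above.
# Constructor signatures live in experiments/core.py; nothing here is
# assumed about them.
from experiments.core import Graph, Actor, Critic, Network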

Installation

Please refer to the Installation Guide for detailed instructions on setting up the environment and installing the required dependencies.

Usage

python experiments/evaluation.py [OPTIONS] <output> <model> <config>

Required Arguments

  • output: Path to the directory where evaluation results and logs will be saved.
  • model: Path to the trained model file.
  • config: Path to the environment configuration (e.g., competition_configs/wildfire/WS3.pkl).

Optional Arguments

Option                          Description
-h, --help                      Show help message and exit
--cuda                          Use CUDA (GPU) if available
--threads THREADS               Number of threads to use
--log_level LEVEL               Set logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--seed SEED                     Random seed for the evaluation process
--dataset_seed DATASET_SEED     Random seed for initializing the environment configuration
--testing_episodes N            Number of test episodes to run (parallel_envs)
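
For scripting around evaluation.py, the table above implies roughly the following argparse interface. This is a sketch of that interface only, not the script's actual source; the defaults and types are illustrative assumptions:

import argparse

# Sketch of the CLI surface documented above. Defaults and types are
# illustrative assumptions, not the actual values in evaluation.py.
parser = argparse.ArgumentParser(
    description="Evaluate a trained model on a wildfire configuration")
parser.add_argument("output", help="Directory for evaluation results and logs")
parser.add_argument("model", help="Path to the trained model file")
parser.add_argument("config", help="Environment configuration pickle, "
                    "e.g. competition_configs/wildfire/WS3.pkl")
parser.add_argument("--cuda", action="store_true",
                    help="Use CUDA (GPU) if available")
parser.add_argument("--threads", type=int, default=1,
                    help="Number of threads to use")
parser.add_argument("--log_level", default="INFO",
                    choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"],
                    help="Logging level")
parser.add_argument("--seed", type=int, help="Random seed for the evaluation")
parser.add_argument("--dataset_seed", type=int,
                    help="Random seed for the environment configuration")
parser.add_argument("--testing_episodes", type=int, default=1,
                    help="Number of test episodes to run (parallel_envs)")

args = parser.parse_args()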

Example

python experiments/evaluation.py ./output logging/250519_120000/model_a2c.h5 ./competition_configs/wildfire/WS1.pkl --testing_episodes 100
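
The <config> argument is a Python pickle. A minimal way to inspect one before running an evaluation (this assumes the free_range_zoo package is installed so any classes inside the pickle can be resolved; the structure of the unpickled object is defined by the competition configs, not assumed here):

import pickle

# Load a competition configuration for inspection. Unpickling requires
# the defining package (free_range_zoo) to be importable.
with open("competition_configs/wildfire/WS1.pkl", "rb") as f:
    config = pickle.load(f)

print(type(config))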

License

This repository is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).

This project was developed by Team Markov Mayhem as part of the MOASEI 2025 competition.

It is based in part on the free-range-zoo repository by OASYS Labs, which is also licensed under AGPL-3.0.

All code under the experiments/ directory was newly developed by the team in 2025.
