This repo explores the interplay between nature (inherited traits passed on via reproduction and mutation) and nurture (behavior learned via reinforcement learning) in ecological systems. We combine Multi-Agent Reinforcement Learning (MARL) with evolutionary dynamics to study emergent behaviors in a dynamic multi-agent ecosystem of Predators, Prey, and regenerating Grass. Agents differ in speed, vision, energy metabolism, and decision policies, offering fertile ground for open-ended adaptation. At its core lies a gridworld simulation where agents are not just trained: they are born, age, reproduce, die, and even mutate in a continuously changing environment.
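The "nature" half of this loop can be sketched in a few lines: offspring inherit a parent's traits, each of which may be perturbed at birth. This is a minimal illustrative sketch, not the repo's actual code; the `Traits` fields, mutation rate, and helper names are hypothetical.

```python
import random
from dataclasses import dataclass


@dataclass
class Traits:
    """Heritable traits of an agent (field names are illustrative)."""
    speed: float
    vision: float
    metabolism: float


def mutate(traits: Traits, rate: float = 0.1, scale: float = 0.05) -> Traits:
    """Return a copy of `traits`; each field is perturbed with probability `rate`."""
    def step(value: float) -> float:
        if random.random() < rate:
            value += random.gauss(0.0, scale)
        return max(value, 0.0)  # keep traits non-negative

    return Traits(step(traits.speed), step(traits.vision), step(traits.metabolism))


def reproduce(parent: Traits) -> Traits:
    """Offspring inherit the parent's traits, possibly mutated (nature).

    Behavior is then learned on top of these traits via RL (nurture).
    """
    return mutate(parent)


parent = Traits(speed=1.0, vision=3.0, metabolism=0.5)
child = reproduce(parent)
print(child)
```

Selection then acts indirectly: agents whose traits and learned policies keep their energy above zero survive long enough to reproduce, so trait distributions drift over generations.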
The Predator-Prey-Grass base environment
-

Testing the Red Queen Hypothesis in the co-evolutionary setting of Predators and Prey
-

Hyperparameter tuning of the base environment with Population Based Training
-
Installation
-

Editor used: Visual Studio Code 1.101.0 on Linux Mint 22.0 Cinnamon
- Clone the repository:

  ```bash
  git clone https://github.com/doesburg11/PredPreyGrass.git
  ```
- Open Visual Studio Code and press `ctrl+shift+p`
- Type and choose: "Python: Create Environment..."
- Choose environment: Conda
- Choose interpreter: Python 3.11.11 or higher
- Open a new terminal and execute:

  ```bash
  pip install -e .
  ```
- Install the additional system dependency for Pygame visualization:

  ```bash
  conda install -y -c conda-forge gcc=14.2.0
  ```
- Run the pre-trained model in a Visual Studio Code terminal:

  ```bash
  python ./src/predpreygrass/rllib/v1_0/evaluate_ppo_from_checkpoint_debug.py
  ```