GIST-DSLab/PPO_Solve

Using PPO to Solve ARC Problems

Train ARC tasks (numbers 150, 179, 241, 380) with a PPO (Proximal Policy Optimization) agent.

Instructions

Environments

  1. Create a new environment:
     `conda create --name your_env_name python=3.9`
  2. Activate the environment:
     `conda activate your_env_name`
  3. Install packages:
     `pip install -r requirements.txt`

How to run

To run the example code (train on task 150, evaluate on task 150):

`python3 run.py train.task=150 eval.task=150`

Choose the task from 150, 179, 241, or 380:

  - 150 - 3 x 3 horizontal flip task
  - 179 - N x N diagonal flip task
  - 241 - 3 x 3 diagonal flip task
  - 380 - 3 x 3 CCW rotate task
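The four grid transformations above can be sketched with NumPy (the grid values and variable names here are illustrative; the repo's environment may represent ARC grids differently):

```python
import numpy as np

# A sample 3 x 3 ARC grid (values stand in for color indices)
grid = np.array([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])

# Task 150: horizontal flip (mirror the grid left-right)
h_flip = np.fliplr(grid)

# Tasks 179 / 241: diagonal flip (transpose about the main diagonal);
# task 179 applies the same idea to N x N grids
d_flip = grid.T

# Task 380: rotate 90 degrees counter-clockwise
ccw = np.rot90(grid)

print(h_flip[0])  # [3 2 1]
print(d_flip[0])  # [1 4 7]
print(ccw[0])     # [3 6 9]
```

The PPO agent's job is to learn a policy that reproduces the correct output grid for whichever of these transformations the chosen task demands.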


Acknowledgments

This implementation is based on the work found at https://github.com/ku-dmlab/arc_trajectory_generator.
