- Obtain a ROM of Tetris for the Gameboy, move it to the `lib` directory, and rename it to `tetris.gb`
- In your command line, navigate to the TetrisLearning directory and create the conda environment with `conda env create -p ./.conda -f environment.yml`. You may swap the `-p ./.conda` argument for `-n some_name` if you prefer.
- Activate the environment: `conda activate ./.conda`
- Train the model by running `python run/run_training.py` (see the training sketch after this list)
- [Optional] Track the progress of model training via TensorBoard. While the model is running, open a separate terminal, navigate to the TetrisLearning directory, and run `tensorboard --logdir ./board`
- Once training has finished, play the model using the `play_model.ipynb` notebook (see the playback sketch after this list)
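
As a point of orientation for the training step, the sketch below shows roughly how a Gameboy Tetris environment could be trained with the A2C implementation from Stable-Baselines3. This is not the repository's actual `run/run_training.py`: `make_tetris_env` is a hypothetical placeholder for however the project wraps the emulator as a Gym-style environment, and the hyperparameters and save path are illustrative only. The `tensorboard_log="./board"` argument matches the `--logdir ./board` used in the TensorBoard step above.

```python
# Illustrative sketch only -- not the repository's run_training.py.
from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

from tetris_env import make_tetris_env  # hypothetical: however the repo wraps the emulator


def main():
    # Vectorize the environment and stack the 4 most recent frames,
    # following the frame-stacking approach from the DQN paper.
    env = VecFrameStack(DummyVecEnv([make_tetris_env]), n_stack=4)

    # A2C with a CNN policy; TensorBoard logs are written to ./board.
    model = A2C("CnnPolicy", env, verbose=1, tensorboard_log="./board")
    model.learn(total_timesteps=1_000_000)  # illustrative training budget
    model.save("lib/tetris_a2c")            # illustrative save path


if __name__ == "__main__":
    main()
```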
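
Likewise, playing a trained model presumably amounts to loading the saved weights and stepping the environment with the policy's actions. A minimal sketch under the same Stable-Baselines3 assumption (the save path and environment factory are again hypothetical):

```python
# Illustrative sketch only -- not the repository's play_model.ipynb.
from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

from tetris_env import make_tetris_env  # hypothetical environment factory

env = VecFrameStack(DummyVecEnv([make_tetris_env]), n_stack=4)
model = A2C.load("lib/tetris_a2c", env=env)  # illustrative save path

obs = env.reset()
for _ in range(10_000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)  # VecEnv step returns batched arrays
    env.render()
```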
- Pan Docs: Low-level details on the Gameboy
- Reverse Engineering the Gameboy Tetris: Memory locations in GB Tetris
- RAM Map for GB Tetris
- Paper: Human-level control through deep reinforcement learning: The setup for the CNN we use, as well as the frame-stacking method, is pulled almost directly from this paper (sketched below). See the methodology section for specifics. For a simpler overview, see this related presentation from the authors.
- Paper: Asynchronous Methods for Deep Reinforcement Learning: The A2C model used here is a synchronous variant of the A3C algorithm introduced in this paper.
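
For concreteness, the network described in the Human-level control paper takes a stack of the 4 most recent 84x84 grayscale frames and passes it through three convolutional layers and a fully connected layer. A PyTorch sketch of that architecture is below; the action count and any Tetris-specific preprocessing are assumptions, not details taken from this repository.

```python
# Sketch of the CNN from "Human-level control through deep reinforcement learning".
# Input: 4 stacked 84x84 grayscale frames; output: one score per action.
import torch
import torch.nn as nn


class NatureCNN(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4),   # 84x84 -> 20x20
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),  # 20x20 -> 9x9
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),  # 9x9 -> 7x7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512),
            nn.ReLU(),
        )
        self.head = nn.Linear(512, n_actions)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # Scale raw pixel values (0-255) into [0, 1] before the conv stack.
        return self.head(self.features(frames / 255.0))
```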