(Agent trained after ~800 episodes and 1 million frames)
This project builds on DQN: Playing Atari with Deep Reinforcement Learning (Mnih et al., 2013), the paper that sparked my interest in deep reinforcement learning and inspired this implementation for the DemonAttack Atari environment.
The purpose of this project is to explore and implement a Deep Q-Network (DQN) agent to play the Atari game DemonAttack using modern reinforcement learning techniques. The codebase supports advanced features such as Double DQN, Dueling Networks, Prioritized Experience Replay (PER), n-step returns, and Noisy Networks (NoisyNets) for improved exploration. It is designed for long, resumable training runs with comprehensive logging, checkpointing, and visualization tools.
Key Features:
- Double DQN, Dueling DQN, n-step returns, and Prioritized Experience Replay (PER)
- Optional NoisyNets for exploration (enable with `--noisy`)
- Robust checkpointing and seamless resume (with frame counting)
- Per-episode logging to CSV and TensorBoard
- Visualization scripts for training progress and agent gameplay (GIF export)
- Replay buffer with uniform or prioritized sampling
- Compatible with Gymnasium and ALE Atari environments
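The Double DQN and n-step features above combine in the bootstrap target: the online network selects the greedy action at the n-step-ahead state, while the target network evaluates it. The sketch below is illustrative and assumes rewards have already been summed into n-step returns; the function name and signature are hypothetical, not taken from `dqn_agent.py`.

```python
import torch

def double_dqn_target(online_net, target_net, next_states, rewards, dones,
                      gamma=0.99, n=3):
    """Hypothetical sketch of an n-step Double DQN target.

    `rewards` is assumed to hold pre-summed n-step returns:
    R = r_t + gamma*r_{t+1} + ... + gamma^(n-1)*r_{t+n-1}.
    """
    with torch.no_grad():
        # Double DQN: the online net picks the action...
        best_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # ...and the target net evaluates it, decoupling selection
        # from evaluation to reduce overestimation bias.
        next_q = target_net(next_states).gather(1, best_actions).squeeze(1)
        # Bootstrap n steps ahead; zero the bootstrap on terminal states.
        return rewards + (gamma ** n) * next_q * (1.0 - dones)
```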
Requirements:
- Python 3.10+
- PyTorch (deep learning)
- Gymnasium (Atari environment)
- matplotlib (plotting, GIF export)
- TensorBoard (training visualization)
- NumPy, pandas (data handling)
- ale-py (Arcade Learning Environment backend)
Usage:
Train a DQN agent:
python main.py
Train with NoisyNets:
python main.py --noisy
Resume training from a checkpoint:
python main.py --resume models/model.pth
Visualize a trained agent and export GIFs:
python visualize.py --model models/model.pth --episodes 1 --render rgb_array
Plot training progress:
python plot_progress.py --dir results
See `requirements.txt` for all dependencies. Recommended: use a virtual environment or conda.
Project Structure:
- main.py — Main training loop, logging, checkpointing
- dqn_agent.py — DQN agent logic (Double DQN, Dueling, PER, n-step, NoisyNets)
- model.py — Q-network architectures
- replay_buffer.py — Replay buffer and PER
- utils.py, debug_utils.py — Logging, plotting, debugging
- visualize.py, plot_progress.py, visualize_training.py — Visualization tools
- results/ — Logs, plots, GIFs, TensorBoard events
- models/ — Saved checkpoints (NoisyNet models in models/noisy_models/)
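The dueling architecture in model.py splits the network into a state-value stream and an action-advantage stream, recombined with a mean-subtracted advantage so the decomposition is identifiable. The head below is an illustrative sketch (layer sizes and class name are assumptions, not the actual model.py architecture):

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Illustrative dueling head; the project's real architecture may differ."""

    def __init__(self, in_features=512, n_actions=6):
        super().__init__()
        # Value stream: scalar V(s) per state.
        self.value = nn.Sequential(
            nn.Linear(in_features, 128), nn.ReLU(), nn.Linear(128, 1))
        # Advantage stream: A(s, a) per action.
        self.advantage = nn.Sequential(
            nn.Linear(in_features, 128), nn.ReLU(), nn.Linear(128, n_actions))

    def forward(self, x):
        v = self.value(x)
        a = self.advantage(x)
        # Subtract the mean advantage so V and A are uniquely identifiable:
        # Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)).
        return v + a - a.mean(dim=1, keepdim=True)
```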
References:
- DQN: Playing Atari with Deep Reinforcement Learning (Mnih et al., 2013)
- DemonAttack ALE environment documentation
- Demon Attack Atari manual (AtariAge)
Spring 2025