# HRMlaptop: A Laptop Version of the Hierarchical Reasoning Model

Train a Hierarchical Reasoning Model on your Windows laptop using only the CPU - no GPU required!
## Requirements

- Python 3.12 (via Anaconda or python.org)
- Windows 10/11 (64-bit)
- 8 GB RAM minimum (16 GB recommended)
## Installation

1. Clone or download this project:

   ```bash
   git clone https://github.com/alessoh/HRMlaptop
   cd HRMlaptop
   ```

2. Create a conda environment (if using Anaconda):

   ```bash
   conda create -n HRMlaptop python=3.12
   conda activate HRMlaptop
   ```

3. Install the PyTorch CPU version:

   ```bash
   pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
   ```

4. Install the remaining dependencies:

   ```bash
   python quick_install.py
   ```

   Or manually:

   ```bash
   pip install matplotlib psutil tqdm pandas scikit-learn seaborn plotly
   ```
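After installation, you can confirm that the CPU-only build is active. This quick check is a suggestion, not one of the bundled scripts:

```python
import torch

# A CPU-only wheel reports a version suffixed with "+cpu" and no CUDA support.
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # False on the CPU-only build
```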
## Quick Start

Simplest option - for beginners:

```bash
python train_simple.py
```

Full training with all features:

```bash
python hrm_trainer.py
```

Monitor system resources (in a separate terminal):

```bash
python monitor.py
```
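For a sense of what resource monitoring involves, the same information `monitor.py` tracks can be sampled with `psutil` (one of the installed dependencies). This is an illustrative sketch, not the bundled script's actual code:

```python
import psutil

# One-shot snapshot of CPU and memory usage; monitor.py presumably
# samples these in a loop while training runs in the other terminal.
cpu = psutil.cpu_percent(interval=0.5)  # % CPU averaged over a 0.5 s window
ram = psutil.virtual_memory()
print(f"CPU: {cpu:.1f}%")
print(f"RAM: {ram.used / 1e9:.1f} GB used of {ram.total / 1e9:.1f} GB")
```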
## Project Files

- `hrm_trainer.py` - Main training script with full features
- `train_simple.py` - Simplified training for beginners (recommended for first run)
- `test_model.py` - Test your trained model
- `monitor.py` - Monitor CPU and memory usage during training
- `visualize.py` - Create graphs and visualizations of results
- `quick_install.py` - Install remaining dependencies after PyTorch
- `requirements.txt` - Package list (reference only)
- `.env` - Configuration settings
- `.gitignore` - Git ignore patterns
- `install.py` - Full installation script
- `setup_windows.bat` - Windows batch installer
## Expected Performance

| CPU Type | Training Time | Accuracy |
|---|---|---|
| Intel i5 (8th gen) | 90-120 min | ~97.5% |
| Intel i7 (10th gen) | 45-60 min | ~97.8% |
| AMD Ryzen 7 | 40-55 min | ~97.8% |
Typical resource usage during training:

- RAM: 1.5-2.5 GB
- CPU: 60-80% utilization
- Disk: ~500 MB (including dataset)
- Model size: ~350 KB
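The ~350 KB model size is consistent with the ~85K parameter count quoted below: float32 weights take 4 bytes each, plus a little checkpoint overhead. A quick back-of-envelope check:

```python
# Rough size estimate: parameter count * 4 bytes per float32 value.
params = 85_000
size_kb = params * 4 / 1024
print(f"~{size_kb:.0f} KB of raw weights")  # ~332 KB, close to the ~350 KB checkpoint
```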
## Compatibility

This project is optimized for Python 3.12 with:

- PyTorch 2.8.0+cpu (CPU-only build)
- NumPy 2.1.2 (installed as a PyTorch dependency)
- All dependencies tested with Python 3.12
## Example Output

```text
============================================================
HRM Training on Windows CPU
============================================================
System Information:
CPU: 6 cores, 12 threads
RAM: 16.0 GB
Python: 3.12.0
PyTorch: 2.8.0+cpu
============================================================
Starting training for 10 epochs
Epoch 1/10 [0%] Loss: 0.4521
Epoch 1/10 [20%] Loss: 0.3421
Epoch 1/10 [40%] Loss: 0.2341
...
Epoch 10/10 completed in 324.5s
Train Loss: 0.0234, Train Acc: 98.45%
Val Loss: 0.0412, Val Acc: 97.82%
TRAINING COMPLETE!
Final Validation Accuracy: 97.82%
Total Training Time: 54.3 minutes
```
## Troubleshooting

Solution: Install the PyTorch CPU version:

```bash
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
```
Solution: Use the NumPy version installed alongside PyTorch (2.1.2); do not pin `numpy==1.24.3`.
Solutions:

- Close Chrome and other heavy applications
- Set Windows to High Performance mode
- Ensure good laptop cooling
- Reduce the batch size in the `.env` file
Solution: Use the provided `quick_install.py` script, which installs compatible versions.
Solutions:
- Elevate laptop for better airflow
- Use a cooling pad
- Reduce batch size to 16 or 8
- Take breaks between training epochs
## Configuration

Edit the `.env` file to customize training:

```ini
# Training parameters
BATCH_SIZE=32        # Reduce to 16 if low on RAM
LEARNING_RATE=0.001
NUM_EPOCHS=10        # Increase for better accuracy
HIDDEN_DIM=64        # Model size (32 for faster, 128 for better)
NUM_LAYERS=2         # Depth of reasoning layers

# CPU optimization
NUM_THREADS=4        # Set to your physical core count
```
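For illustration, a minimal loader for this `KEY=VALUE` format might look like the following. The project may well use a library such as python-dotenv instead; this sketch is an assumption:

```python
def parse_env(text):
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    cfg = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if "=" in line:
            key, value = line.split("=", 1)
            cfg[key.strip()] = value.strip()
    return cfg

sample = "BATCH_SIZE=32  # Reduce to 16 if low on RAM\nNUM_THREADS=4\n"
print(parse_env(sample))  # {'BATCH_SIZE': '32', 'NUM_THREADS': '4'}
```

A value such as `NUM_THREADS` could then be applied with `torch.set_num_threads(int(cfg["NUM_THREADS"]))` before training starts.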
## Project Structure

```text
HRMlaptop/
├── hrm_trainer.py       # Main training script
├── train_simple.py      # Beginner-friendly trainer
├── test_model.py        # Model testing
├── monitor.py           # System monitoring
├── visualize.py         # Results visualization
├── quick_install.py     # Dependency installer
├── README.md            # This file
├── .env                 # Configuration
├── requirements.txt     # Package reference
├── data/                # MNIST dataset (auto-created)
├── checkpoints/         # Saved models (auto-created)
└── logs/                # Training logs (auto-created)
```
## Performance Tips

1. **Windows Performance Mode**
   - Settings → System → Power & battery → Best performance

2. **Close Background Apps**
   - Especially browsers (Chrome uses lots of RAM)
   - Disable Windows updates during training

3. **Optimal Settings for Speed**

   ```ini
   BATCH_SIZE=64   # If you have 16GB+ RAM
   HIDDEN_DIM=32   # Smaller model
   NUM_LAYERS=1    # Fewer layers
   ```

4. **Optimal Settings for Accuracy**

   ```ini
   BATCH_SIZE=32   # Default
   HIDDEN_DIM=64   # Default
   NUM_LAYERS=2    # Default
   NUM_EPOCHS=20   # More training
   ```
## Next Steps

1. **Test your model**

   ```bash
   python test_model.py
   ```

2. **Visualize results**

   ```bash
   python visualize.py
   ```

3. **View training metrics**
   - Check `training_results.json` for detailed metrics
   - View `training.log` for the full training history
   - Open the generated `.png` files for graphs
## Model Architecture

The HRM (Hierarchical Reasoning Model) is optimized for CPU training:

- Small architecture: 64 hidden units (vs. 128+ for GPU training)
- Efficient layers: 2 reasoning layers with residual connections
- Stability: batch normalization and dropout
- Total parameters: ~85K (very lightweight)
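The description above can be sketched in PyTorch. This is an illustrative stand-in, not the actual model in `hrm_trainer.py`: the class name, block layout, and dropout rate are assumptions.

```python
import torch
import torch.nn as nn


class HRMSketch(nn.Module):
    """Small MLP with residual reasoning blocks, batch norm, and dropout,
    mirroring the bullet points above (hypothetical architecture)."""

    def __init__(self, input_dim=28 * 28, hidden_dim=64, num_layers=2, num_classes=10):
        super().__init__()
        self.input_proj = nn.Linear(input_dim, hidden_dim)
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_dim, hidden_dim),
                nn.BatchNorm1d(hidden_dim),
                nn.ReLU(),
                nn.Dropout(0.1),
            )
            for _ in range(num_layers)
        )
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        h = torch.relu(self.input_proj(x.flatten(1)))
        for block in self.blocks:
            h = h + block(h)  # residual connection around each reasoning layer
        return self.head(h)


model = HRMSketch()
print(sum(p.numel() for p in model.parameters()))
# 59466 with these defaults: the same order of magnitude as the quoted ~85K.
```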
## Citation

If you use this code for learning or research, please cite:

> H. Peter Alesso (2025). *Hierarchical Reasoning Model: HRM on Your Laptop.* Chapter 12: Training on Your Windows Laptop.
## Support

- Issues: Create an issue on GitHub
- Logs: Check `training.log` for detailed information
- System info: See `system_info.json` for your configuration
## License

MIT License - Free for educational and research use

Ready to start? Run `python train_simple.py` and watch your laptop learn to recognize handwritten digits!