A research platform for cognitive modeling using energy-based neural networks
Owner: Darrell Mesa (darrell.mesa@pm-ss.org)
GitHub: https://github.com/ai-in-pm
Repository: https://github.com/ai-in-pm/NeuroBM
NeuroBM is a research and educational platform for cognitive modeling using Boltzmann machines. It provides a framework for exploring cognitive dynamics, hypothesis generation, and understanding human-technology interaction patterns. It is intended solely for studying how the human brain works in a world where, from 2025 onward, technology surrounds us.
Special Thank You to Steven Bartlett
Your video gave me the opportunity to build my own Neuro Boltzmann Machine as a way to study how my brain works living with PTSD, while also exploring how my seven-year-old son, who has autism, experiences and processes the world. As a Professional Project Manager, I see technology not only reshaping how we manage complex projects but also how it integrates into our personal lives.
This video was more than just content; it became a catalyst for innovation, reflection, and a deeply personal project. Grateful that you continue to share ideas that ripple far beyond the screen.
Video: "Brain Experts WARNING: Watch This Before Using ChatGPT Again! (Shocking New Discovery)" Link: https://www.youtube.com/watch?v=5wXlmlIXJOI
NeuroBM is a research framework for exploring cognitive dynamics using Boltzmann machines. It provides:
- Statistical Foundations: Energy functions, partition function estimation, and likelihood computation
- Educational Focus: Clear documentation, ethical guidelines, and interpretability tools
- Research Scenarios: Pre-configured setups for studying general cognition, PTSD, autism, and technology-reliance patterns
- Production-Ready Code: Testing, automated scaffolding, and modular design
- Restricted Boltzmann Machines (RBM): Binary and Gaussian visible units
- Deep Boltzmann Machines (DBM): Multi-layer architectures with pre-training
- Conditional RBMs (CRBM): Time-series and conditional modeling
- Algorithms: Contrastive Divergence (CD-k), Persistent CD (PCD)
- Likelihood Estimation: Annealed Importance Sampling (AIS) with diagnostic tools
- Training Loops: Callbacks, early stopping, checkpointing, mixed precision
- Saliency Analysis: Weight importance, feature attribution, connection strength
- Mutual Information: Information flow analysis between layers
- Latent Traversals: Direction discovery, counterfactual analysis
- Visualization Tools: Weight matrices, feature importance, traversal paths
- Synthetic Data Generation: Realistic correlations and population heterogeneity
- Research Regimes:
- Base: General cognitive features (attention, working memory, stress)
- PTSD: Hyperarousal, avoidance, intrusive thoughts, sleep disruption
- Autism: Sensory sensitivity, routine adherence, focused interests
- Technology-Reliance: Effort avoidance, automation expectation, frustration tolerance
- PTSD-PM: PTSD-affected project managers with technology integration dynamics
- Data Transformations: Normalization, binarization, noise injection
- Research Monitoring: Weekly scanning of research developments
- Integration Pipeline: Automated evaluation and integration of relevant updates
- Version Management: Semantic versioning with automated releases
- Deployment: Multi-stage deployment with quality gates
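To make the energy-based mechanics listed above concrete, here is a minimal NumPy sketch of an RBM energy function and one Contrastive Divergence (CD-1) update. This is an illustration of the technique, not NeuroBM's actual implementation; all names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n_visible, n_hidden = 5, 8  # small sizes for illustration

# Model parameters: weight matrix W, visible bias b, hidden bias c
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b = np.zeros(n_visible)
c = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):
    # E(v, h) = -v'Wh - b'v - c'h
    return -v @ W @ h - b @ v - c @ h

def cd1_step(v0, lr=0.1):
    """One CD-1 update on a batch of binary visible vectors."""
    global W, b, c
    # Positive phase: hidden probabilities given the data
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to visible, then hidden again
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Gradient estimate: data correlations minus model correlations
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

# Train briefly on a toy binary dataset
data = (rng.random((100, n_visible)) < 0.5).astype(float)
for _ in range(50):
    cd1_step(data)
```

Persistent CD (PCD) differs only in that the negative-phase chain is carried over between updates instead of restarting from the data each step.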
# Clone the repository
git clone https://github.com/ai-in-pm/NeuroBM.git
cd NeuroBM
# Install dependencies
pip install -e .
# Verify installation
python test_neurobm_core.py
from neurobm.models.rbm import RestrictedBoltzmannMachine
from neurobm.data.synth import SyntheticDataGenerator
# Generate synthetic cognitive data
generator = SyntheticDataGenerator("base", random_seed=42)
data = generator.generate(n_samples=1000)
# Train RBM
rbm = RestrictedBoltzmannMachine(n_visible=5, n_hidden=128)
rbm.fit(data, epochs=100)
# Analyze results
from neurobm.interpret.saliency import SaliencyAnalyzer
analyzer = SaliencyAnalyzer(rbm)
importance = analyzer.feature_importance(data)
print("Feature importance:", importance)
# Train a model
python scripts/train.py --regime=base --model=rbm --epochs=100
# Generate samples
python scripts/sample.py --checkpoint=runs/base/best.ckpt --n_samples=100
# Run interpretability analysis
python scripts/analyze.py --checkpoint=runs/base/best.ckpt --data=test_data.pt
# Launch interactive dashboards
python dashboards/launch_dashboards.py
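The eval_ais.py script estimates log-likelihood via Annealed Importance Sampling (AIS). As a rough, self-contained sketch of the idea (independent of NeuroBM's actual implementation; all function names here are illustrative), here is AIS for a tiny binary RBM, small enough that the estimate can be checked against the exactly enumerated partition function:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_v, n_h = 4, 3  # tiny so the exact partition function is enumerable

W = 0.5 * rng.standard_normal((n_v, n_h))
b = 0.1 * rng.standard_normal(n_v)
c = 0.1 * rng.standard_normal(n_h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def free_energy(v, beta):
    # F_beta(v) = -log sum_h exp(-beta * E(v, h)) for the tempered RBM
    return -beta * (v @ b) - np.sum(np.logaddexp(0.0, beta * (v @ W + c)), axis=-1)

def ais_log_z(n_chains=500, n_steps=1000):
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    # Base model (beta = 0): uniform over visible configurations
    v = (rng.random((n_chains, n_v)) < 0.5).astype(float)
    log_w = np.zeros(n_chains)
    for k in range(n_steps):
        # Importance-weight increment between adjacent temperatures
        log_w += free_energy(v, betas[k]) - free_energy(v, betas[k + 1])
        # One Gibbs sweep under the tempered model at betas[k + 1]
        ph = sigmoid(betas[k + 1] * (v @ W + c))
        h = (rng.random(ph.shape) < ph).astype(float)
        pv = sigmoid(betas[k + 1] * (h @ W.T + b))
        v = (rng.random(pv.shape) < pv).astype(float)
    # log Z of the all-zero-parameter base RBM is (n_v + n_h) * log 2
    log_z_base = (n_v + n_h) * np.log(2.0)
    m = log_w.max()  # log-sum-exp for numerical stability
    return log_z_base + m + np.log(np.mean(np.exp(log_w - m)))

def exact_log_z():
    # Brute-force enumeration over all 2^n_v visible configurations
    vs = np.array(list(product([0.0, 1.0], repeat=n_v)))
    return np.log(np.sum(np.exp(-free_energy(vs, 1.0))))
```

With the estimated log Z in hand, the log-likelihood of a data vector is simply -free_energy(v, 1.0) - log Z.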
NeuroBM/
├── neurobm/                    # Core package
│   ├── models/                 # Model implementations
│   │   ├── rbm.py              # Restricted Boltzmann Machine
│   │   ├── dbm.py              # Deep Boltzmann Machine
│   │   └── crbm.py             # Conditional RBM
│   ├── data/                   # Data handling
│   │   ├── synth.py            # Synthetic data generation
│   │   └── schema.py           # Data schemas and validation
│   ├── training/               # Training infrastructure
│   │   ├── trainer.py          # Training loops
│   │   ├── callbacks.py        # Training callbacks
│   │   └── evaluation.py       # Model evaluation
│   └── interpret/              # Interpretability tools
│       ├── saliency.py         # Saliency analysis
│       ├── mutual_info.py      # Mutual information
│       └── latent.py           # Latent space analysis
├── experiments/                # Experiment configurations
│   ├── base.yaml               # Base cognitive regime
│   ├── ptsd.yaml               # PTSD-related patterns
│   ├── autism.yaml             # Autism spectrum features
│   └── ptsd_pm.yaml            # PTSD project manager scenario
├── dashboards/                 # Interactive dashboards
│   ├── training_monitor.py     # Real-time training monitoring
│   ├── model_explorer.py       # Model exploration interface
│   └── results_analyzer.py     # Results analysis dashboard
├── automation/                 # Automation system
│   ├── research_monitor.py     # Research development tracking
│   ├── integration_pipeline.py # Automated integration
│   ├── version_manager.py      # Version and release management
│   └── deployment_manager.py   # Deployment automation
├── notebooks/                  # Educational notebooks
│   ├── 01_theory_primer.ipynb  # Boltzmann machine theory
│   ├── 02_base_latents.ipynb   # Base cognitive modeling
│   └── 07_comprehensive_tutorial.ipynb # Complete tutorial
├── scripts/                    # Command-line tools
│   ├── train.py                # Training script
│   ├── sample.py               # Sampling script
│   └── eval_ais.py             # AIS evaluation
├── docs/                       # Documentation
│   ├── ethics_guidelines.md    # Ethical guidelines
│   ├── model_cards/            # Model documentation
│   └── data_cards/             # Data documentation
└── tests/                      # Test suite
    ├── test_models.py          # Model tests
    └── test_data.py            # Data generation tests
Base regime features:
- Attention Span: Sustained attention capacity
- Working Memory: Temporary information storage
- Novelty Seeking: Openness to new experiences
- Sleep Quality: Sleep patterns and quality
- Stress Index: General stress levels
PTSD regime features:
- Hyperarousal: Heightened alertness and reactivity
- Avoidance: Tendency to avoid triggers
- Intrusive Thoughts: Unwanted recurring thoughts
- Sleep Disruption: Sleep quality and patterns
- Emotional Numbing: Reduced emotional responsiveness
Autism regime features:
- Sensory Sensitivity: Response to sensory input
- Routine Adherence: Preference for predictable patterns
- Focused Interests: Intensity of special interests
- Social Communication: Communication preferences
- Change Tolerance: Adaptability to changes
Technology-reliance regime features:
- Effort Cost: Perceived mental effort cost
- Ambiguity Tolerance: Tolerance for uncertainty
- Reward Sensitivity: Sensitivity to reward timing and delays
- Automation Expectation: Expectation of automated assistance
- Frustration Tolerance: Tolerance for setbacks
PTSD-PM regime features:
- Hypervigilance: Heightened alertness and scanning
- Cognitive Load: Mental effort and processing capacity
- Tech Tool Mandate: Organizational pressure to use technology tools
- Frustration Tolerance: Tolerance for technology limitations
- Avoidance Behavior: Tendency to avoid challenging tasks
- Tech Tool Adoption Resistance: Resistance to new technology tools in workflow
- Tech Tool Acceptance: Comfort with technology tool integration
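The SyntheticDataGenerator produces correlated feature profiles for each regime. Its internals may differ from this, but one common approach, sketched below under that assumption, is to draw a correlated Gaussian latent and squash it into feature space (a simple Gaussian-copula-style construction; the function name, feature names, and numbers here are illustrative, not NeuroBM's real parameters):

```python
import numpy as np

def make_regime_samples(n_samples, means, corr, rng=None):
    """Illustrative generator: correlated features in [0, 1] obtained by
    transforming a correlated Gaussian latent toward regime-specific means."""
    rng = rng or np.random.default_rng()
    n_features = len(means)
    # Cholesky factor injects the desired correlation structure
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n_samples, n_features)) @ L.T
    # Squash to [0, 1], then shift toward the regime-specific means
    u = 1.0 / (1.0 + np.exp(-z))
    return np.clip(u + (np.asarray(means) - 0.5), 0.0, 1.0)

# Hypothetical PTSD-like regime: hyperarousal correlates positively with
# intrusive thoughts and negatively with sleep quality
means = [0.7, 0.6, 0.3]  # hyperarousal, intrusive_thoughts, sleep_quality
corr = np.array([
    [ 1.0,  0.6, -0.5],
    [ 0.6,  1.0, -0.4],
    [-0.5, -0.4,  1.0],
])
data = make_regime_samples(2000, means, corr, np.random.default_rng(42))
```

Population heterogeneity can then be layered on by mixing several such mean/correlation profiles into one dataset.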
This scenario models the intersection of PTSD symptoms with project management cognitive demands, exploring how technology tools (from 2025 onwards) impact work performance, stress responses, and decision-making processes in certified Project Management Professionals.
Research Context:
- Synthetic data only - no real patient information
- Educational purpose - to understand potential technology impacts and inform supportive tool design
- Hypothesis generation - for future research directions
Run the test suite:
# Core functionality tests
python test_neurobm_core.py
# Model-specific tests
python test_neurobm_models.py
# Data generation tests
python test_neurobm_data.py
# Full test suite
python test_neurobm_comprehensive.py
# Automation system tests
python automation/test_automation_system.py
Appropriate uses:
- Educational exploration of cognitive dynamics
- Hypothesis generation and testing
- Research into human-technology interaction patterns
- Understanding statistical relationships in synthetic data
Not appropriate for:
- Clinical diagnosis or assessment
- Risk prediction or screening
- Treatment recommendations
- Real-world decision making about individuals
Built-in safeguards:
- Synthetic Data Only: No real patient or personal data
- Educational Focus: Clear documentation of limitations
- Ethical Guidelines: Built-in responsible use framework
- Transparency: Open source with full documentation
- Research ethics guidelines
- Data protection principles
- Educational use standards
- Responsible technology development
We welcome contributions! Please see our contributing guidelines and code of conduct.
# Clone and install in development mode
git clone https://github.com/ai-in-pm/NeuroBM.git
cd NeuroBM
pip install -e ".[dev]"
# Run tests
python -m pytest tests/
# Format code
black neurobm/ tests/
- Getting Started: See notebooks/07_comprehensive_tutorial.ipynb
- API Reference: Generated from docstrings
- Model Cards: docs/model_cards/
- Ethics Guidelines: docs/ethics_guidelines.md
- Research Framework: docs/responsible_ai_framework.md
This framework builds upon decades of research in:
- Boltzmann machines and energy-based models
- Cognitive science and computational neuroscience
- Model interpretability and analysis
- Responsible technology development
This project was inspired by Steven Bartlett's video on brain experts and technology interaction. The personal journey of understanding PTSD and autism through computational modeling reflects the intersection of technology, neuroscience, and human experience.