BBCPy_DNN is an advanced component of the BBCPy toolbox, designed to facilitate the development and exploration of Deep Neural Networks (DNNs) within the PyTorch framework. It leverages PyTorch's dynamic computation graph, PyTorch Lightning for high-level training abstractions, and Hydra for dynamic management of configuration files, giving researchers and developers a flexible and efficient interface for developing and reproducing DNNs, particularly for BCI applications.
- Intuitive Interface: Simplifies the complexity of DNN development, making it more accessible, especially for newcomers.
- Streamlined Training Process: Utilizes PyTorch Lightning to balance user-friendliness with high-level functionality for deep learning research.
- Supports the ML Lifecycle: Covers data extraction and exploration through model deployment, enabling continuous improvement of model performance.
- Flexible Configuration: Integrated with Hydra for dynamic management of configuration files, allowing easy optimization and workflow management without altering source code (see the sketch after this list).
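To give a rough idea of how these pieces fit together, the sketch below shows the typical Hydra-plus-Lightning entrypoint pattern. The config names (`configs`, `train`, `cfg.model`, `cfg.data`, `cfg.trainer`) and the import name are assumptions for illustration; the project's actual entrypoint is `src/train.py`.

```python
# Minimal sketch of the Hydra + Lightning pattern (not the repository's actual
# train.py): Hydra resolves a YAML config tree, hydra.utils.instantiate builds
# objects from the `_target_` entries, and the Lightning Trainer runs training.
import hydra
import pytorch_lightning as pl  # import name may differ across Lightning versions
from omegaconf import DictConfig


@hydra.main(version_base=None, config_path="configs", config_name="train")  # paths assumed
def main(cfg: DictConfig) -> None:
    model: pl.LightningModule = hydra.utils.instantiate(cfg.model)
    datamodule: pl.LightningDataModule = hydra.utils.instantiate(cfg.data)
    trainer: pl.Trainer = hydra.utils.instantiate(cfg.trainer)
    trainer.fit(model=model, datamodule=datamodule)


if __name__ == "__main__":
    main()
```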
BBCPy_DNN is built with the help of the following frameworks:
- BBCPy toolbox: A Python-based toolbox for Brain-Computer Interface (BCI) research.
- PyTorch & PyTorch Lightning: Dynamic computation graphs and high-level training abstractions.
- Hydra: Dynamic management of configuration files.
- Motor Imagery Dataset: Supports the Continuous SMR BCI dataset, which contains EEG data from 62 healthy individuals recorded during an MI-BCI experiment with feedback of hand movements. It covers 4 classes (left hand, right hand, both hands, and rest). The dataset is available for download from figshare:

```bash
# download and unpack the dataset into ~/data
mkdir -p ~/data
cd ~/data
wget https://figshare.com/ndownloader/articles/13123148/versions/1
unzip 1
```
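After unpacking, a quick way to confirm what landed in the download directory (assuming the `~/data` location used above):

```python
# List the files extracted from the figshare archive (assumes ~/data from above).
from pathlib import Path

data_dir = Path.home() / "data"
for item in sorted(data_dir.iterdir()):
    print(f"{item.name:50s} {item.stat().st_size / 1e6:8.1f} MB")
```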
Install with pip:

```bash
# clone project
git clone https://github.com/mralioo/BBCPy_DNN.git
cd BBCPy_DNN

# [OPTIONAL] create conda environment
conda create -n myenv python=3.9
conda activate myenv

# install pytorch according to instructions
# https://pytorch.org/get-started/

# install requirements
pip install -r requirements.txt
```
Alternatively, install with conda:

```bash
# clone project
git clone https://github.com/mralioo/BBCPy_DNN.git
cd BBCPy_DNN

# create conda environment and install dependencies
conda env create -f environment.yaml -n myenv

# activate conda environment
conda activate myenv
```
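Whichever route you take, a quick sanity check that PyTorch is importable and sees your GPU (plain PyTorch calls, nothing project-specific):

```python
# Verify the installation: report the PyTorch version and CUDA visibility.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```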
Train a model with the default configuration:

```bash
# train on CPU
python src/train.py trainer=cpu

# train on GPU
python src/train.py trainer=gpu
```
Train a model with a chosen experiment configuration from configs/experiment/:

```bash
python src/train.py experiment=experiment_name.yaml
```
You can override any parameter from the command line like this:

```bash
python src/train.py trainer.max_epochs=20 data.batch_size=64
```
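The same overrides can also be composed from Python via Hydra's Compose API, which is convenient in notebooks. The config path and key names below are assumptions mirroring the commands above:

```python
# Sketch: build the training config with overrides programmatically instead of the CLI.
from hydra import compose, initialize
from omegaconf import OmegaConf

with initialize(version_base=None, config_path="configs"):  # path assumed
    cfg = compose(
        config_name="train",
        overrides=["trainer.max_epochs=20", "data.batch_size=64"],
    )

print(OmegaConf.to_yaml(cfg.trainer))  # inspect the resolved trainer settings
```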
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- Ali Alouane (ali.alouane@campus.tu-berlin.de)
Many thanks to our project supervisors, Dr. Daniel Miklody and Dr. Oleksandr Zlatov, and to the Neurotechnology group at TU Berlin.