New repo: https://github.com/k-CIR/NatMEG-utils
This script converts NatMEG data (MEG, EEG, OPM) to BIDS format and organizes it into a BIDS-compliant folder structure. It is designed for data collected at NatMEG and uses the MNE-BIDS library.
Ensure you have the following non-default libraries installed in your Python environment: `mne`, `mne_bids`, `numpy`, `pandas`
- Make sure the Python modules are installed in the environment. If not, install them with the following command:
conda install mne mne-bids
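To confirm the environment has everything it needs, you can run a quick check such as the one below. This snippet is only a convenience sketch and is not part of the repository; the module list mirrors the dependencies above.

```python
# Quick environment check for the dependencies listed above.
# Not part of NatMEG-utils; delete or adapt freely.
import importlib

for module in ("mne", "mne_bids", "numpy", "pandas"):
    try:
        mod = importlib.import_module(module)
        print(f"{module}: {getattr(mod, '__version__', 'unknown version')}")
    except ImportError:
        print(f"{module}: NOT INSTALLED")
```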
A configuration file is required; you will be prompted to create a default file if it does not exist.
{
"squidMEG": "/neuro/data/sinuhe/<project_on_sinuhe>",
"opmMEG": "/neuro/data/kaptah/<project_on_kaptah>",
"BIDS": "/neuro/data/local/<project_on_local>",
"Calibration": "/neuro/databases/sss/sss_cal.dat",
"Crosstalk": "/neuro/databases/ctc/ct_sparse.fif",
"Dataset_description": "dataset_description.json",
"Participants": "participants.tsv",
"Participants mapping file": "/neuro/data/local/<project>/mapping.csv",
"Original subjID name": "old_subject_id",
"New subjID name": "new_subject_id",
"Original session name": "old_session_id",
"New session name": "new_session_id",
"Overwrite": "off"
}
- `squidMEG`: Path to the raw data folder for SQUID MEG data
- `opmMEG`: Path to the raw data folder for OPM MEG data
- `BIDS`: Path to the BIDS folder where the data will be saved
- `Calibration`: Path to the calibration file
- `Crosstalk`: Path to the crosstalk file
- `Dataset_description`: Path to the dataset_description.json file in the BIDS folder
- `Participants`: Path to the participants.tsv file in the BIDS folder
- `Participants mapping file`: Path to the mapping file that contains the original and new subject IDs
- `Original subjID name`: Name of the column in the mapping file that contains the original subject ID
- `New subjID name`: Name of the column in the mapping file that contains the new subject ID
- `Original session name`: Name of the column in the mapping file that contains the original session ID
- `New session name`: Name of the column in the mapping file that contains the new session ID
- `Overwrite`: If set to "on", the script will overwrite existing files in the BIDS folder
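For illustration, this is roughly how the configuration and the participants mapping file fit together. The snippet is a hedged sketch, not code from the repository: it assumes the config keys shown above and a hypothetical `mapping.csv` whose column names match the `Original subjID name` and `New subjID name` entries; the subject ID is a placeholder.

```python
# Sketch: load the bidsify config and look up a new subject ID
# from the mapping file. Paths and IDs are placeholders.
import json
import pandas as pd

with open("path/to/name_of_config.json") as f:
    config = json.load(f)

mapping = pd.read_csv(config["Participants mapping file"])

# Column names are whatever the config says they are, e.g.
# "old_subject_id" and "new_subject_id" in the default template above.
old_col = config["Original subjID name"]
new_col = config["New subjID name"]

new_id = mapping.loc[mapping[old_col] == "NatMEG_0001", new_col].iloc[0]
print(f"NatMEG_0001 -> {new_id}")
```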
A conversion table will be created when running `bidsify.py`. This conversion file estimates task names, processing, and other parameters used to build the bidsified file name. The conversion table is then looped through, skipping split files and already converted files. By editing the latest file, you can change deviant task names and decide whether or not to run the conversion on a specific file. The conversion table is saved in the `conversion_logs` folder as `<date_of_creation>_bids_conversion_table.csv`. By default the latest file will be used, but you can also select your own file by adding the `--conversion` flag on the command line.
- `time_stamp`: The timestamp when the original conversion file was created.
- `run_conversion`: Indicates whether the conversion should be executed (`yes` or `no`).
- `task_count`: Number of tasks that are unique to participant, session, acquisition, and datatype.
- `task_flag`: Set to `check` if `task_count` is not 1, otherwise `ok`.
- `participant_from`: The original participant ID.
- `participant_to`: The new participant ID after mapping.
- `session_from`: The original session ID.
- `session_to`: The new session ID after mapping.
- `task`: The task name associated with the file.
- `split`: Indicates if the file is a split file.
- `run`: If a task occurs more than once in the same session, this should be modified manually.
- `datatype`: The type of data according to BIDS (e.g., `meg`, `eeg`).
- `acquisition`: Acquisition device.
- `processing`: Pre-processing details for the data, e.g. MaxFilter.
- `raw_path`: The path to the raw data file.
- `raw_name`: The name of the raw data file.
- `bids_path`: The path where the BIDS-compliant file will be saved.
- `bids_name`: The name of the BIDS-compliant file.
If `task_flag` is `check`, you will be prompted to edit the conversion file before continuing. This is to ensure that the task name is correct and that the file is not a split file. If you are sure that the task name is correct, set `task_flag` to `ok` and continue with the conversion.
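If you prefer to adjust the conversion table programmatically rather than in a spreadsheet editor, something along these lines works. It is only a sketch: the file name, task names, and raw file name are placeholders, and the column names are the ones listed above.

```python
# Sketch: tweak a conversion table before running bidsify.
# File name, task names, and raw file name are placeholders.
import pandas as pd

path = "conversion_logs/2024-01-01_bids_conversion_table.csv"
table = pd.read_csv(path)

# Rename a deviant task name.
table.loc[table["task"] == "restingstate", "task"] = "rest"

# After verifying the task names are correct, clear the check flags.
table.loc[table["task_flag"] == "check", "task_flag"] = "ok"

# Skip conversion for a specific file.
table.loc[table["raw_name"] == "some_file.fif", "run_conversion"] = "no"

table.to_csv(path, index=False)
```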
Example 1. Run the script by executing the following command:
python bidsify.py --config=path/to/name_of_config.json
This will run the script with the config file without further questions using the latest created conversion file.
Example 2. Edit config file before running the script:
python bidsify.py --config=path/to/name_of_config.json --edit
This will open a dialog for you to edit the config file before running the script.
Example 3. Run the script without a config flag
python bidsify.py # (add --edit to also edit an existing file)
You will get three options in the terminal:
- `open`: Open an existing config file (default)
- `new`: Create a new config file from a default template using the dialog
- `cancel`: Cancel the operation
Example 4. Run the script with a specific config and conversion file:
python bidsify.py --config=path/to/name_of_config.json --conversion=path/to/conversion_file.csv
Runs conversion without any further questions using a specific conversion file.
- If a `dataset_description.json` is not defined in the configuration file, a dialog will open for you to fill in the necessary fields.
- If a `participants.tsv` file is not defined in the configuration file, a default one will be created.
- The script will loop through all participants and sessions, read all files from the raw folders, convert them to BIDS format, and write the output locally.
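For context, a single conversion step corresponds roughly to the standard MNE-BIDS pattern shown below. This is a simplified sketch with placeholder paths, subject, session, and task labels; the actual script additionally handles ID mapping, split files, calibration/crosstalk files, and logging.

```python
# Sketch: one raw FIF file written to a BIDS dataset with MNE-BIDS.
# Subject, session, task, and paths are placeholders.
import mne
from mne_bids import BIDSPath, write_raw_bids

raw = mne.io.read_raw_fif("/neuro/data/sinuhe/project/sub01/session1/task.fif")

bids_path = BIDSPath(
    subject="0001",
    session="01",
    task="MyGreatTask",
    datatype="meg",
    root="/neuro/data/local/project",
)
write_raw_bids(raw, bids_path=bids_path, overwrite=False)
```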
Although the script is written to handle various breaches of naming conventions, it is not yet watertight. The following naming conventions are recommended:
- Do not use numbers in your task name when saving your raw data.
- Do not use `_` in the file name. Tasks named `my_great_task` will be renamed to `MyGreatTask` (see the sketch after this list).
- Empty room recordings should include the word `empty` or `noise` in the filename, followed by `before` or `after` to indicate whether it was recorded before or after the experiment.
- Resting-state recordings should begin with `rest`; conversions for resting-state names like `RS` are not yet implemented.
- EEG data will be placed in an `eeg` folder in the BIDS root directory, according to the BIDS specification. However, as EEG data is collected through the TRIUX system, a `.fif` file will be created and the `.json` sidecar will be copied to the `meg` folder.
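As an illustration of the renaming described above (not necessarily the script's exact implementation), an underscore-separated task name can be converted to CamelCase like this:

```python
# Sketch: convert an underscore-separated task name to CamelCase,
# since BIDS task labels must be alphanumeric.
def camelcase_task(name: str) -> str:
    return "".join(part.capitalize() for part in name.split("_"))

print(camelcase_task("my_great_task"))  # MyGreatTask
```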
Ensure you have the following non-default libraries installed in your Python environment: `mne`, `matplotlib`
- Make sure the Python modules are installed in the environment. If not, install them with the following command:
conda install mne matplotlib
Option 1. Run the script by executing the following command:
python maxfilter.py --config=path/to/maxfilter_settings.json
This will run the script with the config file without further questions.
Option 2. Edit config file before running the script:
python maxfilter.py --config=path/to/maxfilter_settings.json --edit
This will open a dialog for you to edit the config file before running the script.
Option 3. Run the script without a config flag
python maxfilter.py # (add --edit to also edit an existing file)
You will get three options in the terminal:
- `open`: Open an existing config file (default)
- `new`: Create a new config file from a default template using the dialog
- `cancel`: Cancel the operation
{
"standard_settings": {
"project_name": "",
"trans_conditions": ["task1", "task2"],
"trans_option": "mne_continous",
"merge_runs": "on",
"empty_room_files": ["empty_room_before.fif", "empty_room_after.fif"],
"sss_files": ["empty_room_before.fif", "empty_room_after.fif"],
"autobad": "on",
"badlimit": 7,
"bad_channels":[""],
"tsss_default": "on",
"correlation": 0.98,
"movecomp_default": "on",
"data_path": "."
},
"advanced_settings": {
"force": "off",
"downsample": "off",
"downsample_factor": 4,
"apply_linefreq": "off",
"linefreq_Hz": 50,
"scripts_path": "/home/natmeg/Scripts",
"cal": "/neuro/databases/sss/sss_cal.dat",
"ctc": "/neuro/databases/ctc/ct_sparse.fif",
"dst_path": "neuro/data/local",
"trans_folder": "headtrans",
"log_folder": "log",
"maxfilter_version": "/neuro/bin/util/mfilter",
"MaxFilter_commands": "",
}
}
Standard_settings:
- `project_name`: Name of the project
- `trans_conditions`: List of conditions to be transformed
- `trans_option`: Type of transformation (continuous or initial)
- `merge_runs`: Estimate head position average over all runs if there are multiple
- `empty_room_files`: List of empty room files
- `sss_files`: List of SSS files
- `autobad`: Turn autobad on or off
- `badlimit`: Bad limit
- `bad_channels`: List of bad channels
- `tsss_default`: Turn tSSS on or off
- `correlation`: Correlation limit
- `movecomp_default`: Turn movecomp on or off
- `data_path`: Path to the data
Advanced_settings:
- `force`: Force MaxFilter to run
- `downsample`: Downsample data
- `downsample_factor`: Downsample factor
- `apply_linefreq`: Apply line frequency filter
- `linefreq_Hz`: Line frequency in Hz
- `scripts_path`: Path to the scripts
- `cal`: Path to the calibration file
- `ctc`: Path to the crosstalk file
- `dst_path`: Path to the destination folder (not active yet)
- `trans_folder`: Name of the transformation folder
- `log_folder`: Name of the log folder
- `maxfilter_version`: Path to the MaxFilter version
- `MaxFilter_commands`: Additional commands for MaxFilter (see the MaxFilter manual)
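To illustrate how the two settings groups are typically consumed, the sketch below merges them into one flat dictionary. It only assumes the JSON layout shown above and is not code from the repository; the file name is a placeholder.

```python
# Sketch: load maxfilter_settings.json and flatten the two settings groups.
# The file name is a placeholder.
import json

with open("path/to/maxfilter_settings.json") as f:
    cfg = json.load(f)

settings = {**cfg["standard_settings"], **cfg["advanced_settings"]}

if settings["autobad"] == "on":
    print(f"Autobad enabled with bad limit {settings['badlimit']}")
print("Conditions to transform:", settings["trans_conditions"])
```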
Improvements are welcome, but do not change the script locally. If you need to modify this script, follow GitHub conventions: create a new branch or fork the repository in your GitHub account, work on your version, and make pull requests.