This is the repository for CAROT (Cross Atlas Remapping via Optimal Transport). All the data, including mappings, intrinsic evaluation, and downstream analyses, are in the data/ folder.
CAROT uses optimal transport theory, the mathematics of transforming one probability distribution into another, to find an optimal mapping between two atlases. This mapping allows data processed with one atlas to be transformed directly into a connectome based on another, otherwise unavailable atlas, without needing the raw data. CAROT is designed for functional connectomes based on functional magnetic resonance imaging (fMRI) data.
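As a rough illustration of the idea (a minimal NumPy sketch, not CAROT's actual implementation; the centroids, sizes, and the Sinkhorn solver here are toy stand-ins), optimal transport turns a cost matrix between the ROI centroids of two atlases into a mapping that reassigns source-atlas timeseries onto target-atlas nodes:

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.1, n_iters=500):
    """Entropic-regularized optimal transport (Sinkhorn iterations).
    Returns a transport plan T with row sums a and column sums ~ b."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
src = rng.uniform(-50, 50, size=(4, 3))     # toy centroids: 4 source ROIs (x, y, z)
tgt = rng.uniform(-50, 50, size=(3, 3))     # toy centroids: 3 target ROIs
M = np.linalg.norm(src[:, None] - tgt[None, :], axis=-1)  # Euclidean cost
M /= M.max()                                              # scale for stability
a = np.full(4, 1 / 4)                       # uniform mass over source ROIs
b = np.full(3, 1 / 3)                       # uniform mass over target ROIs
T = sinkhorn(a, b, M)                       # 4 x 3 transport plan

ts_src = rng.standard_normal((100, 4))      # toy timeseries: frames x source ROIs
T_rows = T / T.sum(axis=1, keepdims=True)   # rows as probability distributions
ts_tgt = ts_src @ T_rows                    # timeseries in the target atlas
connectome = np.corrcoef(ts_tgt, rowvar=False)  # 3 x 3 target-atlas connectome
```

The pipeline scripts below split these stages (cost matrix, mapping, transformation) into separate steps.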
Website: http://carotproject.com
We are happy to announce that we also launched the CAROT website for a demo: http://carotproject.com. In rare cases, if the domain name cannot be resolved from your network, use the IP address directly: http://34.238.128.124
The main packages used to run the CAROT pipelines include:
```
conda install -c conda-forge pot
conda install -c conda-forge matplotlib
conda install -c anaconda scikit-learn
conda install -c anaconda h5py
conda install -c anaconda scipy
conda install -c conda-forge argparse
conda install pandas
pip install numpy
```
To specify the locations of the different files, edit config.properties:
```
[coord]
shen=/data_dustin/store4/Templates/shen_coords.csv
craddock=/data_dustin/store4/Templates/craddock_coords.csv
power=/data_dustin/store4/Templates/power_coords.csv

[path]
shen=/data_dustin/store4/Templates/HCP/shen/rest1.mat
craddock=/data_dustin/store4/Templates/HCP/craddock/rest1.mat
power=/data_dustin/store4/Templates/HCP/power/rest1.mat
```
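config.properties uses plain INI syntax, so it can be read with Python's standard configparser (a sketch; the sample string below just mirrors the example entries above):

```python
from configparser import ConfigParser

# Sketch: parse the [coord]/[path] sections of config.properties.
sample = """
[coord]
shen=/data_dustin/store4/Templates/shen_coords.csv

[path]
shen=/data_dustin/store4/Templates/HCP/shen/rest1.mat
"""
cfg = ConfigParser()
cfg.read_string(sample)          # for the real file: cfg.read("config.properties")
coords_csv = cfg["coord"]["shen"]  # atlas coordinate CSV
data_mat = cfg["path"]["shen"]     # HCP rest1 data for that atlas
```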
These are the arguments we use for running our scripts:
- `-s` or `--source`: source atlas
- `-t` or `--target`: target atlas
- `-c` or `--c`: cost matrix (`euclidean` or `functional`)
- `-task` or `--task`: task (`rest1`, `gambling`, `wm`, `motor`, `lang`, `social`, `relational`, `emotion`)
- `-id` or `--id`: ID rate (`True` or `False`)
- `-id_direction` or `--id_direction`: (`ot-ot` or `orig-orig`)
- `-intrinsic` or `--intrinsic`: parameter sensitivity (`True` or `False`)
- `-simplex` or `--simplex`: (1: simplex OT, 2: average OT, 3: stacking OT; default is 2)
- `-num_iters` or `--num_iters`: number of iterations in test
- `-save_model` or `--save_model`: (`True` or `False`)
Here, we want to calculate the cost matrix between the ROIs of two atlases. We specify the names of the two atlases with -s and -t, and the task to learn mappings from with -task.
```
python build_cost_matrix.py -s craddock -t shen
```

The output will be stored in cost_source_target.csv, with n rows and m columns indicating the number of ROIs in the source and target atlases, respectively.
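A minimal sketch of what the euclidean cost option computes, assuming the coordinate CSVs hold one ROI centroid per row (the toy arrays below stand in for the real files):

```python
import numpy as np
from scipy.spatial.distance import cdist

# Toy stand-ins for the coordinate CSVs (one ROI centroid per row).
src_coords = np.array([[30.0, -20.0, 10.0],
                       [5.0, 40.0, -5.0]])       # n = 2 source ROIs
tgt_coords = np.array([[28.0, -18.0, 12.0],
                       [0.0, 0.0, 0.0],
                       [6.0, 38.0, -4.0]])       # m = 3 target ROIs
cost = cdist(src_coords, tgt_coords)             # n x m Euclidean distances
# cost[i, j] is the distance between source ROI i and target ROI j.
```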
Now, we can specify the two atlases and the cost matrix derived from the previous step to obtain the optimal transport mapping between them.
```
python build_mapping.py -s craddock -t shen -c cost_craddock_shen.csv
```

The output will be stored in T_source_target.csv, with n rows and m columns indicating the number of ROIs in the source and target atlases, respectively. Each row is a probability distribution exhibiting the optimal assignment of values from the corresponding source node to the target nodes.
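This mapping step solves an optimal transport problem over the cost matrix, presumably via the POT package listed in the dependencies. As a self-contained sketch of the underlying linear program (using SciPy's generic LP solver on a toy 2x2 cost, not the pipeline's actual solver):

```python
import numpy as np
from scipy.optimize import linprog

# Optimal transport as a linear program over a toy 2x2 cost matrix:
# minimize sum(M * T) subject to T >= 0, row sums = a, column sums = b.
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])
a = np.array([0.5, 0.5])          # source marginal
b = np.array([0.5, 0.5])          # target marginal
n, m = M.shape
A_eq = np.zeros((n + m, n * m))
for i in range(n):                # row-sum constraints
    A_eq[i, i * m:(i + 1) * m] = 1.0
for j in range(m):                # column-sum constraints
    A_eq[n + j, j::m] = 1.0
res = linprog(M.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
              bounds=(0, None))
T = res.x.reshape(n, m)           # mass stays on the cheap diagonal
```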
Given the cost matrix cost_source_target.csv and the mapping T_source_target.csv, we can now transfer the source parcellation into the target:
```
python carot.py -s craddock -t shen -m T_source_target.csv
```

To run a simple script with source brainnetome and target shen, using rest1 with the Euclidean cost measure, and saving the model:
```
python hcp_atlas_to_atlas.py -s brainnetome -t shen -task rest1 --save_model True -c euclidean
```

To run the main CAROT pipeline mapping all available atlases into shen:
```
python hcp_atlas_to_atlas.py -s all -t shen -task rest1 -simplex 2 -sample_atlas 0
```

To run the identification pipeline between estimated connectomes and the `rest1` and `rest2` databases in the HCP dataset:
```
python hcp_atlas_to_atlas.py -t power -s all -id True -id_direction orig-ot
```

To run the parameter-sensitivity analysis over different frame/train sizes:
```
python hcp_atlas_to_atlas.py -s brainnetome -t power -task all --intrinsic true
```

To train a classification model on the PNC dataset and test on MDD, we use the script pnc_atlas_to_atlas.py:
- `-s` or `--source`: source atlas
- `-t` or `--target`: target atlas
- `-database` or `--database`: database (`UCLA`, `PNC`)
- `-g` or `--g`: mapping trained on HCP (`rest1` or `mean`)
- `-sex_task` or `--sex_task`: which task we are training on (`rest1`, `nback`, etc.)
- `-num_iters` or `--num_iters`: number of training iterations
- `-label` or `--label`: which label to train (`sex`, `iq`)
- `-site` or `--site`: which site we are testing (1, 2, 3, ..., 24)
```
python pnc_atlas_to_atlas.py -s craddock -t shen -database ucla -sex_task 2 -g mean -model reg -num_iters 100 -label sex -site 1
```

```bibtex
@article{dadashkarimi2023cross,
  title={Cross Atlas Remapping via Optimal Transport (CAROT): Creating connectomes for different atlases when raw data is not available},
  author={Dadashkarimi, Javid and Karbasi, Amin and Liang, Qinghao and Rosenblatt, Matthew and Noble, Stephanie and Foster, Maya and Rodriguez, Raimundo and Adkinson, Brendan and Ye, Jean and Sun, Huili and others},
  journal={Medical Image Analysis},
  pages={102864},
  year={2023},
  publisher={Elsevier}
}
```
