Authors: Alexander Ho and Kevin Tan
Updated: 4/8/2025
The basic idea is to automate readptu and the image intensity stack generation for analysis of COMI data. This folder contains various Python scripts for processing:
- process.py
- The main data processing script. It generates an output folder containing subfolders for the FLIM cubes, the FLIM intensity images, and the consolidated TIFF stacks for each modality
- It runs read_ptu.py, which searches for .ptu files and uses the TCSPC data to reconstruct FLIM cubes
- For each of these cubes, the time axis is summed to create an intensity image
- In the third step, process.py uses the zStackToCube function from data_utils.py to consolidate the images into a TIFF file for easy viewing
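A rough numpy sketch of the last two steps (the function names here are illustrative, not the actual API in read_ptu.py or data_utils.py):

```python
import numpy as np

def intensity_image(flim_cube: np.ndarray) -> np.ndarray:
    """Collapse the photon-arrival-time axis of a FLIM cube (time, y, x)
    into a 2D intensity image by summing photon counts per pixel."""
    return flim_cube.sum(axis=0)

def z_stack_to_cube(images: list) -> np.ndarray:
    """Consolidate a list of equally sized 2D images into one 3D stack
    (the consolidation step before writing a TIFF stack)."""
    return np.stack(images, axis=0)

# Hypothetical cube: 256 time bins over a 64x64 field of view
cube = np.random.poisson(1.0, size=(256, 64, 64))
img = intensity_image(cube)          # shape (64, 64)
stack = z_stack_to_cube([img, img])  # shape (2, 64, 64)
```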
- read_ptu.py
- Processes .ptu files. See Kevin's readptu repo on GitHub
- data_utils.py
- useful helper functions such as zStackToCube
- process_newdata.py
- process.py by itself only processes one folder
- this script recursively searches for folders containing a "Setting.txt" file, which indicates data is present
- it also checks that there is no "outputs" folder yet
- if both conditions are met, the folder is appended to a list for processing by process.py
- it also does some logging, though that is not fully implemented yet
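The folder-discovery logic can be sketched roughly as follows (a sketch based on the two conditions above; the real script may differ in details):

```python
import os
import tempfile

def find_unprocessed_folders(root: str) -> list:
    """Walk `root` and collect folders that contain a Setting.txt
    (data present) but no outputs subfolder (not yet processed)."""
    todo = []
    for dirpath, dirnames, filenames in os.walk(root):
        if "Setting.txt" in filenames and "outputs" not in dirnames:
            todo.append(dirpath)
    return todo

# Demo on a throwaway directory tree: scan1 is unprocessed,
# scan2 already has an outputs folder
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "scan1"))
open(os.path.join(root, "scan1", "Setting.txt"), "w").close()
os.makedirs(os.path.join(root, "scan2", "outputs"))
open(os.path.join(root, "scan2", "Setting.txt"), "w").close()
todo = find_unprocessed_folders(root)  # only scan1 qualifies
```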
- Installation of the environment
- navigate to this folder and run
conda env create -f environment.yml
- install the local package using
pip install -e .
- Using process_newdata.py
python process_newdata.py --root "C:\path\to\root\folder" --verbose
- If you don't have the Docker image
- Install Docker from https://docs.docker.com/engine/install/
- navigate to the folder containing the Dockerfile
- run
docker build -t comi-autoprocess .
- The Dockerfile basically starts from a Linux base image with Miniconda installed, installs all of the packages that we need, and wraps those together with all of the Python files into an image that runs the same way every time. The last line of the Dockerfile tells it to run process_newdata.py on startup and allows us to pass additional arguments
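A Dockerfile along those lines might look like this (a sketch only; the base image and environment setup are assumptions, and the actual Dockerfile in this folder is authoritative):

```dockerfile
# Linux base image that ships with Miniconda
FROM continuumio/miniconda3

WORKDIR /app

# Recreate the conda environment, then copy in the scripts
COPY environment.yml .
RUN conda env update -n base -f environment.yml
COPY . .

# Run the batch processor on startup; extra arguments
# (e.g. --root /data --verbose) are appended at `docker run` time
ENTRYPOINT ["python", "process_newdata.py"]
```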
- If you already have the Docker image
- See below
! Important ! Make sure your WSL 2 distro is allocated enough memory (132 GB here), otherwise it can crash when big .ptu files are processed. You can change this in the .wslconfig file in your user directory. Make sure you shut down WSL (wsl --shutdown) and wait about 8 seconds before restarting
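For example, a .wslconfig that raises the memory limit might look like this (the 132 GB figure matches the note above; adjust for your machine):

```ini
[wsl2]
memory=132GB
```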
In the WSL 2 terminal:
- Mount the folder using: sudo mount -t drvfs '\\bi-isilon-smb.beckman.illinois.edu\BIL-Users\ah36\Biophotonics Imaging Laboratory\Other Peoples Files\Jaena\20250403_AdiposeTissue' /mnt/adipose_data
- This is one example of a file path. Use your own, but make sure it's a UNC path (\\server\share\path). Linux doesn't like mapped drives
- Check that it mounted (e.g. ls /mnt/adipose_data)
- Process the data:
docker run --rm -it -v /mnt/adipose_data:/data comi-autoprocess --root /data --verbose
- the run_comi.sh file will run all of this, so just modify the file and run it using:
bash ~/run_comi.sh
- If it doesn't run, make it executable first:
chmod +x ~/run_comi.sh
- Windows Task Scheduler will allow you to run tasks based on certain triggers (such as time of day or logging on)
- To set this up, add a new basic task
- Set it up however you want it to be triggered. I set it to midnight, when fewer people are likely to be using the machine.
- Set up the action:
- Start a Program
- In the program/script, type:
C:\Windows\System32\wsl.exe
- In the optional arguments, type:
-d Ubuntu -- bash /home/ah36/run_comi.sh
- For "Start in": can be left blank
- I could never get "Run whether user is logged on or not" to work; possibly a permissions issue
- Currently, it assumes 10-frame averaging
- No arguments other than --root and --verbose. Arguments for frame averaging or other preprocessing will be added
- Only tested with Femtotrain FLIM; unsure about Chameleon FLIM
- No FLIM processing for SLAM data
- Only takes the SLAM TIFF files; does not handle the .bin files
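Since the 10-frame averaging assumption is currently baked in, here is a rough numpy sketch of what averaging every 10 consecutive frames looks like (illustrative only, not the actual implementation):

```python
import numpy as np

def average_frames(frames: np.ndarray, group: int = 10) -> np.ndarray:
    """Average consecutive groups of `group` frames.
    frames: (n_frames, height, width); n_frames must be divisible by `group`."""
    n, h, w = frames.shape
    assert n % group == 0, "frame count must be a multiple of the group size"
    return frames.reshape(n // group, group, h, w).mean(axis=1)

# 20 raw frames of 32x32 -> 2 averaged frames
frames = np.random.rand(20, 32, 32)
avg = average_frames(frames)  # shape (2, 32, 32)
```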