Replies: 4 comments 2 replies
-
Hey, so have a look at minian/minian/visualization.py, line 1271 (commit a6d3339). Let me know if you're able to get it to work!
-
Thanks, I'll look into it at some point and see if I can get it to work.
-
Hi, we are interested in using the raw data after motion correction for analysis in ImageJ. @chrisda13, were you able to modify the write_video function to export the raw data as TIFF or another format? Thanks, Andrew
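One way to get a lossless export (assuming the motion-corrected movie can be pulled into memory as a NumPy array, e.g. via `.values` on the xarray that minian produces) is to skip video codecs entirely and write the raw pixels, which ImageJ can open via File > Import > Raw. This is only a sketch with synthetic stand-in data; the array and file names are placeholders:

```python
import numpy as np

def export_raw(arr, path):
    """Write a (frame, height, width) stack as raw little-endian uint16,
    loadable in ImageJ via File > Import > Raw... (16-bit unsigned)."""
    data = np.asarray(arr).astype("<u2")  # keep full bit depth, no codec
    data.tofile(path)
    return data.shape  # note these dims for the ImageJ import dialog

# synthetic stand-in for the motion-corrected movie
movie = np.random.randint(0, 4096, size=(10, 64, 64), dtype=np.uint16)
shape = export_raw(movie, "Y_mc.raw")

# round-trip check: nothing was rescaled or dropped
back = np.fromfile("Y_mc.raw", dtype="<u2").reshape(shape)
assert np.array_equal(back, movie)
```

Unlike an 8-bit AVI, this keeps the native bit depth, so intensities survive unchanged; note the shape returned by `export_raw`, since ImageJ's raw importer needs the width, height, and frame count typed in by hand.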
-
Has anyone been able to figure out how to write the raw videos after motion correction, or altered the code to produce this? Like Andrew @hardawayja and Chris @chrisda13, I would also like to obtain the pre-processed, motion-corrected video before it enters the CNMF portion of the pipeline, and to export it at this step without data loss. For Chris, the motion-corrected video had fewer frames than the original, but for me, the motion-corrected video actually has more frames than the original! In my case, I want this functionality in order to apply minian's pre-processing steps to epifluorescence imaging of brain slice cultures. For these experiments the imaging interval is once every hour, so the CNMF analysis (which depends on the presence of calcium transients in the data) would be irrelevant.
-
Hi, I am trying to verify a few things with the pipeline and have a question about the video that is generated after motion correction.
I love the ability to motion correct based on a subset of the video. I have found that it works extremely well after the glow removal/denoise/background removal steps when I focus on a small subset of bright cells. My goal is to apply the shifts from motion correction to the original video (prior to the glow removal, denoise, and background removal steps), then use this motion-corrected video as a baseline to compare various means of analysis (hand-drawn ROIs in ImageJ, minian, caiman, min1pipe, etc.). I have been able to generate this video using the pipeline, and I have been extremely happy with how stable the image is.
My steps are:

1. Load the original video & specify coordinates for subset_mc
2. Run through the glow removal, denoise, and background removal steps
3. Run estimate_motion & save the output
4. Restart the pipeline & load the original video
5. SKIP the glow removal, denoise, and background removal steps
6. Load the motion estimates that were saved out
7. Run apply_transform and save minian (this creates Y_fm_chk and Y_hw_chk)
8. Write this motion-corrected video to disk (not including the non-motion-corrected video that is normally stitched alongside) by running:

```
%%time
write_video(Y_fm_chk, "Y_fm_chk_vid.avi", dpath)
```
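Conceptually, the middle of this workflow amounts to estimating per-frame shifts on the cleaned-up subset and then re-applying those same shifts to the untouched original frames. Here is a minimal toy version of that apply step, with `np.roll` and integer shifts standing in for minian's actual `apply_transform` (which is more sophisticated about subpixel shifts and frame edges):

```python
import numpy as np

def apply_shifts(movie, shifts):
    """Toy stand-in for applying motion-correction shifts: move each
    frame by its (dy, dx) estimate. Integer shifts only, and edges
    wrap around (a real implementation would pad or crop instead)."""
    out = np.empty_like(movie)
    for i, (dy, dx) in enumerate(shifts):
        out[i] = np.roll(movie[i], shift=(dy, dx), axis=(0, 1))
    return out

# Shifts estimated on the denoised subset can be re-applied to the
# original, unprocessed frames, as in the restart-and-skip steps above.
orig = np.zeros((3, 8, 8), dtype=np.uint16)
orig[:, 2, 2] = 100                 # one bright "cell" at (2, 2)
shifts = [(0, 0), (1, 0), (0, -1)]  # hypothetical per-frame estimates
corrected = apply_shifts(orig, shifts)
# frame 1: cell moved from (2, 2) to (3, 2); frame 2: to (2, 1)
```

The key point the toy version illustrates is that the shifts are a small, standalone output: once saved, they can be applied to any version of the video with the same frame count, whether or not the denoising steps were run.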
Currently, I am working with a subset of my data which is 4000 frames. When I open the data (saved out from the steps above) in ImageJ (after converting to raw format using ffmpeg), there are 4000 frames as expected. However, when I try to load the same video into minian or caiman, it is read as having 4800 frames. Do you know why this is the case?
I checked that Y_fm_chk had 4000 frames before saving it out, so I am not sure why it is 4800 when it is read back in (especially since it is 4000 in ImageJ). When I visualize the data, there is nothing obviously wrong (no blank or repeated frames). In fact, the max/mean traces that are generated look nearly identical, except that one has 4000 frames and the other has 4800. Any idea what is happening?
I'm more than happy to give you more info if that would help. Thanks in advance for the help!
-Chris