The included training script doesn't produce viable training samples when target class is nuc #130
-
I downloaded the data with
Part of the training log:
The generated datasplit.csv, which might be useful.
Replies: 13 comments
-
Could it be because I am using a Python venv instead of micromamba to manage the environment, and something went wrong with the dependencies? Below is my package list:
-
If I set
-
@mzouink / @yuriyzubov Can you check the zipped data to see if this is an issue on our end? @fgdfgfthgr-fox When did you do this download?
-
@fgdfgfthgr-fox
-
@fgdfgfthgr-fox If the input and target arrays have the same shape and scale, then the full FOV is the same as the FOV of the target image (which is shown in Raw).
-
Shortly after #77 was resolved.
-
Actually, it should be around 02/14/25.
-
I suspect this is because you are trying to train on 8nm data for "nuc" without having downloaded all resolutions of the data. This is probably a bug in how the metadata is downloaded. Try using
-
That command does not work.
-
Unfortunately this is an issue with the data, not the command. @yuriyzubov I thought we fixed the "checksum" issue? I think it was linked to making empty scale arrays, but maybe we only fixed it for ground truth? Might also be fixed by omitting empty scale levels.
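As a rough illustration of the "empty scale levels" idea, here is a sketch for spotting them on disk. It assumes the usual zarr v2 layout, where each level directory such as `s0` or `s1` holds a `.zarray` file plus chunk files; `empty_scale_levels` is a hypothetical helper, not part of the challenge tooling:

```python
import os

def empty_scale_levels(group_path: str) -> list[str]:
    """Return names of scale-level directories (e.g. 's0', 's1') that
    declare array metadata (.zarray) but contain no chunk files."""
    empty = []
    for name in sorted(os.listdir(group_path)):
        level = os.path.join(group_path, name)
        if not os.path.isdir(level):
            continue
        entries = os.listdir(level)
        if ".zarray" not in entries:
            continue  # not an array (e.g. a nested group); skip it
        # anything besides the metadata files counts as chunk data
        chunks = [e for e in entries if e not in (".zarray", ".zattrs")]
        if not chunks:
            empty.append(name)
    return empty
```

A level reported here would be one a reader like tensorstore can open but get no data from, which matches the symptom described.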
-
Hi, @fgdfgfthgr-fox It does seem like you have the old version of the data, which contained the checksum parameter that caused the tensorstore error. I recommend downloading the latest data, since all the issues should be resolved; additional raw FIB-SEM data was also added when we opened submissions. Just in case the checksum issue still persists, you can resolve it by removing it from the metadata:
For a single array you can run this method
For all FIB-SEM data:
You can try and run these code snippets to check if it fixed the issue for
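As a hedged sketch of what such a metadata fix might look like (it assumes the offending `checksum` key sits in the compressor config of each zarr v2 `.zarray` file; `strip_checksum` is a hypothetical helper, not part of the challenge tooling):

```python
import json
import os

def strip_checksum(root: str) -> int:
    """Remove the 'checksum' entry from the compressor config of every
    zarr v2 .zarray file below `root`. Returns the number of files fixed."""
    fixed = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        if ".zarray" not in filenames:
            continue
        path = os.path.join(dirpath, ".zarray")
        with open(path) as f:
            meta = json.load(f)
        compressor = meta.get("compressor") or {}
        if "checksum" in compressor:
            del compressor["checksum"]  # meta holds the same dict object
            with open(path, "w") as f:
                json.dump(meta, f, indent=2)
            fixed += 1
    return fixed
```

Pointing it at the download root and re-running training would show whether the checksum entries were the culprit.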
-
After re-downloading the data, the checksum issue is solved.
-
The email communication suggests that different organelle labels have different resolutions, and even within the same organelle class there are different resolutions for different crops; e.g., here are examples of nuclei at 8nm and 32nm.
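One way to check which resolutions a given crop actually carries is to read its multiscale metadata. This is only a sketch, assuming OME-NGFF-style `multiscales` metadata in the group's `.zattrs`; `level_scales` is a hypothetical helper, not part of the challenge tooling:

```python
import json

def level_scales(zattrs_path: str) -> dict[str, list[float]]:
    """Map each multiscale level path (e.g. 's0') to the voxel scale
    recorded in its 'scale' coordinate transformation."""
    with open(zattrs_path) as f:
        attrs = json.load(f)
    scales = {}
    for ms in attrs.get("multiscales", []):
        for ds in ms.get("datasets", []):
            for t in ds.get("coordinateTransformations", []):
                if t.get("type") == "scale":
                    scales[ds["path"]] = t["scale"]
    return scales
```

Comparing the returned scales across crops would confirm whether a given label (e.g. "nuc") really ships at 8nm in one crop but only 32nm in another.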