- in a conda terminal:
- git clone https://github.com/Zeev1988/nnunet_toolkit.git
- cd nnunet_toolkit
- conda env create -f enviroment.yml
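As an optional sanity check, you can confirm the environment was created before activating it. This is a minimal sketch; the environment name conda_nnunet_toolkit is the one activated in the next step:

```bash
# List conda environments and confirm the toolkit environment exists
conda env list | grep conda_nnunet_toolkit
```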
- in a conda terminal - set up the environment before running (a reusable script collecting these exports is sketched after the run step below):
- conda activate conda_nnunet_toolkit
- export nnUNet_raw="/path/to/your/nnunet_raw"
- export nnUNet_preprocessed="/path/to/your/nnunet_preprocessed"
- export nnUNet_results="/path/to/your/nnunet_results"
- export CUDA_VISIBLE_DEVICES=0 (or the index of whichever GPU you want to use)
- run:
- streamlit run ./gui.py
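
For convenience, the exports above can be collected into a small script that is sourced once per session. This is a minimal sketch; the file name setup_env.sh is an assumption, and the placeholder paths must be replaced with your actual data directories:

```bash
#!/usr/bin/env bash
# setup_env.sh -- hypothetical helper; source it rather than executing it,
# so the exports and the conda activation affect the current shell:
#   source setup_env.sh && streamlit run ./gui.py

conda activate conda_nnunet_toolkit

# nnU-Net locates its data through these three environment variables.
export nnUNet_raw="/path/to/your/nnunet_raw"
export nnUNet_preprocessed="/path/to/your/nnunet_preprocessed"
export nnUNet_results="/path/to/your/nnunet_results"

# Restrict training/inference to a single GPU (index 0 here).
export CUDA_VISIBLE_DEVICES=0
```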
Additional information:
- Learning from sparse annotations (scribbles, slices)
- Region-based training
- Manual data splits
- Pretraining and finetuning
- Intensity Normalization in nnU-Net
- Manually editing nnU-Net configurations
- Extending nnU-Net
- What is different in V2?
nnU-Net is developed and maintained by the Applied Computer Vision Lab (ACVL) of Helmholtz Imaging and the Division of Medical Image Computing at the German Cancer Research Center (DKFZ).