Commit a8d9abc

Merge pull request #52 from arcadelab/dev
Dev
2 parents caf97a1 + b70fb9f commit a8d9abc

122 files changed: +449593 −2032 lines


.gitignore

Lines changed: 2 additions & 0 deletions
@@ -8,3 +8,5 @@ build
 docs/build
 .vscode
 **flycheck*.py
+/.DS_Store
+output

MANIFEST.in

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-include deepdrr/projector/project_kernel.cu
+include deepdrr/projector/*
 include deepdrr/projector/cubic/GL/*
 include deepdrr/projector/cubic/internal/*
 include deepdrr/projector/cubic/lib/*

README.md

Lines changed: 79 additions & 13 deletions
@@ -1,20 +1,82 @@
+<div align="center">
+
 # DeepDRR
 
-DeepDRR provides state-of-the-art tools to generate realistic radiographs and fluoroscopy from 3D
-CTs on a training set scale.
+<a href="https://arxiv.org/abs/1803.08606">
+  <img src="http://img.shields.io/badge/paper-arxiv.1803.08606-B31B1B.svg" alt="Paper" />
+</a>
+<a href="https://pepy.tech/project/deepdrr">
+  <img src="https://pepy.tech/badge/deepdrr/month" alt="Downloads" />
+</a>
+<a href="https://github.com/arcadelab/deepdrr/releases/">
+  <img src="https://img.shields.io/github/release/arcadelab/deepdrr.svg" alt="GitHub release" />
+</a>
+<a href="https://pypi.org/project/deepdrr/">
+  <img src="https://img.shields.io/pypi/v/deepdrr" alt="PyPI" />
+</a>
+<a href="http://deepdrr.readthedocs.io/?badge=latest">
+  <img src="https://readthedocs.org/projects/deepdrr/badge/?version=latest" alt="Documentation Status" />
+</a>
+<a href="https://github.com/psf/black">
+  <img src="https://img.shields.io/badge/code%20style-black-000000.svg" alt="Code style: black" />
+</a>
+<a href="https://colab.research.google.com/github/arcadelab/deepdrr/blob/main/deepdrr_demo.ipynb">
+  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" />
+</a>
+
+</div>
+
+DeepDRR provides state-of-the-art tools to generate realistic radiographs and fluoroscopy from 3D CTs on a training set scale.
 
 ## Installation
 
 DeepDRR requires an NVIDIA GPU, preferably with >11 GB of memory.
 
 1. Install CUDA. Version 11 is recommended, but DeepDRR has been used with 8.0.
-2. Make sure your C compiler is on the path. DeepDRR has been used with `gcc 9.3.0`.
-3. Install from `PyPI`
+2. Make sure your C compiler is on the path. DeepDRR has been used with `gcc 9.3.0`.
+3. We recommend installing pycuda separately, as it may need to be built. If you are using [Anaconda](https://www.anaconda.com/), run
+
+   ```bash
+   conda install -c conda-forge pycuda
+   ```
+
+   to install it in your environment.
+4. You may also wish to [install PyTorch](https://pytorch.org/get-started/locally/) separately, depending on your setup.
+5. Install from `PyPI`:
 
 ```bash
 pip install deepdrr
 ```
 
+### Development
+
+Installing from the `dev` branch is risky, as it is unstable. However, this installation method can also be used for the `main` branch, perhaps somewhat more reliably.
+
+Dependencies:
+
+1. CUDA 11.1
+2. Anaconda
+
+The `dev` branch contains the most up-to-date code and can be easily installed using Anaconda. To create an environment with DeepDRR, run
+
+```bash
+git clone https://github.com/arcadelab/deepdrr.git
+cd deepdrr
+git checkout dev
+conda env create -f environment.yaml
+conda activate deepdrr
+```
+
+## Documentation
+
+Documentation is available at [deepdrr.readthedocs.io](https://deepdrr.readthedocs.io/).
+
+To create the autodocs, run
+
+```bash
+sphinx-apidoc -f -o docs/source deepdrr
+```
+
+in the base directory. Then `cd docs` and `make html` to build the static site locally.
+
 ## Usage
 
 The following minimal example loads a CT volume from a NifTi `.nii.gz` file and simulates an X-ray projection:
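
The example code itself falls outside the hunks shown in this diff. For orientation, here is a minimal sketch of such a usage, assembled from the names this commit exports (`Volume`, `MobileCArm`, `Projector`); `Volume.from_nifti` and the exact `Projector` call signature are assumptions, not confirmed by this diff.

```python
import deepdrr

# Load the CT volume from a NifTi file (constructor name assumed).
volume = deepdrr.Volume.from_nifti("/path/to/ct.nii.gz")

# A simulated mobile C-arm supplies the projection geometry.
carm = deepdrr.MobileCArm()

# Projector is used as a context manager here so that GPU resources
# are released on exit (call signature assumed).
with deepdrr.Projector(volume, carm=carm) as projector:
    image = projector()  # simulated X-ray as a numpy array
```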
@@ -43,7 +105,7 @@ Contributions for bug fixes, enhancements, and other suggestions are welcome. Pl
 
 ## Method Overview
 
-DeepDRR combines machine learning models for material decomposition and scatter estimation in 3D and 2D, respectively, with analytic models for projection, attenuation, and noise injection to achieve the required performance. The pipeline is illustrated below.
+DeepDRR combines machine learning models for material decomposition and scatter estimation in 3D and 2D, respectively, with analytic models for projection, attenuation, and noise injection to achieve the required performance. The pipeline is illustrated below.
 
 ![DeepDRR Pipeline](https://raw.githubusercontent.com/arcadelab/deepdrr/master/images/deepdrr_workflow.png)
 
@@ -62,6 +124,7 @@ We have applied DeepDRR to anatomical landmark detection in pelvic X-ray: "X-ray
 ![Prediction Performance](https://raw.githubusercontent.com/arcadelab/deepdrr/master/images/landmark_performance_real_data.PNG)
 
 ### Applications - Metal Tool Insertion
+
 DeepDRR has also been applied to simulate X-rays of the femur during insertion of dexterous manipulators in orthopedic surgery: "Localizing dexterous surgical tools in X-ray for image-based navigation", which was accepted at IPCAI'19: https://arxiv.org/abs/1901.06672. Simulated images are used to train a concurrent segmentation and localization network for tool detection. We found consistent performance on both synthetic and real X-rays of ex vivo specimens. The tool model, simulation image, and detection results are shown below.
 
 This capability has not been tested in version 1.0. For tool insertion, we recommend working with [Version 0.1](https://github.com/arcadelab/deepdrr/releases/tag/0.1) for the time being.
@@ -71,23 +134,24 @@ This capability has not been tested in version 1.0. For tool insertion, we recom
 ### Potential Challenges - General
 
 1. Our material decomposition V-net was trained on NIH Cancer Imaging Archive data. In case it does not generalize perfectly to other acquisitions, the use of intensity thresholds (as is done in conventional Monte Carlo) is still supported. In this case, however, thresholds will likely need to be selected on a per-dataset, or worse, on a per-region basis, since bone density can vary considerably.
-2. Scatter estimation is currently limited to Rayleigh scatter, and we are working on improving this. Scatter estimation was trained on images of 1240x960 pixels with 0.301 mm pixel size. The scatter signal is a composite of Rayleigh, Compton, and multi-path scattering. While all scatter sources produce low-frequency signals, Compton and multi-path are more blurred compared to Rayleigh, suggesting that simple scatter reduction techniques may do an acceptable job. In most clinical products, scatter reduction is applied as pre-processing before the image is displayed and accessible. Consequently, the current shortcoming of not providing *full scatter estimation* is likely not critical for many applications; in fact, scatter can even be turned off completely. We refer to the **Applications** section above for some preliminary evidence supporting this reasoning.
+2. Scatter estimation is currently limited to Rayleigh scatter, and we are working on improving this. Scatter estimation was trained on images of 1240x960 pixels with 0.301 mm pixel size. The scatter signal is a composite of Rayleigh, Compton, and multi-path scattering. While all scatter sources produce low-frequency signals, Compton and multi-path are more blurred compared to Rayleigh, suggesting that simple scatter reduction techniques may do an acceptable job. In most clinical products, scatter reduction is applied as pre-processing before the image is displayed and accessible. Consequently, the current shortcoming of not providing _full scatter estimation_ is likely not critical for many applications; in fact, scatter can even be turned off completely. We refer to the **Applications** section above for some preliminary evidence supporting this reasoning.
 3. Due to the nature of volumetric image processing, DeepDRR consumes a lot of GPU memory. We have successfully tested on 12 GB of GPU memory but cannot speak to 8 GB at the moment. The bottleneck is volumetric segmentation, which can be turned off and replaced by thresholds (see 1.).
-4. We currently provide the X-ray source spectra from MC-GPU, which are fairly standard. Additional spectra can be implemented in spectrum_generator.py.
-5. The current detector reading is *the average energy deposited by a single photon in a pixel*. If you are interested in modeling photon-counting or energy-resolving detectors, you may want to take a look at `mass_attenuation(_gpu).py` to implement your detector.
+4. We currently provide the X-ray source spectra from MC-GPU, which are fairly standard. Additional spectra can be implemented in spectrum_generator.py.
+5. The current detector reading is _the average energy deposited by a single photon in a pixel_. If you are interested in modeling photon-counting or energy-resolving detectors, you may want to take a look at `mass_attenuation(_gpu).py` to implement your detector.
 6. Currently we do not support import of full projection matrices. Instead, you will need to define K, R, and T separately, or use camera.py to define the projection geometry online.
 7. It is important to check proper import of CT volumes. We have tried to account for many variations (HU scale offsets, slice order, origin, file extensions), but one can never be sure enough, so please double-check your files.
 
 ### Potential Challenges - Tool Modeling
 
-1. Currently, the tool/implant model must be represented as a binary 3D volume, rather than a CAD surface model. However, this 3D volume can be of a different resolution than the CT volume; in particular, it can be much higher to preserve fine structures of the tool/implant.
+1. Currently, the tool/implant model must be represented as a binary 3D volume, rather than a CAD surface model. However, this 3D volume can be of a different resolution than the CT volume; in particular, it can be much higher to preserve fine structures of the tool/implant.
 2. The density of the tool needs to be provided via hard-coding in the file 'load_dicom_tool.py' (line 127). The pose of the tool/implant with respect to the CT volume requires manual setup. We provide one example origin setting at lines 23-24.
 3. The tool/implant will supersede the anatomy defined by the CT volume intensities. To this end, we sample the CT materials and densities at the location of the tool in the tool volume and subtract them from the anatomy forward projections in the detector domain (to enable different resolutions of the CT and tool volumes). Further information can be found in the IJCARS article.
 
 ## Reference
 
-We hope this proves useful for medical imaging research. If you use our work, we kindly ask that you cite it.
+We hope this proves useful for medical imaging research. If you use our work, we kindly ask that you cite it.
 The MICCAI article covers the basic DeepDRR pipeline and task-based evaluation:
+
 ```
 @inproceedings{DeepDRR2018,
   author = {Unberath, Mathias and Zaech, Jan-Nico and Lee, Sing Chun and Bier, Bastian and Fotouhi, Javad and Armand, Mehran and Navab, Nassir},
@@ -97,7 +161,9 @@ The MICCAI article covers the basic DeepDRR pipeline and task-based evaluation:
   publisher = {Springer},
 }
 ```
+
 The IJCARS paper describes the integration of tool modeling and provides quantitative results:
+
 ```
 @article{DeepDRR2019,
   author = {Unberath, Mathias and Zaech, Jan-Nico and Gao, Cong and Bier, Bastian and Goldmann, Florian and Lee, Sing Chun and Fotouhi, Javad and Taylor, Russell and Armand, Mehran and Navab, Nassir},
@@ -116,14 +182,14 @@ For the original DeepDRR, released alongside our 2018 paper, please see the [Ver
 
 CUDA Cubic B-Spline Interpolation (CI) used in the projector:
 https://github.com/DannyRuijters/CubicInterpolationCUDA
-D. Ruijters, B. M. ter Haar Romeny, and P. Suetens. Efficient GPU-Based Texture Interpolation using Uniform B-Splines. Journal of Graphics Tools, vol. 13, no. 4, pp. 61-69, 2008.
+D. Ruijters, B. M. ter Haar Romeny, and P. Suetens. Efficient GPU-Based Texture Interpolation using Uniform B-Splines. Journal of Graphics Tools, vol. 13, no. 4, pp. 61-69, 2008.
 
 The projector is a heavily modified and ported version of the implementation in CONRAD:
 https://github.com/akmaier/CONRAD
-A. Maier, H. G. Hofmann, M. Berger, P. Fischer, C. Schwemmer, H. Wu, K. Müller, J. Hornegger, J. H. Choi, C. Riess, A. Keil, and R. Fahrig. CONRAD—A software framework for cone-beam imaging in radiology. Medical Physics 40(11):111914-1-8. 2013.
+A. Maier, H. G. Hofmann, M. Berger, P. Fischer, C. Schwemmer, H. Wu, K. Müller, J. Hornegger, J. H. Choi, C. Riess, A. Keil, and R. Fahrig. CONRAD—A software framework for cone-beam imaging in radiology. Medical Physics 40(11):111914-1-8. 2013.
 
 Spectra are taken from MCGPU:
-A. Badal and A. Badano. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit. Med Phys. 2009 Nov;36(11):4878–80.
+A. Badal and A. Badano. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit. Med Phys. 2009 Nov;36(11):4878–80.
 
 The segmentation pipeline is based on the Vnet architecture:
 https://github.com/mattmacy/vnet.pytorch

deepdrr/__init__.py

Lines changed: 25 additions & 3 deletions
@@ -1,5 +1,27 @@
-from .device import CArm, MobileCArm
-from .vol import Volume
+import logging
+from rich.logging import RichHandler
+
+log = logging.getLogger(__name__)
+ch = RichHandler(level=logging.NOTSET)
+log.addHandler(ch)
+
+
+from . import vis, geo, projector, device, annotations, utils
 from .projector import Projector
+from .vol import Volume
+from .device import CArm, MobileCArm
+from .annotations import LineAnnotation
+
 
-__all__ = ["MobileCArm", "CArm", "Volume", "Projector"]
+__all__ = [
+    "MobileCArm",
+    "CArm",
+    "Volume",
+    "Projector",
+    "vis",
+    "geo",
+    "projector",
+    "device",
+    "annotations",
+    "utils",
+]
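
Because the package root now attaches a `RichHandler` to `logging.getLogger(__name__)`, downstream code can tune DeepDRR's verbosity through the standard `logging` API. A small sketch; the logger name `"deepdrr"` follows directly from `__name__` at the package root:

```python
import logging

import deepdrr  # importing attaches the RichHandler shown above

# The package logger is created with logging.getLogger(__name__) in
# deepdrr/__init__.py, so it is addressable by the name "deepdrr".
logging.getLogger("deepdrr").setLevel(logging.DEBUG)
```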

deepdrr/annotations/__init__.py

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+from .line_annotation import LineAnnotation
+
+__all__ = ['LineAnnotation']
deepdrr/annotations/line_annotation.py

Lines changed: 83 additions & 0 deletions

@@ -0,0 +1,83 @@
+from __future__ import annotations
+
+import logging
+from typing import Optional
+from pathlib import Path
+import numpy as np
+import json
+import pyvista as pv
+
+from .. import geo
+from ..vol import Volume, AnyVolume
+
+logger = logging.getLogger(__name__)
+
+
+class LineAnnotation(object):
+    def __init__(
+        self, startpoint: geo.Point, endpoint: geo.Point, volume: AnyVolume
+    ) -> None:
+        # all points in anatomical coordinates, matching the provided volume.
+        self.startpoint = geo.point(startpoint)
+        self.endpoint = geo.point(endpoint)
+        self.volume = volume
+
+        assert (
+            self.startpoint.dim == self.endpoint.dim
+        ), "annotation points must have matching dim"
+
+    def __str__(self):
+        return f"LineAnnotation({self.startpoint}, {self.endpoint})"
+
+    @classmethod
+    def from_markup(cls, path: str, volume: AnyVolume) -> LineAnnotation:
+        with open(path, "r") as file:
+            ann = json.load(file)
+
+        control_points = ann["markups"][0]["controlPoints"]
+        points = [geo.point(cp["position"]) for cp in control_points]
+
+        coordinate_system = ann["markups"][0]["coordinateSystem"]
+        logger.debug(f"coordinate system: {coordinate_system}")
+
+        if volume.anatomical_coordinate_system == "LPS":
+            if coordinate_system == "LPS":
+                pass
+            elif coordinate_system == "RAS":
+                logger.debug("converting to LPS")
+                points = [geo.LPS_from_RAS @ p for p in points]
+            else:
+                raise ValueError
+        elif volume.anatomical_coordinate_system == "RAS":
+            if coordinate_system == "LPS":
+                logger.debug("converting to RAS")
+                points = [geo.RAS_from_LPS @ p for p in points]
+            elif coordinate_system == "RAS":
+                pass
+            else:
+                raise ValueError
+        else:
+            logger.warning(
+                "annotation may not be in correct coordinate system. "
+                "Unable to check against provided volume, probably "
+                "because volume was created manually. Proceed with caution."
+            )
+
+        return cls(*points, volume)
+
+    @property
+    def startpoint_in_world(self) -> geo.Point:
+        return self.volume.world_from_anatomical @ self.startpoint
+
+    @property
+    def endpoint_in_world(self) -> geo.Point:
+        return self.volume.world_from_anatomical @ self.endpoint
+
+    def get_mesh_in_world(self, full: bool = True):
+        u = self.startpoint_in_world
+        v = self.endpoint_in_world
+
+        mesh = pv.Line(u, v)
+        mesh += pv.Sphere(2.5, u)
+        mesh += pv.Sphere(2.5, v)
+        return mesh
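
To put the new class in context, a hedged usage sketch: `from_markup` reads `markups[0]["controlPoints"]` and a `coordinateSystem` field, which matches the JSON written by 3D Slicer's markups module. `Volume.from_nifti` is an assumption, and the file paths are placeholders.

```python
from deepdrr import Volume, LineAnnotation

# Load the CT that the annotation is defined relative to
# (constructor name assumed).
volume = Volume.from_nifti("/path/to/ct.nii.gz")

# from_markup parses markups[0].controlPoints and converts between RAS
# and LPS as needed to match the volume's anatomical coordinate system.
annotation = LineAnnotation.from_markup("/path/to/line.mrk.json", volume)

print(annotation)  # LineAnnotation(startpoint, endpoint)

# pyvista mesh in world coordinates: the line plus a sphere at each endpoint.
mesh = annotation.get_mesh_in_world()
```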
