
Commit f0c06ad

Author: Yifan Peng

Merge pull request #1 from yfpeng/master
Change the project name

2 parents: 3ffdda4 + c2c611e

File tree

7 files changed (+18 lines, -17 lines)

README.rst

Lines changed: 5 additions & 5 deletions
@@ -1,8 +1,8 @@
-.. image:: https://github.com/ncbi-nlp/DeepSeeNet/blob/master/images/eyesnet.png?raw=true
-   :target: https://github.com/ncbi-nlp/DeepSeeNet/blob/master/images/eyesnet.png?raw=true
-   :alt: EyesNet
-
-
+.. image:: https://github.com/ncbi-nlp/DeepSeeNet/blob/master/images/deepseenet.png?raw=true
+   :target: https://github.com/ncbi-nlp/DeepSeeNet/blob/master/images/deepseenet.png?raw=true
+   :alt: DeepSeeNet
+
+
 -----------------------
 
 DeepSeeNet is a high-performance deep learning framework for grading of color fundus photographs using the AREDS simplified severity scale.
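The README line above summarizes the workflow: individual fundus photographs are graded per eye before a patient-level score is assembled. Purely as an illustrative sketch (not the project's documented interface), a per-eye prediction step in Keras might look like the following; the weight-file name, the 224x224 input size, the rescaling, and the class ordering are all assumptions.

```python
import numpy as np
import tensorflow as tf

# Hypothetical weight file; the actual DeepSeeNet artifacts and preprocessing may differ.
model = tf.keras.models.load_model("drusen_classifier.h5")

def grade_eye(image_path, target_size=(224, 224)):
    """Run one per-eye risk-factor classifier on a single fundus photograph (sketch)."""
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=target_size)
    x = tf.keras.preprocessing.image.img_to_array(img) / 255.0   # scale pixels to [0, 1]
    probs = model.predict(x[np.newaxis, ...])[0]                 # add a batch dimension
    return int(np.argmax(probs))  # assumed ordering: 0 = none/small, 1 = intermediate, 2 = large drusen

print(grade_eye("left_eye.jpg"))
```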

docs/images/Fig5.png

1.01 MB
Binary file not shown.

docs/images/Fig9.png

-337 KB
Binary file not shown.

docs/images/Tab4.png

3.95 KB
Binary file not shown.

docs/index.html

Lines changed: 13 additions & 12 deletions
@@ -8,7 +8,7 @@
 
 <!--Let browser know website is optimized for mobile-->
 <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
-<title>EyesNet: A deep learning framework for classifying patient-based age-related macular degeneration severity in
+<title>DeepSeeNet: A deep learning framework for classifying patient-based age-related macular degeneration severity in
 retinal color fundus photographs</title>
 
 <style>
@@ -85,11 +85,11 @@
 <div class="container">
 <div class="row">
 <div class="col s12">
-<h3>EyesNet: A deep learning framework for classifying patient-based
+<h3>DeepSeeNet: A deep learning framework for classifying patient-based
 age-related macular degeneration severity in retinal color fundus photographs</h3>
 </div>
 <div class="col l12">
-<h5>Yifan Peng<sup>1*</sup>, Shazia Dharssi<sup>1,2*</sup>, Qingyu Chen<sup>1</sup>, Elvira Agron<sup>2</sup>, Wai Wong<sup>2</sup>, Emily Y. Chew<sup>2&dagger;</sup>, Zhiyong Lu<sup>1&dagger;</sup></h5>
+<h5>Yifan Peng<sup>1*</sup>, Shazia Dharssi<sup>1,2*</sup>, Qingyu Chen<sup>1</sup>, Elvira Agrón<sup>2</sup>, Wai Wong<sup>2</sup>, Emily Y. Chew<sup>2&dagger;</sup>, Zhiyong Lu<sup>1&dagger;</sup></h5>
 <p>1. National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, Maryland, United States;</p>
 <p>2. National Eye Institute (NEI), National Institutes of Health (NIH), Bethesda, Maryland, United States;</p>
 <p>* These authors contributed equally to this work.</p>
@@ -106,12 +106,12 @@ <h6>> Source code coming soon</h6>
 <section>
 <div class="container">
 <div class="row">
-<div class="col l9">
-<h5>We developed a deep learning framework that can classify retinal color fundus photographs into a 6 class patient-based age-related macular degeneration (AMD) severity score at a level that exceeds U.S.-licensed ophthalmologists.</h5>
+<div class="col l7">
+<h5>We developed a deep learning framework that can classify retinal color fundus photographs into a 6 class patient-based age-related macular degeneration (AMD) severity score at a level that exceeds retinal specialists.</h5>
 <p>Age-related macular degeneration (AMD) is the leading cause of incurable blindness worldwide in people over the age of 65. The Age-Related Eye Disease Study (AREDS) Simplified Severity Scale uses two risk factors found in color fundus photographs (drusen and pigmentary abnormalities) to provide convenient risk categories for the development of late AMD. However, manual assignment can still be time consuming, expensive, and requires domain expertise.<p>
-<p>Our model, EyesNet, mimics the human grading process by first detecting risk factors for each eye (large drusen and pigmentary abnormalities) and subsequently calculates patient-based AMD severity scores. EyesNet was trained and validated on 59,302 color fundus photographs from 4,549 participants.</p>
+<p>Our model, DeepSeeNet, mimics the human grading process by first detecting risk factors for each eye (large drusen and pigmentary abnormalities) and subsequently calculates patient-based AMD severity scores. DeepSeeNet was trained and validated on 59,302 color fundus photographs from 4,549 participants.</p>
 </div>
-<div class="col l3"><img src="images/Fig9.png"></div>
+<div class="col l5"><img src="images/Fig5.png"></div>
 </div>
 </div>
 </section>
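The paragraph changed in this hunk describes the two-stage design: per-eye risk factors are detected first, then combined into a patient-based AREDS Simplified Severity Scale score. As a hedged illustration of that second, purely arithmetic step, here is a minimal Python sketch; the function name, the boolean per-eye encoding, and the bilateral intermediate-drusen rule are assumptions drawn from the published scale, not code from this repository.

```python
def simplified_severity_score(
    large_drusen,                        # (left, right) booleans: large drusen present?
    pigment_abnormality,                 # (left, right) booleans: pigmentary abnormality present?
    late_amd,                            # (left, right) booleans: late AMD present?
    intermediate_drusen=(False, False),  # used only for the bilateral-intermediate rule
):
    """Combine per-eye risk factors into a patient-based severity score (0-5).

    Sketch of the AREDS Simplified Severity Scale arithmetic: one point per eye with
    large drusen and one per eye with pigmentary abnormalities (0-4); late AMD in
    either eye maps to the sixth class (5). Details here are illustrative assumptions.
    """
    if any(late_amd):
        return 5  # late AMD in either eye dominates the patient-based score
    score = sum(large_drusen) + sum(pigment_abnormality)
    # AREDS modification: intermediate drusen in both eyes, with no large drusen,
    # counts as one risk factor.
    if not any(large_drusen) and all(intermediate_drusen):
        score += 1
    return score


# Example: large drusen in both eyes, pigment change in one eye -> score 3
print(simplified_severity_score((True, True), (True, False), (False, False)))
```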
@@ -121,8 +121,8 @@ <h5>We developed a deep learning framework that can classify retinal color fundu
 <div class="row">
 <div class="col l6"><img src="images/Fig1.png" style="padding-top:20px"></div>
 <div class="col l6">
-<h5>EyesNet was trained on the NIH AREDS dataset, the largest publicly available dataset of color fundus images for AMD analysis</h5>
-<p>This dataset, released by the NIH, contains retinal color fundus images from over 4,549 patients. Grades obtained from a central reading center were used to calculate AMD severity scores for ground truth labels. Performance of EyesNet was compared to the performance of U.S.-licensed ophthalmologists, who independently graded at baseline 450 AREDS patients based on a clinical evaluation.</p>
+<h5>DeepSeeNet was trained on the NIH AREDS dataset, the largest publicly available dataset of color fundus images for AMD analysis</h5>
+<p>This dataset, released by the NIH, contains retinal color fundus images from over 4,549 patients. Grades obtained from a central reading center were used to calculate AMD severity scores for ground truth labels. Performance of DeepSeeNet was compared to the performance of retinal specialists, who independently assessed 450 AREDS participants as part of a qualification survey used to determine initial AMD severity for each eye.</p>
 </div>
 </div>
 </div>
@@ -132,8 +132,8 @@ <h5>EyesNet was trained on the NIH AREDS dataset, the largest publicly available
 <div class="container">
 <div class="row">
 <div class="col s6">
-<h5>Our model consistently exceeds U.S.-licensed ophthalmologists on drusen and pigmentary abnormalities, and is comparable to ophthalmologists on late AMD detection. </h5>
-<p>As seen to the left, EyesNet's performance (accuracy=0.671; kappa=0.559) exceeds ophthalmologist performance levels (accuracy=0.598; kappa=0.466) on identifying AREDS Simplified Severity Scale scores. Additionally, EyesNet's performance was compared to two other deep learning models with different training strategies employed. Again, EyesNet's performance is superior to both model performance levels.</p>
+<h5>Our model consistently exceeds retinal specialists on drusen and pigmentary abnormalities, and is comparable to retinal specialists on late AMD detection. </h5>
+<p>As seen to the right, DeepSeeNet's performance (accuracy=0.671; kappa=0.558) exceeds retinal specialists performance levels (accuracy=0.599; kappa=0.467) on identifying AREDS Simplified Severity Scale scores. Additionally, DeepSeeNet's performance was compared to two other deep learning models with different training strategies employed. Again, DeepSeeNet's performance is superior to both model performance levels.</p>
 </div>
 <div class="col s6"><img src="images/Tab4.png" style="padding-top:40px"></div>
 </div>
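The accuracy and kappa values quoted in this hunk are standard multi-class agreement metrics, with kappa correcting raw accuracy for chance agreement. A small sketch of how such figures are computed with scikit-learn, using made-up placeholder labels rather than AREDS data:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder patient-based severity scores (0-5); not real AREDS labels.
y_true = [0, 1, 2, 3, 4, 5, 2, 1, 0, 3]
y_pred = [0, 1, 2, 2, 4, 5, 3, 1, 0, 3]

print("accuracy:", accuracy_score(y_true, y_pred))     # fraction of exact matches
print("kappa:   ", cohen_kappa_score(y_true, y_pred))  # chance-corrected agreement
```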
@@ -146,7 +146,8 @@ <h5>Our model consistently exceeds U.S.-licensed ophthalmologists on drusen and
 <div class="col s12">
 <h5>Summary</h5>
 <p>While several automated deep learning systems have been developed for classifying color fundus photographs of individual eyes by AMD severity score, none to date have utilized a patient-based scoring system that employs images from both eyes to obtain one classification score for the individual.</p>
-<p>EyesNet, trained on one of the largest color fundus image datasets for AMD analysis, shows high classification accuracy in the AREDS dataset and can be used to assign individual patients to AMD risk categories based on the AREDS Simplified Severity Scale. Its superior performance compared to ophthalmologists’ clinical examination of AMD highlights the potential of deep learning systems to enhance clinical decision-making processes and allow for better understanding of retinal disease.</p>
+<p>DeepSeeNet, trained on one of the largest color fundus image datasets for AMD analysis, shows high classification accuracy in the AREDS dataset and can be used to assign individual patients to AMD risk categories based on the AREDS Simplified Severity Scale. DeepSeeNet performed better on patient-based, multi-class classification (accuracy=0.671; kappa=0.558) than retinal specialists (accuracy=0.599; kappa=0.467) with high AUCs in the detection of large drusen (0.94), pigmentary abnormalities (0.93) and late AMD (0.97), respectively.
+Its superior performance highlights the potential of deep learning systems to enhance clinical decision-making processes and allow for better understanding of retinal disease.</p>
 </div>
 </div>
 </div>
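The per-risk-factor AUCs cited in the summary (large drusen, pigmentary abnormalities, late AMD) each come from a binary sub-task. A brief illustration of how one such AUC is computed from predicted probabilities with scikit-learn, again on placeholder values rather than AREDS data:

```python
from sklearn.metrics import roc_auc_score

# Placeholder per-eye labels and predicted probabilities for one sub-task (e.g. large drusen).
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_prob = [0.1, 0.3, 0.8, 0.6, 0.2, 0.9, 0.4, 0.7]

print("AUC:", roc_auc_score(y_true, y_prob))  # area under the ROC curve
```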

images/deepseenet.png

74.6 KB
Binary file not shown.

images/eyesnet.png

-67.5 KB
Binary file not shown.
