The active contour is a powerful method for edge extraction; however, it does not extract thin vessels and ridges very well. We propose an enhanced active contour for retinal blood vessel extraction. This repository is an implementation of the paper below:
"An active contour model using matched filter and Hessian matrix for retinal vessels segmentation. Shabani, etc."
LINK
Mahtab Shabani, Hossein Pourghassem
If you find our work useful, please consider citing:
@article{shabani2022active,
title={An active contour model using matched filter and Hessian matrix for retinal vessels segmentation},
author={Shabani, Mahtab and Pourghassem, Hossein},
journal={Turkish Journal of Electrical Engineering and Computer Sciences},
volume={30},
number={1},
pages={295--311},
year={2022}
}
For additional information, read the master's dissertation: Preprint
The images are preprocessed using the vessel enhancement method in [1]. The authors of [1] present an algorithm based on iterated morphological operators. This filter reduces illumination inhomogeneities and removes the false edge around the optic disk.
[1] Heneghan, C., Flynn, J., O'Keefe, M., et al., "Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis," Medical Image Analysis, 6(4), pp. 407–429, 2002.
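As a rough illustration of this preprocessing idea, the sketch below normalizes the background of the green channel with alternating morphological openings and closings of increasing size and subtracts the original image so that the dark vessels stand out. The function name, the structuring-element radii, and the exact operator sequence are illustrative assumptions, not the implementation from [1] or from this repository.

```matlab
% Minimal sketch: background normalization by iterated morphology
% (alternating opening/closing with growing structuring elements).
function enhanced = enhance_vessels(greenChannel)     % hypothetical helper
    img = im2double(greenChannel);
    background = img;
    for r = [3 7 11 15]                 % increasing disk radii (assumed values)
        se = strel('disk', r);
        background = imclose(imopen(background, se), se);
    end
    enhanced = mat2gray(background - img);   % vessels are darker than the background
end
```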
5.1. Adding Wavelet terms to the minimization energy formula to improve the performance of the algorithm
5.2. Optimization process
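To make the optimization process (5.2) concrete, below is a schematic gradient-descent update for a Chan–Vese-style level set with one extra data term R (e.g. a wavelet or matched-filter response) that pulls the contour toward vessel-like pixels. The function names, the form of the extra term, and all parameter values are assumptions for illustration and do not reproduce the exact energy of the paper.

```matlab
% Schematic single step of a Chan-Vese-type level set evolution with an
% extra response term R (assumed precomputed and scaled to [0, 1]).
function phi = evolve_step(phi, I, R, mu, lambda1, lambda2, nu, dt)   % hypothetical helper
    inside = phi >= 0;
    c1 = mean(I(inside));   c2 = mean(I(~inside));   % region means
    kappa = div_norm_grad(phi);                      % curvature term
    dirac = (1/pi) ./ (1 + phi.^2);                  % smoothed Dirac delta
    dphi  = dirac .* ( mu*kappa ...
                     - lambda1*(I - c1).^2 ...
                     + lambda2*(I - c2).^2 ...
                     + nu*R );                       % extra term favors high-response pixels
    phi = phi + dt*dphi;
end

function k = div_norm_grad(phi)
    [px, py]  = gradient(phi);
    mag       = sqrt(px.^2 + py.^2) + eps;
    [nxx, ~]  = gradient(px ./ mag);
    [~,  nyy] = gradient(py ./ mag);
    k = nxx + nyy;                                   % div(grad(phi)/|grad(phi)|)
end
```

In practice, the level set would be initialized on the enhanced image, this step iterated until convergence, and phi reinitialized periodically to keep it close to a signed distance function.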
The result of the active contour:
We run the algorithm in MATLAB R2014a on a personal computer running Windows 10 with an Intel(R) Core i5-7200U processor (2.5 GHz) and 8 GB of memory. The proposed algorithm is evaluated on five publicly available datasets. The values achieved are 94.3%, 73.36%, and 97.41% for accuracy, sensitivity, and specificity, respectively, on the DRIVE dataset, which is comparable to state-of-the-art approaches.
Top: randomly chosen images from the DRIVE dataset. Middle: segmentation results. Bottom: expert's annotation.
Performance metrics on the DRIVE, STARE, HRF, CHASE DB1, and ARIA databases:
Wide-vessel pixels greatly outnumber thin-vessel pixels, so sensitivity alone does not indicate performance well, and accuracy is typically high regardless. Therefore, we remove the wide-vessel pixels from the images and compute the evaluation metrics only on the images without wide vessels. For this, we need a new benchmark of the DRIVE database in which the wide vessels are absent; the wide vessels are removed with a Canny detector. The figure below shows a randomly chosen DRIVE benchmark image and the corresponding new benchmark image. We then compute the measures for all 20 images of the DRIVE test set against the new benchmark.
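One plausible reading of this construction is sketched below: take the Canny edges of the binary expert annotation, thicken them slightly, and keep only annotation pixels inside that band, so the interiors of wide vessels are discarded while thin vessels survive almost intact. The file name, the band width, and the recipe itself are assumptions for illustration; the paper's exact construction of the new benchmark may differ.

```matlab
gt     = imread('01_manual1.gif') > 0;            % DRIVE expert annotation (assumed file name)
edges  = edge(double(gt), 'canny');               % vessel boundaries
band   = imdilate(edges, strel('disk', 1));       % thin band around the boundaries
thinGt = gt & band;                               % wide-vessel interiors are removed
imshowpair(gt, thinGt, 'montage');                % original vs. new benchmark
```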
The average TPR is 0.4544, which means our algorithm detects 45.44% of the thin-vessel pixels. The average accuracy, FPR, and informedness are 0.9616, 0.0164, and 0.4387, respectively. In this section we use an additional measure, the J index (or informedness), defined as J = sensitivity + specificity - 1. The F-score considers only the positive class (precision and sensitivity), whereas the J index combines information from both the positive and the negative class (sensitivity and specificity), so it characterizes the algorithms better than TPR alone.
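For reference, all of these measures follow directly from the confusion counts between the binary segmentation and the thin-vessel benchmark; the variable names seg and gt below are placeholders.

```matlab
% seg: binary segmentation result, gt: thin-vessel benchmark (same size)
TP = nnz( seg &  gt);   FP = nnz( seg & ~gt);
FN = nnz(~seg &  gt);   TN = nnz(~seg & ~gt);
sensitivity = TP / (TP + FN);                     % TPR
specificity = TN / (TN + FP);
FPR         = FP / (FP + TN);                     % = 1 - specificity
accuracy    = (TP + TN) / (TP + TN + FP + FN);
J           = sensitivity + specificity - 1;      % informedness (J index)
```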
(a) Expert's annotation. (b) The corresponding new benchmark made by us.
Now run Demo_ActiveContoure.m and enjoy it!