Abstract
In this work, we present a rules-based method for localizing retinal blood vessels in confocal scanning laser ophthalmoscopy (cSLO) images and evaluate its feasibility. A total of 31 healthy participants (17 female; mean age: 64.0 ± 8.2 years) were studied using manual and automatic segmentation. High-resolution peripapillary cSLO scans were acquired. The automated segmentation method consisted of image pre-processing for gray-level homogenization and blood vessel enhancement (morphological opening operation, Gaussian filter, morphological top-hat transformation), binary thresholding (entropy-based thresholding operation), and removal of falsely detected isolated vessel pixels. The proposed algorithm was first tested on the publicly available DRIVE dataset, which contains color fundus photographs, and compared to performance results from the literature; good results were obtained. Monochromatic cSLO images segmented using the proposed method were compared to those manually segmented by two independent observers. For the algorithm, a sensitivity of 0.7542, specificity of 0.8607, and accuracy of 0.8508 were obtained. For the two independent observers, a sensitivity of 0.6579, specificity of 0.9699, and accuracy of 0.9401 were obtained. The results demonstrate that it is possible to localize vessels in monochromatic cSLO images of the retina using a rules-based approach. The performance results are inferior to those obtained using fundus photography, which could be due to the nature of the technology.
1 Introduction
The segmentation of retinal vessels and their morphology such as length, width, tortuosity, and branching patterns can be used for the diagnosis, screening, treatment, and evaluation of various cardiovascular and ophthalmologic diseases [1]. It has been shown that morphological features of retinal vessels in childhood and adulthood are related to cardiovascular risk factors, such as blood pressure and body mass index [2], and both coronary heart disease and stroke in later life [3]. Automatic detection of retinal vessels and the analysis of their morphology are feasible in screening programs for diabetic retinopathy [4], arteriolar narrowing detection [5], detection of foveal avascular regions [6], retinopathy of prematurity evaluation [7], and investigation of general cardiovascular diseases and hypertension [8].
Temporal or multimodal image registration [9], optic disc identification, and fovea localization [10] are possible using automatic algorithms for retinal vessel detection and branch point detection. The retinal vascular tree can even be used for biometric identification, as it has been found to be unique for each individual [11, 12].
The automated analysis of the above-mentioned morphological features is accepted by the medical community [13], as manual measurement is not only time-consuming, but also dependent on the observer and their experience.
Many algorithms and techniques have been proposed for the segmentation of retinal blood vessels using two-dimensional (2D) colored retinal images acquired from a conventional fundus camera [13]. With the introduction of confocal scanning laser ophthalmoscopy (cSLO) [14], it has become possible to image the fundus without the need for mydriatics. Moreover, confocal scanning has several advantages over a conventional fundus camera in terms of convenience for screening.
cSLO uses a focused light beam to scan over the area of the fundus to be imaged [15–17], and only a small spot from the fundus is illuminated at any time. The light that returns from this spot determines the brightness of a corresponding point in the generated image or screen. An array of points or pixels can be built up by scanning successive points on the fundus with the light beam. This is achieved using a spinning and oscillating mirror that helps the light beam to scan faster across the fundus.
cSLO images are monochromatic and thus differences in spectral information are lost. However, a monochromatic cSLO image differs from a conventional monochromatic image in that the contrast in a confocal image arises mainly from differences in the absorption of incident light [18]. This means that the information carried by a given wavelength in a monochromatic fundus camera image differs from that in a confocal image: in a confocal image, the blood vessels show up well under red illumination, whereas this is not the case in conventional monochromatic images. The resolution of cSLO may be lower than that of a conventional fundus camera for structures that have high contrast; however, cSLO may be able to resolve structures that cannot be seen by conventional imaging because of poor contrast. This is due to the nature of the technology, as the visibility of a structure depends on its contrast with its surroundings.
As mentioned previously, cSLO has several advantages. These include:
(a) Dilation is no longer required to ensure high-quality results. The burden of fundus investigations on the patient is low, resulting in a high acceptance rate for longitudinal investigations.

(b) Examinations can be performed in under 2 min.

(c) cSLO is often integrated with other technologies such as spectral-domain optical coherence tomography (SD-OCT), thereby facilitating multimodality imaging, which links different views on a particular disease and broadens the research spectrum [19].
Retinal vessels are arterioles and venules starting from the optic disc and are spread out over the fundus. There is a strong light reflex along the centerline of the retinal vessels that is more apparent on arterioles than venules and in younger compared to older participants.
As mentioned before, methods for retinal vessel segmentation using 2D colored retinal images acquired from a conventional fundus camera can be divided into two groups, namely rules-based methods and supervised methods. Rules-based methods use vessel tracking, mathematical morphology, matched filtering, model-based locally adaptive thresholding, or deformable models. Supervised methods are based on pixel classification, which consists of classifying each pixel into one of two classes, vessel and non-vessel.
There are several approaches to rules-based methods. Vessel tracking uses the centerlines of vessels to obtain the vasculature [20–25]: an initial set of starting points, for example at the optic nerve, is established automatically or manually, and the vessels are traced from there. Another approach exploits the knowledge that vessels are piecewise linear and connected [7, 26–29], so morphological operators can be used to filter the vasculature from the background. Matched filtering techniques [30–36] extract the vessel silhouette from the background using a 2D linear structuring element with a Gaussian cross-profile, with the kernel rotated into many different orientations (usually 8 or 12) to fit vessels of different configurations. A general framework based on a verification-based multi-threshold probing scheme was presented by Jiang et al. [37]. Deformable or snake models have also been used [38, 39]; these evolve by iterative adaptation to fit the shape of the desired structure.
Supervised methods are based on training with manually labeled images [40–46]. The images are pre-processed using approaches similar to rules-based methods; a classifier is then trained on the manually labeled images and used to assign each pixel to the vessel or non-vessel class.
This paper proposes a rules-based method partly adopted from Marin et al. [46], Pal et al. [47], and Chanwimaluang et al. [33]. The method has several steps: (1) image pre-processing for gray-level homogenization and blood vessel enhancement, (2) binary thresholding operation, and (3) removal of falsely detected isolated vessel pixels.
To date, we have not been able to find a method for vessel segmentation in cSLO images. Analogue techniques from fundus photography [33, 46, 47] were adopted for this new image acquisition technique, which is becoming increasingly popular. The purpose of our study is to determine whether existing techniques can be applied to localizing retinal blood vessels in cSLO images.
2 Materials and Methods
To evaluate the vessel segmentation methodology, 31 peripapillary scan acquisition cSLO images of one randomly selected eye of 31 healthy participants (17 female and 14 male; mean age, 64.0 ± 8.2 years) were used. Furthermore, as this approach adopts previous algorithms for conventional fundus photographs, it was evaluated using the Digital Retinal Images for Vessel Extraction (DRIVE) dataset [43]. DRIVE is a publicly available database consisting of a total of 40 color fundus photographs obtained from a diabetic retinopathy screening program in the Netherlands that comprised 453 subjects between 31 and 86 years of age. This enables comparison with the literature. The Ethics Committee of the Medical Association of Hamburg ruled that approval was not required for this study, as all data were acquired anonymously. The study followed the recommendations of the Declaration of Helsinki (seventh revision, 64th Meeting, Fortaleza, Brazil) and Good Clinical Practice. Written informed consent was obtained from each patient before any examination procedures were performed. Patients who were not able to give informed consent were excluded from the study.
Only patients meeting the inclusion and exclusion criteria were included. The ophthalmic inclusion criteria were (i) best-corrected visual acuity of 0.3 LogMAR or better, (ii) spherical refraction within ±5.0 dioptres (D), (iii) cylindrical correction within ±2.0 D, and (iv) normal results for visual field testing (Humphrey Visual Field Analyzer 30–2 [76 points over the central 30° of the visual field]; Humphrey, San Leandro, CA, USA). The exclusion criteria were (i) intensive alcohol abuse, (ii) body mass index >30 kg/m2, (iii) intraocular pressure ≥21 mmHg, (iv) anterior ischaemic optic neuropathy, (v) high myopia, and (vi) congenital abnormalities of the optic nerve.
Patients underwent various ophthalmic examinations, including (i) assessment of best-corrected visual acuity by auto-refractometry (OCULUS/NIDEK auto-refractometer, OCULUS Optikgeräte GmbH, Wetzlar, Germany) followed by subjective refractometry, (ii) slit lamp-assisted biomicroscopy of the anterior segment, (iii) ophthalmoscopy after medical dilation of the pupil, (iv) visual field testing (Humphrey Visual Field Analyzer 30-2 [76 points over the central 30° of the visual field]), (v) Goldmann applanation tonometry, and (vi) cSLO image acquisition (SPECTRALIS; Heidelberg Engineering, Heidelberg, Germany).
cSLO images were acquired using the SPECTRALIS device (SPECTRALIS software version 6.0a; Heidelberg Engineering), which is a combination of normal SD-OCT and cSLO. In our study, at least three high-resolution peripapillary images were taken. Scans with low fixation or failing retinal nerve fiber layer (RNFL) segmentation (thus possible low quality of cSLO as well) were excluded. To minimize possible variability, all images were acquired by one trained investigator. The criteria for determining the scan quality were a clear fundus image before and after image acquisition and absence of scan or algorithm failures.
All cSLO images were manually segmented by two experienced observers independently. These were set as the gold standard (approximate ground truth) and were compared with the automatically segmented vessels of the algorithm. Statistical analysis was carried out using a commercially available software package (Prism 6 for Mac OSX; GraphPad Software, Version 6.0d). The means and standard deviations were determined, and p values were corrected according to Bonferroni to correct for performing multiple statistical analyses. All p values were two-tailed and a p value of <0.05 was considered to indicate statistical significance. Correlation was performed using Pearson correlation calculations, as the values sampled from the populations followed an approximate Gaussian distribution. The correlation coefficient is indicated by r. One eye of each participant was used for statistical analysis.
3 Calculation
The proposed method consists of (1) image pre-processing for gray-level homogenization and blood vessel enhancement, (2) a binary thresholding operation, and (3) removal of falsely detected isolated vessel pixels. The input images are monochromatic in the case of cSLO. For the color photographs of the DRIVE dataset, only the green channel of the retinal image was selected because it best highlights the vessels.
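As a minimal sketch of the input selection, the following illustrative Python/NumPy function (the paper does not name an implementation, and the R, G, B channel ordering is an assumption) extracts the green channel for color fundus photographs and passes monochromatic cSLO images through unchanged:

```python
import numpy as np

def select_input_channel(image):
    """Return the segmentation input: the green channel for RGB fundus
    photographs (best vessel/background contrast), or the image itself
    for monochromatic cSLO scans.

    Assumes channel ordering R, G, B for 3-channel inputs (illustrative).
    """
    if image.ndim == 3:
        return image[:, :, 1]  # green channel
    return image
```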
3.1 Image Pre-processing for Gray-Level Homogenization and Blood Vessel Enhancement
cSLO images have high levels of noise; therefore, pre-processing is needed before the vessels can be segmented. The pre-processing includes: (a) removal of the vessel central light reflex, (b) homogenization of the background, and (c) enhancement of the vessels. These steps are shown in Fig. 1a–f.
3.1.1 Vessel Central Light Reflex Removal
Retinal blood vessels in cSLO images typically appear darker than the surrounding tissue due to their lower reflectance. Furthermore, inner vessel pixels appear darker than the outer ones; however, many vessels include a central light reflex. For accurate segmentation of the vessel, this bright strip needs to be removed. Therefore, a morphological opening operation is applied using a three-pixel-diameter disc. An example is shown in Figs. 1b, c. This step was not needed for the DRIVE dataset with color fundus photographs.
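The reflex removal step can be sketched as a grayscale morphological opening. The following is an illustrative Python implementation using SciPy (an assumption; the paper does not specify its software), with the three-pixel-diameter disc from the text:

```python
import numpy as np
from scipy import ndimage

def remove_central_light_reflex(image, disc_diameter=3):
    """Suppress the bright strip along vessel centerlines with a
    grayscale morphological opening using a disc-shaped structuring
    element (3-px diameter, per the text)."""
    r = disc_diameter // 2
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    disc = (x * x + y * y) <= r * r  # boolean disc footprint
    # Opening removes bright details narrower than the disc, such as
    # the central light reflex inside the darker vessels.
    return ndimage.grey_opening(image, footprint=disc)
```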
3.1.2 Homogenization of Background
Apart from the vessels, the fundus contains areas of non-uniform intensity due to the unequal distribution of the RNFL, the translucency of the choroid tissue, and variable illumination. For the subsequent segmentation steps, this intensity variation needs to be removed as much as possible to improve the performance of vessel segmentation. Firstly, a 3 × 3 mean filter is applied to smooth the image, followed by a Gaussian kernel of size 3 × 3 with a standard deviation of 1.8. With this filtering, the smallest vessels might not be detected, but the overall performance is increased due to noise reduction. Secondly, a background image is generated using a 20 × 20 filter; this background image is subtracted from the filtered image, resulting in homogenization of the background.
The gray levels of the resultant image are not well distributed over the full 8-bit range of 0–255. The values are therefore linearly transformed into integers covering the whole range of possible gray levels, leading to a shade-corrected image I sc with reduced background intensity variations and enhanced contrast. To remove the effect of differing illumination, we created a homogenized image I h using the following equation: I h = I sc + 128 − max(I sc ), where max(I sc ) is the maximum gray level of the image I sc . If a pixel value of I h is less than 0 or greater than 255, it is replaced with 0 or 255, respectively.
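The homogenization and shade-correction steps can be sketched as follows. This is an illustrative NumPy/SciPy version; the type of the 20 × 20 background filter is not stated in the text, so a mean filter is assumed here:

```python
import numpy as np
from scipy import ndimage

def homogenize_background(image):
    """Gray-level homogenization: 3x3 mean filter, 3x3 Gaussian
    (sigma = 1.8), subtraction of a background estimate (20x20 mean
    filter, assumed), linear rescaling to 0-255, and the shift
    I_h = I_sc + 128 - max(I_sc) with clipping to [0, 255]."""
    img = image.astype(np.float64)
    smoothed = ndimage.uniform_filter(img, size=3)          # 3x3 mean filter
    # truncate=0.5 limits the Gaussian kernel to a 3x3 window for sigma=1.8
    smoothed = ndimage.gaussian_filter(smoothed, sigma=1.8, truncate=0.5)
    background = ndimage.uniform_filter(smoothed, size=20)  # background estimate
    diff = smoothed - background
    # Linear transform so the shade-corrected image I_sc spans 0-255.
    span = max(diff.max() - diff.min(), 1e-12)
    sc = (diff - diff.min()) / span * 255.0
    # Homogenized image: I_h = I_sc + 128 - max(I_sc), clipped to [0, 255].
    return np.clip(sc + 128.0 - sc.max(), 0, 255).astype(np.uint8)
```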
3.1.3 Enhancement of Vessels
The final pre-processing step for the cSLO image is vessel enhancement, achieved by applying a morphological top-hat transformation. An image containing the bright retinal features, such as the optic disc and any exudates or reflection artefacts, is generated by applying a morphological opening with a disc that is eight pixels in radius. This image is subtracted from the previously generated homogenized image, resulting in the highlighting of the darker structures in the image (i.e., blood vessels, fovea, and any micro-aneurysms or haemorrhages).
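The vessel enhancement step can be sketched as a top-hat transformation with a disc of radius 8 px. Because the vessels are darker than the background, this illustrative SciPy version uses the black top-hat (closing minus image, which is equivalent to a white top-hat applied to the complemented image); the exact variant used by the authors is an assumption:

```python
import numpy as np
from scipy import ndimage

def enhance_vessels(homogenized, disc_radius=8):
    """Morphological top-hat vessel enhancement with a disc of radius
    8 px (per the text). The black top-hat responds strongly where a
    pixel is darker than its morphological surroundings, i.e. at
    vessels, fovea, micro-aneurysms, and haemorrhages."""
    y, x = np.ogrid[-disc_radius:disc_radius + 1, -disc_radius:disc_radius + 1]
    disc = (x * x + y * y) <= disc_radius * disc_radius
    img = homogenized.astype(np.float64)
    # black_tophat(img) = closing(img) - img
    return ndimage.black_tophat(img, footprint=disc)
```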
3.2 Binary Thresholding
An entropy-based thresholding scheme can be used to distinguish vessel segments from the background, as it takes into account the spatial distribution of gray levels; image pixel intensities are not independent of each other. This step was adopted from Chanwimaluang et al. [33]. We calculated a co-occurrence matrix, T, from the pixel values of image I. As we assume 256 gray levels, the size of T is 256 × 256. T contains the structural information of the image, obtained by analyzing consecutive pixels: the entry t_{i,j} contains the number of times the pixel values i and j occur consecutively, either horizontally or vertically. If P and Q are the height and width of the image, T can be formally described as

t_{i,j} = Σ_{m=1..P} Σ_{n=1..Q} δ(m, n),

where δ(m, n) = 1 if either I(m, n) = i and I(m, n + 1) = j, or I(m, n) = i and I(m + 1, n) = j; otherwise, δ(m, n) = 0. The probability of the co-occurrence of pixel values i and j is then calculated as

p_{i,j} = t_{i,j} / (Σ_{i=0..255} Σ_{j=0..255} t_{i,j}).

The threshold value can be any value between 0 and 255, inclusive. For a candidate threshold s with 0 ≤ s ≤ 255, the total probabilities of the foreground quadrant A (gray levels 0, …, s) and the background quadrant C (gray levels s + 1, …, 255) of T are calculated as

P_A(s) = Σ_{i=0..s} Σ_{j=0..s} p_{i,j} and P_C(s) = Σ_{i=s+1..255} Σ_{j=s+1..255} p_{i,j}.

Then, the second-order entropies of the foreground, H_A(s), and the background, H_C(s), are calculated for all s as

H_A(s) = −(1/2) Σ_{i=0..s} Σ_{j=0..s} (p_{i,j}/P_A(s)) log2(p_{i,j}/P_A(s)),
H_C(s) = −(1/2) Σ_{i=s+1..255} Σ_{j=s+1..255} (p_{i,j}/P_C(s)) log2(p_{i,j}/P_C(s)).

The total second-order entropy for all s is

H(s) = H_A(s) + H_C(s).

The final threshold is the s for which H(s) is maximal:

s* = argmax_{0 ≤ s ≤ 255} H(s).
The result of the binary thresholding operation is demonstrated in Fig. 2a.
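The thresholding scheme above can be sketched as follows (illustrative NumPy code; the paper does not specify its software):

```python
import numpy as np

def entropy_threshold(image):
    """Second-order (co-occurrence) entropy thresholding adopted from
    Chanwimaluang et al.: build the 256x256 co-occurrence matrix from
    horizontal and vertical neighbors, then return the threshold s that
    maximizes H_A(s) + H_C(s)."""
    img = image.astype(np.int64)
    t = np.zeros((256, 256), dtype=np.int64)
    # Horizontal co-occurrences: I(m, n) -> I(m, n + 1)
    np.add.at(t, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1)
    # Vertical co-occurrences: I(m, n) -> I(m + 1, n)
    np.add.at(t, (img[:-1, :].ravel(), img[1:, :].ravel()), 1)
    p = t / t.sum()

    best_s, best_h = 0, -np.inf
    for s in range(256):
        a = p[:s + 1, :s + 1]  # foreground quadrant A
        c = p[s + 1:, s + 1:]  # background quadrant C
        pa, pc = a.sum(), c.sum()
        if pa == 0 or pc == 0:
            continue
        an = a[a > 0] / pa  # normalized cell probabilities
        cn = c[c > 0] / pc
        h = -0.5 * np.sum(an * np.log2(an)) - 0.5 * np.sum(cn * np.log2(cn))
        if h > best_h:
            best_s, best_h = s, h
    return best_s
```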
3.3 Post-processing for Removing Falsely Detected Isolated Vessel Pixels
The thresholded image contained a considerable number of small segments that were not connected to other segments and can thus be considered noise. To remove these spurious segments, we applied a method for removing unconnected components. The spurious segments are small, meaning that the total number of pixels inside each of them is relatively low.

Connected regions are built in the image, and all pixels in a region are given the same label. To remove artefacts, the pixel area of each connected region is measured, and each region with an area below a threshold p is reclassified as non-vessel. An image after the removal of all pixels classified as non-vessel is shown in Fig. 2b.
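The post-processing can be sketched with a connected-component labeling step. The area threshold p is not reported in the paper, so the value below is purely illustrative:

```python
import numpy as np
from scipy import ndimage

def remove_small_segments(binary_vessels, min_area=50):
    """Label 8-connected components of the binary vessel map and
    reclassify every component smaller than `min_area` pixels as
    non-vessel. The threshold (50 px) is an illustrative value; the
    paper does not report p."""
    structure = np.ones((3, 3), dtype=bool)  # 8-connectivity
    labels, n = ndimage.label(binary_vessels, structure=structure)
    if n == 0:
        return binary_vessels.astype(bool)
    areas = np.bincount(labels.ravel())  # pixel count per label
    keep = areas >= min_area
    keep[0] = False  # label 0 is the background
    return keep[labels]
```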
4 Results
In order to quantify the algorithmic performance of the proposed method on a fundus image, the resulting segmentation was compared to its corresponding ground truth image. This image was obtained using a manually created vessel mask, in which all vessel pixels are set to one and all non-vessel pixels are set to zero. Thus, automated vessel segmentation performance can be assessed. Algorithmic performance was based on a vessel pixel and non-vessel pixel comparison with the ground truth image and evaluated in terms of sensitivity, specificity, true positive rate, false positive rate, and accuracy.
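The pixel-wise performance measures can be computed from the segmentation and ground truth masks as follows (an illustrative NumPy sketch; note that the sensitivity equals the true positive rate and the false positive rate equals 1 − specificity):

```python
import numpy as np

def segmentation_metrics(predicted, ground_truth):
    """Pixel-wise performance measures: in both masks, vessel pixels
    are 1 (True) and non-vessel pixels are 0 (False)."""
    pred = np.asarray(predicted, dtype=bool)
    gt = np.asarray(ground_truth, dtype=bool)
    tp = np.sum(pred & gt)     # vessel pixels correctly detected
    tn = np.sum(~pred & ~gt)   # background correctly rejected
    fp = np.sum(pred & ~gt)    # background wrongly marked as vessel
    fn = np.sum(~pred & gt)    # vessel pixels missed
    return {
        "sensitivity": tp / (tp + fn),  # = true positive rate
        "specificity": tn / (tn + fp),  # = 1 - false positive rate
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```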
For the algorithm performance on the DRIVE dataset versus the ground truth, the results were a sensitivity of 0.6745, a specificity of 0.9714, a true positive rate of 0.6745, a false positive rate of 0.0286, and an accuracy of 0.9334 (Table 1).
For the first observer versus second observer evaluation, the results were a sensitivity of 0.6579, a specificity of 0.9699, a true positive rate of 0.6579, a false positive rate of 0.0301, and an accuracy of 0.9401. For the algorithm performance versus the first observer evaluation, the results were a sensitivity of 0.7542, a specificity of 0.8607, a true positive rate of 0.7542, a false positive rate of 0.1393, and an accuracy of 0.8508. The best and worst algorithm segmentations are given in Table 2. Individual performance values for all 31 patients are shown in Fig. 3.
5 Discussion
Methods for retinal vessel detection in fundus recordings, regardless of the image acquisition technique and the resulting image, can be classified into rules-based and supervised methods. This study proposed a rules-based method for retinal vessel detection in monochromatic cSLO fundus images. To the best of our knowledge, this is the first approach for automated vessel detection in cSLO images. The cSLO images are pre-processed for gray-level homogenization and blood vessel enhancement, followed by a binary thresholding operation and removal of falsely detected isolated vessel pixels. In the best case, the proposed approach for 31 previously described cSLO images had a sensitivity of 0.7542, a specificity of 0.8607, a true positive rate of 0.7542, a false positive rate of 0.1393, and an accuracy of 0.8508. For the DRIVE color fundus images, an accuracy of 0.9334 was obtained.
A direct comparison of our approach with other retinal vessel segmentation algorithms is possible using the DRIVE dataset. Published vessel segmentation studies on fundus photographs have often used the publicly available datasets DRIVE and STARE. The DRIVE database consists of a total of 40 color fundus photographs. The STARE database [31] contains 20 fundus photographs for blood vessel segmentation, 10 of which contain pathology. Performance values are presented in Table 3 (partly adapted from Fraz et al. [13]). Comparing the performance values for DRIVE with the chosen literature shows no to very small differences in the selected performance values (sensitivity, specificity, and accuracy). The performance values for cSLO images obtained using our approach are lower than those for fundus photographs.
This is the first algorithm for vessel detection in cSLO images, and thus, there is room for further improvement. A possible reason for the performance differences is that the images differed in quality. cSLO may be able to resolve certain structures better than fundus photography does, as the visibility of structures depends on their contrast with their surroundings. However, the total resolution of cSLO is less than that of fundus photography. Furthermore, fundus photography enables separate color channel analysis. In most cases, the green channel is extracted as it provides the best vessel/background contrast in color images (the red channel is the brightest color channel and has the lowest contrast, and the blue channel offers poor dynamic range). Therefore, blood-containing elements in the retina are best represented (i.e., have the highest contrast) in the green channel [48]. A combination of two colors can also be used, for example, by using the red and green channels of a given retinal image to correct non-uniform illumination in color fundus images [49]. This can be optimized even further by using different color spaces such as the red–green–blue (RGB) color space, the hue–saturation–intensity (HSI) color space, or the luminance–in-phase–quadrature (YIQ) color space defined by the National Television System Committee (NTSC) [29]. However, these options do not exist for cSLO images as they are monochromatic, and thus lack potentially relevant segmentation information.
These results add to current knowledge by showing that it is possible to automatically localize retinal blood vessels in monochromatic cSLO images using a rules-based method, although with potentially inferior performance compared to that of fundus photography.
The potential limitations of this study are as follows. First, only one method was presented, so it remains unclear whether the inferior performance results compared to those of fundus photography are due to our approach or the nature of the technology. Second, no comparison between rules-based and supervised methods was presented, and therefore, further studies that consider other approaches are required. Third, we were unable to use a publicly available database for cSLO, as none exists. As a consequence, comparison of future algorithms with our approach is less straightforward. However, we are happy to share our dataset (including the manually segmented images) with other study groups.
The strengths of this study are as follows. First, this is the first approach for monochromatic cSLO images showing that vessel localization is possible using this image acquisition technique. Second, the study used high-quality images (taken by an experienced, trained investigator) and manually segmented images from two observers. Third, the proposed approach was tested on the publicly available dataset DRIVE for better comparison with the literature.
6 Conclusion
This work demonstrated that it is possible to localize vessels in monochromatic cSLO images of the retina using a rules-based approach adopted from color fundus photography approaches, with an accuracy of 0.8508 compared to an inter-observer accuracy of 0.9401. The performance results are inferior to those of fundus photography, which could be due to the nature of the technology. Further studies are needed to evaluate alternative approaches for vessel detection.
Abbreviations
SD-OCT: Spectral-domain optical coherence tomography

cSLO: Confocal scanning laser ophthalmoscopy

RGB: Red–green–blue color space

HSI: Hue–saturation–intensity color space

YIQ: Luminance–in-phase–quadrature color space

NTSC: National Television System Committee
References
Kanski, J. J., & Bowling, B. (2011). Clinical ophthalmology (7th ed.). Edinburgh: Saunders.
Leung, H., Wang, J. J., Rochtchina, E., Tan, A. G., Wong, T. Y., Klein, R., et al. (2003). Relationships between age, blood pressure, and retinal vessel diameters in an older population. Investigative Ophthalmology & Visual Science, 44(7), 2900–2904. doi:10.1167/iovs.02-1114.
Wong, T. Y., Klein, R., Sharrett, A. R., Duncan, B. B., Couper, D. J., Tielsch, J. M., et al. (2002). Retinal arteriolar narrowing and risk of coronary heart disease in men and women. The Atherosclerosis Risk in Communities Study. JAMA, 287(9), 1153–1159.
Teng, T., Lefley, M., & Claremont, P. D. (2002). Progress towards automated diabetic ocular screening: a review of image analysis and intelligent systems for diabetic retinopathy. Medical & Biological Engineering & Computing, 40(1), 2–13. doi:10.1007/BF02347689.
Grisan, E., & Ruggeri, A. (2003). A divide et impera strategy for automatic classification of retinal vessels into arteries and veins. In 25th annual international conference of the IEEE engineering in medicine and biology society (Vol. 1, pp. 890–893). doi:10.1109/IEMBS.2003.1279908.
Haddouche, A., Adel, M., Rasigni, M., Conrath, J., & Bourennane, S. (2010). Detection of the foveal avascular zone on retinal angiograms using Markov random fields. Digital Signal Processing, 20(1), 149–154. doi:10.1016/j.dsp.2009.06.005.
Heneghan, C., Flynn, J., O’Keefe, M., & Cahill, M. (2002). Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis. Medical Image Analysis, 6(4), 407–429. doi:10.1016/S1361-8415(02)00058-0.
Lowell, J., Hunter, A., Steel, D., Basu, A., Ryder, R., & Kennedy, R. L. (2004). Measurement of retinal vessel widths from fundus images based on 2-D modeling. IEEE Transactions on Medical Imaging, 23(10), 1196–1204. doi:10.1109/TMI.2004.830524.
Zana, F., & Klein, J. C. (1999). A multimodal registration algorithm of eye fundus images using vessels detection and Hough transform. IEEE Transactions on Medical Imaging, 18(5), 419–428. doi:10.1109/42.774169.
Li, H., & Chutatape, O. (2004). Automated feature extraction in color retinal images by a model based approach. IEEE Transactions on Biomedical Engineering, 51(2), 246–254. doi:10.1109/TBME.2003.820400.
Mariño, C., Penedo, M. G., Penas, M., & Carreira, M. J. (2006). Personal authentication using digital retinal images. Pattern Analysis and Applications. doi:10.1007/s10044-005-0022-6.
Köse, C., & İki, C. (2011). A personal identification system using retinal vasculature in retinal fundus images. Expert Systems with Applications. doi:10.1016/j.eswa.2011.04.141.
Fraz, M. M., Remagnino, P., Hoppe, A., Uyyanonvara, B., Rudnicka, A. R., Owen, C. G., et al. (2012). Blood vessel segmentation methodologies in retinal images–a survey. Computer Methods and Programs in Biomedicine, 108(1), 407–433. doi:10.1016/j.cmpb.2012.03.009.
White, J. G., & Amos, W. B. (1987). Confocal microscopy comes of age. Nature, 328, 183–184. doi:10.1038/328183a0.
Webb, R. H., Hughes, G. W., & Pomerantzeff, O. (1980). Flying spot TV ophthalmoscope. Applied Optics, 19(17), 2991–2997. doi:10.1364/AO.19.002991.
Webb, R. H., Hughes, G. W., & Delori, F. C. (1987). Confocal scanning laser ophthalmoscope. Applied Optics, 26(8), 1492–1499. doi:10.1364/AO.26.001492.
Woon, W. H., Fitzke, F. W., Bird, A. C., & Marshall, J. (1992). Confocal imaging of the fundus using a scanning laser ophthalmoscope. British Journal of Ophthalmology, 76(8), 470–474.
Webb, R. H., & Delori, F. C. (1988). How we see the retina. In Laser technology in ophthalmology.
Motte, J., Alten, F., Ewering, C., Osada, N., Kadas, E. M., Brandt, A. U., et al. (2014). Vessel labeling in combined confocal scanning laser ophthalmoscopy and optical coherence tomography images: criteria for blood vessel discrimination. PLoS One, 9(9), e102034. doi:10.1371/journal.pone.0102034.
Chutatape, O., Zheng, L., & Krishnan, S. M. (1998). Retinal blood vessel detection and tracking by matched Gaussian and Kalman filters. In 20th annual international conference of the IEEE Engineering in Medicine and Biology Society (Vol. 6, pp. 3144–3149). doi:10.1109/IEMBS.1998.746160.
Tolias, Y. A., & Panas, S. M. (1998). A fuzzy vessel tracking algorithm for retinal images based on fuzzy clustering. IEEE Transactions on Medical Imaging, 17(2), 263–273. doi:10.1109/42.700738.
Can, A., Shen, H., Turner, J. N., Tanenbaum, H. L., & Roysam, B. (1999). Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms. IEEE Transactions on Information Technology in Biomedicine: a Publication of the IEEE Engineering in Medicine and Biology Society, 3(2), 125–138.
Gagnon, L., Lalonde, M., Beaulieu, M., & Boucher, M.-C. (2001). Procedure to detect anatomical structures in optical fundus images. Medical Imaging, 2001(4322), 1218–1225. doi:10.1117/12.430999.
Liu, I., & Sun, Y. (1993). Recursive tracking of vascular networks in angiograms based on the detection-deletion scheme. IEEE Transactions on Medical Imaging, 12(2), 334–341. doi:10.1109/42.232264.
Zhou, L., Rzeszotarski, M. S., Singerman, L. J., & Chokreff, J. M. (1994). The detection and quantification of retinopathy using digital angiograms. IEEE Transactions on Medical Imaging, 13(4), 619–626. doi:10.1109/42.363106.
Walter, T., & Klein, J.-C. (2001). Segmentation of Color Fundus Images of the Human Retina: Detection of the Optic Disc and the Vascular Tree Using Morphological Techniques. In Medical data analysis (Vol. 2199, pp. 282–287). Berlin: Springer. doi: 10.1007/3-540-45497-7_43.
Zana, F., & Klein, J.-C. (2001). Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation. IEEE Transactions on Image Processing, 10(7), 1010–1019. doi:10.1109/83.931095.
Mendonça, A. M., & Campilho, A. (2006). Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction. IEEE Transactions on Medical Imaging, 25(9), 1200–1213.
Fraz, M. M., Barman, S. A., Remagnino, P., Hoppe, A., Basit, A., Uyyanonvara, B., et al. (2012). An approach to localize the retinal blood vessels using bit planes and centerline detection. Computer Methods and Programs in Biomedicine, 108(2), 600–616. doi:10.1016/j.cmpb.2011.08.009.
Chaudhuri, S., Chatterjee, S., Katz, N., Nelson, M., & Goldbaum, M. (1989). Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Transactions on Medical Imaging, 8(3), 263–269. doi:10.1109/42.34715.
Hoover, A., Kouznetsova, V., & Goldbaum, M. (2000). Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Transactions on Medical Imaging, 19(3), 203–210. doi:10.1109/42.845178.
Gang, L., Chutatape, O., & Krishnan, S. M. (2002). Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter. IEEE Transactions on Biomedical Engineering, 49(2), 168–172. doi:10.1109/10.979356.
Chanwimaluang, T., & Fan, G. (2003). An efficient blood vessel detection algorithm for retinal images using local entropy thresholding. In Proceedings of the 2003 IEEE International Symposium on Circuits and Systems (ISCAS) (Vol. 5, pp. V-21–V-24). IEEE. doi:10.1109/ISCAS.2003.1206162.
Al-Rawi, M., & Karajeh, H. (2007). Genetic algorithm matched filter optimization for automated detection of blood vessels from digital retinal images. Computer Methods and Programs in Biomedicine, 87(3), 248–253. doi:10.1016/j.cmpb.2007.05.012.
Al-Rawi, M., Qutaishat, M., & Arrar, M. (2007). An improved matched filter for blood vessel detection of digital retinal images. Computers in Biology and Medicine, 37(2), 262–267. doi:10.1016/j.compbiomed.2006.03.003.
Cinsdikici, M. G., & Aydın, D. (2009). Detection of blood vessels in ophthalmoscope images using MF/ant (matched filter/ant colony) algorithm. Computer Methods and Programs in Biomedicine, 96(2), 85–95. doi:10.1016/j.cmpb.2009.04.005.
Jiang, X., & Mojon, D. (2003). Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(1), 131–137. doi:10.1109/TPAMI.2003.1159954.
McInerney, T., & Terzopoulos, D. (2000). T-snakes: Topology adaptive snakes. Medical Image Analysis, 4(2), 73–91. doi:10.1016/S1361-8415(00)00008-6.
Espona, L., Carreira, M. J., Ortega, M., & Penedo, M. G. (2007). A snake for retinal vessel segmentation. In Pattern recognition and image analysis (Vol. 4478, pp. 178–185). Berlin: Springer. doi:10.1007/978-3-540-72849-8_23.
Gardner, G. G., Keating, D., Williamson, T. H., & Elliott, A. T. (1996). Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool. The British Journal of Ophthalmology, 80(11), 940–944. doi:10.1136/bjo.80.11.940.
Sinthanayothin, C., Boyce, J. F., Cook, H. L., & Williamson, T. H. (1999). Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images. British Journal of Ophthalmology, 83(8), 902–910.
Niemeijer, M., Staal, J., van Ginneken, B., Loog, M., & Abràmoff, M. D. (2004). Comparative study of retinal vessel segmentation methods on a new publicly available database. Proceedings of SPIE Medical Imaging, 5370, 648–656. doi:10.1117/12.535349.
Staal, J., Abràmoff, M. D., Niemeijer, M., Viergever, M. A., & van Ginneken, B. (2004). Ridge-based vessel segmentation in color images of the retina. IEEE Transactions on Medical Imaging, 23(4), 501–509. doi:10.1109/TMI.2004.825627.
Soares, J. V. B., Leandro, J. J. G., Cesar Júnior, R. M., Jelinek, H. F., & Cree, M. J. (2006). Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Transactions on Medical Imaging, 25(9), 1214–1222.
Ricci, E., & Perfetti, R. (2007). Retinal blood vessel segmentation using line operators and support vector classification. IEEE Transactions on Medical Imaging, 26(10), 1357–1365. doi:10.1109/TMI.2007.898551.
Marin, D., Aquino, A., Gegundez-Arias, M. E., & Bravo, J. M. (2011). A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Transactions on Medical Imaging, 30(1), 146–158. doi:10.1109/TMI.2010.2064333.
Pal, N. R., & Pal, S. K. (1989). Entropic thresholding. Signal Processing, 16(2), 97–108. doi:10.1016/0165-1684(89)90090-X.
Walter, T., Massin, P., Erginay, A., Ordonez, R., Jeulin, C., & Klein, J.-C. (2007). Automatic detection of microaneurysms in color fundus images. Medical Image Analysis, 11(6), 555–566. doi:10.1016/j.media.2007.05.001.
Kande, G. B., Subbaiah, P. V., & Savithri, T. S. (2010). Unsupervised fuzzy based vessel segmentation in pathological digital fundus images. Journal of Medical Systems, 34(5), 849–858. doi:10.1007/s10916-009-9299-0.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
Kromer, R., Shafin, R., Boelefahr, S. et al. An Automated Approach for Localizing Retinal Blood Vessels in Confocal Scanning Laser Ophthalmoscopy Fundus Images. J. Med. Biol. Eng. 36, 485–494 (2016). https://doi.org/10.1007/s40846-016-0152-x