ORIGINAL ARTICLE

Year: 2019 | Volume: 44 | Issue: 1 | Page: 21-26
Extraction of retinal blood vessels on fundus images by Kirsch's template and Fuzzy C-Means
T Jemima Jebaseeli1, C Anand Deva Durai2, J Dinesh Peter1
1 Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India; 2 Department of Computer Science and Engineering, King Khalid University, Abha, Saudi Arabia
Date of Submission: 07-May-2018
Date of Decision: 15-Jan-2019
Date of Acceptance: 28-Jan-2019
Date of Web Publication: 11-Mar-2019
Correspondence Address: Prof. T Jemima Jebaseeli, Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/jmp.JMP_51_18
Abstract
Purpose: Accurate segmentation of retinal blood vessels is an important task in computer-aided diagnosis and surgery planning for diabetic retinopathy. Despite the high resolution of fundus photographs, the contrast between the blood vessels and the retinal background tends to be poor. Materials and Methods: In the proposed method, contrast-limited adaptive histogram equalization is used to suppress noise and improve the local contrast of the image; by distributing the gray values uniformly, it enhances the image and makes hidden features more visible. The extraction of the retinal blood vessels relies on two levels of optimization. The first level extracts blood vessels from the retinal image using Kirsch's templates. The second level finds the coarse vessels with the aid of the unsupervised Fuzzy C-Means clustering method. After segmentation, the region-based active contour method is used to remove the optic disc. The proposed system is evaluated on the DRIVE dataset of 40 images. Results: The performance of the proposed approach is comparable with state-of-the-art techniques. The proposed technique outperforms the existing techniques by achieving an accuracy of 99.55%, sensitivity of 71.83%, and specificity of 99.86% in the experimental setup. Conclusion: The results show that this approach is a suitable alternative to supervised methods and that it can be applied to similar fundus image datasets.
Keywords: Diabetic retinopathy, fundus image, Fuzzy C-Means, Kirsch template, medical imaging, retinal blood vessel segmentation
How to cite this article: Jebaseeli T J, Durai C A, Peter J D. Extraction of retinal blood vessels on fundus images by Kirsch's template and Fuzzy C-Means. J Med Phys 2019;44:21-6
Introduction
Diabetic retinopathy (DR) is a condition resulting from chronically high blood sugar, in which the delicate blood vessels lining the inside of the retina are damaged and begin to leak, distorting vision. More than two-thirds of the global increase in blindness and visual impairment over the previous two decades has been attributed to diabetes. The report released by the International Diabetes Federation [1] at the World Diabetes Congress held in Vancouver in December 2015 stated that the United States has the highest rate of diabetes among 38 developed nations. Poor control of glucose levels and lack of access to eye health services in many parts of the world are thought to contribute to this increase. The anatomical structure of the retina comprises the optic disc, the retinal vasculature, the macula, and the fovea. The fovea, at the center of the macula, is responsible for sharp central vision. The retina converts light into signals that are transmitted to the brain, and the retinal blood vessels, which enter and leave at the optic disc alongside the optic nerve, supply this tissue. Retinal disorders arise when the dimensions of these blood vessels change. In the most advanced stage, new abnormal blood vessels grow, harming the retina and leading to permanent scarring and vision impairment or blindness. Ruptured vessels deposit undesirable material in and around the macula, such as exudates, lesions, microaneurysms, hemorrhages, and cotton wool spots, and can cause retinal vein occlusion. Manual evaluation of the retinal blood vessels is impractical because the estimation of vascular width is particularly critical. One possible solution is to analyze fundus images with a computer-based system; an automated disease detection system significantly decreases the load on ophthalmologists.
Retinal blood vessel segmentation is an essential tool for recognizing any changes that occur in the blood vessels, and it provides information about the location of the vessels. Automatically generated retinal maps are also used in the treatment of age-related macular degeneration.
Methods for retinal vessel segmentation can be categorized into two main groups: supervised and unsupervised. Supervised approaches learn a model that decides whether a pixel belongs to a vessel with the help of manually labeled data. Supervised methods are more expensive than unsupervised techniques because they require the extraction of many types of features and the training of complex classifiers on a large amount of data. Related research in this field is presented as follows.
Azzopardi et al.[2] designed the B-COSFIRE filter, whose performance depends on the values of its parameters. Geetha Ramani and Balasubramanian [3] applied principal component analysis (PCA) to generate the feature vector and then ran K-means clustering on the result to group pixels into vessel and nonvessel clusters; classification on the vessel cluster increased the accuracy but diminished the sensitivity in some cases. Wang et al.[4] combined two prevalent classifiers, a convolutional neural network (CNN) and a random forest (RF); by coordinating the benefits of feature learning and a traditional classifier, their strategy can automatically learn features from the raw images and predict the vessel patterns.
Sreejini and Govindan [5] employed particle swarm optimization (PSO)-based parameter selection to choose the correct values for the parameters of a multiscale matched filter; the results of the multiscale matched filter are superior to those of a single-scale matched filter for vessel segmentation. Sil Kar and Maity [6] used the maximum matched filter response and fuzzy conditional entropy to locate multiple thresholds, whose optimal values are searched for using the differential evolution algorithm.
Singh and Srivastava [7] used PCA and contrast-limited adaptive histogram equalization (CLAHE) for preprocessing; in postprocessing, entropy-based optimal thresholding and length filtering are used. Panda et al.[8] computed a binary Hausdorff symmetry measure and an edge-distance seeded region growing algorithm for retinal vessel segmentation. However, for thin blood vessels the intensity variation between vessel and background is minimal; the region growing method is edge-dependent, and the finest blood vessels are sometimes missed because of the absence of edges. Zhang et al.[9] designed their algorithm specifically to extract minor vessel components and thereby increase the sensitivity of segmentation; consequently, more non-vessel elements from the background may be recognized as vessels, which leads to a decrease in specificity and accuracy.
Roychowdhury et al.[10] introduced a set of eight features to discriminate between vessel and non-vessel pixels. A Gaussian mixture model (GMM) classifier with two Gaussians is used to find the fine vessel pixels. This algorithm is less reliant on training data.
Christodoulidis et al.[11] put forward a methodology based on a multi-scale tensor voting framework (MTVF) combined with multi-scale line detection to overcome the limitations of line detection in handling the smallest vessels. Even though this strategy achieved higher performance, a few difficulties remain to be addressed. In the final result, the segmented small vessels are somewhat translated from their ground truth counterparts, and their reconstructed widths do not exactly coincide in some locations with the corresponding vessels in the manually segmented image. Besides, small vessel terminal points are overestimated. Furthermore, the MTVF reconnects vessel-like false-positive (FP) structures to the main vasculature while it can miss real vessels lying beyond junctions. The diameter obtained from the larger vessel is used without considering local information such as the neighborhood saliency from tensor voting, which gives rise to FPs and to an equal reduction in the true positive (TP) counts.
Aslani and Sarnel [12] adopted a hybrid feature vector to fuse supportive and complementary local information, using 13 Gabor features; reducing this number causes a small degradation in accuracy. Zhu et al.[13] developed a set of 39 discriminative feature vectors for the fundus image; the output of the classifier is the binary retinal vascular segmentation, and because the model only learns the hidden output weights, training is extremely fast. When the retina of a patient is examined to diagnose retinal disease, the retinal blood vessels are found to have low contrast with respect to their background, which makes diagnosis difficult. Therefore, it is necessary to apply a suitable image segmentation technique for precise detection of the retinal blood vessels. The main aim of this work is to enhance the quality of the segmented image.
Materials and Methods
The proposed approach takes the input image from a fundus camera. The input image is converted to the green channel as the first step of preprocessing, and CLAHE is then applied to enhance its local contrast. Extraction of blood vessels from the retinal image is performed using Kirsch's templates in the first level. To carry out the second level of optimization, Fuzzy C-Means (FCM) is used to find the coarse vessels. To remove the optic disc from the segmented image, the region-based active contour method is used. The output image shows the segmented retinal blood vessels. The schematic diagram of the proposed blood vessel segmentation system is shown in [Figure 1].

Figure 1: Schematic diagram of the proposed retinal blood vessel segmentation method
The system takes retinal images from the publicly accessible DRIVE database.[14] The photographs of the DRIVE database were acquired from a DR screening program in the Netherlands. The screened population comprised 400 diabetic patients aged between 25 and 90 years. The images were captured with a Canon CR5 non-mydriatic 3CCD camera with a 45° field of view, using 8 bits per color plane at 768 × 584 pixels.
In the initial stage, CLAHE is applied for band selection, brightness correction, and denoising of the image. There are two levels of segmentation. The first level is the extraction of blood vessels from the retinal image using Kirsch's templates. The second level is performed with the unsupervised FCM clustering technique to locate the coarse vessels. After segmentation, the optic disc is removed from the segmented image by the region-based active contour method. The output is a segmented image that is further used to determine the stage of the disease.
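As a minimal sketch of this preprocessing stage (not the authors' exact implementation), the Python snippet below extracts the green channel and applies CLAHE with OpenCV; the clip limit and tile grid size are illustrative assumptions, since the paper does not report the CLAHE parameters used.

```python
import cv2

def preprocess_fundus(image_path, clip_limit=2.0, tile_grid=(8, 8)):
    """Green-channel extraction followed by CLAHE.

    clip_limit and tile_grid are illustrative defaults, not values
    reported in the paper.
    """
    bgr = cv2.imread(image_path)          # OpenCV loads color images as BGR
    green = bgr[:, :, 1]                  # green channel gives the best vessel contrast
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(green)             # locally contrast-enhanced, denoised image
```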
Kirsch's template
Edge detection identifies pixels at which the intensity changes sharply. The edge information around a target pixel is examined by comparing the brightness of its neighborhood pixels; if there is no significant difference in brightness, an edge is unlikely at that location. Kirsch's template is used here for the extraction of blood vessels from retinal images. It uses a single 3 × 3 mask that is rotated in 45° increments through each of the eight compass directions.

The edge magnitude of the Kirsch operator is taken as the maximum response across all directions. The mask encodes the relationship between a pixel and its neighbors, so the algorithm detects both the edge and its direction. Accordingly, there are eight possible edge directions, namely south, east, north, west, northeast, southeast, southwest, and northwest, as shown in [Figure 2]. Kirsch's template allows the threshold values to be set and adjusted to obtain the most appropriate edges in the images. It works well for images with a clear distinction between foreground and background, and in fundus images the retinal blood vessels constitute the foreground of interest.
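The following sketch illustrates how such an eight-direction Kirsch response could be computed; the base coefficients are the standard Kirsch compass kernel, and the thresholding step mentioned above is left out as a dataset-dependent choice.

```python
import numpy as np
from scipy.ndimage import convolve

# Standard Kirsch compass kernel (north); the other seven directions are
# obtained by rotating the ring of outer coefficients in 45-degree steps.
KIRSCH_BASE = np.array([[ 5,  5,  5],
                        [-3,  0, -3],
                        [-3, -3, -3]])

def kirsch_kernels():
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    values = [KIRSCH_BASE[r, c] for r, c in ring]
    kernels = []
    for shift in range(8):                          # eight compass directions
        k = np.zeros((3, 3), dtype=int)
        for i, (r, c) in enumerate(ring):
            k[r, c] = values[(i - shift) % 8]
        kernels.append(k)
    return kernels

def kirsch_edge_magnitude(image):
    """Edge magnitude = maximum response over the eight directional masks."""
    img = image.astype(np.float64)
    responses = [convolve(img, k, mode='nearest') for k in kirsch_kernels()]
    return np.max(np.stack(responses), axis=0)
```

Thresholding this magnitude map with a value tuned to the dataset would then yield the binary edge map that feeds the procedure described next.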
The procedure of Kirsch's template is described as follows:
- Detection: Apply Kirsch's template to the input retinal image and test the edge-detection condition at each pixel. If the condition is satisfied, processing continues
- False edge removal: If the condition is not fulfilled, the pixel is discarded and the algorithm does not proceed for it
- Vessel junction restoration: Fix broken junctions introduced by Kirsch's template. At a broken junction, track the direction of the vessel and extend it in the opposite direction for a certain length. If another vessel is found, bridge the gap and reestablish the vessel junction
- Vessel labeling: A typical vessel is delimited by two parallel edges, and vessel labeling fills the interior pixels between them. The challenging task is to differentiate the area within a vessel from the area between two different vessels that run parallel to each other.
Fuzzy C-Means
FCM is an unsupervised clustering algorithm that has been applied to a wide range of problems, including feature analysis, clustering, and classifier design. As shown in [Figure 3], the segmented retinal blood vessel image is represented in the feature space, and the FCM algorithm classifies the image by grouping similar data points into clusters.
Clustering is achieved by iteratively minimizing a cost function that depends on the distance of the pixels to the cluster centers in the feature domain. The pixels of an image are highly correlated, and pixels in the immediate neighborhood possess nearly the same feature values. Therefore, the spatial relationship of neighboring pixels is an important characteristic that is of great help in image segmentation.

The FCM objective function and its update rules are

J_m = \sum_{i=1}^{c} \sum_{k=1}^{n} \mu_{ik}^{m} \, \lVert x_k - v_i \rVert^2, \quad 1 \le m < \infty

\mu_{ik} = \frac{1}{\sum_{j=1}^{c} \left( \lVert x_k - v_i \rVert / \lVert x_k - v_j \rVert \right)^{2/(m-1)}}, \qquad v_i = \frac{\sum_{k=1}^{n} \mu_{ik}^{m} x_k}{\sum_{k=1}^{n} \mu_{ik}^{m}}
The FCM algorithm assigns pixels to each category using fuzzy memberships. Let X = {x1, x2, …, xn} denote an image with n pixels to be partitioned into c clusters, and let {v1, v2, …, vc} denote the cluster centers in the feature space. The algorithm iteratively minimizes the cost function above, where μik is the membership of pixel k in cluster i, vi is the center of cluster i, and m is a constant that controls the fuzziness of the resulting partition; m = 2 is used. The cost function is minimized when pixels close to the centroid of their cluster are assigned high membership values, while low membership values are assigned to pixels far away from the centroid. The membership function thus represents the probability that a pixel belongs to a specific cluster.
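A plain NumPy sketch of these update rules is given below; it clusters pixel intensities into c = 2 groups (vessel versus background) with m = 2, as stated above. The random initialization and the tolerance value are implementation choices, not details taken from the paper.

```python
import numpy as np

def fuzzy_c_means(pixels, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """FCM on a 1-D vector of pixel intensities (vessel vs. background)."""
    rng = np.random.default_rng(seed)
    x = pixels.reshape(-1, 1).astype(np.float64)      # n x 1 feature matrix
    u = rng.random((c, x.shape[0]))
    u /= u.sum(axis=0, keepdims=True)                 # memberships of each pixel sum to 1

    for _ in range(max_iter):
        um = u ** m
        v = (um @ x) / um.sum(axis=1, keepdims=True)  # cluster centers v_i
        d = np.abs(x.T - v) + 1e-10                   # c x n distances ||x_k - v_i||
        w = d ** (-2.0 / (m - 1.0))
        u_new = w / w.sum(axis=0, keepdims=True)      # membership update mu_ik
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return u, v                                       # memberships and centers
```

For a 2-D image, `pixels` would be the flattened enhanced image; assigning each pixel to the cluster with the higher membership and reshaping back gives the coarse vessel map.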
Region-based active contour
In the region-based approach, shape representation requires segmenting the image into several homogeneous regions. The fundamental idea of active contour models, or snakes, is to evolve a curve, subject to constraints from a given image, in order to delineate the objects in that image; the curve moves along its interior normal and should stop on the object boundary. In traditional snakes and active contour models, an edge detector that depends on the image gradient is used to stop the evolving curve on the boundary of the desired object. Removal of the optic disc is done by the following steps:
- First, select the initial mask on the segmented input image and apply the region-based active contour to obtain a mask of the retinal image
- Subtract this mask from the input image, which results in a new mask
- Then subtract the new mask from the old mask; the resulting image is the non-masked segmented image (a sketch of these steps is given below).
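The sketch below interprets these steps with scikit-image's morphological Chan-Vese contour standing in for the region-based active contour; the iteration count, the checkerboard initialization, and the reading of the two subtractions as removing the disc region from the vessel map are assumptions rather than details given by the authors.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def remove_optic_disc(vessel_map, enhanced, n_iter=100):
    """Remove the optic-disc response from a binary vessel map.

    morphological_chan_vese is used as a stand-in region-based active
    contour; n_iter and the checkerboard level set are assumptions.
    """
    # Evolve a region-based contour on the enhanced image to obtain a mask
    # that covers the bright optic-disc region.
    disc_mask = morphological_chan_vese(enhanced.astype(float), n_iter,
                                        init_level_set='checkerboard').astype(bool)
    vessels = vessel_map.astype(bool)
    # Subtracting the disc mask from the segmented image leaves the
    # non-masked (disc-free) vessel structure.
    return vessels & ~disc_mask
```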
Validation
Of the 40 images in the DRIVE dataset, 33 show no indications of pathology and 7 show indications of DR. The pathological images contain microaneurysms, hemorrhages, exudates, and cotton wool spots. Among these 7 images, 3 are in the training set and 4 are in the test set. The results were obtained for all 40 images of the DRIVE database. For the training set images, a single manual segmentation of the vasculature is available.
The vascular tree structures obtained for the input images are shown in [Figure 4]. The segmented image compared against the ground truth blood vessel map is shown in [Figure 5].
Overlapping boundary vessels are also distinguished and associated with their corresponding vessel lines. Finally, all other non-vessel pixels are excluded to form the vessel structure. The overall performance of the proposed algorithm is better than that of the existing methods.
The region-based active contour method is used to apply a mask on the optic disc of the retina and to obtain the active contours, which delineate the retinal blood vessels. The outer boundary region is eliminated in the final segmented retinal blood vessel images, as shown in [Figure 6].

Figure 6: Removing the optic disc from the segmented image using ROI for the best case
Three measures are calculated for evaluating the segmentation performance of the proposed algorithm: sensitivity, specificity, and accuracy. These measures are computed individually for each image.

Sensitivity = TP / (TP + FN)

Specificity = TN / (TN + FP)

Accuracy = (TP + TN) / (TP + TN + FP + FN)

TP (true positive) counts pixels classified as vessel that are genuine vessel pixels in the ground truth image. FN (false negative) counts pixels classified as non-vessel although they are vessel pixels in the ground truth image. TN (true negative) counts pixels classified as non-vessel that are background pixels in the ground truth image. FP (false positive) counts pixels classified as vessel even though they are background pixels in the ground truth image. Sensitivity is the proportion of correctly classified vessel pixels, specificity is the proportion of correctly classified background pixels, and accuracy is the proportion of correctly classified vessel and background pixels.
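These definitions translate directly into a short routine, assuming `pred` and `truth` are binary vessel masks of the same size:

```python
import numpy as np

def vessel_metrics(pred, truth):
    """Pixel-wise sensitivity, specificity, and accuracy for binary vessel maps."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)        # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)      # background pixels correctly rejected
    fp = np.sum(pred & ~truth)       # background pixels labelled as vessel
    fn = np.sum(~pred & truth)       # vessel pixels that were missed
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```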
The proposed technique outperforms the existing techniques, achieving an accuracy of 99.55%, sensitivity of 71.83%, and specificity of 99.86%. [Table 1] summarizes the performance of the various strategies proposed in the literature in terms of these standard measures. Qualitatively, the proposed methodology segments most of the vessels without retaining any disconnected background noise, indicating that its performance is better than that of the other methods, which show lower performance on the DRIVE dataset.

Table 1: Performance comparison of the proposed methodology with existing methods on the DRIVE database
Discussion
A number of image analysis techniques have been developed to identify and characterize specific patterns for the automatic diagnosis of DR. It is critical in the clinical investigation to precisely distinguish the retinal blood vessels when analyzing the severity level of DR. The proposed system does not fail to recognize the minor blood vessels that are hidden by the depigmentation caused by the infected particles. Normally, it is hard to balance sensitivity and specificity: increasing sensitivity tends to reduce specificity and, in turn, may change the overall accuracy. In reality, the tiny vessels have very low contrast against the background, so if an algorithm is specifically designed to extract tiny vessel elements and thereby increase the sensitivity of segmentation, more non-vessel elements from the background may be detected as vessels, which reduces specificity and accuracy. We report an average sensitivity of 71.83%.
Achieving good performance in segmenting thin vessels suggests that it is better to improve the overall accuracy by pursuing higher sensitivity while sacrificing only a small fraction of specificity. The configuration of the B-COSFIRE filter needs to be adjusted for various patterns, such as vessels, bifurcations, and crossover points at different scales.[2] There are errors in the reproduced width of small vessels, which gives rise to FPs and to an equivalent reduction in the TP counts, and there is significant difficulty in reconnecting vessel segments located beyond junctions.[11] The classifier of Singh and Srivastava was trained on 300,000 samples chosen randomly from the DRIVE dataset,[7] whereas the proposed model does not need any training. The CNN fails to capture some fine vessels around the optic disc.[4] From [Figure 4], we can observe that the method fails to distinguish the small vessels and that false detections near the optic disc are more frequent in (b) and (c).[5] Since the weak segmentation operators extract mainly the centerlines of vessels, the fused final segmentation result contains numerous FN classifications.[6]
The region growing method is edge-dependent, and because of the variation in image intensity between vessel and background, the finest blood vessels are sometimes missed owing to the absence of edges.[8] The accuracy of the RF relies on every classifier and their correlations; when the individual classifiers have low correlation, the ensemble is unfit for classification and regression on images with strong noise.[13]
Conclusions
It is essential in the clinical examination to accurately distinguish the retinal blood vessels when analyzing the severity level of DR. Many algorithms are unable to distinguish the retinal blood vessels in depigmented pathological retinal images. The experimental results were obtained for all 40 images of the DRIVE database and demonstrate that the proposed Kirsch's template with FCM clustering detects the blood vessels precisely; the minor vessels are distinguished without any discontinuities. CLAHE removes the noise present in the depigmented retinal images. By providing a segmented vessel structure for analysis, the proposed method reduces the work of ophthalmologists in examining the blood vessels of patients with DR. The proposed retinal vessel segmentation technique can be applied to similar datasets. This work can be extended by quantifying the tortuosity and fractal dimension of blood vessels in chronic obstructive fundus images.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
1.
2. Azzopardi G, Strisciuglio N, Vento M, Petkov N. Trainable COSFIRE filters for vessel delineation with application to retinal images. Med Image Anal 2015;19:46-57.
3. Geetha Ramani R, Balasubramanian L. Retinal blood vessel segmentation employing image processing and data mining techniques for computerized retinal image analysis. Biocybern Biomed Eng 2016;36:102-18.
4. Wang S, Yin Y, Cao G, Wei B, Zheng Y, Yang G. Hierarchical retinal blood vessel segmentation based on feature and ensemble learning. Neurocomputing 2015;149:708-17.
5. Sreejini KS, Govindan VK. Improved multiscale matched filter for retina vessel segmentation using PSO algorithm. Egypt Inform J 2015;6:253-60.
6. Sil Kar S, Maity SP. Retinal blood vessel extraction using tunable bandpass filter and fuzzy conditional entropy. Comput Methods Programs Biomed 2016;133:111-32.
7. Singh NP, Srivastava R. Retinal blood vessels segmentation by using Gumbel probability distribution function based matched filter. Comput Methods Programs Biomed 2016;129:40-50.
8. Panda R, Puhan NB, Panda G. New binary Hausdorff symmetry measure based seeded region growing for retinal vessel segmentation. Biocybern Biomed Eng 2016;36:119-29.
9. Zhang L, Fisher M, Wang W. Retinal vessel segmentation using multi-scale textons derived from keypoints. Comput Med Imaging Graph 2015;45:47-56.
10. Roychowdhury S, Koozekanani DD, Parhi KK. Blood vessel segmentation of fundus images by major vessel extraction and subimage classification. IEEE J Biomed Health Inform 2015;19:1118-28.
11. Christodoulidis A, Hurtut T, Tahar HB, Cheriet F. A multi-scale tensor voting approach for small retinal vessel segmentation in high resolution fundus images. Comput Med Imaging Graph 2016;52:28-43.
12. Aslani S, Sarnel H. A new supervised retinal vessel segmentation method based on robust hybrid features. Biomed Signal Process Control 2016;30:1-12.
13. Zhu C, Zou B, Zhao R, Cui J, Duan X, Chen Z, et al. Retinal vessel segmentation in colour fundus images using extreme learning machine. Comput Med Imaging Graph 2017;55:68-77.
14.
15. Hassanien AE, Emary E, Zawbaa HM. Retinal blood vessel localization approach based on bee colony swarm optimization, fuzzy c-means and pattern search. J Vis Commun Image Represent 2015;31:186-96.