Mini Review - Stem Cell Research and Regenerative Medicine (2023) Volume 6, Issue 2

Impact of Retinal Vessel Image Coherence on Retinal Blood Vessel Segmentation

Brijesh Singh*

Department of Stem Cell and Research, India

*Corresponding Author:
Brijesh Singh
Department of Stem Cell and Research, India
E-mail: ahjshfj@gmail.com

Received: 01-Apr-2023, Manuscript No. srrm-23-95814; Editor assigned: 04-Apr-2023, Pre-QC No. srrm-23-95814 (PQ); Reviewed: 18-Apr-2023, QC No. srrm-23-95814; Revised: 22-Apr-2023, Manuscript No. srrm-23-95814 (R); Published: 28-Apr-2023, DOI: 10.37532/srrm.2023.6(2).21-23

Abstract

Retinal vessel segmentation is critical for detecting retinal blood vessels in a variety of eye disorders, and a reliable computerized method is required for automatic eye-disorder screening. Many retinal blood vessel segmentation methods have been implemented, but they achieve good accuracy while lacking sensitivity because they do not account for the coherence of the retinal blood vessels. Another major cause of low sensitivity is the lack of a proper technique for handling the low, varying contrast of the vessels. In this study, we propose a five-step technique for assessing the impact of retinal blood vessel coherence on retinal blood vessel segmentation. The first four steps form the pre-processing module: the first step handles retinal image acquisition; the second addresses uneven illumination and noise using morphological operations; the third converts the image to grayscale using principal component analysis (PCA); and the fourth, which is the main contribution, enhances the coherence of the retinal blood vessels using anisotropic diffusion filtering, with different schemes tested to obtain the most coherent image from the optimized anisotropic diffusion filter. The final step applies a double threshold with morphological image reconstruction to produce the segmented vessel image. The performance of the proposed method is validated on the publicly available DRIVE and STARE databases. Sensitivity values of 0.811 and 0.821 on STARE and DRIVE, respectively, meet or surpass existing methods, and accuracy values of 0.961 and 0.954 on STARE and DRIVE are comparable to existing methods. This new method for retinal blood vessel segmentation can help medical experts diagnose eye disease and recommend treatment in a timely manner.

Keywords

Retinal fundus image • Fundus photography • Segmentation • Coherence • Optimized anisotropic diffusion filtering

Introduction

The most prevalent eye disorders include age-related macular degeneration, glaucoma, and diabetic retinopathy (DR). These disorders largely arise from damage to the blood vessels of the light-sensitive membrane known as the retina. Rapid progression of DR, in particular, can result in permanent vision loss, driven by two main factors: hyperglycemia and hypertension [1]. Global statistics estimate that approximately 30 million people worldwide will be affected by DR by 2030. Macular degeneration, on the other hand, is a major cause of vision loss in developed countries, affecting approximately one in every seven people over the age of 50. Simply put, untreated eye disorders can result in serious complications such as sudden vision loss. Early detection, treatment, and consultation with an ophthalmologist are critical for avoiding serious eye disorders [2]. It has recently been documented that early disease detection and prompt treatment with proper follow-up procedures can prevent 95% of vision-loss cases. For this purpose, one computerized technique for identifying these progressive disorders is analysis of the retinal image [3].

The fundus camera has two configurations of operation: fundus fluorescein angiography (FFA) and digital color fundus imaging. The FFA configuration involves injecting fluorescein, a dye that fluoresces brightly under the camera's excitation light, into a vein of the patient. The path of the dye through the vessels is brightened, facilitating examination of blood flow in the retinal vessel network [4]. It produces a high-contrast image and gives the expert ophthalmologist a better view for analyzing the vessels. However, the FFA configuration takes time, and it is challenging for the specialist to provide timely analysis, which slows down the process. The digital color fundus image configuration supports computerized methods that perform segmentation automatically. It can lower both the amount of manual labor required and the cost of the inspection process. Reliable vessel segmentation is a hard problem, and a computerized process based on color fundus image analysis allows for rapid analysis and processing [5].

This paper’s research goal is to evaluate the impact of vessel contrast on retinal blood vessel segmentation. Analysis of color retinal fundus images is difficult due to the low, varying contrast and irregular illumination of the vessels against their background. The proposed method can stand in for the FFA analysis process, and the dependence on FFA can be reduced by using contrast-normalization filtering such as the image coherence method. Our proposed method consists of several stages [6]. The first stage converts the retinal fundus color image into its three RGB channels (red, green, and blue). The second stage uses compound morphological techniques to eliminate uneven illumination and noise. The third stage applies a PCA-based technique to obtain a well-contrasted grayscale image. Because the blood vessels are still not properly coherent at this point, the fourth stage contains the main contribution of this work: we apply different anisotropic oriented-diffusion filter schemes to obtain a well-coherent image. The fifth stage is post-processing, which produces a well-segmented image based on our proposed image-reconstruction technique [7].
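The fifth stage, a double threshold followed by morphological reconstruction, can be sketched as a hysteresis-style rule: strong pixels seed the vessel map, and weak pixels are kept only when connected to a seed. This is a minimal sketch under stated assumptions; the paper does not give its exact thresholds or reconstruction operator, so `t_low`, `t_high`, and the 4-connectivity rule here are hypothetical.

```python
import numpy as np
from scipy import ndimage

def double_threshold_reconstruct(vesselness: np.ndarray,
                                 t_low: float, t_high: float) -> np.ndarray:
    """Double threshold with morphological-reconstruction-style cleanup.

    Pixels >= t_high seed the vessel map; pixels >= t_low survive only if
    their connected component touches at least one seed pixel.
    """
    strong = vesselness >= t_high
    weak = vesselness >= t_low
    # Label connected components of the weak map (default 4-connectivity)
    # and keep only the components that contain a strong seed.
    labels, _ = ndimage.label(weak)
    keep = np.unique(labels[strong])
    return np.isin(labels, keep[keep > 0])
```

An isolated weak response (noise) is discarded, while a faint vessel tail attached to a strong response is retained, which is the behaviour that preserves small vessels after the coherence step.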

Materials and Method

Retinal image to RGB channel conversion

The fundus camera is used for fundus photography; it magnifies views of the interior of the retina with the help of a lens. The camera, which photographs the inside of the eye, is built from standard low-power microscope optics and a camera sensor. The retina comprises the posterior pole, the macula, and the optic disc. The fundus camera captures retinal fundus images using the imaging principle of separating illumination from the reflectance of the retinal surface [8]. After image acquisition, the first stage of the proposed model divides the retinal fundus image into its RGB color channels. Processing the full color image requires additional computation time, so to reduce it the best option is to convert the retinal image into separate RGB channels. Analysis shows that the RGB channels suffer from variable low contrast and noise, so uneven illumination must be removed. The process of removing noise and uneven lighting is explained in the following section [9].
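The channel-splitting stage above amounts to slicing an H x W x 3 array into three single-channel planes. A minimal sketch (the function name and validation are our own; the paper only specifies the split itself):

```python
import numpy as np

def split_rgb_channels(fundus_rgb: np.ndarray):
    """Split an H x W x 3 fundus image into its red, green, and blue planes.

    Each plane can then be processed independently, which is cheaper than
    operating on the full color image.
    """
    if fundus_rgb.ndim != 3 or fundus_rgb.shape[2] != 3:
        raise ValueError("expected an H x W x 3 RGB image")
    red, green, blue = (fundus_rgb[..., i] for i in range(3))
    return red, green, blue
```

In many vessel-segmentation pipelines the green plane shows the highest vessel/background contrast, which is one reason the channels are examined separately before recombination.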

Eliminate uneven illuminations and noise

More retinal vessels become visible after correcting uneven illumination and removing noise from the retinal fundus images, and we used image-processing tactics to handle this problem. The first step converts the RGB image to an inverted RGB image; we then apply morphological operations to handle the background non-uniformity, using top-hat and bottom-hat tactics to make the vessels well visible. Together, these two tactics produce the illumination-corrected output of this step [10].

Conversion of grey-scale image

Fine details are observed in the grayscale image, which is especially important in medical images, where feature analysis is critical; careful observation of retinal images is essential for tracking the evolution of eye diseases. After dealing with the problem of uneven illumination, the next major task is to combine the RGB channels into a single grayscale image, which is necessary because each channel shows a different variation in contrast. A principal component analysis (PCA) technique is used to obtain a good grayscale image. PCA transforms (rotates) the intensity magnitudes of the color space onto orthogonal axes, yielding a well-contrasted grayscale image. The conversion of the retinal RGB channels to grayscale is thus well defined. PCA gives a very discriminative image with respect to the vessels against their background; histogram analysis of the PCA image shows that it is more spread out, covering more intensity levels than the image produced by the morphological tactics.
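The PCA grayscale conversion above treats each pixel as a 3-vector in color space and projects it onto the axis of maximum variance. A minimal sketch, assuming the first principal component is the one retained (the rescaling to [0, 255] is our own addition for display):

```python
import numpy as np

def pca_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Project RGB pixels onto their first principal component.

    Rotates the color space onto orthogonal axes and keeps the axis of
    maximum variance as the grayscale image.
    """
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    pixels -= pixels.mean(axis=0)
    # Eigen-decomposition of the 3x3 channel covariance matrix.
    cov = np.cov(pixels, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    first_pc = eigvecs[:, np.argmax(eigvals)]
    gray = pixels @ first_pc
    # Rescale to [0, 255] for display (epsilon guards a constant image).
    span = gray.max() - gray.min() + 1e-12
    gray = (gray - gray.min()) / span * 255.0
    return gray.reshape(h, w)
```

Because the projection maximizes variance, the resulting histogram is more spread out than any single channel, matching the histogram behaviour reported in the text.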

Coherence of the retinal vessels

After obtaining the grayscale image, the retinal vessels still need improvement: the large vessels are observed correctly, but the small vessels cannot yet be analyzed. Tiny vessels can be analyzed correctly using oriented diffusion filtering, a technique first adopted for enhancing low-quality fingerprints. Oriented diffusion filtering requires externally calculated orientation information for the image, known as an orientation field (OF), which builds the diffusion tensor and orients it along the flow direction of the vessels. The main motivation for using anisotropic diffusion filtering is to obtain the best ellipse tilt-angle data and thereby detect small vessels correctly, producing the anisotropically diffused image.
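As a simplified stand-in for the oriented (tensor-driven) diffusion described above, the scalar Perona-Malik scheme below illustrates the core idea of edge-preserving diffusion: smooth within regions, stop at edges. It omits the orientation field entirely, so it is only a sketch of the diffusion step, not the paper's coherence-enhancing filter; `kappa`, `gamma`, and the periodic boundary handling are assumptions.

```python
import numpy as np

def perona_malik(img: np.ndarray, n_iter: int = 10, kappa: float = 30.0,
                 gamma: float = 0.2) -> np.ndarray:
    """Edge-preserving Perona-Malik diffusion (scalar, non-oriented).

    kappa controls edge sensitivity; gamma is the step size (<= 0.25 for
    stability). np.roll gives periodic boundaries for simplicity.
    """
    out = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        dn = np.roll(out, -1, axis=0) - out
        ds = np.roll(out, 1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out, 1, axis=1) - out
        # Exponential conduction: ~1 in flat regions, ~0 across strong edges.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        out += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return out
```

The oriented variant in the paper replaces the scalar conduction coefficients with a diffusion tensor whose principal axis follows the orientation field, so smoothing runs along the vessel rather than across it.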

Conclusion

This research work analyzes the impact of the coherence of the retinal vessels on their segmentation. Previous retinal vessel segmentation methods addressed the issues of low, varying contrast and noise; however, those techniques were ineffective at increasing the sensitivity of small-vessel detection, which requires good coherence of the retinal vessels before segmentation. The ability to correctly identify retinal vessels gives medical experts an advantage in analyzing disease progression and recommending appropriate treatment. In this study, the proposed coherency enhancement of the retinal vessels (pre-processing step) and its impact on the segmentation module (post-processing step) produced promising results for small-vessel segmentation. The reported method performed well and is comparable to existing methods on the STARE and DRIVE datasets. We compared the performance of our proposed method against both traditional methods and methods based on deep learning, achieving a sensitivity of 0.821 on DRIVE and 0.811 on STARE, a specificity of 0.962 on DRIVE and 0.959 on STARE, an accuracy of 0.961 on DRIVE and 0.954 on STARE, and an AUC of 0.967 on DRIVE and 0.966 on STARE; this performance surpasses the traditional and deep learning methods. Our proposed method also takes less computation time than existing methods.

There are still many ideas for improvement in future work. We will implement a robust CNN model together with the coherency module as a pre-processing step to achieve improved performance. Another future improvement is work on the databases and the generation of synthetic images to improve the training process in order to obtain a well-segmented image.

References

  1. Dwyer C. ‘Highway to Heaven’: the creation of a multicultural, religious landscape in suburban Richmond, British Columbia. Soc Cult Geogr. 17, 667-693 (2016).
  2. Fonseca FT. Using ontologies for geographic information integration. Transactions in GIS. 6, 231-257 (2009).
  3. Harrison P. How shall I say it…? Relating the nonrelational. Environ Plan A. 39, 590-608 (2007).
  4. Imrie R. Industrial change and local economic fragmentation: The case of Stoke-on-Trent. Geoforum. 22, 433-453 (1991).
  5. Jackson P. The multiple ontologies of freshness in the UK and Portuguese agri‐food sectors. Trans Inst Br Geogr. 44, 79-93 (2019).
  6. Tetila EC, Machado BB et al. Detection and classification of soybean pests using deep learning with UAV images. Comput Electron Agric. 179, 105836 (2020).
  7. Kamilaris A, Prenafeta-Boldú F. Deep learning in agriculture: A survey. Comput Electron Agric. 147, 70-90 (2018).
  8. Mamdouh N, Khattab A. YOLO-based deep learning framework for olive fruit fly detection and counting. IEEE Access. 9, 84252-8426 (2021).
  9. Brunelli D, Polonelli T, Benini L. Ultra-low energy pest detection for smart agriculture. IEEE Sens J. 1-4 (2020).
  10. Suto J. Codling moth monitoring with camera-equipped automated traps: A review. Agric. 12, 1721 (2022).