
Volume 6, Issue 10, October – 2021 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Development of a Retinal Image Segmentation Algorithm for the Identifying Prevalence Markers of Diabetic Retinopathy Using a Neural Network

Muluneh Hailu Heyi, Department of Electrical and Computer Engineering, Hawassa University, Hawassa, Ethiopia
Daniel Moges Tadesse, Department of Biomedical Engineering, Hawassa University, Hawassa, Ethiopia

Abstract:- Diabetic Retinopathy (DR) is a prominent cause of blindness and visual impairment in people affected by diabetes. Most of the time it shows no symptoms at an early stage, and it is hard for the patient to notice them until visual ability degrades and treatment becomes less effective. It is also difficult for medical experts (ophthalmologists) to identify DR at an early stage manually by observing the retinal image taken by a fundus camera. Thus, computer-aided processing of retinal images taken by a fundus camera has tremendous advantages for detecting retinal lesions associated with Diabetic Retinopathy at an early stage. With less time and effort, computer-aided image processing examines a large number of images more accurately than manual observer-driven techniques, making it an important diagnostic aid that reduces the workload of ophthalmologists. However, the presence of various artifacts, such as the similarity of anatomical structures, movement of the patient's eye during image capture, device noise, and illumination, makes the segmentation and processing of the major pathological structures a difficult task.

In this study, we have developed a retinal image segmentation algorithm and user-friendly software that can ease the task of medical experts by automatically identifying Hard Exudates (HEs), the most prevalent characteristic features of Diabetic Retinopathy in its earliest stage. The algorithm was first written and tested in MATLAB, and user-friendly software was then developed in the C# programming language on the Microsoft .NET framework. To classify and segment the retinal image taken by the fundus camera, a general representation of image color in three spaces (trinion) has been used, and a trinion-based Fourier transform has been applied to extract image features. Neural Network (NN) based segmentation of Hard Exudates is included in the method, together with the color space transformation and feature extraction.

The efficiency of the developed image processing has been tested in classifying and identifying hard exudates, and it shows encouraging results.

Keywords:- Retinal Imaging, Image Processing, Image Segmentation, Neural Network, Diabetic Retinopathy.

I. INTRODUCTION

The human eye has many parts, each with its own purpose, and processing the interior image of an eye requires a good understanding of these parts. The retina acts like the film of the eye on its interior surface and converts light rays into electrical signals. Through the optic nerve, it sends the converted electrical signals to the brain; the optic nerve acts as a wire connecting the eye with the brain for the electrical signal. The optic disc (OD) is the small round mark on the retina where the optic nerve exits and the blood vessels enter the eye, and it appears as a lighter area in the retinal image. The macula is found around the central region of the retina and controls central vision. The fovea is a small part of the retina at the center of the macula and is responsible for the highest visual acuity. The vascular network is responsible for providing oxygen, nutrients, and blood to the retina (Afzal 2003).

Human eyes can lose their sight for different reasons, and Diabetic Retinopathy (DR) is one of them. DR is a prominent cause of blindness and visual impairment in people affected by diabetes. Most of the time it shows no symptoms at an early stage, and it is hard for the patient to notice them until visual ability degrades and treatment becomes less effective. It is also difficult for ophthalmologists to identify DR at an early stage manually by observing the retinal image taken by a fundus camera. DR has different forms and stages in which it affects vision, and these stages are represented by the characteristic features of DR: Microaneurysms (MA), Hemorrhages (H), and Exudates (Hard Exudates (HE) and Soft Exudates (SE)). MAs are discrete, localized expansions of weakened capillary walls and show up as small red 'dots' on the retina. When the small blood vessels rupture, bleeding occurs over time; hemorrhages generally appear as either red 'dots' or 'flame-like' shapes on the retina. Exudates are the main sign of DR, a common retinal complication related to diabetes and a leading cause of blindness. Hard exudates (HEs) are the most specific markers for the presence of retinal edema, the major cause of vision loss in nonproliferative forms of DR, and one of the most

IJISRT21OCT275 www.ijisrt.com 713


common lesions in the early stages of DR (Phillips et al. 1993). HE is caused by the leakage of proteins and lipids from the blood into the retina through damaged blood vessels. They appear as white or yellow patches in retinal images, sometimes as ring structures around leaking capillaries. As the severity of DR progresses, the vessels become clogged, leading to a micro-infarct in the retina called SE. In advanced stages, this leads to diabetic macular edema (DME).

DR is not a curable disease, but if it is sensed in the early stages, major vision loss can be prevented with laser treatment. This is why diabetics need to have regular fundus camera examinations of the back of their eyes. A variety of ophthalmoscopic techniques, including digital fundus photography, indirect ophthalmoscopy, stereoscopic biomicroscopy, fluorescence angiography, and optical coherence tomography (OCT), are used for the diagnosis and treatment of ophthalmic diseases. However, due to their relatively low cost, simplicity, and accessibility, fundus cameras are the most commonly used. 2D imaging of the translucent 3D retinal tissue is suitable for observing the fundus and its structures (see Fig. 1), and is most commonly used to identify and assess symptoms of retinal detachment or ocular disease due to DR. Color images of the fundus of the eye are also taken to document the presence of disease and to observe its change over time.

Fig. 1. RGB color retinal image taken by a fundus camera (Osareh and Shadgar 2009).

The main objective of the study was to develop an automated method for processing retinal images captured by a fundus camera, which would allow effective detection and mass screening of DR markers. This research paper explains the holistic approach to retinal image analysis: texture feature extraction for the detection of HE, signature maps created to enhance EXs and the OD, classification used to identify abnormal retinal images, and the development of user-friendly software that allows medical experts to easily operate the system.

II. RELATED WORKS

The authors (Satya et al. 2019) proposed an algorithm to detect HEs using morphological operations. Their algorithm consists of three stages: first, preprocessing; second, feature extraction; and third, HE detection. They apply contrast enhancement and noise removal to the green component of the retinal color image, and only this channel is used for further analysis. In the second phase, features are extracted to detect HE candidates using morphological operations (top-hat and bottom-hat). Finally, the algorithm detects HEs by finding the difference between the bottom-hat and top-hat features. To investigate their proposed algorithm, they used the DIARETDB1 and the High-Resolution Fundus (HRF) image databases. Using the DIARETDB1 database, they achieved an average sensitivity of 94%, a specificity of 96%, and an accuracy of 95%. With the HRF retina database, they attained an average sensitivity of 95%, specificity of 95%, and accuracy of 97%.

The authors (Shengchun et al. 2019) also proposed an algorithm to detect HEs. First, they perform automatic preprocessing of the retinal images using an active thresholding technique and fuzzy C-means (FCM) clustering, and then use a Support Vector Machine (SVM) classifier. Their proposed algorithm includes four stages: the first is preprocessing, the second is optic nerve head localization, the third is determining eligible HE candidates by using a dynamic threshold in clustering together with a global threshold based on FCM, and the last is feature extraction. In the last stage, eight texture features are extracted from the candidate regions and fed into an SVM classifier for automatic classification of HEs. The DIARETDB1 and e-ophtha EX retinal image databases were used to evaluate the algorithm, which was trained and cross-validated at the pixel level (10-fold). A mean sensitivity of 76.5%, PPV of 82.7%, and F-score of 76.7% were achieved on the e-ophtha EX database, and an average sensitivity of 97.5%, specificity of 97.8%, and accuracy of 97.7% on the DIARETDB1 database.

(Anup et al. 2017) have proposed an algorithm for feature-based classification of HE in retinal images. First, each of the three color components is preprocessed, then the optic disk and blood vessels are extracted from the image and

finally the exudate pixels in the image are identified/classified using a region growing technique. They used standard retinal image databases and achieved an average accuracy of 99% in segmenting or classifying HE pixels in an image. (Vikram et al. 2020) have also proposed a neural network-based method for the diagnosis of DR-related diseases.

(Shilpa et al. 2018) have proposed an algorithm for detecting HEs. Their proposed algorithm includes three stages: the first is preprocessing, the second is morphological operation, and the third is segmentation of HEs. They apply morphological reconstruction to the green component of the retinal color image. In the second stage, features are extracted to detect HE candidates using the morphological operation. The final segmentation algorithm recognizes the HEs considering their features.

In the literature, many researchers have developed various methods for segmenting HEs, the vasculature, the fovea, and the optic disc in fundus images (Eadgahi and Pourreza 2012, Niemeijer et al. 2007, Ehsan and Somayeh 2012, Liu et al. 2008, and Godse et al. 2013). In general, most attempts at color fundus image analysis have focused on analyzing each color component sequentially and combining the results from the different channels. However, these sequential methods hide the existing cross-correlations between the color channels, and the associated computational costs are often high. In this regard, a more holistic approach to representation and analysis can have tremendous benefits. A recent application of multidimensional algebra in color image analysis uses vector representations of the three color components in quaternion and trinion spaces. The corresponding integral transforms allow the analysis to be performed on the image as a whole while retaining information about the cross-correlations (Assefa et al. 2010a and Assefa et al. 2011). Based on such a vectorial principle, in this study we have developed a robust retinal image segmentation algorithm that facilitates the task of medical experts by automatically identifying HEs.

III. MATERIALS AND METHODS

1.1 Image Data Set
For analyzing the proposed image processing schemes, a set of color retinal images generated by a fundus camera was collected from the Black Lion Hospital Diabetic Centre in Ethiopia (a total of 66 images), together with standard retinal image data sets (a total of 393 images from 5 standard retinal image data sets).

For standard retinal images, the following publicly available data sets have been used for performance evaluation, analysis, and testing of the developed algorithm: DIARETDB0, DIARETDB1, the High-Resolution Fundus (HRF) image database, STARE (STructured Analysis of the REtina), and DRIVE (Digital Retinal Images for Vessel Extraction). From each database both normal and abnormal retinal images have been used.

These publicly available archives consist of high-quality images with useful medical findings. Expert delineations are used as the gold standard for performance evaluation and comparison of different segmentation methods.

To cover different situations and to produce robust image processing algorithms, we have included the following types of retinal images in the data sets:
 Normal retinal images
 Retinal images with mild and moderate nonproliferative DR, which have HE, SE, MA, and H on them
 Retinal images with glaucoma

1.2 Image Analysis Methodology
The suggested method analyses color retinal images captured with digital fundus cameras from individuals with DR using a mathematical framework. It extracts important features for classification and segmentation of retinal images using a general representation of color images in three-dimensional (trinion) space and trinion-based Fourier transforms. The technique includes a suitable color space transformation and a method for extracting robust higher-order features, followed by HE segmentation using a Neural Network (NN). The efficacy of image segmentation algorithms based on NN and Neuro-Fuzzy (NF) models in detecting the major abnormality markers (HEs, SEs, MA, H) and attenuating intra- and inter-image fluctuations due to undesirable artifacts is comprehensively studied. The technique has been applied to images from a variety of typical retinal image data sets, with encouraging results. The proposed methodology, shown in Fig. 2, concentrates on the following two issues:
 Effective augmentation of EXs, the OD, blood vessels, and other background structures, based on feature maps developed by extracting robust texture descriptors.
 Accurate segmentation of EXs using a neural network (NN)-based classifier.
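The color space transformation at the core of this methodology (conversion to double precision, division by 255, and assembly of the G, L, and inverse magenta channels, detailed in the preprocessing section below) can be sketched in Python. This is an illustrative reconstruction, not the authors' MATLAB/C# implementation; the standard sRGB-luminance, CIE L*, and CMYK conversion formulas are our assumptions:

```python
import numpy as np

def normalize_image(img_uint8):
    """Convert a uint8 RGB retinal image to double precision in [0, 1]."""
    return img_uint8.astype(np.float64) / 255.0

def glm_channels(rgb):
    """Assemble the GLM` representation: G from RGB, L from CIE LUV,
    and the inverse magenta (M`) component of CMYK.
    `rgb` is a float array in [0, 1] with shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # CIE L* from relative luminance Y (sRGB primaries), rescaled to [0, 1]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    thresh = (6.0 / 29.0) ** 3
    lstar = np.where(y > thresh, 116.0 * np.cbrt(y) - 16.0,
                     (29.0 / 3.0) ** 3 * y) / 100.0

    # CMYK magenta: K = 1 - max(R, G, B), M = (1 - G - K) / (1 - K)
    k = 1.0 - rgb.max(axis=-1)
    m = np.where(k < 1.0, (1.0 - g - k) / (1.0 - k + 1e-12), 0.0)

    return np.stack([g, lstar, 1.0 - m], axis=-1)  # [G, L, inverse magenta]
```

The three stacked planes then feed the trinion mapping of the feature extraction phase.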


Fig. 2. Block diagram of the suggested image processing framework.

IV. IMAGE PROCESSING ALGORITHM DEVELOPMENT AND IMPLEMENTATION

Retinal image processing follows a series of steps from holistic processing to feature extraction and NN classification. Hereunder we explain each phase of the image processing algorithm step by step.

1.3 Preprocessing
In the preprocessing phase, we perform normalization of the image data and a suitable color space selection. After the retinal image is read or loaded from storage, the image data type is converted from uint8 to double. Each color component is then divided by 255 so that its values lie between 0 and 1. This normalization is applied to reduce overflow errors in the further analysis.

When the normalization process is completed, an appropriate color space must be selected. Because of pigmentation and the acquisition process, retinal images vary greatly in brightness and contrast. This makes it more difficult to distinguish retinal features and lesions, and hinders the automatic segmentation of abnormalities such as EXs.

Determining a color space that represents the dominant information of the retina better than the original RGB color space has proven useful for in-depth exploration. To this end, several color spaces such as RGB, HSL, Lab, LUV, CMYK, and YCbCr were tested and compared for their suitability for extracting surface descriptors in a robust and useful way.

In a normal retinal image, the red component is oversaturated, resulting in low contrast in bright areas. The blue component is undersaturated, and the dark areas have less contrast. The green component alone enhances the contrast of the entire area. As a result, the image overall has very low contrast, as large peaks are grouped together in small areas of the histogram.

For further analysis, two color spaces were selected and tested for better contrast and uniformity of EXs and other retinal structures: HSL (Hue, Saturation, Lightness) and GLM` (RGB G component, LUV L component, and CMYK inverse magenta component). The HSL color space was chosen because it is similar to human color perception and shows less variation within and between images due to various artifacts, which is potentially useful for our anomaly detection system. The GLM` color space was chosen based on the scatter matrix results of maximum interclass separability in (Lu and Fang G 2013). The GLM` channels are important for improving DR detection performance.

As mentioned earlier, when using an image processing program to inspect DR, the green channel of the original retinal color fundus image is typically used. The reason is that most methods rely on color intensity information as the basis for developing image processing algorithms for color images. The green channel of the fundus retinal image is often used for HE detection and segmentation due to the high contrast of HE in that channel (Sánchez et al. 2009). However, a more holistic analysis is generally more meaningful and should therefore lead to a more effective analysis of color images. In this regard, the current work combines three typical channels for DR-based anomaly detection and OD localization in fundus images. The G channel from the RGB color space was chosen based on the above facts: HE appears brighter in the fundus image than the background, which is dominated by the green channel. Due to its consistent EX and OD luminance information, the L channel of the LUV color space was chosen as the second channel (Kande et al. 2009). In most fundus images, the background color is red, while HE is yellowish; the inverse magenta channel from the CMYK color space was therefore selected as the third channel, to allow good separation of the dark red blood vessels from the OD (see, for example, Fig. 3). The GLM` color space scatter matrix

calculated in (Lu and Fang G 2013) is larger than the scatter matrices of the RGB, HSV, LUV, and Lab color spaces in terms of separating EX from non-EX pixels.

Fig. 3. (a) Sample original image, (b) green channel, (c) luminance channel, and (d) inverse magenta channel.

1.4 Feature Extraction
After the preprocessing phase is completed, the next phase is feature extraction, which includes the following tasks:
 Mapping GLM` data to the three color vectors in trinion space
 Feature dimension reduction
 Textural feature extraction for candidate detection and improved visualization
 Signature map generation

4.2.1. Mapping GLM` data to the three color vectors in trinion space
The converted RGB color retinal image is mapped to a trinion as H(x, y) = G + iL + jM`. We then performed a spatially localized analysis in the selected color space by calculating the trinion-Fourier transform (TFT) in a 3x3 translation window.

Two practical definitions of the trinion-Fourier transform (TFT) have been proposed (Assefa et al. 2011). The TFT of type I and its inverse (ITFT) are given by:

T(u,v) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(x,y)\,\big(\cos(2\pi(ux+vy)) - \mu_1 \sin(2\pi(ux+vy))\big)\,dx\,dy   (1)

h(x,y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} T(u,v)\,\big(\cos(2\pi(ux+vy)) + \mu_2 \sin(2\pi(ux+vy))\big)\,du\,dv   (2)

where h(x, y) is generally a trinion-valued image function, \mu_1 is a pure unit trinion, and \mu_2 is a trinion such that the product \mu_1 \mu_2 = -1. The choice between \mu_1 and \mu_2 is optional. Similarly to previous studies (Assefa et al. 2011), the choices of \mu_1 and \mu_2 are given by \mu_1 = (i - j)/\sqrt{2} and \mu_2 = (-1 - i + j)/\sqrt{2}.

Based on this, the discrete TFT and its inverse are computed as follows:

T(u,v) = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} h(x,y)\,\big(\cos\big(2\pi(\tfrac{ux}{M}+\tfrac{vy}{N})\big) - \mu_1 \sin\big(2\pi(\tfrac{ux}{M}+\tfrac{vy}{N})\big)\big)   (3)

h(x,y) = \sum_{u=0}^{M-1}\sum_{v=0}^{N-1} T(u,v)\,\big(\cos\big(2\pi(\tfrac{ux}{M}+\tfrac{vy}{N})\big) + \mu_2 \sin\big(2\pi(\tfrac{ux}{M}+\tfrac{vy}{N})\big)\big)   (4)

Where:
M \times N is the total number of voxels (vectors) in the selected region of interest (window) of the source image, and u = 0, ..., M-1 and v = 0, ..., N-1 are the discrete frequencies along the horizontal and vertical directions respectively.

4.2.2. Feature Dimension Reduction
Principal component analysis (PCA) was applied to each locally TFT-transformed 3 x 3 window, each resulting in a trinion-valued output in the new PCA space; in our case each output is formed from a 3 x 3 matrix. This step is necessary to reduce some of the redundancy in our multichannel data. Each value of the resulting 3 x 3 matrix is then normalized between 0 and 1, and these values are the probability density functions used to calculate the texture properties.

4.2.3. Extract texture features for candidate detection and image enhancement
Nine different Haralick texture features (Assefa et al. 2010a) were calculated: Sum Mean, Variance, Energy (Angular Second Moment), Correlation, Homogeneity, Contrast, Entropy, Cluster Shade, and Cluster Prominence, and they were tested for their effectiveness in quantifying the different structures in our retinal samples. Each texture feature is extracted as a component of the PCA matrix with three values. The computed characteristic is then assigned to the centre voxel of the window. This step is repeated for all voxels contained in the selected region of interest in the image.

The above textural features were computed as follows:

\text{Sum mean} = 0.5\sum_{u=1}^{3}\sum_{v=1}^{3}\big(u\,p(u,v) + v\,p(u,v)\big)   (5)

\text{Variance} = 0.5\sum_{u=1}^{3}\sum_{v=1}^{3}\big((u-\mu)^2 p(u,v) + (v-\mu)^2 p(u,v)\big)   (6)

\text{Energy} = \sum_{u=1}^{3}\sum_{v=1}^{3} p(u,v)^2   (7)

\text{Correlation} = \sum_{u=1}^{3}\sum_{v=1}^{3} \frac{(u-\mu_x)(v-\mu_y)\,p(u,v)}{\sigma_x \sigma_y}   (8)
\text{Homogeneity} = \sum_{u=1}^{3}\sum_{v=1}^{3} \frac{p(u,v)}{1+(u-v)^2}   (9)

\text{Contrast} = \sum_{u=1}^{3}\sum_{v=1}^{3} (u-v)^2 \log(p(u,v))   (10)

\text{Entropy} = -\sum_{u=1}^{3}\sum_{v=1}^{3} p(u,v)\log(p(u,v))   (11)

\text{Cluster shade} = \sum_{u=1}^{3}\sum_{v=1}^{3} (u+v-\mu_x-\mu_y)^3\, p(u,v)   (12)

\text{Cluster prominence} = \sum_{u=1}^{3}\sum_{v=1}^{3} (u+v-\mu_x-\mu_y)^4\, p(u,v)   (13)

Where:
p(u,v) is the normalized spectral value (which can be thought of as a probability density function) obtained after applying PCA to the TFT-transformed image matrix,
\mu = \frac{1}{9}\sum_{u=1}^{3}\sum_{v=1}^{3} p(u,v) is the mean of the matrix,
\mu_x = \sum_{u=1}^{3} u \sum_{v=1}^{3} p(u,v) is the sum of row means,
\mu_y = \sum_{v=1}^{3} v \sum_{u=1}^{3} p(u,v) is the sum of column means,
\sigma_x^2 = \sum_{u=1}^{3} (u-\mu_x)^2 \sum_{v=1}^{3} p(u,v) is the sum of row variances, and
\sigma_y^2 = \sum_{v=1}^{3} (v-\mu_y)^2 \sum_{u=1}^{3} p(u,v) is the sum of column variances.

4.2.4. Signature map generation
The above feature values are normalized and used to create the final signature map. This makes it possible to distinguish between areas corresponding to the clinically significant structures associated with HE, the OD, and DR. The final signature map was generated as a color image using a 3-level local texture descriptor. The performance of the signature maps generated by the various texture features was compared and evaluated by qualitative and quantitative comparison between the signature maps and the available ground truths.

1.5 Image Segmentation
To differentiate HE from the normal part of the retinal image, the image resulting from the feature extraction process is divided or segmented into different regions based on the characteristics of the pixels, in order to identify the affected areas. There are different techniques for image segmentation, but due to its efficiency we used a feed-forward back-propagating neural network to segment HE.

We have used the following specification to segment the image using the feed-forward back-propagating neural network:
1. As shown in Fig. 4, we have developed a NN with 3 layers. The first (input) layer contains three neurons, equal to the number of input feature vectors for the NN. In the second (hidden) layer, we have used five neurons. The third (output) layer contains one neuron, equal to the number of output vectors for the classifier.

Fig. 4. The developed feed-forward back-propagating neural network.

2. We have used the tan-sigmoid activation function for all neurons in the hidden layer and a linear activation function for the output neuron.
3. We have used a learning rate of 0.01 and a momentum value of 0.0071 for all the neurons in the output and hidden layers. These parameter values were chosen after various testing and validation stages of the NN.
4. We have used 1000 iterations. This parameter value was chosen after looking at the sum of mean square errors of the validation, testing, and early training data sets.
5. We have used a total of 257000 training samples, where each sample contains the three features found in the signature map generation phase. The training data are manually selected from different structural positions of various images and contain various normal retinal structures, as well as structures with symptoms of mild nonproliferative DR that have HE, SE, MA, and H on them. The training data set contains:
 64500 voxels (feature vectors) of HE from different images,
 53400 voxels (feature vectors) of SE from different images,
 50800 voxels (feature vectors) of H from different images and spatial positions,
 12000 voxels (feature vectors) of MA from different images,
 76300 voxels (feature vectors) of normal anatomical areas from different images, spatial positions, and structures.
6. To decide the stopping point of iterations/number of epochs and to tune the NN parameters, the total learning data set is first divided into three separate categories: training, validation, and testing data sets, containing a total of 112400, 112400, and 32200 voxels (feature vectors) respectively. After assessing the results, the best performing NN parameters and number of epochs are chosen. Then the NN is trained and its training error is measured and compared with the testing error. The final training and testing mean square error results of the trained NN are in an acceptable range at iteration number 1000. Based on this, we use the final trained NN as a pixel-based HE classifier for new features.
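The 3-5-1 network specified above can be sketched in Python with NumPy. This is a minimal illustrative reconstruction, not the authors' MATLAB implementation; it uses the stated tan-sigmoid hidden layer, linear output neuron, learning rate 0.01, and momentum 0.0071:

```python
import numpy as np

class HEClassifierNN:
    """Sketch of the 3-5-1 feed-forward back-propagation network:
    3 input neurons (one per feature), 5 tan-sigmoid hidden neurons,
    1 linear output neuron, trained by gradient descent with momentum
    (learning rate 0.01 and momentum 0.0071, as stated in the paper)."""

    def __init__(self, lr=0.01, momentum=0.0071, seed=0):
        rng = np.random.default_rng(seed)
        self.w1, self.b1 = rng.normal(0, 0.5, (3, 5)), np.zeros(5)
        self.w2, self.b2 = rng.normal(0, 0.5, (5, 1)), np.zeros(1)
        self.lr, self.mu = lr, momentum
        self.v = [np.zeros_like(p) for p in (self.w1, self.b1, self.w2, self.b2)]

    def forward(self, x):
        self.h = np.tanh(x @ self.w1 + self.b1)    # tan-sigmoid hidden layer
        return self.h @ self.w2 + self.b2          # linear output neuron

    def train_step(self, x, t):
        """One full-batch gradient step; returns the mean square error."""
        y = self.forward(x)
        err = y - t
        n = len(x)
        dh = (err @ self.w2.T) * (1.0 - self.h ** 2)   # back-prop through tanh
        grads = (x.T @ dh / n, dh.mean(axis=0),
                 self.h.T @ err / n, err.mean(axis=0))
        for i, (p, g) in enumerate(zip((self.w1, self.b1, self.w2, self.b2), grads)):
            self.v[i] = self.mu * self.v[i] - self.lr * g  # momentum update
            p += self.v[i]
        return float((err ** 2).mean())
```

Running `train_step` for 1000 epochs over an (N, 3) feature array with 0/1 targets mirrors the training loop described in items 3-6.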

After the image segmentation of HE is implemented Table 1. Quantitative index of layered separability of
using feed forward back propagating neural network, we have different color models and texture descriptors
also developed a six-layer adaptive neuro-fuzzy (ANF) Color model Texture descriptor Scatter matrix
network to see its efficacy in classifying HE, SE, H, and MA, (J)
which are the most characteristic features of DR. In the six GLM' Sum means 5.81
layer adaptive neuro-fuzzy network we have developed, the HSL Sum means 5.4
first layer is the input layer and it has three neurons each GLM' Variance 5.63
accepting the input vector. The second is the membership HSL Variance 5.13
function layer and it has nine neurons, where each of them
GLM' Cluster prominence 13.6
fuzzified each input based on the membership function
HSL Cluster prominence 9.8
parameters of each class. And the next third layer is the rule
base layer and it has 27 neurons. The fourth layer is the GLM' Sum means 5.81
normalized rule base layer and it has 108 neurons. Layer five
is the defuzzification layer and it has 108 neurons. In this 1.7 Signature maps Analysis
layer, there are 4 by 108 defuzzification parameters (learning Based on available facts, the best performing feature
capable weights and the product of the weights with the input maps were first selected based on their accuracy in
vector gives defuzzification output when multiplied by the classifying different structures in retinal images. To quantify
fired normalized rule base. The final layer is the output layer the accuracy of the classification, a trace index J was
and it has four neurons for each output variable. It calculates determined. It estimates the separability of pixel classes (EX
the exact crisp output for each variable based on calculating vs. non-EX). To quantify the accuracy of the classification,
the weighted sum of each fired and defuzzied rule. In the the trace data were calculated according to Equation (14). The
forward pass rule, consequent parameters are obtained based J-metric estimates the separability of pixel classes (e.g., EX
on given inputs and membership function parameters. In the backward pass, the antecedent parameters are updated based on the error found in the output layer.

V. PERFORMANCE ANALYSIS OF THE ALGORITHM

The methods used at each phase of the retinal image processing must be evaluated to assess their performance. At the preprocessing stage, several analyses were carried out to select the optimal feature map for segmentation of the retinal images. The performance of the signature map is analyzed at the feature extraction phase, and then the performance of the NN in segmenting HEs is evaluated.

1.6 Evaluation of the optimal feature map selection for retinal image segmentation
Quantitative analysis of the available baseline data revealed that signature maps generated from three features (cluster prominence, sum mean and variance, computed in the GLM` and HSV color spaces) were superior to the other features in their ability to clearly identify the various objects in the retinal samples studied. To evaluate the discriminating power of the proposed method in separating HE and non-HE pixels, Table 1 shows the values of the scatter-matrix index (J index) for the cluster prominence, sum mean and variance features calculated in each of the RGB, YIQ, HSL, GLM`, YCbCr and Lab color spaces. For this test, 1009 EX and 2095 non-EX pixels were manually selected to analyze the separability of the EX and non-EX pixel classes; the EX training set was taken from the available ground truth. As can be seen in Table 1, the cluster prominence feature computed in the GLM` color space provides the highest index and is therefore used for optimal segmentation of the retinal (especially HE) images.

The separability of the two pixel classes (EX vs. non-EX) is measured by the J index, computed from the within-class (Sw) and between-class (Sb) scatter matrices as follows:

J = trace(Sb Sw^-1)                                    (14)

where trace is a function which returns the sum of the diagonal elements of a square matrix, and

Sb = (mu_ex - mu_non-ex)(mu_ex - mu_non-ex)^T          (15)

Sw = S_ex + S_non-ex                                   (16)

mu_ex and mu_non-ex are the means of the EX and non-EX classes respectively, estimated from the corresponding training sets; X_ex and X_non-ex are the training sets for the EX and non-EX classes, and N and M are the total numbers of samples in X_ex and X_non-ex respectively:

mu_ex = (1/N) sum_{i=1..N} X^i_ex                      (17)

mu_non-ex = (1/M) sum_{i=1..M} X^i_non-ex              (18)

S_ex and S_non-ex are the scatter matrices of each class:

S_ex = (1/N) sum_{i=1..N} (X^i_ex - mu_ex)(X^i_ex - mu_ex)^T                          (19)

S_non-ex = (1/M) sum_{i=1..M} (X^i_non-ex - mu_non-ex)(X^i_non-ex - mu_non-ex)^T      (20)
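Equations (14) to (20) can be computed directly with NumPy. The sketch below is illustrative only: the function name `j_index` and the two small random training sets are stand-ins for the manually selected EX and non-EX feature vectors, not the paper's data.

```python
import numpy as np

def j_index(X_ex, X_non_ex):
    """Scatter-matrix separability index J = trace(Sb Sw^-1), Eqs. (14)-(20).

    X_ex, X_non_ex: (N, d) and (M, d) arrays of per-pixel feature vectors.
    """
    mu_ex = X_ex.mean(axis=0)                                   # Eq. (17)
    mu_non = X_non_ex.mean(axis=0)                              # Eq. (18)
    d = (mu_ex - mu_non)[:, None]
    Sb = d @ d.T                                                # Eq. (15): between-class scatter
    S_ex = (X_ex - mu_ex).T @ (X_ex - mu_ex) / len(X_ex)        # Eq. (19)
    S_non = (X_non_ex - mu_non).T @ (X_non_ex - mu_non) / len(X_non_ex)  # Eq. (20)
    Sw = S_ex + S_non                                           # Eq. (16): within-class scatter
    return np.trace(Sb @ np.linalg.inv(Sw))                     # Eq. (14)

# Illustrative, well-separated synthetic classes (not real retinal features)
rng = np.random.default_rng(0)
X_ex = rng.normal(5.0, 1.0, size=(100, 3))
X_non = rng.normal(0.0, 1.0, size=(100, 3))
print(j_index(X_ex, X_non))  # large J indicates well-separated classes
```

A higher J means the class means are far apart relative to the within-class spread, which is how the features and color spaces in Table 1 were ranked.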

IJISRT21OCT275 www.ijisrt.com 719


A higher J value means that the classes are more segregated, while the members of each class are closer together. In addition, a series of tests were performed on the resulting signature maps by modifying some parameters of the algorithm. The following cases were investigated:
 Effects of feature extraction with and without applying PCA.
 Effect of applying PCA on the maximum value of the modified image matrix.
 The effect of the processing window size.
 Reduction of the background by modifying the window size.

In each case, the effectiveness of the resulting signature map was assessed by quantitatively comparing the signature map to the available ground truth. For normalization, each component (channel) is divided by its maximum. However, false peaks or noise signals can suppress the actual image information in the resulting color image, and variation of these maxima from image to image, or across areas of the same image, causes unwanted color changes in the signature map. To solve this problem, the value located at the 95th percentile of the distribution was taken as the nominal maximum: the top 5% of each feature distribution in the feature space was truncated to this nominal maximum, and all values were then divided by it.

1.8 Performance evaluation of the NN in segmenting HEs

Quantitative analysis was carried out to evaluate the performance of the proposed NN in segmenting HEs and distinguishing them from the other structures. For this, a set of test HE and non-HE pixels was taken from different spatial positions and structures of several sample retinal images. Four commonly used metrics were computed: sensitivity, specificity, positive predictive value, and accuracy.
a) Sensitivity (SE): The proportion of actual positives predicted correctly, defined as SE = TP/(TP + FN), where TP is the number of true positives (pixels detected as HE by the proposed method when HE was present) and FN is the number of false negatives (pixels detected as non-HE by the proposed method when HE was present).
b) Specificity (SP): The proportion of actual negatives predicted to be negative, defined as SP = TN/(TN + FP), where TN is the number of true negatives (pixels correctly detected as non-HE by the proposed method) and FP is the number of false positives (pixels detected as HE by the proposed method in the absence of HE).
c) Positive predictive value (PPV): The likelihood that a positive prediction corresponds to an actual positive, defined as PPV = TP/(TP + FP).
d) Accuracy (ACC): The probability of a correct classification, i.e. the percentage of correct results (positive or negative), calculated as ACC = (TP + TN)/(TP + TN + FP + FN).

VI. RESULTS AND DISCUSSION

At each image processing phase, results were collected and evaluated before proceeding to the next phase. The results found at each phase of the process are presented and discussed below.

1.9 Results from Signature Map Analysis
Signature maps created with cluster prominence calculated in the GLM color space performed better than the other features and correctly distinguished the OD and HEs. Even in the case of glaucoma, this feature can be used to successfully localize the OD, and in the absence of DR the signature map places the OD correctly. Fig. 5 shows the computed feature maps of representative fundus images of two patients treated for moderate nonproliferative DR. In each case, HE (white-blue) was correctly identified by the proposed scheme, and the OD (cyan) was also correctly identified.

Fig. 5 Original images containing moderate non-proliferative DR (1st row) and corresponding signature maps (2nd row) created with the proposed scheme.

Fig. 6 shows results demonstrating the effectiveness of the proposed scheme in OD localization. For both glaucoma and normal cases, very compact signatures were generated for the OD (cyan), which stood out clearly from the background. At this stage, only qualitative analysis is performed, comparing the results of the signature map with the available ground truth.
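The percentile-based normalization described above, where the 95th-percentile value serves as the nominal maximum, can be sketched as below. The function name and implementation details are an assumption; only the 95% cut is taken from the text.

```python
import numpy as np

def normalize_channel(channel, pct=95.0):
    """Normalize one feature channel to [0, 1] using a nominal maximum.

    The value at the given percentile is taken as the nominal maximum so
    that false peaks or noise in the top 5% cannot suppress the real image
    information; values above it are truncated before dividing.
    """
    nominal_max = np.percentile(channel, pct)
    clipped = np.minimum(channel, nominal_max)   # truncate the top 5%
    return clipped / nominal_max

# Illustrative channel with one noise spike (50.0) that would otherwise
# dominate the scaling if the plain maximum were used.
channel = np.concatenate([np.linspace(0.0, 1.0, 99), [50.0]])
print(normalize_channel(channel))
```

With the plain maximum, every genuine value would be squeezed below 0.02 by the spike; with the nominal maximum, the spike is simply clipped to 1.0.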

1.10 Results of Image Segmentation
The experimental results so far show that the extracted features are very promising for automatic segmentation of HEs, classification of abnormal retinal images based on DR, and visual enhancement of retinal images. The automatic segmentation scheme for HEs proposed in Section IV-C uses an adaptive neural network to detect the presence of exudates (the most common markers of DR at the earliest stage) in an image.
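A hedged sketch of this kind of pixel-wise neural-network classifier follows. It uses scikit-learn's MLPClassifier as a stand-in for the paper's adaptive NN, and synthetic feature vectors in place of the real texture features; none of the names or data below come from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for per-pixel texture feature vectors
# (e.g. cluster prominence, sum mean, variance in the GLM` space).
rng = np.random.default_rng(1)
X_he = rng.normal(1.5, 0.4, size=(300, 3))      # HE pixels
X_bg = rng.normal(0.0, 0.4, size=(300, 3))      # non-HE pixels
X = np.vstack([X_he, X_bg])
y = np.array([1] * 300 + [0] * 300)             # 1 = HE, 0 = non-HE

# A small feed-forward NN trained to label each pixel's feature vector;
# applying it to every pixel of an image yields a binary HE mask.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

The key design point is that segmentation is cast as per-pixel classification: the network never sees the image as a whole, only one feature vector per pixel.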

Fig. 6 Original images with moderate non-proliferative DR (1st column) and the corresponding signature maps created with the proposed scheme (2nd column).

Fig. 7 shows the HE segmentation results of two representative images with EXs. In both cases, HE was detected very well on each map (pure blue). Results for two retinal images without HE pixels are shown in Fig. 8; in each case, the resulting image clearly shows no signs of HE (pure blue). In addition to the segmentation and classification functions, the proposed method provides improved color contrast for the OD and background regions.
The signature map results suggest that the method proposed in this study is capable of identifying exudates: a clear textural signature of these abnormal markers was observed in the produced signature maps. In general, there is good agreement between the generated signature maps and the available ground truth. Therefore, the proposed multichannel texture map can be used as a powerful tool for image segmentation and classification.

Experiments were performed with 214 retinal images, obtained from healthy individuals and from patients treated for DR and glaucoma. The test set includes not only good quality images with different background and lighting colors, but also low-quality images affected by poor lighting and noise. Experimental tests showed that the algorithm works best when using a 3x3 processing window and applying PCA to the TFT-converted image matrix. The resulting signature map was found to be very useful for visual enhancement of pathological signs of DR, OD localization and EX detection.

Fig. 7 HE segmentation results; original retinal images with HE (1st column) and results after segmentation of HE pixels in pure blue color (2nd column).
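The 3x3 windowing combined with PCA can be illustrated as below. The trinion Fourier transform step is abstracted away, so the neighborhood collection and PCA reduction shown here are only a sketch of the general mechanism under assumed names (`window_features`), not the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

def window_features(img, k=3):
    """Collect the flattened k x k neighborhood of every interior pixel."""
    r = k // 2
    h, w = img.shape
    feats = [img[i - r:i + r + 1, j - r:j + r + 1].ravel()
             for i in range(r, h - r) for j in range(r, w - r)]
    return np.array(feats)

img = np.random.default_rng(2).random((16, 16))   # stand-in grayscale channel
F = window_features(img)                          # one 9-vector per interior pixel
F_reduced = PCA(n_components=3).fit_transform(F)  # keep the dominant components
print(F.shape, F_reduced.shape)
```

For a 16x16 image this produces a (196, 9) feature matrix, which PCA reduces to (196, 3) while retaining the directions of greatest variance.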

As can be seen in Fig. 5, the EXs in the generated signature map are displayed in bluish white and contrast with the other structures and the rest of the background. The signature maps provided significant visual improvement and distinction between HE and OD. A total of 125 abnormal retinal images with DR were used in the experiment, and in the corresponding signature maps HE pixels were recognized as unique colors. Overall, the robustness of the algorithm to changes in background color and lighting conditions was found to be very satisfactory. Our results showed that the third component of the signature map is the most informative for identifying EX. In addition to visually enhancing HE and OD, signature maps automate the segmentation process by providing critical information for setting thresholds on features that can serve as decision boundaries to differentiate EX from other regions. Experiments showed that channel 3 provides the highest-grade distinction between HE and OD. The second channel also provides good separation between background pixels and pixels belonging to HE and OD, but its ability to separate HE from OD is less satisfactory.

Fig. 8 Results for retinal images containing no exudates; original images (1st column) and results after segmentation (2nd column).

Quantitative analysis was performed to evaluate the performance of the proposed system. For this purpose, two criteria were established: a pixel-based criterion and an image-based criterion. The first criterion examines the ability of the proposed algorithm to perform pixel-based detection (i.e. segmentation) of HE, while the second evaluates the ability of the algorithm to distinguish between HE and healthy retinal images. For the pixel-based criterion, 64,500 HE pixels from 65 images of retinas with non-proliferative DR and 192,500 pixels without HE (from various anatomical and non-pathological structures) were used. For the image-based criterion, 7 normal images and 30 abnormal retinal images with HE were used. Based on this, the proposed NN-based HE classifier obtained 96.4% SE, 98.7% SP, 96.13% PPV, and 98.12% ACC for the pixel-based criterion. For image-based classification, the algorithm reached SE 96.66%, SP 100%, PPV 100%, and ACC 97.3%. In general, the clinical use of such systems is considered acceptable if the SE and SP values are greater than 80% and 90%, respectively. This means that our method is suitable for clinical use for the early detection of exudates.

Fig. 11 Applying the artificial neural network to detect the hard exudates.
sufficient for clinical use for early detection of exudates. VII. CONCLUSIONS
1.11 Developed User-Friendly Software Color retinal image processing employs a three-
To make the application user-friendly and easy to use dimensional representation of color pictures and performs
for the medical personnel, the above image processing trinion-based Fourier transformations to extract valuable
algorithm is first built in MATLAB and then it is imaging features. The technique also includes a suitable color
implemented on C# programming language using Microsoft space transformation, a method for obtaining strong higher-
Visual .Net 2016. The graphic user interface (GUI) developed order features, and ANN-based exudate segmentation
using C# lets the user to upload the original retinal image and algorithms. The following two major applications have had
save the processed image result. The user can visualize the their results analysed to illustrate the efficacy of the proposed
stage of the image processing. The user can see RGB retinal methods:
image converted to different color space like CMYK, YcbCr  In retinal imaging, visual amplification of the major anato
and GLM. Some of the Image processing results using the mical and pathological features.
application is shown in the figure below:
 Automatic HE segmentation and classification of DR ano
malies.

The first application explored the potential of signature


maps based on multiple higher-order statistical features. A
sufficient number of image samples with a wider range of
difficulty were used to evaluate the ability of signature maps
to extract useful information relevant to DR studies. Priority
was the identification of EX and localization of OD.
Signature cards were also examined to identify clinically
relevant information if glaucoma was present. Texture maps
extracted from cluster bump functions and computed in the
GLM color space, according to our findings, provide
enhanced texture information that can be used to localize OD
Fig. 9 The original RGB retinal image converted to a GLM in the presence of EX, detect HE, and classify retinal
image abnormalities based on images. showed Accurate
identification of this retinal shape greatly aids the patient's
diagnosis and prognosis. With the help of an ANN classifier
applied to the acquired texture data, we were able to
accurately segment the HE in the background and other bright
spots. The performance of the algorithm was evaluated using
statistical indicators such as sensitivity, specificity, positive
predictive value, and accuracy calculated for both pixel and
image criteria. For HE pixel segmentation, the algorithm
reached SE 96.4%, SP 98.7, PPV 96.13%, and ACC 98.12%,
and for image-based HE anomaly classification, SE 96.66%,
SP 100%. , PPV 100% and ACC 97.3%. The results showed
that the proposed method works well and has potential for use
in intelligent medical systems using computing to detect DR-
Fig. 10 Image processing result after textural feature related eye illnesses.
extraction

Although the findings of this study show that the proposed method is effective, there is still room for development of the scheme before it can be implemented in automatic retinal image processing systems. Aside from EXs, other lesion types considered relevant in DR research include hemorrhages, microaneurysms, and cotton wool spots, and the proposed method should be extended to take these structures into account.

ACKNOWLEDGEMENTS

We would like to thank the Hawassa University NORD (HU-NORD) coordinating office and the Vice President for Research and Technology Transfer Office for supporting and financing this research project. We would also like to extend our gratitude to Black Lion Hospital for providing us with a collection of retinal images taken with a digital fundus camera.
