
Severity Detection Of Diabetic Retinopathy

CHAPTER - 1
INTRODUCTION

1.1 OVERVIEW
The healthcare industry generates very large and sensitive data that must be handled carefully. Diabetes Mellitus is one of the fastest-growing and most dangerous diseases worldwide, and medical professionals need a reliable prediction system to diagnose it. Deep learning techniques are useful for examining such data from diverse perspectives and summarizing it into valuable information. The mere availability of huge amounts of data does not by itself provide useful knowledge; data mining techniques must be applied to it. The main goal is to discover new patterns and to interpret them so that significant and useful information is delivered for diagnosis. Diabetes can lead to heart disease, kidney disease, nerve damage, and blindness, so mining diabetes data efficiently is a crucial concern.

Appropriate data mining approaches and techniques must be identified for efficient classification of a diabetes dataset and for extracting its data patterns. In this study, a medical bioinformatics analysis is performed for diabetes prediction. The WEKA software is employed as a mining tool for diabetes diagnosis, and the Pima Indian Diabetes database acquired from the UCI repository is used for the analysis. The dataset was studied and analyzed to build an effective model for predicting and diagnosing diabetes. Diabetic Retinopathy is a complication of diabetes caused by changes in the blood vessels of the retina and is one of the leading causes of blindness in the developed world.

To date, Diabetic Retinopathy is still screened manually by ophthalmologists, which is a time-consuming process; hence this work aims at automatic diagnosis of the disease into its different stages using deep learning. In our approach, we trained a Deep Convolutional Neural Network model on a large dataset of around 35,000 images to automatically diagnose and classify high-resolution fundus images of the retina into five stages according to their severity. An application system is built which takes the patient's details along with the fundus image of the eye as input parameters.

The trained deep Convolutional Neural Network model extracts features from the fundus images, and with the help of activation functions such as ReLU and softmax, together with optimization techniques such as Adam, an output is obtained. The output of the Convolutional Neural Network (CNN) model and the patient details are combined into a uniform report. In this study, we also aim to apply the bootstrap resampling technique to enhance accuracy, then apply ResNet and CNN and compare their performance.
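As a rough illustration of the bootstrap resampling step mentioned above, the following sketch assumes scikit-learn and NumPy are available and that the arrays `images` and `labels` are placeholders already loaded from the fundus dataset:

import numpy as np
from sklearn.utils import resample

def bootstrap_balance(images, labels, per_class=2000):
    """Resample each severity class with replacement so that every class
    contributes the same number of training examples (a sketch, not the
    exact procedure used in this project)."""
    balanced_x, balanced_y = [], []
    for stage in np.unique(labels):
        idx = np.where(labels == stage)[0]
        x_s, y_s = resample(images[idx], labels[idx],
                            replace=True, n_samples=per_class,
                            random_state=42)
        balanced_x.append(x_s)
        balanced_y.append(y_s)
    return np.concatenate(balanced_x), np.concatenate(balanced_y)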

The idea of machine learning is to predict the future from past data. Machine learning focuses on the development of computer programs that can change when exposed to new data; a simple machine learning algorithm can be implemented in Python. Training data is fed to an algorithm, and the algorithm uses this training data to make predictions on new test data.

Machine learning can be roughly separated into three categories. In supervised learning, the program is given both the input data and the corresponding labels, which must be provided by a human beforehand. Unsupervised learning has no labels; the data is fed directly to the learning algorithm, which has to discover the clustering of the input data. Finally, reinforcement learning dynamically interacts with its environment and receives positive or negative feedback to improve its performance.

Fig 1.1 Retina Stages

Deep learning is a part of machine learning in artificial intelligence in which networks are capable of learning from data that is unlabeled or unstructured. This process is also known as deep neural learning or deep neural networks.

Diabetic retinopathy is a medical complication caused by damage to the blood vessels of the retina, the light-sensitive tissue at the back of the eye, and it can gradually lead to complete blindness and various other eye problems depending on its severity. It is observed that 40% to 45% of diabetic patients are likely to develop DR in their lifetime, but due to lack of awareness and delayed diagnosis, the condition often escalates quickly.

Diabetes was once thought of as a disease of the affluent, but it has now reached epidemic proportions in both developed and developing countries. Currently, at least 366 million people worldwide have diabetes, and this number is likely to increase as a result of an aging global population. Globally, the number of individuals with DR is estimated to grow from 126.6 million in 2010 to 191.0 million by 2030, and the number with vision-threatening diabetic retinopathy (VTDR) is estimated to increase from 37.3 million to 56.3 million if prompt action is not taken. Diabetes is well suited to the application of deep learning principles, and numerous researchers are working on the prediction of diabetes and the complications arising from it.

There are many applications available that help practitioners study the disease and its complications, but each application has its own advantages and flaws. According to reports, Indians are more prone to diabetes for several reasons, including lifestyle, dietary habits, and inadequate physical activity.

Diabetic Retinopathy is one of the major complications of diabetes affecting the human eye. It is caused by damage to the blood vessels of the light-sensitive tissue of the retina. Diabetic Retinopathy (DR) causes the blood vessels of the retina to swell and leak fluid and blood. It is the leading cause of blindness for people aged 20 to 64 years and the most common cause of visual loss worldwide.

1.2 EXISTING SYSTEM


In the existing system, a binocular model for the various classes of the Diabetic Retinopathy detection task is trained and evaluated to demonstrate the effectiveness of the binocular design. The final result shows that, on a 10% validation set, the binocular model achieves a kappa score of 0.829, which is higher than that of existing non-ensemble models. Finally, a comparison between the confusion matrices obtained from models with paired and unpaired inputs demonstrates that the binocular architecture does improve classification performance.

1.3 PROBLEM STATEMENT


The problem addressed in this study is the need for a cost-effective and scalable solution for
the detection of diabetic retinopathy (DR), a leading cause of blindness in adults with diabetes.
Current methods for DR detection are time-consuming, expensive, and require specialized
equipment and trained personnel, making them inaccessible in many regions. The objective of this
study is to develop and evaluate a deep learning model for automated DR detection from retinal
images, which can improve the accessibility and affordability of DR screening and diagnosis.

1.4 MOTIVATION
The motivation behind developing a deep learning model for diabetic retinopathy detection is
the significant burden of this disease on patients, healthcare systems, and society. Early detection
and treatment of DR are critical to preventing blindness and improving patient outcomes. However,
current methods for DR detection are expensive and inaccessible in many regions, leading to delays
in diagnosis and treatment. By developing a cost-effective and scalable solution for DR detection
using deep learning, this study aims to improve the accessibility and affordability of DR screening
and diagnosis, ultimately improving patient outcomes and reducing the burden on healthcare
systems.

1.5 OBJECTIVES
 To develop a deep learning model for automated diabetic retinopathy (DR) detection from
retinal images.

 To evaluate the performance of the deep learning model, trained using standard techniques such as backpropagation and gradient descent, with performance metrics such as accuracy, sensitivity, and specificity.

 To compare the performance of the deep learning model with other state-of-the-art DR
detection methods to assess its superiority and effectiveness.

 To assess the clinical implications of the developed deep learning model, including early
detection and treatment of DR, reducing the workload of ophthalmologists, and improving
patient outcomes.

 To identify any limitations or challenges faced in the development and evaluation of the deep
learning model, such as limited sample size or biased data, that may affect the generalizability
of the model.

1.6 SCOPE
The scope of this study is to develop and evaluate a deep learning model for automated diabetic
retinopathy (DR) detection from retinal images. The study will use publicly available retinal image
datasets with labeled DR severity to train and validate the deep learning model. The deep learning
model will use a combination of convolutional neural networks (CNNs) and recurrent neural
networks (RNNs) for feature extraction, sequence modeling, and classification of DR severity. The
performance of the deep learning model will be compared with other state-of-the-art DR detection
methods to assess its superiority and effectiveness. The developed deep learning model can have
significant clinical implications, including early detection and treatment of DR, reducing the
workload of ophthalmologists, and improving patient outcomes. However, the study may face
limitations such as limited sample size and biased data, which can affect the generalizability of the
deep learning model. The scope of this study is limited to the development and evaluation of a deep
learning model for automated DR detection and does not include implementation and validation in
real-world clinical settings.


CHAPTER - 2

LITERATURE SURVEY
1. Author: Balla Goutam, Mohammad Farukh Hashmi (Senior Member, IEEE), Zong Woo Geem (Senior Member, IEEE), and Neeraj Dhanraj Bokde. Method: Deep learning. Dataset: Own dataset. Result: Effective in improving accuracy and maintaining a low false positive rate. Limitation: Not applicable for real-time usage.

2. Author: Banu. Method: Machine learning algorithms. Dataset: Own dataset (Ebbu 2017 phishing dataset). Result: Increased detection rate and high efficiency in detecting shorter URLs in real-time scenarios. Limitation: Cannot be applied to lengthy URLs.

3. Author: Vahid. Method: Decision tree, random forest, KNN, gradient boosting, AdaBoost, XGBoost. Dataset: Own dataset. Result: Accurate classification and improved performance. Limitation: Limited features considered for dataset creation; high computation time.

4. Author: Ting. Method: Deep belief networks, K-step, CD-k. Dataset: Data from the ISP. Result: Increased detection rate and faster running time. Limitation: Fails to detect in real time.

5. Author: Shahreen. Method: Machine learning models. Dataset: Email dataset, SMS message dataset. Result: Accurate detection and high performance. Limitation: Consumes more training time.

6. Author: Nguyet. Method: Gated recurrent unit, machine learning and deep learning techniques. Dataset: Website phishing dataset, email phishing dataset. Result: Solved the common problems of manual parameter tuning, long training time, and deficient detection accuracy. Limitation: Only uses balanced data for detection; the highest possible accuracy was not obtained; real-time detection is not satisfied.

7. Author: Fatima. Method: Machine learning techniques. Dataset: Own dataset. Result: High performance and accuracy; fast phishing attack detection. Limitation: Not applicable for real-time usage.

8. Author: Alan. Method: Normalized compression distance, furthest point first algorithm. Dataset: Own dataset. Result: Highest performance was obtained. Limitation: Not able to detect variants of attacks; obstacles were encountered while prototyping.

9. Author: Ayman. Method: Machine learning algorithms, Hellinger distance decision tree. Dataset: Own dataset. Result: Retraining the model obtained more accurate results and increased performance; imbalanced data was considered. Limitation: Not applicable to other datasets; fails to detect in real-time scenarios.

Table: 2.1 Literature Survey


CHAPTER - 3

SYSTEM REQUIREMENTS SPECIFICATION

3.1 FUNCTIONAL REQUIREMENTS


Functional requirements for a system that uses deep learning for diabetic retinopathy detection may
include:

 Image acquisition: The system should be able to acquire high-quality retinal images for
analysis.

 Image preprocessing: The system should be able to preprocess images to remove noise, adjust
contrast, and enhance features for analysis.

 Diabetic retinopathy detection: The system should be able to accurately detect diabetic
retinopathy in retinal images using deep learning algorithms.

 Classification: The system should be able to classify diabetic retinopathy into different stages
based on the severity of the disease.

 User interface: The system should have an intuitive and user-friendly interface for healthcare
professionals to interact with the system.

 Patient database: The system should be able to maintain a patient database to store and retrieve
patient information and retinal images.

 Reporting: The system should generate reports summarizing the results of diabetic retinopathy
detection and classification for each patient.

 Integration with electronic health records (EHRs): The system should be able to integrate with
EHRs to facilitate patient care and management.

 System security: The system should have robust security measures to protect patient data and
comply with relevant regulations.

 System maintenance and updates: The system should be easy to maintain and update to ensure
optimal performance and accuracy.

3.2 NON FUNCTIONAL REQUIREMENTS


Non-functional requirements for a system that uses deep learning for diabetic retinopathy detection
may include:

 Performance: The system should have a fast and efficient processing speed to ensure timely
diagnosis and treatment.

 Accuracy: The system should have high accuracy in detecting diabetic retinopathy to minimize
false positives and false negatives.

 Robustness: The system should be able to handle variations in retinal images due to factors such
as age, ethnicity, and image quality.

 Scalability: The system should be able to handle large volumes of patient data and
accommodate additional retinal diseases and conditions.

 Usability: The system should be easy to use and require minimal training for healthcare
professionals to operate.

 Reliability: The system should be reliable and available for use 24/7 to ensure patient care is
not interrupted.

 Interoperability: The system should be able to integrate with other healthcare systems and
technologies to facilitate patient care and management.

 Data privacy and security: The system should have strong data privacy and security measures
to protect patient data and comply with relevant regulations.

 Maintenance and support: The system should be easy to maintain and provide adequate
technical support for users.

 Accessibility: The system should be accessible to all patients, regardless of their geographic
location, language, or disability.

3.3 HARDWARE REQUIREMENTS

 System : Pentium IV 2.4 GHz / Intel i3 or higher.

 Hard Disk : 40 GB.

 Monitor : 15" VGA color.

 RAM : 512 MB minimum.

3.4 SOFTWARE REQUIREMENTS

 Operating system : Windows XP / Windows 10

 Software Packages : TensorFlow, OpenCV

 Coding Language : Python


CHAPTER - 4

SYSTEM DESIGN AND ARCHITECTURE


DESIGN OVERVIEW

Design overview explains the architecture that would be used for developing a software
product. It is an overview of an entire system, identifying the main components that would be
developed for the product and their interfaces.

4.1 SYSTEM DESIGN

 The System design mainly consists of

1. Image Collection
2. Image Preprocessing
3. Image Segmentation
4. Feature Extraction
5. Training
6. Classification

1. Image Collection

The dataset that we have used in this project is publicly available on the internet.

2. Image Preprocessing

The goal of pre-processing is an improvement of image data that reduces unwanted distortions
and enhances some image features important for further image processing. Image pre-processing
involves three main things a) Grayscale conversion b) Noise removal c) Image enhancement

a) Grayscale conversion: A grayscale image contains only brightness information. Each pixel value in a grayscale image corresponds to an amount or quantity of light, so gradations of brightness can be distinguished. A grayscale image measures only light intensity; an 8-bit image has brightness values from 0 to 255, where 0 represents black and 255 represents white. In grayscale conversion, a color image is converted into a grayscale image. Grayscale images are easier and faster to process than colored images, and all subsequent image processing techniques are applied to the grayscale image.

b) Noise Removal: The objective of noise removal is to detect and remove unwanted noise from the digital image. The difficulty lies in deciding which features of an image are real and which are caused by noise. Noise is random variation in pixel values. We use a median filter to remove unwanted noise. The median filter is a nonlinear filter that leaves edges largely intact. It is implemented with a sliding window of odd length: the sample values in the window are sorted by magnitude, and the centermost (median) value becomes the filter output.

c) Image Enhancement: The objective of image enhancement is to process an image to increase the visibility of the features of interest. Here contrast enhancement is used to obtain a better-quality result.

3. Image Segmentation

Image segmentation methods are of many types, such as clustering, thresholding, neural-network-based, and edge-based. In this implementation we use the clustering algorithm called mean shift clustering for image segmentation. This algorithm uses a sliding-window method that converges to the centre of the region of maximum density, employing many sliding windows to find such dense regions. Mean shift clustering is therefore mainly used for detecting highly dense regions.
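A minimal sketch of mean shift segmentation on a fundus image, assuming OpenCV is available; the file names are placeholders:

import cv2

# Hypothetical input path; any 8-bit colour fundus image works.
image = cv2.imread("fundus_sample.jpg")

# Mean shift filtering: the second argument is the spatial window radius,
# the third the colour window radius. Each pixel converges toward the
# centre of the densest region in the joint spatial/colour space,
# flattening homogeneous areas of the retina.
segmented = cv2.pyrMeanShiftFiltering(image, 21, 51)

cv2.imwrite("fundus_segmented.jpg", segmented)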

4. Feature Extraction

An image has many features, mainly color, texture, and shape. Here we consider features based on the color histogram, texture, and shape.

5. Training

The training dataset was created from images of known diabetic retinopathy stages. Classifiers are trained on this training dataset, and a testing dataset is placed in a temporary folder. Results are predicted for the test cases, classifier graphs are plotted, and feature sets are added to the test-case file to make the image processing models more accurate.

6. Classification

Classification is performed by the Convolutional Neural Network itself. The convolutional layers extract discriminative features from the preprocessed fundus image, and the final fully connected layer with a softmax activation assigns the image to one of the severity classes. Because the learned feature mapping is highly nonlinear, the network can separate the classes well even in a high-dimensional feature space; the decision boundaries between the classes are learned during training by minimizing the classification loss.


4.2 SYSTEM ARCHITECTURE


A Deep CNN is a type of DNN consisting of multiple hidden layers, such as convolutional layers, ReLU layers, pooling layers, and fully connected or normalization layers. A CNN shares weights in the convolutional layer, which reduces the memory footprint and increases the performance of the network. The important features of a CNN are its 3D volumes of neurons, local connectivity, and shared weights. A feature map is produced by a convolution layer through the convolution of different sub-regions of the input image with a learned kernel. Then a non-linear activation function is applied through the ReLU layer to improve the convergence properties when the error is low. In the pooling layer, a region of the image/feature map is chosen and the pixel with the maximum value, or the average value, is chosen as its representative.

Fig 4.1 System Architecture

This results in a large reduction in the sample size. Sometimes a traditional fully connected (FC) layer is used in conjunction with the convolutional layers towards the output stage. In a CNN architecture, convolution layers and pooling layers are usually used in some combination. The pooling layer typically carries out one of two operations, max pooling or mean pooling. In mean pooling, the average of the neighborhood of feature points is calculated, while in max pooling the maximum of the feature points is taken. Mean pooling reduces the error caused by the limited neighborhood size and retains background information; max pooling reduces the estimation error of the convolution layer parameters caused by mean deviation and hence retains more texture information.
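The difference between max pooling and mean pooling can be seen in a small NumPy sketch; the 4x4 feature map below is made up purely for illustration:

import numpy as np

feature_map = np.array([[1, 3, 2, 4],
                        [5, 6, 1, 2],
                        [7, 2, 9, 0],
                        [1, 4, 3, 8]], dtype=float)

def pool(fm, size=2, mode="max"):
    """Reduce a feature map using non-overlapping size x size windows."""
    h, w = fm.shape
    out = np.zeros((h // size, w // size))
    for i in range(0, h, size):
        for j in range(0, w, size):
            window = fm[i:i + size, j:j + size]
            out[i // size, j // size] = window.max() if mode == "max" else window.mean()
    return out

print(pool(feature_map, mode="max"))   # keeps the strongest response per window
print(pool(feature_map, mode="mean"))  # keeps the average, preserving background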

A system architecture for diabetic retinopathy detection using deep learning may involve several
components, including:

 Image acquisition: A retinal camera or imaging device is used to acquire high-quality retinal
images of patients.

 Preprocessing: Image preprocessing techniques such as noise reduction, contrast adjustment,


and image enhancement are applied to the retinal images to improve image quality.

 Feature extraction: Deep learning algorithms are used to extract relevant features from the
preprocessed images.

 Training and validation: A deep learning model is trained on a large dataset of retinal images
with known diabetic retinopathy labels. The model is validated on a separate dataset to assess
its performance.

 Testing and evaluation: The trained model is tested on new retinal images to assess its
diagnostic accuracy and performance.

 User interface: A user-friendly interface is developed to enable healthcare professionals to


interact with the system and input patient information.

 Patient database: A patient database is maintained to store patient information and retinal
images for future reference.

 Reporting: A reporting module is developed to generate reports summarizing the results of


diabetic retinopathy detection and classification for each patient.

 Integration with EHRs: The system is integrated with electronic health records (EHRs) to
facilitate patient care and management.

 System maintenance and updates: The system is designed for easy maintenance and updates
to ensure optimal performance and accuracy over time.

4.3 CONVOLUTIONAL NEURAL NETWORKS


A CNN is a type of DNN consisting of multiple hidden layers, such as convolutional layers, ReLU layers, pooling layers, and fully connected or normalization layers. A CNN shares weights in the convolutional layer, reducing the memory footprint and increasing the performance of the network. The important features of a CNN are its 3D volumes of neurons, local connectivity, and shared weights. A feature map is produced by a convolution layer through the convolution of different sub-regions of the input image with a learned kernel. Then a non-linear activation function is applied through the ReLU layer to improve the convergence properties when the error is low. In the pooling layer, a region of the image/feature map is chosen and the pixel with the maximum value, or the average value, is chosen as the representative pixel, so that a 2x2 or 3x3 grid is reduced to a single scalar value. This results in a large reduction in the sample size. Sometimes a traditional fully connected (FC) layer is used in conjunction with the convolutional layers.

Figure 4.3 Deep-Convolutional Neural Network Architecture.

In a CNN architecture, convolution layers and pooling layers are usually used in some combination. In mean pooling, the average of the neighborhood of feature points is calculated, while in max pooling the maximum of the feature points is taken. Mean pooling reduces the error caused by the limited neighborhood size and retains background information; max pooling reduces the estimation error of the convolution layer parameters caused by mean deviation and hence retains more texture information.

A CNN is composed of several kinds of layers:

4.3.1 Convolutional layer

In the convolution layer, after the computer reads an image in the form of pixels, we take small patches of the image; these patches are called features or filters. By matching rough features at roughly the same positions in two images, the convolutional layer becomes much better at detecting similarities than whole-image matching. The filters are compared with new input images, and if they match, the image is classified correctly. To do this, we line up a feature with an image patch, multiply each image pixel by the corresponding feature pixel, add the products, and divide by the total number of pixels in the feature. The resulting value is written into a map at the corresponding position. The feature is then moved to every other position of the image to see how well it matches that area. Finally, we obtain a matrix as the output.


4.3.2 Relu Layer

The ReLU layer applies the rectified linear unit: every negative value in the filtered images is removed and replaced with zero. This prevents the values from summing to zero. ReLU is a transfer function that activates a node only if the input is above a certain threshold; when the input is below zero the output is zero, so all negative values are removed from the matrix.

Fig 4.3.1 Convolutional Neural Network General Architecture

4.3.3 Pooling layer

In this layer we reduce or shrink the size of the image. First we pick a window size and the required stride, then walk the window across the filtered image, taking the maximum value from each window. This pools the layer and shrinks the size of the image and of the matrix. The reduced matrix is given as input to the fully connected layer.

4.3.4 Fully connected layer

After passing the image through the convolutional, ReLU, and pooling layers, we stack up all the resulting layers. The fully connected layer is used for the classification of the input image. The earlier layers are repeated as needed until a small matrix (for example 2x2) is obtained, and at the end the fully connected layer is used, where the actual classification happens.
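A minimal Keras sketch of the layer stack described above (convolution with ReLU, pooling, flattening, fully connected layers, and a softmax output over the five severity stages); the layer sizes are assumptions, not the exact configuration used in this project:

from tensorflow.keras import layers, models

def build_cnn(input_shape=(224, 224, 3), num_classes=5):
    """Plain CNN: convolution + ReLU, max pooling, then fully connected
    layers ending in a softmax over the five DR severity stages."""
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model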

4.4 DATA FLOW DIAGRAM


A data flow diagram is a tool for representing the flow of data from one module to the next, as shown in Fig 4.4. The diagram gives the input and output of each module. It contains no control flow and no loops.

Fig 4.4 Data Flow Diagram

4.5 USE CASE DIAGRAM


A use case diagram shows the system boundary, which defines the system of interest in relation to the world around it; the actors, usually individuals involved with the system, defined according to their roles; and the use cases, which are the specific interactions the actors have within and around the system.

Fig 4.5 Use Case Diagrams


4.6 CLASS DIAGRAM


Class diagrams are the main building block in object-oriented modeling. They are used to show the different objects in a system, their attributes, their operations, and the relationships among them, as shown in Fig 4.6.

Fig 4.6 Class Diagram

4.7 SEQUENCE DIAGRAM


A sequence diagram simply depicts the interaction between objects in sequential order, i.e. the order in which these interactions take place, as shown in Fig 4.7.

Fig 4.7 Sequence Diagram


4.8 ACTIVITY DIAGRAM


Activity diagrams are graphical representations of workflows of stepwise activities and actions with
support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams
can be used to describe the business and operational step-by-step workflows of components in a
system. An activity diagram shows the overall flow of control.

Fig 4.8 Activity Diagram

4.8.1 Data Flow Diagram for Pre-processing

Fig 4.8.1 Pre-processing Diagram

Here is an example of a data flow diagram for pre-processing of retinal images for diabetic
retinopathy detection using deep learning:


 [Input]: Raw retinal images

 [Process 1]: Image cropping - The retinal images are cropped to remove any areas of the image
that are not relevant to diabetic retinopathy detection.

 [Process 2]: Image resizing - The cropped images are resized to a standard size to ensure
consistency in feature extraction.

 [Process 3]: Image normalization - The resized images are normalized to adjust the contrast and
brightness, remove any noise, and enhance relevant features.

 [Output]: Preprocessed retinal images ready for feature extraction and deep learning algorithms.

 [Feedback loop]: The output of the pre-processing stage is fed back into the deep learning model
for feature extraction and diabetic retinopathy detection. If the detection results are not
satisfactory, the pre-processing stage may need to be adjusted and repeated to improve image
quality and accuracy of detection.

4.8.2 Data Flow Diagram for Identification

Fig 4.8.2 Identification Diagram

Here is an example of a data flow diagram for identification of diabetic retinopathy using
deep learning:

 [Input]: Preprocessed retinal images

 [Process 1]: Feature extraction - Deep learning algorithms are used to extract relevant features
from the preprocessed retinal images.

 [Process 2]: Diabetic retinopathy detection - The extracted features are used to classify the
retinal images into categories of diabetic retinopathy severity.


 [Output]: Detection results - The system generates detection results indicating the severity of
diabetic retinopathy in the retinal images.

 [Feedback loop]: The detection results are fed back into the system to evaluate the performance
of the deep learning algorithms. If the accuracy of detection is not satisfactory, the system may
need to be adjusted and retrained to improve its performance.

4.8.3 Data Flow Diagram for Feature Extraction

Fig 4.8.3 Feature Extraction Diagram

Here is an example of a data flow diagram for feature extraction in diabetic retinopathy
detection using deep learning:

 [Input]: Preprocessed retinal images

 [Process 1]: Convolutional neural network (CNN) - A CNN is used to analyze the preprocessed
images and extract relevant features.

 [Process 2]: Feature selection - The CNN outputs a set of features, and a feature selection
algorithm is used to identify the most important features for diabetic retinopathy detection.

 [Process 3]: Feature normalization - The selected features are normalized to ensure consistency
in classification.

 [Output]: Extracted features - The system generates a set of extracted features that are used in
the diabetic retinopathy detection process.

 [Feedback loop]: The extracted features are fed back into the system to evaluate the
performance of the deep learning algorithms. If the accuracy of detection is not satisfactory, the
system may need to be adjusted and retrained to improve its performance.


4.8.4 Data Flow Diagram for Classification and Detection

Fig 4.8.4 Classification and Detection

Here is an example of a data flow diagram for diabetic retinopathy classification and
detection using deep learning:

 [Input]: Extracted features

 [Process 1]: Classification - The extracted features are used to classify the retinal images into categories of diabetic retinopathy severity using a classifier such as a deep learning model, a support vector machine (SVM), or a random forest.

 [Process 2]: Detection - The classified images are analyzed to detect signs of diabetic
retinopathy, including microaneurysms, hemorrhages, and exudates.

 [Output]: Detection results - The system generates detection results indicating the severity of
diabetic retinopathy in the retinal images and the location of any abnormalities.

 [Feedback loop]: The detection results are fed back into the system to evaluate the performance
of the deep learning algorithms. If the accuracy of detection is not satisfactory, the system may
need to be adjusted and retrained to improve its performance.


CHAPTER 5

METHODOLOGY

5.1 METHODOLOGY DIAGRAM


1. Image Collection
2. Image Preprocessing
3. Image Segmentation
4. Feature Extraction
5. Training
6. Classification

Fig 5.1 Methodology Diagram

1. Image Collection

The dataset that we have used in this project is publicly available on the internet.

2. Image Preprocessing
The goal of pre-processing is an improvement of image data that reduces unwanted distortions and
enhances some image features important for further image processing. Image pre-processing
involves three main things a) Grayscale conversion b) Noise removal c) Image enhancement

a) Grayscale conversion: A grayscale image contains only brightness information. Each pixel value in a grayscale image corresponds to an amount or quantity of light, so gradations of brightness can be distinguished. A grayscale image measures only light intensity; an 8-bit image has brightness values from 0 to 255, where 0 represents black and 255 represents white. In grayscale conversion, a color image is converted into a grayscale image. Grayscale images are easier and faster to process than colored images, and all subsequent image processing techniques are applied to the grayscale image.

b) Noise Removal: The objective of noise removal is to detect and remove unwanted noise from the digital image. The difficulty lies in deciding which features of an image are real and which are caused by noise. Noise is random variation in pixel values. We use a median filter to remove unwanted noise. The median filter is a nonlinear filter that leaves edges largely intact. It is implemented with a sliding window of odd length: the sample values in the window are sorted by magnitude, and the centermost (median) value becomes the filter output.

c) Image Enhancement: The objective of image enhancement is to process an image to increase the visibility of the features of interest. Here contrast enhancement is used to obtain a better-quality result.
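A minimal OpenCV sketch of the three pre-processing steps (grayscale conversion, median-filter noise removal, and contrast enhancement); the file names are placeholders and CLAHE is one possible choice of contrast enhancement:

import cv2

def preprocess(path):
    """Grayscale conversion, median-filter denoising, and contrast
    enhancement, as described above."""
    image = cv2.imread(path)                        # BGR fundus image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # brightness only, 0-255
    denoised = cv2.medianBlur(gray, 3)              # 3x3 sliding-window median
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(denoised)                # local contrast enhancement
    return enhanced

result = preprocess("fundus_sample.jpg")
cv2.imwrite("fundus_preprocessed.jpg", result)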

3. Image Segmentation

Image segmentation methods are of many types, such as clustering, thresholding, neural-network-based, and edge-based. In this implementation we use the clustering algorithm called mean shift clustering for image segmentation. This algorithm uses a sliding-window method that converges to the centre of the region of maximum density, employing many sliding windows to find such dense regions. Mean shift clustering is therefore mainly used for detecting highly dense regions.

4. Feature Extraction

An image has many features, mainly color, texture, and shape. Here we consider features based on the color histogram, texture, and shape.

5. Training

The training dataset was created from images of known diabetic retinopathy stages. Classifiers are trained on this training dataset, and a testing dataset is placed in a temporary folder. Results are predicted for the test cases, classifier graphs are plotted, and feature sets are added to the test-case file to make the image processing models more accurate.

6. Classification

Classification is performed by the Convolutional Neural Network itself. The convolutional layers extract discriminative features from the preprocessed fundus image, and the final fully connected layer with a softmax activation assigns the image to one of the severity classes. Because the learned feature mapping is highly nonlinear, the network can separate the classes well even in a high-dimensional feature space; the decision boundaries between the classes are learned during training by minimizing the classification loss.

5.2 FLOWCHART FOR DATA ACQUISITION

Fig 5.2 Flowchart for data acquisition

The flowchart for collecting data is as depicted in the figure 5.2. The data set is collected from
a source and a complete analysis is carried out. The image is selected to be used for training/testing
purposes only if it matches our requirements and is not repeated.

5.3 FLOWCHART FOR PRE-PROCESSING


Figure 5.3 shows the flowchart for the pre-processing of the images received from the output of the previous step. This involves converting the image from RGB to greyscale to ease processing, applying an averaging filter to remove noise, using global thresholding to remove the background and retain only the region of interest, and applying a high-pass filter to sharpen the image by amplifying the finer details.

Fig 5.3 Flowchart for the preprocessing module

 Conversion from RGB to Greyscale

The first step in pre-processing is converting the image from RGB to greyscale. The greyscale value can be obtained by applying the standard luminance formula Gray = 0.299 R + 0.587 G + 0.114 B to each pixel of the RGB image. Figure 5.3.1 depicts the conversion from RGB to grayscale.

Fig 5.3.1 Conversion from RGB to grayscale


 Advantages of converting RGB color space to gray

1. To store a single pixel of an RGB color image we need 8 x 3 = 24 bits (8 bits for each color component).

2. Only 8 bits are required to store a single pixel of a grayscale image, so a grayscale image needs only about one third of the memory of an RGB image.

3. Grayscale images are much easier to work with in a variety of tasks: in many morphological operations and image segmentation problems, it is easier to work with a single-layered (grayscale) image than with a three-layered (RGB color) image.

4. It is also easier to distinguish features of an image when dealing with a single-layered image.

 Noise removal

A noise removal algorithm is the process of removing or reducing noise from the image. Noise removal algorithms reduce or remove the visibility of noise by smoothing the entire image while preserving areas near contrast boundaries. Noise removal is the second step in image pre-processing; the grayscale image obtained in the previous step is given as input. Here we make use of the median filter, a noise removal technique.

 Median Filtering

1. The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal.

2. Zeros are appended at the edges and corners of the matrix that represents the grayscale image.

3. Then, for every 3x3 window, the elements are arranged in ascending order, the median (middle) element of those 9 values is found, and that median value is written to the corresponding pixel position.
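The sliding-window median described in the steps above can be written out directly in NumPy; this is a sketch with zero padding at the borders:

import numpy as np

def median_filter_3x3(gray):
    """Zero-pad the grayscale image, then replace each pixel with the
    median of its 3x3 neighbourhood, as in the steps listed above."""
    padded = np.pad(gray, 1, mode="constant", constant_values=0)
    out = np.empty_like(gray)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            window = padded[i:i + 3, j:j + 3]
            out[i, j] = np.median(window)   # middle of the 9 sorted values
    return out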

5.4 FLOWCHART FOR FEATURE EXTRACTION


Here we use a method called Histogram of Oriented Gradients (HOG) to extract features from the preprocessed image received as input. It involves multiple steps, such as finding Gx and Gy, the gradients at each pixel along the x and y axes. These gradients are substituted into the relevant formulae to obtain the magnitude and orientation of each pixel's gradient. The angles and their respective frequencies are then plotted to form a histogram, which is the output of this module. The flowchart for the feature extraction module is shown in figure 5.4.

Fig 5.4 Flowchart for Feature Extraction

5.4.1 Feature Extraction

Feature extraction is a process of dimensionality reduction by which an initial set of raw


data is reduced to more manageable groups for processing.

 Histogram Orientation Gradient

The Histogram of Oriented Gradients (HOG) is a feature descriptor used in computer vision and image processing for the purpose of object detection. The technique counts occurrences of gradient orientations in localized portions of an image.

Here 0’s are appended at the edges and corners to the matrix. Then Gx and Gy are calculated. Gx
is calculates as Gx = value on right –value on left and Gy is calculated as Gy=value on top-value
on left. Figure 5.4.1 shows Gx and Gy in HOG.

After the angle of orientation is calculated, the frequencies of the angles falling within particular intervals are noted and given as input to the classifier. Zeros are not considered when computing the frequencies. For example, if the interval from 40 to 59 degrees contains 2 occurrences, the frequency is recorded as 2.


Then, using the formulas shown in figure 5.4.1, the magnitude and the orientation are calculated: the magnitude is sqrt(Gx^2 + Gy^2) and the orientation is arctan(Gy/Gx). The magnitude corresponds to the strength of the intensity change, and the orientation is the angle of that change. Feature extraction using HOG is shown in figure 5.4.1.

Figure 5.4.1 Feature Extraction using HOG
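A minimal sketch of HOG feature extraction using scikit-image, assuming the preprocessed grayscale image from the previous step; the bin and cell sizes are typical defaults, not values fixed by this project:

from skimage import io
from skimage.feature import hog

# Hypothetical input: the preprocessed grayscale fundus image.
gray = io.imread("fundus_preprocessed.jpg", as_gray=True)

features, hog_image = hog(
    gray,
    orientations=9,          # number of angle bins in each histogram
    pixels_per_cell=(8, 8),  # Gx/Gy histograms are built per 8x8 cell
    cells_per_block=(2, 2),  # blocks of cells are contrast-normalised
    visualize=True,
)
# `features` is the flattened histogram vector given to the classifier.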


5.4.2 Layers in CNN

Fig 5.4.2 Layers in CNN

 Convolutional Layer

The convolutional layer is the first step in the CNN. Here a 3x3 patch of the matrix obtained from the high-pass filter is given as input. That 3x3 patch is multiplied element-wise with the filter matrix, and the sum of the products is written to the corresponding position of the output. This output is given to the pooling layer, where the matrix is further reduced. Figure 5.4.3 shows the convolutional layer.


Fig 5.4.3 Convolutional Layer

Convolution is followed by the rectification of negative values to 0, before pooling. Here it is not demonstrable, as all values are positive; in fact, multiple iterations of both are needed before pooling.

 Pooling Layer

Fig 5.4.4 Pooling Layer

In the pooling layer the 3x3 matrix is reduced to a 2x2 matrix by selecting the maximum value of each 2x2 window for the corresponding position. Figure 5.4.4 shows the pooling layer.

 Fully connected layer and Output Layer

Fig 5.4.5 Fully connected layer and Output Layer

The output of the pooling layer is flattened, and this flattened matrix is fed into the fully connected layer. The fully connected part consists of several layers: an input layer, hidden layers, and an output layer. The output is then fed into the classifier; in this case a softmax activation function is used to classify the image into its diabetic retinopathy severity stage. Figure 5.4.5 shows the fully connected layer and the output layer.

5.5 CLASSIFICATION AND DETECTION


For the CNN, we take the output of the high-pass filter as input and leave out separate feature extraction, because the CNN has a feature extraction process of its own, using convolution, rectification, and pooling as three sub-modules that work in iterations to produce a final feature matrix, which is then classified by a softmax output layer.
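A minimal sketch of the classification step, assuming a CNN trained as above has been saved as dr_model.h5 and using the five stage names from Chapter 8; file names and image size are placeholders:

import cv2
import numpy as np
from tensorflow.keras.models import load_model

STAGES = ["Normal", "Mild", "Moderate", "Severe", "Proliferative"]

model = load_model("dr_model.h5")                 # hypothetical trained CNN

image = cv2.imread("fundus_sample.jpg")
image = cv2.resize(image, (224, 224)) / 255.0     # match the assumed training input size
probs = model.predict(image[np.newaxis, ...])[0]  # softmax probabilities

print("Predicted stage:", STAGES[int(np.argmax(probs))])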


CHAPTER 6
IMPLEMENTATION
Implementation is the process of converting a new system design into an operational one. It
is the key stage in achieving a successful new system. It must therefore be carefully planned and
controlled. The implementation of a system is done after the development effort is completed.

6.1 STEPS FOR IMPLEMENTATION

 Front-End Development Using Python Flask:

Modern computer applications are user-friendly. User interaction is not restricted to console-
based I/O. They have a more ergonomic graphical user interface (GUI) thanks to high-speed
processors and powerful graphics hardware. These applications can receive inputs through mouse
clicks and can enable the user to choose from alternatives with the help of radio buttons, dropdown
lists, and other GUI elements.

 Flask Programming:

Flask is a lightweight web framework for Python. Python combined with Flask provides a fast and easy way to create web-based user interfaces. Flask offers a simple routing and templating interface, and because the front end runs in the browser it is cross-platform: the same code works on Windows, macOS, and Linux, and the interface is rendered by the user's own browser, so the application looks familiar on whichever platform it is run.
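A minimal Flask sketch of the front end described above; the route names, template files, and the classify_fundus helper are placeholders standing in for the real prediction code:

from flask import Flask, request, render_template

app = Flask(__name__)

def classify_fundus(path):
    """Placeholder for the CNN prediction helper sketched in Chapter 5."""
    return "Moderate"  # stands in for the real model output

@app.route("/")
def home():
    return render_template("index.html")  # hypothetical upload form

@app.route("/predict", methods=["POST"])
def predict():
    uploaded = request.files["fundus_image"]      # file input from the form
    uploaded.save("uploads/input.jpg")
    stage = classify_fundus("uploads/input.jpg")
    return render_template("result.html", stage=stage)

if __name__ == "__main__":
    app.run(debug=True)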

6.2 IMPLEMENTATION ISSUES


The implementation phase of software development is concerned with translating design specifications into source code. The primary goal of implementation is to write source code and internal documentation so that conformance of the code to its specifications can be easily verified, and so that debugging, testing, and modification are eased. This goal can be achieved by making the source code as clear and straightforward as possible. Simplicity, clarity, and elegance are the hallmarks of good programs, and these characteristics have been implemented in each program module.

The goals of implementation are as follows.

 Minimize the memory required.

 Maximize output readability.



 Maximize source text readability.

 Minimize the number of source statements.

 Minimize development time

6.3 MODULE SPECIFICATION


Module specification is a way to improve the structural design by breaking the system down into modules and solving each one as an independent task. By doing so, the complexity is reduced and the modules can be tested independently. The number of modules in our model is four, namely preprocessing, identification, feature extraction, and detection. Each phase signifies a functionality provided by the proposed system. In the data pre-processing phase, noise removal using median filtering is done.

The module flow is shown in Fig 6.3. For a database image the flow is: Image Collection, Image Pre-Processing, Image Segmentation, Feature Extraction, and finally the stored Database Image Feature. For an input image the flow is: Image Collection, Image Pre-Processing, Image Segmentation, Classification using CNN, and finally the decision Disease or Healthy.

Fig 6.3 Module Specification


CHAPTER - 7

TESTING
Testing is the process of evaluating a system or its component(s) with the intent to find
whether it satisfies the specified requirements or not. Testing is executing a system to identify any
gaps, errors, or missing requirements in contrary to the actual requirements. System testing of a
software or hardware is a testing conducted on a complete, integrated system to evaluate the
system’s compliance with its specified requirements.

System testing falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic. As a rule, system testing takes as its input all of the integrated software components that have passed integration testing, together with the software system itself integrated with any applicable software systems.

The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together. System testing is a more limited type of testing; it seeks to detect defects both within the inter-assemblages and within the system as a whole. System testing is performed on the entire system in the context of a Functional Requirement Specification (FRS) and/or a System Requirement Specification (SRS). System testing tests not only the design, but also the behaviour and even the believed expectations of the customer.

It is also intended to test up to and beyond the bounds defined in the software / hardware
requirement specification. Before applying methods to design effective test cases, a software
engineer must understand the basic principle that guides software testing. All the tests should be
traceable to customer requirements.

7.1 TYPES OF TESTING


Software testing methods are traditionally divided into two: white-box and black-box testing.
These two approaches are used to describe the point of view that a test engineer takes when
designing test cases.

a) White-box testing (also known as clear box testing, glass box testing, transparent box testing
and structural testing, by seeing the source code) tests internal structures or workings of a
program, as opposed to the functionality exposed to the end-user. In white-box testing an
internal perspective of the system, as well as programming skills, are used to design test cases.
The tester chooses inputs to exercise paths through the code and determine the appropriate
outputs. While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

Fig 7.1 Type of Testing


b) Black box testing: The technique of testing without having any knowledge of the interior
workings of the application is called black-box testing. The tester is oblivious to the system
architecture and does not have access to the source code. Typically, while performing a black-
box test, a tester will interact with the system's user interface by providing inputs and examining
outputs without knowing how and where the inputs are worked upon. This project has been
tested under different circumstances, which includes different types such as Unit testing,
Integration testing and System testing that are described below.

7.2 Levels of Testing


There are different levels during the process of testing. Levels of testing include different
methodologies that can be used while conducting software testing. The main levels of software
testing are:

 Functional Testing: This is a type of black-box testing that is based on the specifications of the software that is to be tested. The application is tested by providing input, and the results are then examined to confirm that they conform to the intended functionality. Functional testing of software is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Five steps are involved while testing an application for functionality.

1. The determination of the functionality that the intended application is meant to perform.

2. The creation of test data based on the specifications of the application.

3. The determination of the output based on the test data and the specifications of the application.

4. The writing of test scenarios and the execution of test cases.

5. The comparison of actual and expected results based on the executed test cases.

 Non-functional Testing: This type of testing examines an application through its non-functional attributes. Non-functional testing involves testing software against requirements that are non-functional in nature but still important, such as performance, security, and user interface.

7.3 UNIT TESTING


Unit testing is a method by which individual units of source code, sets of one or more program modules together with associated control data, usage procedures, and operating procedures, are tested to determine whether they are fit for use. Intuitively, one can view a unit as the smallest testable part of an application. During the development process itself, all syntax errors and similar defects were rooted out. Test cases were developed so that every instruction in the program or module, i.e. every path through the program, was executed; test data were chosen to exercise every possible branch and loop.
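A hedged sketch of how the pre-processing unit could be tested in isolation with pytest; the preprocessing module and median_filter_3x3 function are the placeholders sketched in Chapter 5, not names defined elsewhere in this project:

import numpy as np

# Hypothetical import of the median-filter helper sketched in Chapter 5.
from preprocessing import median_filter_3x3

def test_median_filter_removes_isolated_noise():
    noisy = np.zeros((5, 5), dtype=np.uint8)
    noisy[2, 2] = 255                      # single salt-noise pixel
    cleaned = median_filter_3x3(noisy)
    assert cleaned[2, 2] == 0              # the outlier is suppressed

def test_median_filter_preserves_shape():
    image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    assert median_filter_3x3(image).shape == image.shape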

 Unit Testing Test Case 1

Table 7.3.1 Unit Testing Test Case 1


 Unit Testing Test Case 2

S1 #               : Test Case UTC-2
Name of Test       : Detecting scanned retina images
Items being tested : Test for different retina disease images
Sample Input       : Tested for different scanned retina eye images
Expected output    : Retina images should be displayed
Actual output      : Should display Disease or Healthy
Remarks            : Predicted result

Table 7.3.2 Unit Testing Test Case 2

7.4 INTEGRATION TESTING

Integration testing is a level of software testing where individual units are combined and
tested as a group. The purpose of this level of testing is to expose faults in the interaction between
integrated units. Test drivers and test stubs are used to assist in Integration Testing.

Integration testing is defined as the testing of combined parts of an application to determine


if they function correctly.

It occurs after unit testing and before validation testing. Integration testing can be done in
two ways: Bottom-up integration testing and Top-down integration testing.

1. Bottom-up Integration: This testing begins with unit testing, followed by tests of
progressively higher-level combinations of units called modules or builds.
2. Top-down Integration: In this testing, the highest-level modules are tested first and
progressively, lower-level modules are tested thereafter. In a comprehensive software
development environment, bottom-up testing is usually done first, followed by top-down
testing. The process concludes with multiple tests of the complete application, preferably in
scenarios designed to mimic actual situations.


 Functional Testing Test Case 1

S1 #               : Test Case UTC-2
Name of Test       : Detecting scanned retina images
Items being tested : Test for different retina disease images
Sample Input       : Tested for different scanned retina eye images
Expected output    : Retina images should be displayed
Actual output      : Should display Disease or Healthy
Remarks            : Predicted result
Fig 7.4.1 Functional Testing Test Case 1

 Functional Testing Test Case 2

S1 #               : Test Case ITC-2
Name of Test       : Working of segmentation and displaying the retina eye image
Items being tested : Selecting different images and verifying the retina image
Sample Input       : Click and select the image
Expected output    : Should show the retina eye and predict Disease or Healthy
Actual output      : Image segmented; the detected retina eye image is displayed
Remarks            : Pass
Fig 7.4.2 Functional Testing Test Case 2

7.5 SYSTEM TESTING


System testing of software or hardware is testing conducted on a complete, integrated system to
evaluate the system's compliance with its specified requirements. System testing falls within the
scope of black-box testing, and as such, should require no knowledge of the inner design of the
code or logic. System testing is important because of the following reasons:


 System testing is the first level of testing in which the complete, integrated application is tested as a whole.

 The application is tested thoroughly to verify that it meets the functional and technical
specifications.

 The application is tested in an environment that is very close to the production environment
where the application will be deployed.

 System testing enables us to test, verify, and validate both the business requirements as well as
the application architecture.

7.5.1 System Testing Table


S1 #               : Test Case STC-1
Name of Test       : System testing on various versions of the OS
Items being tested : OS compatibility
Sample Input       : Execute the program on Windows XP / Windows 10
Expected output    : Performance is better on Windows 10
Actual output      : Same as expected output; performance is better on Windows 10
Remarks            : Pass
Fig 7.5.1 System Testing Table


CHAPTER – 8
RESULT
8.1 DATASET

FIG 8.1 A dataset is a structured collection of data organized and stored together for analysis or processing.

8.2 PRE-PROCESSING OF RGB TO GRAY

Fig 8.2 Pre-processing of RGB to GRAY


8.3 TEST DATASET

Fig 8.3 Dataset used for testing purposes.

8.4 TRAIN DATASET

Fig 8.4 Dataset used for training purposes.


8.5 TRAIN MODEL USING CNN

Fig 8.5 Train Model Using CNN
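As a hedged illustration of the training step captured in Fig 8.5, the sketch below assumes the build_cnn helper from Chapter 4 and a directory layout of the form dataset/train/<stage>/*.jpg; both are assumptions made purely for illustration:

import tensorflow as tf

# Hypothetical module holding the build_cnn sketch from Chapter 4.
from model_definition import build_cnn

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/test", image_size=(224, 224), batch_size=32)

model = build_cnn()                      # CNN sketched in Chapter 4
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save("dr_model.h5")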

8.6 URL LINK

Fig 8.6 URL Link


8.7 HOME PAGE OF DIABETIC RETINOPATHY

Fig 8.7 Home Page Of Diabetic Retinopathy

8.8 STAGE OF NORMAL_DR

Fig 8.8 Stages of normal diabetic retinopathy and their accuracy.


8.9 STAGE OF MILD_DR

Fig 8.9 Stage of Mild Retinopathy: the image is converted to gray, edge detection is performed, and the results, remedies, and accuracy are displayed.

8.10 STAGE OF MODERATE_DR

Fig 8.10 Stage of Moderate Retinopathy: the image is converted to gray, edge detection is performed, and the results, remedies, and accuracy are displayed.


8.11 STAGE OF SEVERE_DR

Fig 8.11 Stage of Severe Retinopathy: the image is converted to gray, edge detection is performed, and the results, remedies, and accuracy are displayed.

8.12 STAGE OF PROLIFERATIVE_DR

Fig 8.12 Stage of Proliferative Retinopathy: the image is converted to gray, edge detection is performed, and the results, remedies, and accuracy are displayed.
