CHAPTER - 1
INTRODUCTION
1.1 OVERVIEW
The healthcare industry generates very large volumes of sensitive data that must be handled carefully. Diabetes Mellitus is one of the most rapidly growing and potentially fatal diseases worldwide, and medical professionals need a reliable prediction system to diagnose it. Deep learning techniques are useful for examining such data from diverse perspectives and summarizing it into valuable information. The accessibility and availability of huge amounts of data can provide useful knowledge only when appropriate data mining techniques are applied to it. The main goal is to discover new patterns and interpret them to deliver significant and useful information for the diagnostic process. Diabetes may lead to heart disease, kidney disease, nerve damage, and blindness, so mining diabetes data efficiently is a crucial concern.
Data mining techniques must be explored to create appropriate approaches for efficient classification of a diabetes dataset and for extracting patterns from it. In this study, a medical bioinformatics analysis is carried out for diabetes prediction. The WEKA software is employed as the mining tool for diabetes diagnosis. The Pima Indian Diabetes database, acquired from the UCI repository, is used for the analysis. The dataset was studied and analyzed to build an effective model for predicting and diagnosing diabetes. Diabetic Retinopathy is a complication of diabetes caused by changes in the blood vessels of the retina, and it is one of the leading causes of blindness in the developed world.
A trained deep Convolutional Neural Network (CNN) model extracts the features of the fundus images, and with the assistance of activation functions such as ReLU and softmax, along with optimization techniques such as Adam, an output is obtained. The output of the CNN model, together with the patient details, forms a uniform report. In this study, we also aim to apply the bootstrap resampling technique.
The idea of machine learning is to predict the future from past data. Machine learning focuses on the development of computer programs that can change when exposed to new data; this section covers the basics of machine learning and the implementation of a simple machine learning algorithm using Python. The training data is fed to an algorithm, and the algorithm uses this training data to make predictions on new test data.
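As an illustrative sketch of this train-then-predict workflow (the feature values and labels below are made up for demonstration), a minimal nearest-neighbour classifier in Python might look like:

```python
import math

def train(samples, labels):
    """'Training' for 1-nearest-neighbour is simply storing the labelled data."""
    return list(zip(samples, labels))

def predict(model, query):
    """Predict the label of the stored sample closest to the query point."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(model, key=lambda pair: distance(pair[0], query))
    return nearest[1]

# Hypothetical training data: (glucose level, BMI) -> diabetic or not
X_train = [(85, 22.0), (90, 24.5), (160, 31.0), (170, 35.5)]
y_train = ["non-diabetic", "non-diabetic", "diabetic", "diabetic"]

model = train(X_train, y_train)
print(predict(model, (165, 33.0)))  # prints "diabetic"
```

The same pattern — fit on labelled examples, then query on unseen data — underlies the far more powerful deep models used later in this report.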
Machine learning can be roughly separated into three categories. In supervised learning, the program is given both the input data and the corresponding labels, which must be assigned by a human beforehand. In unsupervised learning there are no labels; the data is fed directly to the learning algorithm, which has to discover the clustering of the input data on its own. Finally, reinforcement learning interacts dynamically with its environment and receives positive or negative feedback to improve its performance.
Deep learning is a branch of machine learning in artificial intelligence that uses networks capable of learning from data that is unlabeled or unstructured. This process is otherwise known as deep neural learning or deep neural networks.
Diabetic retinopathy is a medical complication caused by damage to the blood vessels of the retina, the light-sensitive tissue at the back of the eye; depending on its severity, it can gradually lead to various other eye problems and complete blindness. It is observed that 40%–45% of diabetic patients are likely to develop DR in their lifetime, but due to lack of awareness and delayed diagnosis, the condition often escalates quickly.
Diabetes was once thought of as a disease of the affluent, but it has now reached epidemic proportions in both developed and developing countries. Currently, at least 366 million people worldwide have diabetes, and this number is likely to increase as a result of an aging global population. Globally, the number of individuals with DR is projected to grow from 126.6 million in 2010 to 191.0 million by 2030, and the number with vision-threatening diabetic retinopathy (VTDR) is estimated to increase from 37.3 million to 56.3 million if prompt action is not taken. Diabetes is thus a disease well suited to the application of deep learning techniques.
Many applications are available that help practitioners study the disease and its complications, but each has its own advantages and flaws. Reports indicate that Indians are particularly prone to diabetes for many reasons, including lifestyle, dietary habits, and inadequate physical activity.
Diabetic Retinopathy is one of the major complications of diabetes that affects the human eye. Damage to the blood vessels of the light-sensitive tissue of the retina causes this disease. Diabetic Retinopathy (DR) is a complication of diabetes that causes the blood vessels of the retina to swell and leak fluids and blood. It is the leading cause of blindness among people aged 20 to 64 years and the most common cause of visual loss in people across the world.
1.4 MOTIVATION
The motivation behind developing a deep learning model for diabetic retinopathy detection is
the significant burden of this disease on patients, healthcare systems, and society. Early detection
and treatment of DR are critical to preventing blindness and improving patient outcomes. However,
current methods for DR detection are expensive and inaccessible in many regions, leading to delays
in diagnosis and treatment. By developing a cost-effective and scalable solution for DR detection
Dept of CSE, SJBIT 2023-24 Page 3
Severity Detection Of Diabetic Retinopathy
using deep learning, this study aims to improve the accessibility and affordability of DR screening
and diagnosis, ultimately improving patient outcomes and reducing the burden on healthcare
systems.
1.5 OBJECTIVES
To develop a deep learning model for automated diabetic retinopathy (DR) detection from
retinal images.
To evaluate the performance of the deep learning model using standard training techniques, such as backpropagation and gradient descent, and performance metrics such as accuracy, sensitivity, and specificity.
To compare the performance of the deep learning model with other state-of-the-art DR
detection methods to assess its superiority and effectiveness.
To assess the clinical implications of the developed deep learning model, including early
detection and treatment of DR, reducing the workload of ophthalmologists, and improving
patient outcomes.
To identify any limitations or challenges faced in the development and evaluation of the deep
learning model, such as limited sample size or biased data, that may affect the generalizability
of the model.
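The evaluation metrics named in the objectives can be computed directly from a confusion matrix; the sketch below (with a made-up set of screening predictions) shows the standard definitions:

```python
def confusion_counts(y_true, y_pred, positive="DR"):
    """Count true/false positives and negatives for a binary screening task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate: DR cases correctly flagged
    specificity = tn / (tn + fp)   # true negative rate: healthy eyes correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical screening results on ten fundus images
y_true = ["DR", "DR", "DR", "DR", "No DR", "No DR", "No DR", "No DR", "No DR", "No DR"]
y_pred = ["DR", "DR", "DR", "No DR", "No DR", "No DR", "No DR", "No DR", "DR", "No DR"]
print(metrics(y_true, y_pred))  # (0.8, 0.75, ~0.833)
```

In a screening setting, sensitivity is usually the metric to protect: a missed DR case (false negative) is costlier than an unnecessary referral.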
1.6 SCOPE
The scope of this study is to develop and evaluate a deep learning model for automated diabetic
retinopathy (DR) detection from retinal images. The study will use publicly available retinal image
datasets with labeled DR severity to train and validate the deep learning model. The deep learning
model will use a combination of convolutional neural networks (CNNs) and recurrent neural
networks (RNNs) for feature extraction, sequence modeling, and classification of DR severity. The
performance of the deep learning model will be compared with other state-of-the-art DR detection
methods to assess its superiority and effectiveness. The developed deep learning model can have
significant clinical implications, including early detection and treatment of DR, reducing the
workload of ophthalmologists, and improving patient outcomes. However, the study may face
limitations such as limited sample size and biased data, which can affect the generalizability of the
deep learning model. The scope of this study is limited to the development and evaluation of a deep
learning model for automated DR detection and does not include implementation and validation in
real-world clinical settings.
CHAPTER - 2
LITERATURE SURVEY
SL. NO 1
AUTHOR: Balla Goutam, Mohammad Farukh Hashmi (Senior Member, IEEE), Zong Woo Geem (Senior Member, IEEE), and Neeraj Dhanraj Bokde
METHOD: Deep learning.
DATASET: Own dataset.
RESULT: Effective in improving accuracy and maintaining a low false positive rate.
LIMITATION: Not applicable for real-time usage.

SL. NO 2
AUTHOR: Banu
METHOD: Machine learning algorithms.
DATASET: Own dataset (Ebbu 2017 phishing dataset).
RESULT: Increased detection rate; high efficiency in detecting shorter URLs in real-time scenarios.
LIMITATION: Cannot be applied to lengthy URLs.

SL. NO 3
AUTHOR: Vahid
METHOD: Decision tree, random forest, KNN, gradient boosting, AdaBoost, XGBoost.
DATASET: Own dataset.
RESULT: Accurate classification, improved performance.
LIMITATION: Limited features considered for the dataset creation; high computation time.

SL. NO 4
AUTHOR: Ting
METHOD: Deep belief networks, K-step, CD-k.
DATASET: Data from the ISP.
RESULT: Increased detection rate, faster running time.
LIMITATION: Fails to detect in real time.

SL. NO 5
AUTHOR: Shahreen
METHOD: Machine learning models.
DATASET: Email dataset, SMS message dataset.
RESULT: Accurate detection and high performance.
LIMITATION: Consumes more training time.

SL. NO 6
AUTHOR: Nguyet
METHOD: Gated recurrent unit; machine learning and deep learning techniques.
DATASET: Website phishing dataset, email phishing dataset.
RESULT: Was able to solve the common problems of manual parameter tuning, long training time, and deficient detection accuracy.
LIMITATION: Only uses balanced data for detection; the highest possible accuracy was not obtained; real-time detection is not satisfied.

SL. NO 7
AUTHOR: Fatima
METHOD: Machine learning techniques.
DATASET: Own dataset.
RESULT: High performance and accuracy; fast phishing attack detection.
LIMITATION: Not applicable for real-time usage.

SL. NO 8
AUTHOR: Alan
METHOD: Normalized compression distance, furthest point first algorithm.
DATASET: Own dataset.
RESULT: Highest performance was obtained.
LIMITATION: Not able to detect variants of attacks; obstacles were encountered while prototyping.

SL. NO 9
AUTHOR: Ayman
METHOD: Machine learning algorithms, Hellinger distance decision tree.
DATASET: Own dataset.
RESULT: Retraining the model obtained more accurate results and increased performance; considered imbalanced data.
LIMITATION: Not applicable for other datasets; fails to detect in real-time scenarios.
CHAPTER - 3
Image acquisition: The system should be able to acquire high-quality retinal images for
analysis.
Image preprocessing: The system should be able to preprocess images to remove noise, adjust
contrast, and enhance features for analysis.
Diabetic retinopathy detection: The system should be able to accurately detect diabetic
retinopathy in retinal images using deep learning algorithms.
Classification: The system should be able to classify diabetic retinopathy into different stages
based on the severity of the disease.
User interface: The system should have an intuitive and user-friendly interface for healthcare
professionals to interact with the system.
Patient database: The system should be able to maintain a patient database to store and retrieve
patient information and retinal images.
Reporting: The system should generate reports summarizing the results of diabetic retinopathy
detection and classification for each patient.
Integration with electronic health records (EHRs): The system should be able to integrate with
EHRs to facilitate patient care and management.
System security: The system should have robust security measures to protect patient data and
comply with relevant regulations.
System maintenance and updates: The system should be easy to maintain and update to ensure
optimal performance and accuracy.
Performance: The system should have a fast and efficient processing speed to ensure timely
diagnosis and treatment.
Accuracy: The system should have high accuracy in detecting diabetic retinopathy to minimize
false positives and false negatives.
Robustness: The system should be able to handle variations in retinal images due to factors such
as age, ethnicity, and image quality.
Scalability: The system should be able to handle large volumes of patient data and
accommodate additional retinal diseases and conditions.
Usability: The system should be easy to use and require minimal training for healthcare
professionals to operate.
Reliability: The system should be reliable and available for use 24/7 to ensure patient care is
not interrupted.
Interoperability: The system should be able to integrate with other healthcare systems and
technologies to facilitate patient care and management.
Data privacy and security: The system should have strong data privacy and security measures
to protect patient data and comply with relevant regulations.
Maintenance and support: The system should be easy to maintain and provide adequate
technical support for users.
Accessibility: The system should be accessible to all patients, regardless of their geographic
location, language, or disability.
Monitor: 15-inch VGA color.
CHAPTER - 4
Design overview explains the architecture that would be used for developing a software
product. It is an overview of an entire system, identifying the main components that would be
developed for the product and their interfaces.
1. Image Collection
2. Image Preprocessing
3. Image Segmentation
4. Feature Extraction
5. Training
6. Classification
1. Image Collection
The dataset used in this project is publicly available on the internet.
2. Image Preprocessing
The goal of pre-processing is to improve the image data by reducing unwanted distortions and enhancing the image features that are important for further processing. Image pre-processing involves three main steps: a) Grayscale conversion b) Noise removal c) Image enhancement
a) Grayscale conversion: A grayscale image contains only brightness information. Each pixel value corresponds to an amount of light, so gradations of brightness can be differentiated. In an 8-bit grayscale image, intensity varies from 0 to 255, where 0 represents black and 255 represents white. In grayscale conversion, a color image is converted into a grayscale image. Grayscale images are easier and faster to process than color images, and all subsequent image processing techniques are applied to the grayscale image.
b) Noise Removal: The objective of noise removal is to detect and remove unwanted noise from the digital image. The difficulty lies in deciding which features of an image are real and which are caused by noise.
3. Image Segmentation
Image segmentation are of many types such as clustering, threshold, neural network based and
edge based. In this implementation we are using the clustering algorithm called mean shift
clustering for image segmentation. This algorithm uses the sliding window method for converging
to the Centre of maximum dense area. This algorithm makes use of many sliding windows to
converge the maximum dense region. Mean shift clustering Algorithm This algorithm is mainly
used for detecting highly dense region.
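A minimal sketch of the mean shift idea in one dimension (toy data and a flat kernel of fixed bandwidth; a real implementation would operate on pixel feature vectors):

```python
import numpy as np

def mean_shift_1d(points, bandwidth=2.0, iterations=100):
    """Repeatedly shift each point to the mean of its neighbours until it
    settles on a mode (a local density maximum)."""
    shifted = np.array(points, dtype=float)
    for _ in range(iterations):
        new = np.empty_like(shifted)
        for i, x in enumerate(shifted):
            neighbours = shifted[np.abs(shifted - x) <= bandwidth]
            new[i] = neighbours.mean()   # move the window to the local mean
        shifted = new
    return shifted

# Two obvious clusters, around 1 and around 10
data = [0.5, 1.0, 1.5, 9.5, 10.0, 10.5]
modes = mean_shift_1d(data)
print(np.round(modes, 2))  # each point converges to its cluster centre
```

Points that converge to the same mode belong to the same cluster; in image segmentation the "points" are pixel feature vectors (e.g. position plus intensity).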
4. Feature Extraction
An image has many features, mainly color, texture, and shape. Here we consider three features: the color histogram, texture, and shape.
5. Training
A training dataset was created from images of known diabetic retinopathy stages. Classifiers are trained on this training dataset, and a testing dataset is placed in a temporary folder. Predicted results from the test cases, classifier plots, and feature sets are added to the test case file to make the image processing models more accurate.
6. Classification
The binary classifier which makes use of the hyper-plane which is also called as the decision
boundary between two of the classes is called as Convolution Neural Network. Some of the
problems are pattern recognition like texture classification makes use of CNN. Mapping of
nonlinear input data to the linear data provides good classification in high dimensional space in
CNN. The marginal distance is maximized between different classes by CNN. Different Kernels
are usedto divide the classes. CNN is basically a binary classifier that determines hyper plane in
dividing two classes. The boundary is maximized between the hyperplane and two classes. The
samples that are nearest to the margin will be selected in determining the hyperplane is called
support vectors.
This results in a large reduction of the sample size. A traditional Fully Connected (FC) layer is often used in conjunction with the convolutional layers toward the output stage. In a CNN architecture, convolution layers and pooling layers are usually used in some combination. The pooling layer carries out two types of operations: max pooling and mean pooling. In mean pooling, the average of the feature points within the neighborhood is calculated, while in max pooling the maximum of the feature points is taken. Mean pooling reduces the error caused by the limited neighborhood size and retains background information; max pooling reduces the estimation error of the convolution layer parameters caused by mean deviation and hence retains more texture information.
A system architecture for diabetic retinopathy detection using deep learning may involve several
components, including:
Image acquisition: A retinal camera or imaging device is used to acquire high-quality retinal
images of patients.
Feature extraction: Deep learning algorithms are used to extract relevant features from the
preprocessed images.
Training and validation: A deep learning model is trained on a large dataset of retinal images
with known diabetic retinopathy labels. The model is validated on a separate dataset to assess
its performance.
Testing and evaluation: The trained model is tested on new retinal images to assess its
diagnostic accuracy and performance.
Patient database: A patient database is maintained to store patient information and retinal
images for future reference.
Integration with EHRs: The system is integrated with electronic health records (EHRs) to
facilitate patient care and management.
System maintenance and updates: The system is designed for easy maintenance and updates
to ensure optimal performance and accuracy over time.
In the convolution layer, after the computer reads an image in the form of pixels, small patches of the image are taken; these patches are called features or filters. By matching rough features at roughly the same position in two images, the convolutional layer becomes much better at detecting similarities than whole-image matching. Each filter is compared against the new input image; if it matches, the image can be classified correctly. Concretely, the filter is lined up with an image patch, each image pixel is multiplied by the corresponding filter pixel, the products are summed, and the sum is divided by the total number of pixels in the filter. The resulting value is written into a map at the corresponding position. The filter is then moved to every other position of the image to see how well it matches each area, and the final output is a matrix (the feature map).
The ReLU layer (rectified linear unit) removes every negative value from the filtered images and replaces it with zero, so that activations do not cancel each other out when summed. ReLU is a transfer function that activates a node only if the input is above zero; for inputs below zero the output is zero, which removes all the negative values from the matrix.
The pooling layer reduces, or shrinks, the size of the image. First a window size is picked, then the required stride; the window is walked across the filtered image, and from each window the maximum value is taken. This pools the layers and shrinks the image as well as the matrix, and the reduced matrix is given as the input to the fully connected layer.
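The two steps above — zeroing the negatives, then taking window maxima — reduce to very little code; the sketch below uses a 2x2 window with stride 2 on a toy feature map:

```python
import numpy as np

def relu(x):
    """Replace every negative activation with zero."""
    return np.maximum(x, 0)

def max_pool(x, size=2, stride=2):
    """Take the maximum of each size x size window, moving by `stride`."""
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i * stride:i * stride + size,
                          j * stride:j * stride + size].max()
    return out

feature_map = np.array([[ 1, -3,  2,  4],
                        [-1,  5, -2,  0],
                        [ 3,  1, -4, -1],
                        [ 0,  2,  1,  6]], dtype=float)
pooled = max_pool(relu(feature_map))
print(pooled)  # the 4x4 map shrinks to 2x2
```

Each 2x2 block keeps only its strongest (post-ReLU) response, which is what makes max pooling preserve texture detail while discarding position jitter.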
After passing through the convolutional layer, the ReLU layer, and the pooling layer, the resulting layers are stacked up. These layers may be repeated as needed until a sufficiently small matrix (for example 2x2) is obtained. At the end, the fully connected layer is used, and that is where the actual classification of the input image happens.
Here is an example of a data flow diagram for pre-processing of retinal images for diabetic
retinopathy detection using deep learning:
[Process 1]: Image cropping - The retinal images are cropped to remove any areas of the image
that are not relevant to diabetic retinopathy detection.
[Process 2]: Image resizing - The cropped images are resized to a standard size to ensure
consistency in feature extraction.
[Process 3]: Image normalization - The resized images are normalized to adjust the contrast and
brightness, remove any noise, and enhance relevant features.
[Output]: Preprocessed retinal images ready for feature extraction and deep learning algorithms.
[Feedback loop]: The output of the pre-processing stage is fed back into the deep learning model
for feature extraction and diabetic retinopathy detection. If the detection results are not
satisfactory, the pre-processing stage may need to be adjusted and repeated to improve image
quality and accuracy of detection.
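The crop, resize, and normalize processes above can be sketched with NumPy alone (nearest-neighbour resizing on a toy array; a real pipeline would typically use OpenCV or PIL):

```python
import numpy as np

def crop(image, top, left, height, width):
    """Keep only the region of interest around the retina."""
    return image[top:top + height, left:left + width]

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize to a standard input size."""
    rows = (np.arange(out_h) * image.shape[0] / out_h).astype(int)
    cols = (np.arange(out_w) * image.shape[1] / out_w).astype(int)
    return image[rows][:, cols]

def normalize(image):
    """Scale pixel intensities to the [0, 1] range."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo)

raw = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for a fundus image
roi = crop(raw, 2, 2, 4, 4)
small = resize_nearest(roi, 2, 2)
ready = normalize(small)
print(ready.shape, ready.min(), ready.max())
```

Standardizing size and intensity range this way is what keeps feature extraction consistent across images captured under different conditions.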
Here is an example of a data flow diagram for identification of diabetic retinopathy using
deep learning:
[Process 1]: Feature extraction - Deep learning algorithms are used to extract relevant features
from the preprocessed retinal images.
[Process 2]: Diabetic retinopathy detection - The extracted features are used to classify the
retinal images into categories of diabetic retinopathy severity.
[Output]: Detection results - The system generates detection results indicating the severity of
diabetic retinopathy in the retinal images.
[Feedback loop]: The detection results are fed back into the system to evaluate the performance
of the deep learning algorithms. If the accuracy of detection is not satisfactory, the system may
need to be adjusted and retrained to improve its performance.
Here is an example of a data flow diagram for feature extraction in diabetic retinopathy
detection using deep learning:
[Process 1]: Convolutional neural network (CNN) - A CNN is used to analyze the preprocessed
images and extract relevant features.
[Process 2]: Feature selection - The CNN outputs a set of features, and a feature selection
algorithm is used to identify the most important features for diabetic retinopathy detection.
[Process 3]: Feature normalization - The selected features are normalized to ensure consistency
in classification.
[Output]: Extracted features - The system generates a set of extracted features that are used in
the diabetic retinopathy detection process.
[Feedback loop]: The extracted features are fed back into the system to evaluate the
performance of the deep learning algorithms. If the accuracy of detection is not satisfactory, the
system may need to be adjusted and retrained to improve its performance.
Here is an example of a data flow diagram for diabetic retinopathy classification and
detection using deep learning:
[Process 1]: Classification - The extracted features are used to classify the retinal images into categories of diabetic retinopathy severity, using the deep learning model or a classical classifier such as a support vector machine (SVM) or a random forest.
[Process 2]: Detection - The classified images are analyzed to detect signs of diabetic
retinopathy, including microaneurysms, hemorrhages, and exudates.
[Output]: Detection results - The system generates detection results indicating the severity of
diabetic retinopathy in the retinal images and the location of any abnormalities.
[Feedback loop]: The detection results are fed back into the system to evaluate the performance
of the deep learning algorithms. If the accuracy of detection is not satisfactory, the system may
need to be adjusted and retrained to improve its performance.
CHAPTER 5
METHODOLOGY
1. Image Collection
The dataset used in this project is publicly available on the internet.
2. Image Preprocessing
The goal of pre-processing is to improve the image data by reducing unwanted distortions and enhancing the image features that are important for further processing. Image pre-processing involves three main steps: a) Grayscale conversion b) Noise removal c) Image enhancement
a) Grayscale conversion: A grayscale image contains only brightness information; each pixel value corresponds to an amount of light.
b) Noise Removal: The objective of noise removal is to detect and remove unwanted noise from the digital image. The difficulty lies in deciding which features of an image are real and which are caused by noise; noise consists of random variations in pixel values. We use a median filter to remove unwanted noise. The median filter is a nonlinear filter that leaves edges invariant. It is implemented by sliding a window of odd length over the image; the sample values inside the window are sorted by magnitude, and the centermost (median) value becomes the filter output.
3. Image Segmentation
Image segmentation are of many types such as clustering, threshold, neural network based and
edge based. In this implementation we are using the clustering algorithm called mean shift
clustering for image segmentation. This algorithm uses the sliding window method for converging
to the Centre of maximum dense area. This algorithm makes use of many sliding windows to
converge the maximum dense region. Mean shift clustering Algorithm This algorithm is mainly
used for detecting highly dense region.
4. Feature Extraction
An image has many features, mainly color, texture, and shape. Here we consider three features: the color histogram, texture, and shape.
5. Training
A training dataset was created from images of known diabetic retinopathy stages. Classifiers are trained on this training dataset, and a testing dataset is placed in a temporary folder. Predicted results from the test cases, classifier plots, and feature sets are added to the test case file to make the image processing models more accurate.
6. Classification
Classification is performed using a Convolutional Neural Network (CNN). The network learns a decision boundary between the classes and is widely used for pattern recognition problems such as texture classification. By mapping nonlinear input data into a high-dimensional feature space, a CNN achieves good separation between the classes, and the margin between the learned decision boundary and the classes is effectively maximized. Different kernels (filters) are used in the convolutional layers to derive the features on which the classes are separated.
The flowchart for collecting data is as depicted in the figure 5.2. The data set is collected from
a source and a complete analysis is carried out. The image is selected to be used for training/testing
purposes only if it matches our requirements and is not repeated.
The first step in pre-processing is converting the image from RGB to grayscale, which is done by applying the formula below to the RGB image. Figure 5.5 depicts the conversion from RGB to grayscale.
1. To store a single color pixel of an RGB color image we need 8 x 3 = 24 bits (8 bits for each color component).
2. Only 8 bits are required to store a single pixel of a grayscale image, so a grayscale image needs only one third of the memory required for an RGB image.
3. Grayscale images are much easier to work with in a variety of tasks: in many morphological operations and image segmentation problems, it is easier to work with a single-layered image (grayscale) than a three-layered image (RGB color).
4. It is also easier to distinguish the features of an image when dealing with a single-layered image.
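A common luminance formula is a weighted sum of the three channels (the ITU-R BT.601 weights 0.299R + 0.587G + 0.114B are assumed here; the exact formula in the project's figure may differ slightly):

```python
import numpy as np

def rgb_to_gray(image):
    """Weighted sum of the R, G, B channels, giving perceived brightness."""
    weights = np.array([0.299, 0.587, 0.114])
    return image @ weights  # shape (H, W, 3) -> (H, W)

# A 1x2 RGB image: one pure-red pixel, one pure-white pixel
rgb = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=float)
gray = rgb_to_gray(rgb)
print(gray)  # red maps to a mid grey, white stays at 255
```

The green channel gets the largest weight because the human eye is most sensitive to green light.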
Noise removal
A noise removal algorithm is the process of removing or reducing noise from the image. Such algorithms reduce or remove the visibility of noise by smoothing the entire image while leaving areas near contrast boundaries intact. Noise removal is the second step in image pre-processing: the grayscale image obtained in the previous step is given as input, and we make use of the median filter as the noise removal technique.
Median Filtering
1. The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal.
2. Zeros are appended at the edges and corners of the matrix that represents the grayscale image.
3. Then, for every 3x3 window, the elements are arranged in ascending order, the median (middle) of those 9 elements is found, and that median value is written to the corresponding pixel position. Figure 4.6 depicts noise filtering using the median filter.
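These three steps translate directly into NumPy (zero padding, 3x3 windows, median per window):

```python
import numpy as np

def median_filter(image):
    """Zero-pad the image, then replace each pixel with the median of its 3x3 window."""
    padded = np.pad(image, 1, mode="constant", constant_values=0)
    out = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 3, j:j + 3]
            out[i, j] = np.median(window)
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],   # a single "salt" noise spike
                  [10, 10, 10]])
print(median_filter(noisy))
```

Because the spike is an extreme value, sorting the window pushes it to the end and the median ignores it, which is why median filtering removes salt-and-pepper noise without blurring edges the way averaging does.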
The Histogram of Oriented Gradients (HOG) is a feature descriptor used in computer vision and image processing for the purpose of object detection. The technique counts occurrences of gradient orientations in localized portions of an image.
Here zeros are appended at the edges and corners of the matrix. Then Gx and Gy are calculated: Gx = value on the right - value on the left, and Gy = value above - value below. Figure 5.4.1 shows Gx and Gy in HOG.
After the angle of orientation is calculated, the frequency of the angles falling in each interval is noted, and these frequencies are given as input to the classifier. Zeros are not considered when computing the frequencies. For example, if the interval from 40 to 59 has 2 occurrences, the frequency is recorded as 2.
Then, using the formulas given in figure 5.4.1, the magnitude and the orientation are calculated; feature extraction using HOG is shown in figure 5.4.1. The magnitude corresponds to the gradient strength (illumination change), and the orientation is the angle of the gradient.
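The gradient and binning steps can be sketched as follows (zero padding, differences for Gx and Gy, magnitude sqrt(Gx^2 + Gy^2), orientation from arctan, and 20-degree bins matching the 40-59 interval example above):

```python
import numpy as np

def hog_histogram(image, bins=9):
    """Compute per-pixel gradient orientations and bin them into a histogram."""
    padded = np.pad(image.astype(float), 1, mode="constant")
    gx = padded[1:-1, 2:] - padded[1:-1, :-2]     # right minus left
    gy = padded[:-2, 1:-1] - padded[2:, 1:-1]     # above minus below
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    orientation = np.degrees(np.arctan2(gy, gx)) % 180   # unsigned angles, 0-180
    hist = np.zeros(bins)
    for mag, ang in zip(magnitude.ravel(), orientation.ravel()):
        if mag > 0:                       # zeros are not counted, as noted above
            hist[int(ang // (180 / bins)) % bins] += 1
    return hist

patch = np.array([[0, 0, 0],
                  [0, 5, 0],
                  [0, 0, 0]])
print(hog_histogram(patch))  # counts of gradient angles per 20-degree interval
```

A full HOG descriptor repeats this per cell and weights each vote by the gradient magnitude; the unweighted counts here match the simple frequency scheme described in the text.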
Convolutional Layer
The convolutional layer is the first step in the CNN: a 3x3 part of the matrix obtained from the high-pass filter is given as input. That 3x3 matrix is multiplied element-wise with the filter matrix for the corresponding position, and the sum is written to that position of the output. This output is given to the pooling layer, where the matrix is further reduced; figure 5.4.3 shows the convolutional layer. Convolution is followed by rectification of the negative values to zeros (ReLU) before pooling.
Pooling Layer
In the pooling layer, the matrix is reduced by selecting the maximum of each 2x2 window at the corresponding position. Figure 4.16 shows the pooling layer.
The output of the pooling layer is flattened, and this flattened matrix is fed into the fully connected layer, which consists of an input layer, hidden layers, and an output layer. The output is then fed into the classifier; in this case the SoftMax activation function is used to classify the image according to the presence and severity of diabetic retinopathy. Figure 4.17 shows the fully connected layer and the output layer.
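The SoftMax step turns the fully connected layer's raw scores into class probabilities. A minimal sketch (the five severity grades are the standard DR grading scale, and the logit values are made up for illustration):

```python
import numpy as np

def softmax(scores):
    """Exponentiate and normalize so the outputs are probabilities summing to 1."""
    shifted = scores - np.max(scores)   # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

grades = ["No DR", "Mild", "Moderate", "Severe", "Proliferative"]
logits = np.array([2.0, 0.5, 1.0, -1.0, -2.0])  # hypothetical network outputs
probs = softmax(logits)
print(grades[int(np.argmax(probs))])  # the most likely severity grade
```

Because the outputs sum to one, they can be read as the network's confidence in each grade, not just a hard decision.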
CHAPTER 6
IMPLEMENTATION
Implementation is the process of converting a new system design into an operational one. It is the key stage in achieving a successful new system and must therefore be carefully planned and controlled. The implementation of a system is done after the development effort is completed.
Modern computer applications are user-friendly. User interaction is not restricted to console-based I/O. They have a more ergonomic graphical user interface (GUI) thanks to high-speed
processors and powerful graphics hardware. These applications can receive inputs through mouse
clicks and can enable the user to choose from alternatives with the help of radio buttons, dropdown
lists, and other GUI elements.
Flask Programming:
Flask is a lightweight web framework for Python. Python, when combined with Flask, provides a fast and easy way to create web applications. Flask offers a simple, Pythonic interface for routing, request handling, and HTML templating. Flask has several strengths. It is cross-platform, so the same code works on Windows, macOS, and Linux, and because the interface is delivered through the browser, applications built with Flask look and behave consistently on whatever platform they are run.
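A minimal Flask application of the kind described above might look like this. The route names and the placeholder response are assumptions for illustration, not the project's actual code; the real application would pass the uploaded fundus image to the trained CNN.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Landing page of the severity-detection application.
    return "Diabetic Retinopathy Severity Detection"

@app.route("/predict", methods=["POST"])
def predict():
    # In the real system the uploaded fundus image would be preprocessed
    # and passed to the trained CNN; a placeholder grade is returned here.
    return {"severity": "Mild"}

if __name__ == "__main__":
    app.run(debug=True)
```

Returning a dict from a view function makes Flask serialize it as a JSON response automatically.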
CHAPTER - 7
TESTING
Testing is the process of evaluating a system or its component(s) with the intent of finding whether it satisfies the specified requirements or not. Testing means executing a system in order to identify any gaps, errors, or missing requirements contrary to the actual requirements. System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.
System testing falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic. As a rule, system testing takes as its input all of the integrated software components that have passed integration testing, together with the software system itself integrated with any applicable hardware systems.
The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together. System testing then seeks to detect defects both within the inter-assemblages and within the system as a whole. System testing is performed on the entire system in the context of a Functional Requirement Specification (FRS) and/or a System Requirement Specification (SRS). System testing tests not only the design, but also the behaviour and even the believed expectations of the customer.
It is also intended to test up to and beyond the bounds defined in the software / hardware
requirement specification. Before applying methods to design effective test cases, a software
engineer must understand the basic principle that guides software testing. All the tests should be
traceable to customer requirements.
a) White-box testing (also known as clear box testing, glass box testing, transparent box testing
and structural testing, by seeing the source code) tests internal structures or workings of a
program, as opposed to the functionality exposed to the end-user. In white-box testing an
internal perspective of the system, as well as programming skills, are used to design test cases.
The tester chooses inputs to exercise paths through the code and determine the appropriate
outputs. While white-box testing can be applied at the unit, integration, and system levels of
the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test.
Though this method of test design can uncover many errors or problems, it might not detect
unimplemented parts of the specification or missing requirements.
Functional Testing: This is a type of black-box testing that is based on the specifications of the software that is to be tested. The application is tested by providing input, and the results are then examined to confirm that they conform to the functionality the application was intended for. Functional testing of software is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. There are five steps involved while testing an application functionally:
1. The determination of the functionality that the intended application is meant to perform.
2. The creation of test data based on the specifications of the application.
3. The output based on the test data and the specifications of the application.
4. The writing and execution of test cases based on the specifications and the test data.
5. The comparison of actual and expected results based on the executed test cases.
Non-functional Testing: This section is based upon testing an application from its non-functional attributes. Non-functional testing involves testing software against requirements which are non-functional in nature but important, such as performance, security, user interface, etc.
Integration testing is a level of software testing where individual units are combined and
tested as a group. The purpose of this level of testing is to expose faults in the interaction between
integrated units. Test drivers and test stubs are used to assist in Integration Testing.
It occurs after unit testing and before validation testing. Integration testing can be done in
two ways: Bottom-up integration testing and Top-down integration testing.
1. Bottom-up Integration: This testing begins with unit testing, followed by tests of
progressively higher-level combinations of units called modules or builds.
2. Top-down Integration: In this testing, the highest-level modules are tested first and
progressively, lower-level modules are tested thereafter. In a comprehensive software
development environment, bottom-up testing is usually done first, followed by top-down
testing. The process concludes with multiple tests of the complete application, preferably in
scenarios designed to mimic actual situations.
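The distinction between a unit-level test, a test stub, and an integration test of the combined units can be illustrated with a small sketch. The functions here are hypothetical stand-ins for the project's real preprocessing and classification units, not its actual code.

```python
import unittest

def to_gray(pixel):
    """Unit under test: average the R, G, B channels of one pixel."""
    r, g, b = pixel
    return (r + g + b) // 3

def classify_stub(gray_values):
    """Test stub standing in for the trained CNN classifier."""
    return "Mild" if sum(gray_values) > 0 else "No DR"

def pipeline(pixels):
    """Integration of the two units: preprocess, then classify."""
    return classify_stub([to_gray(p) for p in pixels])

class GrayscaleUnitTest(unittest.TestCase):
    def test_to_gray(self):                      # unit-level test
        self.assertEqual(to_gray((30, 60, 90)), 60)

class PipelineIntegrationTest(unittest.TestCase):
    def test_pipeline(self):                     # units combined, stub used
        self.assertEqual(pipeline([(30, 60, 90)]), "Mild")
```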
System testing is the first level of testing in which the complete, integrated application is tested as a whole.
The application is tested thoroughly to verify that it meets the functional and technical
specifications.
The application is tested in an environment that is very close to the production environment
where the application will be deployed.
System testing enables us to test, verify, and validate both the business requirements as well as
the application architecture.
CHAPTER – 8
RESULT
8.1 DATASET
Fig 8.1 Dataset: a structured collection of data organized and stored together for analysis or processing.
Fig 8.9 Stages of Mild Retinopathy: the image is converted to grayscale, edge detection is performed, and the result, remedies, and accuracy are displayed.
Fig 8.10 Stages of Moderate Retinopathy: the image is converted to grayscale, edge detection is performed, and the result, remedies, and accuracy are displayed.
Fig 8.11 Stages of Severe Retinopathy: the image is converted to grayscale, edge detection is performed, and the result, remedies, and accuracy are displayed.
Fig 8.12 Stages of Proliferative Retinopathy: the image is converted to grayscale, edge detection is performed, and the result, remedies, and accuracy are displayed.