
International Journal of Advanced Engineering Research and Science (IJAERS)
Peer-Reviewed Journal
ISSN: 2349-6495(P) | 2456-1908(O)
Vol-9, Issue-9; Sep, 2022
Journal Home Page Available: https://ijaers.com/
Article DOI: https://dx.doi.org/10.22161/ijaers.99.29

Detecting Anemia Based on Palm Images using Convolutional Neural Network
Ahmad Saiful Rizal1, Alfian Futuhul Hadi2, Sudarko3, Supangat4

1 Department of Mathematics, Faculty of Mathematics and Natural Sciences, Jember University, Indonesia. Email: [email protected]
2 Department of Mathematics, Faculty of Mathematics and Natural Sciences, Jember University, Indonesia. Email: [email protected]
3 Department of Chemistry, Faculty of Mathematics and Natural Sciences, Jember University, Indonesia. Email: [email protected]
4 Department of Pharmacology, Faculty of Medicine, Jember University, Indonesia. Email: [email protected]

Received: 15 Aug 2022; Received in revised form: 05 Sep 2022; Accepted: 11 Sep 2022; Available online: 17 Sep 2022
©2022 The Author(s). Published by AI Publication. This is an open access article under the CC BY license (https://creativecommons.org/licenses/by/4.0/).

Keywords— Classification, Clustering, Convolutional Neural Network, Hemoglobin, Image Processing.

Abstract— Hemoglobin is a protein in the blood that carries oxygen from the lungs to the body's tissues. Hemoglobin levels below the normal limit cause anemia. Hemoglobin is usually measured by drawing the patient's blood with a needle and then testing the sample in a clinical laboratory. This technique has shortcomings: it is less efficient because it requires several hours, and it requires puncturing the patient's skin with a hypodermic needle. In this study, we discuss the Convolutional Neural Network (CNN) for classifying hemoglobin levels based on palm images. Hemoglobin levels are partitioned into two classes, anemia and non-anemia. The images used are 500×375 pixels with Red, Green, and Blue (RGB) channels and were taken from patients' palms. The first phase of this research was data retrieval, followed by data preprocessing; the data were then clustered into two clusters using a random state, and each cluster was classified with the CNN algorithm. The best results reached an accuracy of 96.43%, with a precision of 93.75%, recall of 100%, and specificity of 92.31% for Cluster 1 in random state 1; for Cluster 2 in the same random state, the accuracy also reached 96.43%, with a precision of 93.33%, recall of 100%, and specificity of 92.86%.

I. INTRODUCTION

Machine Learning (ML) is a modeling technique that can recognize patterns in data automatically, without human assistance, and it enables the analysis of massive quantities of data [1]. Machine learning algorithms can make predictions based on a given dataset. In the medical sector, machine learning has been used for breast cancer classification [2], pneumonia detection from chest X-ray images [3], classification of COVID-19 infection [4], early diagnosis of coronavirus-affected patients [5], myocardial infarction detection [6], detecting cardiovascular disease from mammograms [7], brain tumor detection [8], and dermatologist-level classification of skin cancer [9]. There is also related research: for example, Ozturk et al. [10] detected COVID-19 automatically from raw chest X-ray images for binary classification (COVID-19 vs. Normal) and multiclass classification (COVID-19, Normal, and Pneumonia).

Rajpurkar et al. [11] presented radiologist-level pneumonia detection on chest X-rays, using a dataset of more than 100,000 frontal-view X-ray images covering fourteen diseases. Talo et al. [12] designed a convolutional neural network for brain disorder classification. Comelli et al. [13] employed deep learning for lung segmentation on high-resolution computerized tomography images.

A machine learning model makes predictions based on current observations, and the predictions are compared with the actual test data to measure the exactness of the model. Machine learning can build models for image recognition; one of these algorithms is the Convolutional Neural Network (CNN). A CNN is a deep neural network that applies a model based on the Artificial Neural Network (ANN) with multiple layers consisting of two or more hidden layers. The ANN follows the principle of imitating the brain process in the visual cortex, with neuron cells and synapses as its main components. A deep neural network model can be trained using existing data and ANN structures [14].

Hemoglobin is an important component of red blood cells; its main function is to carry oxygen and carbon dioxide in the body [15]. A lack of hemoglobin can cause anemia, while hemoglobin levels above the normal limit can cause polycythemia. Anemia is a decrease in the concentration of circulating red blood cells or in the concentration of hemoglobin, which hinders the transport of oxygen [16]. One of the clinical symptoms often found in anemia patients is pale skin and conjunctiva; therefore, paramedics frequently check the fingertips and conjunctiva to assess the condition of anemia patients [17]. The normal hemoglobin levels in the blood are: women 12-16 g per 100 ml of blood, men 14-18 g per 100 ml of blood, and newborns 14-20 g per 100 ml of blood [18].

Hemoglobin measurements are generally carried out by using a hypodermic needle to take the patient's blood as a sample, which is then tested in a clinical laboratory; the hemoglobin level is known only after a few hours. Symptoms of anemia and non-anemia can then be identified from the hemoglobin level. This method is less efficient because it requires a long process and causes pain to patients. It is also considered less kind to certain individuals, since it punctures the patient's skin with a hypodermic needle, especially for infants, elderly patients, and other vulnerable groups.

In this study, we detect anemia and non-anemia based on palm images using a Convolutional Neural Network (CNN), offering an improvement for patients that does not require a hypodermic needle. The algorithm is used with the expectation of achieving high accuracy: with an accurate model, the detection of anemia and non-anemia becomes possible with only a palm image.

II. METHOD

2.1 Image Processing

An image can be characterized as a two-dimensional function containing a set of pixels. Every pixel is located by two integers that indicate its position in the image field, while its brightness is commonly represented by an 8-bit value, which means there are 2^8 = 256 gray levels in the interval [0, 255], where 0 is black, 255 is white, and all intermediate values are shades of gray varying from black to white. A digital image is an image that can be processed directly on a computer. A digital image of size M × N is represented by a matrix with M rows and N columns:

f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
            f(1, 0)      f(1, 1)      ...  f(1, N-1)
            ...          ...          ...  ...
            f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

The indexes x and y denote the rows and columns: the x-index moves down and the y-index moves right, so the origin f(0, 0) is located in the top-left corner and the last pixel is f(M-1, N-1), which indicates the position of each pixel [19].
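As a concrete illustration of this pixel-matrix representation (a sketch, not code from the paper; the file name and the choice of Pillow/NumPy are assumptions), a palm photograph can be loaded into exactly such an M × N array:

    from PIL import Image
    import numpy as np

    # Load one palm photograph (hypothetical file name) and keep its RGB channels.
    img = Image.open("palm_001.jpg").convert("RGB")

    # The image becomes an M x N x 3 array: M rows, N columns, 3 color channels.
    pixels = np.asarray(img)
    print(pixels.shape)   # e.g. (375, 500, 3) for the 500x375-pixel images used in this study
    print(pixels.dtype)   # uint8, so every value is an 8-bit intensity in [0, 255]
    print(pixels[0, 0])   # f(0, 0): the RGB intensities of the top-left pixel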
2.2 Convolutional Neural Network (CNN)

The CNN is an improvement of the Multi-Layer Perceptron (MLP) and a recent technique for processing two-dimensional data. It is a deep learning algorithm because of its network depth and is widely applied to image data. A CNN consists of an input layer, an output layer, and hidden layers; the hidden layers generally contain convolutional layers, pooling layers, and fully connected layers [20].

2.2.1 Convolutional Layer

The convolutional layer is the core of the CNN, and most of the computation is done in this layer. The convolutional layer in a CNN architecture generally uses more than one filter. Each filter acts as a feature detector: it is convolved with the input image, sliding by a given stride until all receptive fields are covered, thereby producing the convolved feature [21]. The convolution process is illustrated in Fig 1.


Fig 1: Convolution process

The convolution operation is the sum of the products of the elements located at the same positions in the two matrices.
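This operation can be sketched directly in NumPy (an illustration, not code from the paper): each output element is the sum of the element-wise products of the filter and the image patch it currently covers, and the filter slides over the input with a stride of 1:

    import numpy as np

    image = np.array([[1, 2, 0, 1],
                      [0, 1, 3, 2],
                      [1, 0, 2, 1],
                      [2, 1, 0, 0]])
    kernel = np.array([[1, 0],
                       [0, 1]])        # a 2x2 filter acting as a feature detector

    out_h = image.shape[0] - kernel.shape[0] + 1
    out_w = image.shape[1] - kernel.shape[1] + 1
    feature = np.zeros((out_h, out_w))

    for i in range(out_h):             # slide the filter with stride 1
        for j in range(out_w):
            patch = image[i:i + 2, j:j + 2]
            feature[i, j] = np.sum(patch * kernel)   # sum of products at matching positions

    print(feature)                     # the 3x3 convolved feature map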
2.2.2 Pooling Layer

The pooling layer keeps the size of the data manageable during the convolution process by performing downsampling. In this layer, the data representation becomes smaller, easier to manage, and simpler to control against overfitting. The pooling scheme normally used is max pooling, which selects the maximum value within a specific area [22]. The pooling operation is shown in Fig 2.

Fig 2: Max pooling operation
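A minimal sketch of 2×2 max pooling with stride 2 (illustrative code, not taken from the paper): each 2×2 block of the feature map is reduced to its maximum value, which downsamples the data:

    import numpy as np

    feature = np.array([[1, 3, 2, 0],
                        [4, 6, 1, 5],
                        [7, 2, 9, 8],
                        [0, 1, 3, 4]])

    pooled = np.zeros((2, 2))
    for i in range(2):                  # step over the rows two at a time
        for j in range(2):              # and over the columns two at a time
            block = feature[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            pooled[i, j] = block.max()  # keep only the maximum of each block

    print(pooled)                       # [[6. 5.]
                                        #  [7. 9.]]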


2.2.3 Activation Function

The activation function is a function that activates neurons; it can be applied after the convolution or pooling process [23]. There are three common activation functions.

Sigmoid function: the output of this function is always in the range 0 to 1. The disadvantage of the sigmoid is that the gradient tends to become small or zero when the input is very small or very large, which causes the network to stop learning or to learn drastically slowly. This function is used for classification problems.

Tanh function: the output has a range between -1 and 1. This function also tends to learn slowly, but slightly faster and with somewhat stronger gradients than the sigmoid. It is used for machine learning classification.

ReLU function: the output of this function is 0 if the input value is negative; if the input value is positive, the output is the input value itself. The advantage of this function is that it is more computationally efficient and converges faster, because it is linear for positive inputs, which overcomes the weakness of the sigmoid and Tanh.

2.2.4 Fully Connected Layer

In this layer, every neuron has a full connection to all activations in the previous layer. The output of the convolution layers must be flattened into one-dimensional data so that the data can be classified linearly. The fully connected layer is implemented at the end of the network.

Fig 3: Fully connected layer

Overall, a Convolutional Neural Network model for image classification consists of convolutional layers, pooling layers, fully connected layers, and an output layer [24].

Fig 4: CNN Architecture
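Such an architecture can be sketched with TensorFlow/Keras, which is available in the Google Colab environment used in this study. The paper does not report the exact number of layers, filter counts, or kernel sizes, so the values below are assumptions; only the 500×375 RGB input and the binary anemia/non-anemia output follow the text:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    model = models.Sequential([
        # Convolutional layers act as feature detectors on the RGB palm image.
        layers.Conv2D(16, (3, 3), activation="relu", input_shape=(375, 500, 3)),
        layers.MaxPooling2D((2, 2)),            # downsampling by max pooling
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                       # flatten to one-dimensional data
        layers.Dense(64, activation="relu"),    # fully connected layer
        layers.Dense(1, activation="sigmoid"),  # output in [0, 1]: anemia vs. non-anemia
    ])

    model.summary()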
2.3 Performance Evaluation

The confusion matrix is a two-dimensional matrix that describes the performance of the model on the test data. Each of its columns represents a predicted class and each row represents an actual class. In the confusion matrix, a true negative (TN) indicates a correct prediction for the negative class, while a false negative (FN) means that the actual data is positive but is predicted as negative. A true positive (TP) is a correct prediction for the positive class, while a false positive (FP) is actual negative data that is predicted as positive. The entries of the confusion matrix are used to calculate the accuracy, recall, specificity, and precision [25].

2.3.1 Accuracy

Accuracy is the ratio of correct predictions to all data. It describes how precisely the model can predict.


Accuracy = (TP + TN) / (TP + TN + FP + FN)        (1)

2.3.2 Recall

Recall or sensitivity (True Positive Rate) is the ratio of true positive predictions to all actual positive data.

Recall = TP / (TP + FN)        (2)

2.3.3 Specificity

Specificity (True Negative Rate) is the ratio of true negative predictions to all actual negative data.

Specificity = TN / (TN + FP)        (3)

2.3.4 Precision

Precision is the ratio of correct positive predictions to all data predicted as positive.

Precision = TP / (TP + FP)        (4)
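Equations (1)-(4) can be checked directly from confusion-matrix counts. The sketch below (illustrative code, not from the paper) uses the counts reported later in Table 3 for Cluster 1 in random state 1 (TP = 15, FN = 0, FP = 1, TN = 12) and reproduces the 96.43% accuracy, 93.75% precision, 100% recall, and 92.31% specificity listed in Table 1:

    def evaluate(tp, fn, fp, tn):
        accuracy    = (tp + tn) / (tp + tn + fp + fn)   # Eq. (1)
        recall      = tp / (tp + fn)                    # Eq. (2), sensitivity / TPR
        specificity = tn / (tn + fp)                    # Eq. (3), TNR
        precision   = tp / (tp + fp)                    # Eq. (4)
        return accuracy, recall, specificity, precision

    # Cluster 1, random state 1 (Table 3): TP = 15, FN = 0, FP = 1, TN = 12
    acc, rec, spec, prec = evaluate(tp=15, fn=0, fp=1, tn=12)
    print(f"accuracy={acc:.2%}  precision={prec:.2%}  recall={rec:.2%}  specificity={spec:.2%}")
    # accuracy=96.43%  precision=93.75%  recall=100.00%  specificity=92.31%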

2.4 Data

The data used in this study were images of patients' palms obtained directly from Soebandi General Hospital, Jember Regency, Indonesia. The images were captured with an Android phone camera while applying the proper health protocol. The research population was patients in the hospital, and every image with its embedded label is used as the dataset. The image data comprise 193 images, and the patients had their hemoglobin levels tested in the clinical laboratory: 57 patients have anemia (25 men and 32 women), because their hemoglobin level is under the normal limit, and 136 patients are non-anemic (73 men and 63 women). Examples of the image data are shown in Fig 5.

Fig 5: Image Data

This research was carried out in systematic steps; the architecture is shown in Fig 6.

Fig 6: Research Framework

First, data retrieval was done by applying the proper health protocol. The collected data are then preprocessed: the pixel values of each image are extracted so the machine can analyze the pattern of pixel values, and in the next stage these pixel patterns are matched with their labels. This step needs a balancing strategy to adjust the samples for the classification task.

After that, the data were clustered into two clusters via a random state. The cluster results were separated into a training set (80%) and a testing set (20%), so the data are ready to be fed to a model; this study uses the CNN model. The model is trained using the available training set. The training process goes through several repetitions; in each repetition, the model learns the pattern of all training data so that it can make a decent prediction. The model is expected to have high accuracy and low loss. During this process, the accuracy on the validation data is checked; the process is not finished at that point, because it is sometimes necessary to change parameters to reach the accuracy of a decent model. A sketch of the clustering and splitting steps is given after this section.

Testing the model is a truly important part of machine learning, since it aims to test the model that has been built so that classification results are obtained. The result is then evaluated with the standard goodness-of-fit measures for predictive machine learning models by analyzing accuracy, precision, recall, and specificity.
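The clustering and splitting steps of this framework can be sketched as follows (an illustration only: the paper does not name the clustering algorithm, so k-means is an assumption here, random_state is used in the scikit-learn sense, and the variable names are hypothetical):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.model_selection import train_test_split

    def cluster_and_split(images, labels, rs):
        # images: array of shape (193, 375, 500, 3); labels: 0 = anemia, 1 = non-anemia.
        flat = images.reshape(len(images), -1)

        # Cluster the flattened pixel patterns into two groups for a given random state.
        cluster_id = KMeans(n_clusters=2, random_state=rs).fit_predict(flat)

        splits = {}
        for c in (0, 1):
            X, y = images[cluster_id == c], labels[cluster_id == c]
            # 80% training set and 20% testing set inside each cluster.
            splits[c] = train_test_split(X, y, test_size=0.2, random_state=rs)
        return splits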


There are also hyperparameters that can be adjusted to control the model improvement; different hyperparameter values affect the model training. The hyperparameters used in the training process are the following. Epochs: the number of times to iterate over the dataset. Batch size: the number of data samples shown to the model at each step. Learning rate: how much to update the model parameters at each batch; a small learning rate makes the training process run slowly.
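A sketch of how these hyperparameters enter the training call in TensorFlow/Keras, continuing the model and data sketches above (the BinaryCrossentropy loss is stated in the Discussion; the Adam optimizer and the concrete epoch, batch-size, and learning-rate values are assumptions for illustration):

    import tensorflow as tf

    # model comes from the architecture sketch; X_train, y_train, X_val, y_val from the split.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # learning rate per batch update
        loss=tf.keras.losses.BinaryCrossentropy(),                # loss function used in this study
        metrics=["accuracy"],
    )

    history = model.fit(
        X_train, y_train,
        validation_data=(X_val, y_val),  # accuracy is checked on the validation data
        epochs=20,                       # number of passes over the dataset (illustrative)
        batch_size=8,                    # samples shown to the model at each step (illustrative)
    )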
In this study, we use Python programming on the web application known as Google Colab; its official site is www.colab.research.google.com. Google Colab offers various libraries, including NumPy, Pandas, TensorFlow, and Matplotlib, and it additionally provides storage media connected to Google Drive. In this research, we use a Graphics Processing Unit (GPU) as the processor, because it has more cores, can perform parallel computing, and is suitable for image processing [26].

III. RESULTS

In the classification results there are two classes, namely anemia and non-anemia. The testing of the model uses 28 images in each cluster for the different random states. The results of the testing are as follows.
Table 1: Classification results for Cluster 1

RS   Loss   Accuracy (%)   Precision (%)   Recall (%)   Specificity (%)
0    2.26   78.57          84.62           73.33        84.62
1    0.26   96.43          93.75           100          92.31
2    0.18   92.86          92.86           92.86        92.86
3    0.03   96.43          91.67           100          94.12
4    1.39   71.43          60.00           81.82        64.71
5    0.54   89.29          93.33           87.50        91.67
6    0.65   89.29          83.33           90.91        88.24
7    0.17   96.43          90.00           100          94.74
8    0.39   82.14          80.95           94.44        60.00
9    0.99   89.29          93.33           87.50        91.67
RS: Random state

Based on the test results above, the model can classify anemia well. The accuracy reaches 96.43%, which is categorized as excellent classification. The testing results for Cluster 2 are shown in Table 2.

Table 2: Classification results for Cluster 2

RS   Loss   Accuracy (%)   Precision (%)   Recall (%)   Specificity (%)
0    1.27   89.29          91.67           84.62        93.33
1    0.11   96.43          93.33           100          92.86
2    1.01   85.71          84.62           84.62        86.67
3    1.31   89.29          92.86           86.67        92.31
4    0.61   92.86          88.24           100          84.62
5    0.17   92.86          85.71           100          87.50
6    0.87   92.86          92.86           92.86        92.86
7    1.22   82.14          87.50           82.35        81.82
8    0.82   82.14          88.89           84.21        77.78
9    0.72   89.29          100             76.92        100

In model optimization, a loss function and an optimizer are needed to train the model. The best training result is obtained for Cluster 1 in random state 1, and likewise for Cluster 2 in random state 1, as shown in Fig 7.

Fig 7: The best model training of all random states. (a) Cluster 1; (b) Cluster 2

A confusion matrix is also needed to evaluate the performance of the model. It is used to show how the model behaves when making predictions: it provides information not only about the errors made by the model but also about the types of errors made.


The best result confusion matrix is shown in Fig 8.

Fig 8: The best result testing of all random states. (a) Cluster 1; (b) Cluster 2

Detailed classification results for the random states in the range 0 to 9 are shown in Table 3 and Table 4.

Table 3: Classification results for random states 0 to 9 for Cluster 1

Random state   TP   FN   FP   TN
0              11   4    2    11
1              15   0    1    12
2              13   1    1    13
3              11   0    1    16
4              9    2    6    11
5              14   2    1    11
6              10   1    2    15
7              9    0    1    18
8              17   1    4    6
9              14   2    1    11
TP: True Positive; FP: False Positive; FN: False Negative; TN: True Negative

Table 4: Classification results for random states 0 to 9 for Cluster 2

Random state   TP   FN   FP   TN
0              11   2    1    14
1              14   0    1    13
2              11   2    2    13
3              13   2    1    12
4              15   0    2    11
5              12   0    2    14
6              13   1    1    13
7              14   3    2    9
8              16   3    2    7
9              10   3    0    15

IV. DISCUSSION

The loss function measures the dissimilarity between the predicted data and the target data. To calculate the loss value, we make a prediction for a given input data sample and compare it with the actual label value; the lower the loss value, the more accurate the prediction model. The loss function used in this study is BinaryCrossEntropy. The loss value on the testing data for each random state is shown in Table 1 and Table 2.

Optimization is the process of adjusting the model parameters to reduce the model error at every step of the training process. Optimization aims to fit the training data to the target data while avoiding overfitting, which occurs when the model focuses too closely on the training data so that its performance becomes poor when it is tested on other data.

In Fig 7, the loss curves show that Cluster 2 converged faster than Cluster 1: for Cluster 2 the loss value reached 0.0065 in 5 epochs, while Cluster 1 needed 17 epochs to reach the same loss value.


Based on the testing accuracy in Table 1, the model produces a highest accuracy of 96.43%, meaning that it can recognize the images very well; the lowest accuracy is 71.43%, which is still considered fair classification. For Table 2, the highest accuracy achieved is 96.43% and the lowest accuracy is 82.14%.

As shown in Fig 8, anemia is symbolized by 0 and non-anemia is represented by 1. The best result is Cluster 1 in random state 1, and likewise for Cluster 2. In Cluster 1, the model detected 15 patients with anemia as having anemia, and 12 out of 13 patients with non-anemia as having non-anemia; however, the model misclassified one patient as anemia even though that patient is non-anemic. For Cluster 2, the model also classifies well, but there is 1 false positive, meaning a patient is detected as anemic even though they are actually non-anemic; there are 14 true positives, where the model predicts patients with anemia as having anemia, and the true negatives consist of 13 patients with non-anemia that the model predicts correctly as non-anemia.

From Table 3, the model of Cluster 1 in random state 0 classified 11 out of 15 patients with anemia as having anemia and 11 out of 13 non-anemia patients as non-anemia; it misclassified 4 anemia patients as having non-anemia and 2 non-anemia patients as having anemia. In random state 1, the model misidentified one patient with non-anemia as having anemia. In random state 2, there is 1 false positive and 1 false negative. In random state 3, the CNN detected 11 patients as having anemia, 16 patients with non-anemia were detected correctly as non-anemia, and it misclassified 1 patient with non-anemia as having anemia. In random state 4, the model could not predict 8 patients correctly. In random state 5, the CNN classified 14 out of 16 patients with anemia as having anemia and 11 out of 12 non-anemia patients as non-anemia. In random state 6, one patient was a false negative and two patients were false positives. In random state 7, the model detected one patient as anemia even though the patient is actually non-anemic. In random state 8, the CNN classified 17 out of 18 patients with anemia as having anemia and 6 out of 10 non-anemia patients as non-anemia. In random state 9, the model predicted 2 patients as non-anemia although they actually have anemia, and it also misidentified 1 non-anemia patient as having anemia.

From Table 4, the model in random state 0 misclassified 3 patients. In random state 1, only one patient is detected as a false positive. In random state 2, the CNN predicted 11 out of 13 patients with anemia as having anemia and 13 out of 15 non-anemia patients as non-anemia. In random state 3, there are 2 false negatives and 1 false positive. In random state 4, the model detected 11 out of 13 non-anemia patients as non-anemia and detected 15 patients with anemia correctly. In random state 5, the model misclassified 2 patients with non-anemia as having anemia. In random state 6, the CNN could not predict one patient with anemia and one patient with non-anemia. In random state 7, there are 3 false negatives and 2 false positives. In random state 8, the model detected 16 out of 19 patients with anemia as having anemia and 7 out of 9 non-anemia patients as non-anemia. In random state 9, the CNN misidentified 3 patients with anemia as having non-anemia.

There is some additional information about Cluster 1 and Cluster 2 in Fig 8. For Cluster 1 in random state 1, the average age of the patients is 47.54 years, the minimum and maximum ages are 14 years and 70 years, and their average hematocrit level is 32.69. For Cluster 2 in random state 1, the minimum and maximum ages of the patients are 18 years and 79 years, with an average age of 41.23 years, and their average hematocrit level is 33.55.

Different random state values cause the members of each cluster to differ as well. If the image resolution is too large and many random states are used, an error occurs in the training process: Google Colab only provides 12 GB of free RAM, and the error is essentially an out-of-memory error on Google Colab, where the session crashes after using all available RAM.

V. CONCLUSION

The utilization of CNN is reliable enough to detect anemia and can possibly be applied in the medical sector. This is proven by the accuracy of 96.43%, with a loss value of 0.03 for Cluster 1 and a loss value of 0.11 for Cluster 2. Model testing achieved maximum results, meaning that the model can detect the categories in all the experiments that were done. The lower accuracy results are due to the tested palm image features having many similarities with other palm image features, so the model misclassified those palms.

The best result for Cluster 1 is random state 1, and likewise for Cluster 2. The average hematocrit level for Cluster 1 in random state 1 is 32.69, with an average patient age of 47.54 years, while for Cluster 2 in random state 1 the average hematocrit level is 33.55, with an average patient age of 41.23 years.

ACKNOWLEDGEMENTS

We would like to thank Soebandi General Hospital for providing the patients' palm image data. We also thank all members of the Research Group between the Faculty of Mathematics and Natural Sciences and the Faculty of Medicine, Jember University, Indonesia.
negative and 1 patient as false positive. In random state-4, provided the patient's palm image data. We also tank to all


REFERENCES

[1] E. Ghanbari and S. Najafzadeh, "Machine Learning," Mach. Learn. Big Data Concepts, Algorithms, Tools Appl., vol. 8, no. 04, pp. 155–207, 2020, doi: 10.1002/9781119654834.ch7.
[2] Y. Fang, J. Zhao, L. Hu, X. Ying, Y. Pan, and X. Wang, "Image classification toward breast cancer using deeply-learned quality features," J. Vis. Commun. Image Represent., vol. 64, p. 102609, 2019, doi: 10.1016/j.jvcir.2019.102609.
[3] D. S. Kermany et al., "Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning," Cell, vol. 172, no. 5, pp. 1122–1131.e9, 2018, doi: 10.1016/j.cell.2018.02.010.
[4] C. Ouchicha, O. Ammor, and M. Meknassi, "CVDNet: A novel deep learning architecture for detection of coronavirus (Covid-19) from chest x-ray images," Chaos, Solitons and Fractals, vol. 140, p. 110245, 2020, doi: 10.1016/j.chaos.2020.110245.
[5] D. Dansana et al., "Early diagnosis of COVID-19-affected patients based on X-ray and computed tomography images using deep learning algorithm," Soft Comput., 2020, doi: 10.1007/s00500-020-05275-y.
[6] M. Hammad, M. H. Alkinani, B. B. Gupta, and A. A. Abd El-Latif, "Myocardial infarction detection based on deep neural network on imbalanced data," Multimed. Syst., 2021, doi: 10.1007/s00530-020-00728-8.
[7] J. Wang et al., "Detecting Cardiovascular Disease from Mammograms with Deep Learning," IEEE Trans. Med. Imaging, vol. 36, no. 5, pp. 1172–1181, 2017, doi: 10.1109/TMI.2017.2655486.
[8] S. Deepak and P. M. Ameer, "Brain tumor classification using deep CNN features via transfer learning," Comput. Biol. Med., vol. 111, p. 103345, 2019, doi: 10.1016/j.compbiomed.2019.103345.
[9] A. Esteva et al., "Dermatologist-level classification of skin cancer with deep neural networks," Nature, vol. 542, no. 7639, pp. 115–118, 2017, doi: 10.1038/nature21056.
[10] T. Ozturk, M. Talo, E. Azra, U. Baran, and O. Yildirim, "Automated detection of COVID-19 cases using deep neural networks with X-ray images," Comput. Biol. Med., vol. 121, 2020.
[11] P. Rajpurkar et al., "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning," pp. 3–9, 2017. [Online]. Available: http://arxiv.org/abs/1711.05225
[12] M. Talo, O. Yildirim, U. B. Baloglu, G. Aydin, and U. R. Acharya, "Convolutional neural networks for multi-class brain disease detection using MRI images," Comput. Med. Imaging Graph., vol. 78, p. 101673, 2019, doi: 10.1016/j.compmedimag.2019.101673.
[13] A. Comelli et al., "Lung Segmentation on High-Resolution Computerized Tomography Images Using Deep Learning: A Preliminary Step for Radiomics Studies," J. Imaging, vol. 6, no. 11, pp. 1–14, 2020, doi: 10.3390/jimaging6110125.
[14] P. Kim, MATLAB Deep Learning: With Machine Learning, Neural Networks and Artificial Intelligence. Apress, 2017, doi: 10.1007/978-1-4842-2845-6.
[15] C. Thomas and A. B. Lumb, "Physiology of haemoglobin," Contin. Educ. Anaesthesia, Crit. Care Pain, vol. 12, no. 5, pp. 251–256, 2012, doi: 10.1093/bjaceaccp/mks025.
[16] C. M. Chaparro and P. S. Suchdev, "Anemia epidemiology, pathophysiology, and etiology in low- and middle-income countries," Ann. N. Y. Acad. Sci., vol. 1450, no. 1, pp. 15–31, 2019, doi: 10.1111/nyas.14092.
[17] B. Santra, D. P. Mukherjee, and D. Chakrabarti, "A non-invasive approach for estimation of hemoglobin analyzing blood flow in palm," Proc. - Int. Symp. Biomed. Imaging, vol. 1, pp. 1100–1103, 2017, doi: 10.1109/ISBI.2017.7950708.
[18] V. P. Kharkar and V. R. Ratnaparkhe, "Hemoglobin Estimation Methods: A Review of Clinical, Sensor and Image Processing Methods," Int. J. Eng. Res. Technol., vol. 2, no. 1, pp. 1–7, 2013.
[19] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 4th ed. New York: Pearson, 2018.
[20] Suyanto, Machine Learning Tingkat Dasar dan Lanjut. Bandung: Informatika, 2018.
[21] R. Jain, N. Jain, A. Aggarwal, and D. J. Hemanth, "Convolutional neural network based Alzheimer's disease classification from magnetic resonance brain images," Cogn. Syst. Res., vol. 57, pp. 147–159, 2019, doi: 10.1016/j.cogsys.2018.12.015.
[22] K. U. Ahamed et al., "A deep learning approach using effective preprocessing techniques to detect COVID-19 from chest CT-scan and X-ray images," Comput. Biol. Med., vol. 139, p. 105014, 2021, doi: 10.1016/j.compbiomed.2021.105014.
[23] F. Giannakas, C. Troussas, I. Voyiatzis, and C. Sgouropoulou, "A deep learning classification framework for early prediction of team-based academic performance," Appl. Soft Comput., vol. 106, p. 107355, 2021, doi: 10.1016/j.asoc.2021.107355.
[24] F. Sultana, A. Sufian, and P. Dutta, "Advancements in image classification using convolutional neural network," Proc. - 2018 4th IEEE Int. Conf. Res. Comput. Intell. Commun. Networks (ICRCICN 2018), pp. 122–129, 2018, doi: 10.1109/ICRCICN.2018.8718718.
[25] A. Newaz, N. Ahmed, and F. Shahriyar Haq, "Survival prediction of heart failure patients using machine learning techniques," Informatics Med. Unlocked, vol. 26, p. 100772, 2021, doi: 10.1016/j.imu.2021.100772.
[26] L. Pan, L. Gu, and J. Xu, "Implementation of medical image segmentation in CUDA," 5th Int. Conf. Inf. Technol. Appl. Biomed. (ITAB 2008) in conjunction with 2nd Int. Symp. Summer Sch. Biomed. Heal. Eng. (IS3BHE 2008), pp. 82–85, 2008, doi: 10.1109/ITAB.2008.4570542.