
Efficient Feature Extraction and Classification Architecture for MRI-Based Brain Tumor Detection


Plabon Paul1, Md. Nazmul Islam2, Fazle Rafsani2,3, Pegah Khorasani4, Shovito Barua Soumma2,4
Department of Mechanical Engineering1,
Department of Computer Science and Engineering2,
Bangladesh University of Engineering & Technology, Dhaka, Bangladesh
School of Computing and Augmented Intelligence3, Arizona State University
College of Health Solutions4, Arizona State University
Email: [email protected], [email protected], {frafsani, pkhorasa, shovito}@asu.edu

Abstract—Uncontrolled cell division in the brain gives rise to brain tumors. If the tumor grows by more than half, there is little hope for the patient's recovery, which emphasizes the need for rapid and precise brain tumor diagnosis. MRI imaging plays a crucial role in analyzing, diagnosing, and planning therapy for brain tumors, and a tumor's development history is crucial information for doctors to have. MRI scans are also superior when it comes to distinguishing between human soft tissues. Deep learning is one of the most practical methods for obtaining reliable classification results from MRI scans quickly, and early diagnosis of human illness has been demonstrated to be more accurate when deep learning methods are used. Accuracy is especially important in diagnosing a brain tumor, where even a small misdiagnosis might have serious consequences, yet detecting brain tumors in medical images is still a difficult task and brain MRIs can be imprecise in revealing the presence or absence of tumors. In this research, a Convolutional Neural Network (CNN) was trained on brain MRI scans to identify the presence of a tumor, and the CNN model achieved an accuracy of 99.17%. The CNN model's features were also extracted and, to evaluate the CNN's capability as an image feature extractor, fed into the following machine learning models: KNN, logistic regression, SVM, random forest, naive Bayes, and perceptron. The CNN and machine learning models were evaluated using the standard metrics of precision, recall, specificity, and F1 score. Combined with a doctor's diagnosis, the CNN model can assist in identifying the existence of a tumor and treating the patient.

Index Terms—Brain Tumor, CNN, Deep Neural Network, Medical Imaging, Machine Learning, SVM, Feature Extraction, ML

I. INTRODUCTION

The brain is a core part of the central nervous system (CNS), which regulates all physiological and cognitive activities, i.e., thought, emotion, touch, motor skills, vision, respiration, etc. [1]. A brain tumor is uncontrolled cell division in the brain or other parts of the CNS that causes malfunction. The malignancy of a tumor depends on how fast its cells reproduce. Non-malignant (benign, not cancerous) tumors grow slowly and do not spread into other tissues. Malignant brain tumors are cancerous; most of the time, they multiply and invade neighboring healthy tissues [2] [3]. In the USA, an estimated 20,500 primary brain tumors are identified each year; 3750 of these cases involve people under the age of 19, and 2870 involve children under the age of 15 [4]. Compared to tumors in any other organ of the human body, diagnosing a tumor in the brain is particularly difficult. The blood-brain barrier (BBB) surrounds the brain, making it impossible for regular radioactive markers to detect the tumor cells' increased activity. Additionally, tumor size, shape, location, and type make early detection more challenging [3] [5]. Brain tumor incidence is higher in developed countries: Australia, North America, and Northern Europe have the highest rates of brain tumors, while Africa has the lowest rate [6].

Deep learning and machine learning are two technologies that have significantly advanced many fields of application [7]–[10]. In particular, a vast field of study has opened up in medical image processing, and several studies are currently being conducted in this area. An important aspect of this research involves automating the segmentation and classification of brain tumors. Akram et al. [11] introduced a computer-aided method for identifying tumors in which they segmented tumors using global thresholding. Sazzad et al. [12] presented an automated method for detecting brain tumors in which the images were preprocessed using image-enhancement filters, followed by segmentation and the extraction of features to identify the tumor. Parasuraman et al. [13] used a feed-forward neural network to classify tumor and normal regions; their method involves four steps: pre-processing with filters, image segmentation with clustering, feature extraction using the gray-level co-occurrence matrix (GLCM), and tumor classification with an ensemble classifier. Irmak et al. [14] developed three CNN models for three different datasets, achieving accuracies of 99.33%, 92.66%, and 98.14%, respectively, although the multi-class accuracy was not as good as the binary classification accuracy.

II. MATERIALS AND METHODS

We implemented a CNN model built from scratch, three pretrained models, and five traditional ML models in our proposed method. Our main objective is to diagnose brain tumors effectively and precisely by passing MRI images of the tumors through a CNN. Fig. 1 represents the workflow of our study: labeled MRI images are supplied to a CNN feature extractor after minimal preprocessing, and the retrieved features are fed into a classification layer. Finally, we compared the performance of our trained model to the current state of the art in various respects.

Fig. 1: Workflow of the suggested deep learning approach, demonstrating the essential phases of preprocessing, feature extraction, classification, and model validation for brain tumor detection.
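For illustration, the sketch below wires this pipeline together in Python with a frozen ImageNet-pretrained trunk as the CNN feature extractor and an SVM as the classification stage. It is a minimal sketch under stated assumptions, not the exact configuration used in this work: the stand-in arrays, the 64*64 input size, and the choice of EfficientNetB0 and SVC are illustrative.

```python
# Pipeline sketch: preprocess labeled MRI images, extract features with a CNN,
# and classify the resulting feature vectors with a conventional ML model.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for the preprocessed dataset: (n, 64, 64, 3) images, binary labels.
X = np.random.randint(0, 256, size=(200, 64, 64, 3)).astype("float32")
y = np.random.randint(0, 2, size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# CNN feature extractor (here an ImageNet-pretrained EfficientNetB0 trunk
# with global average pooling, used purely as a fixed extractor).
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(64, 64, 3), pooling="avg")
features_tr = base.predict(X_tr, verbose=0)
features_te = base.predict(X_te, verbose=0)

# Classification layer on top of the extracted features.
clf = SVC(kernel="rbf").fit(features_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(features_te)))
```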
A. Dataset

The Brain Tumor Detection 2020 (BR35H) [15] dataset, which includes two classes of brain-tumor MRIs (1500 negative and 1500 positive), is used to train the CNN. 80% of the images from this dataset are used for training the model. Fig. 2 displays a few samples from the dataset, covering both classes of brain MRI.

Fig. 2: Samples from the brain MRI dataset.

B. Data Preprocessing

All images are preprocessed before being fed to the CNN, as described in Fig. 1. The images are first converted to single-channel images, also referred to as greyscale images. Because the image dimensions vary across the dataset, the images are then reshaped to a fixed size. Each image is next converted to a two-dimensional array and, finally, normalized so that the value of each array element falls in the range [0,1].
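A minimal sketch of this preprocessing is shown below, assuming OpenCV and NumPy. The 32*32 target size matches the scratch CNN's input described later, while the folder names and file extension are illustrative assumptions rather than the dataset's actual layout.

```python
# Preprocessing sketch: greyscale conversion, resizing to a fixed size,
# and normalization of pixel values to the range [0, 1].
import glob
import cv2
import numpy as np

def preprocess_image(path, size=(32, 32)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # single-channel image
    img = cv2.resize(img, size)                    # fixed spatial size
    return img.astype("float32") / 255.0           # values in [0, 1]

def load_dataset(pos_dir="yes", neg_dir="no", size=(32, 32)):
    """Returns (X, y) where X has shape (n, H, W, 1) and y holds 0/1 labels."""
    X, y = [], []
    for label, folder in enumerate([neg_dir, pos_dir]):   # 0 = no tumor, 1 = tumor
        for path in glob.glob(f"{folder}/*.jpg"):
            X.append(preprocess_image(path, size))
            y.append(label)
    X = np.expand_dims(np.array(X), axis=-1)       # add channel dimension
    return X, np.array(y)
```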
C. Feature Extraction

1) Modified DenseNet: DenseNet is a type of network architecture in which each layer is directly connected to every other layer within a dense block [16]. For each layer, the feature maps of all preceding layers are treated as separate inputs, while its own feature maps are passed on as inputs to all subsequent layers. The original network's inputs are 256*256 in size. The model has 707 layers, and the layers after layer 500 are fine-tuned for better performance. A global average pooling layer and a dense layer have also been added, which enhance the performance of the original model.

2) Modified ResNet50: Residual Networks, or ResNet for short, took first place in the 2015 ImageNet competition [17] and are now employed for a variety of computer-vision applications. Their major objective is to solve the issue of vanishing gradients and drastically reduce the number of parameters when training a very deep neural network. As shown in Fig. 3c, layer connections are skipped. The inputs in the original architecture are 224*224 in size. The network has 175 layers, and the layers after layer 125 are fine-tuned. A global average pooling layer and a dense layer have been added, which enhance the performance of the original model.

3) Modified EfficientNetB0: Tan et al. developed the EfficientNetB0 architecture to make model scaling simpler by balancing the network's width, depth, and input resolution to increase accuracy [18]. This study uses the underlying EfficientNet-B0 network, which is based on the MobileNetV2 inverted bottleneck residual blocks. The inputs in the original network are 224*224 in size; in the modified network they are changed to 64*64. The architecture has 237 layers, and the layers after layer 150 are fine-tuned. In addition, a dense layer and a global average pooling layer have been added to the original model to improve its performance.
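A sketch of one such modification is given below, using DenseNet201 from tf.keras.applications with the early layers frozen and the later layers left trainable, followed by global average pooling and a dense classification layer. The cut-off index of 500 follows the description above; the 3-channel 256*256 input (required by the ImageNet weights) and the training settings are assumptions rather than the exact setup used here.

```python
# Modified pretrained feature extractor: freeze the early layers, fine-tune the
# rest, and append global average pooling plus a dense classification layer.
import tensorflow as tf

def build_modified_densenet(input_shape=(256, 256, 3), freeze_until=500):
    base = tf.keras.applications.DenseNet201(
        include_top=False, weights="imagenet", input_shape=input_shape)
    for layer in base.layers[:freeze_until]:       # keep the early layers frozen
        layer.trainable = False
    for layer in base.layers[freeze_until:]:       # fine-tune the later layers
        layer.trainable = True
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The same pattern applies to the modified ResNet50 and EfficientNetB0 extractors, with the corresponding base model, input size, and freeze index.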
4) CNN Model from Scratch: Fig. 3a shows the architecture of a twelve-layer CNN model built from scratch to classify brain tumors. The model was built by trying different hyper-parameter settings; the best configuration was selected by running a random search over various combinations of the model's hyper-parameters using Keras Tuner. The model has four convolution layers, four max-pooling layers, one batch-normalization layer, one flatten layer, and one dropout layer. The input size is 32*32; after passing through all the convolution, max-pooling, batch-normalization, and dropout layers, the input is reduced to a small width*height feature map. Every convolution layer uses the ReLU activation function, and the model is optimized with the Adam optimizer and the sparse categorical cross-entropy loss function.
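The following is a sketch of a comparable scratch-built network together with the Keras Tuner random search described above. The exact filter counts, dropout range, and search budget are assumptions, since only the layer types, the 32*32 input, and the optimizer/loss are reported.

```python
# Scratch CNN sketch: four Conv2D + MaxPooling2D blocks, batch normalization,
# flatten, dropout, and a dense output layer, tuned with a random search.
import tensorflow as tf
import keras_tuner as kt

def build_model(hp):
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(32, 32, 1)))
    for i in range(4):                                     # four conv/pool blocks
        filters = hp.Int(f"filters_{i}", 32, 128, step=32)
        model.add(tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dropout(hp.Float("dropout", 0.2, 0.5, step=0.1)))
    model.add(tf.keras.layers.Dense(2, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Random search over the hyper-parameter space (data loading omitted).
tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=10, overwrite=True)
# tuner.search(X_train, y_train, validation_split=0.2, epochs=20)
# best_model = tuner.get_best_models(num_models=1)[0]
```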

D. Machine Learning Classifiers

1) KNN: KNN is a supervised learning algorithm that can solve both classification and regression prediction problems. It is a lazy algorithm because it has no training phase and uses all of the training data at classification time. In this algorithm, the number K of neighbors is chosen first. The Euclidean distances to the training points are then calculated, and the K nearest neighbors are selected based on these distances. For each category, the number of data points among those K neighbors is counted, and the category with the highest neighbor count is assigned to the new data point.

2) SVM: Parametric classification aims to describe the typical feature-space values or distribution of each class. SVM, in contrast, focuses solely on the training samples that are situated closest to the ideal class border in the feature space [19] [20]. The name of the approach comes from these samples, which are known as support vectors. The SVM classifier is fundamentally binary in that it recognizes only one distinction between two classes.

3) Naive Bayes: Naive Bayesian networks (NB) are the most basic type of Bayesian network. They consist of DAGs with a single parent (representing the invisible node) and a large number of children (corresponding to the visible nodes), and they make the strong assumption that each child node is independent of the other child nodes. As a result, the independence model (naive Bayes) is based on estimating:

R = P(i|X) / P(j|X) = [P(i) P(X|i)] / [P(j) P(X|j)] = [P(i) ∏_r P(x_r|i)] / [P(j) ∏_r P(x_r|j)]   (1)

If R > 1, the naive Bayes model predicts i; otherwise it predicts j. Typically, Bayes classifiers perform less accurately than other, more advanced learning algorithms (such as ANNs).
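As a toy illustration of this decision rule, the snippet below evaluates the ratio R of Eq. (1) for made-up class priors and per-feature likelihoods; the numbers are purely illustrative and are not values estimated from the data in this work.

```python
# Toy example of the naive Bayes ratio R = P(i|X) / P(j|X) from Eq. (1).
p_i, p_j = 0.5, 0.5                      # class priors P(i), P(j)
likelihood_i = [0.8, 0.6, 0.9]           # P(x_r | i) for three features
likelihood_j = [0.3, 0.5, 0.4]           # P(x_r | j) for the same features

num, den = p_i, p_j
for li, lj in zip(likelihood_i, likelihood_j):
    num *= li                            # P(i) * prod_r P(x_r | i)
    den *= lj                            # P(j) * prod_r P(x_r | j)

R = num / den
print(f"R = {R:.2f} ->", "predict i" if R > 1 else "predict j")   # R = 7.20 -> predict i
```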
4) Random Forest: Random forest is a classifier that employs several decision trees on various subsets of the input dataset and averages their outcomes to improve the predictive accuracy on the dataset [21]. The final class is determined by majority voting across all of the trees. The individual trees are less accurate because each sees less training data and fewer variables, but the ensemble as a whole is more accurate because of the reduced correlation between the trees. Due to the presence of several trees, RF has the specific advantage that no individual tree needs to be pruned.

5) Multi-Layer Perceptron: This classifier solves a quadratic programming problem with linear constraints, as opposed to the non-convex, unconstrained minimization problem that neural networks often use to find their weights [22]. In a multilayer perceptron, neurons are structured in layers and fully connected to one another by edges, forming a directed graph, i.e., a feedforward artificial neural network. As the term "feedforward" implies, this graph is acyclic. Each edge has a real value attached to it, known as the edge weight. The layers of a multilayer perceptron neural network are the input layer, a number of hidden layers, and the output layer. Every neuron has a summation function and an activation function. The summation function is expressed as follows:

S_j = Σ_{i=1}^{n} W_{ij} I_i + β_j   (2)

The activation function accepts the output of this summation function as its input. After the neural network has been built, the set of weights is adjusted to estimate the desired output.
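The sketch below shows how features extracted by a trained CNN can be passed to these classifiers with scikit-learn. The `extractor` referred to in the comment is assumed to be one of the feature extractors above (its global-average-pooling output); the placeholder arrays, feature dimension, and hyper-parameters are illustrative.

```python
# Feed CNN-extracted feature vectors to the classical classifiers compared in
# this work: KNN, logistic regression, SVM, naive Bayes, random forest,
# and perceptron.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Placeholder features; in practice: F_train = extractor.predict(X_train), etc.
F_train, F_test = np.random.rand(800, 64), np.random.rand(200, 64)
y_train, y_test = np.random.randint(0, 2, 800), np.random.randint(0, 2, 200)

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "Perceptron": Perceptron(),
}

for name, clf in classifiers.items():
    clf.fit(F_train, y_train)
    acc = accuracy_score(y_test, clf.predict(F_test))
    print(f"{name:>19}: {acc:.4f}")
```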
Fig. 3: CNN feature extractors used in this paper: (a) proposed 12-layer scratch-built CNN model, (b) modified EfficientNetB0, (c) modified ResNet50, (d) modified DenseNet201.

Fig. 4: Accuracy and loss curves for the different models: (a) ResNet50, (b) EfficientNetB0, (c) DenseNet, (d) scratch CNN.

III. RESULTS AND DISCUSSION

In this paper, we have implemented four different models: DenseNet, ResNet50, EfficientNetB0, and our own CNN model. For optimization of our deep learning models we used the Adam algorithm. First, the performance is evaluated using several performance metrics. During the training of a model, we concentrated on reducing loss while simultaneously boosting accuracy. Table I displays the validation accuracy of each model; we can observe that the best validation accuracy was achieved by our own CNN model, and Fig. 7b also displays this outcome graphically. The features extracted by these models are then sent to various machine learning models, and the outcome is shown in Fig. 7a. From Table II, we can conclude that Scratch+SVM, DenseNet+SVM, ResNet+SVM, and EfficientNet+Logistic Regression perform better than any other combination of machine learning and pretrained models; the comparison is illustrated in Fig. 7a.

TABLE I: Validation accuracy of the CNN models
Our CNN          99.17%
DenseNet         98.5%
ResNet50         99%
EfficientNetB0   98.33%

TABLE II: Accuracy (%) of classifiers on features from each CNN model
Classifier           | Scratch CNN | DenseNet | ResNet50 | EfficientNetB0
KNN                  | 99.33       | 99.167   | 99.83    | 99.5
Logistic Regression  | 99.17       | 99.67    | 99.83    | 99.83
SVM                  | 99.83       | 99.83    | 100      | 99.5
Naive Bayes          | 90          | 99.17    | 99.67    | 97
Random Forest        | 98.17       | 99.83    | 99.67    | 98.667
Perceptron           | 99.67       | 99.67    | 99.67    | 98.83

A. Performance Metrics

Evaluation metrics are used to measure the quality of a model, and evaluating models or algorithms is essential for any project. A model can be tested using a wide variety of evaluation metrics; a confusion matrix is used in this work. A confusion matrix is an N*N matrix, where N is the number of predicted classes; since N=2 for our problem, we obtain a 2*2 matrix. True Positive (TP) indicates the number of correctly classified tumor (positive) records. True Negative (TN) is the number of correctly classified normal records. False Positive (FP) is the number of normal records that were misclassified as tumors. False Negative (FN) is the number of tumor records that were wrongly classified as normal.

• Accuracy (AC): AC = (TP + TN) / (TP + TN + FP + FN)   (3)
• Precision (P): P = TP / (TP + FP)   (4)
• Recall (R): R = TP / (TP + FN)   (5)
• F1 score (F): F = ((recall^-1 + precision^-1) / 2)^-1   (6)
• Specificity (SP): SP = TN / (TN + FP)   (7)
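The sketch below computes these metrics directly from the confusion-matrix counts, following Eqs. (3)-(7); the example counts are arbitrary and not results from this work.

```python
# Classification metrics from 2x2 confusion-matrix counts, per Eqs. (3)-(7).
def classification_metrics(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)
    f1          = 2 * precision * recall / (precision + recall)  # harmonic mean, Eq. (6)
    specificity = tn / (tn + fp)
    return accuracy, precision, recall, f1, specificity

# Example with arbitrary counts.
print(classification_metrics(tp=295, tn=300, fp=3, fn=2))
```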
1) Comparison with Other Works: We now compare our findings with a variety of other approaches suggested in the literature. The comparative results are shown in Table III, which covers some of the newest methods proposed for locating brain tumors. The table shows that the highest previously reported accuracy, 99.3%, was achieved with WCNN; the other methods are less accurate than this. Even though our own CNN model is slightly less accurate than that of Sarhan et al. [23], it is still more accurate than the alternative approaches. A number of pretrained models that we also trained showed impressive accuracy. Then, in order to increase the accuracy of our models, we combined these models with machine learning models. The highest accuracy we achieved is 99.83%, which is higher than that reported by Sarhan et al. [23].
TABLE III: Performance comparison with other models
Authors                | Technique                                    | Accuracy (%)
Sarhan et al. [23]     | WCNN                                         | 99.3
Bhanothu et al. [24]   | Faster R-CNN                                 | 77.6
Ismael et al. [25]     | ResNet50                                     | 99
Kaplan et al. [26]     | KNN                                          | 95.56
Rehman et al. [27]     | VGG16                                        | 98.69
Tahir et al. [28]      | SVM                                          | 86
Sethy et al. [29]      | VGG19+SVM                                    | 97.89
Gajula et al. [30]     | U-Net                                        | 96.9
Ahmadi et al. [31]     | CNN                                          | 96
Al-Saffar et al. [32]  | MLP+SVM                                      | 91.02
Kaldera et al. [33]    | Faster R-CNN                                 | 94
Badza et al. [34]      | CNN                                          | 91.9
Ayadi et al. [35]      | CNN                                          | 98.49
Sultan et al. [36]     | DNN                                          | 96.61
Anaraki et al. [37]    | CNN                                          | 91.05
Kumar et al. [38]      | Residual network and global average pooling | 98.02
Abiwinanda et al. [39] | CNN                                          | 97.08
Deepak et al. [40]     | CNN and SVM                                  | 97.17
Our model              | DenseNet and RF                              | 99.83

Fig. 5: TPR and TNR of each model.

Fig. 6: Classification metrics of each model.

Fig. 7: Classification accuracy after feature extraction: (a) accuracy of classifiers for CNN models, (b) accuracy of CNN models.

IV. FUTURE WORK AND CONCLUSION

In this paper, we explored different models to detect brain tumors more effectively and precisely, and we surpassed all previously investigated methods with an accuracy of 99.83%. Since a brain tumor can lead to cancer, the impact of brain tumors is severe and endangers lives. We believe that our approach has the potential to reduce the risk of developing cancer and to save many lives. In the future, we hope to develop a model that can accurately detect all types of tumors. Although our model analyzes only MRI data, we also wish to take into account other medical imaging modalities, such as CT (Computed Tomography) and PET (Positron Emission Tomography) scans.

REFERENCES

[1] A. Gumaei, M. M. Hassan, M. R. Hassan, A. Alelaiwi, and G. Fortino, "A hybrid feature extraction method with regularized extreme learning machine for brain tumor classification," IEEE Access, vol. 7, pp. 36266–36273, 2019.
[2] "Brain tumor education." [Online]. Available: https://www.abta.org/about-brain-tumors/brain-tumor-education/
[3] J. Graber, C. Cobbs, and J. Olson, "Congress of neurological surgeons systematic review and evidence-based guidelines on the use of stereotactic radiosurgery in the treatment of adults with metastatic brain tumors," Neurosurgery, vol. 84, pp. E168–E170, 2019.
[4] K. Hoskinson, C. Fraley, M. Pearson, J. Kuttesch, and B. Compas, "Neurocognitive late effects of pediatric brain tumors of the posterior fossa: A quantitative review," Journal of the International Neuropsychological Society: JINS, vol. 19, pp. 1–10, 2012.
[5] K. Aboody, A. Brown, N. Rainov, K. Bower, S. Liu, W. Yang, J. Small, U. Herrlinger, V. Ourednik, P. Black, X. Breakefield, and E. Snyder, "From the cover: Neural stem cells display extensive tropism for pathology in adult brain: Evidence from intracranial gliomas," Proceedings of the National Academy of Sciences of the United States of America, vol. 97, pp. 12846–12851, 2000.
[6] P. de Robles, K. M. Fiest, A. D. Frolkis, T. Pringsheim, C. Atta, C. St. Germaine-Smith, L. Day, D. Lam, and N. Jette, "The worldwide incidence and prevalence of primary brain tumors: a systematic review and meta-analysis," Neuro-Oncology, vol. 17, no. 6, pp. 776–783, 2014. [Online]. Available: https://doi.org/10.1093/neuonc/nou283
[7] S. B. Soumma, K. Mangipudi, D. Peterson, S. Mehta, and H. Ghasemzadeh, "Self-supervised learning and opportunistic inference for continuous monitoring of freezing of gait in Parkinson's disease," 2024. [Online]. Available: https://arxiv.org/abs/2410.21326
[8] S. A. Dip, K. H. I. Arif, U. A. Shuvo, I. A. Khan, and N. Meng, "Equitable skin disease prediction using transfer learning and domain adaptation," 2024. [Online]. Available: https://arxiv.org/abs/2409.00873
[9] I. R. Rahman, S. B. Soumma, and F. B. Ashraf, "Machine learning approaches to metastasis bladder and secondary pulmonary cancer classification using gene expression data," in 2022 25th International Conference on Computer and Information Technology (ICCIT), 2022, pp. 430–435.
[10] S. B. Soumma, K. Mangipudi, D. Peterson, S. Mehta, and H. Ghasemzadeh, "Wearable-based real-time freezing of gait detection in Parkinson's disease using self-supervised learning," 2024. [Online]. Available: https://arxiv.org/abs/2410.20715
[11] M. U. Akram and A. Usman, "Computer aided system for brain tumor detection and segmentation," in International Conference on Computer Networks and Information Technology. IEEE, 2011, pp. 299–302.
[12] T. S. Sazzad, K. T. Ahmmed, M. U. Hoque, and M. Rahman, "Development of automated brain tumor identification using mri images," in 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE). IEEE, 2019, pp. 1–4.
[13] P. Kumar and B. VijayKumar, "Brain tumor mri segmentation and classification using ensemble classifier," International Journal of Recent Technology and Engineering (IJRTE), vol. 8, no. 1S4, 2019.
[14] E. Irmak, "Multi-classification of brain tumor mri images using deep convolutional neural network with fully optimized framework," Iranian Journal of Science and Technology, Transactions of Electrical Engineering, vol. 45, no. 3, pp. 1015–1036, 2021.
[15] A. Hamada, "BR35H: Brain tumor detection 2020," Kaggle dataset, 2021. [Online]. Available: https://www.kaggle.com/ahmedhamada0/brain-tumor-detection
[16] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.
[17] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Jun. 2016, pp. 770–778. [Online]. Available: http://ieeexplore.ieee.org/document/7780459
[18] M. Tan and Q. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," in International Conference on Machine Learning. PMLR, 2019, pp. 6105–6114.
[19] G. Mountrakis, J. Im, and C. Ogole, "Support vector machines in remote sensing: A review," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 66, no. 3, pp. 247–259, 2011.
[20] M. Pal and G. M. Foody, "Evaluation of svm, rvm and smlr for accurate image classification with limited ground data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 5, pp. 1344–1355, 2012.
[21] M. Belgiu and L. Drăguţ, "Random forest in remote sensing: A review of applications and future directions," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 114, pp. 24–31, 2016.
[22] T. O. Ayodele, "Types of machine learning algorithms," New Advances in Machine Learning, vol. 3, pp. 19–48, 2010.
[23] A. M. Sarhan et al., "Brain tumor classification in magnetic resonance images using deep learning and wavelet transform," Journal of Biomedical Science and Engineering, vol. 13, no. 06, p. 102, 2020.
[24] Y. Bhanothu, A. Kamalakannan, and G. Rajamanickam, "Detection and classification of brain tumor in mri images using deep convolutional network," in 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). IEEE, 2020, pp. 248–252.
[25] S. A. A. Ismael, A. Mohammed, and H. Hefny, "An enhanced deep learning approach for brain cancer mri images classification using residual networks," Artificial Intelligence in Medicine, vol. 102, p. 101779, 2020.
[26] K. Kaplan, Y. Kaya, M. Kuncan, and H. M. Ertunç, "Brain tumor classification using modified local binary patterns (lbp) feature extraction methods," Medical Hypotheses, vol. 139, p. 109696, 2020.
[27] A. Rehman, S. Naz, M. I. Razzak, F. Akram, and M. Imran, "A deep learning-based framework for automatic brain tumors classification using transfer learning," Circuits, Systems, and Signal Processing, vol. 39, no. 2, pp. 757–775, 2020.
[28] B. Tahir, S. Iqbal, M. Usman Ghani Khan, T. Saba, Z. Mehmood, A. Anjum, and T. Mahmood, "Feature enhancement framework for brain tumor segmentation and classification," Microscopy Research and Technique, vol. 82, no. 6, pp. 803–811, 2019.
[29] P. K. Sethy and S. K. Behera, "A data constrained approach for brain tumour detection using fused deep features and svm," Multimedia Tools and Applications, vol. 80, no. 19, pp. 28745–28760, 2021.
[30] S. Gajula and V. Rajesh, "Mri brain image segmentation by fully convectional u-net," REVISTA GEINTEC-GESTAO INOVACAO E TECNOLOGIAS, vol. 11, no. 1, pp. 6035–6042, 2021.
[31] M. Ahmadi, A. Sharifi, M. Jafarian Fard, and N. Soleimani, "Detection of brain lesion location in mri images using convolutional neural network and robust pca," International Journal of Neuroscience, pp. 1–12, 2021.
[32] Z. A. Al-Saffar and T. Yildirim, "A hybrid approach based on multiple eigenvalues selection (mes) for the automated grading of a brain tumor using mri," Computer Methods and Programs in Biomedicine, vol. 201, p. 105945, 2021.
[33] H. Kaldera, S. R. Gunasekara, and M. B. Dissanayake, "Brain tumor classification and segmentation using faster r-cnn," in 2019 Advances in Science and Engineering Technology International Conferences (ASET). IEEE, 2019, pp. 1–6.
[34] M. M. Badža and M. Č. Barjaktarović, "Classification of brain tumors from mri images using a convolutional neural network," Applied Sciences, vol. 10, no. 6, p. 1999, 2020.
[35] W. Ayadi, W. Elhamzi, I. Charfi, and M. Atri, "Deep cnn for brain tumor classification," Neural Processing Letters, vol. 53, no. 1, pp. 671–700, 2021.
[36] H. H. Sultan, N. M. Salem, and W. Al-Atabany, "Multi-classification of brain tumor images using deep neural network," IEEE Access, vol. 7, pp. 69215–69225, 2019.
[37] A. K. Anaraki, M. Ayati, and F. Kazemi, "Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms," Biocybernetics and Biomedical Engineering, vol. 39, no. 1, pp. 63–74, 2019.
[38] R. L. Kumar, J. Kakarla, B. V. Isunuri, and M. Singh, "Multi-class brain tumor classification using residual network and global average pooling," Multimedia Tools and Applications, vol. 80, no. 9, pp. 13429–13438, 2021.
[39] N. Abiwinanda, M. Hanif, S. T. Hesaputra, A. Handayani, and T. R. Mengko, "Brain tumor classification using convolutional neural network," in World Congress on Medical Physics and Biomedical Engineering 2018. Springer, 2019, pp. 183–189.
[40] S. Deepak and P. Ameer, "Automated categorization of brain tumor from mri using cnn features and svm," Journal of Ambient Intelligence and Humanized Computing, vol. 12, no. 8, pp. 8357–8369, 2021.
