Novel Approach to Classify Brain Tumor Based on Transfer Learning
ORIGINAL RESEARCH
Received: 3 November 2022 / Accepted: 1 April 2023 / Published online: 17 April 2023
© The Author(s), under exclusive licence to Bharati Vidyapeeth’s Institute of Computer Applications and Management 2023
to explain [8]. Manual annotation by human experts can yield the most accurate segmentation results, but it is a time-consuming, expensive, and tiresome task. In addition, subsequent investigations that incorporate extra inter-observer differences are completely unfeasible [9]. Developing reliable, objective and scalable algorithms for quantitatively evaluating brain tumors is therefore one of the critical goals of medical image computing. In FLAIR and other sequences, white matter lesions (WML), multiple sclerosis (MS) lesions and stroke lesions share the same hyper-intense appearance, and in most cases prior information on the appearance and shape of the lesion is difficult to obtain [10].
These include the Random Forest Classifier (RFC), intensity-based features, and the generative Gaussian Mixture Model (GMM) [11] for brain lesion segmentation. Different brain lesions can also be detected using contextual and morphological characteristics [12], and brain lesions are segmented using Markov Random Fields (MRF) [10]. Hand-crafted feature extraction approaches use the above-mentioned methodologies; however, they are computationally intensive compared to deep learning methods [13], which solve the same problems more quickly.
On the other hand, deep learning approaches are more effective than such supervised methods since the model can learn features that are more discriminating for the task at hand. Pre-defined and hand-crafted feature sets perform better [14] than generic ones. It is possible to improve the results of medical imaging tasks by using Convolutional Neural Networks (CNN) [15, 16]. Segmenting neural membranes with the help of a GPU was first done using 2D-CNNs, and processing each 2D slice independently yields a 3D brain segmentation [17].
Although the architecture is simplistic, superior results are achieved using these strategies, indicating CNN's potential. 3D-CNN models, however, demand a significant amount of computing power and a large amount of memory. A full 3D-CNN network can be avoided entirely by extracting and combining 2D patches from multi-scale images. 3D-CNN adoption is also discouraged due to its sluggish inference speed and high computational cost. As a result, over-segmentation may occur due to classifier biases toward rare classes. CNN models are designed to train samples by distributing categories close to the actual class, but over-segmented pixels lead to inaccurate classification in the first phase [18]. In the second training step, patches on the discrimination layer are retrieved uniformly from the input image, as shown in [14]. Overfitting and the first classifier stage may significantly impact the training structure in the second phase. For network training, dense training is utilized [19]. As with uniform sampling, this strategy introduces class imbalances, and a weighted cost function is employed to circumvent this issue. When dealing with multiclass problems, the manual adjustment of network sensitivity makes this more challenging [20].
To categorize brain tumors into two classes, we investigated transfer and ensemble learning in this research. The transfer learning method applies existing knowledge to address issues in different disciplines. On the BRATS dataset, we trained three models—MobileNetV2, InceptionV3, and ResNet50—and assessed their effectiveness. Finally, we employed ensemble learning based on the weighted aggregate of the component models. The paper is organised as follows: the literature review is presented in Sect. 2, the dataset used is described in Sect. 3, and the proposed ensemble deep learning-based framework is presented in Sect. 4. The overview of the suggested algorithm is presented in Sect. 5, along with performance assessment in Sect. 6, results and discussion in Sect. 7, and conclusions and future directions in Sect. 8.

2 Literature review

This section describes the various methodologies, datasets, and accuracy levels used by researchers. Anaraki et al. [21] utilized a 2D CNN with softmax to classify brain tumors with an accuracy level of 94.2%. Sajjad et al. [22] used VGG19 on a public dataset to classify brain tumors with an accuracy of 94.58%. Talo et al. [23] used a very small dataset of 18 patients and applied ResNet to classify brain tumors with an accuracy of 100%. Abiwinanda et al. [24] customized a CNN to predict brain tumors with an accuracy level of 94.39%. Swati et al. [25] utilized VGG19 with an accuracy level of 94.82%. Mallick et al. [26] used deep neural networks to classify brain tumors; the model was trained on 19 patients' data, and an accuracy level of 96% was achieved in this study. Sriramakrishnan et al. [27] utilized FCM and SVM on the BRATS dataset with an accuracy of 98%. Marghalani et al. [28] utilized SVM and achieved an accuracy level of 97%. Deepak et al. [29] used a transfer learning-based deep CNN on the figshare dataset and achieved an accuracy level of 98%. Ghassemi et al. [30] used a deep neural network on a dataset of 3064 MRI images and achieved an accuracy level of 95.6%. Eluri et al. [31] used GLCM and SVM and achieved an accuracy level of 96.66%. Arasi et al. [32] achieved an accuracy level of 97.69% with BSVM. Chandra et al. [33] used SVM on the BRATS dataset. Hamid et al. [34] used SVM on DICOM-format MRI images and achieved an accuracy level of 95%. Çinar et al. [35] utilized a hybrid CNN and achieved an accuracy level of 97.2%. Begum et al. [36] utilized texture features and an RNN on a dataset of 1000 images and achieved an accuracy level of 96.26%. Alagarsamy et al. [37] used a fish school optimization algorithm (SCFSO) on the BRATS-SICAS dataset and achieved 96.21% accuracy. Chen et al. [38] used multi-modal techniques on the BRATS 2018 dataset and achieved an accuracy level of 91.6%. Wang et al. [39] used the global–local (KGL) data fusion model on 81 patients' data and achieved an accuracy
level of 91%. Scheufele et al. [40] used a coarse-to-fine multi-resolution continuation scheme on the BRATS 2018 dataset and achieved an accuracy level of 56%. Majib et al. [41] utilized the VGG-SCNet scheme on a dataset collected from Kaggle and achieved an accuracy level of 99.2%. Dissanayake et al. [42] utilized a modified CNN and achieved a 91.54% accuracy level. Ismail et al. [43] utilized R-DepTH on several datasets and achieved an accuracy level of 91.54%. Lei et al. [44] utilized a CNN-MORF-based approach on the LIDC-IDRI dataset and achieved an accuracy level of 85.4%.
Table 1 below depicts the literature based on the sensitivity, specificity, accuracy, methodology, and dataset used by various researchers.

3 Dataset used

The dataset was divided into subsets of data samples for testing and training purposes. Figures 1, 2, 3 and 4 show various types of brain tumors.
Fig. 2 Meningioma
Fig. 4 Pituitary
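For illustration, the sketch below shows one minimal way the images could be prepared before training. The directory layout, file organisation, and the 80/20 split ratio are assumptions made purely for this sketch; the paper itself only states that the scans are resized to 224 × 224 (Step (2) of Sect. 5) and split into training and testing sets.

```python
# Hypothetical data-preparation sketch: resize BRATS MRI slices to 224x224
# and split them into training and testing subsets. Paths and the 80/20
# ratio are illustrative assumptions only.
import tensorflow as tf

IMG_SIZE = (224, 224)          # resolution stated in Step (2) of Sect. 5
DATA_DIR = "brats_images/"     # assumed folder: one sub-folder per class

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
test_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

# Scale pixel values to [0, 1] before feeding the CNN backbones.
normalize = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
test_ds = test_ds.map(lambda x, y: (normalize(x), y))
```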
4 Proposed framework

Figure 5 shows the suggested framework in detail. The images from the BRATS dataset were pre-processed, resized, and split into training and testing sets. Using transfer learning, three Convolutional Neural Network models—MobileNetV2, InceptionV3, and ResNet50—were each trained separately on the training-set images. The top layers of these models were not loaded, allowing us to add our own pooling and dense layers to output the tumor classes from the dataset. These models underwent 200 training iterations. The validation accuracy results were also reported using five-, ten-, and twenty-fold cross-validation.
The ensemble created on top of the component models uses the weighted average of the different models to obtain the final prediction. The component models, transfer learning, and ensemble learning strategies are explained in the following subsections.
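As a concrete illustration of the component-model construction described above, the following is a minimal Keras sketch for one backbone (MobileNetV2); the other two component models are built the same way by swapping in InceptionV3 or ResNet50. The 128-unit dense layer follows Step (5) of Sect. 5, while freezing the backbone weights and the two-class softmax output are assumptions made for this sketch rather than settings confirmed by the paper.

```python
# Minimal sketch of one transfer-learning component model (MobileNetV2).
# Swap the base class for InceptionV3 or ResNet50 to get the other two.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

NUM_CLASSES = 2                       # two-class brain-tumor task (assumed head size)

def build_component_model(input_shape=(224, 224, 3)):
    # Load ImageNet weights without the original classification head.
    base = MobileNetV2(include_top=False, weights="imagenet",
                       input_shape=input_shape)
    base.trainable = False            # assumption: keep pretrained weights fixed

    # Our own pooling + dense head, as described in the framework above.
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = models.Model(inputs=base.input, outputs=out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

mobilenet_relu = build_component_model()
# mobilenet_relu.fit(train_ds, epochs=200)   # 200 training iterations, per Sect. 4
```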
4.1 Component models

Three architecture-based component models are explained below.

4.1.1 MobileNetV2

The idea of MobileNetV2 was to create a straightforward CNN that is lightweight and convenient to use on a mobile device. This model contains one average-pooling layer with 350 GFLOPs and 53 convolutional layers. It also features two kinds of convolutional layers that execute 1 × 1 convolution and 3 × 3 depth-wise convolution.

4.1.2 InceptionV3

InceptionV3 was created by altering the original Inception architecture. It comprises 48 layers. It uses less processing power and is more economical in terms of the number of parameters the network generates and the costs associated with memory and other resources.

4.1.3 ResNet50

ResNet is short for "Residual Network"; ResNet50 comprises 50 layers. It makes recognition more accurate and addresses CNN's problems with vanishing gradients and degradation. It introduces the idea of identity mapping, which lets the gradient flow through a skip connection when the current layer is not needed.
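To make the identity-mapping idea concrete, here is a small, generic residual-block sketch in Keras. It illustrates the skip-connection principle described above; it is not a reproduction of ResNet50's exact bottleneck blocks, and the filter count and input shape are arbitrary.

```python
# Illustrative residual (identity) block: the input is added back to the
# output of two convolutions, so gradients can bypass the block entirely.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    shortcut = x                                   # identity mapping
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])                # skip connection
    return layers.ReLU()(y)

inputs = layers.Input(shape=(56, 56, 64))
outputs = residual_block(inputs)
block = tf.keras.Model(inputs, outputs)
```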
4.2 Transfer learning

This work established a unique strategy by applying transfer learning techniques to healthcare. The authors made considerable changes to various transfer learning models utilising datasets of brain cancer images. Transfer learning is helpful when training a model without access to past data on weights. The general principle is to use features acquired from an extensive dataset to train a new model, which can subsequently be fine-tuned using more specific data [46].
The outputs of the component models are combined with a weighted aggregate to obtain the final result for automated brain tumor diagnosis.

5 Summary of the EDL-BTC

Following is a summary of the model:
Step (1) There are 2787 observations in the BRATS dataset.
Step (2) The original resolution of the MRI scans in RGB color space was reduced to 224 pixels on both sides.
Step (3) In order to measure the results, five-fold, ten-fold and twenty-fold cross-validations were applied:
{BTC_train set, BTC_test set} = 5 Cross(BTC_dataset)
{BTC_train set, BTC_test set} = 10 Cross(BTC_dataset)
{BTC_train set, BTC_test set} = 20 Cross(BTC_dataset)
Step (4) The following classifiers were generated, making use of MobileNetV2, InceptionV3, and ResNet50 as the base models:
MobileNetV2_Relu = TL(MobileNetV2, Relu)
InceptionV3_Relu = TL(InceptionV3, Relu)
ResNet50_Relu = TL(ResNet50, Relu)
Step (5) The classifiers constructed in Step (4)—CNN feature extractors followed by fully connected layers with 128 neurons and an output layer sized to the number of tumor classes—are trained on the training set:
MobileNetV2_Relu = TL(MobileNetV2_Relu, BTC_train set)
InceptionV3_Relu = TL(InceptionV3_Relu, BTC_train set)
ResNet50_Relu = TL(ResNet50_Relu, BTC_train set)
Step (6) Classification ensemble using a weighted sum of the individual scores:
Ensemble classifier = Ensemble(MobileNetV2_Relu, InceptionV3_Relu, ResNet50_Relu)
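The ensemble of Step (6) can be sketched as a weighted soft-voting rule over the three component models' class probabilities. The equal weights below are an assumption for illustration; the paper states only that a weighted aggregate of the component scores is used, without reporting the weight values.

```python
# Sketch of the weighted-aggregate ensemble (EDL-BTC, Step 6).
# `models` are the three trained component classifiers from Step (5);
# the weights below are illustrative, not values reported in the paper.
import numpy as np

def ensemble_predict(models, images, weights=(1 / 3, 1 / 3, 1 / 3)):
    # Each model returns class probabilities of shape (n_samples, n_classes).
    probs = [m.predict(images) for m in models]
    weighted = sum(w * p for w, p in zip(weights, probs))
    return np.argmax(weighted, axis=1)      # final class per image

# Example usage (assuming the component models built earlier):
# y_pred = ensemble_predict([mobilenet_relu, inception_relu, resnet_relu],
#                           test_images)
```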
6 Performance measurement

6.3 Precision

The precision (Pre) of the proposed model is defined in Eq. (2):

Pre = TP / (TP + FP)    (2)

6.4 Recall

The recall (Re) of the proposed model is defined in Eq. (3):

Re = TP / (TP + FN)    (3)

6.5 F1-score

The F1-score of the proposed model is defined in Eq. (4):

F1-score = 2 · (Pre · Re) / (Pre + Re)    (4)
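The metrics in Eqs. (2)–(4) can be computed directly from the confusion-matrix counts; the short sketch below mirrors those definitions (the counts used are made up purely to show the calculation).

```python
# Precision, recall and F1-score from confusion-matrix counts,
# following Eqs. (2)-(4). The counts below are illustrative only.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1_score(pre, re):
    return 2 * pre * re / (pre + re)

tp, fp, fn = 95, 3, 5                      # hypothetical counts
pre, re = precision(tp, fp), recall(tp, fn)
print(f"Pre={pre:.3f}  Re={re:.3f}  F1={f1_score(pre, re):.3f}")
```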
7 Result and discussion

The comparison of the component models and the proposed model based on F1-score, precision, and recall is shown in Table 2. The comparison of the classification accuracy of the component models and the proposed model is shown in Table 3. The KNN (K-Nearest Neighbors), SVM (Support Vector Machine), SGD (Stochastic Gradient Descent), Naive Bayes, Logistic Regression, and proposed (EDL-BTC) prediction models are compared at twenty-fold cross-validation in Tables 4 and 5.

8 Conclusion and future work

The ability to automatically classify brain tumors is a major step forward in medical science's quest to diagnose and treat illnesses at an earlier stage. With the help of Artificial Intelligence, we can accomplish this goal more easily. To automate the process of brain tumor classification, we suggested using an ensemble of deep-learning models. The brain tumor images are taken from the IEEE DataPort dataset [45]. Using transfer learning, we removed the top layers of the three component classifiers (MobileNetV2, InceptionV3, and ResNet50) so that they could learn the features of the brain MRI images, and connected them to dense layers with ReLU activation that were then trained to classify brain tumors. To verify the component classifiers' accuracy on the brain tumor dataset, we ran 200 iterations of cross-validation over a 5-, 10-, and 20-fold scheme. We used a weighted aggregate of the various classifiers to perform classification with the EDL-BTC classifier. The ensemble learning classifier EDL-BTC outperforms the individual base classifiers and the other novel approaches, as measured by their performance on the test set and cross-validation using five, ten, and twenty folds. By obtaining 98.3%, 98.6% and 98.6% accuracy at five-, ten-, and twenty-fold cross-validation, the EDL-BTC surpassed the most advanced pre-trained models. Soon, we will share our own dataset with the scientific community. Moreover, we
want to do tests on other datasets and refine the ensemble of machine learning models.

Table 5 Comparison of classification accuracy of different state-of-the-art algorithms and the proposed model

Model                  Accuracy at K Fold 5   Accuracy at K Fold 10   Accuracy at K Fold 20
KNN                    81.35                  81.55                   81.89
SVM                    86.13                  86.33                   85.59
SGD                    98.06                  98.12                   98.04
Naive Bayes            95.11                  95.71                   95.69
Logistic regression    97.01                  97.12                   97.17
EDL-BTC                98.3                   98.6                    98.6

Data availability The data used to support the results of the study may be obtained from the corresponding author.

Declarations

Conflict of interest There is no conflict of interest regarding the publication of this paper.

References

1. Amin J, Sharif M, Yasmin M, Fernandes SL (2017) A distinctive approach in brain tumor detection and classification using MRI. Pattern Recognit Lett 138:118–127
2. Rajinikanth V, Fernandes SL, Bhushan B, Sunder NR (2016) Segmentation and analysis of brain tumor using Tsallis entropy and regularised level set. Springer, pp 313–321
3. Rajinikanth V, Satapathy SC (2018) Segmentation of ischemic stroke lesion in brain MRI based on social group optimization and fuzzy-Tsallis entropy. Arab J Sci Eng 43:4365–4378
4. Rajinikanth V, Satapathy SC, Fernandes SL, Nachiappan S (2017) Entropy based segmentation of tumor from brain MR images—a study with teaching learning-based optimization. Pattern Recognit Lett 94:87–95
5. Sharp DJ, Beckmann CF, Greenwood R, Kinnunen KM, Bonnelle V, De Boissezon X, Powell JH, Counsell SJ, Patel MC, Leech R (2011) Default mode network functional and structural connectivity after traumatic brain injury. Brain 134(8):2233–2247
6. Carey LM, Seitz RJ, Parsons M, Levi C, Farquharson S, Tournier J-D, Palmer S, Connelly A (2013) Beyond the lesion: neuroimaging foundations for post-stroke recovery. Future Neurol 8(5):507–527
7. Wen PY, Macdonald DR, Reardon DA, Cloughesy TF, Sorensen AG, Galanis E, DeGroot J, Wick W, Gilbert MR, Lassman AB (2010) Updated response assessment criteria for high-grade gliomas: response assessment in neuro-oncology working group. J Clin Oncol 28(11):1963–1972
8. Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, Burren Y, Porz N, Slotboom J, Wiest R (2015) The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans Med Imaging 34(10):1993–2024
9. Yuh EL, Cooper SR, Ferguson AR, Manley GT (2012) Quantitative CT improves outcome prediction in acute traumatic brain injury. J Neurotrauma 29(5):735–746
10. Mitra J, Bourgeat P, Fripp J, Ghose S, Rose S, Salvado O, Connelly A, Campbell B, Palmer S, Sharma G (2014) Lesion segmentation from multimodal MRI using random forest following ischemic stroke. Neuroimage 98:324–335
11. Domingues R, Filippone M, Michiardi P, Zouaoui J (2018) A comparative evaluation of outlier detection algorithms: experiments and analyses. Pattern Recogn 74:406–421
12. Ledig C, Heckemann RA, Hammers A, Lopez JC, Newcombe VF, Makropoulos A, Lötjönen J, Menon DK, Rueckert D (2015) Robust whole-brain segmentation: application to traumatic brain injury. Med Image Anal 21(1):40–58
13. Kamnitsas K, Ledig C, Newcombe VF, Simpson JP, Kane AD, Menon DK, Rueckert D, Glocker B (2017) Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal 36:61–78
14. Havaei M, Davy A, Warde-Farley D, Biard A, Courville A, Bengio Y, Pal C, Jodoin P-M, Larochelle H (2017) Brain tumor segmentation with deep neural networks. Med Image Anal 35:18–31
15. Kaushik A, Singal N (2022) A hybrid model of wavelet neural network and metaheuristic algorithm for software development effort estimation. Int J Inf Technol 14:1689–1698. https://doi.org/10.1007/s41870-019-00339-1
16. Puri D, Kumar A, Virmani J et al (2022) Classification of leaves of medicinal plants using Laws' texture features. Int J Inf Technol 14:931–942. https://doi.org/10.1007/s41870-019-00353-3
17. de Brébisson A, Montana G (2015) Deep neural networks for anatomical brain segmentation. arXiv preprint arXiv:1502.02445
18. Ciresan D, Giusti A, Gambardella LM, Schmidhuber J (2012) Deep neural networks segment neuronal membranes in electron microscopy images. Springer, pp 2843–2851
19. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556
20. Ronneberger O, Fischer P, Brox T (2015) U-Net: convolutional networks for biomedical image segmentation. Springer, pp 234–241
21. Anaraki AK, Ayati M, Kazemi F (2019) Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms. Biocybern Biomed Eng 39(1):63–74
22. Sajjad M, Khan S, Muhammad K, Wu W, Ullah A, Baik SW (2019) Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J Comput Sci 30:174–182
23. Talo M, Baloglu UB, Yıldırım Ö, Acharya UR (2019) Application of deep transfer learning for automated brain abnormality classification using MR images. Cogn Syst Res 54:176–188
24. Abiwinanda N, Hanif M, Hesaputra ST, Handayani A, Mengko TR (2019) Brain tumor classification using convolutional neural network. World Congress on Medical Physics and Biomedical Engineering 2018. Springer, pp 183–189
25. Swati ZNK, Zhao Q, Kabir M, Ali F, Ali Z, Ahmed S, Lu J (2019) Brain tumor classification for MR images using transfer learning and fine-tuning. Comput Med Imaging Graph 75:34–46
26. Mallick PK, Ryu SH, Satapathy SK, Mishra S, Nguyen GN, Tiwari P (2019) Brain MRI image classification for cancer detection using deep wavelet autoencoder-based deep neural network. IEEE Access 7:46278–46287
27. Sriramakrishnan P, Kalaiselvi T, Rajeswaran R (2019) Modified local ternary patterns technique for brain tumor segmentation and volume estimation from MRI multi-sequence scans with GPU CUDA machine. Biocybern Biomed Eng 39(2):470–487
28. Marghalani BF, Arif M (2019) Automatic classification of brain tumor and Alzheimer's disease in MRI. Proced Comput Sci 163:78–84
29. Deepak S, Ameer PM (2019) Brain tumor classification using deep CNN features via transfer learning. Comput Biol Med 111:103345
30. Ghassemi N, Shoeibi A, Rouhani M (2020) Deep neural network with generative adversarial networks pre-training for brain tumor classification based on MR images. Biomed Signal Process Control 57:101678
31. Eluri VR, Ramesh C, Dhipti SN, Sujatha D (2019) Analysis of MRI-based brain tumor detection using RFCM clustering and SVM classifier. Soft Computing and Signal Processing. Springer, pp 319–326
32. Arasi P, Suganthi M (2019) A clinical support system for brain tumor classification using soft computing techniques. J Med Syst 43(5):1–11
33. Chandra SK, Bajpai MK (2020) Fractional mesh-free linear diffusion method for image enhancement and segmentation for automatic tumor classification. Biomed Signal Process Control 58:101841
34. Hamid MA, Khan NA (2020) Investigation and classification of MRI brain tumors using feature extraction technique. J Med Biol Eng 40(2):307–317
35. Çinar A, Yildirim M (2020) Detection of tumors on brain MRI images using the hybrid convolutional neural network architecture. Med Hypotheses 139:109684
36. Begum SS, Lakshmi DR (2020) Combining optimal wavelet statistical texture and recurrent neural network for tumor detection and classification over MRI. Multimed Tools Appl 79(19):14009–14030
37. Alagarsamy S, Zhang YD, Govindaraj V, Rajasekaran MP, Sankaran S (2020) Smart identification of topographically variant anomalies in brain magnetic resonance imaging using a fish school-based fuzzy clustering approach. IEEE Trans Fuzzy Syst 29(10):3165–3177
38. Chen C, Dou Q, Jin Y, Liu Q, Heng PA (2021) Learning with privileged multimodal knowledge for unimodal segmentation. IEEE Trans Med Imaging 41(3):621–632
39. Wang L, Hawkins-Daarud A, Swanson KR, Hu LS, Li J (2021) Knowledge-infused global-local data fusion for spatial predictive modeling in precision medicine. IEEE Trans Autom Sci Eng. https://doi.org/10.1109/TASE.2021.3076117
40. Scheufele K, Subramanian S, Biros G (2020) Fully automatic calibration of tumor-growth models using a single mpMRI scan. IEEE Trans Med Imaging 40(1):193–204
41. Majib MS, Rahman MM, Sazzad TS, Khan NI, Dey SK (2021) VGG-SCNet: a VGG net-based deep learning framework for brain tumor detection on MRI images. IEEE Access 9:116942–116952
42. Dissanayake T, Fernando T, Denman S, Sridharan S, Fookes C (2021) Deep learning for patient-independent epileptic seizure prediction using scalp EEG signals. IEEE Sens J 21(7):9377–9388
43. Ismail M, Prasanna P, Bera K, Statsevych V, Hill V, Singh G, Partovi S, Beig N, McGarry S, Laviolette P, Ahluwalia M (2022) Radiomic deformation and textural heterogeneity (R-DepTH) descriptor to characterize tumor field effect: application to survival prediction in glioblastoma. IEEE Trans Med Imaging 41(7):1764–1777
44. Lei Y, Zhu H, Zhang J, Shan H (2022) Meta ordinal regression forest for medical image classification with ordinal labels. arXiv preprint arXiv:2203.07725
45. Bakas SS (2020) BraTS MICCAI brain tumor dataset. IEEE DataPort. https://doi.org/10.21227/hdtd-5j88
46. Solanki A, Pandey S (2019) Music instrument recognition using deep convolutional neural networks. Int J Inf Technol 14:1659–1668
47. Sharma M (2022) Improved autistic spectrum disorder estimation using Cfs subset with greedy stepwise feature selection technique. Int J Inf Technol 14:1251–1261. https://doi.org/10.1007/s41870-019-00335-5

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.