Segmentation and classification of brain tumor using 3D-UNet deep neural networks
Keywords: Brain tumor segmentation, Brain tumor classification, 3D U-Net, Deep learning, Convolutional neural network, MRI, Neural networks

Abstract: Early detection and diagnosis of a brain tumor enhance the medical options and the patient's chance of recovery. Magnetic resonance imaging (MRI) is used to detect and diagnose brain tumors. However, the manual identification of brain tumors from a large number of MRI images in clinical practice depends solely on the time and experience of medical professionals. Presently, computer-aided expert systems are booming to facilitate medical diagnosis and treatment recommendations. Numerous machine learning and deep learning based frameworks are employed for brain tumor detection. This paper aims to design an efficient framework for brain tumor segmentation and classification using deep learning techniques. The study employs the 3D-UNet model for the volumetric segmentation of the MRI images, followed by the classification of the tumor using CNNs. The loss and precision diagrams are presented to establish the validity of the models. The performance of the proposed models is measured, and the results are compared with those of other approaches reported in the literature. It is found that the proposed work is more efficacious than the state-of-the-art techniques.
1. Introduction

Abnormal growth of cells or tissues in the brain can lead to a brain tumor. Neither the exact symptoms of a brain tumor nor the reasons that cause brain tumors are known today. Thus, people may be suffering from brain tumors without realising the gravity of the situation. It is of paramount importance to detect and extract the tumors at their early stages to save the patient's life.

MRI is an important tool for the detection, diagnosis, and monitoring of brain tumors. However, examining MRI scans is a dexterous, time-consuming, and difficult process. Further, it is very difficult to detect tumors manually, and the results may vary from one clinical expert to another based on their experience. Effective classification and segmentation of MRI images is quite challenging. The rationale is to build an expert system that would assist in the effective diagnosis of cancerous cells in MRI scans of the brain.

Over the years, several researchers from various backgrounds have relied on image recognition techniques for the identification of brain tumor cells (Amin, Sharif, Haldorai, Yasmin, & Sundar Nayak, 2021). To get the optimum performance, they have used a variety of machine learning techniques to detect cancerous cells. Advanced neural networks and deep learning techniques are also utilized. For instance, advanced neural networks, graph-based CNN, and CNN are employed to improve the detection of malignant lesions in breast mammograms (Zhang, Satapathy, Guttery, Górriz, & Wang, 2021b). A convolutional neural network with exponential linear units and rank-based weighted pooling is implemented for early diagnosis and optimal therapeutic intervention (Zhang et al., 2021a).

One of the most difficult aspects of dealing with MRI scans is that they are not 2D images like X-ray images. An MRI image is made up of several 3D volumes that show various parts of the brain. Before image segmentation, these 3D volumes are fused. When merging the various channels of an MRI image, certain misalignments can occur, resulting in errors that can be corrected by image registration. Image registration is a technique for aligning images. Various machine learning and deep learning models for brain tumor prediction have been proposed recently, and many models for detecting, segmenting, and classifying brain tumors have been presented in the literature. For the segmentation of volumetric MRI scans, a convolutional neural network architecture has been considered in this study.

This research work focuses on the development of an effective model that can help in the accurate identification of tumors automatically. The proposed model is built on 3D-UNet convolutional neural networks that have been trained for tumor segmentation. The research is based on 3D segmentation of MRI scans. The volumetric MRI scan's 3D volume is divided into 3D sub-volumes, which are fed into the segmentation model and then recombined into a single 3D volume. The suggested method is useful since it effectively preserves all aspects of the image
while maintaining the image's volume. The U-Net architecture's effectiveness has also been extensively documented in the biomedical literature.

The proposed work takes into account an image registration model, a 3D U-Net model, and finally a soft dice loss, all of which have been combined to form a comprehensive tumor detection pipeline. The first step was to merge the 3D image slices from an MRI scan into a single 3D model. Image registration corrects misalignment issues during this merging. The 3D model is divided into subsections after it has been built. The subsections are then passed into the U-Net model, and the segmented model is obtained at the output after both the down- and up-convolution cycles. The subsections are then merged once more to create a segmented 3D model, followed by the estimation of the loss function.

After the volumetric segmentation of the tumor, the next step is the classification of the brain tumors into meningioma, glioma, and pituitary tumors. Prior to feature extraction and sorting, most traditional brain tumor classification approaches included region-based tumor segmentation. A CNN is made up of a convolutional part that performs automated segmentation and feature extraction, supplemented by a classical neural network that performs classification. A rectified linear unit (ReLU), a convolution, and a pooling layer make up CNN's well-known simple architecture.

The abstract view of the proposed framework is presented in Fig. 1. The MRI images are used as the input. The main phases of the proposed system are divided into four parts:

i Data Collection
ii Pre-processing
iii Segmentation
iv Classification

Firstly, the collected images are subjected to the pre-processing module. The corrupted and blurred images are filtered in this module. For efficient and enhanced segmentation and classification, better segmentation and classification models are proposed in the research work.

The major contributions of the paper are as follows:

• The proposed framework incorporates the implementation of an advanced 3D U-Net model for volumetric segmentation and an updated CNN for the classification of the MRI images, with the objective of creating an expert system for predicting brain tumors at an early stage.
• The proposed segmentation and classification models are empirically evaluated using various evaluation metrics such as precision, recall, F-score, dice similarity coefficient, and support.
• The loss and precision diagrams have also been used to establish the validity of the models.
• The results are compared with the other approaches reported in the literature, and the proposed work is established as being more efficacious than the state-of-the-art techniques.

2. Related work

Over the years, many specialists from diverse backgrounds have worked and are still working within the domain of image processing, dealing with the detection and classification of various cancerous diseases like brain tumors, kidney tumors, etc., and have proposed many novel procedures to generate the best results.

Wadhwa, Bhardwaj and Verma (2019) examined various methods for tumor identification and reported that combining a Conditional Random Field (CRF) with an FCNN, and CRF with DeepMedic or an ensemble, offers better performance than the other approaches for tumor segmentation. In Özyurt, Sert, Avci and Dogantekin (2019), the neutrosophic set expert maximum fuzzy-sure entropy (NS-EMFSE) process was proposed for segmentation, and SVM and KNN classifiers were used to classify the segmented features extracted by the CNN architecture. Recently, CNNs have been employed by many researchers for image classification in the domain of medical sciences (Ayadi, Elhamzi, Charfi, & Atri, 2021; Jin, Meng, Sun, Cui & Su, 2020; Kalaiselvi, Padmapriya, Sriramakrishnan & Somasundaram, 2020; Mohsen, El-Dahshan, El-Horbaty & Salem, 2018; Murthy, Koteswararao & Babu, 2022; Rehman et al., 2021; Suganthe, Revathi, Monisha & Pavithran, 2020). Good performance results are reported using advanced neural network models for MRI scan classification (Liu et al., 2018; Abiwinanda, Hanif, Hesaputra, Handayani & Mengko, 2019; Afshar, Mohammadi & Plataniotis, 2018; Badža & Barjaktarović, 2020; Bedekar, Prasad & Hagir, 2018). Automated classification is very useful in computer-aided diagnosis systems. Ensemble models combining SVMs and neural networks are also implemented for the design of medical diagnosis systems (Deepak & Ameer, 2021). Soft computing techniques like fuzzy logic are also incorporated for better results (Jayachandran & Dhanasekaran, 2013). Advanced fuzzy methods like adaptive fuzzy C-means clustering are used for segmentation, and results are further improved by the deer hunting optimization algorithm (Murthy et al., 2022).

Li, Kuang, Xu and Sha (2019) proposed a multi-CNN approach to tackle the poor performance offered by the conventional methods, which have slow training rates and often suffer from overfitting. The proposed method uses 3D-MRI images, rather than 2D-MRI images, to train the neural network for volumetric segmentation, employing three-dimensional CNNs for the volumetric detection of the tumor in the 3D-MRI images. The work also concludes that instance normalization consumes less time to train the 3D-CNN than batch normalization and group normalization, and that the proposed 3D-CNN model for brain tumor detection offers better accuracy and performance. An algorithm for 2D MRI scans is also proposed for segmentation and classification. Deep neural network algorithms with different activation functions like SoftMax and sigmoid are also implemented (Chattopadhyay
& Maitra, 2022). Some researchers have also deployed a user-friendly computer-aided interface for MRI scan classification (Ucuzal, Yaşar & Çolak, 2019).

Sobhaninia et al. (2018) suggested that medical image recognition relies heavily on image segmentation, because medical photographs are too diverse, and used MRI and CT scan images to segment the brain tumor. The most common use of MRI is for brain tumor segmentation and classification. They proposed the use of fuzzy C-means clustering for tumor segmentation, which can reliably model tumor cells. After segmentation, classical classifiers and CNNs were used to classify the data. They implemented and compared various conventional classifiers such as K-nearest neighbour, logistic regression, multilayer perceptron, naive Bayes, random forest, and support vector machine in the traditional classifier section. SVM had the best precision of 92.42 percent among these conventional ones. They also introduced a CNN, which yielded 97.87 percent accuracy with a split ratio of 80:20 on 217 photographs, and suggested experimenting with 3D brain images in the future to accomplish more effective brain tumor segmentation. Working with a wider dataset would be more difficult in this regard, and they aspired to create a dataset that emphasizes the abstract in relation to their region, which will help them expand the reach of their research.

In Zhou et al. (2020), a web-based application is presented that can identify brain tumors (glioma, meningioma, and pituitary) based on high-precision T1-contrast MRI with CNNs. It is hoped that the free web-based software would enable medical professionals and other health professionals to identify brain tumors more quickly and accurately. In this regard, the app can be used as a clinical-decision support method for brain tumor classification (i.e., glioma, meningioma, and pituitary). According to the experimental findings, all of the measured success metrics for classifying the forms of brain tumors on the training dataset were greater than 98%. On the test sample, all performance metrics are greater than 91%, with the exception of the sensitivity and Matthews correlation coefficient (MCC) performance metrics for meningiomas. When the measured efficiency metrics from the CNN model's training and testing stages are considered, the proposed model is capable of effectively classifying various brain tumor forms. A related research study created a CNN to identify brain tumors on public data sets, with 233 and 73 patients, and 3064 and 516 T1-weighted magnetic resonance images. For the two datasets, the method built in this trial performs significantly better and is able to effectively handle brain tumor multi-classification tasks at the highest overall accuracy levels of 96.13% and 98.7%, respectively. A new algorithm for the classification of brain tumors into Grade I, Grade II, Grade III and Grade IV with a CNN deep learning algorithm was also developed. The proposed deep learning algorithm consists of three steps: a) tumor segmentation, b) data augmentation, and c) deep feature extraction and classification. Experimental findings from the other research work were investigated and showed that, when extended to augmented and initial data sets, the proposed algorithm has greater efficiency than the present methods. The classification and simulation of T1-weighted MRI of brain tumors were well performed during previous experiments in machine learning and deep learning algorithms. But the selection and development of these algorithms may take a lot of time and experience if we consider the machine learning and data mining applications of the studies published over the past few years. Therefore, in recent years, automated machine learning and various modelling systems have been widely developed. To put it briefly, that research introduces a novel public web-based program to identify brain tumor types based on CNN deep learning algorithms for T1-weighted MR images.

Yadav and Sahu (2013) presented a novel approach for the automatic segmentation of the most popular brain tumors, including gliomas, meningiomas, and pituitary. No preprocessing steps are essential for this technique. The findings show that angle-based dividing of the photographs increases the segmentation precision. The highest dice score was 0.79. The tumor segmentation in sagittal view images provided this comparatively high ranking. Other organs are not visible in sagittal images, and the tumor is more pronounced than in other images. The photographs from the axial view of the head received the lowest dice score in their tests, which has been reported as 0.71. The axial view provides less specificity than the other pictures. It is anticipated that preprocessing this group of images would result in improved tumor pixel classification and an improvement in the dice ranking. The presented approach may be used to segment brain tumors in MRI images as an easy and practical technique for doctors.

In Murthy and Sadashivappa (2014), MRI studies indicate that the cancer-affected region has very high intensity pixels, whereas normal tissue has low intensity pixels. Thresholding is a method of segmentation that uses only the sensitivity parameter. This is one of the most basic types of segmentation, in which the tumor is classified according to its grey level.

Area-based image segmentation (Alqudah, Alquraan, Qasmieh, Alqudah & Al-Sharu, 2020) involves growing regions. The method uses a 4-connected neighborhood or 8-connected neighborhood methodology. The amplitude of the same picture is clustered in one area. If the intensity belongs to the same seed, the phase is iterated, and the intensity belongs to one field. Geometric active contour models focused on regions are more resistant to the noise in the MRI, which otherwise leads to poor segmentation. T.S. Deepthi Murthy et al. (Kaur & Gandhi, 2019) proposed thresholding and morphological operations that are used to perform effective brain tumor segmentation. However, since the threshold value used is a global threshold, it is not completely automatic and requires human interference. In Kavita, Alli and Rao (2022), a study has been presented on multimodal medical image fusion technologies using pulse coupled neural networks with QCSA and SSO optimization techniques. In Kalaivani and Seetharaman (2022), a three-stage boosted ensemble convolutional neural network has been proposed for the classification of COVID-19 chest x-ray images. The work proposes the development of an extended U-Net architecture using the ResNet architecture as a backbone. In Muruganantham and Balakrishnan (2021), a survey has been carried out on the various deep learning methodologies used to detect gastrointestinal tract diseases.

3. Methods and materials

The proposed segmentation and classification models are explained in this section.

3.1. Segmentation model

i Dataset

In multimodal magnetic resonance imaging (MRI) scans, BraTS has always concentrated on evaluating cutting-edge techniques for brain tumor segmentation. BraTS 2020 segments intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, such as gliomas, using multi-institutional pre-operative MRI scans. BraTS'20 also uses integrative analyses of radiomic features and machine learning algorithms to pinpoint the clinical validity of this segmentation task, as well as to estimate patient overall survival and the discrepancy between pseudo-progression and actual tumor recurrence. Finally, BraTS'20 attempts to evaluate the algorithmic sophistication of tumor segmentation.

ii 3D-UNet

U-Net is one of the most popular architectures used for segmentation. It was designed for image segmentation in the biomedical field and produced great results for cell tracking. It can work with hundreds of examples and produce good results. As it is U-shaped, it is called the U-Net model. It consists of two paths: the contracting path and the expanding path, which perform opposite operations. The contracting path involves down-sampling and down-convolution, while the expanding path involves up-sampling and up-convolution. In the contracting path, the feature maps get spatially smaller, whereas in the expanding path, the feature
maps are expanded back to their original size. This model was basically built for 2D images, but by replacing the 2D convolutions with 3D convolutions the model can be used for volumetric data as well. Fig. 2 shows the architecture of the 3D-UNet deep neural network.
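As an illustration of this encoder–decoder structure, the following is a minimal sketch of a 3D U-Net-style network in TensorFlow/Keras. The depth, the filter counts, the 64×64×64 patch size, and the four input channels are illustrative assumptions, not the exact configuration used in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3D convolutions, as in a typical U-Net level.
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_3d_unet(input_shape=(64, 64, 64, 4), n_classes=4):
    inputs = layers.Input(input_shape)

    # Contracting path: feature maps shrink spatially, filters grow.
    c1 = conv_block(inputs, 16)
    p1 = layers.MaxPooling3D(2)(c1)
    c2 = conv_block(p1, 32)
    p2 = layers.MaxPooling3D(2)(c2)

    # Bottleneck.
    b = conv_block(p2, 64)

    # Expanding path: up-convolutions restore the original size,
    # with skip connections from the contracting path.
    u2 = layers.Conv3DTranspose(32, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 32)
    u1 = layers.Conv3DTranspose(16, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 16)

    # Per-voxel class probabilities (background and tumor sub-regions).
    outputs = layers.Conv3D(n_classes, 1, activation="softmax")(c4)
    return Model(inputs, outputs)

model = build_3d_unet()
model.summary()
```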
iii Proposed Segmentation Model

The 3D U-Net model is the model introduced in this paper. The models that make up the full tumor detection pipeline are an image registration model, a 3D U-Net model, and a soft dice loss. The first step was to combine the 3D image slices from an MRI scan into a single 3D model. Image registration is used to solve misalignment issues during this combination. Following the formation of the 3D model, it is divided into subsections. The subsections are then fed into the U-Net model, which produces the segmented model after all of the down- and up-convolution cycles. The subsections are then merged once more to create a segmented 3D model. The next step is to calculate the loss.
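The split-and-recombine step described above can be sketched with NumPy as follows: the padded volume is cut into fixed-size 3D patches, each patch would be segmented by the model, and the predictions are stitched back into a single volume. The 64-voxel patch size and the identity placeholder standing in for the segmentation model are assumptions for illustration only.

```python
import numpy as np

def split_into_subvolumes(volume, patch=(64, 64, 64)):
    """Cut a 3D volume into non-overlapping patches (padding the borders)."""
    pads = [(0, (-s) % p) for s, p in zip(volume.shape, patch)]
    padded = np.pad(volume, pads, mode="constant")
    patches, positions = [], []
    for z in range(0, padded.shape[0], patch[0]):
        for y in range(0, padded.shape[1], patch[1]):
            for x in range(0, padded.shape[2], patch[2]):
                patches.append(padded[z:z + patch[0], y:y + patch[1], x:x + patch[2]])
                positions.append((z, y, x))
    return patches, positions, padded.shape

def recombine(patches, positions, padded_shape, original_shape, patch=(64, 64, 64)):
    """Stitch segmented patches back into one volume and crop the padding."""
    out = np.zeros(padded_shape, dtype=patches[0].dtype)
    for seg, (z, y, x) in zip(patches, positions):
        out[z:z + patch[0], y:y + patch[1], x:x + patch[2]] = seg
    return out[:original_shape[0], :original_shape[1], :original_shape[2]]

# Example with a dummy MRI-sized volume (240 x 240 x 155 voxels, as in BraTS).
vol = np.random.rand(240, 240, 155).astype(np.float32)
patches, positions, padded_shape = split_into_subvolumes(vol)
# Each patch would normally be passed through the segmentation model;
# the identity mapping below is only a placeholder.
segmented = [p for p in patches]
restored = recombine(segmented, positions, padded_shape, vol.shape)
assert restored.shape == vol.shape
```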
The loss function is minimized using the gradient descent algorithm. The raw pixel image is mapped through a score function to obtain class scores, and the loss function measures the quality of a particular set of parameters: it depends on how well the predicted results agree with the ground truth labels of the training data. In order to increase the precision, calculating the loss function is extremely necessary. When the loss is high, the precision will be very poor; similarly, when the loss is minimal, the precision will be high. The value of the loss function is evaluated at each step of the gradient descent algorithm, which uses the gradient of the loss function to update the parameters iteratively.
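For reference, a common formulation of the soft dice loss mentioned above is 1 − (2·Σ(p·g) + ε) / (Σp² + Σg² + ε), where p denotes the predicted probabilities and g the ground-truth labels. The sketch below is one possible TensorFlow implementation under that assumption; the smoothing constant ε is illustrative.

```python
import tensorflow as tf

def soft_dice_loss(y_true, y_pred, epsilon=1e-6):
    # Sum over the spatial axes (depth, height, width), keep batch and class axes.
    axes = (1, 2, 3)
    numerator = 2.0 * tf.reduce_sum(y_true * y_pred, axis=axes)
    denominator = tf.reduce_sum(tf.square(y_true) + tf.square(y_pred), axis=axes)
    dice = (numerator + epsilon) / (denominator + epsilon)
    # Average the per-class, per-sample dice scores and turn them into a loss.
    return 1.0 - tf.reduce_mean(dice)

# Example usage, e.g. with the 3D U-Net sketched earlier:
# model.compile(optimizer="adam", loss=soft_dice_loss)
```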
3.2. Classification model

3.2.1. Dataset
The classification model is based on the Brain Tumor Classification (MRI) Kaggle dataset. This dataset is split into training and testing sets, accumulating 3264 files categorized as glioma, meningioma, pituitary, and no-tumor images. Since this is a classification model, this dataset aids in the accurate and precise training and testing of the model.

3.2.2. Convolutional neural network
Neural network architectures are inspired by the biological human brain. Neural networks are primarily used to quantify vectors, approximate data, cluster data, match patterns, optimize, and classify functions. Based on their connections, neural networks are categorized into three groups, viz., (a) feedback, (b) feedforward, and (c) recurrent networks. Further, a neural network can be classified as a single-layer network or a multi-layer neural network.

A standard neural network does not exploit the spatial structure of an image. In a convolutional neural network, however, images are processed as volumes (i.e., with length, width, and height). The convolutional neural network (CNN) consists of an input layer, a convolution layer, and a rectified linear unit (ReLU). The input image is divided into several small regions in the convolution layer. In the ReLU layer, element-wise activation is performed, and an optional pooling layer, used primarily for down-sampling, can follow. A class (label) score, expressed as a probability between 0 and 1, is produced by the last, fully connected layer.

Fig. 3 shows the block diagram of brain tumor classification based on the neural network. The classification of brain tumors based on a CNN is split into two stages: (a) training and (b) testing. The images are categorized by assigning labels (tumor, non-tumor, etc.) to the various classes. In the training stage, pre-processing, feature extraction, and loss-function-based classification are carried out to produce a prediction model. First, the image collection is labelled for training, and then each image is resized in the pre-processing step. Finally, the convolutional neural network is used for the automated detection of brain tumors.

The brain image dataset used for this model is taken from Kaggle. If an untrained network were used, the model would have to be trained from the first layer to the last layer, which is very time-consuming and also affects the outcome. So, for the classification task, a pre-trained model is used to prevent this issue. In the proposed model, only the last layer is trained during implementation. As a result, the proposed model has a short computing period with higher efficiency.

3.2.3. Proposed classification model
The proposed model in this paper is a newly developed, updated CNN architecture. The design has 16 layers to enable the classifier to efficiently classify the brain tumor images. The configuration of the implemented CNN architecture is presented in Fig. 4.
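As a hedged illustration only (the actual 16-layer configuration is the one shown in Fig. 4 and is not reproduced here), a small Keras CNN classifier for the four classes of the Kaggle dataset could look as follows; the 224×224 input size, the filter counts, and the optimizer are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(224, 224, 3), n_classes=4):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Stacked convolution + ReLU + pooling blocks, as described above.
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        # Fully connected output layer: class probabilities between 0 and 1.
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
model.summary()
```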
4. Results and discussion

The results of the segmentation and classification models are explained in this section.

4.1. Segmentation results

The results of the segmentation model are presented in this section.
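The dice similarity coefficient listed among the evaluation metrics can be computed directly from a predicted and a ground-truth binary mask. The following NumPy sketch uses synthetic masks purely for illustration.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, epsilon=1e-6):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + epsilon) / (pred.sum() + true.sum() + epsilon)

# Synthetic example: two random binary volumes.
pred = np.random.rand(64, 64, 64) > 0.5
true = np.random.rand(64, 64, 64) > 0.5
print(f"Dice similarity coefficient: {dice_coefficient(pred, true):.3f}")
```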
4.2. Classification results

The results of the classification model are presented in this section.

For updating the weight vector, loss functions are used, comparing the labelled outputs with the measured model outputs. This paper uses two widely used methods, gradient descent and the mean-squared error. In mathematical decision and optimization theory, a loss function or a cost function maps an event with one or more variables to a real number that intuitively represents some cost associated with the event.
• Precision: Precision is the fraction of the retrieved instances that are actually relevant, i.e., Precision = TP / (TP + FP), where TP and FP denote true and false positives.
• Recall: Sensitivity is the term for recall. It is the fraction of the total number of relevant instances that were actually retrieved, i.e., Recall = TP / (TP + FN), where FN denotes false negatives.
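The per-class precision, recall, F1-score, and support reported later in Tables 3 and 4 can be obtained from the true and predicted labels with scikit-learn; the labels in the minimal sketch below are synthetic.

```python
from sklearn.metrics import classification_report

# Synthetic ground-truth and predicted labels for the four classes.
classes = ["glioma", "meningioma", "no tumor", "pituitary"]
y_true = ["glioma", "meningioma", "no tumor", "pituitary", "glioma", "no tumor"]
y_pred = ["meningioma", "meningioma", "no tumor", "pituitary", "glioma", "glioma"]

# Precision = TP / (TP + FP) and Recall = TP / (TP + FN), per class, plus support.
print(classification_report(y_true, y_pred, labels=classes, zero_division=0))
```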
Fig. 10. Training images used for training the classification model.
Fig. 12. Loss and accuracy plots for the classification model based on VGG16.
Fig. 13. Loss and accuracy plots for the classification model based on TensorFlow.
Table 3. Various classification performance metrics for the VGG16.

Class        Precision   Recall   F1-score   Support
Glioma       0.90        0.18     0.30       100
Meningioma   0.76        0.83     0.79       115
No tumor     0.54        0.95     0.69       105
Pituitary    0.81        0.69     0.74       74
Average      0.75        0.66     0.63       394

Table 4. Various classification performance metrics for the TensorFlow model.

Class        Precision   Recall   F1-score   Support
Glioma       0.86        0.19     0.31       100
Meningioma   0.81        0.83     0.82       115
No tumor     0.56        0.96     0.71       105
Pituitary    0.86        0.85     0.86       74
Average      0.77        0.71     0.67       394
4.2.2. Comparison of models for classification of brain tumors
Alqudah (Alqudah et al., 2020) proposed a comparable architecture of 18 layers for OCT image classification. The proposed work has also been compared with two different techniques: the first one considers the transfer learning approach using the VGG-16 model as proposed in Kaur and Gandhi (2019), and the second methodology is based upon the use of MRI image classification techniques using TensorFlow (Pawlowski et al., 2017). The plots for loss and accuracy are shown in Figs. 12 and 13 for VGG-16 and the image classification techniques, respectively. Tables 3 and 4 present the classification results for both of these models.

Table 5 compiles the comparisons of the test accuracy and the test loss for all the methods. It can be observed that the proposed classification model gives the best performance and the minimum values for the loss function, and has been established to be more efficacious than the state-of-the-art techniques. It can be concluded that advanced neural networks like CNNs have huge potential for brain tumor detection.

Table 5. Compared various classification performance metrics.

Model                                           Test accuracy (%)   Test loss
Proposed CNN model                              90                  0.63
VGG16 (Kaur & Gandhi, 2019)                     67                  4.3
TensorFlow example (Pawlowski et al., 2017)     71                  4.5
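The VGG-16 transfer-learning baseline follows the usual pattern of freezing the pre-trained convolutional base and training only a new classification head; the Keras sketch below is an illustration under that assumption (input size, head layout, and optimizer are not taken from the compared works).

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Pre-trained convolutional base; its weights stay frozen.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# New classification head for the four tumor classes.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```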
An online platform or service can also be developed that caters to the need of the patient to upload their scans, from which the severity of the tumor can then be estimated. This will not only improve the disease prognosis but also make such services accessible to the masses globally. In the future, the potential applications lie in telemedicine, where, via a digital platform like a website or a mobile application, the records of the scans of the patients can be stored and made accessible to the doctors so that the tumor progression can be monitored automatically. This will aid in the wider applicability of artificial intelligence to the global population.

Further, the incorporation of new AI techniques will enhance the results and overall consistency of the classifications. As more and more datasets are now available, the number of output classes can also be expanded, which will significantly improve the total classification precision. Also, increasing the number of hidden layers in the deep neural network will be another solution for improving the results. For the developed CNN models, hyper-parameter tuning can also be done to further improve the segmentation and classification precision.

Automated brain tumor segmentation is still a challenge for cancer diagnosis. The availability of public data sets and the well-accepted BraTS benchmark have recently offered researchers a popular tool for developing and critically evaluating their approaches against existing techniques. CNN has the benefit of automatically learning representative complex features directly from multi-modal MRI images for both healthy brain tissues and tumor tissues. The development of new methods, such as positron emission tomography (PET), magnetic resonance spectroscopy (MRS), and diffusion tensor imaging (DTI), may improve the current methods through further improvements and modifications to the CNN architectures and by providing supplementary information from other imaging modalities. By having more brain MRI images with varying weights and different methods for contrast improvement, this quality can also be enhanced by allowing the design to be theoretically more general and stronger for large image databases.

4.3. Limitations and future scope

(a) Limitations: To train DNNs, large training samples are desired. The restricted data availability and high computational cost are the main disadvantages of applying 3D deep learning to medical imaging. Dimensionality is another problem, as it is difficult to process and augment the 3D data, which requires high-end GPUs. Another limitation is with the use of 2D ANN kernels, as they cannot be used for the 3D volumetric data.
(b) Future Scope: Regardless of the high computation costs, 3D DNNs have an incredible scope in several medical applications. Using interpolation techniques, the overall size of these medical image volumes can be significantly reduced. Concepts of ghost imaging can also be incorporated to enhance the dataset, and generative adversarial networks can also be employed. It can be of great help for clinical experts if exact location and early detection of the tumor can be done with ease. This can help in reducing the human errors and the variation in results due to manual judgements.

5. Conclusions and future scope

In this study, segmentation and detection of brain tumors have been done using deep neural networks. In the present study, the MRI image dataset is used to train the neural network, and then soft dice loss is used to detect losses in the segmented model. Later, the model is trained, rectifying those losses and giving the segmented image as output. Initially, the 3D MRI model is divided into 3D sub-models to pass through the segmentation model. There are two datasets used for the CNN models. Every dataset is taken from different patients from different parts of the world to overcome the problem of generalization. Secondly, the CNN model is implemented in particular for the three most popular kinds of brain tumor, i.e., glioma, meningioma, and pituitary, to be classified immediately without involving the use of area-based pre-processing procedures. The results obtained establish the efficacy of the proposed work when compared to the models already proposed in the literature.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

Abiwinanda, N., Hanif, M., Hesaputra, S. T., Handayani, A., & Mengko, T. R. (2019). Brain tumor classification using convolutional neural network. In Proceedings of the world congress on medical physics and biomedical engineering 2018 (pp. 183–189). Springer.
Afshar, P., Mohammadi, A., & Plataniotis, K. N. (2018). Brain tumor type classification via capsule networks. In Proceedings of the 2018 25th IEEE international conference on image processing (ICIP) (pp. 3129–3133). IEEE.
Alqudah, A. M., Alquraan, H., Qasmieh, I. A., Alqudah, A., & Al-Sharu, W. (2020). Brain tumor classification using deep learning technique – a comparison between cropped, uncropped, and segmented lesion images with different sizes. arXiv preprint arXiv:2001.0884.
Amin, J., Sharif, M., Haldorai, A., Yasmin, M., & Sundar Nayak, R. (2021). Brain tumor detection and classification using machine learning: A comprehensive survey. Complex & Intelligent Systems, 1–23.
Ayadi, W., Elhamzi, W., Charfi, I., & Atri, M. (2021). Deep CNN for brain tumor classification. Neural Processing Letters, 53, 671–700.
Badža, M. M., & Barjaktarović, M. Č. (2020). Classification of brain tumors from MRI images using a convolutional neural network. Applied Sciences, 10, 1999.
Bedekar, P., Prasad, N., Hagir, R., & Singh, N. (2018). Automated brain tumor detection using image processing. International Journal of Engineering Research and Technology, 5(1).
Chattopadhyay, A., & Maitra, M. (2022). MRI-based brain tumor image detection using CNN based deep learning method. Neuroscience Informatics, Article 100060.
Deepak, S., & Ameer, P. M. (2021). Automated categorization of brain tumor from MRI using CNN features and SVM. Journal of Ambient Intelligence and Humanized Computing, 12, 8357–8369.
Jayachandran, A., & Dhanasekaran, R. (2013). Brain tumor detection and classification of MR images using texture features and fuzzy SVM classifier. Research Journal of Applied Sciences, Engineering and Technology, 6(1), 2264–2269.
Jin, Q., Meng, Z., Sun, C., Cui, H., & Su, R. (2020). RA-UNet: A hybrid deep attention-aware network to extract liver and tumor in CT scans. Frontiers in Bioengineering and Biotechnology, 1471.
Kalaiselvi, T., Padmapriya, S. T., Sriramakrishnan, P., & Somasundaram, K. (2020). Deriving tumor detection models using convolutional neural networks from MRI of human brain scans. International Journal of Information Technology, 1–6.
Kalaivani, S., & Seetharaman, K. (2022). A three-stage ensemble boosted convolutional neural network for classification and analysis of COVID-19 chest x-ray images. International Journal of Cognitive Computing in Engineering, 35–45.
Kaur, T., & Gandhi, T. K. (2019). Automated brain image classification based on VGG-16 and transfer learning. In Proceedings of the international conference on information technology (ICIT) (pp. 94–98). IEEE.
Kavita, P., Alli, D. R., & Rao, A. B. (2022). Study of image fusion optimization techniques for medical applications. International Journal of Cognitive Computing in Engineering.
Li, M., Kuang, L., Xu, S., & Sha, Z. (2019). Brain tumor detection based on multimodal information fusion and convolutional neural network. IEEE Access, 180134–180146.
Liu, J., Pan, Y., Li, M., Chen, Z., Tang, L., Lu, C., et al. (2018). Applications of deep learning to MRI images: A survey. Big Data Mining and Analytics, 1, 1–18.
Mohsen, H., El-Dahshan, E. S. A., El-Horbaty, E. S. M., & Salem, A. B. M. (2018). Classification using deep learning neural networks for brain tumors. Future Computing and Informatics Journal, 3, 68–71.
Murthy, M. Y. B., Koteswararao, A., & Babu, M. S. (2022). Adaptive fuzzy deformable fusion and optimized CNN with ensemble classification for automated brain tumor diagnosis. Biomedical Engineering Letters, 12, 37–58.
Murthy, T. S. D., & Sadashivappa, G. (2014). Brain tumor segmentation using thresholding, morphological operations and extraction of features of tumor. In 2014 international conference on advances in electronics computers and communications (pp. 1–6). IEEE.
Muruganantham, P., & Balakrishnan, S. M. (2021). A survey on deep learning models for wireless capsule endoscopy image analysis. International Journal of Cognitive Computing in Engineering, 83–92.
Özyurt, F., Sert, E., Avci, E., & Dogantekin, E. (2019). Brain tumor detection based on convolutional neural network with neutrosophic expert maximum fuzzy sure entropy. Measurement, 14, Article 106830.
Pawlowski, N., Ktena, S. I., Lee, M. C. H., Kainz, B., Rueckert, D., Glocker, B., et al. (2017). DLTK: State of the art reference implementations for deep learning on medical images. arXiv preprint arXiv:1711.0685.
Rehman, A., et al. (2021). Microscopic brain tumor detection and classification using 3D CNN and feature selection architecture. Microscopy Research and Technique, 84, 133–149.
Sobhaninia, Z., Rezaei, S., Noroozi, A., Ahmadi, M., Zarrabi, H., Karimi, N., et al. (2018). Brain tumor segmentation using deep learning by type specific sorting of images.
Suganthe, R. C., Revathi, G., Monisha, S., & Pavithran, R. (2020). Deep learning based brain tumor classification using magnetic resonance imaging. Journal of Critical Reviews, 7, 347–350.
Ucuzal, H., Yaşar, Ş., & Çolak, C. (2019). Classification of brain tumor types by deep learning with convolutional neural network on magnetic resonance images using a developed web-based interface. In 2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) (pp. 1–5). IEEE.
Wadhwa, A., Bhardwaj, A., & Verma, V. S. (2019). A review on brain tumor segmentation of MRI images. Magnetic Resonance Imaging, 6, 247–259.
Yadav, P. S., & Sahu, C. (2013). Detection of brain tumour using self organizing map with K-mean algorithm. International Journal on Advanced Computer Theory and Engineering, 1(2), 2319–2326.
Zhang, Y. D., Satapathy, S. C., Wu, D., Guttery, D. S., Górriz, J. M., & Wang, S. H. (2021a). Improving ductal carcinoma in situ classification by convolutional neural network with exponential linear unit and rank-based weighted pooling. Complex & Intelligent Systems, 7(3), 1295–1310.
Zhang, Y. D., Satapathy, S. C., Guttery, S., Górriz, J. M., Wang, S. H., et al. (2021b). Improved breast cancer classification through combining graph convolutional network and convolutional neural network. Information Processing & Management, 58, Article 102439.
Zhou, S., Nie, D., Adeli, E., Yin, J., Lian, J., & Shen, D. (2020). High-resolution encoder–decoder networks for low-contrast medical image segmentation. IEEE Transactions on Image Processing, 29, 461–475. https://doi.org/10.1109/TIP.2019.2919937.