Thesis Title: Tumor Identification in CT and MRI Imaging Using Deep Learning for Accurate Diagnosis
• BACHELOR STUDENT: ANJEZA KANXHA
• EPOKA UNIVERSITY
ABSTRACT
• Deep learning with convolutional neural networks (CNNs) has become a powerful tool for solving a wide range of real-life problems. Image classification and medical imaging are two applications of deep learning, a subset of machine learning. Deep learning achieves high success rates thanks to its variety of algorithms and its ability to learn complex patterns directly from data.
• This thesis tests the effectiveness of a convolutional neural network at detecting tumors and classifying medical images as tumoral or healthy. The model is trained and tested on our chosen datasets and evaluated for accuracy and other performance metrics. Performance was satisfactory, with high accuracy during both training and testing on unseen data.
ACKNOWLEDGEMENTS
• Special thanks to Assoc. Prof. Dr. Dimitros Karras for guidance and support throughout the research process.
TABLE OF CONTENTS
• 1. INTRODUCTION
• 2. OBJECTIVES
• 3. LITERATURE REVIEW
• 4. METHODOLOGY
• 4.1. MODEL ARCHITECTURE
• 4.2. EXPERIMENTAL SETUP AND TRAINING
• 5. RESULTS
• 5.1. SUMMARY OF RESULTS
• 6. DISCUSSION
• 7. CONCLUSIONS
• 8. FUTURE WORK
• 9. REFERENCES
1. INTRODUCTION
• Deep learning is a subfield of machine learning that has achieved highly successful results in medical image analysis. CNNs have shown very good performance at extracting features from complex images and classifying them. Predictions with high accuracy are essential proof of successful detection. Using the power of CNNs, it is possible to develop automated systems for detecting lung cancer from CT scans and brain tumors from MRIs, supporting healthcare professionals in making swifter and better decisions about a patient's diagnosis and treatment.
2. OBJECTIVES
• 1. Collect datasets of lung and brain radiographic images to train and test our model, making sure the datasets provide diverse cases and sizes.
• 2. Enhance the data through preprocessing so it is suitable for our detection system and avoids imbalances in the dataset.
• 3. Select an appropriate deep-learning algorithm that delivers high performance for tumor detection in medical images.
• 4. Achieve high accuracy and overall performance when using our model to identify the presence of tumors.
• 5. Create a tumor detection system that is reliable and usable in real-life scenarios by medical professionals.
• 6. Create a multipurpose model tested on several datasets.
3. LITERATURE REVIEW
In this section, the papers used for the thesis are listed. I have analyzed 8 papers, mostly from the last ten years, all from IEEE Xplore. For each paper, the title, authors, year of publication, and a short review of its content are given.
[4] Automatic Lung Cancer Detection and Classification (ALCDC) System Using Convolutional Neural Network. Wadood Abdul, 2021, IEEE.
Content: Develops a CNN-based system for detecting and classifying lung cancer from medical imaging data.
Solved problems: Achieved high accuracy in lung cancer detection, which can assist in early diagnosis.
Future work: Explore additional features and larger datasets for improved model generalization.

[5] Detection of Lung Nodules using Convolution Neural Network: A Review. Babu Kumar S, M Vinoth Kumar, 2020, IEEE.
Content: Reviews various CNN approaches for detecting lung nodules, a key indicator of lung cancer.
Solved problems: Summarizes the effectiveness of different CNN architectures in improving detection rates.
Future work: Develop robust models that handle varied image quality and reduce false positives.

[6] Deep Learning Algorithm for Classification and Prediction of Lung Cancer using CT Scan Images. Diksha Makshe, Kannan Rajeswari, Ruchita Tekade, 2020, IEEE.
Content: Proposes a deep learning algorithm to classify and predict lung cancer from CT scan images.
Solved problems: Demonstrates the potential of deep learning in providing predictive insights into lung cancer stages.
Future work: Further validation on multicenter datasets to establish clinical applicability.

[7] Lung Cancer Detection from Computed Tomography (CT) Scans using Convolutional Neural Network. M. Bikromjit Khumancha, Aarti Barai, C.B. Rama Rao, 2019, IEEE.
Content: Uses a CNN to detect lung cancer from CT scans, emphasizing the model's sensitivity and specificity.
Solved problems: Improved detection rates, which may reduce false positives and negatives in lung cancer screening.
Future work: Integration with real-time imaging systems for dynamic diagnosis support.

[8] A novel approach for detection of Lung Cancer using Digital Image Processing and Convolution Neural Networks. Rohit Y. Bhalerao, Harsh P. Jani, Rachana K. Gaitonde, Vinit Raut, 2019, IEEE.
Content: Introduces a combined approach of digital image processing and CNNs for early lung cancer detection.
Solved problems: Enhances image processing techniques that improve the initial stages of lung cancer screening.
Future work: Extend the approach to other types of cancer detection using multimodal imaging data.

[9] Gradient-based learning applied to document recognition. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, 1998, IEEE.
Content: Discusses gradient-based learning techniques in the context of document recognition.
Solved problems: Highlights effective techniques for distinguishing between tumor types, which can guide treatment decisions.
Future work: Explore more efficient gradient-based optimization algorithms for faster training times.

[14] A Basic Concept of Image Classification for Covid-19 Patients Using Chest CT Scan and Convolutional Neural Network. Irma Permata Sari, Widodo, Murien Nugraheni, Putra Wanda, 2020, IEEE.
Content: Proposes a CNN method to classify COVID-19 patients from chest CT scans.
Solved problems: Contributes to automated COVID-19 detection, crucial for triaging patients.
Future work: Adapt the model to detect other illnesses for broader clinical use.

[15] Brain Tumor Detection and Classification Using CNN Algorithm and Deep Learning Techniques. Sultan B. Fayyadh, Abdullahi A. Ibrahim, 2020, IEEE.
Content: Develops a deep learning technique for detecting and classifying brain tumors using CNNs.
Solved problems: High accuracy in classification, essential for appropriate treatment planning.
Future work: Implement real-time analysis and extend the approach to other neurological conditions.
SUMMARY OF LITERATURE REVIEW
• Studies [4, 5, 6, 7, 8] focus on the development of automated systems for detecting lung cancer from CT scans using convolutional neural networks. These aim to identify lung nodules and classify them as tumoral or not. These references are cited because part of our study concerns CT scans and cancer cases in the lung. [14] and [15] focus on the application of CNN architectures to medical imaging such as MRI and CT scans. We have cited [14] and [15] as a foundation, since these previous studies establish the application of CNNs to different types of medical images.
• Technological and methodological
• Papers [4, 7, 8] demonstrate the use of sophisticated neural network models for image classification in medical imaging. We have cited them in the sections on the CNN introduction, input layers, fully connected layers, and activation layers.
• Theoretical contribution
• Gradient-based learning [9]: the work of Y. LeCun et al. laid the groundwork for many advancements in using convolutional neural networks on image data. We have cited this because our model uses gradient-based learning.
4. METHODOLOGY
MRIs: training and validation loss per epoch
Epoch   Train Loss   Val Loss
10      0.303000     0.292715
20      0.236320     0.228751
30      0.193609     0.184619
40      0.159293     0.184990
50      0.141757     0.136623
60      0.127096     0.135775
70      0.115164     0.104141
80      0.102495     0.092318
90      0.090556     0.095216
100     0.076802     0.078608
110     0.072944     0.060831
120     0.057174     0.055813
130     0.070401     0.071685
140     0.069931     0.069946
150     0.069924     0.069946
160     0.069931     0.069941
170     0.069936     0.069940
180     0.070772     0.069936
190     0.070056     0.069923
200     0.069925     0.069919
210     0.069928     0.069909
220     0.069904     0.069906
230     0.069884     0.069905
240     0.069885     0.069896
250     0.069910     0.069897
260     0.069898     0.069899
270     0.069876     0.069901
280     0.069872     0.069891
290     0.070277     0.069893
300     0.069867     0.069892
310     0.069882     0.069899
320     0.069882     0.069896
330     0.069994     0.069896
340     0.069872     0.069899
350     0.069885     0.069887
360     0.069921     0.069887
370     0.069880     0.069880
380     0.069868     0.069888
390     0.069910     0.069886
400     0.070712     0.069876
CT scans: training and validation loss per epoch
Epoch   Train Loss   Val Loss
10      0.572070     0.568871
20      0.435668     0.428367
30      0.341289     0.325231
40      0.256313     0.252201
50      0.199934     0.222654
60      0.157346     0.160207
70      0.121625     0.114409
80      0.091725     0.083656
90      0.066635     0.068710
100     0.048773     0.078657
110     0.030153     0.045643
120     0.025610     0.024685
130     0.026984     0.023770
140     0.024970     0.021994
150     0.023636     0.021074
160     0.021972     0.021581
170     0.019504     0.018281
180     0.021210     0.017240
190     0.017442     0.016226
200     0.015930     0.015202
210     0.015096     0.014266
220     0.013987     0.013862
230     0.013405     0.012737
240     0.011852     0.011655
250     0.012133     0.011558
260     0.010904     0.010508
270     0.010212     0.010178
280     0.010260     0.010015
290     0.010065     0.010053
300     0.010098     0.010055
310     0.010164     0.010055
320     0.010075     0.010054
330     0.010067     0.010053
340     0.010053     0.010052
350     0.010111     0.010052
360     0.010166     0.010052
370     0.010050     0.010049
380     0.010154     0.010047
390     0.010097     0.010046
400     0.010101     0.010044
4.1. MODEL ARCHITECTURE
• self.cnn_model: stores the sequential container (nn.Sequential), which executes operations in sequential order.
• The Conv2d layer is the first convolutional layer, with 3 input channels (RGB), 6 output channels, and a 5x5 kernel.
• Tanh activation (nn.Tanh()): the activation function that scales the output of the previous layer to the range -1 to 1, introducing non-linearity into the model.
• Average pooling layer (nn.AvgPool2d): performs average pooling with a 2x2 window and stride 5, reducing the spatial dimensions for the next convolutional layer.
• Second Conv2d layer: the second convolutional layer (in_channels=6, out_channels=16, kernel_size=5). It increases the depth from 6 to 16 output channels with the same kernel size, to learn more complex features.
• Second Tanh activation: non-linearity in deeper parts of the network.
• Second average pooling layer: further reduces dimensionality, simplifying the network.
• Fully connected layers: self.fc_model is a Sequential container that holds the dense layers.
• Linear layer: a fully connected layer that takes the flattened output of the previous layer (256 features) and outputs 120 features.
• Third Tanh activation: non-linearity is introduced at this depth.
• Second linear layer: reduces dimensionality from 120 features to 84 features.
• Fourth Tanh activation: continues introducing non-linearity.
• Output linear layer: maps the 84 features to a single output feature, for binary classification.
• Forward pass: this method defines the data flow.
• x = self.cnn_model(x): passes the input through the convolutional layers.
• x = x.view(x.size(0), -1): flattens the output of the convolutional part so it can be fed into the fully connected layers.
• x = self.fc_model(x): passes the flattened output through the dense layers.
• Finally, the sigmoid function is applied to the output so it ranges between 0 and 1, which suits this study's binary classification.
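Putting the layers above together, a minimal PyTorch sketch of this architecture could look as follows. The 128x128 input size is an assumption chosen so that the flattened feature vector has exactly the 256 elements stated above; it is not necessarily the project's exact input resolution.

```python
import torch
import torch.nn as nn

class TumorCNN(nn.Module):
    """LeNet-style binary classifier sketched from the description in Section 4.1."""

    def __init__(self):
        super().__init__()
        self.cnn_model = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),        # RGB input -> 6 feature maps
            nn.Tanh(),                             # scale activations to (-1, 1)
            nn.AvgPool2d(kernel_size=2, stride=5), # shrink spatial dimensions
            nn.Conv2d(6, 16, kernel_size=5),       # deepen to 16 feature maps
            nn.Tanh(),
            nn.AvgPool2d(kernel_size=2, stride=5),
        )
        self.fc_model = nn.Sequential(
            nn.Linear(256, 120),  # flattened 16 x 4 x 4 = 256 features
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, 1),     # single output for tumoral/healthy
        )

    def forward(self, x):
        x = self.cnn_model(x)
        x = x.view(x.size(0), -1)        # flatten for the dense layers
        x = self.fc_model(x)
        return torch.sigmoid(x)          # probability in (0, 1)

# With 128x128 RGB inputs, each image yields one probability.
model = TumorCNN()
out = model(torch.randn(2, 3, 128, 128))
```

With a 128x128 input, the stated layers give 124 -> 25 -> 21 -> 4 spatial steps, so the flatten produces 16 x 4 x 4 = 256 features, matching the first linear layer.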
4.2 EXPERIMENTAL SETUP AND TRAINING
• In the hardware implementation, the main task is device configuration: checking whether a GPU is available and setting the device accordingly. Tensor operations on the GPU require moving tensors to the chosen device (GPU/CPU), ensuring full use of the hardware's capabilities.
• In our case, we used a CPU, which needs more execution time than a GPU. The running time also depends on the size of the dataset: the larger the dataset, the longer the execution time. With CPU support, the time needed for the full run (training, validation, and testing) on a dataset of around 2,000 images is around 35 minutes, while the MRI dataset of 10,000 images needs about 3 hours to execute completely.
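The device setup and one training step can be sketched as below. This is an illustration rather than the project's exact code: the stand-in model, the BCELoss criterion, and the Adam optimizer are assumptions consistent with the sigmoid output and the gradient-based learning described earlier, and the random tensors stand in for real image batches.

```python
import torch
import torch.nn as nn

# Pick the GPU when available; otherwise fall back to the CPU (our runs used CPU).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tiny stand-in for the CNN of Section 4.1, ending in a sigmoid probability.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 128 * 128, 1),
    nn.Sigmoid(),
).to(device)

criterion = nn.BCELoss()  # binary cross-entropy for tumoral/healthy labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch: 8 RGB images of 128x128 with random labels.
images = torch.randn(8, 3, 128, 128).to(device)
labels = torch.randint(0, 2, (8, 1)).float().to(device)

optimizer.zero_grad()
outputs = model(images)            # probabilities in (0, 1)
loss = criterion(outputs, labels)  # compare predictions with labels
loss.backward()                    # gradient-based learning step
optimizer.step()
print(f"Train Loss: {loss.item():.6f}")
```

Moving both the model and every batch to the same device is what lets the identical script run on either CPU or GPU.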
5. RESULTS
Performance analysis in CT scans
Performance metric   Training    Testing
Accuracy             0.996767    0.995455
Precision            0.998528    0.992806
Recall               0.996261    1.000000
F1-score             0.997393    0.996390
Performance analysis in MRIs
Performance metric   Training    Testing
Accuracy             0.989777    0.995018
Precision            0.992681    0.997021
Recall               0.993028    0.996032
F1-score             0.992854    0.996526
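The reported metrics are the standard binary-classification measures. As an illustration (with hypothetical labels, not our dataset), they can be computed from the confusion-matrix counts:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary (tumoral=1 / healthy=0) labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0  # how many flagged tumors are real
    recall = tp / (tp + fn) if tp + fn else 0.0     # how many real tumors are caught
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical example: one true positive, one false negative, etc.
print(binary_metrics([1, 1, 0, 0], [1, 0, 0, 1]))
```

For tumor screening, recall is especially important, since a false negative means a missed tumor; the testing recall of 1.000000 on CT scans reflects that no tumoral image was missed.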
5.1 SUMMARY OF RESULTS
• In our study, a CNN is used to identify tumors in two different medical imaging modalities, CT and MRI. The model is trained and tested on both CT scans of the lung and MRIs of the brain. The study proved successful in identifying and classifying medical images as tumoral or not, with high accuracy. For the detection of brain tumors in MRIs, the training accuracy was 98.98% and the testing accuracy 99.50%. For CT lung scans, the training accuracy was 99.68% and the testing accuracy 99.55%.
                        MRI dataset   CT scans dataset
Accuracy (train phase)  0.989777      0.996767
Accuracy (test phase)   0.995018      0.995455
In the loss plots, the blue line is the training loss and the red line is the validation loss.
Convergence of losses: as the number of epochs increases, the blue and red lines decrease significantly. This steep decrease shows that the model has learned effectively.
Stability and overfitting: the two lines remain close to each other, which means the model is not just memorizing; it is also generalizing well to unseen data. There are no signs of underfitting, because the loss is low. There is also no sign of significant overfitting, since the validation loss decreases along with the training loss.
6. DISCUSSION
• The model in our project demonstrates high performance on both types of datasets used. CT and MRI are two different medical imaging techniques, but several factors in our model have a high impact on both:
• The use of a CNN (convolutional neural network), which is well suited to capturing image features. In both CT scans and MRIs, edges, shapes, and textures are important, so the use of a CNN has a high impact on the results achieved.
• Preprocessing techniques such as resizing, normalization, and augmentation have also helped minimize the differences between CT scans and MRIs, providing high performance on both types.
7. CONCLUSIONS
• This thesis uses deep learning, specifically convolutional neural networks, on medical images, aiming at tumor identification. We have created a model that aids early diagnosis for both types of medical images, CT scans and MRIs. The results demonstrated the high accuracy and effectiveness of deep learning in learning complex images. The model is effective on both MRIs and CT scans: the training accuracy for the detection of brain tumors in MRIs was 98.98%, while for CT lung scans it was 99.68%. The model also achieved satisfying results on unseen data.
8. FUTURE WORK
• For future work, this study can be extended by incorporating larger and more diverse datasets, including datasets collected from different demographic and geographic backgrounds.