Deep Learning Approach for Early Detection of Alzheimer's Disease
https://doi.org/10.1007/s12559-021-09946-2
Abstract
Alzheimer's disease (AD) is a chronic, irreversible brain disorder for which no effective cure currently exists. However, available medicines can delay its progress. Therefore, the early detection of AD plays a crucial role in preventing and controlling its progression. The main objective is to design an end-to-end framework for early detection of Alzheimer's disease and medical image classification for various AD stages. A deep learning approach, specifically convolutional neural networks (CNN), is used in this work. Four stages of the AD spectrum are multi-classified. Furthermore, separate binary medical image classifications are implemented between each pair of AD stages. Two methods are used to classify the medical images and detect AD. The first method uses simple CNN architectures that deal with 2D and 3D structural brain scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset based on 2D and 3D convolution. The second method applies the transfer learning principle to take advantage of pre-trained models for medical image classification, such as the VGG19 model. Due to the COVID-19 pandemic, it is difficult for people to go to hospitals periodically while avoiding gatherings and infections. As a result, an Alzheimer's checking web application is proposed using the final qualified proposed architectures. It helps doctors and patients to check AD remotely, determines the AD stage of the patient based on the AD spectrum, and advises the patient according to that stage. Nine performance metrics are used in the evaluation and the comparison between the two methods. The experimental results prove that the CNN architectures of the first method have suitable, simple structures that reduce computational complexity, memory requirements, and overfitting, and provide manageable time. Besides, they achieve very promising accuracies of 93.61% and 95.17% for 2D and 3D multi-class AD stage classification, respectively. The VGG19 pre-trained model is fine-tuned and achieved an accuracy of 97% for multi-class AD stage classification.
Keywords Medical image classification · Alzheimer’s disease · Convolutional neural network (CNN) · Deep learning ·
Brain MRI
Imaging (DTI) modalities fusion on hippocampal Regions of Interest (RoI). They compared the performance of that approach with the AlexNet-based network. Higher performance was reported by 3D Inception than by AlexNet.
A HadNet architecture was proposed to study Alzheimer's spectrum MRI by Sahumbaiev et al. [26]. The dataset of MRI images is spatially normalized by the Statistical Parametric Mapping (SPM) toolbox and skull-stripped for better training. It is projected that when the HadNet architecture is improved, sensitivity and specificity would improve as well.
The model of Apolipoprotein E expression level 4 (APOe4) was suggested by Spasov et al. [27]. MRI scans, genetic measures, and clinical evaluation were used as inputs for the APOe4 model. Compared with pre-trained models such as AlexNet [28] and VGGNet [29], the model minimized computational complexity, overfitting, memory requirements, prototyping speed, and the number of parameters.
A novel CNN framework was proposed based on a multi-modal MRI analytical method using DTI or Functional Magnetic Resonance Imaging (fMRI) data by Wang et al. [30]. The framework classified AD, NC, and amnestic mild cognitive impairment (aMCI) patients. Although it achieved high classification accuracy, it is expected that using 3D convolution instead of 2D convolution would give better performance.
A shallow tuning of pre-trained models such as AlexNet, GoogleNet, and ResNet50 was suggested by Khagi et al. [31]. The main objective was to find the effect of each section of the layers on the results in natural image and medical image classification. The PFSECTL mathematical model was proposed by Jain et al. [32] based on CNN and the VGG-16 pre-trained model. It worked as a feature extractor for the classification task. The model supported the concept of transfer learning.
Ge et al. [33] developed a 3D multi-scale CNN (3DMSCNN) model. For AD diagnosis, 3DMSCNN was a new architecture. Additionally, they proposed an enhancement strategy and feature fusion for multi-scale features. A Graph Convolutional Neural Network (GCNN) classifier was proposed by Song et al. [34] based on graph-theoretic tools. They trained and validated the network using structural connectivity graphs, representing a multi-class model to classify the AD spectrum into four categories.
For the detection of AD, Liu et al. [35] used speech information. The features of the spectrogram were extracted and obtained from elderly speech data. The system relied on machine learning methods. Among the tested models, the logistic regression model gave the best results. Besides, a multi-model deep learning framework was proposed by Liu et al. [36]. Automatic hippocampal segmentation and AD classification were jointly performed based on CNN using structural MRI data. The learned features from the multi-task CNN and the 3D Densely Connected Convolutional Network (3D DenseNet) models were combined to classify the disease status.
A protocol was introduced by Impedovo et al. [37]. This protocol offered a "cognitive model" for evaluating the relationship between cognitive functions and handwriting processes in healthy subjects and cognitively impaired patients. The key goal was to establish an easy-to-use and non-invasive technique for neurodegenerative dementia diagnosis and monitoring during screening and follow-up. A 3D CNN architecture is applied to 4D fMRI images for classifying four AD stages (AD, EMCI, LMCI, NC) by Harshit et al. [38]. In addition to that, other CNN structures that deal with 3D MRI for different AD stage classifications are suggested by Silvia et al. [39] and Dan et al. [40]. A 3D Densely Connected Convolutional Network (3D DenseNet) is applied to 3D MRI images for 4-way classification by Juan Ruiz et al. [41].

Problem Statement and Plan of Solution

Recently, numerous architectures that can accommodate AD detection and medical image classification have been proposed in the literature, as seen in the "Related Work" section.
However, most of them lack transfer learning techniques, multi-class medical image classification, and an Alzheimer's disease checking web service to check AD stages and advise patients remotely. These issues have not been sufficiently discussed in the literature. So, the novelties of this study, with respect to the state-of-the-art techniques reviewed in the "Related Work" section, can be organized as follows:

• An end-to-end framework is applied for the early detection of Alzheimer's disease and medical image classification.
• Medical image classification is applied using two methods. The first method is based on simple CNN architectures that deal with 2D and 3D structural brain MRI; these architectures are based on 2D and 3D convolution. The second method uses transfer learning to take advantage of pre-trained models such as the VGG19 model.
• The main challenge for medical images is the small size of the available datasets. So, data augmentation techniques are applied to maximize the dataset's size and prevent the overfitting problem.
• Resampling methods, such as oversampling and downsampling, are used to overcome the imbalanced classes of the collected dataset.
• Three multi-class and 12 binary medical image classifications are experimented with across the four AD stages.
• The experimental results give high performance according to nine performance metrics.
• Due to the COVID-19 pandemic, it is difficult for people to go to hospitals periodically while avoiding gatherings and infections. Thus, an Alzheimer's disease checking web service for doctors and patients is proposed to check AD and determine its stage remotely. It then advises the patient according to the specified AD stage.

Methods and Materials

Early detection of Alzheimer's disease plays a crucial role in preventing and controlling its progress. Our goal is to propose a framework for the early detection and classification of the stages of Alzheimer's disease. A comprehensive explanation of the proposed E2AD2C framework workflow, the preprocessing algorithms, and the medical image classification methods is given in the next sub-sections.

The Proposed E2AD2C Framework

The proposed E2AD2C framework comprises six steps, which are as follows:

Step 1—Data Acquisition Step: All training data is collected from the ADNI dataset in the 2D, T1w MRI modality. It includes medical image descriptions such as Coronal, Sagittal, and Axial views in the DICOM format. The dataset consists of 300 patients divided into four classes: AD, EMCI, LMCI, and NC. Each class has 75 patients, with a total of 21,816 scans. The AD class contains 5764 images, EMCI has 5817 images, LMCI includes 3460 images, and NC has 6775 images. All medical data were derived with a size of 256 × 256 in 2D format. Table 1 depicts demographic data for the 300 subjects from the ADNI dataset. It gives an overview of the data, such as the number of patients in each class, the ratio of male to female patients in each class, and the mean age with the standard deviation (STD). Figure 2 shows three slices in a two-dimensional format. The slices were extracted from an MRI scan in the MR Accelerated Sagittal MPRAGE view, the MR Axial Field Mapping view, and the MR 3 Plane Localizer view.

Step 2—Preprocessing Step: The collected dataset suffers from imbalanced classes. To overcome this problem, we resample the dataset using two methods (oversampling and undersampling). Oversampling means copying instances of the under-represented class, and undersampling means deleting instances from the over-represented class. We apply the oversampling method to the AD, EMCI, and LMCI classes and the undersampling method to the NC class. After resampling, each class contains 6000 MRI images. As a result, the dataset becomes 24,000 images. The dataset is then processed, normalized, standardized, resized, denoised, and converted to a suitable format. The data is denoised by a non-local means algorithm, which blurs an image to reduce image noise.

Step 3—Data Augmentation Step: Due to the scarcity of medical datasets, the dataset is augmented using traditional data augmentation techniques such as rotation and reflection (flipping), which mirrors images horizontally or vertically. So, the dataset's size becomes 48,000 images, divided into 12,000 images for each class. The major reasons for using data augmentation techniques are to (i) maximize the dataset and (ii) overcome the overfitting problem.

The balanced, augmented dataset of 48,000 MRI images is then shuffled and split into training, validation, and test sets with a split ratio of 80:10:10 on a random selection basis for each class.
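The balancing, augmentation, and splitting just described could be sketched as follows with NumPy and scikit-learn. This is a minimal illustration under stated assumptions, not the authors' code: the helper names, the random seed, and the choice of horizontal flips and 90-degree rotations are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)  # seed is an illustrative assumption

def resample_class(images, target=6000):
    """Oversample (copy) or undersample (drop) one class to `target` slices."""
    idx = rng.choice(len(images), size=target, replace=len(images) < target)
    return images[idx]

def augment_class(images):
    """Double a class by adding flipped copies of one half and rotated copies of the other.

    256 x 256 slices are square, so a 90-degree rotation preserves the shape.
    The authors' exact augmentation parameters are not specified here.
    """
    half = len(images) // 2
    flipped = images[:half, :, ::-1]                     # horizontal flip
    rotated = np.rot90(images[half:], k=1, axes=(1, 2))  # 90-degree rotation
    return np.concatenate([images, flipped, rotated])    # 6000 -> 12,000 per class

def split_dataset(images, labels):
    """Shuffle and split 80:10:10, stratified so every class keeps the same ratio."""
    x_train, x_rest, y_train, y_rest = train_test_split(
        images, labels, test_size=0.2, stratify=labels, random_state=42)
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=42)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```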
Table 2 summarizes the resulting training, validation, and test set sizes for 4-way classification (AD vs. CN vs. EMCI vs. LMCI) as well as 2-way classification, i.e., the multi-class and binary classifications.

Table 2 Training, validation, and test set size
Class label | Training set size | Validation set size | Test set size | Total
0 AD | 9600 | 1200 | 1200 | 12,000
1 EMCI | 9600 | 1200 | 1200 | 12,000
2 LMCI | 9600 | 1200 | 1200 | 12,000
3 NC | 9600 | 1200 | 1200 | 12,000
Total | 38,400 | 4800 | 4800 | 48,000

Step 4—Medical Image Classification Step: In this step, the four stages of the AD spectrum, (I) NC, (II) EMCI, (III) LMCI, and (IV) AD, are multi-classified. Besides, separate binary classifications are implemented between each two-pair class. This medical image classification is done via two methods. The first method depends on simple CNN architectures that deal with 2D and 3D structural brain MRI scans based on 2D and 3D convolutions. The CNN architectures are built from scratch. The second method uses transfer learning techniques for medical image classification, such as the VGG19 model, to benefit from the pre-trained weights.

Step 5—Evaluation Step: The two methods and the CNN architectures are evaluated according to nine performance metrics.

Step 6—Application Step: Based on the proposed qualified models, an AD checking web application is proposed. It helps doctors and patients to check AD remotely, determines the Alzheimer's stage of the patient based on the AD spectrum, and advises the patient according to their AD stage. The full pipeline of the proposed framework is shown in Fig. 3.

Preprocessing Techniques

Data Normalization: Data normalization is the process that changes the range of pixel or voxel intensity values. It aims to remove some variations in the data, such as differences in subject pose or image contrast, to simplify the detection of subtle differences. Zero-mean unit-variance normalization, [−1, 1] rescaling, and [0, 1] rescaling are examples of data normalization methods. The last method is applied in the current study. The difference between these normalization methods appears in Fig. 4, which illustrates an original image and its output based on applying the different data normalization methods.

Proposed Classification Methods and Techniques

Feature extraction, feature reduction, and classification are the three essential stages of which traditional machine learning methods are composed. All these stages are combined in a standard CNN. By using a CNN, there is no need to perform the feature extraction process manually. Its initial layers' weights serve as feature extractors, and their values are improved by iterative learning. CNN gives higher performance than other classifiers. It consists of three layers:
(LReLU), calculated by Eq. 5. The difference between the three activation functions is depicted in Fig. 6.

f_{sigmoid}(x) = \frac{1}{1 + \exp(-x)}   (3)

f_{ReLU}(x) = \max(0, x)   (4)

f_{LReLU}(x) = \begin{cases} x & \text{if } x > 0 \\ 0.01x & \text{otherwise} \end{cases}   (5)

For the proposed multi-classifier, the SoftMax function is used [32], which returns the probability of a data point belonging to each class, calculated from Eq. 6.

f(x_i) = \frac{e^{x_i}}{\sum_{j=1}^{K} e^{x_j}} \quad \text{for } i = 1, \dots, K \text{ and } x = [x_1, \dots, x_K]   (6)

Table 3 The tuning applied in the VGG19 model
Model: "sequential"
Layer (type) | Output shape | Param #
vgg19 (Functional) | (None, 3, 3, 512) | 20,024,384
flatten (Flatten) | (None, 4608) | 0
dense (Dense) | (None, 1024) | 4,719,616
dense_1 (Dense) | (None, 512) | 524,800
dense_2 (Dense) | (None, 256) | 131,328
dropout (Dropout) | (None, 256) | 0
dense_3 (Dense) | (None, 128) | 32,896
dropout_1 (Dropout) | (None, 128) | 0
dense_4 (Dense) | (None, 4) | 516
Total params: 25,433,540
Trainable params: 25,433,540
Non-trainable params: 0
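The head listed in Table 3 can be rebuilt in Keras roughly as follows. This is a sketch, not the authors' code: the 112 × 112 × 3 input size (chosen only so that VGG19 yields the 3 × 3 × 512 feature map of Table 3), the ReLU activations, the dropout rate, and the categorical cross-entropy loss are assumptions; the Adam learning rate of 0.0001 is taken from the training description given later.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# VGG19 convolutional base pre-trained on ImageNet; the dense head mirrors Table 3.
base = tf.keras.applications.VGG19(
    include_top=False, weights="imagenet", input_shape=(112, 112, 3))

model = models.Sequential([
    base,                                   # (None, 3, 3, 512)
    layers.Flatten(),                       # (None, 4608)
    layers.Dense(1024, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                    # dropout rate is an assumption
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),  # AD, EMCI, LMCI, NC
])

# All layers are left trainable, matching the zero non-trainable parameters in Table 3.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```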
Metric | Description | Formula
The receiver operating curve (ROC) and Area under the Curve (AUC) | It picks a good cut-off threshold for the model from plotting the True Positive Rate (TPR) against the False Positive Rate (FPR) for different values of the threshold in the range [0, 1] | TPR (sensitivity) = TP / (TP + FN); FPR (1 − specificity) = FP / (FP + TN)
Matthews Correlation Coefficient (MCC) | The higher the correlation between true and predicted values is, the better the model prediction is | MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
Confusion matrix | It is the complete description of the model performance; it gives a matrix as an output, and it forms the basis of other metrics that depend on TP, TN, FP, and FN | (see note below)
Note: To understand the definitions of TP, TN, FP, and FN, assume the proposed binary model classifies between AD and NC; then: TP is the case that p is AD and y is AD; TN is the case that p is NC and y is NC; FP is the case that p is AD and y is NC; FN is the case that p is NC and y is AD.
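The tabulated formulas translate directly into code; the following is a small illustrative sketch (the function names are not from the paper, and the example counts in the comment are hypothetical):

```python
import math

def sensitivity(tp, fn):
    """True Positive Rate (recall): TP / (TP + FN)."""
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    """FPR = 1 - specificity: FP / (FP + TN)."""
    return fp / (fp + tn)

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Example for a hypothetical AD-vs-NC run (illustrative numbers only):
# sensitivity(tp=1128, fn=72), false_positive_rate(fp=44, tn=1156), mcc(1128, 1156, 44, 72)
```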
for MRI images. The architecture of the 2D-M2IC model is shown in Fig. 7. The 3D-M2IC model has the same structure as the 2D-M2IC model, but it uses 3D convolutional layers. It comprises three convolution layers, three max-pooling layers, and two FC layers, followed by a softmax output layer. All 3D convolution kernels are sized 3 × 3 × 3 with a stride value of 1 in all three dimensions. All pooling kernels are sized 2 × 2 × 2.
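Based only on the description above (three 3 × 3 × 3 convolutions with stride 1, three 2 × 2 × 2 max-pooling layers, two fully connected layers, and a softmax output), a 3D-M2IC-like network might be sketched in Keras as follows. The filter counts, FC widths, and the single-channel 50 × 30 × 20 input (the voxel size reported later for the 3D conversion) are assumptions, so the parameter count will not match the published 1,654,468.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_3d_m2ic_like(input_shape=(50, 30, 20, 1), n_classes=4):
    """Sketch of a 3D-M2IC-like CNN: 3 x (Conv3D 3x3x3 + MaxPool3D 2x2x2), 2 FC, softmax."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv3D(8, kernel_size=3, strides=1, padding="same", activation="relu"),
        layers.MaxPooling3D(pool_size=2),
        layers.Conv3D(16, kernel_size=3, strides=1, padding="same", activation="relu"),
        layers.MaxPooling3D(pool_size=2),
        layers.Conv3D(32, kernel_size=3, strides=1, padding="same", activation="relu"),
        layers.MaxPooling3D(pool_size=2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),  # FC widths are assumptions
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
```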
Approach | Subjects | Dataset | Modality | Type of classification | Accuracy
Payan et al. [19] | 755 in each class (AD, MCI, and HC) | ADNI | MRI | Binary, multi | AD vs. MCI vs. HC: 89.47%; AD vs. HC: 95.39%; AD vs. MCI: 86.84%; HC vs. MCI: 92.11%
Sarraf et al. [21] | 302 subjects (211 AD, 91 NC) | ADNI | MRI, fMRI | Binary | AD vs. HC: 98.84%
Hosseini-Asl et al. [22] | 210 subjects (70 AD, 70 NC, 70 MCI) | CAD-dementia | MRI | Binary, multi | AD vs. MCI vs. NC: 89.1%; AD + MCI/NC: 90.3%; AD/NC: 97.6%; AD/MCI: 95%; MCI/NC: 90.8%
Korolev et al. [23] | 50 AD, 43 LMCI, 77 EMCI, 61 NC | ADNI | MRI | Binary | AD vs. NC: 80%; AD vs. EMCI: 63%; AD vs. LMCI: 59%; LMCI vs. NC: 61%; LMCI vs. EMCI: 52%; EMCI vs. NC: 56%
Table 5 (continued)
Approach | Subjects | Dataset | Modality | Type of classification | Accuracy
Wang et al. [24] | 98 AD, 98 NC | Local hospitals, OASIS | MRI | Binary | AD/NC: 97.65%
Khvostikov et al. [25] | 53 AD, 228 MCI, 250 NC | ADNI | sMRI and DTI | | AD/MCI/NC: 68.9%; AD/NC: 93.3%; AD/MCI: 86.7%; MCI/NC: 73.3%
Sahumbaiev et al. [26] | 530 subjects (185 AD, 185 MCI, 160 HC) | ADNI | MRI | Multi | AD/MCI/NC: 88.31%
Spasov et al. [27] | 192 AD, 184 NC | ADNI | MRI | Binary | AD/NC: 99%
Yan Wang et al. [30] | 35 AD, 30 aMCI, 40 NC | Beijing Xuanwu Hospital | DTI, fMRI | Multi | AD/aMCI/NC: 92.06%
Khagi et al. [31] | 28 AD, 28 NC | OASIS | MRI | Binary | AD/NC: 98.51%
Jain et al. [32] | 150 subjects (50 AD, 50 NC, 50 MCI) | ADNI | sMRI | Multi, binary | AD/MCI/NC: 95.73%; AD vs. CN: 99.14%; AD vs. MCI: 99.30%; MCI vs. CN: 99.22%
Song et al. [34] | 12 AD, 12 NC, 12 EMCI, 12 LMCI | ADNI | DTI | Multi | AD/EMCI/LMCI/NC: 89%
Ge et al. [33] | 337 subjects (198 AD, 139 NC) | ADNI | MRI | Binary | AD/NC: 98.80%
Harshit et al. [38] | 120 subjects, 30 for each class (AD, EMCI, LMCI, NC) | ADNI | 4D fMRI | Multi | AD/EMCI/LMCI/NC: 93%
Silvia et al. [39] | 407 HC, 418 AD, 280 c-MCI, 533 stable MCI (s-MCI) | ADNI | 3D MRI | Binary | AD vs. HC: 99.2%; c-MCI vs. HC: 87.1%; s-MCI vs. HC: 76.1%; AD vs. c-MCI: 75.4%; AD vs. s-MCI: 85.9%; c-MCI vs. s-MCI: 75.1%
Dan et al. [40] | 787 subjects (AD, MCIc, MCInc, HC classes) | ADNI | 3D MRI | Binary | AD vs. HC: 84%; MCIc vs. HC: 79%; MCIc vs. MCInc: 62%
Juan Ruiz et al. [41] | 600 brain MRI images | ADNI | 3D MRI | Multi | AD/EMCI/LMCI/NC: 66.67%
Proposed 2D-M2IC model | 300 subjects (75 AD, 75 EMCI, 75 LMCI, 75 NC); total size = 48,000 MRI images | ADNI | 2D MRI | Multi, binary | AD vs. NC: 97.11%; AD vs. EMCI: 96.32%; AD vs. LMCI: 96.62%; LMCI vs. NC: 98.10%; LMCI vs. EMCI: 95.23%; EMCI vs. NC: 98.39%; AD/EMCI/LMCI/NC: 93.60%
Proposed 3D-M2IC model | | | 3D MRI | Multi, binary | AD vs. NC: 97.36%; AD vs. EMCI: 97.07%; AD vs. LMCI: 97.16%; LMCI vs. NC: 98.05%; LMCI vs. EMCI: 96.03%; EMCI vs. NC: 98.47%; AD/EMCI/LMCI/NC: 95.17%
Proposed fine-tuned VGG19 model | | | 2D MRI | Multi | AD/EMCI/LMCI/NC: 97%
The 2D MRI medical images are processed to convert them to a 3D format with a size of (50 × 30 × 20) voxels to be more suitable for this model, as shown in Fig. 8. The number of trainable parameters is 875,588 and 1,654,468 for 2D-M2IC and 3D-M2IC, respectively. The number of non-trainable parameters is zero for the two architectures. The Adam optimization algorithm is also used in the proposed models to improve the weights, with a learning rate of 0.0001, to optimize the loss function.

The second method uses the transfer learning principle for medical image classification. Transfer learning is a deep learning procedure whereby a neural network model is first trained on a problem similar to the issue being solved. Transfer learning's key benefits are that (i) it benefits from the pre-trained weights resulting from the training of millions of images from the ImageNet database, (ii) it decreases the training time for a learning model, and (iii) it reduces generalization errors. Therefore, we use the VGG-19 pre-trained model for MRI multi-class classification. VGG-19 is a convolutional neural network that has 19 layers in its architecture. A basic fine-tuning is applied to the final layer of VGG19 to be optimal for the proposed medical image classification problem. The number of trainable parameters for the fine-tuned VGG19 is 25,433,540, and the number of non-trainable parameters is zero. The tuning applied in the VGG19 model is shown in Table 3.

Experimental Results and Model Evaluation

The proposed models take into consideration different conditions. The experimental results are analyzed in terms of nine performance metrics: accuracy, loss, confusion matrix, F1 score, recall, precision, the receiver operating characteristic curve (ROC), True Positive Rate (Sensitivity), Area under the Curve (AUC), and Matthews Correlation Coefficient. The summarization of the applied performance metrics is shown in Table 4.
Methods and Model Evaluation

For the multi-class and binary medical image classification methods applied, we propose simple CNN architecture models called 2D-M2IC, 3D-M2IC, 2D-BMIC, 3D-BMIC, and the fine-tuned VGG19 model. According to the accuracy metric, these models will be evaluated by comparing their performance to other state-of-the-art models, as shown in Table 5.
Table 5 shows that for multi-class medical image classification of the AD stages (AD, EMCI, LMCI, NC), the proposed fine-tuned VGG19 achieved the highest accuracy of 97%. The proposed 3D-M2IC achieved the second-highest accuracy of 95.17%, and the proposed 2D-M2IC achieved the third-highest accuracy of 93.6%. Harshit et al. [38] obtained the fourth-highest accuracy of 93%, and Juan Ruiz et al. [41] obtained the lowest accuracy of 66.7%. Therefore, from the empirical results, it is proved that the proposed architectures are suitable, simple structures that reduce computational complexity, memory requirements, and overfitting, and provide manageable time. They also achieve very promising accuracy for binary and multi-class classification.

Figure 9 shows the comparison of the proposed models (2D-M2IC, 3D-M2IC, and the fine-tuned VGG19 model) with other state-of-the-art models for multi-class medical image classification.

The comparison among the proposed models (2D-M2IC, 3D-M2IC, 2D-BMIC, 3D-BMIC, and the fine-tuned VGG19 model) with one another for multi-class and binary medical image classifications for the four stages of Alzheimer's disease is shown in Fig. 10. It shows three multi-class medical image
Table 7 The confusion matrix and normalized confusion matrix for the proposed models (2D-M2IC model, 3D-M2IC model); predicted labels in the order AD, EMCI, LMCI, NC
Confusion matrix, true label AD: 2D-M2IC: 1128, 48, 1, 23 | 3D-M2IC: 1164, 24, 0, 12
Confusion matrix, true label NC: 2D-M2IC: 44, 16, 0, 1140 | 3D-M2IC: 12, 12, 0, 1176
Normalized confusion matrix, true label EMCI: 2D-M2IC: 0.02, 0.97, 0.01, 0.01 | 3D-M2IC: 0.02, 0.96, 0.01, 0.01
Normalized confusion matrix, true label LMCI: 2D-M2IC: 0.02, 0.08, 0.90, 0.01 | 3D-M2IC: 0.02, 0.02, 0.95, 0.01
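Row-normalized matrices such as those in Table 7 are typically obtained by dividing each row of the raw confusion matrix by its sum. A hedged sketch using scikit-learn (which the authors do not necessarily use) is shown below:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

CLASS_NAMES = ["AD", "EMCI", "LMCI", "NC"]

def normalized_confusion(y_true, y_pred):
    """Row-normalize the confusion matrix so each true-label row sums to 1."""
    cm = confusion_matrix(y_true, y_pred, labels=range(len(CLASS_NAMES)))
    return cm / cm.sum(axis=1, keepdims=True)
```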
24. Wang SH, Phillips P, Sui Y, Liu B, Yang M, Cheng H. Classification of Alzheimer's disease based on an eight-layer convolutional neural network with leaky rectified linear unit and max pooling. J Med Syst. 2018;42(5):85.
25. Khvostikov A, Aderghal K, Krylov A. 3D Inception-based CNN with sMRI and MD-DTI data fusion for Alzheimer's disease diagnostics. 2018.
26. Sahumbaiev I, Popov A, Ram J, Górriz JM, Ortiz A. 3D-CNN HadNet classification of MRI for Alzheimer's disease diagnosis. 2018;3–6.
27. Spasov SE, et al. A multi-modal convolutional neural network framework for the prediction of Alzheimer's disease. 2018;1271–1274.
28. Kahramanli H. A modified cuckoo optimization algorithm for engineering optimization. Int J Futur Comput Commun. 2012;1(2):199.
29. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. 3rd Int Conf Learn Represent ICLR 2015 - Conf Track Proc. 2015;1–14.
30. Wang Y, et al. A novel multimodal MRI analysis for Alzheimer's disease based on convolutional neural network. 2018 40th Annu Int Conf IEEE Eng Med Biol Soc. 2018;754–757.
31. Khagi B, Lee B. CNN models performance analysis on MRI images of OASIS dataset for the distinction between healthy and Alzheimer's patient. 2019 Int Conf Electron Information Commun. 2019;1–4.
32. Jain R, Jain N, Aggarwal A, Hemanth DJ. Convolutional neural network-based Alzheimer's disease classification from magnetic resonance brain images. Cogn Syst Res. 2019;57:147–59.
33. Ge C, Qu Q. Multiscale deep convolutional networks for characterization and detection of Alzheimer's disease using MR images. 2019 IEEE Int Conf Image Process. 2019;789–793.
34. Song T, et al. Graph convolutional neural networks for Alzheimer's disease. 2019 IEEE 16th Int Symp Biomed Imaging (ISBI 2019). 2019;414–417.
35. Liu L, Zhao S, Chen H, Wang A. A new machine learning method for identifying Alzheimer's disease. Simul Model Pract Theory. 2020;99:102023.
36. Liu M, et al. A multi-model deep convolutional neural network for automatic hippocampus segmentation and classification in Alzheimer's disease. NeuroImage. 2020;208.
37. Impedovo D, Pirlo G, Vessio G, Angelillo MT. A handwriting-based protocol for assessing neurodegenerative dementia. Cognit Comput. 2019;11(4):576–86.
38. Parmar H, Nutter B, Long R, Antani S, Mitra S. Spatiotemporal feature extraction and classification of Alzheimer's disease using deep learning 3D-CNN for fMRI data. J Med Imaging. 2020;7(05):1–14.
39. Basaia S, et al. Automated classification of Alzheimer's disease and mild cognitive impairment using a single MRI and deep neural networks. NeuroImage Clin. 2019;21:101645.
40. Pan D, Zeng A, Jia L, Huang Y, Frizzell T, Song X. Early detection of Alzheimer's disease using magnetic resonance imaging: a novel approach combining convolutional neural networks and ensemble learning. Front Neurosci. 2020;14:1–19.
41. Vassanelli S, Kaiser MS, Eds NZ, Goebel R. 3D DenseNet ensemble in the 4-way classification of Alzheimer's disease. 2020.
42. Mahmud M, Kaiser MS, McGinnity TM, Hussain A. Deep learning in mining biological data. Cognit Comput. 2021;13(1):1–33.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.