Single Volume Image Generator and Deep Learning-Based ASD Classification
Abstract—Autism spectrum disorder (ASD) is an intricate neuropsychiatric brain disorder characterized by social deficits and repetitive behaviors. Deep learning approaches have been applied to the clinical or behavioral identification of ASD, but most earlier models are inadequate in their capacity to exploit the richness of the data. On the other hand, classification techniques often rely solely on region-based summaries and/or functional connectivity analysis of functional magnetic resonance imaging (fMRI). Besides, biomedical data modeling to analyze big data related to ASD is still perplexing due to its complexity and heterogeneity. Single volume images have not previously been investigated for classification purposes. Considering these challenges, in this work we first design an image generator that generates single volume brain images from the whole-brain image by considering the voxel time points of each subject separately. Then, to classify ASD and typical control participants, we evaluate four deep learning approaches with their corresponding ensemble classifiers, comprising one amended Convolutional Neural Network (CNN). Finally, to examine the data variability, we apply the proposed CNN classifier with leave-one-site-out 5-fold cross-validation across the sites and validate our findings by comparison with literature reports. We showcase our approach on a large-scale multi-site brain imaging dataset (ABIDE) under four preprocessing pipelines, where it outperforms the state-of-the-art methods and is therefore robust and consistent.

Index Terms—Biomedical data modeling, image generator, convolutional neural network (CNN), autism spectrum disorder (ASD), fMRI, ABIDE.

Manuscript received October 18, 2019; revised February 25, 2020 and May 17, 2020; accepted May 22, 2020. Date of publication May 29, 2020; date of current version November 5, 2020. This work was supported in part by the National Key Research and Development Program of China under Grant 2017YFC0108000, in part by the National Natural Science Foundation of China under Grant 81771940, and in part by the Capital's Funds for Health Improvement and Research under Grant 2018-4-6031. (Corresponding author: Yuan Zhang.)

Md Rishad Ahmed and Yuan Zhang are with the Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China (e-mail: [email protected]; [email protected]).

Yi Liu is with the Department of Respiratory Medicine, Civil Aviation General Hospital, Beijing 100123, China (e-mail: [email protected]).

Hongen Liao is with the Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China (e-mail: [email protected]).

Digital Object Identifier 10.1109/JBHI.2020.2998603

I. INTRODUCTION

AUTISM Spectrum Disorder (ASD) is a disturbance of the structure and functioning of the brain that causes abnormalities such as communication difficulties, social deficits, repetitive behaviors and cognitive delays, as well as nonsocial features such as restricted and stereotyped behaviors, all of which have a significant impact on adaptive functioning [1]–[3]. As reported by the Centers for Disease Control and Prevention in the United States, the estimated ASD prevalence is 1% or higher (1 subject in 59), and it has increased dramatically over the last decades [4]. Therefore, finding a precise biological marker to explain the underlying roots of ASD pathologies is indispensable for applying effective treatment in ASD diagnosis.

One of the significant challenges in brain disorder research is to replicate findings on larger datasets that can reflect the heterogeneity of clinical populations. Functional magnetic resonance imaging (fMRI) has been extensively used to perceive the functional abnormalities of ASD patients that characterize the affected neural pathways [5], [6]. Functional connectivity analysis has produced deep insights into the abnormal brain connectomes of ASD versus typical control (TC) subjects, at either the individual or the group level. Most machine learning techniques for studying functional connectivity data rely on hand-engineered feature extraction, such as correlations between regions of interest (ROIs) and topological measurements of modularity, clustering-based classification [7], or segregation and integration [8]. On the other hand, brain ROIs provide the structural substrates for measuring connectivities within the individual brain and the functional activation patterns of the brain. It is common to analyze ASD individuals based on expert-defined brain parcellations or data-driven strategies such as dictionary learning, clustering, and independent component analysis (ICA) [9], [10]. Both the expert-defined and data-driven ROI strategies raise several concerns, such as standardization, arbitrary decisions, and the selection of regions exhibiting informative signal [11]. The data-driven ROI strategy can be biased toward selecting regions that show considerable variability across subjects, which influences the results [12]. Hence, an alternative tool or strategy is essential to overcome these concerns and to generate volumetric brain images that show the activated regions. A single volume image generator not only generates whole-brain volume images but also avoids the arbitrariness of a chosen brain region scheme, ensuring coverage of the entire brain of each subject.

Machine learning (ML) methods such as support vector machines (SVM) have been widely adopted to classify and exploit individual variation in the functional connectivity of ASD [13], [14].
Recently, deep learning models with neuroimaging modalities have been effective in identifying brain disorders such as ASD and Alzheimer's disease (AD) [15], [16]. With the rapid advancement of deep learning approaches for brain disorder diagnosis, the convolutional neural network (CNN) has become the most popular method for ASD classification [17]. However, most deep learning approaches have focused on functional connectivity or ROI analysis, time-series data analysis, or the temporal/spatial information of fMRI [18], [19]. They also lack model transparency, i.e., how the model secures its interpretability for clinical applications, as most deep neural networks are not easily interpretable. The choice of the classification algorithm is another open issue in the connectome-based analysis of ASD. A few years ago, some deep learning-based ASD classification models focused on simple linear predictive techniques using a vectorized connectivity correlation matrix [20], [21]. Additionally, handling ASD big data with deep learning techniques remains challenging due to the lack of suitable data mining and investigation methods for the heterogeneous, complex, and dynamic data used to diagnose this brain disorder. Due to the heterogeneity, etiology, and severity of ASD, a more tailored methodology is required to forecast and analyze the behavior and functionality of each subject. Motivated by the above challenges, we focus here on designing a new image generator that can generate single volume images from whole-brain fMRI, and we propose two novel classification architectures for classifying ASD and typical controls. The single volume brain image is the visualization of the brain regions along a specific direction and slice number, allowing the real-time separation of the voxel periods of the raw fMRI data. ROIs, in contrast, only define the brain regions of interest depending on a pre-selected slice number.

The main contributions of this study are as follows:
• To the best of our knowledge, for the first time, we design a single volume image generator that can produce 2D three-channel images from a 4D functional magnetic resonance NIfTI image. The main advantage of the single volume image generator is that the generated 2D images represent the activated brain regions for each voxel time point of the patients. It also visualizes the brain regions along the axial, sagittal, and coronal axes in the form of glass brain and stat_map images.
• We incorporate four deep learning approaches with our improved CNN model to classify ASD and typical controls using the generated images as input. The advantages of our model include leveraging the voxel-2D structure of rs-fMRI without requiring too many model parameters and easily interpreting the complex, heterogeneous data, so that it can be used in combination with other tools to support clinicians in diagnosing ASD with more precision.
• We propose a novel deep ensemble learning framework based on the improved CNN and the benchmark approaches to classify ASD using features extracted by VGG16 from the glass brain and stat_map images. The proposed ensemble model can integrate the two different types of generated images simultaneously by utilizing one ensemble learning classifier for each. Thus, it overcomes the limitations of traditional machine learning models for ASD classification, which often rely on ROI definitions.
• Finally, to evaluate the classifier performance and examine the data variability, we apply the proposed CNN classifier with leave-one-site-out 5-fold cross-validation across the sites and validate our findings by comparison with literature reports.

The proposed approaches, with a combined loss function and the generated single volume images, establish a novel benchmark model for ASD detection on the ABIDE (Autism Brain Imaging Data Exchange) database.

The remainder of the paper is organized as follows: Section II discusses the related works; Section III covers the broad methodological explanation, including the single volume image generator and the proposed deep learning models. In Section IV, the experiments and discussion of this method, including the dataset, are presented, and finally the conclusion is drawn in Section V.

II. RELATED WORKS

The amalgamation of machine learning methods and brain imaging data permits the classification of ASD, which can assuage significant suffering and safeguard the patient's daily well-being. Studies on ASD classification using different imaging modalities and their analysis approaches, specifically deep learning techniques, are discussed in this section.

The study of the functional connectivity of brain networks is a sturdy tool for understanding the neurological bases of a diversity of brain disorders such as autism [22]. In [23], Abraham et al. employed resting-state fMRI to extract functionally-defined brain areas and a support vector classifier (SVC) to compare connectivity between ASD and typical control. They considered 871 subjects from the ABIDE dataset for connectome-based prediction and obtained 67% accuracy. Guo et al. considered multiple stacked auto-encoders (SAE) as a feature selection method from whole-brain FCP obtained by Pearson correlation of ROIs [24]. Using only the UM (University of Michigan) data site, they obtained a classification accuracy of 86.36%. In [25], using only the CCS (connectome computation system) pipeline without global signal regression and LSTMs (long short-term memory) for classifying individuals with ASD, Dvornek et al. achieved 68.5% accuracy.

On the other hand, time series for several sets of regions of interest (ROIs) also have the potential to classify and reveal the brain network connections of ASD. ROIs are usually computed using a predefined atlas or a parcellation scheme based on anatomical features, functional activations, and connectivity patterns of the brain [10], [26]. Dvornek et al. incorporated phenotypic data with rs-fMRI into a single LSTM-based model for classifying ASD and achieved an accuracy of 70.1% [27]. They employed CCS pipeline data without global signal regression and a cross-validation framework. With the development of deep learning models, specifically convolutional neural networks (CNNs) have found abundant applications on 2D and 3D images, where they exploit image intensities and the pixel grid to solve image segmentation and classification problems [28], [29]. Zhao et al. also evaluated an effective 3D CNN to bridge the gap between
TABLE I
A BRIEF SUMMARIZATION OF THE PREVIOUS MACHINE/DEEP LEARNING TECHNIQUES IN ASD CLASSIFICATION

TABLE II
OVERVIEW OF THE BASIC PARAMETERS AND STEPS OF FOUR DIFFERENT FUNCTIONAL PREPROCESSING STRATAGEMS
TABLE III
PARAMETERS WEIGHED WHILE PLOTTING THE VOLUMETRIC IMAGES USING THE PROPOSED IMAGE GENERATOR

Fig. 1. Graphical representation of the proposed single volume image generator.

B. Single Volume Image Generator

As we know, a 3D fMRI is a voxel image containing only one brain volume, whereas a 4D fMRI is a series of brain volumes concatenated over repeated time, with the 4th dimension representing the number of brain volumes. Considering the number of brain volumes, it can give the images of the activated brain regions during spontaneous fMRI acquisition, which is called a single volume image [41], [42]. The functional activation maps reflect the properties of repeated fMRI scans, for example in the assessment of the relations between symptom intervention and brain activation patterns [43]. Therefore, a tool is needed to generate the images between the voxel periods for scrutinizing the activated brain regions. Rather than the traditional analysis of functional connectivity or brain ROIs, in this work we design an image generator to produce single volume brain images from the preprocessed whole-brain functional image. The single volume image depicts the 2D visualization of the brain activity by considering each voxel time point. Fig. 1 represents the flow diagram for generating the single volume images under a predefined displaying mode. The working principle of the single volume image generator has several steps: firstly, the generator checks the shape of each input NIfTI file, which is expected to be a 4D fMRI image. Secondly, the generator main body checks the conditions for drawing the output depending on the enumeration and iteration counter. The enumeration and iteration segment counts the number of voxel time points acquired during imaging. Then we set the corresponding parameters to demonstrate the real-time brain activations. Finally, the generator plots and saves the brain images as two types of volumetric images for each ASD and TC individual by counting the whole voxel time points for each subject. The corresponding parameters used to display and save the single volume images from the 4D multiple-brain-volume image are shown in Table III.

In Table III, the display mode selects the specific direction of the cuts, with x as the sagittal, y as the coronal, and z as the axial view of the brain. The threshold is either a numeric value or none; none means the plotted images are not thresholded, while an absolute numeric value means thresholded images are plotted, with values below the threshold rendered transparent. We choose two different threshold values, 3 and 5, in our experiment for the glass brain and stat_map functions, respectively. The parameter cut_coords specifies the number of slices used to visualize the brain images along the specific direction. Colorbar shows a vertical color bar to the right of the current axes of the plotted images, and is set to "false" in our experiment. The detailed description of the two types of plotted volumetric images and the reasons for choosing them are explained below.

Glass Brain and Stat_Map Images: All neuroimaging is part of brain mapping. The glass brain is a 3D brain visualization that displays real-time source activity and connectivity between brain areas [44]. On the other hand, stat_map is short for statistical map, an image that plots cuts of an ROI/mask image. We prefer the glass brain and stat_map displaying modes for the single volume image because of their power to project a high-resolution 3D model of an individual's brain, skull, and scalp tissue. Another fundamental discrepancy between general brain mapping and our generator images is the map projection. General brain mapping considers specific brain regions (ROIs) or a time series scheme to see the brain connectome. Our proposed generator, however, estimates the number of voxel time points for each 4D image to plot the real-time activated brain regions. The plotted images were in MNI space for all the considered pipelines so that the image functions work accurately.

C. Proposed CNN-Based Classification Model

Fig. 2 depicts the overall deep learning architecture, specifically based on CNN, for the classification of ASD. The convolutional neural network, as a part of the neural network family, is widespread
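To make the single volume image generator of Section III-B concrete, the following is a minimal sketch of the workflow, assuming an implementation built on the nibabel and nilearn libraries; the file paths, naming scheme, and choice of a single axial display mode are illustrative assumptions, while the thresholds of 3 and 5 and the disabled colorbar follow Table III.

```python
import os
import nibabel as nib
from nilearn import image, plotting

def generate_single_volume_images(fmri_path, out_dir, glass_thr=3, stat_thr=5):
    """Save one glass brain and one stat_map image per voxel time point of a 4D fMRI."""
    os.makedirs(out_dir, exist_ok=True)
    fmri_4d = nib.load(fmri_path)              # preprocessed 4D NIfTI, assumed in MNI space
    n_volumes = fmri_4d.shape[3]               # voxel time points = 4th dimension
    for t in range(n_volumes):
        vol_3d = image.index_img(fmri_4d, t)   # extract the t-th brain volume
        plotting.plot_glass_brain(             # display_mode may also be 'x' or 'y'
            vol_3d, display_mode='z', threshold=glass_thr, colorbar=False,
            output_file=os.path.join(out_dir, f'glass_t{t:03d}.png'))
        plotting.plot_stat_map(
            vol_3d, display_mode='z', cut_coords=1, threshold=stat_thr,
            colorbar=False, output_file=os.path.join(out_dir, f'statmap_t{t:03d}.png'))

# Hypothetical usage for one subject:
# generate_single_volume_images('sub-0001_func_preproc.nii.gz', 'out/sub-0001')
```

When output_file is given, nilearn saves and closes each figure, which keeps memory bounded while looping over hundreds of volumes per subject.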
TABLE V
NUMBER OF GENERATED IMAGES FOR EACH SPECIFIC DATA SITE
INCLUDING SEPARATE TRAINING AND TESTING SET
TABLE VI
CORRESPONDING CLASSIFICATION PERFORMANCES USING PROPOSED CNN MODEL
each site individually according to their splitting procedure. In Table V, all the images have the same shape but different numbers of time points for the different data sites. For instance, the original fMRI has a shape like (61, 73, 61, voxel time points); thus, the fMRI time points are defined as the fourth dimension of the original fMRI shape. In this work, we consider the number of voxel time points to generate the 2D brain images.

On the other hand, during unit-site classification, we performed a leave-one-site-out 5-fold cross-validation approach to evaluate the performance of the proposed CNN classifier across sites. This method helps to extract more information from the images while leaving enough test samples to measure the capability of the model in classifying unobserved images. Additionally, some of the sites have a small number of samples; for example, CMU (Carnegie Mellon University) contains only 27 samples. Therefore, performing leave-site-out 5-fold cross-validation increases the variance of the cross-validation estimation.
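One plausible reading of this protocol, holding out one acquisition site at a time, training on the remaining sites, and scoring the held-out site, is sketched below; the stand-in classifier, feature matrix, and site labels are illustrative assumptions rather than the paper's actual CNN pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression      # stand-in for the CNN classifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def leave_one_site_out_cv(X, y, sites):
    """Hold out each site in turn, train on the rest, and report P/R/F1/A per site."""
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sites):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        y_pred = clf.predict(X[test_idx])
        scores.append([precision_score(y[test_idx], y_pred),
                       recall_score(y[test_idx], y_pred),
                       f1_score(y[test_idx], y_pred),
                       accuracy_score(y[test_idx], y_pred)])
    scores = np.array(scores)
    return scores.mean(axis=0), scores.std(axis=0)   # SD reflects inter-site variability

# Toy usage with synthetic features, binary labels, and three hypothetical sites:
X = np.random.randn(60, 10)
y = np.random.randint(0, 2, 60)
sites = np.repeat(['NYU', 'UM', 'CMU'], 20)
mean_scores, sd_scores = leave_one_site_out_cv(X, y, sites)
```

A 5-fold split (e.g., StratifiedKFold) can additionally be applied inside each training portion to match the leave-one-site-out 5-fold procedure described above.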
D. CNN-Based Classifier Performance Evaluation on ABIDE Dataset

Deep learning approaches and ABIDE data have previously been studied to identify and analyze ASD, reporting different measurement metrics. In this work, we evaluate four performance measurement metrics, P (precision), R (recall), F (F1-score), and A (accuracy), to validate the algorithm performance. Precision is defined as the ratio of correctly labeled ASD positives to all samples labeled ASD positive, and recall is the ratio of correctly labeled ASD positives to all subjects who are actually ASD positive. The F1-score considers both the precision and recall measurements. Accuracy is defined as the ratio of correctly labeled individuals to the whole number of subjects. All four metrics are weighed to validate the classification ability of our model for ASD and TC classification. Table VI shows the comparison between the different performance measurements for the benchmark and proposed CNN methods using the two categories of images from the four pipelines. The maximum performance is marked in bold in each section. The highest average accuracy obtained by our proposed method is 83% for CPAC glass brain images, the highest average precision is 80.5% for CCS stat_map images, the highest average F1-score is 80.9% for DPARSF glass brain images, and the highest average specificity is 81.2% for NIAK glass brain images. The analysis of this performance comparison shows that the improvement of the amended CNN over the other methods is statistically noteworthy.

E. Deep Ensemble Learning Classifiers Performance Analysis

We performed four ensemble classifier techniques in the experiment by combining the benchmark approaches with the improved CNN. We trained each classifier separately for the classification of the two different image features and report the corresponding outputs. All the ensemble classifiers were trained using the ADAM optimizer and a sigmoid function for classification. The final output was taken based on equation (6) for the binary classification. Table VII represents the classification performance of the ensemble learning classifiers. The highest accuracy and other relevant measurements are marked in bold in the table for each pipeline's generated images. From the analysis of the table results, the third ensemble classifier performs better than the other ensemble classifiers on the CCS and CPAC pipeline data, with the highest accuracy of 87% in both cases. For the other two pipelines, DPARSF and NIAK, the second and third classifiers perform closely on the DPARSF dataset, and the first and second classifiers perform closely on the NIAK dataset, with an accuracy of 86% in both cases. The fourth ensemble classifier performs about average in all four cases.

Ensemble learning has already been introduced for ASD classification in several literature reports, but not frequently yet. In [35], Khosla et al. used stochastic parcellation and seven atlases of the ABIDE dataset to classify ASD based on a 3D CNN approach. The authors also performed two ensemble learning strategies, a multi-atlas ensemble (MA-Ensemble) model and a stochastic parcellation ensemble (SP-Ensemble) strategy of a 3D CNN method. The average classification predictions of the MA-Ensemble model were computed using each of the seven atlases available in PCP for ROI time series extraction, with the highest accuracy of 71.7%. Using 30 stochastic parcellations, the SP-Ensemble model obtained a maximum efficiency of 72.3%. Additionally, in [36], Wang et al. first divided the data according to the subjects' age and sex and found their functional connectivity patterns for ASD classification. They proposed a sparse multi-view multitask ensemble (Sparse-MVMT-E) classification method for individualized ASD diagnosis. They considered two data sites from ABIDE,
TABLE VII
CLASSIFICATION PERFORMANCES USING PROPOSED DEEP ENSEMBLE LEARNING CLASSIFIERS
TABLE VIII
UNIT SITE CLASSIFICATION USING LEAVE-SITE-OUT 5-FOLD CROSS-VALIDATION, COMPARED WITH OTHER METHODS
namely NYU and UM-1, and secured the highest accuracies of 72.6% and 71.4%, respectively. Compared with the results of [35] and [36], our model is superior by a mean accuracy difference of around 14%, using the different types of image features rather than FC or atlas data. This comparison of results suggests that our proposed ensemble model classifies ASD patients more precisely.
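For orientation, the general pattern behind the two-stream ensemble described in this section, VGG16 features extracted from the glass brain and stat_map images, one sigmoid classifier per image type, and a fused binary decision, can be sketched as follows; since equation (6) and the improved CNN are not reproduced on this page, the simple averaging rule, the dense head, and the image size are assumptions rather than the paper's exact design.

```python
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# Frozen VGG16 backbone used purely as a feature extractor for 224x224 RGB images.
backbone = VGG16(weights='imagenet', include_top=False, pooling='avg')

def make_head():
    """Small dense classifier with a sigmoid output; one head per image type."""
    head = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(512,)),
        tf.keras.layers.Dense(1, activation='sigmoid')])
    head.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return head

glass_head, stat_head = make_head(), make_head()
# Each head would be fitted on the VGG16 features of its own training images, e.g.:
# glass_head.fit(extract_features(train_glass_imgs), train_labels, epochs=20)

def extract_features(images):
    return backbone.predict(preprocess_input(tf.cast(images, tf.float32)), verbose=0)

def ensemble_predict(glass_imgs, stat_imgs):
    """Average the two sigmoid outputs as a stand-in fusion rule for the final label."""
    p_glass = glass_head.predict(extract_features(glass_imgs), verbose=0)
    p_stat = stat_head.predict(extract_features(stat_imgs), verbose=0)
    return ((p_glass + p_stat) / 2.0 > 0.5).astype(int)   # assumed coding: 1 = ASD, 0 = TC
```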
F. Unit Site Classification Performances Analysis

As ABIDE is a consortium dataset covering ASD subjects from multiple renowned institutions around the world, we performed unit-site classification to pattern out the sites' variability. Leave-one-site-out 5-fold cross-validation has been employed to evaluate the CNN classifier performance across the sites. This process first excluded the data of one site from the training process and used that data as the test set to evaluate the model; it thereby tested the applicability of the model to a new, different site. Leave-one-site cross-validation always estimates the entire 5-fold cross-validation using a single split of the data folds. During the experiments, we obtain the confidence intervals through the standard deviation (SD) of the mean accuracy. Table VIII presents the performance comparison using the proposed CNN model, including the benchmark approaches and the standard deviation (SD) of the mean accuracy for each model. In our experiments, the highest accuracy for glass brain images is up to 88%, achieved for several data sites, and for stat_map images the accuracy is up to 87%, also for several data sites, as shown in the table. Heinsfeld et al. also investigated unit-site classification, where they achieved the highest accuracy of 68%, for both the Caltech and MaxMun data sites, by using patterns of
functional connectivity [37]. On the other hand, Eslami et al. performed 5-fold cross-validation on each site separately using ASD-DiagNet [38]. They obtained a maximum accuracy of 82% for the OHSU data site without data augmentation. In both works [37] and [38], CC-200 functional parcellation brain atlas data was utilized for intra-site evaluation. The accuracy closest to our model was achieved by [38] in only one data site, while the other data sites perform far lower even with data augmentation. The mean accuracy difference between our work and the comparative works is approximately 20%. Therefore, based on the results, our proposed method, along with the benchmark approaches, obtains the highest accuracy in most data sites and outperforms the other methods on average for classifying ASD. Furthermore, the standard deviation for each data site is lower, which is an indication that the accuracies are closer to the mean accuracy. It can also be concluded from the comparison with other findings that there is data variability (dispersion or spread) among these sites that does not exist in other sites.

V. CONCLUSION

The recent advancement of functional connectivity and brain ROI analysis has made conspicuous inroads into the classification of ASD. However, it is challenging to generalize the outcomes to larger, more heterogeneous populations rather than smaller ones. While most of the recent work investigated the functional connectivity or time series analysis of fMRI, in this study we demonstrate a suitable image generator to harvest stable images that can provide perceptive details on the target disease from a heterogeneous neuroimaging modality. Also, we validate the generated images using two proposed deep learning-based frameworks that could enhance diagnostic truthfulness, with the potential to classify and develop better treatments. Furthermore, to examine the inter-site data variability, we apply the proposed method across the sites using a leave-site-out cross-validation approach. Our image processing scheme and sampling, along with the precise CNN classifier, ensure a trustworthy approach to ASD classification in association with the other image processing techniques. Overall, the proposed image processing scheme provides a proficient and objective way of interpreting neuroimaging applied to the deep learning model.

Future research directions involve extending to the structural preprocessing and cortical-measure calculation pipeline data for the classification of ASD. Besides, it is necessary to modify the architecture to consolidate raw fMRI data as well as to analyze the correlation between brain activation regions (axial, sagittal and coronal) to perceive the neural connectivity of the brain during the natural progression of autism.

REFERENCES

[1] E. Honey, J. Rodgers, and H. McConachie, "Measurement of restricted and repetitive behaviour in children with autism spectrum disorder: Selecting a questionnaire or interview," Res. Aut. Spc. Dis., vol. 6, no. 2, pp. 757–776, 2012.
[2] M. A. Just, V. L. Cherkassky, A. Buchweitz, T. A. Keller, and T. M. Mitchell, "Identifying autism from neural representations of social interactions: Neurocognitive markers of autism," PLOS ONE, vol. 9, no. 12, pp. 1–22, 2014.
[3] G. Noriega, "Restricted, repetitive, and stereotypical patterns of behavior in autism–an fMRI perspective," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 27, no. 6, pp. 1139–1148, Jun. 2019.
[4] J. Baio, "Prevalence of autism spectrum disorder among children aged 8 years-autism and developmental disabilities monitoring network, 11 sites, United States," Centers Diseases Control Prevention, vol. 63, pp. 1–24, 2014.
[5] M. D. Kaiser et al., "Neural signatures of autism," Proc. Nat. Acad. Sci., vol. 107, no. 49, pp. 21223–21228, 2010.
[6] M. Lee, D. Y. Kim, M. K. Chung, A. L. Alexander, and R. J. Davidson, "Topological properties of the structural brain network in autism via ε-neighbor method," IEEE Trans. Biomed. Eng., vol. 65, no. 10, pp. 2323–2333, Oct. 2018.
[7] X.-a. Bi, Y. Wang, Q. Shu, Q. Sun, and Q. Xu, "Classification of autism spectrum disorder using random support vector machine cluster," Frontiers Genetics, vol. 9, 2018.
[8] Y. Kong, J. Gao, Y. Xu, Y. Pan, J. Wang, and J. Liu, "Classification of autism spectrum disorder by combining brain connectivity and deep neural network classifier," Neurocomputing, vol. 324, pp. 63–68, 2019.
[9] X. Bi et al., "The genetic-evolutionary random support vector machine cluster analysis in autism spectrum disorder," IEEE Access, vol. 7, pp. 30527–30535, Mar. 2019.
[10] B. A. Cociu et al., "Multimodal functional and structural brain connectivity analysis in autism: A preliminary integrated approach with EEG, fMRI, and DTI," IEEE Trans. Cogn. Dev. Syst., vol. 10, no. 2, pp. 213–226, Jun. 2018.
[11] Z. Wang, Y. Zheng, D. C. Zhu, A. C. Bozoki, and T. Li, "Classification of Alzheimer's disease, mild cognitive impairment and normal control subjects using resting-state fMRI based network connectivity analysis," IEEE J. Translational Eng. Health Medicine, vol. 6, pp. 1–9, Oct. 2018.
[12] B. Thirion, G. Varoquaux, E. Dohmatob, and J.-B. Poline, "Which fMRI clustering gives good brain parcellations?" Frontiers Neurosci., vol. 8, 2014.
[13] C. Wang, Z. Xiao, B. Wang, and J. Wu, "Identification of autism based on SVM-RFE and stacked sparse auto-encoder," IEEE Access, vol. 7, pp. 118030–118036, Aug. 2019.
[14] Z. Yao et al., "Resting-state time-varying analysis reveals aberrant variations of functional connectivity in autism," Frontiers Human Neurosci., vol. 10, 2016.
[15] X. Li, N. C. Dvornek, J. Zhuang, P. Ventola, and J. S. Duncan, "Brain biomarker interpretation in ASD using deep learning and fMRI," in Proc. Med. Image Comput. Comput. Assisted Intervention, 2018, pp. 206–214.
[16] M. R. Ahmed, Y. Zhang, Z. Feng, B. Lo, O. T. Inan, and H. Liao, "Neuroimaging and machine learning for dementia diagnosis: Recent advancements and future prospects," IEEE Rev. Biomed. Eng., vol. 12, pp. 19–33, Dec. 2018.
[17] J. R. Sato, M. Calebe Vidal, S. de Siqueira Santos, K. Brauer Massirer, and A. Fujita, "Complex network measures in autism spectrum disorders," IEEE/ACM Trans. Comput. Biol. Bioinf., vol. 15, no. 2, pp. 581–587, Mar./Apr. 2018.
[18] Y. Zhao et al., "Automatic recognition of fMRI-derived functional networks using 3-D convolutional neural networks," IEEE Trans. Biomed. Eng., vol. 65, no. 9, pp. 1975–1984, Sep. 2018.
[19] T. Iidaka, "Resting state functional magnetic resonance imaging and neural network classified autism and control," Cortex, vol. 63, pp. 55–67, 2015.
[20] P. Kassraian-Fard, C. Matthis, J. H. Balsters, M. H. Maathuis, and N. Wenderoth, "Promises, pitfalls, and basic guidelines for applying machine learning classifiers to psychiatric imaging data, with autism as an example," Frontiers Psychiatry, vol. 7, 2016.
[21] J. V. Hull, L. B. Dokovna, Z. J. Jacokes, C. M. Torgerson, A. Irimia, and J. D. Van Horn, "Resting-state functional connectivity in autism spectrum disorders: A review," Frontiers Psychiatry, vol. 7, 2017.
[22] A. Pascual-Belda, A. Díaz-Parra, and D. Moratal, "Evaluating functional connectivity alterations in autism spectrum disorder using network-based statistics," Diagnostics, vol. 8, no. 3, p. 51, 2018.
[23] A. Abraham et al., "Deriving reproducible biomarkers from multi-site resting-state data: An autism-based example," NeuroImage, vol. 147, pp. 736–745, 2017.
[24] X. Guo, K. C. Dominick, A. A. Minai, H. Li, C. A. Erickson, and L. J. Lu, "Diagnosing autism spectrum disorder from brain resting-state functional connectivity patterns using a deep neural network with a novel feature selection method," Frontiers Neurosci., vol. 11, 2017.
[25] N. C. Dvornek, P. Ventola, K. Pelphrey, and J. S. Duncan, "Identifying autism from resting-state fMRI using long short-term memory networks," in Proc. Mach. Learn. Med. Imag., 2017, vol. 10541, pp. 362–370.
[26] S. B. Eickhoff, B. Thirion, G. Varoquaux, and D. Bzdok, "Connectivity-based parcellation: Critique and implications," Human Brain Mapping, vol. 36, no. 12, pp. 4771–4792, 2015.
[27] N. C. Dvornek, P. Ventola, and J. S. Duncan, "Combining phenotypic and resting-state fMRI data for autism classification with recurrent neural networks," in Proc. IEEE 15th Int. Symp. Biomed. Imag., 2018, pp. 725–728.
[28] S. Parisot et al., "Spectral graph convolutions for population-based disease prediction," in Proc. Med. Image Comput. Comput. Assisted Intervention, 2017, pp. 177–185.
[29] X. Li et al., "2-channel convolutional 3D deep neural network (2CC3D) for fMRI analysis: ASD classification and feature learning," in Proc. IEEE 15th Int. Symp. Biomed. Imag., 2018, pp. 1252–1255.
[30] Y. Zhao, F. Ge, S. Zhang, and T. Liu, "3D deep convolutional neural network revealed the value of brain network overlap in differentiating autism spectrum disorder from healthy controls," in Proc. Int. Conf. Med. Image Comput. Comput.-Assisted Intervention, 2018, pp. 172–180.
[31] S. I. Ktena et al., "Distance metric learning using graph convolutional networks: Application to functional brain networks," in Proc. Med. Image Comput. Comput. Assisted Intervention, 2017, pp. 469–477.
[32] R. Anirudh and J. J. Thiagarajan, "Bootstrapping graph convolutional neural networks for autism spectrum disorder classification," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process., 2019, pp. 3197–3201.
[33] A. Phinyomark, E. Ibanez-Marcelo, and G. Petri, "Resting-state fMRI functional connectivity: Big data preprocessing pipelines and topological data analysis," IEEE Trans. Big Data, vol. 3, no. 4, pp. 415–428, Dec. 2017.
[34] D. Raví et al., "Deep learning for health informatics," IEEE J. Biomed. Health Informat., vol. 21, no. 1, pp. 4–21, Jan. 2017.
[35] M. Khosla, K. Jamison, A. Kuceyeski, and M. R. Sabuncu, "Ensemble learning with 3D convolutional neural networks for functional connectome-based prediction," NeuroImage, vol. 199, pp. 651–662, 2019.
[36] J. Wang, Q. Wang, H. Zhang, J. Chen, S. Wang, and D. Shen, "Sparse multiview task-centralized ensemble learning for ASD diagnosis based on age- and sex-related functional connectivity patterns," IEEE Trans. Cybern., vol. 49, no. 8, pp. 3141–3154, Aug. 2019.
[37] A. S. Heinsfeld, A. R. Franco, R. C. Craddock, A. Buchweitz, and F. Meneguzzi, "Identification of autism spectrum disorder using deep learning and the ABIDE dataset," NeuroImage: Clin., vol. 17, pp. 16–23, 2018.
[38] T. Eslami, V. Mirjalili, A. Fong, A. R. Laird, and F. Saeed, "ASD-DiagNet: A hybrid learning approach for detection of autism spectrum disorder using fMRI data," Frontiers Neuroinf., vol. 13, 2019.
[39] C. Craddock et al., "The neuro bureau preprocessing initiative: Open sharing of preprocessed neuroimaging data and derivatives," Frontiers Neuroinf., vol. 7, 2013.
[40] "ABIDE preprocessed," 2011. [Online]. Available: http://preprocessed-connectomes-project.org/abide/, Accessed: May 18, 2020.
[41] X. Liu, C. Chang, and J. Duyn, "Decomposition of spontaneous brain activity into distinct fMRI co-activation patterns," Frontiers Syst. Neurosci., vol. 7, 2013.
[42] J. R. Sato et al., "Identifying multisubject cortical activation in functional MRI: A frequency domain approach," J. Data Sci., vol. 6, pp. 89–103, 2008.
[43] G. S. Dichter, L. Sikich, A. Song, J. Voyvodic, and J. W. Bodfish, "Functional neuroimaging of treatment effects in psychiatry: Methodological challenges and recommendations," Int. J. Neurosci., vol. 122, no. 9, pp. 483–493, 2012.
[44] T. Mullen et al., "Real-time estimation and 3D visualization of source dynamics and connectivity using wearable EEG," in Conf. Proc. IEEE Eng. Med. Biol. Soc., 2013, pp. 2184–2187.
[45] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proc. Int. Conf. Mach. Learn., 2015, vol. 37, pp. 448–456.
[46] G. Huang, Z. Liu, L. v. d. Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., 2017, pp. 2261–2269.
[47] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., 2016, pp. 770–778.
[48] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., 2017, pp. 1800–1807.
[49] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., 2016, pp. 2818–2826.
[50] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. 3rd Int. Conf. Learn. Representations, May 2015.