Babu G. Computational Imaging and Analytics in Biomedical Engineering... 2024
COMPUTATIONAL
IMAGING AND ANALYTICS IN
BIOMEDICAL ENGINEERING
Algorithms and Applications
Edited by
T. R. Ganesh Babu, PhD
U. Saravanakumar, PhD
Balachandra Pattanaik, PhD
First edition published 2024
Apple Academic Press Inc.
1265 Goldenrod Circle, NE, Palm Bay, FL 32905, USA
760 Laurentian Drive, Unit 19, Burlington, ON L7N 0A4, Canada
CRC Press
2385 NW Executive Center Drive, Suite 320, Boca Raton, FL 33431, USA
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN, UK
U. Saravanakumar, PhD
Professor and Head of Electronics and Communication Engineering,
Muthayammal Engineering College (Autonomous), Namakkal, India
Contributors...........................................................................................................xiii
Abbreviations ........................................................................................................ xvii
Preface ................................................................................................................... xxi
Index .....................................................................................................................321
CONTRIBUTORS
M. Suresh Anand
Department of Computing Technologies, School of Computing, SRM Institute of Science &
Technology, Kattankulathur, India
Sridhar P. Arjunan
Department of Electronics and Instrumentation Engineering, SRM Institute of Science and Technology,
Kattankulathur, India
B. Arputhamary
Department of Computer Applications, Bishop Heber College, Tiruchirappalli, India
V. Ramesh Babu
Department of Computer Science and Engineering, Sri Venkateswara College of Engineering,
Sriperumbudur, India
Yohannes Bekuma Bakare
Department of Electrical and Computer Engineering, College of Engineering and Technology,
Wollega University, Ethiopia, Africa
G. Balanagireddy
Department of Electronics and Communication Engineering, Rajiv Gandhi University of Knowledge
Technologies-Ongole Campus, Ongole, India
B. Balakumar
Centre for Information Technology and Engineering, Manonmaniam Sundaranar University,
Tirunelveli, India
S. Mary Cynthia
Department of ECE, Jeppiaar Institute of Technology, Chennai, India
M. Ganthimathi
Department of CSE, Muthayammal Engineering College, Namakkal, Tamil Nadu, India
N. Gopinath
Department of Computer Science and Engineering, Sri Sairam Engineering College, Chennai, India
G. Gunasekaran
Department of Computer Science and Engineering, Dr. M. G. R. Educational and Research Institute,
Maduravoyal, Chennai, India
M. V. Ishwarya
Department of Artificial Intelligence and Data Science, Agni College of Technology, Chennai, India
Vajiram Jayanthi
SENSE, Vellore Institute of Technology, Chennai, India
S. Jacily Jemila
Vellore Institute of Technology, Chennai, India
Mahendrakan K.
Department of Electronics and Communication Engineering, Hindusthan Institute of Technology,
Coimbatore, Tamil Nadu, India
A. Karunamurthy
BWDA Arts and Science College, Vilupuram, India
Syed Khasim
Department of CSE, Dr. Samuel George Institute of Engineering & Technology, Andhra Pradesh, India
T. Ganesh Kumar
Department of Computing Science and Engineering, Galgotias University, Greater Noida,
Uttar Pradesh, India
K. Sampath Kumar
Department of Computing Science and Engineering, Galgotias University, Greater Noida,
Uttar Pradesh, India
M. Kumarasamy
Department of Computer Science, College of Engineering and Technology, Wollega University,
Ethiopia, Africa
A. Kumaresan
School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
M. Malathi
Department of Electronics and Communication Engineering, Rajalakshmi Institute of Technology,
Chennai, India
M. Moorthy
Muthayammal Engineering College, Rasipuram, India
K. Sakthi Murugan
Department of ECE, PSN College of Engineering and Technology, Tirunelveli, India
R. Murugasami
Department of Electronics and Communication Engineering, Nandha Engineering College
(Autonomous), Erode, India
S. Muthukumar
Department of CSE, B.S. Abdur Rahman Crescent Institute of Science and Technology, Chennai, India
N. Naveenkumar
Department of CSE, Muthayammal Engineering College, Rasipuram, India
S. Omkumar
Department of ECE, SCSVMV (Deemed University), Kanchipuram, India
Gururama Senthilvel P.
Department of Computing Science and Engineering, Galgotias University, Greater Noida,
Uttar Pradesh, India
Balachandra Pattanaik
Department of Electrical and Computer Engineering, College of Engineering and Technology,
Wollega University, Ethiopia, Africa
S. Pragadeeswaran
Department of CSE, Muthayammal Engineering College, Namakkal, Tamil Nadu, India
R. Praveena
Department of ECE, Muthayammal Engineering College, Namakkal, India
G. Soniya Priyatharsini
Department of ECE, DR. M. G. R. Educational and Research Institute, Maduravoyal, Chennai, India
S. Punitha
Department of ECE, Muthayammal Engineering College, Rasipuram, India
A. Purushothaman
Department of ECE, Hindusthan Institute of Technology, Coimbatore, India
Sreenithi R.
Department of Computer Technology, Madras Institute of Technology, Chennai, India
A. Rajan
Department of ECE, Sreerama Engineering College, Thirupathi, India
J. Martin Sahayaraj
Department of Electronics and Communication Engineering, Sri Indu College of Engineering and
Technology, Hyderabad, Telangana, India
K. Savima
Department of Computer Science, S.T.E.T. Women’s College, Mannargudi, India
K. Sekar
Department of Electrical and Electronics Engineering, Hindusthan College of Engineering and
Technology, Coimbatore, Tamil Nadu
Sivakumar Shanmugasundaram
SENSE, Vellore Institute of Technology, Chennai, India
S. Sharmila
Department of Civil Engineering, Nandha Engineering College, Erode, India
K. Shebagadevi
Department of ECE, Muthayammal Engineering College, Namakkal, India
P. Sinthia
Department of Biomedical Engineering, Saveetha Engineering College, Chennai, India
M. Sivakumar
Department of ECE, Mohamed Sathak A. J. College of Engineering, Chennai, India
S. R. Sridhar
Department of CSE, Muthayammal Engineering College, Namakkal, Tamil Nadu, India
P. Srinivasan
Department of CSE, Muthayammal Engineering College, Rasipuram, India
P. Subramanian
Department of CSE, Mohamed Sathak A. J. College of Engineering, Chennai, India
P. Sukumar
Department of Computer Science and Engineering, Nandha Engineering College (Autonomous),
Erode, India
R. Sumathi
Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education,
Krishnankoil, India
J. Surendharan
HKBK College of Engineering, Bangalore, Karnataka, India
N. Sureshkumar
Department of ECE, Muthayammal College of Engineering, Rasipuram, India
A. Brintha Therese
Vellore Institute of Technology, Chennai, India
K. Umapathy
Department of ECE, SCSVMV (Deemed University), Kanchipuram, India
C. Vijayakumaran
Department of CSE, SRM Institute of Science and Technology, Kattankulathur, Chennai, India
ABBREVIATIONS
AD axial diffusivity
AGI artificial general intelligence
AI artificial intelligence
AML acute myeloid leukemia
ANN artificial neural network
ARR arrhythmia
AS attacker scale
ASD autism spectrum disorder
ATS attack target scale
BBB blood–brain barrier
BC binary classifier
BFC bias field corrector
BN Bayesian network
BOLD blood-oxygen-level-dependent
BSE brain surface extractor
BSL British sign language
CAD computer-aided diagnostic
CAF cooperation attack frequency
CAPEX capital expenditure
CHO channelized Hotelling observer
CNN convolutional neural network
CNNPL convolutional neural network with a prototype learning
CNS central nervous system
CRM customer relationship management
CS collusion set
CSF cerebrospinal fluid
CSL Chinese sign language
CT computed tomography
DCNN deep convolutional neural network
DL deep learning
DT decision tree
DTI diffusion tensor imaging
ECG electrocardiography
EEG electroencephalography
EOG electro-oculogram
ERP enterprise resource planning
EV exploration views
FA fractional anisotropy
FCM fuzzy C-means
FL feedback limit
FLAIR fluid attenuated inversion recovery
fMRI functional magnetic resonance imaging
FP false positive
FS feedback set
FWHM full width at half maximum
GLM general linear model
GOFAI good old-fashioned artificial intelligence
GSL Greek sign language
HCI human computer interaction
HE histogram equalization
HPV human papilloma virus
IaaS infrastructure as a service
IdM identity management service
IQA image quality assessment
ISTFT inverse short-time Fourier transform
KNN K-nearest neighbor
LASSO least absolute shrinkage and selection operator
LR logistic regression
LSF French sign language
LSTM long short-term memory
MCT manual contour tracing
MD mean diffusivity
ME maximum entropy
MEG magnetoencephalogram
ML machine learning
MRI magnetic resonance imaging
NB Naïve Bayes
NMR nuclear magnetic resonance
NN neural network
NSCT non-subsampled contourlet transform
OFR out-of-field recurrence
OLAP online analytical processing
OPEX operational expenditure
PREFACE
Medical images are useful in analyzing the internal anatomy of the
human body, and they are therefore treated as a core component of medical
diagnosis and treatment. These images contain information about human
anatomy, such as cell and tissue arrangements and any growth in human
cells. This information cannot be interpreted easily by the human eye and
needs medical devices, tools, and software for in-depth analysis.
On the other hand, medical images are stored in databases, forming big data
for population groups, and thus the handling of medical data (or images) is
also a crucial task.
To address these issues, various image-processing techniques are applied
to medical images and datasets with suitable algorithms to automatically
or semiautomatically extract useful information about patients. For
further simplification and optimization, emerging techniques such as neural
networks, machine learning, deep learning (DL), and artificial intelligence
are adopted for feature extraction from medical images, medical image
segmentation, and image-based bio-models.
These advancements enable the appropriate use of medical images in
healthcare, including: (1) medical knowledge of biological systems
through detailed analysis of the structures and functions of particular
biological parts or organs; (2) computer-assisted frameworks for medical
treatments; and (3) development of image datasets of human bodies for
diagnosis, guiding doctors, and research activities.
This book comprises 20 chapters, and they are organized in the following
manner.
Chapter 1 discusses statistical analysis of seizure data to support clinical
proceedings. Radiographic imaging is a powerful and clinically important
tool in oncology. Artificial intelligence is used to quantify radiographic
characteristics of images using predefined algorithms. Strategies for
clinical imaging data include normalization, robust models, and statistical
analyses. These also help improve the quality and precision of medical
image analysis and support clinicians in diagnosing, treating, planning,
monitoring changes, and executing procedures more safely and effectively.
Statistical models give a surgery-specific context to the problem, learned
from a training set of cases conforming to the problem along with its
outcome.
Chapter 2 presents spatial preprocessing in segmentation of brain MRI
using T1 and T2 images. Preprocessing is a very important step before
segmentation in medical image processing. In this chapter, T1 and T2 MRI
images of the brain are segmented before and after preprocessing using the
SPM-12 neuroimaging software, and the importance of preprocessing is
discussed using five important quality metrics: MSE, PSNR, SNR, EPI, and
SSIM (structural similarity index metric).
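Two of the quality metrics mentioned above can be sketched in a few lines. This is a minimal pure-Python illustration (not the SPM-12 implementation); the tiny 2×2 "images" are hypothetical:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-sized grayscale images (nested lists)."""
    n = sum(len(row) for row in a)
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)) / n

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(max_val ** 2 / m)

reference = [[100, 110], [120, 130]]
denoised  = [[102, 108], [121, 129]]
print(round(mse(reference, denoised), 2))   # 2.5 (average squared pixel difference)
print(round(psnr(reference, denoised), 2))  # 44.15 dB
```

SSIM, SNR, and EPI follow the same pattern but compare local statistics (means, variances, edges) rather than raw pixel differences.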
Chapter 3 discusses a comparative volume analysis of the pediatric brain
and the adult brain using T1 MRI images. Segmentation of the pediatric
brain is very useful in medical image analysis for supervising brain growth
and development in infants. Atlas-based methods are used for automatic
segmentation of MRI images of the brain. In this chapter, automatic
segmentation of four pediatric and three adult brains is done using the
BrainSuite19a brain imaging software, and the volumes of different brain
parts are computed.
Chapter 4 presents a comparison of the region of interest and cortical area
thickness of seizure- and hemosiderin-affected brain images. Brain disorders
and tumors are caused by severe neurological abnormalities in their
functions. Hemosiderin induces changes in the characteristic functional
magnetic field and can be detected using susceptibility-weighted T1 images.
The clinical syndromes and their relevance are unclear. Hemosiderin and
epilepsy brain images are preprocessed with an available software tool.
Based on the region of interest, the mean thickness area and cortical
thickness area of different parts of the brain are determined.
Chapter 5 presents the design and analysis of a classifier for atrial
fibrillation detection and ECG classification based on deep neural networks
(DNNs). Atrial fibrillation (AF) is the most common cardiac arrhythmia as
well as a significant risk factor for heart failure and coronary artery disease.
Deep learning is of current interest in many healthcare applications,
including heartbeat classification based on ECG signals. AF can be detected
using a short ECG recording. This chapter describes constructing a classifier
for the detection of atrial fibrillation in ECG signals using deep neural
networks.
Chapter 6 discusses the design and analysis of efficient short-time Fourier
transform-based feature extraction for removing EOG artifacts using
deep learning regression. It also discusses how to eliminate electro-oculogram
(EOG) noise from electroencephalogram (EEG) signals by employing the
benchmark EEGdenoiseNet dataset. This work involves
The four fusion methods, namely the wavelet transform, curvelet transform,
non-subsampled contourlet transform (NSCT), and multimodal image fusion,
are applied. For the performance analysis, entropy, peak signal-to-noise ratio
(PSNR), standard deviation (SD), structural similarity index measure (SSIM),
and root mean square error (RMSE) are computed.
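Of the fusion metrics listed, entropy is the simplest to illustrate: it measures how much information (in bits per pixel) an image carries. A minimal sketch with hypothetical pixel data (not the chapter's implementation):

```python
import math

def image_entropy(pixels, levels=256):
    """Shannon entropy (bits/pixel) of a grayscale image, via its histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    # Add 0.0 so a zero result prints as 0.0 rather than -0.0.
    return -sum((c / n) * math.log2(c / n) for c in hist if c) + 0.0

# A fused image that uses more gray levels carries more information.
flat   = [128] * 64       # constant image: zero entropy
varied = list(range(64))  # 64 equally likely levels
print(image_entropy(flat))    # 0.0
print(image_entropy(varied))  # 6.0  (= log2 of 64 equally likely levels)
```

PSNR, SD, and RMSE are likewise simple pixelwise statistics; SSIM compares local windows of the two images.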
Chapter 19 presents a framework supporting a reputation-based trust
evaluation system in the cloud environment. The design and development of
CloudArmor, a trust evaluation framework that offers a set of functions to
provide trust as a service (TaaS), is discussed in this chapter. It also
discusses the problems in calculating trust based on input from cloud users.
This technology effectively protects cloud services by detecting hostile and
inappropriate behavior through trust algorithms that can recognize on/off
attacks and colluding attacks using different security criteria.
Chapter 20 presents machine learning for medical images. This chapter
covers the key phases of digital image processing, applications of medical
images, and the uses and difficulties of artificial intelligence in healthcare
applications.
Our deep gratitude goes to all the contributors for providing quality content
to the readers of this book. We extend our sincere thanks to all the reviewers
for their suggestions, which helped us pack this book with its merits. From
the bottom of our hearts, we thank our family members, employers, research
scholars, and colleagues for their unrestricted support. We also wish to thank
the tireless team at Apple Academic Press for their wonderful support and
guidance throughout the process of making this book.
— T. R. Ganesh Babu
U. Saravanakumar
Balachandra Pattanaik
CHAPTER 1
STATISTICAL ANALYSIS OF SEIZURE DATA TO SUPPORT CLINICAL PROCEEDINGS
ABSTRACT
1.1 INTRODUCTION
Visual interpretation is not always the right choice in the medical process,
so automated image analysis by computer with a suitable algorithm is the
ideal choice.1 The radiographic analysis of disease images needs more
attention because the data are collected during routine clinical practice and
undergo pre- and post-processing work. Artificial intelligence (AI) and
statistical methodology, with image-processing capabilities and big-data
analysis of datasets, are growing exponentially.2,3 The automated
quantification of radiographic features for the discovery, characterization,
and monitoring of conditions is called radiomics;4 it uses predefined
features of the images5 based on texture, shape, and intensity for image
description.6 The early success of radiomics in giving clinical opinions on
different brain-related diseases7 has driven rapid expansion in this medical
field.8
Numerous challenges in radiologic analysis have been addressed in the
quantitative field of biostatistics:9 microarray data normalization and
transformation,10 batch effects in high-throughput data,11 empirical Bayes
methods for adjusting batch effects in microarray expression,12 statistical
methods for DNA hybridization,13 and the study of cancer subtypes.14
Researchers in biostatistics have also contributed to the development of
statistical methods for radiologic analysis, addressing challenges such as
normalizing and transforming microarray data, mitigating batch effects
in high-throughput data, and studying cancer subtypes.15 In its earlier
state, data analysis in radiology faced many challenges and avoidable
data-analysis pitfalls.16 More than 150 biomedical image analyses up to
2016 are discussed. To quantify the robustness of a ranking, Kendall's tau
is used with statistical methods such as statistical analysis with radiomic
quantification, biomarker identification, and validation.17 The different
approaches used in AI are especially useful in the field of "big data,"
which describes large volumes of complex and variable data.
1.2.1 MEDCALC
distance between classes. The data generation and performance measures
used to compare the algorithms are shown in this paper.
1.2.3 META-ANALYSIS
as the log odds ratio, log(OR), and the risk ratio (RR) and its logarithm,
log(RR). The log(OR) and log(RR) are each unbounded, whereas the
observed study-level event proportions lie in the interval (0, 1). Binomial
generalized linear mixed models (GLMMs) and the beta-binomial (BB)
distribution, with the logit link function, were used for the meta-analysis.
The Cochrane meta-analyses of RR show bias effects from the conventional
inverse-variance-weighted technique. Epidemiologic research is executed
with meta-regression: a linear relation between exposure and outcome, or a
nonlinear relation between an exposure and relative risk, can also be handled
easily with a statistical solution.53 The relative risks are expressed relative
to a referent class in the meta-regression contributions of meta-analysis.
Greenland and Longnecker54 developed a technique for dose-response data
with meta-analysis.
The risk ratio (RR, or relative risk) compares the event probabilities of two
groups, whereas the odds ratio (OR) compares the odds of an event; a value
of 1 indicates equal estimated effects for both interventions. Measures
regarding patient characteristics, comorbidities, symptoms, and vital signs
were extracted for the meta-analysis. The risk metrics of the meta-analysis
were used to assess two outcomes: severity and mortality. A meta-analysis
was conducted using a random-effects model to pool the regression data
on odds ratios (ORs). The ORs for mortality studies were calculated for
any severe illness (patient case history, ICU [intensive care unit] admission)
and the same scientific variable estimation, whether multivariate or
univariate. The data were analyzed using R statistical software, with the
meta-analysis and plots generated using the R package meta.55,56
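The chapter performs this pooling with R's meta package. Purely as an illustration, the DerSimonian–Laird random-effects method underlying such pooling can be sketched in Python; the 2×2 study tables below are hypothetical, not the chapter's data:

```python
import math

def log_or(a, b, c, d):
    """Log odds ratio and its variance for one study's 2x2 table
    (a = events/treated, b = no event/treated, c = events/control, d = no event/control)."""
    return math.log(a * d / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

def pool_random_effects(tables):
    """DerSimonian-Laird random-effects pooled OR from several 2x2 tables."""
    ys, vs = zip(*(log_or(*t) for t in tables))
    w = [1 / v for v in vs]                        # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, ys))   # heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)       # between-study variance
    w_re = [1 / (v + tau2) for v in vs]            # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    return math.exp(pooled)                        # back to the OR scale

# Hypothetical (events, non-events) counts in two groups, for three studies.
studies = [(12, 39, 8, 55), (20, 80, 10, 90), (15, 45, 9, 60)]
print(round(pool_random_effects(studies), 2))      # pooled OR, here a little above 2
```

In practice one would use the meta package (or Python's statsmodels) rather than hand-rolling this, but the sketch shows what "pooling with a random-effects model" computes.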
The ROC curve is used for the estimation of the individual curve; the
parameters are then pooled with bivariate random effects. The receiver
operating characteristic (ROC) curve is used for evaluating the overall
diagnostic performance of tests and for comparing the accuracy of statistical
models, such as logistic regression and linear discriminant analysis, that
classify subjects into one of two categories, diseased or not diseased.
Predictive modeling estimates anticipated outcomes, such as mortality, from
patient risk traits in research with ROC analysis. The measures of accuracy
are sensitivity, specificity, and the area under the curve (AUC) derived from
the ROC curve. To estimate accuracy with ROC strategies, the disease status
of the patient measured without error is called the gold standard. The
diagnostic accuracy measures are specificity (i.e., the true negative rate) and
sensitivity (i.e., the true positive rate). A threshold of the diagnostic test is
used to classify subjects. The ROC curve is discussed in the coronary
restenosis and peak oxygen consumption analysis articles.57,58
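These ROC quantities can be sketched minimally as follows; the scores and labels are hypothetical, and a real analysis would use R or a statistics package as described above:

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity (true positive rate) and specificity (true negative rate)
    when subjects with score >= threshold are called diseased (label 1)."""
    tp = sum(s >= threshold and y for s, y in zip(scores, labels))
    fn = sum(s < threshold and y for s, y in zip(scores, labels))
    tn = sum(s < threshold and not y for s, y in zip(scores, labels))
    fp = sum(s >= threshold and not y for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    diseased subject scores higher than a randomly chosen healthy one."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical diagnostic-test scores; label 1 = diseased.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   0,   1,   0,   0,   0]
print(sens_spec(scores, labels, 0.5))  # (0.75, 0.75)
print(auc(scores, labels))             # 0.9375
```

Sweeping the threshold from high to low traces out the full ROC curve; the AUC summarizes it in a single number between 0.5 (chance) and 1 (perfect).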
MedCalc is fast and reliable statistical software that includes more than 200
statistical tests, procedures, and graphs. Medical calculators supplement
physicians' memory and calculation skills in clinical testing.
TABLE 1.1 Input Data of Seizure Patient Treatment Based on Cluster Analysis.
Gender Treatment Measurement 1 Measurement 2
Female A 21 25
Male B 22 26
To be continued
Study 3 51 223 12 76
Study 5 47 53 10 51
FLE, frontal lobe epilepsy; PLE, parietal lobe epilepsy; OLE, occipital lobe epilepsy; MTLE,
mesial temporal lobe epilepsy; ME, mesial epilepsy.
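Cluster analysis of measurement pairs like those in Table 1.1 can be illustrated with a bare-bones k-means. The routine below is a sketch of the general technique, not MedCalc's algorithm, and the data points are hypothetical values in the style of the table:

```python
def kmeans(points, k=2, iters=20):
    """Plain k-means: assign each point to its nearest centroid, then re-average."""
    centroids = points[:k]                         # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        centroids = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids

# Hypothetical (measurement 1, measurement 2) pairs in the style of Table 1.1.
data = [(21, 25), (22, 26), (23, 24), (51, 76), (47, 51), (53, 70)]
print(sorted(kmeans(data)))  # two cluster centers, one per patient group
```

With real data one would standardize the measurements first and choose k by inspecting within-cluster distances, as the chapter's discussion of distances between classes suggests.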
1.4 CONCLUSION
KEYWORDS
• statistical analysis
• meta-analysis
• cluster analysis
• radiology
REFERENCES
1. Renukalatha, S.; Suresh, K. V. A Review on Biomedical Image Analysis. Biomed. Eng.
App. Basis Commun. 2018.
2. Wang, W.; Krishnan, E. A Review on the State of the Science. Big Data Clin. JMIR Med.
Inform. 2014, 2, e1.
3. Luo, J.; Wu, M.; Gopukumar, D.; Zhao, Y. A Literature Review: Big Data Application in
Biomedical Research and Health Care. Biomed. Inform. Insights 2016, 8, 1–10.
4. Lambin, P.; Rios-Velazquez, E.; Leijenaar, R.; Carvalho, S.; van Stiphout, RGPM.;
Granton, P. Extracting More Information from Medical Images Using Advanced Feature
Analysis Radiomics. Eur. J. Cancer 2012, 48, 441–446.
5. van Griethuysen, J. J. M.; Fedorov, A.; Parmar, C.; Hosny, A.; Aucoin, N.; Narayan, V.
Radiomics System to Decode the Radiographic Phenotype Computational. Cancer
Res. 2017, 77, e104–e107.
6. Rusk, N. Deep Learning. Nat Methods 2015, 13, 35.
7. Huynh, E.; Coroller, T. P.; Narayan, V.; Agrawal, V.; Romano, J.; Franco, I. et al.
Associations of Radiomic Data Extracted from Static and Respiratory-Gated CT Scans
with Disease Recurrence in Lung Cancer Patients Treated with SBRT. PLoS One 2017,
12, e169–e172.
8. Kolossváry, M.; Kellermayer, M.; Merkely, B.; Maurovich-Horvat, P. A Comprehensive
Review on Radiomic Techniques: Cardiac Computed Tomography Radiomics. J.
Thorac. Imaging 2018, 33, 26–34.
9. O’Connor, J. P. B.; Aboagye, E. O.; Adams, J. E.; Aerts, H. J. W. L.; Barrington, S.
F.; Beer, A. et al. Imaging Biomarker Roadmap for Cancer Studies. Nat. Rev. Clin.
Oncol. 2017, 14, 169–186.
10. Quackenbush, J. Microarray Data Normalization and Transformation. Nat. Genet. 2002,
32, 496–501.
11. Leek, J. T.; Scharpf, R. B.; Bravo, H. C.; Simcha, D.; Langmead, B.; Johnson, W. E. et
al. Tackling the Widespread and Critical Impact of Batch Effects in High-Throughput
Data. Nat. Rev. Genet. 2010, 11, 733–739.
12. Johnson, W. E.; Li, C.; Rabinovic, A. Adjusting Batch Effects in Microarray Expression
Data Using Empirical Bayes Methods. Biostatistics 2007, 8, 118–127.
13. Lee, M. L.; Kuo, F. C.; Whitmore, G. A.; Sklar, J. Importance of Replication in
Microarray Gene Expression Studies, Statistical Methods and Evidence from Repetitive
cDNA Hybridizations. Proc. Natl. Acad. Sci. USA 2000, 97, 9834–9839.
14. Neve, R. M.; Chin, K.; Fridlyand, J.; Yeh, J.; Baehner, F. L.; Fevr, T. et al. A Collection
of Breast Cancer Cell Lines for the Study of Functionally Distinct Cancer Subtypes.
Cancer Cell 2006, 10, 515–527.
15. Allison, D. B.; Cui, X.; Page, G. P.; Sabripour, M. From Disarray to Consolidation and
Consensus: Microarray Data Analysis. Nat. Rev. Genet. 2006, 7, 55–65.
16. Aerts, H. J. Data Science in Radiology—A Path Forward. Clin. Cancer Res. 2018, 24,
532–534.
17. Parmar, C.; Barry, J.; Hosny, D.; Ahmed, Q.; John, A.; Hugo, J. W. L. Data Analysis
Strategies in Medical Imaging. Clin. Cancer Res. 2018. clincanres.0385.2018.
18. Altman, D. G.; Gardner, M. J. Calculating Confidence Intervals for Regression and
Correlation. Br. Med. J. 1988, 296, 1238–1242.
19. Altman, D. G. Construction of Age-Related Reference Centiles Using Absolute
Residuals. Stat. Med. 1993, 12, 917–924.
20. Glantz, S. A.; Slinker, B. K.; Neilands, T. B. Primer of Applied Regression & Analysis
of Variance, 2nd ed.; McGraw-Hill.
21. Walter, S. D. The Partial Area Under the Summary ROC Curve. Stat. Med. 2005, 24,
2025–2040.
43. Seminara, D.; Khoury, M. J.; O’Brien, T. R. et al. The Emergence of Networks in Human
Genome Epidemiology: Challenges and Opportunities. Epidemiology 2007, 18, 1–8.
44. Ioannidis, J. P.; Bernstein, J.; Boffetta, P. et al. A Network of Investigator Networks in
Human Genome Epidemiology. Am. J. Epidemiol. 2005, 162, 302–304.
45. Rothstein, H. R.; Sutton, A. J.; Borestein, M., Eds. Publication Bias in Meta-Analysis–
Prevention, Assessment and Adjustments; Wiley: Chichester, 2005.
46. Barrett, J. C.; Cardon, L. R. Evaluating Coverage of Genome-Wide Association Studies.
Nat. Genet. 2006, 38 (6), 659–662.
47. Manolio, T. A.; Rodriguez, L. L.; Brooks, L. et al. GAIN Collaborative Research Group.
New Models of Collaboration in Genome-Wide Association Studies: The Genetic
Association Information Network. Nat. Genet. 2007, 39 (9), 1045–1051.
48. Hoggart, C. J.; Clark, T. G.; De Iorio, M.; Whittaker, J. C.; Balding, D. J. Genome-Wide
Significance for Dense SNP and Resequencing Data. Genet. Epidemiol. 2008, 32 (2),
179–185.
49. Ioannidis, J. P. Why Most Discovered True Associations Are Inflated. Epidemiology
2008, 19 (5), 640–648.
50. Pritchard, J. K.; Stephens, M.; Rosenberg, N. A.; Donnelly, P. Association Mapping in
Structured Populations. Am. J. Hum. Genet. 2000, 67, 170–181.
51. Ioannidis, J. P.; Patsopoulos, N. A.; Evangelou, E. Uncertainty in Heterogeneity
Estimates in Meta-Analyses. BMJ. 2007, 335, 914–916.
52. Moonesinghe, R.; Khoury, M. J.; Liu, T.; Ioannidis, J. P. Required Sample Size and
Non-Replicability Thresholds for Heterogeneous Genetic Associations. Proc. Natl.
Acad. Sci. USA 2008, 105 (2), 617–622.
53. Il’yasova, D.; Hertz-Picciotto, I.; Peters, U. et al. Choice of Exposure Scores for
Categorical Regression in Meta-Analysis: A Case Study of a Common Problem. Cancer
Causes Control 2005, 16, 383–388.
54. Greenland, S.; Longnecker, M. P. Methods for Trend Estimation from Summarized
Dose-Response Data, with Applications to Meta-Analysis. Am. J. Epidemiol. 1992, 135,
1301–1309.
55. Metlay, J. P.; Waterer, G. W.; Long, A. C.; Anzueto, A.; Brozek, J.; Crothers, K. et al.
Diagnosis and Treatment of Adults with Community-acquired Pneumonia. An Official
Clinical Practice Guideline of the American Thoracic Society and Infectious Diseases
Society of America. Am. J. Respir. Crit. Care Med. 2019, 200, e45–e67. PMID:31573350.
56. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation
for Statistical Computing: Vienna, Austria, 2019. http://www.R-project.org/
57. Mauri, L.; Orav, J.; O’Malley, A. J.; Moses, J. W.; Leon, M. Z. B.; Holmes, D. R.;
Teirstein, P. S.; Schofer, J.; Breithardt, G.; Cutlip, D. E.; Kereiakes, D. J.; Shi, C.; Firth,
B. G.; Donohoe, D. J.; Kuntz, R. Relationship of Late Loss in Lumen Diameter to
Coronary Restenosis in Sirolimus-Eluting Stents. Circulation. 2005, 111, 321–327.
58. O’Neill, J.; Young, J. B.; Pothier, C. E.; Lauer, M. S. Peak Oxygen Consumption as
a Predictor of Death in Patient with Heart Failure Receiving β-Blockers. Circulation.
2005, 111, 2313–2318.
59. Tappen, M. F.; Freeman, W. T.; Adelson, E. H. Recovering Intrinsic Images from a
Single Image. IEEE Trans. Pattern Analy. Mach. Intell. 2005.
60. Kimmel, R.; Elad, M.; Shaked, D.; Keshet, R.; Sobel, I. A Variational Framework for
Retinex. Int. J. Comput. Vision 2003, 52 (1), 7–23.
61. Bell, W.; Freeman, W. T. Learning Local Evidence for Shading and Reflectance. In
Proc. Int. Conf. Comput. Vision 2001.
62. Freeman, W. T.; Pasztor, E. C.; Carmichael, O. T. Learning Low-Level Vision. Int. J.
Comput. Vision 2000, 40 (1), 25–47.
63. Land, E. H.; McCann, J. J. Lightness and Retinex Theory. J. Optic. Soc. Am. 1971, 61,
1–11.
64. Barrow, H. G.; Tenenbaum, J. M. Recovering Intrinsic Scene Characteristics from
Images. In Computer Vision Systems; Hanson, A., Riseman, E., Eds.; Academic Press,
1978; pp 3–26.
65. https://www.researchgate.net/figure/Seizure-outcomes-by-epilepsy-etiology_tbl2_239066212 [DATA]
CHAPTER 2
SPATIAL PREPROCESSING IN
SEGMENTATION OF BRAIN MRI
USING T1 AND T2 IMAGES
S. JACILY JEMILA and A. BRINTHA THERESE
Vellore Institute of Technology, Chennai, India
ABSTRACT
2.1 INTRODUCTION
Many image acquisition processes (MRI, PET, SPECT, etc.) suffer image
degradation by noise. For MRI, the foremost source of random noise is
thermal noise, which forms a statistically independent random source
corrupting the MR data in the time domain. Thermal noise is white and can
be modeled by a Gaussian random field with zero mean and constant
variance. Hence, the noise is not correlated with the signal or with itself.
Aside from thermal noise, structured noise arising from MR system
characteristics, physiological pulses, and object movement also degrades
image quality. The characteristics of noise depend on its source. Noise,
inhomogeneous pixel-intensity distribution, and abrupt boundaries in
medical MR images introduced by the MR data acquisition procedure are
the main problems that affect the quality of MRI segmentation. One leading
origin of noise is the ambient electromagnetic field picked up by the
radiofrequency (RF) detectors acquiring the MR signal, and the other is the
object or body being imaged.
In MRI, the raw data are inherently complex-valued and corrupted with
zero-mean Gaussian-distributed noise of equal variance. After the inverse
Fourier transformation, the real and imaginary images are still Gaussian-
distributed, owing to the orthogonality and linearity of the Fourier
transform. MR magnitude images are formed by taking the square root of
the sum of the squares of the two independent Gaussian random variables
(the real and imaginary images) pixel by pixel. After this nonlinear
transformation, the MR magnitude data can be shown to be Rician-distributed.
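This magnitude operation is easy to simulate: add zero-mean Gaussian noise to the real and imaginary channels, take the pixelwise magnitude, and the result follows a Rician distribution (which reduces to a Rayleigh distribution where the true signal is zero, as in image background). A small sketch:

```python
import math
import random

random.seed(0)
sigma = 1.0    # standard deviation of the Gaussian noise in each channel
signal = 0.0   # background voxel: true signal is zero

# Magnitude of complex Gaussian noise -> Rician (Rayleigh when signal == 0).
samples = [math.hypot(signal + random.gauss(0, sigma), random.gauss(0, sigma))
           for _ in range(100_000)]

mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to the Rayleigh mean, sigma * sqrt(pi/2) ~ 1.25
```

Note that the magnitude noise has a nonzero mean even where the true signal is zero; this bias is why magnitude MR noise cannot simply be treated as additive Gaussian in low-SNR regions.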
There is a trade-off among SNR, acquisition time, and spatial resolution
in MR images. The SNR is a dominant concern in most MRI applications,
and it is improved implicitly and explicitly by averaging. The MRI data
acquisition procedure can be performed by two averaging methods: (1) spatial volume
The human body is more than 70% water. The water molecule is H2O, where
each hydrogen nucleus is a single proton. This proton is the key component in
MRI: images are formed from the resonance property of the proton. The protons
in the body normally precess about randomly oriented axes with a Larmor
frequency of 63.855 MHz, as shown in Figure 2.2, and they are in
an out-of-phase condition.
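The quoted frequency follows from the Larmor relation f = (gamma/2pi)·B0. The 1.5 T field strength in this sketch is our assumption (the text does not state B0); it is the field at which protons precess near the quoted 63.855 MHz:

```python
# Proton gyromagnetic ratio over 2*pi, in MHz per tesla.
GAMMA_BAR_MHZ_PER_T = 42.577

def larmor_mhz(b0_tesla: float) -> float:
    """Proton Larmor frequency in MHz for a given field strength B0."""
    return GAMMA_BAR_MHZ_PER_T * b0_tesla

print(round(larmor_mhz(1.5), 2))   # ~63.87 MHz, close to the quoted 63.855
print(round(larmor_mhz(3.0), 2))   # ~127.73 MHz on a 3 T scanner
```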
FIGURE 2.3 Position of protons when the patient is positioned in an MRI scanner.
2.4.1 T1 RELAXATION
When the RF pulse is removed, the protons return from the transverse plane
toward the longitudinal plane, giving up some energy to their surroundings,
so the longitudinal magnetization increases. T1 is the time required to
recover 63% of the original longitudinal magnetization, and it is the time
constant that governs T1-weighted imaging.
2.4.2 T2 RELAXATION
When the RF pulse is stopped, the protons not only return toward the
longitudinal plane but also dephase. T2 is the time required for the
transverse magnetization to decay to 37% of its original value, and it is
the time constant that governs T2-weighted imaging.
For different tissues, there are different T1 and T2 timing and relaxation
curves.
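The two relaxation processes follow the standard exponential models, Mz(t) = M0(1 − e^(−t/T1)) and Mxy(t) = M0·e^(−t/T2). A small sketch; the T1/T2 values used here are assumed, order-of-magnitude illustration values, not measured tissue constants:

```python
import numpy as np

def t1_recovery(t, t1, m0=1.0):
    """Longitudinal magnetization regrowth after the RF pulse is removed."""
    return m0 * (1.0 - np.exp(-t / t1))

def t2_decay(t, t2, m0=1.0):
    """Transverse magnetization decay (dephasing) after the RF pulse."""
    return m0 * np.exp(-t / t2)

# Assumed, order-of-magnitude relaxation times in ms (water longer than fat).
t1_fat, t1_water = 260.0, 4000.0
t2_fat, t2_water = 80.0, 2000.0

# At t = T1 recovery reaches 1 - 1/e ~ 63% of M0; at t = T2 the transverse
# magnetization has decayed to 1/e ~ 37% of M0, matching the definitions above.
print(round(t1_recovery(t1_fat, t1_fat), 3))    # 0.632
print(round(t2_decay(t2_water, t2_water), 3))   # 0.368
```

Because water's longer T1 and T2 shift its curves to the right, evaluating both functions at a fixed echo/repetition time yields the tissue contrast exploited by T1- and T2-weighted imaging.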
Figures 2.5 and 2.6 show the T1 and T2 relaxation curves for fat and water,
respectively. From the figures, we can observe that fat and water have
distinct relaxation curves; the T1 and T2 relaxation times of water are
longer than those of fat.
Spatial Preprocessing in Segmentation of Brain MRI 29
2.6.1 REALIGN
2.6.2 COREGISTER
2.6.3 NORMALIZATION
This step is used to transform the scans into a standard space. SPM-12 uses
a unified segmentation procedure, which combines three steps:
1. Segmentation
2. Bias correction
3. Spatial normalization8
2.6.4 SMOOTHING
2.7 SEGMENTATION
The input T1 and T2 images in DICOM format are given in Figures 2.8
and 2.9, and the input images in Nifti format are given in Figures 2.10 and
2.11.
The realign estimate output is as given in Figure 2.12. The joint histogram
obtained during coregistration of T1 and T2 images is given in Figure 2.13.
Deformation fields generated during the normalization process for T1 and T2
images are given in Figures 2.14 and 2.15. The smoothed images are given
in Figures 2.16 and 2.17.
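The joint histogram produced during coregistration is also the basis of the mutual-information measures widely used for multimodal alignment: the better two images are aligned, the more peaked the joint histogram and the higher the mutual information. A small numpy sketch with hypothetical test images (this is not SPM-12's implementation):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images via their joint histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                   # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image B
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
a = rng.random((64, 64))
aligned = 2.0 * a + 0.1                       # intensity-remapped but aligned
shuffled = rng.permutation(a.ravel()).reshape(64, 64)   # destroys alignment

print(mutual_information(a, aligned) > mutual_information(a, shuffled))  # True
```

Note that the aligned pair scores highly even though the intensities differ, which is exactly why mutual information suits T1-to-T2 coregistration, where the same tissue has different intensities in the two images.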
FIGURE 2.18 (a–e) Segmented brain parts from T1 image without preprocessing.
FIGURE 2.19 (a–e) Segmented brain parts from T2 image without preprocessing.
FIGURE 2.20 (a–e) Segmented brain parts from T1 image with preprocessing.
FIGURE 2.21 (a–e) Segmented brain parts from T2 image with preprocessing.
Image quality metrics3,4 are used to analyze the quality of the image. The
metrics are calculated for the images segmented before and after
preprocessing; only if the quality metrics are well maintained are the
images suitable for further analysis. MSE, PSNR, SNR, EPI, and SSIM
are calculated for the brain parts segmented with and without preprocessing
for the T1 and T2 images.
The results are plotted as a graph as shown in Figure 2.22 (a–j), from
which we can understand the effect of preprocessing. From the above images,
we know that if we preprocess the MRI brain T1 and T2 images before
segmentation, we can reduce the MSE and increase the PSNR, SNR, and
SSIM, which are very important in the analysis of medical images. The EPI
value is also maintained well. For example, the MSE for white matter in T1
is 653.2879 without preprocessing and 609.7175 with preprocessing. PSNR
for gray matter in T1 is 20.1991 without preprocessing and 20.3984 with
preprocessing. SNR for white matter in T1 is 4.3804 without preprocessing
and 8.5689 with preprocessing. EPI for gray matter in T1 is 0.7253 without
preprocessing and 0.6864 with preprocessing. The SSIM for white matter in
T1 is 0.9784 without preprocessing and 0.9913 with preprocessing.
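The first three metrics have simple closed forms; a minimal numpy sketch using the standard formulas, with an assumed 8-bit peak value and synthetic stand-in images (EPI and SSIM need neighborhood computations, available for example as scikit-image's structural_similarity, and are omitted here):

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference and a test image."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return float(np.mean((ref - test) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, for an assumed peak intensity."""
    return 10.0 * np.log10(peak ** 2 / mse(ref, test))

def snr(ref, test):
    """Signal-to-noise ratio in dB: reference power over error power."""
    return 10.0 * np.log10(np.mean(np.asarray(ref, float) ** 2)
                           / mse(ref, test))

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in image
noisy = clean + rng.normal(0, 10, clean.shape)         # sigma=10, MSE near 100

print(round(mse(clean, noisy), 1))    # close to the noise variance, ~100
print(round(psnr(clean, noisy), 1))   # roughly 28 dB for these settings
```

Lower MSE and higher PSNR/SNR after preprocessing, as reported above for the white- and gray-matter segments, indicate that the segmented image stays closer to the reference.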
FIGURE 2.22 (a–j) Quality metrics comparison for T1 and T2 images with and without
preprocessing.
2.10 CONCLUSIONS
In this chapter, T1 and T2 MRI brain images are segmented before and after
preprocessing. Realignment, coregistration, normalization, and smoothing
are the spatial preprocessing tools applied. With and without preprocessing,
MSE, PSNR, SNR, EPI, and SSIM values are calculated for the segmented
brain parts, and the results are analyzed. From the results, we can conclude
that preprocessing is very important before the segmentation of the brain
MRI.
KEYWORDS
• preprocessing
• segmentation
• MSE
• SNR
• PSNR
• EPI
• SSIM
REFERENCES
1. Gonzalez, R. C.; Woods, R. E. Digital Image Processing, 3rd ed.; Pearson: South
Asia, 2014.
2. Sridhar, S. Digital Image Processing; Oxford: India, 2011.
3. Najarian, K.; Splinter, R. Biomedical Signal and Image Processing; Taylor & Francis:
New York, 2006.
4. Demirkaya, O.; Asyali, M. H.; Sahoo, P. K. Image Processing with MATLAB®
Applications in Medicine and Biology; CRC Press: New York, 2009.
5. Semmlow, J. L. Biosignal and Medical Image Processing; CRC Press: New York, 2009.
6. Josien, P. W. P.; Antoine Maintz, J. B.; Viergever, M. A. Mutual-Information-Based
Registration of Medical Images: A Survey. IEEE Trans. Med. Imaging 2003, 22 (8),
986–1004.
7. Eklund, A.; Nichols, T.; Andersson, M.; Knutsson, H. Empirically Investigating the
Statistical Validity of SPM, FSL and AFNI for Single Subject FMRI Analysis. IEEE
2015, 1376–1380.
8. Muro, A.; Zapirain, B. G.; Méndez, A.; Ruiz, I. FMRI Processing Tool for the Analysis,
Parametrisation and Comparison of Preprocessed SPM Images. In 18th European Signal
Processing Conference, 2010; pp 1335–1339.
CHAPTER 3
ABSTRACT
3.1 INTRODUCTION
3.2.1 DATASETS
Real-time datasets from seven different patients are considered for this
analysis. Among the seven, four are babies and three are adults. The
real-time data, which are in DICOM format, are converted to Nifti format
using SPM-12, a neuroimaging software package. Real-time datasets at the
following ages are considered for volume analysis:
1. 6-day-old baby
2. 2-month-old baby 1
3. 2-month-old baby 2
4. 2 months, 20 days old baby
5. 42-year old adult
6. 46 years, 7 months 13 days old adult
7. 47-year-old adult
The surface area of the brain is around 233–310 square inches (1500–2000
cm2). To fit this surface area into the skull, the cortex is folded, forming
sulci (grooves) and gyri (folds). The cerebral cortex, or cortical
surface, is divided into four major lobes: the frontal lobe, parietal
lobe, occipital lobe, and temporal lobe. Each lobe has a different function.
The brain is divided into left and right halves by the interhemispheric
fissure, a large groove. The corpus callosum facilitates communication
between these two halves. The right and left temporal lobes also communicate
through the anterior commissure, a fiber tract. The cortical area above the
corpus callosum is divided by a groove called the cingulate sulcus, and the
area between this groove and the corpus callosum is called the cingulate
gyrus.
Some important areas of the brain and their functions:
• Parietal lobe—it will receive and respond to somatosensory input
(pain and touch).
• Frontal lobe—it is involved in motor skills (including speech) and
cognitive functions.
• Occipital lobe—it receives and processes visual information directly
from the eyes.
Comparative Volume Analysis of Pediatric Brain 43
Skull stripping is the first step in many MRI analysis pipelines.9
In T1 images, the edges between the skull and brain regions are well defined.
The brain surface extractor (BSE) is used to extract the brain region in
BrainSuite 19a. It performs the following steps for brain surface extraction:
anisotropic diffusion, edge detection, and morphological operations.
Anisotropic diffusion smooths lower-contrast edges while retaining the
higher-contrast edges. For edge detection, BSE uses the Marr–Hildreth
operator, which applies a Gaussian blur followed by a Laplacian operator.
In the morphological operations, erosion is applied first and then dilation;
these eliminate the noise-related connections between the scalp and the
brain.
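The morphological step can be sketched with plain numpy. This is an illustrative toy with assumed shapes and a 3×3 cross structuring element, not the BSE implementation; it shows how an opening (erosion, then dilation) removes a thin noise bridge while leaving the large region intact:

```python
import numpy as np

def erode(m):
    """Binary erosion with a 3x3 cross structuring element."""
    return (m & np.roll(m, 1, 0) & np.roll(m, -1, 0)
              & np.roll(m, 1, 1) & np.roll(m, -1, 1))

def dilate(m):
    """Binary dilation with a 3x3 cross structuring element."""
    return (m | np.roll(m, 1, 0) | np.roll(m, -1, 0)
              | np.roll(m, 1, 1) | np.roll(m, -1, 1))

mask = np.zeros((64, 64), bool)
mask[16:48, 16:48] = True          # large "brain" region
mask[31, 48:60] = True             # one-pixel bridge toward the "scalp"

opened = dilate(dilate(erode(erode(mask))))   # opening: erode, then dilate

print(mask[31, 50], opened[31, 50])   # True False: the bridge is removed
```

The one-pixel-wide bridge vanishes on the first erosion because none of its pixels have both vertical neighbors set, while the interior of the large region survives erosion and is restored by dilation.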
Next, bias field correction (BFC) is applied; BFC estimates the correction
field based on the tissue gain variation.
Once the brain region is extracted and classified, it is separated into the
cerebellum, cerebrum, and other structures to enable identification of the
cerebral cortex. To achieve this, volumetric registration of the MR image is
needed.
The wisp removal step removes segmentation errors from the brain extraction
process by eliminating thin, wispy structures. It decomposes the binary mask
into a graph and separates weakly connected components.
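A crude stand-in for this step (an assumed simplification of ours, not BrainSuite's graph-based algorithm) is to label the connected components of the binary mask and keep only those above a size threshold; small, weakly attached wisps are dropped:

```python
import numpy as np

def remove_small_components(mask, min_size):
    """Keep only 4-connected components of at least min_size pixels."""
    mask = mask.astype(bool)
    visited = np.zeros_like(mask)
    out = np.zeros_like(mask)
    rows, cols = mask.shape
    for r0, c0 in zip(*np.nonzero(mask)):
        if visited[r0, c0]:
            continue
        stack, comp = [(r0, c0)], []         # flood-fill one component
        visited[r0, c0] = True
        while stack:
            r, c = stack.pop()
            comp.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols \
                        and mask[rr, cc] and not visited[rr, cc]:
                    visited[rr, cc] = True
                    stack.append((rr, cc))
        if len(comp) >= min_size:            # keep only large components
            for r, c in comp:
                out[r, c] = True
    return out

mask = np.zeros((32, 32), bool)
mask[4:20, 4:20] = True        # main "brain" component (256 pixels)
mask[25, 2:7] = True           # thin wispy structure (5 pixels)

cleaned = remove_small_components(mask, min_size=50)
print(int(cleaned.sum()))      # 256: the wisp is gone
```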
present in the image. SvReg is very helpful for regional analysis. It uses
a series of multistep registration and refinement processes. We use the
BrainSuite atlas1 for SvReg; it is based on the Colin 27 atlas, an average
of 27 scans of a single individual.
FIGURE 3.3 SvReg output of a 6-day-old baby: (a) front view and (b) back view.
FIGURE 3.4 SvReg output of a 2-month-old baby 1: (a) front view and (b) back view.
FIGURE 3.5 SvReg output of a 2-month-old baby 2: (a) front view and (b) back view.
FIGURE 3.6 SvReg output of a 2-month-20-day-old baby: (a) front view and (b) back
view.
FIGURE 3.7 SvReg output of a 42-year-old adult: (a) front view and (b) back view.
FIGURE 3.8 SvReg output of a 46-year, 7-month, 13-day-old adult: (a) front view and (b)
back view.
FIGURE 3.9 SvReg output of a 47-year-old adult: (a) front view and (b) back view.
The outputs of cortical surface extraction for an adult and a baby are
given in Figures 3.1 and 3.2; compared with the baby, the adult surface
shows additional folds. The SvReg outputs for the different datasets are
given in Figures 3.3–3.9. Different colors in the images indicate different
regions of the brain, and the volumes corresponding to these regions are
given in Table 3.1.
TABLE 3.1 Volumes Corresponding to Different Brain Regions for Different Datasets.

Sl. no | Brain parts | 6 days baby | 2 months baby 1 | 2 months baby 2 | 2 months 20 days baby | 46 years, 7 months 13 days adult | 42 years adult | 47 years adult
1 | Background | 2099.84814 | 3589.632 | 3745.7 | 2766.4321 | 2622.3 | 2784.9 | 2601.41
2 | R. caudate nucleus | 7.79200029 | 1.568 | 0.648 | 10.72 | 3.176 | 7.416 | 2.304
3 | L. caudate nucleus | 13.3520002 | 1.6 | 1.352 | 10.68 | 1.088 | 1.528 | 5.808
4 | R. putamen | 7.46400023 | 4.648 | 2.992 | 6.1040001 | 2.896 | 3.688 | 4.736
5 | L. putamen | 5.83200026 | 3.528 | 2.616 | 4.48 | 2.904 | 2.416 | 4.96
6 | R. globus pallidus | 9.54400063 | 3.2 | 4.12 | 6.8480005 | 1.192 | 2.464 | 3.056
7 | L. globus pallidus | 8.03200054 | 2.192 | 4.16 | 5 | 1.2 | 1.528 | 3.384
8 | R. nucleus accumbens | 0.712000012 | 0.248 | 0.272 | 0.288 | 0.224 | 0.112 | 0.16
9 | L. nucleus accumbens | 0.624000013 | 0.28 | 0.192 | 0.184 | 0.128 | 0.192 | 0.344
In Table 3.1, “wm” represents white matter regions and “gm” represents
gray matter regions. From the table, we notice that the left mamillary
body, right parahippocampal gyrus (gm), and left parahippocampal
gyrus (gm) are not present in the babies. The left ventricular system, right
lateral orbitofrontal gyrus, right middle orbitofrontal gyrus (wm), left
anterior orbitofrontal gyrus (wm), right pars orbitalis (wm), and left pars
orbitalis (wm) are not present in most of the babies and adults. The right
mamillary body, right lateral ventricle, right pars orbitalis (gm), left pars
orbitalis (gm), right transverse frontal gyrus (gm), left transverse frontal
gyrus (gm), right middle orbitofrontal gyrus (gm), right anterior
orbitofrontal gyrus (gm), left anterior orbitofrontal gyrus (gm), right
posterior orbitofrontal gyrus (gm), right paracentral lobule (gm), right
temporal pole (gm), left temporal pole (gm), right fusiform gyrus (gm), left
fusiform gyrus (gm), left transverse frontal gyrus (wm), left lateral
orbitofrontal gyrus (wm), right cingulate gyrus (wm), right temporal pole
(wm), left temporal pole (wm), and right parahippocampal gyrus (wm) are
missing in some babies.
3.4 CONCLUSION
KEYWORDS
• segmentation
• pediatric brain
• atlas-based method
• volume
• MRI
REFERENCES
1. Gonzalez, R. C.; Woods, R. E. Digital Image Processing, 3rd ed.; Pearson: South
Asia, 2014.
2. Sridhar, S. Digital Image Processing; Oxford, India, 2011.
3. Najarian, K.; Splinter, R. Biomedical Signal and Image Processing; Taylor & Francis:
New York, 2006.
4. Demirkaya, O.; Asyali, M. H.; Sahoo, P. K. Image Processing with MATLAB®
Applications in Medicine and Biology; CRC Press: New York, 2009.
5. Semmlow, J. L. Biosignal and Medical Image Processing; CRC Press: New York, 2009.
6. Gousias, I. S.; Hammers, A.; Heckemann, R. A.; Counsell, S. J.; Dyet, L.
E.; Boardman, J. P.; David Edwards, A. Atlas Selection Strategy for Automatic
Segmentation of Pediatric Brain MRIs into 83 ROIs. IEEE, 2010.
7. Klimont, M.; Flieger, M.; Rzeszutek, J.; Stachera, J.; Zakrzewska, A.; Jończyk-
Potoczna, K. Automated Ventricular System Segmentation in Paediatric Patients Treated
for Hydrocephalus Using Deep Learning Methods. BioMed Res. Int. 2019, 1–9.
8. Makropoulos, A.; Gousias, I. S.; Ledig, C.; Aljabar, P.; Serag, A.; Hajnal, J. V.; David
Edwards, A.; Counsell, S.J.; Rueckert, D. Automatic Whole Brain MRI Segmentation
of the Developing Neonatal Brain. IEEE Trans. Med. Imaging 2014, 33 (9), 1818–1831.
9. Devi, C. N.; Sundararaman, V. K.; Chandrasekharan, A.; Alex, Z. A. Automatic Brain
Segmentation in Pediatric MRI Without the Use of Atlas Priors. IEEE WiSPNET 2016
Conference, 2016; pp 1481–1484.
10. Gousias, I. S.; Hammers, A.; Counsell, S. J.; David Edwards, A.; Rueckert, D. Automatic
Segmentation of Pediatric Brain MRIs Using a Maximum Probability Pediatric Atlas.
IEEE 2012.
11. Gao, Y.; Zhang, M.; Grewen, K.; Thomas Fletcher, P.; Gerig, G. Image Registration and
Segmentation in Longitudinal MRI Using Temporal Appearance Modeling. IEEE 2016,
629–632.
12. Levman, J.; Vasung, L.; MacDonald, P.; Rowley, S.; Stewart, N.; Lim, A.; Ewenson, B.;
Galaburda, A. Regional Volumetric Abnormalities in Pediatric Autism Revealed
by Structural Magnetic Resonance Imaging. Int. J. Dev. Neurosci. 2018, 34–45.
13. Kasiri, K.; Dehghani, M. J.; Kazemi, K.; Helfroush, M. S.; Kafshgari, S. Comparison
Evaluation of Three Brain MRI Segmentation Methods in Software Tools. 17th Iran.
Conf. Biomed. Eng. 2010.
14. Gibson, E.; Young, M.; Sarunic, M. V.; Beg, M. F. Optic Nerve Head Registration Via
Hemispherical Surface and Volume Registration. IEEE Trans. Biomed. Eng. 2010, 57
(10), 2592–2595.
CHAPTER 4
COMPARISON OF REGION OF
INTEREST AND CORTICAL AREA
THICKNESS OF SEIZURE AND
HEMOSIDERIN-AFFECTED BRAIN
IMAGES
VAJIRAM JAYANTHI1, SIVAKUMAR SHANMUGASUNDARAM1, and
C. VIJAYAKUMARAN2
1SENSE, Vellore Institute of Technology, Chennai, India
2Department of CSE, SRM Institute of Science and Technology,
Kattankulathur, Chennai, India
ABSTRACT
4.1 INTRODUCTION
4.2.3 REGISTRATION
The registration process overlays two or more images of the same scene
acquired by different sensors or imaging equipment, at different angles or
times, so that they are geometrically aligned for analysis (Zitová and
Flusser, 2003); it transforms the different sets of data into one coordinate
system. Applications include the military’s automatic target recognition,
medical imaging, and the analysis of image data obtained from satellites.
Registering MRI, CT, SPECT, or PET enables the combination of data from
multiple modalities to obtain complete information about the patient. It
helps to facilitate treatment verification, improve interventions, monitor
tumor growth,15 or compare patient data to anatomical atlases.
4.2.4 REGRESSION
Noise reduction and blurring of images are performed with spatial smoothing
filters. Blurring is a preprocessing step for removing noise: smoothing
filters reduce image noise, while sharpening filters enhance the edges of
image structures, both implemented as spatial-domain operations on the image.
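A minimal sketch of such spatial-domain smoothing; the image size, noise level, and 3×3 mean kernel are assumed illustration values:

```python
import numpy as np

def box_smooth(img):
    """3x3 mean filter implemented with shifted copies of the image."""
    acc = np.zeros_like(img, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            acc += np.roll(np.roll(img, dr, axis=0), dc, axis=1)
    return acc / 9.0

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 100.0                 # bright square on dark background
noisy = clean + rng.normal(0, 15, clean.shape)
smoothed = box_smooth(noisy)

# Averaging 9 independent pixels cuts the noise standard deviation in a
# flat region by about a factor of 3 (sqrt(9)).
print(noisy[2:20, 2:20].std() > 2 * smoothed[2:20, 2:20].std())   # True
```

The trade-off noted in the text is visible here: the same averaging that suppresses noise also blurs the square's edges, which is why edge-preserving filters are preferred when boundaries matter.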
The software tool analysis is covered by the Brainsuite and its step-by-step
procedure of inhomogeneity correction, brain extraction-registration, tissue
segmentation, intensity normalization, and cortical surface extraction. After
that, surface volume registration, region of interest (ROI), mean thickness area,
and cortical surface area thickness were calculated as shown in Figure 4.2.
The parameters of the fMRI WRE CON images of T1AXPOST and hemosiderin are
preprocessed with the BrainSuite tool. The tool includes skull and scalp
removal, image nonuniformity compensation, voxel-based tissue classification,
topological correction, rendering, and editing functions. The purpose of
preprocessing after image acquisition is to eliminate unwanted information
or noise from the images without losing vital information. These variations
carry through to the subsequent image analysis steps. The analysis of tissue
types, pathological regions, and anatomical structures is carried out by the
segmentation process. The first task is to find the ROI, eliminating unwanted
regions from processing, and the second is to segment the disease.
Boundary estimation, classification, and categorization of disease are done
by segmentation to maintain the accuracy and sensitivity of lesion detection
(Guo and Şengür, 2013). Feature selection is used to reduce the
false-detection rate and improve diagnosis after the disease has been
segmented. The suspicious areas can be categorized as benign or malignant on
the basis of the selected features using different classification techniques.
Distinct histopathological components and malignant nodules with vague
boundaries are often fused with adjoining tissues, which complicates the
delineation of tissue.17 Improving accuracy, reducing misdiagnosis, and
achieving earlier and more correct diagnosis all require a fully computerized
system (Nguyen et al., 2019).
The tool is designed to produce cortical representations; each stage of
the cortical surface identification process was captured, and the ROI, mean
area thickness, and cortical area thickness were extracted. The present
results are compared with normal brain image values for further diagnosis.
The average cortical thickness of the full brain is around 2.5–3 mm;
individual brains vary from about 2 mm at their thinnest, in the calcarine
cortex, up to 4 mm and more in the thicker regions of the precentral gyrus,
superior temporal lobes, and superior frontal lobes (Zilles, 1990). The
cerebral cortex is a highly folded neuron sheet whose thickness lies between
1 and 4.5 mm, with an average of approximately 2.5 mm; the measured values
are compared against this range to detect abnormalities.
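That comparison can be sketched as a simple range check. The normal range is the one quoted above (roughly 1–4.5 mm), and the ROI names and values below are taken from Table 4.1 purely for illustration; clinical interpretation obviously needs more than a threshold:

```python
NORMAL_RANGE_MM = (1.0, 4.5)   # normal cortical thickness range quoted above

def flag_abnormal(thickness_by_roi,
                  lo=NORMAL_RANGE_MM[0], hi=NORMAL_RANGE_MM[1]):
    """Return ROIs whose mean thickness lies outside [lo, hi] mm."""
    return {roi: t for roi, t in thickness_by_roi.items()
            if not lo <= t <= hi}

# A few mean-thickness values (mm) from Table 4.1 (seizure images).
measurements = {
    "R. pars orbitalis": 2.367167,
    "L. pars orbitalis": 6.727943,
    "R. precentral gyrus": 0.153721,
}
print(flag_abnormal(measurements))
# {'L. pars orbitalis': 6.727943, 'R. precentral gyrus': 0.153721}
```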
The tool is also used for surface and volume registration, diffusion tensor
fitting, and structural coregistration. Brain image analysis is used to
extract the label, volume, and cortical surface anatomy, and diffusion data
are coregistered to structural images to allow modeling of labeled brain
anatomy. Functional magnetic resonance imaging (fMRI) measures the small
changes in brain activity that occur due to blood flow. It can detect
abnormalities in the brain that cannot be found with other imaging processes.
Hemosiderin is an iron-storage complex composed of partially digested
ferritin within lysosomes. The breakdown of heme gives rise to biliverdin
and iron; the body then traps the released iron and stores it as hemosiderin
in tissues. Hemosiderin (golden-brown, iron-positive granules) is usually
found in macrophages of the red pulp and, to a lesser degree, in the marginal
zone. The preprocessing steps for hemosiderin are shown in Figures 4.3 and
4.4, and the preprocessing methods and extraction of surface area and volume
are shown in Figure 4.5.
TABLE 4.1 Region of Interest, Mean Thickness, and Cortical Area Calculated for Various
Brain Parts of fMRI Brain Seizure Images (Brain T1 AX POST) and fMRI Brain Images
(Hemosiderin).

ROI_ID | Brain parts | Mean thickness (mm), seizure (T1 AX POST) | Cortical area (mm2), seizure (T1 AX POST) | Cortical area (mm2), hemosiderin
120 | R. superior frontal gyrus | 0.610544 | 657.278228 | 3510.586298
121 | L. superior frontal gyrus | 1.304945 | 1655.557214 | 6183.092200
130 | R. middle frontal gyrus | 1.153275 | 337.825409 | 4008.851409
131 | L. middle frontal gyrus | 0.869990 | 718.740569 | 12,975.890310
142 | R. pars opercularis | 1.088048 | 15.596868 | 1286.285113
143 | L. pars opercularis | 0.585431 | 443.580352 | 1149.394662
144 | R. pars triangularis | 1.355878 | 142.183448 | 11,051.325879
145 | L. pars triangularis | 1.600054 | 927.154561 | 1144.524778
146 | R. pars orbitalis | 2.367167 | 1.241634 | 126.537966
147 | L. pars orbitalis | 6.727943 | 77.504858 | 184.713848
150 | R. precentral gyrus | 0.153721 | 66.350107 | 5353.633353
151 | L. precentral gyrus | 0.355621 | 1141.159916 | 5750.413625
162 | R. transverse frontal gyrus | 3.707373 | 0.000000 | 84.850144
163 | L. transverse frontal gyrus | 4.049633 | 136.511933 | 139.778236
164 | R. gyrus rectus | 0.000074 | 3.663462 | 557.282092
165 | L. gyrus rectus | 3.778476 | 346.149674 | 430.304697
166 | R. middle orbitofrontal gyrus | 6.180444 | 2.469826 | 216.428357
167 | L. middle orbitofrontal gyrus | 0.997979 | 63.335274 | 156.230785
168 | R. anterior orbitofrontal gyrus | 4.456243 | 4.495258 | 153.023251
169 | L. anterior orbitofrontal gyrus | 0.989992 | 39.046137 | 169.868460
170 | R. posterior orbitofrontal gyrus | 5.490447 | 0.764592 | 559.311023
171 | L. posterior orbitofrontal gyrus | 1.493429 | 108.787511 | 436.082553
172 | R. lateral orbitofrontal gyrus | 6.009783 | 3.741775 | 148.448843
Table 4.1 shows the various regions of the brain and their mean thickness
and cortical area in both the seizure-affected brain and the hemosiderin
brain. The values are compared with the brain-area values of normal, healthy
people, and the severity of disease is assessed for further diagnosis and
clinical analysis aimed at treatment.
The medical care system follows various methods to detect and analyze
epileptic seizures. To identify the seizure type and etiology, diagnostic tools
like electroencephalogram (EEG), magnetic resonance imaging (MRI),
magnetoencephalogram (MEG), single photon emission computed tomography
(SPECT), neuropsychiatric testing, and positron emission tomography
(PET) are used. For identifying specific seizure types, the EEG is critical.
For newly diagnosed patients, a CT scan will be used, but always an MRI
is preferred for the brain analysis. It can help determine the proper seizure
type and syndrome. MRI may locate brain lesions like scars or anatomic
defects that are not detected by CT scans or conventional radiographs.
All data are fed into one of the analysis techniques, statistical
parametric mapping (SPM), MEG, or Curry analysis software, to detect the
stages of seizures and where they start, and medical practitioners then
advise on the need for either medication or surgery. Machine
learning algorithms are used to detect and analyze datasets by using different
learning, classifier, and statistical measurement methods. Recent work is
focused on fMRI to find the correlation between epileptic seizures and
cerebral hemodynamic changes.18 Seizures can be detected through observation
of the brain, heart rate, oxygen level, muscle activities, and artificial
sounds or visual signatures like MRI, motion, and audio or video recording
of the head and body of the person.19 Measuring asymmetric interactions
in the resting state of brain networks is discussed in Ref. [20]; correcting
inhomogeneity-induced distortion in fMRI using nonrigid registration in
Ref. [21]; the linear spherical deconvolution and model-free linear
transform methods for diffusion MRI in Ref. [22]; and the identification of partial
correlation-based networks compared with cortical thickness data in Ref.
[23]. The Gaussian models are used to compare nonrigid image
registration.24 The automatic cortical surface registration and labeling are analyzed
in Ref. [25]. The geodesic curvature flow on surfaces for automatic sulcal
delineation is compared with the cortical surface area.26 An invariant shape
representation using the anisotropic Helmholtz equation for the human brain
is shown in Ref. [27]. The Fourier 2-sphere linear transforms of diffusion
MRI are compared with different regions of the cortical surface.28 Correcting
the susceptibility-induced distortion in diffusion-weighted MRI using
constrained nonrigid registration is also discussed in Ref. [29].
4.5 CONCLUSIONS
KEYWORDS
• seizure
• hemosiderin
• fMRI (functional magnetic resonance image)
• software tool
• brain
REFERENCES
1. Kondziolka, D.; Bernstein, M.; Resch, L.; Tator, C. H.; Fleming, J. F.; Vanderlinden,
R. G.; Schutz, H. Significance of Hemorrhage into Brain Tumors: Clinicopathological
Study. J. Neurosurg. 1987, 67, 852–857.
2. Rosen, A. D.; Frumin, N. V. Focal Epileptogenesis After Intracortical Hemoglobin
Injection. Exp. Neurol. 1979, 66, 277–284.
3. Ueda, Y.; Willmore, L. J.; Triggs, W. Amygdalar Injection of FeCl3 Causes Spontaneous
Recurrent Seizures. Exp. Neurol. 1998, 153, 123–127.
4. Moran, N. F.; Fish, D. R.; Kitchen, N.; Shorvon, S.; Kendall, B. E.; Stevens, J. M.
Supratentorial Cavernous Haemangiomas and Epilepsy: A Review of the Literature and
Case Series. J. Neurol. Neurosurg. Psychiatry 1999, 66, 561–568.
5. van Breemen, M. S.; Wilms, E. B.; Vecht, C. J. Epilepsy in Patients with Brain Tumours:
Epidemiology, Mechanisms, and Management. Lancet Neurol. 2007, 6, 421–430.
6. Kucukkaya, B.; Aker, R.; Yuksel, M.; Onat, F.; Yalcin, A. S. Low Dose MK-801 Protects
Against Iron-Induced Oxidative Changes in a Rat Model of Focal Epilepsy. Brain Res.
1998, 788, 133–136.
7. Robinson, R. J.; Bhuta, S. Susceptibility-Weighted Imaging of the Brain: Current Utility
and Potential Applications. J. Neuroimaging. 2011, 21 (4), e189–e204.
8. Park, M. J.; Kim, H. S.; Jahng, G. H.; Ryu, C. W.; Park, S. M.; Kim, S. Y.
Semi-Quantitative Assessment of Intra-Tumoral Susceptibility Signals Using Non-Contrast-
Enhanced High-Field High-Resolution Susceptibility-Weighted Imaging in Patients
with Gliomas: Comparison with MR Perfusion Imaging. AJNR Am. J. Neuroradiol.
2009, 30, 1402–1408.
9. Wen, P. Y.; Macdonald, D. R.; Reardon, D. A. Updated Response Assessment Criteria
for High-Grade Gliomas: Response Assessment in Neuro-Oncology Working Group. J.
Clin. Oncol. 2010, 28, 1963–1972.
10. Sato, J. R.; Hoexter, M. Q.; Fujita, A.; Rohde, L. A. Evaluation of Pattern Recognition
and Feature Extraction Methods in ADHD Prediction. Front. Syst. Neurosci. 2012.
11. Shinohara, R. T. et al. Statistical Normalization Techniques for Magnetic Resonance
Imaging. NeuroImage: Clin. 2014, 6, 9–19.
12. Godenschweger, F.; Kägebein, U.; Stucht, D.; Yarach, U.; Sciarra, A.; Yakupov, R.;
Lüsebrink, F. et al. Motion Correction in MRI of the Brain. Phys. Med. Biol. 2016, 61
(5), R32–R56.
13. Parker, D. B. et al. The Benefit of Slice Timing Correction in Common fMRI
Preprocessing Pipelines. Front. Neurosci. 2019, 13, 821–830.
14. Fortin, J.-P. et al. Removing Inter-Subject Technical Variability in Magnetic Resonance
Imaging Studies. NeuroImage 2016, 132, 198–212.
15. Zitova, B. In Encyclopedia of Biomedical Engineering, Acquisition Methods, Methods
and Modeling 2019.
16. Santhanam, V. et al. Generalized Deep Image to Image Regression, 2017.
17. Koundal, D.; Sharma, B. In Neutrosophic Set in Medical Image Analysis, 2019.
18. Fergus, P.; Hussain, A.; Hignett, D.; Al-Jumeily, D.; Khaled, A-A.; Hani, H. A
Machine Learning System for Automated Whole-Brain Seizure Detection. Appl.
Comput. Inf. 2015.
19. Fergus, P.; Hignett, D.; Hussain, A.; Al-Jumeily, D.; Khaled, A-A. Automatic Epileptic
Seizure Detection Using Scalp EEG and Advanced Artificial Intelligence Techniques.
BioMed Res. Int. 2015, 1–17.
20. Joshi, A. A.; Salloum, R.; Bhushan, C.; Leahy, R. M. Measuring Asymmetric Interactions
in Resting State Brain Networks. Inf. Process Med. Imaging 2015, 24, 399–410.
21. Chambers, M. C.; Bhushan, C.; Haldar, J. P.; Leahy, R. M.; Shattuck, D. W. Correcting
Inhomogeneity-Induced Distortion in FMRI Using Non-Rigid Registration. Proc. IEEE
Int. Symp. Biomed. Imaging 2015, 1364–1367.
22. Haldar, J. P.; Leahy, R. M. The Equivalence of Linear Spherical Deconvolution and
Model-Free Linear Transform Methods for Diffusion MRI. IEEE 10th Int. Symp.
Biomed. Imaging (ISBI), 2013, 508–511.
23. Wheland, D.; Joshi, A.; McMahon, K.; Hansell, N.; Martin, N.; Wright, M.; Thompson,
P.; Shattuck, D.; Leahy, R. R. Identification of Partial-Correlation Based Networks
with Applications to Cortical Thickness Data. In 9th IEEE International Symposium on
Biomedical Imaging (ISBI), 2012; pp 2–5.
24. Somayajula, S.; Joshi, A. A.; Leahy, R. M. Non-Rigid Image Registration Using Gaussian
Mixture Mode. In Biomedical Image Registration. Springer: Berlin Heidelberg, 2012;
pp 286–295.
25. Joshi, A. A.; Shattuck, D. W.; Leahy, R. M. A Method for Automated Cortical Surface
Registration and Labeling. Biomed. Image Regist. Proc. 2012, 7359, 180–189.
26. Joshi, A. A.; Shattuck, D. W.; Damasio, H.; Leahy, R. M. Geodesic Curvature Flow on
Surfaces for Automatic Sulcal Delineation. In 9th IEEE International Symposium on
Biomedical Imaging (ISBI), 2012.
27. Ashrafulla, S.; Shattuck, D. W.; Damasio, H.; Leahy, R. M. An Invariant Shape
Representation Using the Anisotropic Helmholtz Equation. Med. Image Comput.
Comput. Assist. Interv. 2012, 15 (3), 607–614.
28. Haldar, J. P.; Leahy, R. M. New Linear Transforms for Data on a Fourier 2-Sphere with
Application to Diffusion MRI. In 9th IEEE International Symposium on Biomedical
Imaging (ISBI), 2012; pp 402–405.
29. Bhushan, C.; Haldar, J. P.; Joshi, A. A.; Leahy, R. M. Correcting Induced Distortion
in Diffusion-Weighted MRI Using Constrained Non-Rigid Registration. In Signal &
Information Processing Association Annual Summit and Conference (APSIPA ASC),
2012; pp 1–19.
CHAPTER 5
ABSTRACT
Atrial fibrillation (AF) is a common risk factor in the evaluation of
coronary and heart disease. Deep learning is an active area of interest in
medical applications involving the classification of heartbeats from ECG
signals, and the ECG can be employed for the detection of AF. This chapter
describes the construction of a classifier for detecting atrial fibrillation
in ECG signals using deep neural networks. Convolutional (CNN) and recurrent
(RNN) neural network architectures are commonly employed for the
classification of heartbeats. The method proposed here employs long
short-term memory (LSTM) networks and analysis in both time and frequency.
If raw signals are used to train the LSTM network, the classification
accuracy obtained is
5.1 INTRODUCTION
FIGURE 5.2 Histogram plot of ECG signal lengths (about 9000 samples).
FIGURE 5.3 Normal signal plot—P wave and QRS complex observation.
Design and Analysis of Classifier for Atrial Fibrillation 81
FIGURE 5.9 Confusion matrix chart visualizing the improved training and testing
accuracy.
5.3 CONCLUSION
KEYWORDS
• atrial fibrillation
• deep learning
• heartbeat
• dataset
• ECG
• heart disease
REFERENCES
ABSTRACT
train the regression model, and then signals transformed using the short-time Fourier transform (STFT) were employed. The proposed STFT-based model improves overall performance, especially at lower SNR values. The mean square error (MSE) between the actual EEG and the denoised EEG signal is used as the performance metric. The MSE between the actual and noisy EEG signals is also calculated, to indicate the worst-case MSE when no denoising is applied. The results were simulated in MATLAB, and they indicate that a large improvement in performance can be achieved by using STFT sequences even at the worst SNR values.
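The MSE metric described above is straightforward to compute; a minimal sketch, with a synthetic sine wave standing in for a clean EEG segment:

```python
import numpy as np

def mse(reference, estimate):
    """Mean square error between two equal-length signals."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return float(np.mean((reference - estimate) ** 2))

rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 10 * np.linspace(0, 1, 512))  # stand-in "EEG"
noisy = clean + 0.3 * rng.standard_normal(512)           # EEG + artifact

baseline = mse(clean, noisy)  # worst-case MSE with no denoising applied
print(round(baseline, 3))
```

A denoiser's MSE against `clean` can then be compared with `baseline` to quantify the improvement.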
6.1 INTRODUCTION
FIGURE 6.1 (A) Time segment extracted; (B) signal tapered; (C) FFT on tapered signal;
(D) FFT into time-frequency domain.
90 Computational Imaging and Analytics in Biomedical Engineering
Figure 6.2 shows the block diagram of the proposed approach for removing EOG artifacts. It includes feature extraction using the STFT and reconstruction of the EOG artifact signal. The MATLAB transformSTFT helper function normalizes the input signal and then computes the STFT of the clean EEG data.
The value of λ can be varied to control the artifact power for a specific SNR. The data segment for the MATLAB implementation includes the following variables:
• EEG for a clean EEG segment
• EOG for an EOG segment
Design and Analysis of Efficient Short Time Fourier 91
FIGURE 6.3 Training plot of the clean and EEG with EOG artifact segments.
Figure 6.3 shows the training plot of the clean and EOG-contaminated EEG segments. The purpose of the proposed system is to train a network that yields denoised STFT representations from the STFT inputs of the corresponding noisy signals. Finally, the inverse STFT (ISTFT) is applied to recover the denoised signal, as shown in Figure 6.4.
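The STFT → denoise → ISTFT chain described above relies on the transform being invertible. A deliberately simplified sketch (non-overlapping rectangular windows, unlike the windowing of the chapter's MATLAB transformSTFT helper) shows the round trip:

```python
import numpy as np

def stft(x, nfft=64):
    """STFT with non-overlapping rectangular windows (a simplification;
    practical STFTs use overlapping tapered windows)."""
    frames = x[: len(x) // nfft * nfft].reshape(-1, nfft)
    return np.fft.rfft(frames, axis=1)  # shape: (n_frames, nfft // 2 + 1)

def istft(S, nfft=64):
    """Inverse of the simplified STFT above."""
    return np.fft.irfft(S, n=nfft, axis=1).reshape(-1)

rng = np.random.default_rng(2)
x = rng.standard_normal(1024)
S = stft(x)
# ... a denoising network would modify the coefficients in S here ...
x_rec = istft(S)
print(np.allclose(x, x_rec))  # True: the transform round-trips exactly
```

Because the round trip is lossless, any change in the recovered signal comes entirely from what the network does to the STFT coefficients.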
FIGURE 6.4 Recovering the denoised EEG STFT segment for deep learning regression.
FIGURE 6.6 Results of denoising noisy EEG signals at different SNRs.
FIGURE 6.7 Improved SNR performance plot of average MSE without denoising.
Figures 6.6 and 6.7 show the average MSE obtained without denoising and with denoising networks trained on raw input signals and on STFT-transformed signals, respectively. The performance of the system improves when the STFT is used, particularly at lower SNR values.
6.3 CONCLUSION
This paper explains how a deep network can be trained on EEG signals to perform regression for signal denoising, removing EOG artifacts using the feature-extraction model. Additionally, the two models trained with raw clean and noisy EEG signals are compared.
The other technique applied is the short-time Fourier transform, used to train, validate, and test the datastores via the MATLAB transformSTFT function. The complex features of the EEG signals are treated as independent real features in the MATLAB implementation. The simulation results make clear that using STFT-based sequences yields a larger performance gain at the worst SNR values, and that the two approaches converge in performance as the SNR improves.
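Treating complex STFT features as independent real features, as mentioned above, amounts to splitting each complex coefficient into its real and imaginary parts; a small numpy sketch (the array sizes here are arbitrary):

```python
import numpy as np

# A complex STFT matrix (frames x frequency bins); values are arbitrary.
rng = np.random.default_rng(3)
S = rng.standard_normal((16, 33)) + 1j * rng.standard_normal((16, 33))

# Treat each complex bin as two independent real-valued features,
# so a real-valued network can process the representation.
real_features = np.concatenate([S.real, S.imag], axis=1)
print(real_features.shape)  # (16, 66)

# The complex matrix is exactly recoverable, so no information is lost.
S_back = real_features[:, :33] + 1j * real_features[:, 33:]
print(np.array_equal(S, S_back))  # True
```

After the network produces denoised real features, the same reassembly yields complex coefficients for the ISTFT.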
KEYWORDS
• regression model
• short-time Fourier transform
• mean-squared error
• denoising
• deep learning networks
REFERENCES
1. Mashhadi, N.; et al. In Deep Learning Denoising for EOG Artifacts Removal from EEG
Signals, IEEE Global Humanitarian Technology Conference (GHTC), 2020.
2. Yang, B.; Duan, K.; Fan, C.; Hu, C.; Wang, J. Automatic Ocular Artifacts Removal in
EEG Using Deep Learning. Biomed. Signal Process. Control 2018, 43, 148–158. DOI:
10.1016/j.bspc.2018.02.021.
3. Zhang, H.; Zhao, M.; Wei, C.; Mantini, D.; Li, Z.; Liu, Q. A Benchmark Dataset for Deep
Learning Solutions of EEG Denoising, [Online] 2019. https://arxiv.org/abs/2009.11662
4. Gandhi, T.; Panigrahi, B. K.; Anand, S. A Comparative Study of Wavelet Families for
EEG Signals Classification. Neurocomputing 2011, 74 (17), 3051–3057.
5. Bruns, A. Fourier-, Hilbert- and Wavelet-Based Signal Analysis: Are They Really Different Approaches? J. Neurosci. Methods 2004, 137 (2), 321–332.
6. Zeng, H.; Song, A. Removal of EOG Artifacts from EEG Recordings Using Stationary
Subspace Analysis. Hindawi Publishing Corporation. Sci. World J. 2014, 2014.
7. Hussin, S. S.; Sudirman, R. R. EEG Interpretation through Short Time Fourier
Transform for Sensory Response Among Children. Australian J. Basic Appl. Sci. 2014,
8 (5), 417–422.
8. Mowla, Md. R.; Ng, S. C.; Zilany, M. S. A.; Paramesran, R. Artifacts-Matched Blind
Source Separation and Wavelet Transform for Multichannel EEG Denoising. Biomed.
Signal Process. Control. 2015, 22, 111–118. DOI: 10.1016/j.bspc.2015.06.009.
9. Nguyen, H. A. T.; et al. EOG Artifact Removal Using a Wavelet Neural Network.
Neurocomputing 2012, 97, 374–389. DOI: 10.1016/j.neucom.2012.04.016.
10. He, C.; Xing, J.; Li, J.; Yang, Q.; Wang, R. A New Wavelet Threshold Determination
Method Considering Interscale Correlation in Signal Denoising. Math. Probl. Eng.
2015, 2015, 280251. DOI: 10.1155/2015/280251.
11. Klados, M. A.; Bamidis, P. D. A Semi-Simulated EEG/EOG Dataset for the Comparison
of EOG Artifact Rejection Techniques. Data Br. 2016, 8, 1004–1006. DOI: 10.1016/j.
dib.2016.06.032.
12. Muthukumaran, D.; Sivakumar, M. Medical Image Registration: A Matlab Based
Approach. Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol. 2017, 2, 29–34.
13. Ahalya, S.; Umapathy, K.; Sivakumar, M. Image Segmentation–A MATLAB Based
Approach. Int. J. Eng. Res. Comput. Sci. Eng. 2017, 10–14, ISSN: 2394–2320.
14. Lehari, M. N. S. Umapathy, K.; Sivakumar, K. M. SVD Algorithm for Lossy Image
Compression. Int. J. Eng. Res. Comput. Sci. Eng. 2017, 44–48, ISSN: 2394–2320.
CHAPTER 7
ABSTRACT
7.1 INTRODUCTION
“It is thrilling to see what machine learning can do for the area of computer
science and engineering. As a subfield of artificial intelligence, it promotes
the extraction of meaningful patterns from instances, which is an essential
component of human intelligence.”1 To educate a computer system to think
like an expert, machine learning methods are essential.2 The goal of machine
learning research is to give computers the ability to learn on their own.3 For
example, a computer may collect patterns from data and then analyze them
on its own for autonomous reasoning in this domain.4 Medical imaging is
a fast-expanding field of study used to identify and treat diseases early. Digital image processing has a major impact on decision-making procedures: it improves accuracy and feature extraction. Functional evaluation is a complex process involving a wide range of attributes.5,6 There are several computer systems available that use digital image processing methods. Image processing technologies must be validated before they are used to implement specific operations that affect their performance. Medical imaging techniques support decision-making and action, with both basic and advanced functionality available for image analysis and visualization. Machine Learning (ML) and Deep Learning (DL) are two subdomains of Artificial Intelligence (AI),8 and both are methodologies used to achieve AI.9
Medical images go through a series of processing steps before they can be used to produce an output. An image first passes through machine learning and deep learning algorithms. To focus on a particular area, the image is divided into smaller segments, from which attributes can be extracted using information retrieval methods. Extraneous noise is then filtered out before the suitable features are selected. Finally, there are many ways to categorize the data and generate predictions from it. These stages are always followed in a machine learning experiment. The most prevalent types of machine learning algorithms are supervised, semisupervised, unsupervised, reinforcement, and active learning. Deep learning, in turn, is an advanced kind of machine learning that uses neural networks to categorize and predict more precisely.10,11
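The staged pipeline described above (segmentation, feature extraction, noise filtering, classification) can be sketched end to end. This toy version uses intensity thresholding, three hand-picked region attributes, and a nearest-centroid classifier; all are illustrative stand-ins rather than methods named by the chapter:

```python
import numpy as np

rng = np.random.default_rng(4)

def segment(image, threshold=0.5):
    """Crude intensity-threshold segmentation (illustrative only)."""
    return image > threshold

def extract_features(image, mask):
    """Simple region attributes: area fraction, mean and std intensity."""
    region = image[mask]
    if region.size == 0:
        return np.zeros(3)
    return np.array([mask.mean(), region.mean(), region.std()])

def nearest_centroid(train_X, train_y, x):
    """Classify by distance to the per-class mean feature vector."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(centroids - x, axis=1))]

# Synthetic "images": class 1 contains a bright blob, class 0 does not.
def make_image(bright):
    img = rng.uniform(0, 0.4, (32, 32))
    if bright:
        img[8:16, 8:16] = rng.uniform(0.7, 1.0, (8, 8))
    return img

train = [(make_image(label), label) for label in [0, 1] * 10]
X = np.stack([extract_features(im, segment(im)) for im, _ in train])
y = np.array([label for _, label in train])

test_img = make_image(1)
pred = nearest_centroid(X, y, extract_features(test_img, segment(test_img)))
print(pred)  # 1
```

Real systems would replace each stage (e.g., learned segmentation, texture descriptors, a CNN classifier), but the data flow between stages is the same.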
Machine Learning for Medical Images 97
FIGURE 7.1 Artificial intelligence, machine learning, and deep learning domains.
Medical imaging lets us identify and treat illnesses without any injury to the patient: we can observe what is going on within the body without surgery or other invasive procedures. Possibly, we have taken it for granted at one point or another. Medical imaging is one of the most powerful tools we have for caring for our patients efficiently, because it may be utilized for both diagnosis and treatment.
In terms of diagnosis, common imaging types include:
• CT (computed tomography)
• MRI (magnetic resonance imaging)
• Ultrasound
• X-ray
• Nuclear medicine imaging, including positron emission tomography (PET)
• Single photon emission computed tomography (SPECT)
The field of machine learning architecture has moved from the realm of possibility to the realm of evidence. An early machine learning strategy for identifying patterns established the framework for future large AI projects. A machine learning architecture comprises data acquisition, data processing, model engineering, exploration, and deployment; the main learning paradigms within it are supervised, unsupervised, and reinforcement learning.13
FIGURE 7.2 Block diagram of decision flow architecture for machine learning systems.
Rather than training on a labeled dataset, the system makes decisions on its own: it receives no labels that could be used to make predictions. Unsupervised learning can discover hidden patterns by learning features from the data. Clustering, an unsupervised learning technique, classifies inputs into distinct clusters that had previously gone unnoticed, creating groupings based on similarity.
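The clustering idea described above can be illustrated with a minimal k-means implementation (not a method prescribed by the chapter; the data and parameters are invented for the example):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.stack([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

# Two well-separated synthetic groups of 2-D feature vectors.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 0.2, (50, 2)),
               rng.normal(5.0, 0.2, (50, 2))])

labels, _ = kmeans(X, k=2)
# Each group ends up in a single cluster, distinct from the other group.
```

No labels were supplied; the groupings emerge purely from the similarity structure of the data.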
In active learning, training labels are obtained for only a small number of instances; the instances to label are chosen to get the most value from each label, for instance when labeling must fit within an organization's budget.
In biology, this technique is mostly used to study and predict the survival and death of organisms and their progeny. Such a model may be used to predict how fitness information can be used to correct the outcome.
When neural networks are used for learning from data and prediction, this is the most advanced level of machine learning. The system is built from various algorithms, which are used to construct systems that can solve problems and anticipate results. It uses an extensive graph with multiple processing layers, composed of numerous linear and nonlinear transformations.14
Several factors go into diagnosing an illness in today's medical environment. To diagnose patients accurately, doctors must conduct thorough examinations and assessments. Health care data include everything from medical assessments and patient comments to treatments, supplements, and prescription use. The healthcare business generates a great deal of data, and poor data management may produce spurious associations in these reports.15 This is the main worry. If these medical records are to be mined and processed effectively, the data must be restructured. Different machine learning methods may then be used to distribute data based on its properties, using classifiers tailored to the specific needs of each project. Multiple categories are available for sorting the data, and data from the medical field is examined using these kinds of classifiers. It was for recognizing medical data
sets that machine learning technologies were first developed and put to use. A wide range of options is now available for organizing and analyzing medical data using machine learning. Most new hospitals include data collection and inspection systems that are used to gather and share data. In the medical field, machine learning is utilized to verify and refine diagnoses. An algorithm needs accurate diagnoses to work from, and findings may be drawn from earlier cases that have already been solved. The underlying machine learning concept uses patterns in medical images to predict and draw inferences about the health of a patient.16,17
7.7 CONCLUSION
Since this technique was first conceived 50 years ago, machine learning
technology has advanced tremendously. Initially, the models were simplistic
and “brittle,” which meant that they could not handle any deviations from the
examples supplied in training. Due to the fast advancements in technology,
machine learning systems will soon be taking on tasks formerly reserved
for human beings. During the previous several years, advances in machine
learning have been made. Currently, machine learning algorithms are quite
resilient in actual circumstances, and the frameworks make the most of the
learning process. It was formerly used to describe the practice of medical
imaging, and it is predicted to increase quickly in the near future. The use
of machine learning in medical imaging has substantial implications for
medicine. It is imperative that our research leads to improving patient care.
Machine learning’s benefits must be taken seriously to make the most use of
them feasible.
KEYWORDS
• medical imaging
• machine learning
• image enhancement
• information retrieval
• supervised learning
• unsupervised learning
REFERENCES
1. Erickson, B. J.; Korfiatis, P.; Akkus, Z.; Kline, T. L. Machine Learning for Medical
Imaging. Radiographics 2017, 37 (2), 505–515 [Online]. DOI: 10.1148/rg.2017160130
PMCID: PMC5375621
2. Latif, J.; Xiao, C.; Imran, A.; Tu, S. In Medical Imaging Using Machine Learning and
Deep Learning Algorithms: A Review, 2019 International Conference on Computing,
Mathematics and Engineering Technologies – iCoMET 2019, March 2019. DOI:
10.1109/ICOMET.2019.8673502
3. Valiant, L. G. A Theory of the Learnable. Commun. ACM 1984, 27 (11), 1134–1142.
4. Robert, C.; In Machine Learning, A Probabilistic Perspective; Taylor & Francis, 2014.
5. Doi, K. Computer-Aided Diagnosis in Medical Imaging: Historical Review, Current
Status and Future Potential. Comput. Med. Imag. Graph. 2007, 31 (4-5), 198–211.
6. Mahesh, M. Fundamentals of Medical Imaging. Med. Phys. 2011.
7. Jannin, P.; Grova, C.; Maurer, C. R. Model for Defining and Reporting Reference-Based
Validation Protocols in Medical Image Processing. Int. J. Comput. Assist. Radiol. Surg.
2006, 1 (2), 63–73.
8. https://www.intel.com/content/www/us/en/artificial-intelligence/posts/difference-between-ai-machine-learning-deep-learning.html
9. Michalski, R. S.; Carbonell, J. G.; Mitchell, T. M. Eds.; In Machine Learning: An
Artificial Intelligence Approach; Springer Science & Business Media, 2013.
10. Norris, D. J. Machine Learning: Deep Learning. In Beginning Artificial Intelligence
with the Raspberry Pi.; Springer, 2017; pp 211–247.
11. Jankowski, N.; Grochowski, M. In Comparison of Instance Selection Algorithms
I. Algorithms Survey, International Conference on Artificial Intelligence and Soft
Computing; Springer, 2004.
12. Wernick, M. N.; Yang, Y.; Brankov, J. G.; Yourganov, G.; Strother, S. C. Machine
Learning in Medical Imaging. Process. Mag. 2010, 27, 25–38.
13. https://www.educba.com/machine-learning-architecture/
14. Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015,
61, 85–117.
15. Warwick, W.; et al. A Framework to Assess Healthcare Data Quality. Eur. J. Soc. Behav.
Sci. 2015, 13 (2), 1730.
16. Ghassemi, M.; et al. Opportunities in Machine Learning for Healthcare, 2018. arXiv
preprint arXiv:1806.00388.
17. Dua, S.; Acharya, U. R.; Dua, P. In Machine Learning in Healthcare Informatics, 2014.
18. Suzuki, K. Pixel-Based Machine Learning in Medical Imaging. J. Biomed. Imag. 2012,
2012, 1.
19. Agarwal, T. K.; Tiwari, M.; Lamba, S. S. In Modified Histogram Based Contrast
Enhancement Using Homomorphic Filtering for Medical Images, Advance Computing
Conference (IACC); 2014 IEEE International; IEEE, 2014.
20. Nanni, L.; Lumini, A.; Brahnam, S. Local Binary Pattern Variants as Texture Descriptors
for Medical Image Analysis. Artif. Intell. Med. 2010, 49 (2), 117–125
21. Shi, Z.; He, L. In Application of Neural Networks in Medical Image Processing,
Proceedings of the Second International Symposium on Networking and Network
Security; Citeseer, 2010.
22. Bratko, I.; Mozetič, I.; Lavrač, N. KARDIO: A Study in Deep and Qualitative Knowledge
for Expert Systems; MIT Press, 1990.
23. Narasimhamurthy, A. An Overview of Machine Learning in Medical Image Analysis:
Trends in Health Informatics. In Classification and Clustering in Biomedical Signal
Processing; IGI Global, 2016; pp 23–45.
CHAPTER 8
INNOVATIONS IN ARTIFICIAL
INTELLIGENCE AND HUMAN
COMPUTER INTERACTION IN THE
DIGITAL ERA
M. V. ISHWARYA,1 M. SURESH ANAND,2 A. KUMARESAN,3 and
N. GOPINATH4
1 Department of Artificial Intelligence and Data Science, Agni College of Technology, Chennai, India
2 Department of Computing Technologies, School of Computing, SRM Institute of Science & Technology, Kattankulathur, India
3 School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
4 Department of Computer Science and Engineering, Sri Sairam Engineering College, Chennai, India
ABSTRACT
effective communication with computers. The recent rise of deep learning has transformed AI and has produced a wealth of practical methods and tools that significantly affect areas outside of core AI.
In particular, modern AI methods now power new ways for machines and people to interact. Accordingly, it is timely to examine how modern AI can drive HCI research in new directions, and how HCI research can help steer AI developments. This study offers a forum for researchers to examine the new opportunities that lie in bringing modern AI methods into HCI research, identifying significant issues to explore, demonstrating computational and scientific techniques that can be applied, and sharing datasets and tools that are already available, or proposing those that should be further developed. The themes we are interested in include deep learning methods for understanding and modeling human behavior and enabling new interaction modalities; hybrid intelligence that combines human and machine intelligence to address difficult tasks; and tools and methods for interaction data curation and large-scale data-driven design. At the core of these topics, we want to begin the conversation on how the data-driven and knowledge-driven approaches of modern AI can affect HCI.
Regarding these and other challenges, the roles of people working in tandem with these systems will be significant, yet the HCI community has been only a quiet voice in these conversations to date. This article outlines a history of the two fields that identifies some of the forces that kept them apart. AI was generally marked by a highly ambitious, long-term vision requiring expensive systems, although the term was seldom imagined to last as long as it turned out to, while HCI focused more on innovation and improvement of widely used hardware within a short time scale. These differences led to different priorities, methods, and assessment approaches.
We propose a human-computer interaction platform for the hearing impaired, to be used in hospitals and banks. To develop such a system, we collected Bosphorus Sign, a Turkish Sign Language corpus in the health and finance domains, by consulting sign language linguists, native users, and domain experts. Using a subset of the collected corpus, we designed a prototype system, called HospiSign, that is intended to help the Deaf in their hospital visits. The HospiSign platform guides its users through a tree-based activity diagram by asking specific questions and requiring the users to answer from the given options. To recognize the signs given as responses to the interaction platform, we propose using hand position, hand shape, hand movement, and upper-body pose features to represent signs. To model the temporal aspect of the signs, we used dynamic time warping and temporal templates. Classification of the signs is done using k-Nearest Neighbors and Random Decision Forest classifiers. We conducted experiments on a subset of Bosphorus Sign and assessed the adequacy of the framework in terms of features, temporal modeling strategies, and classification techniques. In our experiments, the combination of hand position and hand movement features yielded the highest recognition performance, while both the temporal modeling and the classification techniques gave competitive results. In addition, we investigated the effects of using a tree-based activity diagram and found the approach not only to increase recognition performance but also to ease users' adaptation to the system. Furthermore, we investigated domain adaptation and facial landmark localization methods and examined their applicability to the gesture and sign language recognition tasks.
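Dynamic time warping, mentioned above for temporal modeling of signs, aligns sequences performed at different speeds. A minimal sketch of the classic DTW distance follows; the feature sequences here are synthetic 1-D stand-ins for sign trajectories:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    computed by dynamic programming over all monotone alignments."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match from neighboring cells.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same trajectory performed at two speeds aligns almost perfectly
# under DTW, while a different trajectory does not.
fast = np.sin(np.linspace(0, np.pi, 20))
slow = np.sin(np.linspace(0, np.pi, 40))
other = np.cos(np.linspace(0, np.pi, 40))

print(dtw_distance(fast, slow) < dtw_distance(fast, other))  # True
```

A k-NN classifier over such DTW distances is one common way to match an observed sign against stored templates.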
8.1 INTRODUCTION
to see, reason, and act. Since AI is made up of algorithms, the elements of seeing, reasoning, and acting can be accomplished under the control of the computational device in question (e.g., computers or robotics).
AI at a minimum incorporates:
• Representations of "reality," knowledge, and data, along with related techniques for representation;
• Machine learning;
• Representations of vision and language;
• Robotics; and
• Virtual reality (defined below).
Human-computer interaction (HCI) consists of the following:
• The machine integration and interpretation of data and their display in a form convenient to the human operator or user (i.e., displays, human perception emulated in computational devices, and simulation and synthetic environments).
Accessibility, adaptability, and usability are helpful features for a computing resource, hardware or software, since they universalize the benefits of its proper use. Accessibility means incorporating properties that permit the use of a computing resource by people with some kind of intellectual disability or sensory/physical impairment, as well as the inclusion and extension of its use to all areas of society. Adaptability is defined as the capacity of a computing resource to work in various conditions by changing its configuration, which could be done by end users, or else its capacity to learn through user interaction and adapt to accommodate its users. Finally, usability relates to the ease of learning and using a computing resource. These ideas support the full development of accessibility, adaptability, and usability features as a user option, and not as an imposition of the resource design process. For better readability, the term accessibility is used hereafter as an umbrella term for accessibility, adaptability, and usability.
its further use in both providing medical education and professional healthcare will change the healthcare industry.
HCI professionals appear to be staying away from an important area; likewise, people working in the area seem to be unaware of HCI. For instance, are computer science departments graduating engineers who are unaware of HCI? Perhaps designers, manufacturers, regulators, and procurement are not employing qualified people. If we are to improve the quality of safety-critical areas, HCI must be a leader in exposing issues and showing it can avoid or manage defects. There is a minuscule literature on HCI in safety-critical areas, especially in healthcare. In the healthcare domain, most of the literature focuses on the clinical sequelae of adverse events (e.g., by administering CPR and an antidote, the patient recovered from a drug overdose), not on the design or HCI issues (e.g., the ergonomics of the keypad encouraged the nurse to enter the wrong drug dose, and the UI design gave no dynamic review of the dose) that created the latent conditions for the error. In the USA, the mandatory medical device error reports often blame the user whenever no technical breakdown can be identified.
HCI and the latent conditions leading to the incident are generally overlooked. One possible reason is that HCI is complex and that medical devices and environments are complex. One might also add further reasons such as the highly competitive market, rapidly evolving technologies, and difficulties in undertaking reliable ecological user studies (e.g., patient confidentiality, or largely irrelevant clinical research standards, such as randomized controlled trials). In fact, this paper showed that basic HCI can contribute to healthcare. Obviously, some issues of safety-critical systems lie outside HCI itself, perhaps business models within a regulatory framework that disregards HCI. Consider how we train and motivate HCI experts to engage with, or stay away from, significant areas like healthcare. The development and uptake of various technologies may look very different in different parts of the world, owing to availability, costs, and technical accessibility. The Western world has seen a progression from fixed PCs, through laptops, tablets, and mobile computers, now moving toward the Internet of Things, sensors, and ubiquitous technologies. Use has moved from professional, work-related settings to more private and leisure ones. Many countries in the developing world skip one or more stages in this progression, providing access to mobile technologies with little or no use of fixed computers. Mobile technologies are cheaper, more secure, and easier to use in a more personalized way.
The ways of using digital technologies vary greatly across different parts of the world, due to culture but also to availability. In the Western world, a mobile phone may be a very private device that one rarely gives to another person; in rural Africa, a mobile phone may be shared by many people in a wider social setting.
Through the increasing digitalization of higher education and the increasing global availability of MOOCs and other digitally accessible technologies for teaching and learning, the opportunities for building skills and jobs in developing countries also increase immensely. Such technological development will, if properly used, play a significant part in democratization and equal opportunities for development, and contribute to several of the sustainable development goals.
It is important to create and adapt the techniques and processes for the design and development of digital transformation to the specific regional conditions of each separate area. By opening up development, crowdsourcing development to allow many to contribute, and by being more agile and accommodating changing requirements, development could be made considerably more relevant for different parts of the world. Changing behavior and attitudes to permit a more user-centered approach has, however, so far proved to work well in different parts of the world, even if the ways in which it has been adopted vary across regions.
There is also a danger of "imperialistic colonialization" of software development, in which "digital superpowers" impose their development cultures, in the interest of digitalization, without acknowledging local perspectives. Supporting international development and digitalization requires a deliberate process of humility and respect for local customs and cultures to manage a sound introduction of new development processes, tools, practices, and methods. In addition, research has shown that the introduction of new technology in healthcare is sometimes met with resistance and that inertia is strong when it comes to the adoption of technology. When online medical records were introduced in Sweden, physicians strongly disliked the system, as they were worried about their work environment and about patients reading medical records online and becoming worried about what they read.25
It is something of a mystery that sound is not used more widely in human-computer interaction design, as we can in fact hear a great deal; perhaps the auditory reality is too "invisible," and auditory interface design therefore appears to be difficult. There is really no reason, aside from accessibility for visually impaired users, to add sound to human-computer interfaces that are already optimized for the visual modality, as it does not really add to system usability.
This suggests that sound objects, which can be used both as mature and as evolving objects, are suitable as auditory representations for new interaction paradigms with continuous control by user actions. With cartoonification (or, as some prefer, caricaturization), the sound models can be made computationally more efficient than fully realistic models; consequently, they are bound to be suitable for "thin" platforms such as wearable or handheld computers.
Applications we need to truly consider if a bunch of sonification natives
can be incorporated with a working framework. This can thusly bring about
Innovations in Artificial Intelligence and Human Computer Interaction 125
the turn of events and check of solid toolbox for frameworks engineers, like
what is accessible for graphical UIs today. There is likewise a need to instruct
and uphold collaboration fashioners with the goal that they can open up their
innovative intuition toward intelligent sonification, for example, that it is
feasible to give persistent criticism continuously for signal-based gadgets. All
parts in human–PC interfaces likewise have tasteful properties. It is presum
ably conceivable to plan sonifications that are psychoacoustically right and
very productive yet extremely horrendous to pay attention to. As proposed
by Eric Somers, we need to draw upon the information and thoughts of
Foley specialists (sound plan for film, radio, and TV) just as exercises gained
from different hypotheses of acousmatic music. We have examined some
clever ways to deal with intuitive sonification in human–PC collaboration
plan, specifically, for relaxed use and pervasive and wearable applications.
We recommend that such plans ought to be founded on the outcomes from
listening tests with potential similitudes being extricated from clients’ depic
tions of ordinary sounds. The listening tests can likewise give direction in
our comprehension of how blends of hear-able symbols can be deciphered
by users. Finally, we describe a case of using sound objects for interaction design and propose areas of future research.
An automatic translation system has been implemented that converts finger-spelled expressions to speech and vice versa, in a client–server architecture. The goal of the study is not only to help a hearing-impaired person but also to help a visually impaired person interact with others. The system supports multiple spoken and sign languages, including Czech, English, Turkish, and Russian, and the translation between these spoken languages is handled using the Google Translate API. The recognition of multilingual fingerspelling and speech was performed using the k-Nearest Neighbors algorithm (k-NN) and HMMs, respectively. In the fingerspelling synthesis model, a 3D animated avatar is used to express both the manual and non-manual features of a given sign. Dicta-Sign [25] is a multilingual sign language research project that aims to make Web 2.0 applications accessible for Deaf people so that they can interact with each other. In their Sign Wiki model, the authors show how their system enables sign language users to obtain information from the Web. As in Wikipedia, where users are asked to enter text as a contribution from their keyboard, sign language users can search and edit any page they want, and they interact with the system via a Microsoft Kinect sensor in the Dicta-Sign Wiki. The Dicta-Sign is presently available
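The k-NN recognition step mentioned above can be sketched in a few lines. The feature vectors, labels, and the `knn_predict` helper below are illustrative assumptions for exposition, not the system's actual implementation:

```python
import math

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training examples under Euclidean distance.

    `train` is a list of (feature_vector, letter_label) pairs.
    """
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy training set: hypothetical 2-D hand-shape features for two letters.
train = [((0.0, 0.0), "A"), ((0.0, 1.0), "A"),
         ((5.0, 5.0), "B"), ((6.0, 5.0), "B"), ((5.0, 6.0), "B")]
print(knn_predict(train, (0.2, 0.5)))
```

In a real fingerspelling recognizer the feature vectors would be much higher-dimensional (for example, joint angles or depth-image descriptors), but the voting logic is the same.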
126 Computational Imaging and Analytics in Biomedical Engineering
The idea of interface, and related concepts such as design and usability, are among the most vexed in contemporary computing. Definitions of interface typically invoke the image of a "surface" or a "boundary" where two or more "systems," "devices," or "entities" come into "contact" or "interact." Though these terms encourage a spatial interpretation, most interfaces also embody temporal, haptic, and cognitive elements. The steering wheel of a car, the control panel of a VCR, and the handle of a door are all examples of everyday interfaces that exhibit these dimensions. With regard to computers and computing, "interface" is often used interchangeably with "graphical user interface," or GUI, most often experienced as a desktop windows environment. The command-line prompt is perhaps the best-known alternative to the GUI, but there are a plethora of others, including screen readers, motion trackers, tangible user interfaces (TUIs, strikingly rendered in the 2002 film Minority Report), and immersive or augmented computing environments. In the humanities, meanwhile, it is increasingly common to encounter the idea that a book or a page is a kind of interface, a response to the recognition that the conventions of writing and print culture are no less technologically determined than those of the digital world. At least one observer, Steven Johnson, has characterized our present historical moment as an "interface culture," a term he employs to embrace not just the pervasiveness of computers and electronic devices but also the way in which interface has come to function as a kind of trope or cultural organizing principle; what the novel was to the nineteenth century, or television to the suburban American 1950s, are his examples.
However much it is discussed, though, interface can sometimes seem little loved. Usability expert Donald A. Norman states: "The real problem with the interface is that it is an interface. Interfaces get in the way. I don't want to focus my energies on an interface. I want to focus on the job" (2002: 210). Nicholas Negroponte holds that the "secret" of interface design is to "make it disappear" (1995: 93). To further complicate matters, interface is often, in practice, a highly recursive phenomenon. Take the experience of a user sitting at her computer and browsing the Web, perhaps accessing content at a digital humanities site. The site's internal design imposes one layer of interface between the user and the content, and the web browser, with its buttons and menus and frames, immediately imposes another. The user's desktop environment and operating system then impose a third layer of interface. The ergonomics of the situation (we will assume our user is working with a keyboard and mouse, looking at a screen positioned the recommended 18 inches away) create still another layer of interface, a layer that becomes apparent when one considers alternatives such as accessing the same content with a PDA or a wearable device, or in a room-based virtual reality setting such as a CAVE. Significantly, each of these "layers," as I have been calling them, exhibits the potential for interaction with the others as well as with the user. The desktop environment governs the behavior of the browser software, whose features and functions in turn directly affect many aspects of the user's interaction with the site's internal design and content.
While everything I have just been rehearsing is familiar enough in computer science circles, particularly in the area known as human–computer interaction (HCI, also sometimes identified as human–computer interface), aspects of this account may seem problematic to readers trained in humanities disciplines. It would not be hard to find someone willing to argue that my entire scenario is the product of yet another unacknowledged interface, a kind of common cultural gateway whose socially constructed ideologies govern our expectations regarding technology, representation, and access to information. Moreover, in the scenario sketched above, my distinction between various layers of interface and something I casually called "content" is one that contradicts decades of work in literary and cultural criticism, in which form and content are instinctively understood as inseparable from each other. Accordingly, the weight of established wisdom in a field like interface design rests on a basic disconnect with the prevailing intellectual assumptions of most humanists: that an "interface," regardless
to the ideal of usability, the interface is also where we deploy our most creative features and artistic flourishes. Too often relegated to the final phase of a project under a tight deadline and an even tighter budget, the interface becomes the first and in many cases the sole experience of the project for its end users. Ostensibly the most creative or intuitive stage of the development process, the interface is also potentially the most empirical, subject to the rigorous quantitative usability testing pioneered by HCI. This section attempts neither to offer a comprehensive survey of the vast professional literature on interface and usability, nor does it seek to serve as a design primer or guide to best practices (readers interested in those topics should consult the chapter's suggestions for further reading).
The amount of data in our world has been exploding, and effectively analyzing big data gives an edge to organizations as well as individuals. Business intelligence (BI) helps analyze large data sets, and one of the more important tools available to BI professionals is the visualization dashboard. Even so, these dashboards remain hard to create and customize for people with little training. Moreover, they still fall short in supporting detailed analysis record keeping and in communicating analysis results to others. This work focuses on enhancing user interaction with BI dashboards through (1) easy creation and customization of dashboards for a wider range of users, (2) novel annotation support for analysis, and (3) the use of visual storytelling for communicating dashboard analysis results.
8.7 CONCLUSION
KEYWORDS
• HCI
• AI
• framework
• frequency
REFERENCES
1. Wang, T. M.; Tao, Y.; Liu, H. Current Researches and Future Development Trend of
Intelligent Robot: A Review. Int. J. Automat. Comput. 2018, 15 (5), 525–546.
23. Zhou, H.; Huang, M.; Zhang, T.; Zhu, X.; Liu, B. Emotional Chatting Machine:
Emotional Conversation Generation with Internal and External Memory, 2017. arXiv
preprint arXiv:1704.01074.
24. Ren, F.; Huang, Z. Automatic Facial Expression Learning Method Based on Humanoid
Robot XIN-REN. IEEE Trans. Hum. Mach. Syst. 2016, 46 (6),
25. Minsky, M. L. In The Society of Mind; Simon & Schuster Press, 1988.
26. Picard, R. W. Affective Computing; MIT Press, 1997.
27. Nagamachi, M. Kansei/Affective Engineering; CRC Press, 2011.
28. Tu, X. Artificial Emotion, The Paper Assembly of the 10th Annual CAAI; Guangzhou,
China, 2000.
29. Wang, Z.; Xie, L. Artificial Psychology: An Attainable Scientific Research on the Human
Brain; IPMM: Honolulu, 1999; vol 2, pp 1067–1072.
30. Wang, Z. Artificial Psychology and Artificial Emotion. CAAI Trans. Intell. Syst. 2006,
1 (1), 38–43.
31. Ledoux, J. E. Emotion Circuits in the Brain. Ann. Rev. Neurosci. 1999, 23 (23), 155–184.
32. Cardinal, R. N.; Parkinson, J. A.; Hall, J.; et al. Emotion and Motivation: The Role of the
Amygdala, Ventral Striatum, and Prefrontal Cortex. Neurosci. Biobehav. Rev. 2002, 26
(3), 321–352.
34. Hossain, M. S.; Muhammad, G. Audio-visual Emotion Recognition Using
Multidirectional Regression and Ridgelet Transform. J. Multimodal User Interfaces
2016, 10 (4), 325–333.
35. Shaver, P.; Schwartz, J.; Kirson, D.; et al. Emotion Knowledge: Further Exploration of
a Prototype Approach. J. Pers. Soc. Psychol. 1987, 52 (6), 1061.
36. Nicolaou, M. A.; Zafeiriou, S.; Pantic, M. In Correlated-Spaces Regression for Learning
Continuous Emotion Dimensions, ACM International Conference on Multimedia, 2013;
pp 773–776.
37. Barrett, L. F. Discrete Emotions or Dimensions? The Role of Valence Focus and Arousal
Focus. Cogn. Emot. 1998, 12 (4), 579–599.
38. Kamarol, S. K. A.; Jaward, M. H.; Kälviäinen, H.; et al. Joint Facial Expression
Recognition and Intensity Estimation Based on Weighted Votes of Image Sequences.
Pattern Recognit. Lett. 2017, 92 (C), 25–32.
39. Mehrabian, A. Pleasure-Arousal-Dominance: A General Framework for Describing
and Measuring Individual Differences in Temperament. Curr. Psychol. 1996, 14 (4),
261–292.
40. Breazeal, C. Function Meets Style: Insights from Emotion Theory Applied to HRI.
IEEE Trans. Syst. Man Cybern. C 2004, 34 (2), 187–194.
41. Ren, F.; Quan, C.; Matsumoto, K. Enriching Mental Engineering. Int. J. Innov. Comput.
Inf. Control 2013, 9 (8), 3271–3284.
42. Xiang, H.; Jiang, P.; Xiao, S.; et al. A Model of Mental State Transition Network. IEEJ
Trans. Electron. Inf. Syst. 2007, 127 (3), 434–442.
43. Mathavan, S. A.; et al. An Emotion Recognition Based Fitness Application for Fitness
Blenders. Euro. J. Mol. Clin. Med. 2020, 7 (2), 5280–5288.
44. Ren, F. Affective Information Processing and Recognizing Human Emotion. Electron.
Notes Theor. Comput. Sci. 2009, 225, 39–50.
CHAPTER 9
COMPUTER-AIDED AUTOMATIC
DETECTION AND DIAGNOSIS
OF CERVICAL CANCER BY USING
FEATURE MARKERS
P. SUKUMAR,1 R. MURUGASAMI,1 A. RAJAN,2 and S. SHARMILA3
1Department of Computer Science and Engineering, Nandha Engineering College (Autonomous), Erode, Tamil Nadu, India
2Department of ECE, Sree Rama Engineering College, Thirupathi,
ABSTRACT
preprocessed Papanicolaou smear cell image. The false error rate can be decreased by using the automated process.
9.1 INTRODUCTION
test for cervical cancer in women, used to detect nonvisible Human Papillomavirus (HPV) infection.
A Papanicolaou smear is a procedure, performed by a clinician, in which a sample of cells is taken from the cervix uteri using a small swab and examined for any abnormal microscopic appearances caused by HPV infection. The Papanicolaou smear test is first done at about the age of eighteen or earlier, and it should be repeated every 1–5 years depending on the woman's risk factors. Smoking may make cervical cells more susceptible to infection and the development of cervical cancer because the carcinogens in cigarettes pass from the lungs into the bloodstream and into the cervical mucus.
A woman’s socioeconomic status is important because it is an indicator
of her access to health care. Women from lower socioeconomic backgrounds
tend to have less access to basic healthcare, which decreases the chance that
abnormal cervical cell changes will be caught before they lead to cancer.
Women with first-degree relatives who have had breast cancer are more
likely to perform breast self-examinations than are women without a family
history of breast cancer. Daughters whose mothers have had breast cancer are
also more likely to be involved in the medical setting and are more likely to
seek out relevant information. Such women are more likely to have physical
examinations and to have such examinations more frequently.
No doubt the daughters of these women are also less informed about
Pap smears since mother–daughter communication surrounding anything
perceived to be sexual is likely to be low. Generational acculturation may
also influence mother–daughter communication; as a generation becomes
more acculturated they are more likely to take on the morals/values of
the host country. Open and candid discussions about sex or the body may
become more frequent when a family has lived in this culture for two or
more generations.
Studies show that women, in general, are severely undereducated about the Pap smear and about HPV. The purpose of the Pap smear is poorly understood. Some women think a Pap smear screens for Sexually Transmitted Infections (STIs) other than HPV, for HIV, or for pregnancy. Others think that the test screens for reproductive tract cancers other than cervical cancer, such as ovarian, endometrial, and uterine cancer.
Both age and education level influence women’s perceptions of the
importance of Pap smears. As a woman ages, she is more likely to believe
that Pap smears are important to her reproductive health.
This is even more apparent as women's education levels increase. Adolescents and older women with minimal education are more likely than more educated women to fear Pap smear testing and to have more misconceptions about the purpose of Pap smears.
The detection of cervical cancers from Papanicolaou smear images is a challenging task in medical image processing. It can be improved in two ways: one is by selecting suitable, well-defined, exact features, and the other is by selecting the best classifier.
Several automatic and semi-automatic methods have been proposed over the years to identify the various stages of cervical cancer. These methods have not succeeded in providing quantified variables that could eliminate interpretation errors and interobserver discrepancy.
9.3.1 GENERAL
9.4 METHODOLOGY
9.4.1 GENERAL
9.4.3 METHODS
The features are extracted from the preprocessed Papanicolaou smear cell image. The features are used to distinguish normal from abnormal Papanicolaou smear images for cervical cancer identification.
The gray values of the P sampling points in the neighborhood are

s_p = I(x_p, y_p),   p = 0, ..., P − 1        (9.1)

where I is the gray-scale image and the sampling points lie on a circle of radius R around the center pixel (x_c, y_c):

x_p = x_c + R sin(2πp/P)        (9.2)

y_p = y_c − R cos(2πp/P)        (9.3)

An equivalent formulation, differing only in which neighbor is labeled p = 0, writes

g_p = I(x_p, y_p),   p = 0, ..., P − 1        (9.4)

x_p = x_c + R cos(2πp/P)        (9.5)

The thresholding operator is applied to the gray-level differences between each sampling point and the center pixel to obtain the binary pattern. The sign operator s(z) is derived from this binary thresholding: s(z) = 1 if z ≥ 0 and s(z) = 0 otherwise. The resulting operator is

LBP_{P,R}(x_c, y_c) = Σ_{p=0}^{P−1} s(s_p − s_c) 2^p        (9.6)

where s_c is the gray value of the center pixel.
Step 1: The original image is divided for analysis into blocks of three rows and three columns.
Step 2: A sub-image is located according to the medical image application.
Step 3: The sub-image is converted by a mathematical transformation.
Step 4: Values are obtained from the transformation results.
Step 5: The transformation results are obtained according to the mathematical formulation principle.
By altering the number of pixels in the neighborhood and its radius, using a circular neighborhood and bilinearly interpolated values, the LBP can be applied to neighborhoods of different sizes. The gray-scale differences in the local neighborhood can be used as the corresponding difference measure; the pixel neighborhood is based on adjacent points.
Figure 9.3 shows the operational procedure of the LBP computation. The original image is subdivided into a number of segments as per the matrix computation, and values are considered based on intensity measures using the operational procedure with a 3 × 3 matrix.
The center pixel value is 54; it is compared with the other values in the matrix. Values greater than the center pixel value are assigned the high binary component 1, and values less than the center pixel value are assigned the low binary component 0. The sub-image is thus converted into binary values of 1 and 0. The intensity range of the pixels varies based on the present value: matrix blocks containing 1 represent high values, and matrix blocks containing 0 represent low values.
The binary sub-image is then converted into a decimal value. Reading the neighborhood values in clockwise order gives the binary code 11001011, which converts to the decimal value 203.
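The thresholding procedure above can be sketched in a few lines. Only the center value 54 comes from the text; the neighbor values in the 3 × 3 block below are illustrative, and the ≥ comparison follows the common LBP sign convention:

```python
def lbp_code(block):
    """LBP code of a 3x3 block given as 9 values (center at index 4),
    reading the neighbors clockwise from the top-left corner."""
    center = block[4]
    clockwise = [0, 1, 2, 5, 8, 7, 6, 3]  # TL, T, TR, R, BR, B, BL, L
    bits = "".join("1" if block[i] >= center else "0" for i in clockwise)
    return int(bits, 2)  # binary pattern read as a decimal code

# Illustrative 3x3 sub-image; the center pixel is 54 as in the text.
block = [60, 50, 70,
         40, 54, 90,
         30, 55, 80]
print(lbp_code(block))
```

The resulting code is always in the range 0–255 for an eight-pixel neighborhood, so each pixel of the labeled image fits in one byte.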
After the image has been labeled, a histogram of the LBP labels is computed:

H_i = Σ_{x,y} I{ f_l(x, y) = i },   i = 0, 1, ..., n − 1        (9.7)

where f_l(x, y) is the LBP-labeled image, n is the number of LBP labels, and I{A} equals 1 if A is true and 0 otherwise.
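Equation (9.7) amounts to counting how often each label occurs. A minimal sketch (the `lbp_histogram` helper and its toy label image are illustrative, not the chapter's implementation):

```python
from collections import Counter

def lbp_histogram(labels, n):
    """Histogram H_i = number of pixels carrying LBP label i, i = 0..n-1."""
    counts = Counter(v for row in labels for v in row)
    return [counts.get(i, 0) for i in range(n)]

# Toy 2x2 labeled image with n = 4 possible labels.
labels = [[0, 1],
          [1, 3]]
print(lbp_histogram(labels, 4))
```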
Consider the following 4 × 4 example image with gray levels 1–5:

1 5 3 4
2 2 4 1
3 4 5 5
4 2 1 2
The co-occurrence data are modeled from the repeated occurrence of gray-level pairs in the original image, counted here along a specific diagonal offset. Contrast is computed using this technique. The pairings from the first gray level to the first, the second to the second, the third to the third, the fourth to the fourth, and finally the fifth to the fifth index the rows and columns of the computed matrix, and the expected values are obtained based on the above principles.
GLCM 1 2 3 4 5
1 0 1 0 0 0
2 0 0 0 1 1
3 1 1 0 0 0
4 1 0 0 0 1
5 0 1 0 1 0
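The table can be reproduced by counting gray-level pairs of the 4 × 4 example image at the down-right diagonal offset (1, 1). The helper below is a sketch for checking the counts, not the chapter's implementation:

```python
def glcm(image, levels, dr=1, dc=1):
    """Gray-level co-occurrence counts at offset (dr, dc) down-right."""
    m = [[0] * levels for _ in range(levels)]
    for r in range(len(image) - dr):
        for c in range(len(image[0]) - dc):
            i, j = image[r][c], image[r + dr][c + dc]
            m[i - 1][j - 1] += 1  # gray levels are 1-based
    return m

# The 4x4 example image from the text, gray levels 1-5.
img = [[1, 5, 3, 4],
       [2, 2, 4, 1],
       [3, 4, 5, 5],
       [4, 2, 1, 2]]
print(glcm(img, 5))
```

Running this yields exactly the 5 × 5 matrix tabulated above; a 4 × 4 image contributes nine diagonal pairs in total.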
Contrast
• Measures the intensity difference between a pixel and its neighbor over the whole image.
Energy
• Yields the sum of squared elements in the co-occurrence matrix.
Homogeneity
• Yields a value that measures the closeness of the distribution of elements in the matrix to the matrix diagonal.
Correlation
• Yields a measure of how correlated a pixel is to its neighbor, relative to the original values.
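Written in terms of the normalized GLCM p(i, j), these four measures have the following standard (Haralick-style) definitions; the chapter itself does not print the formulas, so they are quoted here for reference:

```latex
\begin{aligned}
\mathrm{Contrast} &= \sum_{i,j} (i-j)^2\, p(i,j), &
\mathrm{Energy} &= \sum_{i,j} p(i,j)^2, \\
\mathrm{Homogeneity} &= \sum_{i,j} \frac{p(i,j)}{1+|i-j|}, &
\mathrm{Correlation} &= \sum_{i,j} \frac{(i-\mu_i)(j-\mu_j)\,p(i,j)}{\sigma_i\,\sigma_j},
\end{aligned}
```

where μ_i, μ_j and σ_i, σ_j are the means and standard deviations of the row and column sums of p.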
Laws' Energy Texture Features
• Laws' texture energy measures are derived from three simple vectors of length 3:
• L3 = (1, 2, 1), E3 = (−1, 0, 1), S3 = (−1, 2, −1).
• Convolving these vectors with one another produces two-dimensional masks that are applied to the image.
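The 3 × 3 Laws masks are outer products of the three length-3 vectors. The sketch below uses the standard definition S3 = (−1, 2, −1); the helper name is illustrative:

```python
def laws_masks():
    """All nine 3x3 Laws masks as outer products of L3, E3, and S3."""
    vecs = {"L3": (1, 2, 1), "E3": (-1, 0, 1), "S3": (-1, 2, -1)}
    return {a_name + b_name: [[a * b for b in b_vec] for a in a_vec]
            for a_name, a_vec in vecs.items()
            for b_name, b_vec in vecs.items()}

masks = laws_masks()
print(masks["L3E3"])
```

For example, L3E3 (level vertically, edge horizontally) comes out as the familiar Sobel-like vertical-edge mask.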
FIGURE 9.6 (a) Dysplasia Papanicolaou smear cell image. (b) Dysplasia cell segmented using the proposed technique. (c) Dysplasia cell manually segmented image.
TABLE 9.2 Mined GLCM Features for Regular and Dysplasia Cells.

GLCM features      Standard cells    Dysplasia cells
Energies           1.56 × 10^−5      1.623 × 10^−5
Entropies          1.1078            1.106
Autocorrelations   1.616 × 10^4      1.596 × 10^4
Contrasts          1.13 × 10^4       1.306 × 10^4
9.6 CONCLUSION
KEYWORDS
• pap smear
• human papilloma virus
• cervical cancer
• medical imaging
• image analysis
• morphological operations
REFERENCES
1. WHO Report. Cervical Cancer Screening in Developing Countries; Report of the World
Health Organization, 2002. http://whqlibdoc.who.int/publications/002/9241545720.pdf
2. National Cancer Institute. SEER Incidence and US Mortality Rates and Trends for the
Top 15 Cancer Sites by Race/Ethnicity; National Health Institute: Bethesda, 2005.
3. World Health Organization. Global Burden of Disease Report: Causes of Death in 2004;
Global Burden of Disease Report, World Health Organization: Geneva, 2004.
4. Roland, K. B.; Benard, V. B.; Greek, A.; Hawkins, N. A.; Manninen, D.; Saraiya, M.
Primary Care provider Practices and Beliefs Related to Cervical Cancer Screening with
the HPV Test in Federally Qualified Health Centers. Prev. Med. 2013, 57 (5), 419–425.
5. American Cancer Society (ACS). What is Cervical Cancer [Online] 2010. http://www.
cancer.org/cancer/cervicalcancer/detailedguide/cervical-cancer-what-is-cervical-cancer.
6. American Cancer Society (ACS). Key Statistics About Cervical Cancer
[Online] 2015. http://www.cancer.org/cancer/cervicalcancer/detailedguide/
cervical-cancer-key-statistics.
7. Song, D.; Kim, E.; Huang, X.; Patruno, J.; Muñoz-Avila, H.; Heflin, J.; Long, R. L.;
Antani, S. Multimodal Entity Coreference for Cervical Dysplasia Diagnosis. IEEE
Trans. Med. Imaging 2015, 34 (1), 229–235.
8. Gordon, S.; Zimmerman, G.; Greenspan, H. In Image Segmentation of Uterine Cervix
Images for Indexing in PACS, Proceedings of 17th IEEE Symposium on Computer-
Based Medical System, 2004; pp 298–303.
9. Ji, Q.; Engel, J.; Craine, E. Classifying Cervix Tissue Patterns with Texture Analysis.
Pattern Recognit. 2000, 33 (9), 1561–1574.
10. Park, S. Y.; Sargent, D.; Lieberman, R.; Gustafsson, U. Domain-Specific Image Analysis
for Cervical Neoplasia Detection Based on Conditional Random Fields. IEEE Trans.
Med. Imaging 2011, 30 (3), 867–878.
11. Horng, J. T.; Hu, K. C.; Wu, L. C.; Huang, H. D.; Lin, F. M.; Huang, S. L.; Lai, H. C.;
Chu, T. Y. Identifying the Combination of Genetic Factors that Determine Susceptibility
to Cervical Cancer. IEEE Trans. Inf. Technol. Biomed. 2004, 8 (1), 59–66.
12. Greenspan, H.; Gordon, S.; Zimmerman, G.; Lotenberg, S.; Jeronimo, J.; Antani, S.;
Long, R. Automatic Detection of Anatomical Landmarks in Uterine Cervix Images.
IEEE Trans. Med. Imaging 2009, 28 (3), 454–468.
13. Alush, A.; Greenspan, H.; Goldberger, J. Automated and Interactive Lesion Detection
and Segmentation in Uterine Cervix Images. IEEE Trans. Med. Imaging 2010, 29 (2),
488–501.
14. Herrero, R.; Schiffman, M.; Bratti, C.; Hildesheim, A.; Balmaceda, I.; Sherman, M.
Design and Methods of a Population-Based Natural History Study of Cervical Neoplasia
in a Rural Province of Costa Rica: The Guanacaste Project. Rev. Panam. Salud. Publica.
1997, 1, 362–375.
15. Holland, J. H. In Adaptation in Natural and Artificial Systems, University of Michigan
Press: Ann Arbor, MI, 1975.
16. Kim, E.; Huang, X. A Data Driven Approach to Cervigram Image Analysis and
Classification; Color Medical Image Analysis, Series, In Lecture Notes in Computational
Vision and Biomechanics; Celebi, M. E., Schaefer, G., Eds.; Springer: Amsterdam, The
Netherlands, 2013; vol 6, pp 1–13.
17. Chang, S.; Mirabal, Y.; Atkinson, E.; Cox, D.; Malpica, A.; Follen, M.; Richards-
Kortum, R. Combined Reflectance and Fluorescence Spectroscopy for In Vivo Detection
of Cervical Pre-cancer. J. Lower Genital Tract Dis. 2005, 10 (2), 024031.
18. Davey, E.; Assuncao, J.; Irwig, L.; Macaskill, P.; Chan, S. F.; Richards, A.; Farnsworth,
A. Accuracy of Reading Liquid Based Cytology Slides Using the ThinPrep Image
Compared with Conventional Cytology: Prospective Study. Br. Med. J. 2007, 335
(7609), 31.
CHAPTER 10
ABSTRACT
The collection of people's opinions through social media regarding product selection is referred to as sentiment analysis. Social media is a method of sharing data with a large number of people; it can be regarded as a medium for propagating information through an interface. In this paper, the fundamental concepts and the different algorithms used in sentiment analysis are explained, and information about datasets is given to help researchers work in sentiment analysis. Based on the analysis of responses in terms of views and feedback, three types of sentiment can be found: positive, negative, and neutral.
10.1 INTRODUCTION
analytics; Section 10.7 discusses open source sentiment analysis tools; and Section 10.8 presents the limitations of sentiment analysis. The paper is concluded in Section 10.9.
C. Hybrid-based approach:
The hybrid-based approach uses both the ML and the lexicon-based classification approaches. A few research efforts propose a mixture of lexicon-based and automated learning techniques to enhance sentiment classification. This hybrid approach is primarily advantageous in that it can achieve the best of both; the combination of lexicon and learning has demonstrated increased accuracy.
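The lexicon side of such a hybrid can be sketched very simply. The word scores and the naive negation rule below are illustrative assumptions (a real hybrid system pairs a richer lexicon with a trained ML classifier):

```python
# Hypothetical mini-lexicon mapping words to sentiment scores.
LEXICON = {"good": 1, "great": 2, "excellent": 2,
           "bad": -1, "poor": -1, "terrible": -2}

def classify(text):
    """Label a text positive/negative/neutral from summed lexicon scores,
    flipping the score of any word directly preceded by 'not'."""
    words = text.lower().split()
    score = 0
    for i, w in enumerate(words):
        s = LEXICON.get(w, 0)
        if i > 0 and words[i - 1] == "not":  # naive negation handling
            s = -s
        score += s
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify("the camera is great"))
```

A learning-based stage would then refine or override these lexicon labels, which is where the accuracy gain of the hybrid approach comes from.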
The following open source tools are all free and available for building and
maintaining your own sentiment analysis infrastructures and other NLP
systems. However, do keep in mind that in order to make use of the tools
below, you or someone on your team will need the necessary programming
and development skills to handle the coding and ML integration.
• NLTK: This includes lexical analysis, named entity recognition,
tokenization, PoS tagging, and sentiment analysis. It also offers some
great starter resources.
• Spark NLP: Considered by many as one of the most widely used
NLP libraries, Spark NLP is 100% open source, scalable, and
includes full support for Python, Scala, and Java. You’ll find a whole
host of NLP features, pre-trained models and pipelines in multiple
languages. There’s also an active Slack community for discussion and
troubleshooting.
• TextBlob: Built on the shoulders of NLTK, TextBlob is like an
extension that simplifies many of NLTK’s functions. It offers an easy
to understand interface for tasks including sentiment analysis, PoS
tagging, and noun phrase extraction. TextBlob is a recommended
natural language processing tool for beginners.
• Doccano: This open source text annotation tool has been designed
specifically for text annotation. It allows for the creation of labeled
data for sentiment analysis, named entity recognition, and text
summarization. This is a good option to look at for smaller datasets
and building initial proof of concept projects.
10.9 CONCLUSIONS
KEYWORDS
• sentiment analysis
• approaches
• open source sentiment tools
• sentiment analysis classification
• social network channels
A Study on Sentiment Analysis 173
REFERENCES
1. Yi, S.; Liu, X. Machine Learning Based Customer Sentiment Analysis for Recommending
Shoppers, Shops Based on Customers’ Review. Complex Intell. Syst. 2020, 1 (1). DOI:
https://doi.org/10.1007/s40747-020-00155-2
2. Vohra, S.; Teraiya, J. A Comparative Study of Sentiment Analysis Techniques. Int. J. Inf.
Knowl. Res. Comput. Eng. 2013, 2 (2), 313–317.
3. Machine Learning & its Applications Outsource to India. [Online] May 18, 2020.
https://www.outsource2india.com/software/articles/machine-learning-applications
how-it-works-whouses-it.asp.
4. Leskovec, J. In Social Media Analytics: Tracking, Modeling and Predicting the Flow
of Information Through Networks, Proceedings of 20th International Conference
Companion World Wide Web, 2011; pp 277–278.
5. Hasan, A.; Moin, S.; Karim, A.; Shamshirband, S. Machine Learning-Based Sentiment
Analysis for Twitter Accounts. Math. Computat. Applicat. 2016, 21 (1), ISSN:
2297–8747.
6. Padmaja, S.; et al. Opinion Mining and Sentiment Analysis – An Assessment of Peoples’
Belief: A Survey. Int. J. Ad Hoc Sens. Ubiq. Comput. 2013, 4 (1).
7. Sahu, T. P.; Ahuja, S. In Sentiment Analysis of Movie Reviews: A Study on
Feature Selection & Classification Algorithms, 2016 International Conference on
Microelectronics, Computing and Communications (MicroCom), 2016.
8. Akhtar, Md. S.; Kumar, A.; Ekbal, A.; Bhattacharyya, P. In A Hybrid Deep Learning
Architecture for Sentiment Analysis, International Conference on Computational
Linguistics: Technical Papers, 2016; pp 482–493.
9. Anil Kumar, K. M.; Rajasimha, N.; Reddy, M.; Rajanarayana, A.; Nadgir, K. In Analysis
of Users’ Sentiments from Kannada Web Documents, International Conference on
Communication Networks, 2015; vol 54, pp 247–256.
10. Mittal, N.; Aggarwal, B.; Chouhan, G.; Bania, N.; Pareek, P. In Sentiment Analysis of
Hindi Review Based on Negation and Discourse Relation, International Joint Conference
on Natural Language Processing, 2013; pp 45–50.
11. Wagh, R.; Punde, P. In Survey on Sentiment Analysis using Twitter Dataset, 2nd
International Conference on Electronics, Communication and Aerospace Technology
(ICECA 2018) IEEE Conference, 2018; ISBN: 978-1-5386-0965-1.
12. Medhat, W.; Hassan, A.; Korashy, H. Sentiment Analysis Algorithms and Applications:
A Survey. Ain Shams Eng. J. 2014, 5 (4), 1093–1113. DOI: https://doi.org/10.1016/j.
asej.2014.04.011
13. Aydogan, E.; Akcayol, M. A. In A Comprehensive Survey for Sentiment Analysis
Tasks Using Machine Learning Techniques, International Symposium on Innovations
in Intelligent Systems and Applications, 2016; vol 1 (1), pp 1–7. DOI: https://doi.
org/10.1109/INISTA.2016.7571856
14. Ahmad, M.; Aftab, S.; Muhammad, S. S.; Ahmad, S. Machine Learning Techniques for
Sentiment Analysis: A Review. Int. J Multidiscip. Sci. Eng. 2017, 8 (3), 27–35.
15. Yogi, T. N.; Paudel, N. Comparative Analysis of Machine Learning Based Classification
Algorithms for Sentiment Analysis. Int. J. Innov. Sci. Eng. Technol. 2020, 7 (6), 1–9.
16. Patel, A. Machine Learning Algorithm Overview. Medium [Online] May 18, 2020. https://
medium.com/ml-research-lab/machine-learning-algorithm-overview-5816a2e6303.
17. Mahendran, N.; Mekala, T. A Survey: Sentiment Analysis Using Machine Learning
Techniques for Social Media Analytics. Int. J. Pure Appl. Math. 2018, 118 (8), 419–422.
18. Abdul-Mageed, M.; Diab, M. T.; Korayem, M. In Subjectivity and Sentiment Analysis
of Modern Standard Arabic, Proceedings of the 49th Annual Meeting of the Association
for Computational Linguistics: Human Language Technologies: Short papers, 2011; vol
2.
19. Nakov, P.; Ritter, A.; Rosenthal, S.; Sebastiani, F.; Stoyanov, V. In SemEval-2016
Task 4: Sentiment Analysis in Twitter, Proceedings of SemEval-2016; Association for
Computational Linguistics, 2016.
20. Xie, H.; Wong, T.; Wang, F. L.; et al. Editorial: Affective and Sentimental Computing.
Int. J. Mach. Learn. Cybern. 2019, 10, 2043–2044.
CHAPTER 11
APPLICATIONS OF MAGNETIC
RESONANCE IMAGING TECHNIQUES
AND ITS ADVANCEMENTS
V. RAMESH BABU,1 S. MARY CYNTHIA,2 K. SAVIMA,3 and G. LAKSHMI VARA PRASAD4
1Department of CSE, Sri Venkateswara College of Engineering, Sriperumbudur, India
2Department of ECE, Jeppiaar Institute of Technology, Chennai, India
3Department of Computer Science, S.T.E.T. Women's College,
ABSTRACT
11.1 INTRODUCTION
T1-weighted images reflect the time required for protons to realign with the
direction of the applied magnetic field, which calls for a short TE and a
short repetition time (TR). Fat generally realigns faster than water, so fat
appears bright while water appears dark. A T1-weighted image requires a short
TR; otherwise, all protons appear with the same intensity value. Contrast
between tissues is obtained only when the selected TR is shorter than the
tissues' recovery times.
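The dependence of T1 contrast on TR and TE can be sketched with the standard spin-echo signal equation, S = PD(1 − e^(−TR/T1))e^(−TE/T2). The relaxation times below are illustrative textbook-style values at 1.5 T, not figures taken from this chapter:

```python
import math

def spin_echo_signal(pd, t1, t2, tr, te):
    """Standard spin-echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Approximate relaxation times in milliseconds (illustrative assumptions).
fat = {"pd": 1.0, "t1": 260.0, "t2": 80.0}
water = {"pd": 1.0, "t1": 4000.0, "t2": 2000.0}

# Short TR and short TE give T1-weighted contrast: fat recovers quickly,
# water does not, so fat appears bright and water dark.
tr, te = 500.0, 15.0
s_fat = spin_echo_signal(fat["pd"], fat["t1"], fat["t2"], tr, te)
s_water = spin_echo_signal(water["pd"], water["t1"], water["t2"], tr, te)
print(f"T1-weighted: fat={s_fat:.3f}, water={s_water:.3f}")  # fat is brighter

# With a very long TR, the (1 - exp(-TR/T1)) term saturates for both tissues,
# so the T1 contrast between them disappears, as the text states.
```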
11.2.2.1 APPLICATIONS
11.2.2.1.1 Identify Hematologic Marrow Diseases
Initially, the bone marrow was segmented from the T1- and T2-weighted MRI
images and a large set of features was extracted; the most important
characteristics were then selected using principal component analysis (PCA)
and the least absolute shrinkage and selection operator (LASSO).
Finally, random forest (RF) and logistic regression (LR) classification
models were used to classify bone chondrosarcoma, metastatic diseases,
and osteoporosis.9
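A minimal sketch of this pipeline (feature reduction via PCA or LASSO, then RF and LR classification) can be written with scikit-learn. The synthetic features below merely stand in for the marrow radiomics; the dataset sizes and the LASSO alpha are illustrative assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for radiomic features extracted from segmented marrow.
X, y = make_classification(n_samples=300, n_features=50, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Route 1: PCA for dimensionality reduction, then a random forest.
rf = make_pipeline(StandardScaler(), PCA(n_components=10),
                   RandomForestClassifier(random_state=0))
rf.fit(X_tr, y_tr)

# Route 2: LASSO keeps features with nonzero coefficients, then logistic regression.
lasso = make_pipeline(StandardScaler(), Lasso(alpha=0.02)).fit(X_tr, y_tr)
selected = np.flatnonzero(lasso[-1].coef_)
lr = LogisticRegression(max_iter=1000).fit(X_tr[:, selected], y_tr)

print("RF on PCA features:   ", rf.score(X_te, y_te))
print("LR on LASSO features: ", lr.score(X_te[:, selected], y_te))
```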
Gliomas are a type of brain tumor. Conventional MRI images were used to
classify gliomas into one of the four grades defined by the World Health
Organization.10 This classification plays a vital role in decision making
and in planning medical treatment.
T2-weighted images were used for the diagnosis and treatment of Achilles
tendon ruptures (ATR). The experimental results showed that this method
provides a better positive correlation and higher specificity than manual
contour tracing (MCT).11
The method of medical treatment for cancer is selected based on the stage of
the disease. Also, the level of recurrence for the same variety of cancer may
change from person to person. Conventional T1- and T2-weighted images were
used to predict out-of-field recurrence (OFR); based on this prediction, the
treatment method is chosen from among surgery, radiotherapy, and
chemotherapy.13
Fully automated quantitative segmentation of stroke lesions, with T2-weighted
MRI images as input, is a very efficient method compared with manual
segmentation.15
The SEM image is captured with a spin-echo pulse sequence consisting of a 90°
excitation pulse followed by a 180° inversion (refocusing) pulse; these
pulses are applied to the tissues in the region of interest.
The GRE image is obtained with gradient-echo sequences whose flip angle
varies over a range of 10–80°. A large flip angle gives the image more
T1-weighting, while a small flip angle gives it more T2-weighting.
Diffusion tensor imaging (DTI) is a type of MRI based on the diffusion of
water molecules in the white matter of the central nervous system (CNS).
Since DTI provides information about the structural connectivity of brain
white matter, demand for it has increased over the last two decades. Because
of limited resolution and contrast, conventional MRI techniques cannot give
information about axonal organization. DTI can, because it depends primarily
on the diffusion of water molecules: diffusion is greater along axonal
bundles than perpendicular to them, so the axonal direction can be easily
determined.
A further application lies in mapping the effects of mild traumatic brain
injury (mTBI) onto the white matter functionality of the brain over time;
this is possible with the use of DTI.1
The fractional anisotropy (FA) of water in the brain is quantified; a low FA
value indicates the occurrence of TAI. In this work, white matter bundles
shorter than 4 cm were neglected, because the clustering algorithms used in
DTI are not suitable for processing short streamlines.
Different types of MRI are used to diagnose autism spectrum disorder (ASD):
structural MRI (sMRI) can be used to study physiological characteristics,
brain functions can be studied using functional MRI (fMRI), and DTI
contributes to the diagnosis of ASD by studying brain connectivity.
ASD, commonly called autism, comprises a variety of symptoms, such as
difficulty with social interaction, impaired interpersonal skills, and
restricted and repetitive behaviors.
In DTI, water diffusion is measured in a minimum of six directions; from
these measurements, the diffusion of water molecules in any other direction
can be determined. The measurements are represented mathematically by the
diffusion tensor, a 3 × 3 matrix, which is depicted graphically as an
ellipsoid. Many characteristics can be extracted from this diffusion tensor,
especially FA, axial and radial diffusivity, and mean diffusivity (MD),
which give information about the connectivity and microstructure of white
matter. Parameters calculated from these important features, such as the
trace, skewness, and rotational invariants, can also be used to diagnose ASD
effectively.2 Image fusion improves the accuracy of diagnosis. The process
contains three important steps. The first is preprocessing, which eliminates
image artifacts resulting from improper operation of the imager and removes
non-brain tissue. The second is feature extraction, for which any efficient
atlas-based segmentation technique can be used to compute, extract, and
select features. The final step is classification, in which a linear SVM
classifier is used to distinguish ASD from typically developed (TD)
subjects. In this work, six output features were used for the determination
of anisotropy: FA, mean diffusivity (MD), axial diffusivity (AD), the radial
diffusivities along the two minor axes of the diffusion ellipsoid, and
skewness.
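The scalar measures named above follow directly from the eigenvalues of the diffusion tensor. The sketch below uses the standard definitions of FA, MD, AD, and RD on an illustrative prolate tensor (the numerical values are assumptions chosen to resemble an axon bundle, not data from this chapter):

```python
import numpy as np

def dti_metrics(D):
    """Scalar measures from a 3x3 symmetric diffusion tensor D."""
    lam = np.sort(np.linalg.eigvalsh(D))[::-1]   # eigenvalues, l1 >= l2 >= l3
    md = lam.mean()                              # mean diffusivity (trace / 3)
    ad = lam[0]                                  # axial diffusivity
    rd = (lam[1] + lam[2]) / 2.0                 # radial diffusivity
    # Fractional anisotropy: sqrt(3/2 * sum((l - MD)^2) / sum(l^2)).
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return fa, md, ad, rd

# Prolate tensor (strong diffusion along one axis), as in an axon bundle.
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])  # mm^2/s, illustrative values
fa, md, ad, rd = dti_metrics(D)
print(f"FA={fa:.2f}, MD={md:.2e}, AD={ad:.2e}, RD={rd:.2e}")
```

A high FA (close to 1) signals strongly directional diffusion, which is why low FA flags disrupted axons in the TAI discussion above.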
Also, DTI was used to detect blood–brain barrier (BBB) opening without the
use of an MRI contrast agent. In this method, diffusion-weighted images are
captured over several directions (a minimum of six) in conjunction with an
image captured without diffusion weighting in order to populate the
diffusion tensor, a three-by-three, symmetric, positive-definite matrix.3
Here, diffusion refers to the random motion of water molecules propelled by
their thermal energy. As they move, these molecules explore the neighboring
tissue at a small scale. The dependence of this displacement on tissue
structure is captured by diffusion-weighted MRI, so it is used in many
applications, such as the diagnosis of stroke, edema formation, subarachnoid
hemorrhage, and multiple sclerosis.
FA mapping can identify structural changes in axons following traumatic
brain injury.
The fMRI technique is used to detect the minute changes in blood flow that
occur during brain activity. It can reveal abnormal brain function that
cannot be detected with other imaging modalities.
In this work, resting-state fMRI (rs-fMRI) is used to determine brain
functional activity based on blood oxygen level-dependent (BOLD) signals;
ASD was then diagnosed with a combined framework of a convolutional neural
network and prototype learning.5
In this work, post-traumatic stress disorder (PTSD) is detected using
rs-fMRI data, and the most affected brain region is identified with the help
of an artificial neural network (ANN). Resting-state fMRI is very helpful
for establishing functional relationships between areas of the brain. The
ANN ranks the affected brain regions (the left and right hippocampus, the
medial prefrontal cortex, and the amygdala) by their contribution to
classification. The experimental results show that the left hippocampus is
the most strongly affected brain area in PTSD individuals.6
conventional MRI. Also, SWI has the ability to discriminate calcium from
hemorrhage, which is essential for classifying the severity of injury in
traumatic brain injury patients.
KEYWORDS
REFERENCES
1. Irimia, A.; Fan, D.; Chaudhari, N. N.; Ngo, V.; Zhang, F.; Joshi, S. H.; O'Donnell, L.
J. In Mapping Cerebral Connectivity Changes After Mild Traumatic Brain Injury in
Older Adults Using Diffusion Tensor Imaging and Riemannian Matching of Elastic
Curves, IEEE-17th International Symposium on Biomedical Imaging (ISBI), 2020; pp
1690–1693.
2. Elnakieb, Y. A.; Ali, Md. T.; Soliman, A.; Mahmoud, A. H.; Shalaby, A. M. Computer
Aided Autism Diagnosis Using Diffusion Tensor Imaging. IEEE Access 2020, 8, 191298–191308.
3. Karakatsani, M. E.; Pouliopoulos, A. N.; Liu, M.; Jambawalikar, S. R.; Konofagou, E. E.
Contrast-Free Detection of Focused Ultrasound-Induced Blood-Brain Barrier Opening
Using Diffusion Tensor Imaging. IEEE Trans. Biomed. Eng. 2021, 68 (8), 2499–2508.
4. Deng, Z.; Wang, L.; Wu, Q.; Chen, Q.; Cao, Y.; Wang, L.; Cheng, X.; Zhang, J.; Zhu, Y.
Investigation of In Vivo Human Cardiac Diffusion Tensor Imaging Using Unsupervised
Dense Encoder-Fusion-Decoder Network. IEEE Access, 2020, 8, 220140–220151.
5. Liang, Y.; Liu, B.; Zhang, H. A Convolutional Neural Network Combined With Prototype
Learning Framework for Brain Functional Network Classification of Autism Spectrum
Disorder. IEEE Access 2020, 8, 2193–2202.
6. Shahzad, M. N.; Ali, H.; Saba, T.; Rehman, A.; Kolivand, H.; Bahaj, S. A. Identifying
Patients With PTSD Utilizing Resting-State fMRI Data and Neural Network Approach.
IEEE Access 2021, 9, 107941–107954.
7. Haweel, R.; Shalaby, A.; Mahmoud, A. H.; Ghazal, Md.; Seada, N.; Ghoniemy, S.;
Casanova, M. A Novel Grading System for Autism Severity Level Using Task-Based
Functional MRI: A Response to Speech Study. IEEE Access 2021, 9, 100570–100582.
8. Candemir, C.; Gonul, A. S.; Selver, A. M. Automatic Detection of Emotional Changes
Induced by Social Support Loss using fMRI. IEEE Trans. Affect. Comput. 2021, 1–12.
9. Hwang, E. J.; Kim, S.; Jung, J. Y. Bone Marrow Radiomics of T1-Weighted Lumber
Spinal MRI to Identify Diffuse Hematologic Marrow Diseases: Comparison With
Human Readings. IEEE Access 2020, 8, 133321–133329.
10. Ge, C.; Gu, I. Y. H.; Jakola, A. S.; Yang, J. Enlarged Training Dataset by Pairwise GANs
for Molecular-Based Brain Tumor Classification. IEEE Access 2020, 8, 22560–22570.
11. Regulsk, P. A.; Zielinski, J. Multi-Step Segmentation Algorithm for Quantitative
Magnetic Resonance Imaging T2 Mapping of Ruptured Achilles Tendons. IEEE Access 2020,
8, 199995–200004.
12. Yang, T.; Liang, N.; Li, J.; Yang, Y. Intelligent Imaging Technology in Diagnosis of
Colorectal Cancer Using Deep Learning. IEEE Access 2019, 7, 178839–178847.
13. Ikushima, H.; Haga, A.; Ando, K.; Kato, S.; Yuko, K.; Uno, T. Prediction of Out-of-
Field Recurrence After Chemo Radiotherapy for Cervical Cancer Using a Combination
Model of Clinical Parameters and Magnetic Resonance Imaging Radiomics: A Multi-
institutional Study of the Japanese Radiation Oncology Study Group. J. Radiat. Res.
2022, 63 (1), 98–106.
14. Li, Y.; Zhang, L.; Chen, H.; Yang, N. Lung Nodule Detection With Deep Learning in 3D
Thoracic MR Images. IEEE Access 2019, 7, 37822–37832.
15. Liu, Z.; Cao, C.; Ding, S. Towards Clinical Diagnosis: Automated Stroke Lesion
Segmentation on Multi-Spectral MR Image Using Convolutional Neural Network.
IEEE Access 2018, 6, 57006–57016.
CHAPTER 12
A HYBRID CLUSTERING APPROACH FOR MEDICAL IMAGE SEGMENTATION
ABSTRACT
brain tumors, and inflammation of the spine. Neurosurgeons use MRI not only
to study brain anatomy but also to assess the integrity of the spinal cord
after trauma. MRI scanners generate 1500 images/second. MRI produces
high-contrast images well suited to studying soft-tissue anatomy, which is
why most of the brain tumor segmentation literature is based on MRI images.
This topic briefly reviews the various segmentation methods and how
segmentation is performed on MRI brain images. Hamamci, Kucuk, Karaman,
Engin, and Unal (2012) state that MRI brain image segmentation is a
challenging job, since the captured image is affected by magnetic noise and
other image artifacts. Hence, many segmentation methods have been
implemented for processing MRI images, but no single method is appropriate
for every image.5 Each method is suitable for certain specific images. For
instance, spatial information can be obtained from the texture features
associated with an image, whereas the intensity-based approach depends on
the gray-level histogram, which does not provide spatial information.
Segmentation based on the theory of graph cuts, by contrast, can be applied
to any type of image, gray or binary. Unsupervised fuzzy clustering finds
many applications, such as remote sensing, geology, and biomedical,
molecular, or biological imaging.
The following topics give a detailed discussion of clustering and its types.
Shijuan He et al. (2001) describe clustering as one of the simplest
unsupervised learning algorithms. It is defined as a grouping of pixels with
similar intensities without using any training images. This classification
of pixels is performed without prior information; the clustering algorithm
trains on its own, using the available data.
Clustering is a technique that partitions the input image into different
clusters by repeatedly computing the centroids, with each pixel moved to the
nearest cluster center. This is called hard clustering,9 because each pixel
is pushed into a particular cluster through continuous iteration. The
authors in Ref. [9] state that there are three common types of clustering:
• K-means clustering.
• Fuzzy c-means clustering.
• Expectation and maximization (EM algorithm).
MacQueen proposed the K-means algorithm in 1967. It falls under the category
of unsupervised algorithms. The algorithm is initiated by choosing random
values for the K cluster centers. Next, each pixel value is compared with
the centroids, and the pixel is assigned to the cluster whose center is
nearest. The same process7 is repeated, re-estimating the centroids, until
the centers converge. The algorithm steps are as follows:
Step 1: Choose random values for the C cluster centers.
Step 2: Evaluate the Euclidean distance from each pixel to every cluster
center.
Step 3: Assign every pixel to the cluster whose center is at the shortest
distance.
Step 4: The chief objective of the algorithm is to minimize the squared error

\[ J = \sum_{j=1}^{C} \sum_{i=1}^{N} \lVert X_i - V_j \rVert^2 \]

where \( \lVert X_i - V_j \rVert \) is the Euclidean distance between data
point \( X_i \) and cluster center \( V_j \). This formula measures the
distance between each data point and its cluster center.
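Steps 1–4 can be sketched for one-dimensional pixel intensities as follows. Drawing the initial centers from distinct intensity values is an implementation choice made here to keep the toy example stable, not part of the original algorithm statement:

```python
import numpy as np

def kmeans_1d(pixels, k, iters=100, seed=0):
    """Plain K-means on pixel intensities, following steps 1-4 above."""
    rng = np.random.default_rng(seed)
    # Step 1: random initial centers, drawn from distinct intensity values.
    centers = rng.choice(np.unique(pixels), size=k, replace=False).astype(float)
    for _ in range(iters):
        # Step 2: Euclidean distance from every pixel to every center.
        dist = np.abs(pixels[:, None] - centers[None, :])
        # Step 3: assign each pixel to its nearest center.
        labels = dist.argmin(axis=1)
        # Step 4: re-estimate each center; repeat until the centers converge.
        new = np.array([pixels[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Toy "image": two intensity populations (e.g., tissue vs. background).
pixels = np.concatenate([np.full(50, 40.0), np.full(50, 200.0)])
labels, centers = kmeans_1d(pixels, k=2)
print(sorted(centers))  # centers settle at 40 and 200
```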
Step 2: The data points near a particular cluster center have the largest
membership value for that center. Let \( x_i \) be a data point; its degree
of membership to cluster \( j \) is calculated as

\[ \delta_{ij} = \frac{1}{\sum_{k=1}^{C} \left( \frac{\lVert x_i - C_j \rVert}{\lVert x_i - C_k \rVert} \right)^{2/(m-1)}} \quad (12.3) \]

and the cluster centers are updated as

\[ C_j = \frac{\sum_{i=1}^{N} \delta_{ij}^{m}\, x_i}{\sum_{i=1}^{N} \delta_{ij}^{m}} \quad (12.4) \]

where \( m \) is the fuzziness index, \( C \) is the number of clusters, and
\( N \) is the number of data points.
the sole parameter is not sufficient to classify the brain tissue. When any
dissimilar structure appears, conventional FCM14 is not sufficient for
segmentation. This can be avoided by adding the spatial information of
neighboring pixels, which is used to define a probability function for each
pixel. This spatial information helps to find new membership values for each
pixel, which reduces the problems caused by noise and intensity
inhomogeneity and increases the accuracy of the result.
Consider that X = {x1, x2, x3, ..., xn} is the set of data points and
C = {c1, c2, c3, ..., cC} is the set of centers. The following two equations
are used to calculate the memberships and to update the cluster centers at
each iteration:

\[ \mu_{ij} = \frac{1}{\sum_{k=1}^{C} \left( \frac{d_{ij}}{d_{ik}} \right)^{2/(m-1)}} \quad (12.6) \]

\[ C_j = \frac{\sum_{i=1}^{n} \mu_{ij}^{m}\, x_i}{\sum_{i=1}^{n} \mu_{ij}^{m}} \quad (12.7) \]

where
dij—the distance between the ith data point and the jth cluster center
C—the number of clusters
m—the fuzziness index
µij—the membership of the ith data point in the jth cluster
n—the number of data points
Cj—the jth cluster center
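Equations (12.6) and (12.7) can be implemented by alternating the two updates until the centers stop moving. The sketch below runs on synthetic one-dimensional data; the data, the percentile-based initialization, and the stopping rule are illustrative assumptions:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=100):
    """Fuzzy c-means on 1-D data, alternating eqs. (12.6) and (12.7)."""
    # Deterministic, spread-out initialization (an implementation choice).
    centers = np.percentile(x, np.linspace(25, 75, c))
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12        # d_ij
        # Eq. (12.6): mu_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        mu = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)),
                          axis=2)
        # Eq. (12.7): C_j = sum_i mu_ij^m x_i / sum_i mu_ij^m.
        new = (mu ** m).T @ x / (mu ** m).sum(axis=0)
        if np.allclose(new, centers):
            break
        centers = new
    return mu, centers

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(50, 5, 100), rng.normal(150, 5, 100)])
mu, centers = fuzzy_c_means(x)
print(np.sort(centers))  # one center near 50, the other near 150
```

Unlike hard K-means, every point keeps a graded membership in every cluster; the memberships in each row of `mu` sum to 1.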
The following topic briefly discusses18 the results of various segmentation
algorithms: K-means clustering, adaptive K-means clustering, and spatial
fuzzy c-means clustering, as used for segmentation of brain images. The
performance at the various stages of the K-means and adaptive K-means
algorithms is compared in terms of accuracy, time, PSNR, and area. The
sample brain images used for segmentation were acquired from a hospital.
The next topic discusses the various results of the K-means algorithm.
12.5 CONCLUSION
KEYWORDS
• segmentation
• clustering
• computed tomography (CT)
• magnetic resonance imaging (MRI)
• K-means
• fuzzy c-means
A Hybrid Clustering Approach for Medical Image Segmentation 199
REFERENCES
1. Banerjee, A.; Maji, P. Rough Sets and Stomped Normal Distribution for Simultaneous
Segmentation and Bias Field Correction in Brain MR Images. IEEE Trans. Image
Process. 2015, 24 (12), 5764–5776.
2. Gooya, A.; Biros, G.; Davatzikos, C. Deformable Registration of Glioma Images Using
EM Algorithm and Diffusion Reaction Modeling. IEEE Trans. Med. Imaging 2011, 30
(2), 375–389.
3. Ism, A.; Direkoglu, C.; Sah, M. In Review of MRI Based Brain Tumor Image Segmentation
Using Deep Learning Methods, Proceedings of 12th International Conference on
Application of Fuzzy Systems and Soft Computing; Vienna, Austria, Aug 29–30, 2016.
4. Roniotis, A.; Manikis, G. C.; Sakkalis, V.; Zervakis, M. E.; Karatzanis, I.; Marias,
K. High Grade Glioma Diffusive Modeling Using Statistical Tissue Information and
Diffusion Tensors Extracted from Atlases. IEEE Trans. Inf. Technol. Biomed. 2012, 16
(2), 255–263
5. Asanambigai, V.; Sasikala, J. Adaptive Chemical Reaction Based Spatial Fuzzy
Clustering for Level Set Segmentation of Medical Images. Ain Shams Eng. J. 2016, 9
(3), 459–467.
6. Islam, A.; Syed, M. S.; Khan, M. I. Multifractal Texture Estimation for Detection and
Segmentation of Brain Tumors. IEEE Trans. Biomed. Eng. 2013, 60 (11), 3204–3215.
7. Arizmendi, C.; Daniel, A. S.; Alfredo, V.; Enrique, R. Automated Classification of Brain
Tumours from Short Echo Time In Vivo MRS Data Using Gaussian Decomposition and
Bayesian Neural Networks. Expert Syst. Appl. 2014, 41 (11), 5296–5307.
8. Chen, L.; Weng, Z.; Yoong, L.; Garland, M. An Efficient Algorithm for Automatic
Phase Correction of NMR Spectra Based on Entropy Minimization. J. Magn. Reson.
2002, 158 (1), 164–168.
9. Eman, A. M.; Mohammed, E.; Rashid, A. L. Brain Tumor Segmentation Based on a
Hybrid Clustering Technique. Egypt. Inform. J. 2015, 16 (1), 71–81.
10. Xing, F.; Xie, Y.; Yang, L. Automatic Learning-Based Framework for Robust Nucleus
Segmentation. IEEE Trans. Med. Imaging 2016, 35 (2), 550–566.
11. Hai, S.; Xing, F.; Yang, L. Robust Cell Detection of Histopathological Brain Tumor
Images Using Sparse Reconstruction and Adaptive Dictionary Selection. IEEE Trans.
Med. Imaging 2016, 35 (6), 1575–1586.
12. Kalbkhani, H.; Mahrokh, G. S.; Behrooz, Z. V. Robust Algorithm for Brain Magnetic
Resonance Image (MRI) Classification Based on GARCH Variances Series. Biomed.
Signal Process. Control 2013, 8 (6), 909–919.
13. Yao, J.; Chen, J.; Chow, C. Breast Tumor Analysis in Dynamic Contrast Enhanced MRI
Using Texture Features and Wavelet Transform. IEEE J. Select. Top. Signal Process.
2009, 3 (1), 94–100
14. Jainy, S.; Kumarb, V.; Gupta, I.; Khandelwalc, N.; Kamal, C. A Package SFERCB
Segmentation, Feature Extraction, Reduction and Classification Analysis by Both SVM
and ANN for Brain Tumors. Appl. Soft Comput. 2016, 47, 151–167.
15. Jothi, G.; Inbarani, H. H. Hybrid Tolerance Rough Set Firefly Based Supervised
Featureselection for MRI Brain Tumor Image Classification. Appl. Soft Comput. 2016,
46, 639–651.
16. Sallemi, L.; Njeh, I.; Lehericy, S. Towards a Computer Aided Prognosis for Brain
Glioblastomas Tumor Growth Estimation. IEEE Trans. Nanobiosci. 2015, 14 (7),
727–733.
17. Valarmathy, G.; Sekar, K.; Balaji, V. An Automated Framework to Segment and Classify
Gliomas Using Efficient Segmentation and Classification. Int. J. Adv. Sci. Technol.
2020, 29 (10S), 7539–754.
18. Valarmathy, G.; Sekar, K.; Balaji, V. An Automated Framework to Segment and Classify
Gliomas Using Efficient Shuffled Complex Evolution Convolutional Neural Network.
J. Med. Imag. Health Inf. 2021, 11, 2765–2770.
19. Malathi, M.; Sujatha, K.; Sinthia, P. Brain Tumour Segmentation Using Clustering And
EM Segmentation. Int. J. Appl. Eng. Res. 2015, 10 (11), 29105–29119.
20. Malathi, M.; Sujatha, K.; Sinthia, P. Detection and Classification of Brain Tumour using
Back Propagation Algorithm. Int. J. Control Theory Appl. 2016, 9 (24), 299–306.
CHAPTER 13
APPROACHES FOR ANALYZING DENTAL IMAGES
ABSTRACT
13.1 INTRODUCTION
X-ray images acquired for dental purposes can be classified as intraoral or
extraoral: the former are acquired within the mouth and the latter outside
the mouth. The majority of the survey work has been carried out on intraoral
images via threshold-based segmentation. An extraoral dataset is discussed
in Ref. [1], with a review of various segmentation algorithms. The oral
cavity has been analyzed for malignant and benign lesions using deep
learning techniques, specifically for dentigerous cysts.2
Otsu thresholding divides an image into two groups of pixels based on the
variance between the classes.18 Huang thresholding incorporates fuzzy
measures of object attributes for segmenting an image.19 Both thresholding
algorithms, applied to the image dataset, show unbiased results for the
mandible region in sample 1.
Section 13.2 deals with a literature survey of dental imaging, and
segmentation. Section 13.3 deals with the steps involved in the proposed
system development. Section 13.4 deals with statistical and medical imaging
algorithms. Section 13.5 concludes the overall work with future scope.
images, and the patients' ages were more than 20 years in Ref. [13] to avoid
deciduous teeth. Demographic details for ages less than 20 years were
considered in Ref. [14]. Asymmetric mandibles exist whose three-dimensional
shape is difficult to analyze via conventional techniques. Further, the
study states the influence of landmarks and angular measurements in
assessing the morphology.15
An extensive review of dental X-rays is given in Ref. [16]; it traces the
evolution of image processing, deep learning, and machine learning
practices. Irregular intensity variation can in some cases degrade image
quality. The works discussed in Ref. [17] state that abnormalities in
acquired medical images have to be visualized via image processing in order
to extract their features. A Canny edge detector has been used on DICOM
images to pinpoint exact boundaries without loss of features. Examining
patients based on verbal exchanges has been enhanced using deep learning
techniques.21 The support vector technique is stated to provide more
accuracy for diagnostic and prognostic features.
13.3 METHODOLOGY
Panoramic images were obtained from the imaging archive13 together with a
segmented mandible region. Length measurement across the mandible region was
performed between the inferior border of the mandible and the superior
border of the "alveolar" for each image.
Then, Otsu thresholding was applied using the ImageJ plugin. Otsu's method
finds the single intensity value that separates the pixels into two classes
such that the within-class variance is minimal.
Subsequently, Huang thresholding was applied, based on a measure of
fuzziness derived from the Shannon entropy function.
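Otsu's search for the threshold that minimizes within-class (equivalently, maximizes between-class) variance can be sketched directly from the gray-level histogram. The synthetic image below is an illustrative stand-in for a radiograph with a bright mandible-like region; the ImageJ plugin itself is not reproduced here:

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive Otsu: pick the gray level maximizing between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()        # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0  # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Synthetic radiograph-like image: dark background, bright "mandible" blob.
rng = np.random.default_rng(0)
img = rng.normal(60, 10, (128, 128))
img[40:90, 30:100] = rng.normal(180, 10, (50, 70))
t = otsu_threshold(np.clip(img, 0, 255))
mask = img > t  # binary segmentation, analogous to the thresholded output
print("threshold:", t)
```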
Figure 13.2 shows the full image with the mandible region included. Figure
13.3 is the segmented image available in the dataset from dentist 1. Figure
13.4 is the segmented image available in the dataset from dentist 2.
FIGURE 13.5 Segmented mandible length measured manually, indicated via yellow lines.
FIGURE 13.6 Segmented mandible length measured manually, indicated via yellow lines.
In Figure 13.6, a length of 579 was measured between the inferior border of
the mandible and the superior border of the "alveolar" for the image
obtained from dentist 2.
FIGURE 13.9 The threshold value of the image 2 via Otsu thresholding.
Approaches for Analyzing Dental Images 209
Figure 13.11 shows the whole image with the mandible region included.
Figure 13.12 shows the segmented image available in the dataset from
dentist 1 for the original image shown in Figure 13.11. Figure 13.13 shows
the segmented image available in the dataset from dentist 2 for the original
image shown in Figure 13.12.
Figure 13.14 shows the edge-detection region of interest for the dentist 1
image shown in Figure 13.12. Figure 13.15 uses the Huang threshold, whose
maximum value lies at 6 for interpretation.
FIGURE 13.15 Huang threshold method for the edge-detection image of the mandible region.
Figure 13.17 shows the Huang threshold for dentist image 2, where the
maximum value again lies at 6.
13.5 CONCLUSIONS
Two samples of oral panoramic X-ray images were taken for analysis and
processed using mandible regions segmented by experts. In the first case,
the image was measured between the inferior border and the superior border
of the mandible region. The work was further analyzed by thresholding the
region via Huang- and Otsu-based analysis. In sample 2, the image obtained
was processed via edge detection and analyzed with subsequent Huang-based
thresholding. Future work will incorporate the impact of age and the related
dosage associated with volumetric analysis of the mandible region.
KEYWORDS
• thresholding
• edge detection
• intraoral
• extraoral
• segmentation algorithms
REFERENCES
1. Silva, G.; Oliveira, L.; Pithon, M. Automatic Segmenting Teeth in X-ray Images:
Trends, a Novel Data Set, Benchmarking and Future Perspectives. Expert Syst. App.
2018, 107, 15–31.
2. Yang, H.; Jo, E.; Kim, H. J.; Cha, I. H.; Jung, Y. S.; Nam, W.; Kim, D. Deep Learning
for Automated Detection of Cyst and Tumors of the Jaw in Panoramic Radiographs. J.
Clin. Med. 2020, 9 (6), 1839.
3. Akarslan, Z. Z.; Akdevelioglu, M.; Gungor, K.; Erten, H. A Comparison of the
Diagnostic Accuracy of Bitewing, Periapical, Unfiltered and Filtered Digital Panoramic
Images for Approximal Caries Detection in Posterior Teeth. Dentomaxillofacial Radiol.
2008, 37 (8), 458–463.
4. Akarslan, Z. Z.; Peker, I. Advances in Radiographic Techniques Used in Dentistry;
IntechOpen, 2015; Chapter 34.
5. Hung, K.; Montalvao, C.; Tanaka, R.; Kawai, T.; Bornstein, M. M. The Use and
Performance of Artificial Intelligence Applications in Dental and Maxillofacial
Radiology: A Systematic Review. Dentomaxillofacial Radiol. 2020, 49 (1), 20190107.
6. Cavalcanti, M. D. G. P.; Ruprecht, A.; Vannier, M. W. 3D Volume Rendering Using
Multislice CT for Dental Implants. Dentomaxillofacial Radiol. 2002, 31 (4), 218–223.
7. Kanuri, N.; Abdelkarim, A. Z.; Rathore, S. A. Trainable WEKA (Waikato Environment
for Knowledge Analysis) Segmentation Tool: Machine-Learning-Enabled Segmentation
on Features of Panoramic Radiographs. Cureus 2022, 14 (1).
8. Salih, O.; Duffy, K. J. The Local Ternary Pattern Encoder–Decoder Neural Network for
Dental Image Segmentation. IET Image Process. 2022, 1–11. https://doi.org/10.1049/
ipr2.12416.
9. Park, J.; Lee, J.; Moon, S.; Lee, K. Deep Learning Based Detection of Missing Tooth
Regions for Dental Implant Planning in Panoramic Radiographic Images. Appl. Sci.
2022, 12 (3), 1595.
10. van der Stelt, P. F. From Pixel to Image Analysis. Dentomaxillofacial Radiol. 2021, 50
(2), 20200305.
11. Nafi'iyah, N.; Fatichah, C.; Astuti, E. R.; Herumurti, D. The Use of Pre and Post
Processing to Enhance Mandible Segmentation using Active Contours on Dental
Panoramic Radiography Images. In 2020 3rd International Seminar on Research of
Information Technology and Intelligent Systems (ISRITI); IEEE, Dec 2020; pp 661–666.
12. Loubele, M.; Jacobs, R.; Maes, F.; Denis, K.; White, S.; Coudyzer, W.; Suetens, P.
Image Quality vs Radiation Dose of Four Cone Beam Computed Tomography Scanners.
Dentomaxillofacial Radiol. 2008, 37 (6), 309–319.
13. Abdi, A. H.; Kasaei, S.; Mehdizadeh, M. Automatic Segmentation of Mandible in
Panoramic X-ray. J. Med. Imag. 2015, 2 (4), 044003.
14. Chuang, Y. J.; Doherty, B. M.; Adluru, N.; Chung, M. K.; Vorperian, H. K. A Novel
Registration-Based Semi-Automatic Mandible Segmentation Pipeline Using Computed
Tomography Images to Study Mandibular Development. J. Comput. Assist. Tomogr.
2018, 42 (2), 306.
15. Inoue, K.; Nakano, H.; Sumida, T.; Yamada, T.; Otawa, N.; Fukuda, N.; Mori, Y. A
Novel Measurement Method for the Morphology of the Mandibular Ramus Using
Homologous Modelling. Dentomaxillofacial Radiol. 2015, 44 (8), 20150062.
16. Kumar, A.; Bhadauria, H. S.; Singh, A. Descriptive Analysis of Dental X-ray Images
Using Various Practical Methods: A Review. PeerJ Comput. Sci. 2021, 7, e620.
17. Chikmurge, D.; Harnale, S. Feature Extraction of DICOM Images Using Canny Edge
Detection Algorithm. In International Conference on Intelligent Computing and
Applications; Springer: Singapore, 2018; pp 185–196.
18. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst.
Man Cybern. 1979, 9 (1), 62–66.
19. Huang, L. K.; Wang, M. J. J. Image Thresholding by Minimizing the Measures of
Fuzziness. Pattern Recogn. 1995, 28 (1), 41–51.
20. Fadili, A.; Alehyane, N.; Halimi, A.; Zaoui, F. An Alternative Approach to Assessing
Volume-of-Interest Accuracy Using CBCT and ImageJ Software: In Vitro Study. Adv.
Radiol. 2015.
21. Menon, N. G.; Shrivastava, A.; Bhavana, N. D.; Simon, J. Deep Learning Based
Transcribing and Summarizing Clinical Conversations. In 2021 Fifth International
Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC); IEEE,
2021; pp 358–365.
CHAPTER 14
AN INVESTIGATION ON DIABETES
USING MULTILAYER PERCEPTRON
J. SURENDHARAN,1 A. KARUNAMURTHY,2 R. PRAVEENA,3 and K. SHEBAGADEVI3
1HKBK College of Engineering, Bengaluru, Karnataka, India
2BWDA Arts and Science College, Villupuram, India
3Department of ECE, Muthayammal Engineering College, Namakkal, India
ABSTRACT
14.1 INTRODUCTION
of 768 data points, 500 of which are free of diabetes and 268 of which are
positive for the presence of diabetes.
According to the research history, a variety of ML algorithms has been
applied to this dataset for disease classification, none of which has
reached an accuracy of more than 76%. As a result, we came up with the idea
of improving performance with an ensemble rather than with individual
models. The subject of this research is ML models, and it investigates their
performance, theory, and attributes in greater depth.
Scientists have adopted the classification approach in place of the
regression strategy for making disease predictions. Its performance has been
assessed using the accuracy, precision, recall, and F1-score of the
aforementioned algorithm as measures of its effectiveness.
study, which used data from the Pima Indian diabetes study. The current
state of the art, as well as the level of accuracy obtained, is notably
absent from the document.
Choubey et al.18 conducted a study in which they compared several
diabetes classification techniques. The datasets utilized were a local diabetes
dataset and Pima Indian datasets. Feature engineering was carried out using
principal component analysis (PCA) and linear discriminant analysis (LDA),
both of which were shown to be beneficial in boosting the accuracy of the
classification method and removing undesired features from the dataset.
Maniruzzaman et al.19 developed an ML paradigm to identify and predict
diabetes. They employed four ML techniques for the classification of
diabetes: Naive Bayes, decision trees, AdaBoost, and random forests. They
also used three alternative partition techniques across the 20 trials they
conducted. The researchers used data on both diabetic and nondiabetic
patients from the National Health and Nutrition Survey (NHNS) to put their
innovative technique through its paces.
Ahuja et al.20 conducted an examination of ML algorithms, including neural
networks, deep learning, and multilayer perceptrons (MLPs), on the Pima dataset
for diabetic classification. In this comparison, the MLP was determined to be
superior to the other classifiers. According to the authors, fine-tuning and
efficient feature engineering can help to increase the performance of MLP
algorithms. Recent research by Mohapatra et al.21 has demonstrated the use of
the MLP to classify diabetes.
Singh and Singh22 proposed an ensemble method for the prediction of type 2
diabetes to improve accuracy. The Pima dataset from the University of
California, Irvine Machine Learning Repository was used in this work. The four
base learners of the stacking ensemble were trained using the bootstrap
approach with cross-validation. However, neither variable selection nor a
comparison with the current state of the art is mentioned.
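A stacking ensemble of this shape can be sketched with scikit-learn on synthetic stand-in data; the four base learners and all hyperparameters below are illustrative assumptions, not the configuration of Singh and Singh:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in shaped like Pima: 768 rows, 8 predictors, binary outcome
X, y = make_classification(n_samples=768, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(max_depth=4)),
                ("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(n_estimators=100))],
    final_estimator=LogisticRegression(),
    cv=5,  # out-of-fold base-learner predictions train the meta-learner
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 3))
```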
Kumari et al.23 created a soft-computing-based diabetes prediction system built
on an ensemble of three commonly used supervised ML algorithms, using the Pima
datasets in their investigation. When they compared their system's performance
to that of state-of-the-art individual and ensemble approaches, they discovered
that it outperformed them, reaching 79%.
To forecast diabetes in its early or onset stage, Islam et al.24 used a
combination of techniques. Both cross-validation and percentage splits were
employed for training in this study. They collected
An Investigation on Diabetes Using Multilayer Perceptron 219
data from 529 Bangladeshi patients, both diabetic and nondiabetic, using
questionnaires administered at a hospital in the country. The experimental
results reveal that the random forest algorithm outperforms the others by a
significant margin. However, there is neither a comparison with the present
state of the art nor clear reporting of the accuracy attained in this study.
The use of ML approaches to predict early and ongoing DM in females has been
demonstrated in several studies.25 These employed typical ML methods to
construct a framework for predicting diabetes and better understanding the
disease.
Hussain and Naaz26 performed a comprehensive evaluation of the literature on ML
models for diabetes prediction published between 2010 and 2019. They evaluated
the algorithms based on the Matthews correlation coefficient and discovered
that Naive Bayes and random forests outperformed the other algorithms in terms
of overall performance.
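The Matthews correlation coefficient used in that evaluation can be sketched directly from binary confusion counts (the counts below are illustrative, not from the cited review):

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from binary confusion counts:
    +1 is perfect prediction, 0 is chance level, -1 is total disagreement."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(round(mcc(tp=50, fp=10, tn=45, fn=5), 3))  # 0.73
```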
14.3.1 CLASSIFICATION
To put the proposed diabetes classification system to the test, the Pima
Indian diabetes dataset is employed. A comparative study is also carried out
against the most up-to-date computational techniques. On the basis of the
experimental results, the suggested method outperforms the currently available
algorithms in terms of performance. This section contains subsections devoted
to defining the dataset and the performance indicators and to conducting the
comparative study.
14.4.1 DATASET
This study made use of the Pima Indian diabetes dataset. The aim is to create
an intelligent model that predicts whether a person has diabetes based on the
metrics contained in this dataset. Diabetes classification is thus posed as a
binary classification problem. The variables are shown in Table 14.1.
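As a hedged illustration of such a binary classifier, the sketch below trains an MLP on a synthetic stand-in for the 8-attribute Pima data with scikit-learn; the data, layer sizes, and settings are assumptions for illustration, not the chapter's actual model:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-in shaped like Pima: 768 rows, 8 predictors, binary outcome
X, y = make_classification(n_samples=768, n_features=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

mlp = make_pipeline(
    StandardScaler(),  # MLPs are sensitive to feature scale
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=1),
)
mlp.fit(X_tr, y_tr)
print(round(mlp.score(X_te, y_te), 3))
```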
222 Computational Imaging and Analytics in Biomedical Engineering
14.5 CONCLUSIONS
In this paper, the authors proposed a model for supporting the healthcare
business. The study developed an algorithm for the classification of diabetes
that was based on MLPs. The primary purpose of the proposed system is to
aid users in keeping track of their vital signs through the use of their mobile
phones and other mobile devices. Users will be able to recognize their elevated
risk of diabetes at an earlier stage as a result of the model projections about
future blood glucose levels, which is an extra benefit. Diabetic patients are
classified and predicted using MLP. The proposed methodologies are tested
on the Pima Indian diabetes dataset, which is available online. In terms of
accuracy, the two approaches outperform existing best practices by 86.083
and 87.26%, respectively, when compared to current best practices.
KEYWORDS
• fatal diseases
• diabetes mellitus
• prediction
• classification
REFERENCES
2. Babalola, O. M.; Garcia, T. J.; Sefcik, E. F.; Peck, J. L. Improving Diabetes Education in
Mexican American Older Adults. J. Transcult. Nurs. 2021, 32 (6), 799–809.
3. Zaccardi, F.; Kloecker, D. E.; Buse, J. B.; Mathieu, C.; Khunti, K.; Davies, M. J. Use of
Metformin and Cardiovascular Effects of New Classes of Glucose-Lowering Agents: A
Meta-Analysis of Cardiovascular Outcome Trials in Type 2 Diabetes. Diab. Care 2021,
44 (2), e32–e34.
4. Kannan, S.; Idrees, B. A.; Arulprakash, P.; Ranganathan, V.; Udayakumar, E.; Dhinakar,
P. Analysis of Convolutional Recurrent Neural Network Classifier for COVID-19
Symptoms Over Computerised Tomography Images. Int. J. Comput. App. Technol.
2021, 66 (3–4), 427–432.
5. Saravanan, D.; Surendiran, J. A New Framework for Video Data Retrieval Using
Hierarchical Clustering Technique. Int. J. Eng. Adv. Technol. (IJEAT) ISSN: 2249-8958
2019, 8 (6S3).
6. Ojugo, A. A.; Ekurume, E. Predictive Intelligent Decision Support Model in Forecasting
of the Diabetes Pandemic Using a Reinforcement Deep Learning Approach. Int. J. Educ.
Manag. Eng 2021, 11 (2), 40–48.
7. Khunti, K.; Knighton, P.; Zaccardi, F.; Bakhai, C.; Barron, E.; Holman, N. et al.
Prescription of Glucose-Lowering Therapies and Risk of COVID-19 Mortality in
People with Type 2 Diabetes: A Nationwide Observational Study in England. Lancet
Diab. Endocrinol. 2021, 9 (5), 293–303.
8. Syed, S. A.; Sheela Sobana Rani, K.; Mohammad, G. B.; Chennam, K. K.; Jaikumar, R.;
Natarajan, Y. et al. Design of Resources Allocation in 6G Cybertwin Technology Using
the Fuzzy Neuro Model in Healthcare Systems. J. Healthcare Eng. 2022.
9. Greiver, M.; Havard, A.; Bowles, J. K.; Kalia, S.; Chen, T.; Aliarzadeh, B. et al. Trends
in Diabetes Medication use in Australia, Canada, England, and Scotland: A Repeated
Cross-Sectional Analysis in Primary Care. Br. J. Gen. Pract. 2021, 71 (704), e209–e218.
10. Vidhya, R. G.; Batri, K. Segmentation Classification of Breast Cancer Using a Krill
Herd Optimization. Med. Imaging Health Inf. 2020, 10 (6), 1294–1300.
11. Anandaraj, S. P.; Kirubakaran, N.; Ramesh, S.; Surendiran, J. Efficient Way to Detect
Bone Cancer Using Image Segmentation. Int. J. Eng. Adv. Technol. (IJEAT), ISSN:
2249-8958, 2019, 8 (6S3).
12. Ferrari, M.; Speight, J.; Beath, A.; Browne, J. L.; Mosely, K. The Information-
Motivation-Behavioral Skills Model Explains Physical Activity Levels for Adults with
Type 2 Diabetes Across all Weight Classes. Psychol. Health Med. 2021, 26 (3), 381–394.
13. Shah, N.; Karguppikar, M.; Bhor, S.; Ladkat, D.; Khadilkar, V.; Khadilkar, A. Impact of
Lockdown for COVID-19 Pandemic in Indian Children and Youth with Type 1 Diabetes
from Different Socio-Economic Classes. J. Pediatric Endocrinol. Metabol. 2021, 34
(2), 217–223.
14. Mansi, I. A.; Chansard, M.; Lingvay, I.; Zhang, S.; Halm, E. A.; Alvarez, C. A.
Association of Statin Therapy Initiation with Diabetes Progression: A Retrospective
Matched-Cohort Study. JAMA Intern. Med. 2021, 181 (12), 1562–1574.
15. Bouyahya, A.; El Omari, N.; Elmenyiy, N.; Guaouguaou, F. E.; Balahbib, A.; Belmehdi,
O. et al. Moroccan Antidiabetic Medicinal Plants: Ethnobotanical Studies, Phytochemical
Bioactive Compounds, Preclinical Investigations, Toxicological Validations and Clinical
Evidences; Challenges, Guidance and Perspectives for Future Management of Diabetes
Worldwide. Trends Food Sci. Technol. 2021, 115, 147–254.
16. Saritha, G.; Saravanan, T.; Anbumani, K.; Surendiran, J. Digital Elevation Model and
Terrain Mapping Using LiDAR. Mater. Today: Proc. 2021, 46 (9), 3979–3983. ISSN
2214-7853.
17. Vidhya, R. G.; Saravanan, G.; Rajalakshmi, K. Mitosis Detection for Breast Cancer
Grading. Int. J. Adv. Sci. Technol. 2020, 29 (3), 4478–4485; Gupta, S.; Verma, H.
K.; Bhardwaj, D. Classification of Diabetes Using Naive Bayes and Support Vector
Machine as a Technique. In Operations Management and Systems Engineering;
Springer: Singapore, 2021; pp 365–376.
18. Choubey, D. K.; Kumar, M.; Shukla, V.; Tripathi, S.; Dhandhania, V. K. Comparative
Analysis of Classification Methods with PCA and LDA for Diabetes. Curr. Diab. Rev.
2020, 16 (8), 833–850.
19. Satheeshwaran, U.; Sreekanth, N.; Surendiran, J. X-ray CT Reconstruction by Using
Spatially Non Homogeneous ICD Optimization. Int. J. Eng. Adv. Technol. (IJEAT),
ISSN: 2249-8958, 2019, 8 (6S3).
20. Ahuja, R.; Sharma, S. C.; Ali, M. A Diabetic Disease Prediction Model Based on
Classification Algorithms. In Annals of Emerging Technologies in Computing (AETiC),
Print ISSN, 2019; pp 2516–0281.
21. Mohapatra, S. K.; Swain, J. K.; Mohanty, M. N. Detection of Diabetes Using Multilayer
Perceptron. In International Conference on Intelligent Computing and Applications;
Springer: Singapore, 2019; pp 109–116.
22. Singh, N.; Singh, P. Stacking-Based Multi-Objective Evolutionary Ensemble Framework
for Prediction of Diabetes Mellitus. Biocybern. Biomed. Eng. 2020, 40 (1), 1–22.
23. Surendiran, J.; Saravanan, S. V.; Elizabeth Catherine, F. Glaucoma Detection Using
Fuzzy C- Mean (FCM). IJPT 2016, 8 (3), 16149–16163.
24. Islam, M. M.; Ferdousi, R.; Rahman, S.; Bushra, H. Y. Likelihood Prediction of Diabetes
at Early Stage Using Data Mining Techniques. In Computer Vision and Machine
Intelligence in Medical Image Analysis; Springer: Singapore, 2020; pp 113–125.
25. Malik, S.; Harous, S.; El-Sayed, H. Comparative Analysis of Machine Learning
Algorithms for Early Prediction of Diabetes Mellitus in Women. In International
Symposium on Modelling and Implementation of Complex Systems; Springer: Cham,
2020; pp 95–106.
26. Hussain, A.; Naaz, S. Prediction of Diabetes Mellitus: Comparative Study of Various
Machine Learning Models. In International Conference on Innovative Computing and
Communications; Springer: Singapore, 2021; pp 103–115.
CHAPTER 15
DERMOSCOPIC IMPLEMENTATION
AND CLASSIFICATION ON
MELANOMA DISEASE USING
GRADIENT BOOST CLASSIFIER
B. BALAKUMAR1, K. SAKTHI MURUGAN2, N. SURESHKUMAR3,
A. PURUSHOTHAMAN4
1
Centre for Information Technology and Engineering, Manonmaniam
Sundaranar University, Tirunelveli, India
2
Department of ECE, PSN College of Engineering and Technology,
Tirunelveli, India
3
Department of ECE, Muthayammal College of Engineering,
Rasipuram, India
4
Department of ECE, Hindhusthan Institute of Technology, Coimbatore,
India
ABSTRACT
Melanoma is a form of skin cancer that develops when melanocytes (the cells
that give the skin its tan or brown color) grow out of control. Cancer starts
when cells in the body grow out of control; cells in virtually any part of the
body can become cancerous and then spread to other parts of the body.
Melanoma is considerably less common than certain other skin cancers.
However, melanoma is harmful because it travels to other parts of the body
quickly if not detected and handled early. In this study, we propose a deep learning
15.1 INTRODUCTION
Melanoma is one of the deadliest types of skin cancer. Diagnosed early, the
disease is curable, although only professionally qualified specialists can
reliably diagnose it. Where specialist resources are insufficient, computer
systems that can classify skin lesions can save lives, decrease unwanted
biopsies, and reduce rising costs. To this end, we propose a framework that
combines recent advances in deep learning with existing machine learning
methods, creating an ensemble of methods that segment skin lesions and
analyze the detected region and the surrounding tissue to detect melanoma.
Skin diseases are very common in people's daily lives.1
Millions of individuals in the United States suffer from various forms of
skin conditions annually. Diagnosing skin disorders requires a high degree of
expertise in their varied visual aspects. Because human judgment is often
subjective and hard to replicate, a computer-aided diagnostic system is
desirable for a more accurate and efficient diagnosis. In this2 article, we
investigate the viability of developing a deep convolutional neural network
(CNN) as a standardized diagnostic method for skin diseases.
One in five Americans will be treated for a cutaneous malignancy in their
lifetime. While melanomas constitute less than 5% of all skin cancers in the
United States, they account for nearly 75% of all deaths due to skin cancer,
with more than 10,000 deaths recorded in the United States alone per year.
Early detection is important, as the average 5-year melanoma survival rate
decreases from 99% in the earliest stages to around 14% in the latest stage.
We also3 established a statistical approach that can proactively map skin
lesions, helping physicians and patients diagnose cancer sooner.
Second, many active learning (AL) acquisition functions depend on model
uncertainty, but such uncertainty is not always captured by deep learning
methods. In this article, we incorporate recent developments in Bayesian5
deep learning into the active learning setting in a practical way. With very
limited prior literature to draw on, we design an active learning system for
high-dimensional data, a problem that has been incredibly challenging to date.
Dermoscopic Implementation and Classification 231
will have a redundant 33%, NSCT is being used. The sample architecture
diagram is shown in Figure 15.1.
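The chapter's classification stage — a gradient boost classifier over lesion features — can be sketched as follows. The synthetic feature vectors (stand-ins for NSCT-derived texture features) and all hyperparameters are illustrative assumptions, not the chapter's actual configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# stand-in feature vectors, e.g. texture features extracted per lesion
X, y = make_classification(n_samples=400, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# gradient boosting: an additive ensemble of shallow trees fit to residuals
gb = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
gb.fit(X_tr, y_tr)
print(round(gb.score(X_te, y_te), 3))
```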
15.4 CONCLUSIONS
KEYWORDS
• dermoscopic
• NSCT
• deep learning
• gradient boost
• accuracy
• specificity
REFERENCES
1. Codella, N. C.; Nguyen, Q. B.; Pankanti, S.; Gutman, D. A.; Helba, B.; Halpern, A.
C.; Smith, J. R. Deep Learning Ensembles for Melanoma Recognition in Dermoscopy
Images. IBM J. Res. Dev. 2017, 61 (4/5), 5–1.
2. Liao, H. A Deep Learning Approach to Universal Skin Disease Classification; University
of Rochester Department of Computer Science, CSC.
3. Esteva, A.; Kuprel, B.; Novoa, R. A.; Ko, J.; Swetter, S. M.; Blau, H. M.; Thrun, S.
Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks. Nature
2017, 542 (7639), 115–118.
4. Sonia, R. Melanoma Image Classification System by NSCT Features and Bayes
Classification. Int. J. Adv. Signal Image Sci. 2016, 2 (2), 27–33.
5. Gal, Y.; Islam, R.; Ghahramani, Z. Deep Bayesian Active Learning with Image Data.
Proc. 34th Int. Conf. Mach. Learn. 2017, 70, 1183–1192.
6. Premaladha, J.; Ravichandran, K. S. Novel Approaches for Diagnosing Melanoma Skin
Lesions Through Supervised and Deep Learning Algorithms. J. Med. Syst. 2016, 40 (4),
96.
7. Jafari, M. H.; Karimi, N.; Nasr-Esfahani, E.; Samavi, S.; Soroushmehr, S. M. R.; Ward,
K.; Najarian, K. Skin Lesion Segmentation in Clinical Images Using Deep Learning.
In 2016 23rd International conference on pattern recognition (ICPR); IEEE, 2016; pp
337–342.
8. Masood, A.; Al-Jumaily, A.; Anam, K. Self-Supervised Learning Model for Skin Cancer
Diagnosis. In 2015 7th International IEEE/EMBS Conference on Neural Engineering
(NER); IEEE, 2015; pp 1012–1015.
CHAPTER 16
ABSTRACT
16.1 INTRODUCTION
Lung cancer is a prevalent disease that must not be missed, as late diagnosis
induces high mortality. CT is now available to support clinicians in
diagnosing early-stage lung cancer.2 The diagnosis of lung cancer also depends
on the doctor's expertise, so certain patients can be missed, triggering
problems. Deep learning has proven to be a common and solid approach in many
fields of medical imaging. Three forms of deep neural network (CNN, DNN, and
SAE) are applied in this paper to the classification of lung cancer; these
networks perform the recognition function on CT images. One of the main
methods used by pathologists for evaluating the stage, types, and subtypes of
lung cancer is the visual analysis of histopathological slides of lung cell
tissue. In this research, we trained a deep convolutional neural network
(CNN) model (Inception v3).
This model1 was used to classify whole-slide pathology images into
adenocarcinoma, squamous cell carcinoma, and normal lung tissue from The
Cancer Genome Atlas (TCGA). Deep learning is considered a popular and powerful
method for pattern recognition and classification. However, in the area of
medical diagnostic imagery there are not many mature implementations, because
broad databases are not often available for medical images. In this3 research,
we checked whether deep learning algorithms are feasible for the diagnosis of
lung cancer.
Lung cancer is one of the world's main causes of death. For appropriate
clinical treatment, an accurate distinction among the types of lung cancer
(adenocarcinoma, squamous cell carcinoma, and small cell carcinoma) is
essential. Nonetheless, even as image quality increases, diagnosis reliability
remains complicated.4 We therefore established an automated classification
method for lung cancers in this analysis. For microscopic images, the deep
CNN (DCNN) is a powerful deep learning technique.
Early diagnosis of lung cancer would make a big difference in decreasing the
death rate; lung cancer is blamed for more than 17% of overall
Image Processing and Deep Learning Techniques 237
16.3.1 PRE-PROCESSING
The first step is pre-processing the images for intensity measurement.
Standard segmentation techniques are then used to partition the processed
image, so that the cancer nodules are isolated. During feature extraction,
characteristics such as area, perimeter, eccentricity, centroid, diameter, and
median pixel intensity are extracted. This is followed by a classification
module that distinguishes between benign and malignant tumors based on the CT
scan images. The extracted features are used for training, the corresponding
model is developed for classification, and the model is then evaluated in
terms of precision, accuracy, specificity, and sensitivity for detection and
classification.
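The segmentation-plus-feature-extraction step can be sketched as follows; the simple intensity threshold and the toy image are illustrative assumptions, not the chapter's actual pipeline:

```python
import numpy as np

def region_features(mask):
    """Compute simple region descriptors (area, centroid, bounding-box extent)
    from a binary nodule mask — illustrative stand-ins for the area, perimeter,
    centroid, and eccentricity features described above."""
    ys, xs = np.nonzero(mask)
    area = int(ys.size)
    centroid = (float(ys.mean()), float(xs.mean()))
    extent = (int(ys.max() - ys.min() + 1), int(xs.max() - xs.min() + 1))
    return area, centroid, extent

# toy "CT slice": a bright square nodule on a dark background
img = np.zeros((64, 64))
img[20:30, 40:50] = 1.0
mask = img > 0.5  # simple intensity threshold as the segmentation step
print(region_features(mask))  # (100, (24.5, 44.5), (10, 10))
```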
Lung cancer is a cancer that begins in the lungs, the two spongy organs in the
chest that take in oxygen as you inhale and release carbon dioxide as you
exhale. In this study, we implemented deep learning techniques for lung
disease prediction using lung CT images as the dataset. The techniques are
used to segment images, as shown in Figure 16.3, and to predict sensitivity,
specificity, and accuracy values using a KNN classifier, as shown in Table 16.1.
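A hedged sketch of the KNN classification and metric computation, using synthetic stand-in data rather than the chapter's CT-derived features:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-in for per-nodule feature vectors
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, knn.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(sensitivity, specificity, accuracy)
```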
16.7 CONCLUSIONS
KEYWORDS
• deep learning
• CNN
• random boost classifier
• breathing
• lung cancer
REFERENCES
1. Coudray, N.; Ocampo, P. S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.;
Moreira, A. L.; Razavian, N.; Tsirigos, A. Classification and Mutation Prediction from
Non–Small Cell Lung Cancer Histopathology Images Using Deep Learning. Nat. Med.
2018, 24 (10), 1559–1567.
2. Song, Q.; Zhao, L.; Luo, X.; Dou, X. Using Deep Learning for Classification of Lung
Nodules on Computed Tomography Images. J. Healthcare Eng. 2017, 2017.
3. Sun, W.; Zheng, B.; Qian, W. Computer Aided Lung Cancer Diagnosis with Deep
Learning Algorithms. In Medical Imaging 2016: Computer-Aided Diagnosis, Vol. 9785;
International Society for Optics and Photonics, 2016; p 97850Z.
4. Teramoto, A.; Tsukamoto, T.; Kiriyama, Y.; Fujita, H. Automated Classification of Lung
Cancer Types from Cytological Images Using Deep Convolutional Neural Networks.
BioMed Res. Int. 2017, 2017.
5. Kumar, D.; Wong, A.; Clausi, D. A. Lung Nodule Classification Using Deep Features
in CT images. In 2015 12th Conference on Computer and Robot Vision; IEEE, 2015;
pp 133–138.
6. Ciompi, F.; Chung, K.; Van Riel, S. J.; Setio, A. A. A.; Gerke, P. K.; Jacobs, C.; Scholten,
E. T.; Schaefer-Prokop, C.; Wille, M. M.; Marchiano, A.; Pastorino, U. Towards
Automatic Pulmonary Nodule Management in Lung Cancer Screening with Deep
Learning. Sci. Rep. 2017, 7, 46479.
7. Hua, K. L.; Hsu, C. H.; Hidayati, S. C.; Cheng, W. H.; Chen, Y. J. Computer-Aided
Classification of Lung Nodules on Computed Tomography Images via Deep Learning
Technique. OncoTargets Therap. 2015, 8.
8. Skourt, B. A.; El Hassani, A.; Majda, A. Lung CT Image Segmentation Using Deep
Neural Networks. Procedia Comput. Sci. 2018, 127, 109–113.
9. Manikandan, M. Image Segmentation and Image Matting for Foreground Extraction
using Active Contour Based Method. Int. J. MC Square Sci. Res. 2011, 3 (1), 18–38.
CHAPTER 17
ABSTRACT
Many people worldwide are affected by skin cancers at present. Researchers
have applied many soft computing approaches for detecting the cancerous
region. To aid the dermatologist, we propose an automated approach that
combines cuckoo search optimization with K-means clustering to detect skin
cancer, with a support vector machine classifying the lesion as normal or
abnormal. For preprocessing, median filters are used to reduce noise
interference in the input image. For validating accuracy, we use the IISC-DSI
dataset; our proposed approach yields 98.2% segmentation accuracy and 98.6%
classification accuracy, compared against ABC with K-means and FCM for
segmentation, and against CNN and KNN for classification. Our hybrid approach
takes an average of 7 s to process an image for cancer detection.
17.1 INTRODUCTION
According to the world medical council, skin cancer ranks among the most
common cancers. Among the various skin cancers, melanoma is the most
dangerous, accounting for more than 80% of skin cancer deaths. It looks like
a mole, can appear anywhere on the skin including where sunrays do not fall,
and is brown in color; it spreads fast and affects the blood vessels in the
skin. Features such as asymmetry, border, color, diameter, and elevation are
the basic symptoms of melanoma skin cancer. Clustering is one of the
fundamental methods for understanding various objects and categorizing them
by similarity. Many clustering methods are used in applications such as image
processing1–3 and text mining. K-means clustering provides a fast convergence
rate compared with other clustering methods such as hierarchical clustering.
Cuckoo search optimization4 is a recently developed optimization algorithm
based on the brood-parasitic behavior of cuckoos; it is used in many medical
applications for its globally optimal solutions.5 Deep neural networks have
solved many medical problems and yield the expected outcomes. To overcome the
local-optimum problem, this paper proposes a hybrid approach that integrates
cuckoo search with K-means to detect the cancerous part and classifies it
using the DNN model. Many optimization techniques are applied to medical
images to yield accurate output; one among them is cuckoo search optimization,
which imitates swarm behavior and yields global optimization. Integrating
FCSO with K-means6 to classify melanoma as malignant or benign, validated on
10 online datasets, showed that the integration produces results faster than
K-means alone; its limitations are computational time and the number of
iterations. Using the IDB2 and UCI datasets, a combination of firefly
optimization and K-means7 enhances exploitation and also reduces cluster
distance measures. To detect the cancerous part, nearby unwanted noisy
information is removed and a morphology-based fuzzy threshold8 is applied for
enhancement, with K-means integrated with the firefly optimization technique
to detect skin lesions on the ISIC online dataset, giving accuracy superior to
PSO.9 K-means clustering has also been used to categorize cancers and predict
them using the total dermoscopic value (TDO) at various cancer levels,
enhancing the features using the GLCM concept; its detection and prediction
were shown to be superior to various state-of-the-art techniques, and its
Design Detecting and Classifying Melanoma Skin Cancer 245
limitation is that very few images were used for validation. Feature
selection10 has been done by integrating the Binary Bat Algorithm (BBA) with a
threshold-based approach to segment the cancerous part and classify it using a
radial basis function network (RBFN) and a support vector machine (SVM)
classifier on 300 images, with a specificity rate proven far better than CNN,
KNN, and others. DCNN methods11 classify the tumor region in three steps: in
the first step, color transformation enhancement is done; in the second step,
the lesion boundary is detected using a CNN; and in the last step, deep
features are extracted with a DNN to yield accurate output. Based on skin
color,12 lesions are extracted with K-means and classified with a multi-class
support vector machine, obtaining 96% classification accuracy on the ISIC
dataset.13 With the combination of a genetic algorithm and PSO, an enhanced
K-means is used to segment the skin lesions, and a back-propagation neural
network is used for classification, yielding 87.5% accuracy.14 Using hybrid
techniques such as dynamic particle swarm optimization with K-means, skin
segmentation is performed with high quality, and the hybrid was shown to be
superior to plain K-means clustering.15 With 8 colors and 200×150 pixels as
the input image, skin cancers are segmented with a CNN and the performance
analyzed against various existing state-of-the-art techniques.16 With the
combination of GrabCut and K-means, skin lesions are segmented, obtaining
Dice and Jaccard coefficients of 0.8236 and 0.9139.
i. Image Description
For validating the efficiency and accuracy of skin cancer detection using
the hybrid approach, we collected various skin disease images from online
sources such as the IISC-DSI dataset. We used 320 cancer images and 100
normal images for this approach. Dermoscopy, or epiluminescence light
microscopy, is an imaging method that examines the skin lesion in detail;
it works by placing an oil immersion between the skin and the optical lens.
A microscope is used to identify the pigmented surface in terms of color,
shape, and structure for analyzing the lesions intensively, whether for
prevention or for diagnosis. The flow diagram of our hybrid approach is
shown in Figure 17.1.
f(x, y) = median(s(x, y)) (17.1)
where f(x, y) is the median-filtered output and s(x, y) is the input image
for preprocessing. Median filtering removes hair and air bubbles and also
helps shape and sharpen the edges of the image so that the affected part can
be extracted accurately.
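The median filtering step above can be sketched directly in NumPy; the window size and the toy image with a single impulse pixel are illustrative assumptions:

```python
import numpy as np

def median_filter(img, k=3):
    """Median-filter a 2-D image with a k x k window, f(x, y) = median(s(x, y));
    edge pixels use the median of the in-bounds neighborhood."""
    h, w = img.shape
    out = np.empty_like(img)
    r = k // 2
    for y in range(h):
        for x in range(w):
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = np.median(win)
    return out

# a flat patch corrupted by one impulse pixel (e.g., a hair artifact)
s = np.full((5, 5), 10.0)
s[2, 2] = 255.0
f = median_filter(s)
print(f[2, 2])  # 10.0 — the impulse is removed
```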
Step 5: Repeat the above steps until the pixels in every cluster converge.
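The K-means iteration in the steps above can be sketched in NumPy. This plain version is illustrative only: the cuckoo-search centroid update of the hybrid is omitted, and the toy intensities are assumptions:

```python
import numpy as np

def kmeans(pixels, k=2, iters=20, seed=0):
    """Plain K-means over 1-D pixel intensities; in the hybrid described above,
    cuckoo search would replace the centroid update with its global search."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(pixels, size=k, replace=False).astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        labels = np.argmin(np.abs(pixels[:, None] - centroids[None, :]), axis=1)
        new = np.array([pixels[labels == j].mean() if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):  # Step 5: stop once clusters converge
            break
        centroids = new
    return labels, centroids

# toy intensities: dark background vs bright lesion pixels
pix = np.array([10., 12., 11., 200., 210., 205.])
labels, cents = kmeans(pix)
print(sorted(cents))  # one centroid near 11, one near 205
```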
Cuckoo search optimization is integrated with K-means to yield a globally
optimal solution: K-means finds the initial clusters, and cuckoo search is
integrated to update the centroids, yielding the best outcome of this hybrid
approach. A set of 12 features is extracted from the segmented result and fed
to the SVM for classification, with 70% of the images used for training and
30% for testing. Features such as energy, autocorrelation, contrast, skewness,
kurtosis, difference entropy, variance, standard deviation, correlation,
homogeneity, and dissimilarity are computed for classification. The formulas
for these features are given in Table 17.1:
Energy = Σ_{i,j=0..g−1} (h_ij)^2 (17.4)
Autocorrelation = Σ_{i,j=0..g−1} (ij) h(ij) (17.5)
Contrast = Σ_{i,j=0..g−1} h_ij (i − j)^2 (17.6)
Skewness = Σ_{o=0..g−1} (o_i − mean)^3 h(o_i) (17.8)
Kurtosis = Σ_{o=0..g−1} (o_i − mean)^4 h(o_i) (17.9)
Variance = Σ_{o=0..g−1} (o_i − mean)^2 h(o_i) (17.10)
Standard deviation = √( Σ_{o=0..g−1} (o_i − mean)^2 h(o_i) ) (17.11)
Correlation = Σ_{i,j=0..g−1} h_ij (i − µ)(j − µ) / (1 + (i − j)^2) (17.12)
Homogeneity = Σ_{i,j=0..g−1} h_ij / (1 + (i − j)^2) (17.13)
Dissimilarity = Σ_{i,j} |i − j| h(ij) (17.14)
v. SVM Classifier
An SVM classifier classifies the skin cancer into benign and malignant based
on illumination, shape, and the various features extracted from the segmented
image. The following steps outline the working principle of the SVM classifier.
Step 1: Collect the images from the dataset and partition the data into
training (70%) and testing (30%) sets.
Step 2: Classify and label all features extracted from the segmented images.
Step 3: Compute the support value and estimate it.
Step 4: Iterate the following steps until the instance value is null.
Step 4.1: Check whether the support value equals the similarity between each
pair of instances; if so, find the total error value.
Step 5: Check whether the instance is less than zero, and then calculate FA,
where
FA = Support_value / Total_Error (17.15)
The optimized features are taken from the segmented output and partitioned in
a 70% training / 30% testing ratio. Sensitivity, specificity, and accuracy
measures assess both segmentation and classification performance and confirm
the efficiency of the proposed approach.
Specificity = (1 − FPR) × 100 (17.16)
Sensitivity = [TP / (TP + FN)] × 100 (17.17)
Accuracy = [(TP + TN) / (TP + FN + TN + FP)] × 100 (17.18)
where TP represents correctly detected positives, TN correctly detected
negatives, FN missed detections, FP false detections, and FPR the
false-positive rate. The classification comparison diagram is shown in
Figure 17.2.
From Figure 17.2 it was found that our SVM classifier yields good accuracy
measures compared with existing approaches.17–19 The segmentation results show
that CSO with K-means is highly suitable for skin cancer detection; its
segmentation accuracy is compared with existing approaches such as FCM,
K-means, and adaptive K-means, and the comparison is shown in Table 17.2.
17.3 CONCLUSIONS
KEYWORDS
• dermoscopy imaging
• median filter
• cuckoo search optimization
• K-means clustering
• performance measures
REFERENCES
1. İlkin, S.; Gençtürk, T. H.; Gülağız, F. K.; Özcan, H.; Altuncu, M. A.; Şahin, S. SVM:
Bacterial Colony Optimization Algorithm Based SVM for Malignant Melanoma
Detection. Eng. Sci. Technol. 2021, 24, 1059–1071.
19. Sreelatha, T.; Subramanyam, M.; Prasad, M. G. Early Detection of Skin Cancer Using
Melanoma Segmentation Technique. J. Med. Syst. 2019, 43, 190–205.
20. Sumathi, R.; Arjunan, S. Towards Better Segmenting Low Depth of Filed Images Using
Morphological Reconstruction Filters. Int. J. Signal Syst. Eng. 2014, 7, 189–194.
CHAPTER 18
Chennai, India
ABSTRACT
Lung cancer is caused by the anomalous growth of cells that develop into a
tumor. Various studies report that the death rate of lung cancer is the
highest among all types of cancer. In the first part of the work, Spatially
Weighted Fuzzy C-Means (SWFCM) clustering is used to segment lung tumors in
CT images; the overall accuracy, sensitivity, and predictive values achieved
are 86.082, 85.636, and 92.673%, respectively. In the second part, SWFCM is
used to segment lung tumors in PET images; the overall accuracy, sensitivity,
and predictive values achieved are 89.31, 87.27, and 95.88%, respectively. In
the third part, to strengthen the diagnosis for mass screening, the CT and
PET images are fused effectively. Four fusion methods are applied, namely the
wavelet transform, the curvelet transform, the nonsubsampled contourlet
transform (NSCT), and multimodal image fusion. For performance analysis,
entropy, peak signal-to-noise ratio (PSNR), standard deviation (SD),
structural similarity index measure (SSIM), and root mean square error (RMSE)
are computed.
18.1 INTRODUCTION
Cancer begins when cells grow out of control anywhere in the body. Cancer cells differ from normal cells: normal cells die, whereas cancer cells do not; instead they keep growing and form many abnormal cells, and they can also invade other organs. This process of cells growing abnormally without control and affecting other tissues leads to the formation of a tumor. The use of anatomical priors in the segmentation of PET lung tumor images was proposed in Ref. [1]; this method combines anatomical and functional images and hence provides effective tumor staging and improved treatment planning. PET/CT successfully detects tumor invasion into adjacent tissues and precisely localizes lesions even when no morphological changes are visible in CT, as presented in a PET/CT-based automated lung nodule detection method.2 That automatic method detects lung nodules in both PET and CT: nodules that lie in close proximity and are similar are merged into one by a split-up postprocessing step, and localization time is reduced from more than one hour to at most five minutes. When executed and validated on real clinical cases in the InterView Fusion clinical evaluation software (Mediso), the method proved successful in detecting lung nodules and may be a valuable aid for physicians in the daily routine of oncology. Ref. [3]
presented a method of statistical texture feature analysis for automatic lung cancer detection in PET/CT images. The overall survival rate of lung cancer patients is observed to be only 14%.4 Early detection increases the chance of choosing the apt treatment for cancer patients, and computer-aided diagnosis systems are highly useful to radiologists in the interpretation of images. Image preprocessing methods, namely Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Wiener filtering, were performed to remove artifacts due to contrast variations and noise. Haralick statistical texture features were chosen because they extract more texture information from the cancer regions than visual assessment. To classify the regions as normal or abnormal, Fuzzy C-Means (FCM) clustering was used.5
Detection of Lung Cancer Using Fusion Methods for CT and PET Images 257
Figure 18.1 shows the detection of the lung tumor from the CT image. The lung tumor in the CT image is segmented using Spatially Weighted Fuzzy C-Means clustering.6
The first step is the removal of the bone region from the lung CT image, since the bone region affects segmentation accuracy. Toward this, the R-plane, G-plane, and B-plane are separated from the RGB image, and the bone region is detected in each plane. The resultant image is obtained by subtracting these images. Figure 18.2 shows the R-plane, G-plane, and B-plane images, and Figure 18.3 shows the input CT image.7
Figure 18.4 shows the enhanced, bone-removed image. T is added to the difference between the G-plane and the B-plane, as given in eq 3.1, to find the affected area of the disease: S = T + (G − B).
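The plane arithmetic above can be sketched with NumPy (illustrative; the construction of the image T is not specified in this chunk, so it is passed in as a given array):

```python
import numpy as np

def highlight_affected_area(rgb, t):
    """Compute S = T + (G - B), per the chapter's plane-arithmetic step.

    rgb: H x W x 3 array holding the R-, G-, and B-planes;
    t:   the image T described in the text (assumed same H x W shape).
    """
    img = rgb.astype(np.float64)
    g, b = img[..., 1], img[..., 2]         # separate the G- and B-planes
    s = t.astype(np.float64) + (g - b)      # S = T + (G - B)
    return np.clip(s, 0.0, 255.0)           # keep the result displayable
```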
Because the quality of the CT image is not very good, the SWFCM method is adopted for segmenting the corresponding region.8
In SWFCM, to exploit the spatial information, a spatial function is defined as in eq 18.1:

h_ij = Σ_{k ∈ NB(x_j)} u_ik                                  (18.1)

where NB(x_j) is a square window centered on pixel x_j. The clustering minimizes the fuzzy objective function of eq 18.5:

J = Σ_{j=1}^{N} Σ_{i=1}^{C} u_ij^m ||x_j − v_i||^2           (18.5)
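Eq 18.1 can be sketched as follows (a NumPy illustration that assumes a 3×3 neighborhood window and edge padding at the image borders):

```python
import numpy as np

def spatial_function(u, shape, win=3):
    """h_ij = sum of memberships u_ik over the window NB(x_j), eq 18.1.

    u:     C x N membership matrix for an image flattened from `shape`;
    shape: (rows, cols) of the image;
    win:   side length of the square neighborhood window.
    """
    r = win // 2
    h = np.empty_like(u, dtype=np.float64)
    for i in range(u.shape[0]):
        ui = u[i].reshape(shape)
        pad = np.pad(ui, r, mode="edge")    # full window at the borders
        acc = np.zeros(shape)
        for dy in range(win):               # sum shifted copies instead of
            for dx in range(win):           # looping over every pixel
                acc += pad[dy:dy + shape[0], dx:dx + shape[1]]
        h[i] = acc.ravel()
    return h
```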
Figure 18.5 shows the SWFCM output image with three clusters. The three clusters are the background, the region of interest, and small spurious regions; the background and small regions are eliminated and only the largest cluster is retained. Figure 18.6 shows the tumor-affected area in the CT image, and Figure 18.7 shows the identified cluster superimposed on the CT input image. Table 18.1 shows the data sets of the cancer-affected area of the CT image.
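Discarding the background and small clusters, and retaining only the largest one, can be sketched as (illustrative):

```python
import numpy as np

def largest_cluster_mask(labels, background=0):
    """Return a boolean mask of the largest non-background cluster.

    labels: integer label image (e.g., the three SWFCM clusters).
    """
    counts = np.bincount(labels.ravel())
    counts[background] = 0            # never select the background cluster
    keep = np.argmax(counts)          # label of the largest remaining one
    return labels == keep
```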
FIGURE 18.8 Bar graph: comparison of lung cancer-affected area for the CT image.
The FLICM is applied to the PET image. The major characteristics of FLICM are that it guarantees noise immunity, preserves image detail, and is free of any parameter selection.11
FLICM incorporates local spatial and gray-level information into its objective function, as defined in eqs 18.7 and 18.8:

J_m = Σ_{i=1}^{N} Σ_{k=1}^{C} [ U_ki^m ||x_i − v_k||^2 + G_ki ]                (18.7)

where the ith pixel is the center of the local window, k is the reference cluster, and the jth pixel belongs to the set N_i of neighbors falling into a window around the ith pixel. d_ij is the spatial Euclidean distance between pixels i and j, U_kj is the degree of membership of the jth pixel in the kth cluster, m is the weighting exponent on each fuzzy membership shown in eq 18.9, and V_k is the prototype of the center of cluster k shown in eq 18.10. Here G_ki is the fuzzy factor:

G_ki = Σ_{j ∈ N_i, j ≠ i} [ 1 / (d_ij + 1) ] (1 − U_kj)^m ||x_j − V_k||^2      (18.8)

U_ki = 1 / Σ_{j=1}^{C} [ ( ||x_i − v_k||^2 + G_ki ) / ( ||x_i − v_j||^2 + G_ji ) ]^{1/(m−1)}      (18.9)

V_k = ( Σ_{i=1}^{N} U_ki^m x_i ) / ( Σ_{i=1}^{N} U_ki^m )                      (18.10)
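The FLICM updates in eqs 18.7–18.10 can be sketched as follows, on a 1-D signal for brevity (an illustrative NumPy implementation, not the authors' code; images would use a 2-D window):

```python
import numpy as np

def flicm_1d(x, c=2, m=2.0, n_iter=50, seed=0):
    """FLICM sketch on a 1-D signal. Neighbors of pixel i are taken as
    i-1 and i+1, so d_ij = 1 and the weight 1/(d_ij + 1) is 0.5."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=np.float64)
    n = len(x)
    u = rng.random((c, n))
    u /= u.sum(axis=0)                          # memberships sum to 1
    for _ in range(n_iter):
        v = (u**m @ x) / (u**m).sum(axis=1)     # eq 18.10: cluster centers
        g = np.zeros((c, n))                    # eq 18.8: fuzzy factor G_ki
        for i in range(n):
            for j in (i - 1, i + 1):            # neighborhood N_i
                if 0 <= j < n:
                    g[:, i] += 0.5 * (1 - u[:, j])**m * (x[j] - v)**2
        dist = (x[None, :] - v[:, None])**2 + g # ||x_i - v_k||^2 + G_ki
        dist = np.maximum(dist, 1e-12)          # guard exact-zero distances
        u = (1.0 / dist) ** (1.0 / (m - 1))     # eq 18.9, rewritten as a
        u /= u.sum(axis=0)                      # normalized inverse power
    return u, v
```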
intensity regions. Figure 18.10 shows the FLICM output image. From the FLICM output the corner cluster is removed; the zero-valued pixels are the tumor regions. Figure 18.11 shows the segmented tumor region from the FLICM output. Table 18.2 illustrates the data sets of the cancer-affected area of the PET image. Figure 18.12 shows a bar graph of the above result.
FIGURE 18.12 Bar graph: comparison of lung cancer-affected area for PET image.
266 Computational Imaging and Analytics in Biomedical Engineering
The first column in Figure 18.13 shows CT images, the second column
shows the PET images, the third column shows the fused image for wavelet
transform based maximum absolute fusion rule, and the fourth column shows
the PCA fusion rule.
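The maximum-absolute fusion rule can be sketched with a hand-rolled one-level Haar transform (illustrative; the chapter's actual wavelet choice and decomposition depth are not specified here):

```python
import numpy as np

def haar2d(a):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = a.astype(np.float64)
    lo = (a[:, ::2] + a[:, 1::2]) / 2        # column average
    hi = (a[:, ::2] - a[:, 1::2]) / 2        # column difference
    return ((lo[::2] + lo[1::2]) / 2, (lo[::2] - lo[1::2]) / 2,
            (hi[::2] + hi[1::2]) / 2, (hi[::2] - hi[1::2]) / 2)

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    rows, cols = ll.shape
    lo = np.empty((2 * rows, cols))
    hi = np.empty((2 * rows, cols))
    lo[::2], lo[1::2] = ll + lh, ll - lh     # undo the row transform
    hi[::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((2 * rows, 2 * cols))
    out[:, ::2], out[:, 1::2] = lo + hi, lo - hi   # undo the column transform
    return out

def fuse_max_abs(ct, pet):
    """Average the approximation (LL) subbands and keep the
    larger-magnitude coefficient in each detail subband."""
    ca, cb = haar2d(ct), haar2d(pet)
    ll = (ca[0] + cb[0]) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(ca[1:], cb[1:])]
    return ihaar2d(ll, *details)
```

Averaging the LL band preserves the overall intensity of both modalities, while the max-abs rule keeps the sharpest detail from either image.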
The first column in Figure 18.14 shows CT images, the second column
shows the PET images, the third column shows the fused image for Fast
Discrete Curvelet Transform based maximum absolute fusion rule, and the
fourth column shows the PCA fusion rule.
Detection of Lung Cancer Using Fusion Methods for CT and PET Images 267
The first column in Figure 18.15 shows CT images, the second column
shows the PET images, the third column shows the fused image for Non
Sub-sampling Contourlet Transform (NSCT) based maximum absolute
fusion rule, and the fourth column shows the PCA fusion rule.
The first column in Figure 18.16 shows CT images, the second column
shows the PET images, the third column shows the fused image for multi-
modal based maximum absolute fusion rule, and the fourth column shows
the PCA fusion rule.
FIGURE 18.16 The result for the fused image in multimodal image fusion.
18.6 CONCLUSIONS
fusion method under the maximum absolute and PCA fusion rules are 7.568, 49.58, 60.282, 0.623, and 0.978, respectively. Higher-resolution CT and PET images may be used in future work to increase the accuracy of the results.
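The reported performance measures can be computed as follows (illustrative standard definitions; SSIM and SD are omitted for brevity):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

def rmse(a, b):
    """Root mean square error between fused and reference images."""
    return np.sqrt(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    return 20 * np.log10(peak / rmse(a, b))
```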
KEYWORDS
• CT
• PET
• SWFCM
• FLICM
• RMSE
• PSNR
REFERENCES
1. Aggarwal, P. et al. Sardana, Semantic and Content-Based Medical Image Retrieval for
Lung Cancer Diagnosis with the Inclusion of Expert Knowledge and Proven Pathology.
In Proceedings of the IEEE Second International Conference on Image Information
Processing ICIIP'2013, 2013; pp 346–351.
2. Zsoter, N.; Bandi, P.; Szabo, G.; Toth, Z.; Bundschuh, R. A.; Dinges, J.; Papp, L. PET-CT
Based Automated Lung Nodule Detection. In Annual International Conference of the
IEEE Engineering in Medicine and Biology, 2012; pp 4974–4977.
3. Punithavathy, K.; Ramya, M. M.; Poobal, S. Analysis of Statistical Texture Features for
Automatic Lung Cancer Detection in PET/CT Images. In International Conference on
Robotics, Automation, Embedded & Control RACE 2015, 2015.
4. Akram, S.; Nicolini, S.; Caroli, P.; Nanni, C.; Massaro, A.; Marzola, M.-C.; Rubello, D.; Fanti, S. PET/CT Imaging in Different Types of Lung Cancer: An Overview. Eur. J. Radiol. 2013, 81, 988–1001.
5. Akram, S.; Javed, M.-Y.; Qamar, U.; Khanum, A.; Hassan, A. Artificial Neural Network
based Classification of Lungs Nodule Using Hybrid Features from Computerized
Topographic Images. Appl. Math. Inf. Sci. 2015, 9 (1), 183–195.
6. Sharma, D.; Jindal, G. Identifying Lung Cancer Using Image Processing Techniques.
Int. Conf. Comput. Techniq. Artif. Intell. (ICCTAI’2011) 2011, 17, 872–880.
7. Tong, J.; Da-Zhe, Z.; Ying, W.; Xin-Hua, Z. Xu, W. Computer-Aided Lung Nodule
Detection Based on CT Images. In IEEE/ICME International Conference on Complex
Medical Engineering, 2007; pp 816–819.
8. Ganeshbabu, T. R. Segmentation of Optic Nerve Head for Glaucoma Detection using
Fundus images. Biomed. Pharmacol. J. 2014, 7 (2), 1–9.
9. Kanakatte, A.; Mani, N.; Srinivasan, B.; Gubbi, J. Pulmonary Tumor Volume Detection
from Positron Emission Tomography Images. Int. Conf. Bio Med. Eng. Inf. 2008, 2 (8),
213-217.
10. Kumar, A.; Kim, J.; Wen, L.; Dagan, D.; Feng, F. A Graph-Based Approach to the
Retrieval of Volumetric PET-CT Images. In International Conference of the IEEE
Engineering in Medicine and Biology, 2012; pp 5408–5411.
11. Ganesh Babu, T. R.; Tamil Thendral, M.; Vidhya, K. Detection of Lung Cancer Tumor Using Fuzzy Local Information C-Means Clustering. Int. J. Pure Appl. Math. 2018, 118 (17), 389–400.
12. Ahmed, H.; Hassan, E. N.; Badr, A. A. Medical Image Fusion Algorithm Based on Local
Average Energy-Motivated PCNN in NSCT Domain. Int. J. Adv. Comput. Sci. App.
2016, 7 (10), 269–276.
13. Patel, K. Fusion Algorithms for Images Based on Principal Component Analysis and
Discrete Wavelet Transform. Int. J. Innov. Res. Sci. Technol. 2015, 1, 180–182.
14. Nahvi, N.; Sharma, O. C. Comparative Analysis of Various Image Fusion Techniques
for Biomedical Images: A Review. Int. J. Eng. Res. App. 2015, 4 (5), 81–86.
15. Rajesh, K.; Ravichandran, C. G. Curvelet Transform Based Image Fusion With Noise
Reduction Using Gaussian Filter. In Australian J. Basic Appl. Sci. 2015, 9 (35), 161–166.
16. Rajalingam, B.; Priya, R. A Novel Approach for Multimodal Medical Image Fusion
using Hybrid Fusion Algorithms for Disease Analysis. Int. J. Pure Appl. Math. 2017,
117 (15), 599–619.
17. Wang, W.; Chang, F. A Multi-Focus Image Fusion Method Based on Laplacian Pyramid.
J. Comput. 2011, 6 (12), 2559–2566.
18. Sun, Y.; Zhao, C.; Jiang, L. A New Image Fusion Algorithm Based on Wavelet Transform
and the Second Generation Curvelet Transform. In IEEE International Conference on
Image Analysis and Signal Processing, 2010; pp. 438–441.
19. Fei, Y.; Wei, G.; Zongxi, S. Medical Image Fusion Based on Feature Extraction and
Sparse Representation. Int. J. Biomed. Imaging 2017.
20. Yang, Y.; Huang, S.; Gao, J.; Qian, Z. Multi-Focus Image Fusion Using an Effective
Discrete Wavelet Transform Based Algorithm. Measurement Sci. Rev. 2014, 14 (2),
102–108.
21. Liu, Z.; Chai, Y.; Yin, H.; Zhou, J.; Zhu, Z. A Novel Multi-Focus Image Fusion Approach
Based on Image Decomposition. Inf. Fusion 2017, 35, 102–116.
CHAPTER 19
A FRAMEWORK PROMOTING
POSITION TRUST EVALUATION
SYSTEM IN CLOUD ENVIRONMENT
S. R. SRIDHAR, S. PRAGADEESWARAN, and M. GANTHIMATHI
Department of CSE, Muthayammal Engineering College, Namakkal,
Tamil Nadu, India
ABSTRACT
Trust management is one of the biggest obstacles to the adoption and growth
of cloud computing. Both consumers and corporations commonly use cloud
computing. Protecting the customer’s privacy would not be a simple task due
to the sensitive data contained in the interaction between the customer and
the trust evaluation service. It might be challenging to defend cloud services
against malicious users. This chapter discusses the design and development of CloudArmor, a trust evaluation framework that offers a set of functions to provide Trust as a Service (TaaS), together with the problems of calculating trust from cloud-user feedback. The technology effectively protects cloud services by detecting hostile and inappropriate behavior through trust algorithms that can recognize on/off attacks and colluding attacks using different security criteria. In conclusion, the findings demonstrate that the suggested trust model can deliver high security by lowering security risk and enhancing the decision-making capabilities of cloud users and cloud operators.
19.1 INTRODUCTION
Two major players have emerged in the computing method known as cloud computing: cloud-end consumers and cloud service providers. Multiple definitions have been put forth by various authors to clarify precisely what cloud computing is; in the realm of computing, it is a new business model. According to NIST's official definition, cloud computing is "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."
Cloud computing adoption raises privacy problems. Customers and cloud
service providers may engage in dynamic exchanges that include sensitive
data. There have been a number of instances of privacy violations, including
the disclosure of behavioral data or sensitive information (such as a person's address and birth date). Services that use customer information (such as
interaction history) should unquestionably protect that information.
Collusion attacks become a threat whenever an individual or group of malicious actors tries to undermine the system. A reputation system's credibility is typically at greater risk when several malicious actors work together than when they act alone. A few instances of such cooperative attacks follow.
Collusive slandering attacks, also known as badmouthing attacks, take place when dishonest users team up to disseminate negative testimonials about a reliable user in an effort to badly damage that user's reputation, while boosting their own reputations by complimenting one another.
Several industries, including e-commerce, human sociology, wireless
systems, and others employ trust management extensively. Finding a service
provider you can trust in the cloud environment requires careful consider
ation of your level of trust. When evaluating a service provider’s reliability,
cloud service user evaluations play a big role. This study looks at a number
of risks that may arise when trust is determined by user input from the cloud.
A trust evaluation system (TES) provides the link between cloud users and services for efficient trust management. However, because of the unpredictability of user numbers and the highly dynamic character of the cloud environment, ensuring TES uptime is a challenging task. It is unsuitable for cloud systems to evaluate user preferences and capabilities using indicators
A Framework Promoting Position Trust Evaluation 277
The opinions of cloud service consumers are a reliable source for determining how trustworthy cloud services are overall. As demonstrated in Figure 19.1, unique methodologies are introduced in this chapter that assist in identifying reputation-based attacks and enable users to quickly select reliable cloud services. It presents a credibility model that not only recognizes deceptive trust feedback arising from differential attacks but also recognizes Sybil attacks, whether they occur over a lengthy or a brief period of time, and it provides an availability model that keeps the trust management solution operating at the desired level.
A model of credibility: the effectiveness of trust management is significantly influenced by the veracity of feedback. We therefore suggest a number of metrics, such as feedback output volume and frequent-feedback collusion, for the detection of feedback collusion. These measurements separate deceptive feedback from genuine user feedback.
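As a toy illustration (these are not the chapter's formulas, which are not reproduced here), a trust score can be computed as a credibility-weighted mean of feedback, where a user who floods the system with feedback is down-weighted:

```python
from collections import Counter

def trust_score(feedbacks):
    """feedbacks: list of (user_id, rating) with ratings in [0, 1].

    Each user's weight is 1/n_u, where n_u is how many feedbacks that
    user submitted, so a colluding user gains nothing by repeating
    the same rating many times.
    """
    counts = Counter(u for u, _ in feedbacks)
    num = sum(r / counts[u] for u, r in feedbacks)
    den = sum(1 / counts[u] for u, _ in feedbacks)
    return num / den
```

One honest user rating 0.9 once and one attacker spamming 0.0 ten times yields a mid-range trust score rather than a ruined one.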
A Framework Promoting Position Trust Evaluation 279
The trust model has additional layers added to it to improve the system’s
overall efficacy. The following section provides descriptions of the various
TES subsections.
The central repository serves as the place where interactions are stored. It maintains all types of trust data and interaction histories produced by interactions, skills, and duties, for later use by the trust decision engine in determining the values of tasks and roles. Elements not part of the TES cannot access the central repository.
• Role Behavior Analyzer: This part examines how the lowest levels of trust rules apply to shared resources, including roles and functions. Based on the feedback provided by the service in the central repository, it assesses the rules recognized at that trust level. The role behavior checker links the roles in order to learn more about them and find any leaks that might exist. To easily trace unauthorized users or attackers and provide proof of any type of data loss, it is crucial to identify the user and keep track of every action they perform.
• Task Behavior Analyzer: The task behavior analyzer is in charge of assessing tasks and functions against minimal trust-level rules when shared resources are accessed. It examines the tasks indicated within the confidence level in terms of owner feedback, computing the trust value and storing it in the central repository. When determining user histories with relation to the stored data, it gathers data from two channels: information-leakage reports and data from the role behavior analyzer. The task behavior analyzer can help identify customers, and it is important to keep track of the
that uses credibility and feedback to determine the value of trust. The
percentage of recognized consensus feedback can help determine the
veracity of other feedback.
B. Accuracy of Trust Result
Because the cloud environment is dynamic, selecting the reliable feedback from among the many supplied feedbacks is the most difficult problem. Trust accuracy and trustworthiness are significantly correlated; by minimizing potential threats, we can obtain an accurate trust result.
C. Collusion Attacks
This attack takes place when several users band together to submit false feedback in an effort to boost or lower the service provider's trust score. This conduct is referred to as collusive malicious feedback behavior. Three types of collusion attack are feasible:
a) Promotion-focused attack
The entire group enters only encouraging comments to help the cloud service provider grow.
b) Slandering attack
The entire group enters only critical comments for the cloud service provider.
c) Occasional feedback collusion
Cloud services occasionally experience collusion attacks, and time is a crucial factor in recognizing sporadic collusion attacks. Irissappane suggested a clustering strategy to differentiate genuine from malicious feedback and increase the performance of the trust system. This approach creates clusters based on the variances in all the ratings, and the trust value is improved by combining reliable weighted ratings. We can calculate the irregular change in submitted feedback within the overall feedback behavior to identify occasional feedback collusion.
d) Sybil attacks
Such an attack takes place when malicious people leave several false reviews under different identities (i.e., producing numerous false ratings while making a limited number of purchases in a brief period of time) in an effort to boost or lower the trust rating.
F. Newcomer Attacks
If a member who has already registered with the service provider and displayed unsavory behavior in the past can readily re-register under a new identity, the member can launch a newcomer attack, also called a re-entry attack; it ultimately succeeds like a Sybil attack. By matching credential records against parameters such as location and unique ID, we can lessen newcomer attacks.
Until input from the trust decision engine is received, the task entity analyzes the task parameters that determine a consumer's cloud task membership, and any harmful consumer's membership is removed.
When a task's owner complains about it because of information leaks, the task entity sends information about the leak over Channel 7 to the task behavior analyzer. The analyzers then use Channels 6 and 8 to continuously update the trust information for roles and responsibilities in the centralized database.
The TMS performs a trust analysis whenever an owner wants his data transferred and protected in the cloud. After receiving the request, the TMS contacts the owners through Channel 9; the trust decision engine informs the owners of the trust-management results for their respective responsibilities. Based on the findings, the data owners decide whether or not to grant customers access to their services.
The trust model uses penalties for bad behavior to establish the interaction-trust value for malicious users. Figures 19.4 and 19.5 show how fresh feedback affects interactions between trustworthy users.
FIGURE 19.5 Trust scores for interactions with 100 malevolent customers.
Figure 19.7 demonstrates how the trust model determines the attacker scale (AS), the size of the attack for distinct collusion sets. To attack and undermine the trust model, the malicious recommenders in a recommender community must make up a sizeable portion of all recommenders; this portion is known as the collusion set (CS).
Figure 19.8 gives the result of the attack target scale (ATS), the destructive feedback rate from one collusion set (CS) for a certain user.
Figure 19.9 gives the result of the collusion attack strength (CAS), calculated as the rate of all harmful feedback from the various collusion sets (CS) for a certain user.
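The chapter defines these quantities only informally, so as an assumption they can be read as the simple ratios below (illustrative sketch):

```python
def attacker_scale(colluders, recommenders):
    """AS: fraction of all recommenders that belong to the collusion set."""
    return len(colluders) / len(recommenders)

def attack_target_scale(feedbacks, colluders, target):
    """ATS: rate of feedback on `target` coming from one collusion set.

    feedbacks: list of (user, target_service) pairs.
    """
    on_target = [u for u, t in feedbacks if t == target]
    bad = [u for u in on_target if u in colluders]
    return len(bad) / len(on_target)

def collusion_attack_strength(feedbacks, collusion_sets, target):
    """CAS: rate of harmful feedback on `target` over all collusion sets."""
    all_colluders = set().union(*collusion_sets)
    return attack_target_scale(feedbacks, all_colluders, target)
```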
a) Experimental Evaluation:
We put our credibility model to the test using actual customer reviews of cloud services. We specifically crawled a number of review websites, including CloudHostingReviewer.com, cloud-computing.findthebest.com, and cloudstorageprovidersreviews.com, where consumers submit feedback on cloud services they have utilized. The gathered information is represented as a field H, where the feedback corresponds to the various QoS criteria stated previously and is supplemented with a credential for each associated customer. We were able to compile 10,076 comments submitted by 6982 people about 113 actual cloud services, and the gathered dataset has been made available to the scientific community on the project website. The gathered information was split into six categories of cloud computing, three of which were used for experimental purposes.
Each group, consisting of 100 users, was used to validate the criteria against differential attacks, while the other three categories were used for model validation against Sybil attacks. Each cloud storage group represented a different type of attacking behavior: Waves, Uniform, and Peaks. The behavior models show how many harmful feedbacks were introduced overall during a specific time instance, for example, |V(s)| = 50 malicious feedbacks.
When testing against collusion attacks, Ti = 50. When testing against Sybil attacks, the behavior models also show the total number of identities created by attackers over a period of time (for instance, |I(s)| = 78 malevolent identities when Tj = 30). We modeled malicious feedback to improve the trust results for cloud services in collusion attacks (i.e., a self-promotion attack), while we modeled hostile feedback to decrease the trust results in Sybil attacks (i.e., a slandering attack). To assess how resilient our credibility model is to malevolent conduct (such as collusion and Sybil attacks), we carried out two sets of experiments: (1) testing the robustness of the credibility model against a baseline model Con(s, t0, t) (i.e., setting Cr(c, s, t0, t) to 1 for all trust feedbacks); and (2) assessing the performance of the model using precision (i.e., how many detected attacks are actual attacks) and recall (i.e., how well TMS detected attackers). In our trials, TMS started rewarding cloud services subjected to harmful behavior once the attack percentage reached 35% (i.e., et(s) = 35%), so that the rewarding procedure occurs only when there is a significant drop in the trust result.
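The precision and recall used in these experiments can be computed with the standard definitions (illustrative):

```python
def precision_recall(detected, actual):
    """Precision: how many detected attackers are actual attackers.
    Recall: how many actual attackers were detected.

    `detected` and `actual` are sets of attacker identities.
    """
    tp = len(detected & actual)                       # true positives
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall
```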
Six of the 12 experiments we ran were to test how well our credibility
model stood up to collusion assaults, while the other six were to test how
well our model stood up to Sybil attacks. According to Table 19.3, each
study is identified by a letter ranging from A to F.
a result of TMS’s requirement that the percentage of assaults over the same
period of time exceeds a certain threshold before rewarding the impacted
cloud services (i.e., which is set to 25% in this case). This indicates that TMS
has given the impacted cloud service a reward based on the factor for the
change rate of trust outcomes. Additionally, Figures 19.11D′, 19.11E′, and 19.11F′ show that our credibility model performs best in terms of precision when the Waves behavior model is applied (i.e., 0.48; see Fig. 19.11D′), whereas the maximum recall score is obtained when the Uniform behavior model is applied (i.e., 0.75; see Fig. 19.11A′). The ability of TMS to reward the impacted cloud service using the trust-result change-rate factor shows that our model can successfully identify attacks, whether strategic attacks such as those in the Waves and Uniform behavior models or occasional attacks such as those in the Peaks behavior model.
than the smaller RMSE value. The trust-result caching reliability of one specific cloud service is displayed in Figure 19.13; the chart shows that as the cache threshold rises, the caching error grows approximately linearly.
is fairly low and much more stable than the total number of TMS nodes when reallocation is not employed. The outcomes of experiment II are displayed in Figure 19.13: the number of TMS nodes decreases as the workload threshold increases. The number of TMS nodes, however, is lower when the trust-feedback reshuffling technique is used than when reshuffling is not considered. This means that by lowering the number of TMS nodes, our solution cuts the bandwidth cost.
19.6 CONCLUSIONS
KEYWORDS
• trust evaluation
• CloudArmor
• colluding attacks
• trust management
• cloud computing
REFERENCES
1. Noor, T. H.; Sheng, M.; Alfazi, A. In Proceedings of the 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, Melbourne, Australia, July 16–18, 2013; pp 469–476.
2. Chang, E. In Proceedings of the IEEE International Conference on Computer Communications, Valencia, Spain, July 29–August 1, 2019.
3. Mahajan, S.; Mahajan, S.; Jadhav, S.; Kolate, S. Trust Management in E-commerce Websites. 2017, 2934–2936.
4. Habib, S. M.; Hauke, S.; Ries, S.; Muhlhauser, M. Trust as a Facilitator in Cloud Computing: A Survey. 2012, 19.
5. Khan, K.; Malluhi, Q. Establishing Trust in Cloud Computing. IEEE IT Professional, Qatar University, 2010; (5).
6. Manuel, P.; Somasundaram, T. S. A Novel Trust Management System for Cloud Computing – IaaS Providers. 2011, 3–22.
7. Chong, S. K.; Abawajy, J.; Hamid, I. R. A.; Ahmad, M. A Multilevel Trust Management Framework for Service Oriented Environment. 2013, 396–405.
8. Wang, D.; Muller, T.; Liu, Y.; Zhang, J. Towards Robust and Effective Trust Management for Security: A Survey. 2014.
9. Kotikela, S. N. S.; Gomathisankaran, M. In International Conference on Cyber Security, 2012.
10. Muchahari, M. K.; Sinha, S. K. In IEEE International Symposium on Cloud and Services Computing (ISCOS), 2012.
11. Canedo, E. D.; de Sousa, R. T.; de Carvalho, R. R.; de Oliveira, A. R. In IEEE International Conference on Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2012.
12. Noor, T. H.; Sheng, Q. Z.; Yao, L.; Dustdar, S.; Ngu, A. H. H. CloudArmor: Supporting Reputation-Based Trust Management for Cloud Services. 2014.
13. Xiong, L.; Liu, L. PeerTrust: Supporting Reputation-Based Trust for Peer-to-Peer Electronic Communities. 2004, (7), 843–857.
14. Irissappane, A. A.; Jiang, S.; Zhang, J. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems, 2014; pp 1385–1386.
15. Liu, S.; Zhang, J.; Miao, C.; Theng, Y. L.; Kot, A. C. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems, 2011; vol 3, pp 1151–1152.
CHAPTER 20
EFFICIENT MACHINE LEARNING TECHNIQUES FOR MEDICAL IMAGES
ABSTRACT
20.1 INTRODUCTION
AI is a system for recognizing patterns that may be applied to medical images. Although it is an important resource that can facilitate clinical findings, it can also be misapplied. Machine learning often begins with the algorithm computing the image features that are considered to be of significance for making the prediction or diagnosis of interest. The machine learning algorithm then identifies the best combination of these image features for classifying the image or computing some measurement for the given image region. There are several designs that may be used, each with different characteristics and shortcomings. Open-source versions of most of these machine learning methods exist, which makes them straightforward to apply to medical images. Several metrics are available for assessing the performance of an algorithm; however, one should be aware of the possible pitfalls that can result in misleading estimates. More recently, deep learning has begun to be used; this approach enjoys the benefit that it does not require image feature identification and computation as a preliminary step, because features are learned as part of the training process.
AI has been employed in clinical imaging and will have an even clearer impact going forward, so those working in medical imaging should be aware of how machine learning works.
Clinical imaging helps experts review patients' bones, organs, tissues, and vessels more readily through noninvasive means. The methodologies
The principal benefit of medical image processing is that it allows a thorough, yet noninvasive, examination of internal anatomy. 3D models of the anatomical structures of interest can be created and analyzed to improve treatment outcomes for the patient, support the development of better medical devices and drug-delivery systems, or enable better-informed diagnoses. It has become one of the key tools used for recent medical advances.42
The constantly improving quality of imaging, combined with state-of-the-art software tools, enables accurate digital reproduction of anatomical structures at various scales, as well as of structures with widely varying properties, including bone and soft tissues. The development and refinement of simulation models that incorporate genuine anatomical geometries offer an important opportunity to deepen our understanding of, for example, the interactions between particular anatomical structures and medical devices.43
On the other hand, unsupervised machine learning algorithms are used when the data used for training is neither classified nor labeled. Unsupervised learning studies how systems can infer a function to describe a
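The unsupervised case can be illustrated with k-means clustering, which infers structure from unlabeled data: here, a pure-Python 1-D k-means separates pixel intensities from a toy scan into two tissue classes without ever seeing a label. The implementation and data are illustrative assumptions, not from the chapter.

```python
def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar values into k groups; returns the sorted centroids."""
    # Spread the initial centroids across the sorted value range.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid's bucket.
        buckets = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centroids[j]))
            buckets[i].append(v)
        # Update step: each centroid moves to its bucket's mean.
        centroids = [sum(b) / len(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    return sorted(centroids)

# Unlabeled pixel intensities: background near 10, bright tissue near 200.
pixels = [8, 9, 10, 11, 12, 198, 199, 200, 201, 202]
print(kmeans_1d(pixels))  # prints [10.0, 200.0]
```

The algorithm never receives a "background" or "tissue" label; the two intensity modes emerge purely from the data, which is exactly the unsupervised setting the text describes.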
This is possibly the most important area for machine learning, and it will become considerably more prominent in the near future. Automated surgical applications can be divided into the following categories:
• Automated suturing.
• Surgical workflow modeling.
• Improvement of robotic surgical materials.
• Surgical skill evaluation.
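The last category, surgical skill evaluation, is often approached by computing motion metrics from recorded instrument trajectories. As a hedged toy sketch (the metric choice and the data below are illustrative assumptions, not methods from this chapter), the code compares two 1-D tool trajectories by total path length and mean squared jerk; smoother, more direct motion is commonly associated with higher skill.

```python
def path_length(xs):
    """Total distance traveled along a sampled 1-D trajectory."""
    return sum(abs(b - a) for a, b in zip(xs, xs[1:]))

def mean_squared_jerk(xs, dt=1.0):
    """Mean squared third finite difference of position (jerk):
    a standard surrogate for movement smoothness."""
    jerks = [(xs[i + 3] - 3 * xs[i + 2] + 3 * xs[i + 1] - xs[i]) / dt ** 3
             for i in range(len(xs) - 3)]
    return sum(j * j for j in jerks) / len(jerks)

expert = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # steady, direct motion
novice = [0.0, 2.0, 1.0, 3.5, 2.5, 5.0]   # hesitant, jerky motion

print(path_length(expert), path_length(novice))
print(mean_squared_jerk(expert), mean_squared_jerk(novice))
```

Real systems would use full 3-D instrument poses, forces, and time stamps, but the principle is the same: skill scores are derived from measurable motion features.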
Efficient Machine Learning Techniques for Medical Images 311
Areas and their challenges:
• Data governance: Clinical information is still private and restricted for access. However, according to a Wellcome Foundation study in the UK, only 17% of public respondents are against sharing their medical data with third parties.
• Transparent algorithms: The need for transparent algorithms is driven not only by strict drug-development regulations but also by the fact that, in general, people want to see how exactly algorithms reach their conclusions.
• Optimizing electronic records: There is still a great deal of fragmented information across different data sets that needs proper organization. When the current situation improves, it will drive advances in personalized treatment plans.
• Embracing the power of data silos: The healthcare industry should change its view of the value of data and the way it can derive value from data over the long term. Drug companies, for instance, are usually reluctant to change their product strategies and research directions without immediate financial benefits.
• Data science experts: Attracting more machine learning professionals and data science specialists is very important for both the healthcare and pharmaceutical industries.
4. Personalized medicine

Treatment protocols can not only be made more effective by pairing individual health data with predictive analytics; this pairing is also a ripe area for further research and better disease assessment. At present, physicians are limited to choosing from a specific set of diagnoses or estimating the risk to the patient based on his symptomatic history and available genetic information. However, AI in medicine is making remarkable strides, and IBM Watson Oncology is at the forefront of this advancement, using patient medical history to help generate multiple treatment options. Sooner rather than later, we will see more devices and biosensors with advanced health-assessment capabilities reach the market, allowing more data to become readily available for such cutting-edge ML-based healthcare services.
a great deal of time and money and can take years to complete in many cases. Applying ML-based predictive analytics to identify potential clinical trial candidates can help researchers draw a pool from a wide variety of data points, for instance, past doctor visits, social media activity, etc. AI has also found use in ensuring real-time monitoring and data access for the trial participants, finding the best sample size to be tested, and using the power of electronic records to reduce data-based errors.
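One item above, finding the best sample size to be tested, is classically a statistical power calculation rather than a machine learning step. As a hedged illustration (a textbook normal-approximation formula, not a method from this chapter), the sketch below estimates the per-arm sample size needed to detect a difference between two response proportions.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Approximate participants needed per arm to detect p1 vs. p2
    with a two-sided test at significance alpha and the given power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2                       # pooled proportion
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# e.g., detecting a 60% vs. 40% response rate:
print(sample_size_two_proportions(0.6, 0.4))  # prints 97
```

Under this approximation, detecting 60% versus 40% response at alpha = 0.05 and 80% power requires about 97 participants per arm; ML-based approaches refine such estimates rather than replace them.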
9. Better radiotherapy

One of the most sought-after applications of AI in healthcare is in the field of radiology. Medical image analysis involves many discrete variables that can arise at any particular moment. There are many lesions, disease foci, etc., that cannot simply be modeled using complex equations. Since ML-based algorithms learn from the large number of diverse examples available, it becomes easier to analyze and detect these elements. One of the most notable uses of AI in medical image analysis is the classification of objects, for instance, lesions, into categories such as normal or abnormal, lesion or non-lesion, etc. Google's DeepMind Health is actively helping researchers at UCLH to create algorithms that can recognize the difference between healthy and cancerous tissue and further improve radiation therapy for the same.
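Classifying regions as lesion or non-lesion is, in the convolutional-neural-network systems alluded to here, built on the 2D convolution operation. The sketch below applies one hand-chosen centre-surround kernel, a stand-in for the many filters a real CNN would learn from data; the kernel, toy scan, and decision threshold are all hypothetical.

```python
def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation of img with a 3x3 kernel."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(h - 2):
        row = []
        for c in range(w - 2):
            row.append(sum(img[r + i][c + j] * kernel[i][j]
                           for i in range(3) for j in range(3)))
        out.append(row)
    return out

# A centre-surround kernel responds strongly to isolated bright spots.
spot_kernel = [[-1, -1, -1],
               [-1,  8, -1],
               [-1, -1, -1]]

scan = [[10, 10, 10, 10],
        [10, 200, 10, 10],
        [10, 10, 10, 10],
        [10, 10, 10, 10]]

response = conv2d(scan, spot_kernel)
is_abnormal = max(max(row) for row in response) > 500  # hypothetical threshold
print(is_abnormal)  # prints True
```

A trained CNN stacks many such filters, learned rather than hand-picked, and replaces the fixed threshold with learned classification layers; the arithmetic at each layer, however, is exactly this.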
KEYWORDS
• machine learning
• image processing
• artificial intelligence
• electronic tomography
• convolutional neural networks
• MRI images
• CT images
REFERENCES
Internet of Medical Things Platform. J. Ambient Intell. Human Comput. 2021, 12,
3303–3316. https://doi.org/10.1007/s12652-020-02218-1
4. Vijayaraj, A.; Vasanth Raj, P. T.; Jebakumar, R.; Gururama Senthilvel, P.; Kumar, N.;
Suresh Kumar, R.; Dhanagopal, R. Deep Learning Image Classification for Fashion
Design. In Wireless Communications and Mobile Computing; Hashmi, M. F., Ed.;
Hindawi Limited, 2022; vol. 2022, pp 1–13. https://doi.org/10.1155/2022/7549397
5. Saravanakumar, P.; Sundararajan, T. V. P.; Dhanaraj, R. K.; Nisar, K.; Memon, F. H.; et
al. Lamport Certificateless Signcryption Deep Neural Networks for Data Aggregation
Security in WSN. Intell. Autom. Soft Comput. 2022, 33 (3), 1835–1847.
6. Jeyaselvi, M.; Dhanaraj, R. K.; Sathya, M.; et al. A Highly Secured Intrusion Detection
System for IoT Using EXPSO-STFA Feature Selection for LAANN to Detect Attacks.
Cluster Comput. 2022. https://doi.org/10.1007/s10586-022-03607-1
7. Das, B.; Mushtaque, A.; Memon, F.; Dhanaraj, R. K.; Thirumalaisamy, M.; Shaikh, M.
Z.; Nighat, A.; Gismalla, M. S. M. Real-Time Design and Implementation of Soft Error
Mitigation Using Embedded System. J. Circuits Syst. Comput. World Scientific
Pub Co Pte Ltd., 2022. https://doi.org/10.1142/s0218126622502802
8. Pichumani, S.; Sundararajan, T. V. P.; Dhanaraj, R. K.; Nam, Y.; Kadry, S. Ruzicka
Indexed Regressive Homomorphic Ephemeral Key Benaloh Cryptography for Secure
Data Aggregation in WSN. J. Int. Technol. 2021, 22 (6), 1287–1297.
9. Dhanaraj, R. K.; Krishnasamy, L.; et al. Black-Hole Attack Mitigation in Medical
Sensor Networks using the Enhanced Gravitational Search Algorithm. Int. J. Uncertain.
Fuzziness Knowledge-Based Syst. 2021, 29, 397–315. https://doi.org/10.1142/
S021848852140016X
10. Dhanaraj, R. K.; Ramakrishnan, V.; Poongodi, M.; Krishnasamy, L.; Hamdi, M.;
Kotecha, K.; Vijayakumar, V. Random Forest Bagging and X-Means Clustered
Antipattern Detection from SQL Query Log for Accessing Secure Mobile Data. In
Wireless Communications and Mobile Computing; Jain, D. K., Ed.; Hindawi Limited.,
2021; vol 2021, pp 1–9. https://doi.org/10.1155/2021/2730246
11. Dhanaraj, R. K.; Lalitha, K.; Anitha, S.; Khaitan, S.; Gupta, P.; Goyal, M. K. Hybrid
and Dynamic Clustering Based Data Aggregation and Routing for Wireless Sensor
Networks. J. Intell. Fuzzy Syst. 2021, 40 (6), 10751–10765. https://doi.org/10.3233/
jifs-201756
12. Krishnamoorthi, S.; Jayapaul, P.; Dhanaraj, R. K.; et al. Design of Pseudo-random
Number Generator from Turbulence Padded Chaotic Map. Nonlinear Dyn. 2021. https://
doi.org/10.1007/s11071-021-06346-x
13. Dhanaraj, R. K.; Krishnasamy, L.; Geman, O.; Izdrui, D. R. Black Hole and Sink Hole
Attack Detection in Wireless Body Area Networks. Comput. Mater. Continua 2021, 68
(2), 1949–1965. DOI: 10.32604/cmc.2021.015363
14. Ramasamy, M. D.; Periasamy, K.; Krishnasamy, L.; Dhanaraj, R. K.; Kadry, S.; Nam, Y.
Multi-Disease Classification Model Using Strassen’s Half of Threshold (SHoT) Training
Algorithm in Healthcare Sector. IEEE Access. DOI: 10.1109/ACCESS.2021.3103746.
15. Ramakrishnan, V.; Chenniappan, P.; Dhanaraj, R. K.; Hsu, C. H.; Xiao, Y.; Al-Turjman,
F. Bootstrap Aggregative Mean Shift Clustering for Big Data Anti-pattern Detection
Analytics in 5G/6G Communication Networks. Comput. Electr. Eng. 2021,
95, 107380. https://doi.org/10.1016/j.compeleceng.2021.107380
16. Krishnasamy, L.; Ramasamy, T.; Dhanaraj, R.; Chinnasamy, P. A Geodesic Deployment
and Radial Shaped Clustering (RSC) Algorithm with Statistical Aggregation in Sensor
Networks. Turkish J. Electr. Eng. Comput. Sci. 2021, 29 (3). DOI: 10.3906/elk-2006-124
17. Kumar, D. R.; Krishna, T. A.; Wahi, A. Health Monitoring Framework for in Time
Recognition of Pulmonary Embolism Using Internet of Things. J. Computat. Theor.
Nanosci. 2018, 15 (5), 1598–1602. https://doi.org/10.1166/jctn.2018.7347
18. Krishnasamy, L.; Dhanaraj, R. K.; Ganesh Gopal, D.; Reddy Gadekallu, T.; Aboudaif,
M. K.; Abouel Nasr, E. A Heuristic Angular Clustering Framework for Secured
Statistical Data Aggregation in Sensor Networks. Sensors 2020, 20 (17), 4937. https://
doi.org/10.3390/s20174937
19. Sathyamoorthy, M.; Kuppusamy, S.; Dhanaraj, R. K.; et al. Improved K-Means Based Q
Learning Algorithm for Optimal Clustering and Node Balancing in WSN. Wirel. Pers.
Commun. 2022, 122, 2745–2766. https://doi.org/10.1007/s11277-021-09028-4
20. Dhiviya, S.; Malathy, S.; Kumar, D. R. Internet of Things (IoT) Elements, Trends
and Applications. J. Computat. Theor. Nanosci. 2018, 15 (5), 1639–1643. https://doi.
org/10.1166/jctn.2018.7354
21. Jena, S. R.; Shanmugam, R.; Dhanaraj, R. K.; Saini, K. Recent Advances and Future
Research Directions in Edge Cloud Framework. Int. J. Eng. Adv. Technol. 2019, 9 (2),
439–444. https://doi.org/10.35940/ijeat.b3090.129219
22. Kumar, R. N.; Chandran, V.; Valarmathi, R. S.; Kumar, D. R. Bitstream Compression
for High Speed Embedded Systems Using Separated Split Look Up Tables (LUTs).
J. Computat. Theor. Nanosci. 2018, 15 (5), 1719–1727. https://doi.org/10.1166/
jctn.2018.7367
23. Irfan, S.; Dhanaraj, R. K. BeeRank: A Heuristic Ranking Model to Optimize the
Retrieval Process. Int. J. Swarm Intell. Res. 2021, 12 (2), 39–56. http://doi.org/10.4018/
IJSIR.2021040103
24. Rajesh Kumar, D.; Shanmugam, A. A Hyper Heuristic Localization Based Cloned Node
Detection Technique Using GSA Based Simulated Annealing in Sensor Networks. In
Cognitive Computing for Big Data Systems Over IoT; Springer International Publishing,
2017; pp. 307–335. https://doi.org/10.1007/978-3-319-70688-7_13
25. Prasanth, T.; Gunasekaran, M.; Kumar, D. R. In Big data Applications on Health Care,
2018 4th International Conference on Computing Communication and Automation
(ICCCA), Dec 2018. https://doi.org/10.1109/ccaa.2018.8777586
26. Sathish, R.; Kumar, D. R. In Dynamic Detection of Clone Attack in Wireless Sensor
Networks, 2013 International Conference on Communication Systems and Network
Technologies, Apr 2013. https://doi.org/10.1109/csnt.2013.110
27. Rajesh Kumar, D.; ManjupPriya, S. In Cloud based M-Healthcare Emergency Using
SPOC, 2013 Fifth International Conference on Advanced Computing (ICoAC), Dec
2013. https://doi.org/10.1109/icoac.2013.6921965
28. Sathish, R.; Kumar, D. R. In Proficient Algorithms for Replication Attack Detection in
Wireless Sensor Networks: A Survey, 2013 IEEE International Conference on
Emerging Trends in Computing, Communication and Nanotechnology (ICECCN), Mar
2013. https://doi.org/10.1109/ice-ccn.2013.6528465
29. Lalitha, K.; Kumar, D. R.; Poongodi, C.; Arumugam, J. Healthcare Internet of Things –
The Role of Communication Tools and Technologies. In Blockchain, Internet of Things,
and Artificial Intelligence; Chapman and Hall/CRC, 2021; pp 331–348. https://doi.
org/10.1201/9780429352898-17
30. Rajesh Kumar, D.; Rajkumar, K.; Lalitha, K.; Dhanakoti, V. Bigdata in the Management
of Diabetes Mellitus Treatment. In Studies in Big Data; Springer: Singapore, 2020; pp
293–324. https://doi.org/10.1007/978-981-15-4112-4_14
44. Arvindhan, M.; Dhanaraj, R. K. The Firefly Technique with Courtship Training
Optimized for Load Balancing Independent Parallel Computer Task Scheduling in
Cloud Computing. Int. J. Health Sci. 2022, 6 (S1), 8740–8751. https://doi.org/10.53730/
ijhs.v6nS1.6999
45. Juyal, V.; Pandey, N.; Saggar, R. In Impact of Varying Buffer Space for Routing Protocols
in Delay Tolerant Networks, 2016 International Conference on Communication and
Signal Processing (ICCSP); IEEE, 2016. https://doi.org/10.1109/iccsp.2016.7754562
46. Juyal, V.; Singh, A. V.; Saggar, R. In Message Multicasting in Near-Real Time Routing
for Delay/Disruption Tolerant Network, 2015 IEEE International Conference on
Computational Intelligence & Communication Technology (CICT); IEEE, 2015. https://
doi.org/10.1109/cict.2015.79
47. Juyal, V.; Saggar, R.; Pandey, N. On Exploiting Dynamic Trusted Routing Scheme in
Delay Tolerant Networks. Wirel. Pers. Commun. 2020, 112, 1705–1718. https://doi.
org/10.1007/s11277-020-07123-6
48. Deserno, T. Medical Image Processing. Optipedia; SPIE Press: Bellingham, WA, 2009.
INDEX

A
Achilles tendon ruptures (ATR), 177
Artificial general intelligence (AGI), 113
Artificial intelligence (AI), 2, 105
  ACM SigCHI meeting, 110, 111
  artificial general intelligence (AGI), 113
  business intelligence (BI)
    commonplace measurements, 131
    exploration views (EV), 134
    frameworks, 131
    human memory, restrictions, 135
    human-focused plan, 133
    measure of information, 136
    OLAP inquiries, 131
    primary objective, 133
    SAP Research, 132
  digital hearing
    comparative perceptions, 123
    Dicta-Sign Wiki, 125
    human–PC cooperation plan, 120, 121
    intelligent sonifications, 121
    k-Nearest Neighbors Algorithm (k-NN), 125
    Sign Wiki model, 125
    signal-based gadgets, 125
    sound items, 124
    unpredictable human activities, 122
  digital humanities (DH)
    advanced humanities, 126
    advanced philology, 127
    challenge, 130
    computational philology, 127
    enormous tent, 127
    human–PC connection, 129
    interfaces, 130, 131
    rural American 1950s, 128
  Edified experimentation, 112
  flexibility, 109
  Hadoop with, 112
  healthcare systems
    computerized diagnostics, 115
    digitalization, 120
    DMAIC (define, measure, analyze, improve, and control), 116
    DSS and CDSS, 115
    EMR, 116, 117
    EMR quality improvement, 114
    HCI professionals, 118
    MOOCs, 119
    portable innovations, 119
    Subjective Mental Effort Question (SMEQ), 116
    UI augmentation, 115
  HospiSign stage, 106
  human-PC communication stage, 106
  logical and designing disciplines, 108
  man-made consciousness, 108
  market investigation
    hyper-combined foundation market, 138
    hyper-converged infrastructure market, 139
    Hyper-Converged Infrastructure (HCI) Market, 136
    hyper-intermingling framework, 137
    hyper-joined foundation market, 138
    hyper-merged foundation market, 137
    hyper-united foundation market, 138
    monetary firms, 137
  Ordinary AI (GOFAI), 110, 111
  rationalistic reaction, 112
  track plans, 107
  two-way course, 110
Atrial fibrillation, 78
  materials and methods
    ADAM solver, 81–82
    classifier performance enhancement, 82–83
    ECG data description, 78, 79, 81
    time-frequency features, 83–84
Attack target scale (ATS), 288
Attacker scale (AS), 288
Autism spectrum disorder (ASD), 180
Automatic segmentation, 41

N
National Health and Nutrition Survey (NHNS), 218
Non Sub-sampling Contourlet Transform (NSCT), 267

O
Odds ratio (OR), 6
Open source tools
  Doccano, 171
  NLTK, 171
  Spark NLP, 171
  TextBlob, 171
Ordinary AI (GOFAI), 110, 111
Out-of-field recurrence (OFR), 177–178

P
Papanicolaou smear test, 149
Partial volume classifier (PVC), 44
Peak signal-to-noise ratio (PSNR), 268, 269
Pediatric brain
  cortical surface extraction and
    brain regions for different datasets, 48–59
    brain surface extractor (BSE), 43
    cerebrum labeling, 44
    gray matter (GM), 59
    nonuniformity correction, 43–44
    outputs of, 45–49
    partial volume classifier (PVC), 44
    skull stripping, 43
    surface and volume registration (SvReg), 44–45
    tissue classification, 44
    topology correction, 44
    volume estimation, 45
    white matter (WM), 59
    WISP removal, 44
  materials and methods
    human brain's important parts and functions, 42–43
    real-time datasets, 42
Pima dataset, 218
Post-traumatic stress disorder (PTSD), 182
Principal component analysis (PCA), 218

Q
Quality metrics
  SSIM, 38
  T1 and T2 images, 38

R
Radial Basis Function Network (RBFN), 245
Realignment (REALIGN), 30
Receiver-running feature (ROC), 6–7
Registration process, 65–66
Regression, 66
Risk ratio (RR), 6

S
Seizure
  genetic meta analysis
    GWA datasets, 5
    HapMap loci, 4
    image processing, 5–6
    meta analysis risk, 5–6
  hemosiderin, 64
  image preprocessing, 64
    intensity normalization, 65
    motion, 65
    registration process, 65–66
    regression, 66
    slice time correction, 65
    smoothing spatial filter, 66
  materials and methods
    Brainsuite tool, 67
    clustering, method, 3–4
    MedCalc, 3
    meta-analysis, 4
    nonlinear regression, 7
    odds ratio (OR), 6
    receiver-running feature (ROC), 6–7
    risk ratio (RR), 6
  results and discussion, 67
    analysis techniques, 73–74
    cerebrum labeling, 69
    cluster analysis, 8
    continuous measure, 11–12
    correction of topology, 69
    correlation, 13
    cortical thickness estimation, 70
    functional magnetic resonance imaging (fMRI), 68
    generic meta analysis, 10–11
    hemosiderin, 68
    identification of cortex, 69
    MedCalc, 7
    medical care system, 73
    non-linear regression method, 15–16

V
Visual interpretation, 2

X

Z
Zero-mean Gaussian-distributed noise, 23