
AI-Driven IoT Systems for Industry 4.0

The purpose of this book is to discuss the trends and key drivers of the Internet of Things (IoT) and artificial intelligence (AI) for automation in Industry 4.0. IoT and AI are transforming industry, accelerating efficiency and forging a more reliable automated enterprise. AI-Driven IoT Systems for Industry 4.0 explores current research in the cutting-edge areas of AI for advanced analytics, integration of industrial IoT (IIoT) solutions and edge components, automation in cyber-physical systems, world-leading Industry 4.0 frameworks, adaptive supply chains, and more.
A thorough exploration of Industry 4.0 is provided, focusing on the challenges of digital transformation and automation. The book covers digital connectivity, sensors, and the integration of intelligent thinking and data science. Emphasizing the significance of AI, it delves into optimal decision-making in Industry 4.0 and extensively examines automation and hybrid edge computing architecture, highlighting their applications. The narrative then shifts to IIoT and edge AI, exploring their convergence and the use of edge AI for visual insights in smart factories. The book concludes by discussing the role of AI in constructing digital twins, speeding up product development lifecycles, and offering insights for decision-making in smart factories. Throughout, the emphasis remains on the transformative impact of deep learning and AI in automating and accelerating manufacturing processes within the context of Industry 4.0.

This book is intended for undergraduates, postgraduates, academicians, researchers, and industry professionals in industrial and computer engineering.
Edge AI in Future Computing
Series Editors:
Arun Kumar Sangaiah, SCOPE, VIT University, Tamil Nadu
Mamta Mittal, G. B. Pant Government Engineering College, Okhla, New Delhi

AI-Driven IoT Systems for Industry 4.0
Deepa Jose, Preethi Nanjundan, Sanchita Paul, and Sachi Nandan Mohanty
Big Data and Edge Intelligence for Enhanced Cyber Defense: Principles and
Research
Chhabi Rani Panigrahi, Victor Hugo C. de Albuquerque, Akash Kumar Bhoi,
and Hareesha K. S.
Soft Computing Techniques in Engineering, Health, Mathematical and
Social Sciences
Pradip Debnath and S. A. Mohiuddine
Machine Learning for Edge Computing: Frameworks, Patterns and Best Practices
Amitoj Singh, Vinay Kukreja, and Taghi Javdani Gandomani
Internet of Things: Frameworks for Enabling and Emerging Technologies
Bharat Bhushan, Sudhir Kumar Sharma, Bhuvan Unhelkar, Muhammad Fazal Ijaz,
and Lamia Karim
Soft Computing: Recent Advances and Applications in Engineering and
Mathematical Sciences
Pradip Debnath, Oscar Castillo, and Poom Kumam
Computational Statistical Methodologies and Modeling for Artificial Intelligence
Priyanka Harjule, Azizur Rahman, Basant Agarwal, and Vinita Tiwari

For more information about this series, please visit: https://www.routledge.com/Edge-AI-in-Future-Computing/book-series/EAIFC
AI-Driven IoT Systems for Industry 4.0

Edited by
Deepa Jose, Preethi Nanjundan, Sanchita Paul, and Sachi Nandan Mohanty
Cover image: © Shutterstock

First edition published 2025
by CRC Press
2385 NW Executive Center Drive, Suite 320, Boca Raton FL 33431

and by CRC Press
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

CRC Press is an imprint of Taylor & Francis Group, LLC

© 2025 selection and editorial matter, Deepa Jose, Preethi Nanjundan, Sanchita Paul, and Sachi Nandan
Mohanty, individual chapters, the contributors

Reasonable efforts have been made to publish reliable data and information, but the author and pub-
lisher cannot assume responsibility for the validity of all materials or the consequences of their use.
The authors and publishers have attempted to trace the copyright holders of all material reproduced in
this publication and apologize to copyright holders if permission to publish in this form has not been
obtained. If any copyright material has not been acknowledged please write and let us know so we may
rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or here-
after invented, including photocopying, microfilming, and recording, or in any information storage or
retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com
or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-
750-8400. For works that are not available on CCC please contact [email protected]

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used
only for identification and explanation without intent to infringe.

ISBN: 978-1-032-55415-0 (hbk)
ISBN: 978-1-032-55805-9 (pbk)
ISBN: 978-1-003-43231-9 (ebk)

DOI: 10.1201/9781003432319

Typeset in Times LT Std
by KnowledgeWorks Global Ltd.
Contents

About the Editors .... ix
List of Contributors .... xi
Preface .... xv

Chapter 1  A Novel Hybrid Approach Based on Attribute-Based Encryption for Secured Message Transmittal for Sustainably Smart Networks .... 1
Sucheta Panda, Sushree Bibhuprada B. Priyadarshini, Biswa Mohan Acharya, Tripti Swarnkar, and Sachi Nandan Mohanty

Chapter 2  Object Detection Using Deep Learning (DL) and OpenCV Approach .... 23
Ajit Kumar Mahapatra, Sushree Bibhuprada B. Priyadarshini, Lokesh Kumar Nahata, Smita Rath, Nikhil Singh, Shatabdi Chakraborty, Jyotirmayee Pradhan, Sachi Nandan Mohanty, and Prabhat Sahu

Chapter 3  Enhancing Industrial Operations through AI-Driven Decision-Making in the Era of Industry 4.0 .... 42
Prakruthi R Rai, Preethi Nanjundan, and Jossy Paul George

Chapter 4  Acne Detection Using Convolutional Neural Networks and Image-Processing Technique .... 56
Premanand Ghadekar, Aniket Joshi, Atharv Vanjari, Mohammed Raza, Shubhankar Gupta, and Anagha Gajaralwar

Chapter 5  Key Driving Technologies for Industry 4.0 .... 70
Anesh D Sundar ArchVictor and C. Emilin Shyni

Chapter 6  Opportunities and Challenges of Digital Connectivity for Industrial Internet of Things .... 97
Mahesh Visveshwarappa

Chapter 7  Malicious QR Code Detection and Prevention .... 103
Premanand Ghadekar, Faijan Momin, Tushar Nagre, Sanika Desai, Prathamesh Patil, and Vinay Aher

Chapter 8  Integration of Advanced Technologies for Industry 4.0 .... 114
Tanmay Paliwal, Aditya Sikdar, and Zidan Kachhi

Chapter 9  Challenges in Digital Transformation and Automation for Industry 4.0 .... 143
Manjari Sharma, Tanmay Paliwal, and Payashwini Baniwal

Chapter 10  Design and Analysis of Embedded Sensors for IIoT: A Systematic Review .... 164
Kogila Raghu and Macharla Mounika

Chapter 11  AI for Optimal Decision-Making in Industry 4.0 .... 185
Ravichandran Bargavi

Chapter 12  Challenges in Lunar Crater Detection for TMC-2 Obtained DEM Image Using Ensemble Learning Techniques .... 206
Sanchita Paul, Chinmayee Chaini, and Sachi Nandan Mohanty

Chapter 13  A Framework of Intelligent Manufacturing Process by Integrating Various Function .... 241
T. Rajasanthosh Kumar, Laxmaiah G., and S. Solomon Raj

Chapter 14  Adaptive Supply Chain Integration in Smart Factories .... 255
Deepak Mathivathanan and Sivakumar Kirubanandan

Chapter 15  Implementation of Intelligent CPS for Integrating the Industry and Manufacturing Process .... 273
T. Rajasanthosh Kumar, Mahesh. M. Kawade, Gaurav Kumar Bharti, and Laxmaiah G.

Chapter 16  Machine-Learning-Enabled Stress Detection in Indian Housewives Using Wearable Physiological Sensors .... 289
Shruti Gedam and Sanchita Paul

Chapter 17  Rising of Dark Factories due to Artificial Intelligence .... 304
Anjali Mathur

Chapter 18  Deep Learning for Real-Time Data Analysis from Sensors .... 315
Sagar C V, Harshit Bhardwaj, and Anupama Bhan

Chapter 19  Blockchain as a Controller of Security in Cyber-Physical Systems: A Watchdog for Industry 4.0 .... 339
Adri Jovin John Joseph, Marikkannan Mariappan, and Marikkannu P

Chapter 20  Energy Management in Industry 4.0 Using AI .... 349
Jeevitha D, Deepa Jose, Sachi Nandan Mohanty, and A. Latha

Chapter 21  Deployment of IoT with AI for Automation .... 364
Abith Kizhakkemuri Sunil, Preethi Nanjundan, and Jossy Paul George

Chapter 22  A Comparison of the Performance of Different Machine Learning Algorithms for Detecting Face Masks .... 382
Andra Likhith Sai, Gona Jordan Ryan, Namoju Prudhvi Sai, and Neeraj Kumar Misra

Index .... 399


About the Editors
Dr. Deepa Jose is Head of the Department of Sponsored Research and Consultancy and Professor of ECE at KCG College of Technology, Chennai, India, and is an IEEE Senior Member and IEEE Women in Engineering Chair. She has carried out various outreach activities for women's empowerment through IEEE WIE. She completed her Ph.D. in VLSI Design at the College of Engineering Guindy, Anna University Chennai, in 2015. She is a lifetime member of IEI and IET and has more than 18 years of teaching experience. She holds research guideship from Anna University, has produced one Ph.D. graduate, and is currently guiding eight Ph.D. students. She has one Indian patent granted, two FERs completed, and one international patent granted, and has published more than 60 research papers in journals and international conferences in India and abroad. She is a member of the technical committees of several conferences and journals. Her areas of research interest include VLSI for wireless communication, deep learning, biomedical signal processing, IoT for healthcare, GIS initiatives, soft computing, and AI. She is a recipient of the Best Academic Practitioner Award from IET Chennai and the IEEE Award for Professional Achievement. She has conducted three international conferences and more than 30 FDPs/workshops/webinars, and has won three Best Paper awards. She has also won two Best Research Paper Awards, at the Fifth International Congress and Expo on Biotechnology and Bioengineering held in the United Kingdom and at IEEE INDISCON 2023, conducted by the IEEE India Council.

Dr. Preethi Nanjundan is an Associate Professor (SRG) in the Department of Data Science at Christ University, Pune Lavasa campus, Maharashtra, India. She received her doctorate from Bharathiar University, Coimbatore, in 2014, her Master of Philosophy in computer science from Bharathiar University in 2007, and her Master's degree in Computer Applications from Bharathidasan University in 2004. Her research and teaching experience spans 18 years. Besides publishing over 20 papers in international refereed journals, she has contributed chapters to various books and published 5 books. Four of her patents have also been granted. In 2020, she received the Best Professor award from Lead India and Vision Digital India. Her contributions to a book titled "Covid 19 and its Impact" have been inducted into the Indian and Asian books of records. Her research areas include machine learning, natural language processing, and neural networks. She is a lifetime member of professional societies including the Computer Society of India (CSI), the International Association of Computer Science and Information Technology (IACSIT), the Computer Science Teachers Association, and the Indian Society for Technical Education (ISTE).

Dr. Sanchita Paul is presently working as an Associate Professor at BIT Mesra, Ranchi, Jharkhand. She received her Ph.D. degree from BIT Mesra in January 2012, her M.Tech degree from BIT Mesra in 2006, and her BE degree from Burdwan University, West Bengal, in 2004. Her research areas include artificial intelligence, cloud computing, the Internet of Things, machine learning, and deep learning. She has guided five Ph.D. scholars and published 60 papers in international journals of repute. She also holds six patents in the areas of health informatics, IoT, and cloud computing. She has acted as session chair and editorial member for many international journals and conferences. She has published one book on cloud computing with Scholar's Press, Germany, and five book chapters. She has completed two projects, and one ISRO-funded project is ongoing. She is a life member of CSI and is the principal investigator in setting up the cloud computing lab at BIT Mesra, Ranchi.

Dr. Sachi Nandan Mohanty received his postdoc from IIT Kanpur in 2019 and his Ph.D. from IIT Kharagpur, India, in 2015, with an MHRD scholarship from the Government of India. He has authored/edited 28 books, published by IEEE-Wiley, Springer, Wiley, CRC Press, NOVA, and DeGruyter. His research areas include data mining, big data analysis, cognitive science, fuzzy decision making, brain-computer interfaces, cognition, and computational intelligence. Prof. S N Mohanty received four Best Paper Awards during his Ph.D. at IIT Kharagpur, from an international conference in Beijing, China, and from the International Conference on Soft Computing Applications organized by IIT Roorkee in 2013. He was awarded the best thesis award (first prize) by the Computer Society of India in 2015. He has guided nine Ph.D. scholars and published 120 papers in international journals of repute. He has been elected a Fellow of the Institute of Engineers and of the European Alliance for Innovation (EAI), and is a Senior Member of the IEEE Computer Society, Hyderabad chapter. He is also a reviewer for the Journal of Robotics and Autonomous Systems (Elsevier), Computational and Structural Biotechnology Journal (Elsevier), Artificial Intelligence Review (Springer), and Spatial Information Research (Springer).
List of Contributors

Biswa Mohan Acharya, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Vinay Aher, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Payashwini Baniwal, CHRIST (Deemed to be University), Lavasa Campus, Lavasa, Pune, Maharashtra, India
Anupama Bhan, Amity University Uttar Pradesh, Noida, Uttar Pradesh, India
Harshit Bhardwaj, Amity University Uttar Pradesh, Noida, Uttar Pradesh, India
Sushree Bibhuprada B. Priyadarshini, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Sagar C. V., Amity University Uttar Pradesh, Noida, Uttar Pradesh, India
Chinmayee Chaini, Birla Institute of Technology Mesra, Ranchi, Jharkhand, India
Shatabdi Chakraborty, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Jeevitha Damotharan, Jeppiaar Engineering College, Chennai, Tamil Nadu, India
Sanika Desai, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Laxmaiah G, Chaitanya Bharathi Institute of Technology, Hyderabad, Telangana, India
Anagha Gajaralwar, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Shruti Gedam, Birla Institute of Technology Mesra, Ranchi, Jharkhand, India
Jossy Paul George, CHRIST University, Bengaluru, Karnataka, India
Premanand Ghadekar, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Shubhankar Gupta, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Adri Jovin John Joseph, Sri Ramakrishna Institute of Technology, Coimbatore, Tamil Nadu, India
Aniket Joshi, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Zidan Kachhi, PES University, RR Campus, Bengaluru, Karnataka, India
Sivakumar Kirubanandan, Loyola Institute of Business Administration, Chennai, Tamil Nadu, India
Abith Kizhakkemuri Sunil, CHRIST (Deemed to be University), Lavasa Campus, Lavasa, Pune, Maharashtra, India
Gaurav Kumar Bharti, Indian Institute of Information Technology, Bhopal, Madhya Pradesh, India
Ajit Kumar Mahapatra, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Mahesh M. Kawade, PES's Modern College of Engineering, Pune, Maharashtra, India
Marikkannan Mariappan, Government College of Engineering, Erode, Tamil Nadu, India
Deepak Mathivathanan, Loyola Institute of Business Administration, Chennai, Tamil Nadu, India
Anjali Mathur, Vellore Institute of Technology (VIT), Bhopal University, Bhopal, Madhya Pradesh, India
Sachi Nandan Mohanty, Singidunum University, Belgrade, Serbia
Sachi Nandan Mohanty, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Faijan Momin, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Macharla Mounika, DURA Automotive Services (I), Hyderabad, Telangana, India
Preethi Nanjundan, CHRIST (Deemed to be University), Lavasa Campus, Lavasa, Pune, Maharashtra, India
Lokesh Kumar Nahata, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Marikkannu P, Anna University Regional Campus, Coimbatore, Tamil Nadu, India
Tanmay Paliwal, CHRIST (Deemed to be University), Lavasa Campus, Lavasa, Pune, Maharashtra, India
Sucheta Panda, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Prathamesh Patil, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Sanchita Paul, Birla Institute of Technology Mesra, Ranchi, Jharkhand, India
Jyotirmayee Pradhan, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Kogila Raghu, Geethanjali College of Engineering & Technology, Hyderabad, Telangana, India
Tulala Rajasanthosh Kumar, Puducherry Technological University, Puducherry, India
Smita Rath, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Bargavi Ravichandran, SRM Institute of Science and Technology, Kattankulathur Campus, Chengalpattu, Tamil Nadu, India
Prakruthi Ravishanker Rai, CHRIST (Deemed to be University), Lavasa Campus, Lavasa, Pune, Maharashtra, India
Gona Jordan Ryan, VIT-AP University, Amaravathi, Andhra Pradesh, India
Mohammad Raza, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Prabhat Sahu, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Andra Likhith Sai, VIT-AP University, Amaravathi, Andhra Pradesh, India
Namoju Prudhvi Sai, VIT-AP University, Amaravathi, Andhra Pradesh, India
C. Emilin Shyni, Presidency University, Bangalore, Karnataka, India
Nikhil Singh, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
S. Solomon Raj, Chaitanya Bharathi Institute of Technology, Hyderabad, Telangana, India
Manjari Sharma, CHRIST (Deemed to be University), Lavasa Campus, Lavasa, Pune, Maharashtra, India
Aditya Sikdar, CHRIST (Deemed to be University), Lavasa Campus, Lavasa, Pune, Maharashtra, India
Anesh D Sundar ArchVictor, Northwest University, Hodos Institute, Mukilteo, Washington, USA
Abith Sunil, CHRIST (Deemed to be University), Lavasa Campus, Lavasa, Pune, Maharashtra, India
Tripti Swarnkar, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
Atharv Vanjari, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Mahesh Visveshwarappa, Jain (Deemed-to-be University), Bangalore, Karnataka, India
Preface
In the era of Industry 4.0, the convergence of artificial intelligence (AI) and the
Internet of Things (IoT) has ushered in a transformative wave across industrial land-
scapes. This book delves into the intricacies of AI-driven IoT systems, exploring
their applications, benefits, and implications for the fourth industrial revolution. It
embarks on a journey through the synergy of intelligent algorithms and intercon-
nected devices, unraveling the potential to enhance efficiency, productivity, and
decision-making in industrial settings. From predictive maintenance to smart manu-
facturing, this book provides insights into the dynamic realm of AI-driven IoT, offer-
ing readers a comprehensive understanding of the technologies shaping the future of
industries. As professionals, researchers, and enthusiasts navigate the complexities
of this digital frontier, the chapters unveil practical use cases, challenges, and inno-
vations, aiming to equip readers with the knowledge needed to navigate and contrib-
ute to the evolving landscape of Industry 4.0.

1 A Novel Hybrid Approach Based on Attribute-Based Encryption for Secured Message Transmittal for Sustainably Smart Networks

Sucheta Panda, Sushree Bibhuprada B. Priyadarshini, Biswa Mohan Acharya, Tripti Swarnkar, and Sachi Nandan Mohanty

1.1 INTRODUCTION
Nowadays, data security plays a crucial role in every domain of life. In this context, cryptography is a strategy for securing information and communicating it with the help of certain codes, so that only the destined user can read it and carry out further processing. The term cryptography means hidden writing. Cryptography provides a strong base for keeping data confidential while verifying data integrity. Asymmetric key cryptography and symmetric key cryptography are the two basic types of algorithms used in the cryptographic process. In this research, an asymmetric algorithm has been used, which is more secure and authentic than a symmetric one [1–7]. In this context, the Rivest-Shamir-Adleman (RSA) algorithm has been employed: it falls under the umbrella of asymmetric algorithms and satisfies the integrity, confidentiality, and authenticity of data, along with non-repudiation, in electronic communication. In RSA cryptography, either the public or the private key can encrypt a message; the other key of the pair is then used to decrypt it [7–12]. Moreover, the attribute-based encryption (ABE) tool is used, whereby a shared file is encrypted only once under a given policy and can then be decrypted by any recipient who meets the policy's requirements.


ABE is used in cloud data security solutions as a novel approach to creating a secure cryptosystem; it was introduced in [1] as a new strategy for controlling encrypted access. Key-policy attribute-based encryption (KPAE) and cipher-text-policy attribute-based encryption (CPAE) are the common forms of ABE. In KPAE, users' secret keys are generated according to an access tree, while data is encrypted over a set of attributes. To provide flexibility in building a cryptosystem, four strategies are used in this encryption technique: (i) the public (PK) and master (MK) keys are generated during the setup phase; (ii) the session key (SK, also known as a private key) is generated in the key generation step; (iii) in the encryption phase, the access structure (policy), message, and public key (PK) are used to construct the cipher text (CT); and (iv) in the decryption phase, the cipher text, private key, and public key are used to decrypt the encrypted text (i.e., CT). To finish the policy update, Sahai and Waters [13] used the cipher-text authorisation approach.

However, because the key update and cipher-text update are constrained by the previous access policy, such approaches cannot meet the integrity and security requirements. Setup, KeyGen, Encrypt, and Decrypt are the four basic algorithms that make up the CPAE encryption method. The most essential benefit of RSA [3] is that the private key remains secret, because it is never communicated or revealed to another user. By combining the ABE method with the RSA algorithm, we propose a model known as the Hybrid Asymmetric Approach based on Attribute-Based Encryption (HAA-ABE), through which we obtain a higher level of data security in the public cloud. The first system with constant-size cipher texts was suggested by Herranz et al. [4], where the sender fixes a threshold t, and any user holding at least t of the specified attributes can decrypt. A weighted threshold decryption policy can be added as an extension of our proposed strategy.
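To make the four-phase data flow concrete, the sketch below states it as a set of Python function signatures. This is purely our illustration of the interface; the names and types are assumptions, not code from the scheme (the chapter's own implementation, in MATLAB, appears in Section 1.4).

from typing import Any

def setup(security_param: int) -> tuple[Any, Any]:
    """Phase (i): generate the public key PK and master key MK."""
    raise NotImplementedError

def keygen(mk: Any, attributes: set[str]) -> Any:
    """Phase (ii): derive the session/private key SK for an attribute set S."""
    raise NotImplementedError

def encrypt(pk: Any, message: bytes, policy: Any) -> Any:
    """Phase (iii): build the cipher text CT from the access structure T, M, and PK."""
    raise NotImplementedError

def decrypt(pk: Any, ct: Any, sk: Any) -> bytes:
    """Phase (iv): recover M when SK's attributes satisfy CT's access policy."""
    raise NotImplementedError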

1.2 LITERATURE REVIEW


The idea of ABE was introduced in the literature by Sahai and Waters [13]; theirs was the first attribute-based encryption scheme, supporting a single threshold gate as the policy. Furthermore, Teng et al. [14] proposed the Hierarchical Attribute Set-Based Encryption (HASBE) model, a merger of Hierarchical Identity-Based Encryption (HIBE) and CPAE. In the HASBE paradigm the user hierarchy is layered: the access control method is created from a hierarchical assembly of roles based on attribute values, together with a preset secure key distribution process. Many domain masters follow the root master at the top of the HASBE structure, and every domain serves its own group of users, each with a unique set of characteristics [6]. To keep sensitive data safe from snoopers, it is encrypted, and only approved users have access to the decryption keys.

1.2.1 Public Key Cryptography


Public key cryptography (PKC) is another name for asymmetric key encryption. It encrypts and decrypts data using public and private keys [7]. The keys are a matched pair of very large numbers that are not identical. The public key may be shared with anyone; the private key, the second of the pair, is kept hidden. A message can be encrypted with either key, and decryption requires the other key of the pair. PKC is widely utilised to secure electronic communication, including key exchange, over open networked environments like the internet, without requiring a hidden or secret channel. According to Zhou et al. [8], the CPAE technique encrypts data using a portion of the access tree and afterwards finishes the encryption using the remaining access policy tree. The authors observe that an access policy's right subtree is ordinarily smaller than its left subtree.

The owner of the data can then connect their CT to the proxy server's cipher text after encrypting their data with the relevant component. This method presupposes that the access policy's root node is an "AND" gate. Alternative solutions to this constraint were later presented, such as encrypting data with only one attribute at first (called a dummy attribute). Borgh et al. [9] developed a strategy that can be used in two cases: when devices do not have enough resources, the method first symmetrically encrypts the data and then applies the user access policy to encrypt the symmetric key. Figure 1.1 describes the PKC concept: plain text is converted into cipher text at the sender's site, and with the receiver's private key the encrypted text is transformed back into plain text at the receiver's site.

FIGURE 1.1 Public key cryptography.
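The resource-constrained case described above (Borgh et al. [9]) follows the usual hybrid pattern: bulk data is encrypted under a fast symmetric key, and only that small key is protected by the expensive policy-based step. A minimal Python sketch, assuming the third-party cryptography package and a hypothetical wrap_key_with_policy placeholder standing in for the ABE layer:

from cryptography.fernet import Fernet

def hybrid_encrypt(data: bytes, wrap_key_with_policy):
    dek = Fernet.generate_key()          # fresh data-encryption key
    blob = Fernet(dek).encrypt(data)     # fast symmetric encryption of the payload
    wrapped = wrap_key_with_policy(dek)  # the slow policy step touches 32 bytes only
    return wrapped, blob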



1.2.2 RSA Algorithm
The most widely used asymmetric-key algorithm is RSA [10], first published in 1977, with two common applications: key distribution and digital signatures. The technique is based on modular exponentiation. Nisha and Farik [7] evaluate RSA, examine its merits and loopholes, and provide fresh strategies to address its weaknesses. RSA is a cryptographic method that assures secure network communication; it is illustrated in Figure 1.2, and its detailed working is depicted in Figure 1.3. Some of the advantages of RSA are outlined as follows (a toy worked example follows the list below):

i. RSA encryption requires only the recipient's public key; we do not have to reveal any secret keys in order to receive messages from other people.
ii. Encryption is faster than in the Digital Signature Algorithm (DSA).
iii. Data in transit is tamper-evident: tampered material can no longer be decrypted with the private key, so the recipient is made aware of any change.
iv. The RSA algorithm uses complicated mathematics to keep its users safe and secure.
v. Its security rests on the difficulty of factorising a large composite number into its prime factors.
vi. The algorithm is tough to crack.
vii. RSA encrypts data using a public key that is known to everyone, so exchanging the public key is simple.

FIGURE 1.2 RSA algorithm.

FIGURE 1.3 Working of RSA algorithm.
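As a concrete illustration of these points, the following toy, textbook RSA round-trip uses deliberately tiny primes (our example, for exposition only; production systems use vetted libraries, padding, and keys of 2048 bits or more):

p, q = 61, 53                      # two small primes (toy-sized)
n = p * q                          # modulus n = 3233
phi = (p - 1) * (q - 1)            # Euler's totient = 3120
e = 17                             # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)                # private exponent d = e^-1 mod phi (= 2753)
m = 65                             # plaintext encoded as an integer < n
c = pow(m, e, n)                   # encryption: c = m^e mod n
assert pow(c, d, n) == m           # decryption: m = c^d mod n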

1.2.3 Elliptic Curve Cryptography

Elliptic curve cryptography (ECC) was proposed in [12] as an alternative to other well-known public key cryptosystems such as the RSA and ElGamal cryptosystems. ECC offers stronger security and more efficient performance than other first-generation public key cryptosystems because its mathematical foundation is harder to attack: its security rests on computing discrete logarithms on an elliptic curve, which is more difficult, at comparable key sizes, than factoring or calculating discrete logarithms over the integers. ECC [15] is a newer public key encryption technology that is more effective, quicker, and more compact than earlier systems. Instead of the conventional method, which involves multiplying very large prime integers, ECC exploits the properties of the elliptic curve equation to obtain keys. An elliptic curve over a finite field is described by the following equation:

E: y^2 = x^3 + ax + b (mod p) (1.1)

The ECC points are produced from Equation (1.1) by varying the values of x and y. E_p(a, b) stands for the set of all elliptic curve points, which is defined as follows:

E_p(a, b) = {(x, y): y^2 = x^3 + ax + b mod p} (1.2)

The security of ECC is determined by the difficulty of the elliptic curve discrete logarithm problem (ECDLP). Suppose P and Q are two points on an elliptic curve with Q = αP, where α is a scalar. If α is large enough, obtaining α from P and Q is computationally infeasible; α is the discrete logarithm of Q to the base P. Point multiplication is thus the primary operation in ECC: multiplying any point P on the curve by a scalar α yields another point Q on the curve.
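The sketch below (our illustration, with toy curve parameters) implements this primary operation: computing Q = αP by double-and-add is fast, while recovering α from P and Q is exactly the ECDLP.

p, a, b = 97, 2, 3                           # toy curve y^2 = x^3 + 2x + 3 (mod 97)

def point_add(P1, P2):
    if P1 is None: return P2                 # None encodes the point at infinity
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                          # P + (-P) = point at infinity
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def point_mul(alpha, P):
    """Double-and-add: Q = alpha * P."""
    Q = None
    while alpha:
        if alpha & 1:
            Q = point_add(Q, P)
        P = point_add(P, P)
        alpha >>= 1
    return Q

P0 = (3, 6)                                  # on the curve: 6^2 = 27 + 6 + 3 (mod 97)
Q = point_mul(20, P0)                        # easy forward; hard to invert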

1.2.4 Attribute-Based Encryption Scheme


ABE is a system that combines identity-based encryption (IBE) and PKC. The encryptor can create an encryption policy based on the information's characteristics and the receivers' attributes, and only users whose attributes are consistent with the encryption policy can decode the resulting encrypted text. In a distributed environment, the ABE technique effectively achieves non-interactive access control and considerably boosts the encryption scheme's flexibility. Pirretti et al. [6] recommended using an extra expiration-date attribute to minimise key usage: key artefacts matching the attributes are no longer issued once they are modified. This technique, however, is inefficient and unable to meet the criteria of practical implementation. For example, when determining the attribute validity period, the encrypting parties must collaborate with the organisation. The load on the key update mechanism during the update procedure grows in direct proportion to the number of users in the framework, and every end user must have a secure route to the key update mechanism.
Bethencourt et al. [16] developed the mercurial ABE system, built on binary trees, to decrease the workload on the key renewal agency and the interactions among encrypting parties and organisations. Because each user is linked to a leaf node of the binary tree, the number of key updates is correlated with the number of users. The key is divided into two sections: private keys and key updates. In the ABE scheme, attributes play a very important role in controlling user access. ABE is a kind of PKC in which a public key and a private key are used: public keys are available publicly, and private keys are known only to their owners. Messages are encrypted using the recipients' public keys, and at the other end the receiver decrypts the sender's message using her private key. Katz et al. [17] developed an inner-product encryption system based on a predicate encryption scheme. In the Fuzzy IBE scheme, a user can decrypt the CT only when the properties of the CT [17] sufficiently overlap with the attributes connected to the user's private keys.

ABE has two main characteristics: (a) it provides expressive access control capacity, and (b) the scheme does not need to know in advance the number of users who will access the document. An important property of the ABE scheme is that it satisfies collusion resistance, which makes it suitable for many tasks, including distributed file management and third-party data storage [13]. Collusion resistance concerns the capacity of two or more users with distinct sets of keys to jointly decrypt an encrypted text: collusion should succeed only if at least one of the users could do the decoding alone. In other words, if numerous users conspire, they should not be able to decipher the encrypted content unless one of them is individually able to do so. This feature ensures that only users with the correct key have access to the data. ABE can be classified into two categories based on access policies: CPAE [15] and KPAE. Figure 1.4 details the working of the ABE scheme.
ABE is categorised into two types as follows.

FIGURE 1.4 Attribute-based encryption (ABE) algorithm steps.

1.2.4.1 KPAE Algorithm


KPAE is a variant of ABE: a one-to-many communication system that allows fine-grained encryption and data sharing. In this system, every cipher text is labelled with a set of descriptive attributes. The access structure is embedded in the user's private key, which is issued by a trusted authority. The idea of partially hiding the encryptor-specified access structure of the cipher text was presented by Nishide et al. [18], so that the policy itself does not leak which attributes it requires. Wang and Luo [19] proposed a new KPAE with a predetermined cipher-text size, in which any monotone access structure can be declared as the access policy; the number of cipher-text attributes has no effect on the cipher text's size, and a fixed number of bilinear pairing evaluations is performed [18–20]. The four strategies that make up the KPAE scheme are depicted in Figure 1.5, and its working is portrayed in Figure 1.6.
The first KPAE design was proposed by Goyal et al. [1] and was proven secure under the Bilinear Diffie-Hellman assumption in a selective model. The KPAE scheme was later improved by integrating revocation procedures. The KPAE system presented by Ostrovsky et al. [21] enables private keys to express any formula over attributes, including non-monotone ones.

FIGURE 1.5 KPAE algorithm.

FIGURE 1.6 Working of KPAE.

The KPAE concept is depicted in Figure 1.6: each datum carries a set of attributes (such as name, location, or service type), and each user holds a key tied to an access tree that identifies what the user may read. Data can be decrypted only when its attributes satisfy the access tree's requirements. An access tree might look like this: ⟨Mumbai OR ⟨Rajesh AND Clerk⟩⟩. Each of the three data sets has three attributes corresponding to name, location, and service type. As shown in Figure 1.7, the user can decrypt only data 3, based on his location (i.e., Mumbai).

FIGURE 1.7 Key-policy attribute-based encryption (KPAE).

On the other hand, because the access tree resides in the user's key rather than in the data, KPAE cannot control who accesses the data [21]. According to Yu et al. [22], users bear accountability while senders conceal some attributes; on the basis of the DBDH and D-Linear assumptions, they presented an anti-key-abuse KPAE (AFKPAE). The user's private key contains an attribute associated with the user's unique identity, and the tracking algorithm links the suspect identity's relevant attributes to the cipher text [21–26].
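The following minimal sketch (our illustration, not the chapter's code) shows how such a threshold access tree is evaluated against a user's attribute set, treating OR as a 1-of-n gate and AND as an n-of-n gate:

class Node:
    def __init__(self, k=1, children=None, attr=None):
        self.k = k                       # threshold k_x of the gate
        self.children = children or []   # child nodes (empty for leaves)
        self.attr = attr                 # attribute name if this is a leaf

def satisfies(node, attrs):
    if node.attr is not None:            # leaf: the attribute must be held
        return node.attr in attrs
    hits = sum(satisfies(c, attrs) for c in node.children)
    return hits >= node.k                # gate: at least k_x children satisfied

leaf = lambda a: Node(attr=a)
policy = Node(k=1, children=[leaf('Mumbai'),                       # OR gate
                             Node(k=2, children=[leaf('Rajesh'),   # AND gate
                                                 leaf('Clerk')])])
print(satisfies(policy, {'Mumbai'}))     # True  -- matches the user in Figure 1.7
print(satisfies(policy, {'Rajesh'}))     # False -- the AND branch is incomplete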

1.2.4.2 CPAE Algorithm


Today, CPAE is the most practical variant. It accepts arbitrary schemes and attributes, as well as numerical key attributes; it can organise attributes into sets and frame a policy that selectively restricts the decrypting key. CPAE can be considered a generalisation of IBE. In earlier ABE schemes, the length of the encrypted text was dictated by the number of attributes. Emura et al. [25] suggested a novel CPAE with fixed cipher-text length. A CPAE technique with comparable flexibility and a disguised access structure was also suggested in [25]. Such a system, however, does not prevent collusion on users' secret keys: several users might co-operate and combine their secret keys to decode a cipher text that none of them could crack alone. Four strategies make up the CPAE scheme, shown in the form of the algorithm in Figure 1.8.

FIGURE 1.8 CPAE algorithm.

Conceptually, the CPAE method more closely resembles standard role-based access control. The original CPAE technique was introduced by Bethencourt et al. [16]. Cheung and Newport [27] developed a provably secure CPAE construction based on the Bilinear Diffie-Hellman assumption, although its policies are limited to a single AND gate. Goyal et al. later suggested a generic transformational technique for turning a KPAE system into a CPAE system utilising a universal access tree. Their security argument depends on the Decisional Bilinear Diffie-Hellman assumption, and their construction represents an access structure with threshold gates acting as nodes in a restricted-size access tree. In [28], Waters proposed the most efficient CPAE algorithm in terms of cipher-text length and expressivity. In contrast to KPAE, the cipher text carries the access tree, and the user key contains information about the user such as name, position, and location [14, 28–30]. The encrypted information can be unlocked if the user's attributes fulfil the requirements of the cipher text's access tree.

A lightweight CPAE method was presented by Touati et al. [30]. They lower the cost of computing CPAE on constrained devices by delegating some cryptographic tasks to assisting devices with more resources. In the proposed method, a symmetric key is utilised to encrypt the data first; then only this symmetric key is encrypted with CPAE, as proposed in [9, 30]. Borgh et al. [9] developed a strategy that can be applied in two different scenarios. Figure 1.9 shows an illustration of CPAE. Each user is endowed with a unique set of attributes. Because the attributes of User 3 match the criteria of the cipher-text access tree ⟨Mumbai OR ⟨Rajesh AND Clerk⟩⟩, only User 3 can decode the data. The ability to limit data access is the major difference between CPAE and KPAE: data in KPAE simply carries attributes, and anyone whose key tree those attributes satisfy can access it.

FIGURE 1.9 Working of CPAE.

On the other side, CPAE can regulate data access because the access tree is bound to the cipher text. To increase security and privacy, Susilo et al. [31] offered a decentralised cipher-text-policy ABE technique that protects privacy. In cloud computing, Feng et al. [32] suggested a privacy-aware system for sharing health records. It is the first paper to use commitment and zero-knowledge-proof techniques to protect the user's attributes. To accomplish user accountability, it employs multi-authority CPAE with disguised access policies to safeguard users' privacy. The paper [32] encrypts PHR files with priority level-based encryption (PLBE) technology in order to offer flexible data control and fine-grained access control.
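For readers who want to experiment, CPAE of exactly this kind is available off the shelf. The sketch below assumes the open-source Charm-Crypto toolkit and its bundled implementation of the Bethencourt et al. [16] scheme; the module path, method names, and policy syntax are that library's as we recall them, and should be treated as assumptions to verify:

from charm.toolbox.pairinggroup import PairingGroup, GT
from charm.schemes.abenc.abenc_bsw07 import CPabe_BSW07

group = PairingGroup('SS512')               # symmetric pairing group
cpabe = CPabe_BSW07(group)
pk, mk = cpabe.setup()

# Key for a user holding the attributes of "User 3" from Figure 1.9
sk = cpabe.keygen(pk, mk, ['MUMBAI'])

msg = group.random(GT)                      # a random group element as message
ct = cpabe.encrypt(pk, msg, '(MUMBAI or (RAJESH and CLERK))')
assert cpabe.decrypt(pk, sk, ct) == msg     # 'MUMBAI' satisfies the policy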

1.2.5 Comparison between CPAE and KPAE


1.2.5.1 Similarities

i. Complex access rules built from threshold, "AND," "OR," and "NOT" gates can be handled by both KPAE and CPAE.
ii. For precise access management in cloud storage systems, Liu et al. [33] developed hierarchical attribute-based encryption. By combining the HIBE system with the CPAE system, performance is traded for expressivity; the suggested system is then subjected to proxy and lazy re-encryption, ensuring the secure transmission and consolidation of medical data [33–37].

1.2.5.2 Variation
KPAE is better for scenarios where the access policy resides in the recipient's key, such as social sites and Electronic Medical Systems (EMS) [13], whereas CPAE is better for scenarios where the sender specifies the policy under which the cipher text may be opened, such as pay-television systems and database access.

1.3 PROPOSED HYBRID ASYMMETRIC APPROACH BASED ON ATTRIBUTE-BASED ENCRYPTION (HAA-ABE)

1.3.1 Proposed Algorithm Design
In this suggested paradigm, the access policy does not have to be sent with the cipher text, which allows us to preserve the encryptor's privacy. Our technique allows encrypted data to be stored safely even on an untrusted storage server. The proposed method combines the concepts of RSA and ABE and consists of four stages: setup, key generation, encryption, and decryption. In the first, setup phase, the security parameter is taken as input, and the public key (PK) (comprising G_0, h, f, and e(g, g)^λ) and master key (MK) are produced as output. In the second, key generation phase, a set of attributes (S) and the master key (MK) are taken as inputs, yielding a private key (SK). In the third, encryption phase, the message (M), public key (PK), and access structure (T) are taken as inputs, producing the cipher text (CT). In the final, decryption phase, the master key (MK), cipher text (CT), and private key (SK) are taken as inputs, yielding the decrypted message M′. The complete flow of the proposed model is illustrated in Figures 1.10 and 1.11.

FIGURE 1.10 A visual illustration of the setup and key generation phase.

FIGURE 1.11 Pictorial representation of encryption and decryption phases.

1.3.1.1 Setup Phase of HAA-ABE


The setup process chooses a bilinear group G_0 of prime order p with generator g. Thereafter, two random exponents μ, λ ∈ Z_p are chosen. The public key is published as PK = (G_0, g, h = g^μ, f = g^{1/μ}, e(g, g)^λ), and the master key is MK = (μ, g^λ), where f is used for delegation and (μ, λ) are the two random exponents. Figure 1.12 illustrates the system architecture of the proposed HAA-ABE scheme, and the working steps of the setup phase are included in Figure 1.13.

FIGURE 1.12 System architecture of the proposed HAA-ABE scheme.

FIGURE 1.13 Working steps involved in proposed HAA-ABE.

1.3.1.2 KeyGen(MK, S) in HAA-ABE

The KeyGen algorithm takes a set of attributes S and generates a key that identifies that set. It first chooses a random r ∈ Z_p, and a random r_j ∈ Z_p for each attribute j ∈ S. It then computes the key as

SK = (D = g^{(λ+r)/μ}, ∀j ∈ S: D_j = g^r · H(j)^{r_j}, D_j′ = g^{r_j}),

where H is a hash function mapping attributes to group elements. Figure 1.14 shows the key generation procedure for our proposed algorithm.

FIGURE 1.14 Key generation in proposed HAA-ABE.
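A minimal sketch of these Setup and KeyGen computations, assuming the open-source Charm-Crypto pairing toolkit (the group parameter 'SS512' and all identifiers here are our illustration, not the authors' MATLAB implementation):

from charm.toolbox.pairinggroup import PairingGroup, ZR, G1, pair

group = PairingGroup('SS512')              # bilinear group of prime order p

def setup():
    g = group.random(G1)                   # generator g of G_0
    mu = group.random(ZR)                  # random exponent mu in Z_p
    lam = group.random(ZR)                 # random exponent lambda in Z_p
    pk = {'g': g, 'h': g ** mu, 'f': g ** (1 / mu),
          'e_gg_lam': pair(g, g) ** lam}   # PK = (G_0, g, h, f, e(g,g)^lambda)
    mk = {'mu': mu, 'g_lam': g ** lam}     # MK = (mu, g^lambda)
    return pk, mk

def keygen(pk, mk, attributes):
    r = group.random(ZR)
    g_r = pk['g'] ** r
    sk = {'D': (mk['g_lam'] * g_r) ** (1 / mk['mu'])}   # D = g^((lambda+r)/mu)
    sk['Dj'], sk['Djp'] = {}, {}
    for j in attributes:
        r_j = group.random(ZR)
        sk['Dj'][j] = g_r * (group.hash(j, G1) ** r_j)  # D_j = g^r * H(j)^{r_j}
        sk['Djp'][j] = pk['g'] ** r_j                   # D_j' = g^{r_j}
    return sk

pk, mk = setup()
sk = keygen(pk, mk, ['Mumbai', 'Clerk'])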

1.3.1.3 Encrypt (PK, M, T)


The encryption algorithm encrypts the message M under the access structure T, as depicted in Figure 1.15. The process begins by choosing a polynomial q_x for every node x in the tree T (including the leaves). These polynomials are chosen top-down, starting from the root node R, as follows. For every node x in the tree, the degree d_x of the polynomial q_x is set to one less than the node's threshold value k_x, i.e., d_x = k_x − 1. Starting with the root node R, the algorithm selects a random s ∈ Z_p and sets q_R(0) = s; it then chooses d_R other points at random to define q_R completely. For any other node x, it sets q_x(0) = q_{parent(x)}(index(x)) and likewise chooses d_x other points at random to define q_x completely. Let Y be the set of leaf nodes of the tree T. Using the access structure T, the cipher text is then constructed as follows:

CT = (T, C~ = M · e(g, g)^{λs}, C = h^s, ∀y ∈ Y: C_y = g^{q_y(0)}, C_y′ = H(att(y))^{q_y(0)}).

FIGURE 1.15 Encryption in HAA-ABE.
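The polynomial assignment above is the heart of the encryption step. The pure-Python sketch below (ours, over a toy prime field rather than in the exponent of G_0) walks the tree top-down, giving each leaf its share q_y(0):

import random

P = (1 << 127) - 1                       # toy prime field, illustrative only

class GateNode:
    def __init__(self, k=1, children=(), attr=None, index=1):
        self.k, self.children = k, list(children)   # threshold k_x, children
        self.attr, self.index = attr, index         # leaf attribute, index(x)

def eval_poly(coeffs, x):
    """q(x) = coeffs[0] + coeffs[1]*x + ... (mod P)."""
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def assign_shares(node, secret, shares):
    if node.attr is not None:            # leaf y: record its share q_y(0)
        shares[node.attr] = secret
        return
    # interior node x: degree d_x = k_x - 1, constant term q_x(0) = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(node.k - 1)]
    for child in node.children:
        assign_shares(child, eval_poly(coeffs, child.index), shares)

s = random.randrange(P)                  # master secret s = q_R(0)
tree = GateNode(k=1, children=[          # policy <Mumbai OR <Rajesh AND Clerk>>
    GateNode(attr='Mumbai', index=1),
    GateNode(k=2, index=2, children=[GateNode(attr='Rajesh', index=1),
                                     GateNode(attr='Clerk', index=2)])])
shares = {}
assign_shares(tree, s, shares)
assert shares['Mumbai'] == s             # an OR gate passes s straight through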



FIGURE 1.16 Decryption in HAA-ABE.

1.3.1.4 Decrypt(CT, SK)

The decryption procedure is recursive; it is illustrated in Figure 1.16. We first define a recursive algorithm DecryptNode(CT, SK, x), which takes as input the cipher text CT = (T, C~, C, ∀y ∈ Y: C_y, C_y′), a private key SK associated with a set of attributes S, and a node x from T. If x is a leaf node, we let i = att(x) and, if i ∈ S, define:

DecryptNode(CT, SK, x) = e(D_i, C_x) / e(D_i′, C_x′)
 = e(g^r · H(i)^{r_i}, g^{q_x(0)}) / e(g^{r_i}, H(i)^{q_x(0)})
 = e(g, g)^{r·q_x(0)}.

If i ∉ S, DecryptNode(CT, SK, x) returns ⊥.
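At interior nodes, DecryptNode recombines the children's results by Lagrange interpolation in the exponent. The sketch below (ours, over a toy prime field rather than in G_T) shows the coefficient Δ_{i,S}(0) and the recombination that recovers q(0):

P = (1 << 127) - 1                       # same toy prime as in the earlier sketch

def lagrange_coeff(i, S):
    """Delta_{i,S}(0) = prod over j in S, j != i, of (0 - j)/(i - j) mod P."""
    num = den = 1
    for j in S:
        if j != i:
            num = num * (-j) % P
            den = den * (i - j) % P
    return num * pow(den, -1, P) % P

def recover_secret(shares):
    """Combine {index: q(index)} into q(0) = sum of q(i) * Delta_{i,S}(0)."""
    S = list(shares)
    return sum(q * lagrange_coeff(i, S) for i, q in shares.items()) % P

# 2-of-2 AND gate example with q(x) = 7 + 3x, so q(0) = 7:
assert recover_secret({1: 10, 2: 13}) == 7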

1.4 IMPLEMENTATION OF HAA-ABE USING RSA AND ABE


1.4.1 Experimental Setup
We implemented our proposed algorithm in MATLAB, employing the concepts used in both RSA and ABE. Table 1.1 illustrates how the suggested HAA-ABE scheme functions as a whole.
The results attained are depicted in Figures 1.17–1.19. Figure 1.17 shows how the time taken to generate the private key varies with the number of attributes chosen at random for the key: as the number of attributes rises, so does the generation time. Figure 1.18 illustrates how encryption time varies with the number of leaf nodes (nl): as nl grows, encryption time rises, meaning the encryption time of the proposed approach is nearly linear in the number of leaf nodes in the access policy.

TABLE 1.1
Factors in the Proposed Model

Phase: Key setup
 Input parameters: security parameter (bilinear group G_0, generator g)
 Output parameters: PK = (G_0, g, h = g^μ, f = g^{1/μ}, e(g, g)^λ); MK = (μ, g^λ)

Phase: Key generation
 Input parameters: master key (MK), set of attributes (S)
 Output parameters: SK = (D = g^{(λ+r)/μ}, ∀j ∈ S: D_j = g^r · H(j)^{r_j}, D_j′ = g^{r_j})

Phase: Encryption
 Input parameters: message (M), public key (PK), access structure (T)
 Output parameters: CT = (T, C~ = M · e(g, g)^{λs}, C = h^s, ∀y ∈ Y: C_y = g^{q_y(0)}, C_y′ = H(att(y))^{q_y(0)})

Phase: Decryption
 Input parameters: master key (MK), cipher text (CT), private key (SK)
 Output parameters: DecryptNode(CT, SK, x) = e(D_i, C_x)/e(D_i′, C_x′) = e(g, g)^{r·q_x(0)}, if i ∈ S

FIGURE 1.17 Key generation graph.

Measuring decryption performance is a little more difficult, because decryption time can vary significantly depending on the access trees and the group of attributes involved, as shown in Figure 1.19. We decrypted a set of cipher texts that had been encrypted under a variety of randomly created policy trees. Both the key generation and encryption programmes take a predictable amount of time, whereas decryption performance is determined by the access tree of the encrypted text together with the attributes present in the private key.
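For reference, a measurement of this kind can be scripted as follows (illustrative only; the chapter's timings were taken in MATLAB, and this snippet reuses the setup/keygen sketch from Section 1.3.1.2):

import time

pk, mk = setup()
for n in range(5, 55, 5):
    attrs = [f'attr{i}' for i in range(n)]
    t0 = time.perf_counter()
    keygen(pk, mk, attrs)                # time key generation vs attribute count
    print(f'{n} attributes: {time.perf_counter() - t0:.4f} s')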

FIGURE 1.18 Encryption time graph.

FIGURE 1.19 Decryption time graph.



1.4.2 Result Analysis
Figure 1.17 demonstrates that as the number of attributes rises, key generation time rises accordingly, yet it remains lower than in the conventional model. Similarly, in the encryption graph portrayed in Figure 1.18, encryption time rises with the number of leaf nodes but remains comparatively lower than in the previously existing model. In the decryption graph, the running time does not vary linearly: the specific access trees and the collection of attributes used can have a large impact on how long decryption takes. The decryption time also depends on which attributes are available; for each run of the decryption experiment, we uniformly selected, at random, a key fulfilling the policy. To achieve this, random subsets of the attributes appearing in the tree leaves were examined, and attributes that did not satisfy the policy were successively eliminated. The running times shown in Figure 1.19 were obtained from a series of decryption runs carried out in this manner. The efficiency of decryption depends on the particular cipher-text access tree and the attributes provided by the private key.

1.5 SUMMARY AND FUTURE SCOPE


In the context of secured data transmission, the encryptor fixes the policy using the CPAE approach, and a decryptor whose attributes satisfy that policy decrypts the encrypted message. Attributes can be deployed to create any sort of policy, and in CPAE the access policy is sent along with the CT. Many methods exist for sending messages securely, and securing data transfer across a public network is a crucial aspect of this. Our proposed scheme provides a cloud-based access control mechanism for attribute-based media access. Moreover, data security techniques are provided via cryptography, and some experts advocate ABE and access control in cloud computing settings as a way to enrich the security of internet applications. For the current HAA-ABE approach, public key encryption schemes such as RSA, ABE, and ECC were studied and implemented using MATLAB. The approach solves the problem of accessing data from an untrusted server: the data owner (i.e., the encryptor) defines the policy, and the decryptor must satisfy the policy to obtain the cipher-text message from the storage node. Our proposed approach describes a more secure and improved scheme, together with a system that can be further extended to block unauthorised users and to encrypt and decrypt spatial data. In this study, RSA and ABE have been combined into a hybrid technique; our aim is to use CP-ABE to improve cloud security and privacy.

The proposed solution is more reliable and secure than the old one since it makes use of RSA, and the performance analysis shows that it is particularly appropriate for meeting the high storage needs of the cloud environment. In future work, the suggested method will be tested in actual IoT scenarios, establishing it as a practical strategy in the context of safe internet data transmittal. As a further direction of study, we will explore employing intelligent cryptography to increase security and efficiency.

REFERENCES
1. V. Goyal, O. Pandey, A. Sahai, and B. Waters, “Attribute-based encryption for fine-
grained access control of encrypted data”, In Proc. of CCS’06, Alexandria, Virginia,
USA, pp. 1–28, 2006.
2. Z. Wan, J. Liu, and R. H. Deng, “HASBE: A Hierarchical Attribute-Based Solution
for Flexible and Scalable Access Control in Cloud Computing”, IEEE Transaction on
Information Forensic and Security, Vol. 7, Iss. 2, pp. 743–754, 2012.
3. K. Yang and X. Jia, “Data Storage Auditing Service in Cloud Computing: Challenges,
Methods and Opportunities”, World Wide Web, Vol. 15, Iss. 4, pp. 409–428, 2012.
4. J. Herranz, F. Laguillaumie, and C. Ràfols, “Constant size cipher texts in threshold attribute-based encryption”, In PKC, LNCS 6056, Springer-Verlag, pp. 19–34, 2010.
5. V. Kamliya and A. Rajnikanth, “A Survey on Hierarchical Attribute Set-Based
Encryption (HASBE) Access Control Model For Cloud Computing”, International
Journal of Computer Applications(0975–8887), Vol. 112, Iss. 7, pp. 4–7, 2015.
6. M. Pirretti, P. Traynor, P. McDaniel, and B. Waters, “Secure Attribute-Based Systems”,
Journal of Computer Security, Vol. 18, Iss. 5, pp. 799–837, 2010.
7. S. Nisha and M. Farik, “RSA Public Key Cryptography Algorithm – A Review”,
International Journal of Scientific & Technology Research, Vol. 6, Iss. 7, pp. 187–191,
2017.
8. Z. Zhou and D. Huang, “Efficient and secure data storage operations for mobile
cloud computing”, In Network and Service Management (CNSM), 8th International
Conference and 2012 Workshop on Systems Virtualization Management (SVM), IEEE,
pp. 37–45, 2012.
9. J. Borgh, E. Ngai, B. Ohlman and A. M. Malik. “Employing Attribute-Based
Encryption in Systems with Resource Constrained Devices in an Information Centric
Networking Context”, Global Internet of Things Summit (GIoTS), pp. 1–6, 2016. doi:
10.1109/GIOTS.2017.8016277
10. R. Rivest, A. Shamir, and L. Adleman, “A Method for Obtaining Digital Signatures
and Public-Key Cryptosystems”, ACM Transaction on Communications, Vol. 21,
pp. 120–126, 1978.
11. N. Koblitz, “Elliptic Curve Cryptosystems”, Mathematics of Computation, Vol. 48,
Iss. 177, pp. 203–209, 1987.
12. K. Rabah, “Security of the Cryptographic Protocols Based on Discrete Logarithm
Problem”, Journal of Applied Sciences, Vol. 5, pp. 1692–1712, 2005.
13. A. Sahai and B. Waters, “Fuzzy identity based encryption”, In Advances in Cryptology –
Eurocrypt, Springer, Vol. 3494, pp. 457–473, 2005.
14. W. Teng, G. Yang, Y. Xiang, and D. Wang, “Attribute-Based Access Control with
Constant-Size Cipher Text in Cloud Computing”, IEEE Transactions on Cloud
Computing, Vol. 5, Iss. 4, pp. 617–627, 2017.
15. A. Miyaji, A. Nomura, K. Emura, K. Omote, and M. Soshi, “A cipher text-policy attri-
bute-based encryption scheme with constant cipher text length”, In ISPEC, LNCS 5451,
Springer-Verlag, Vol. 5451, pp. 13–23, 2009.
16. J. Bethencourt, A. Sahai, and B. Waters, “Cipher text-policy attribute-based encryp-
tion”, In IEEE Symposium on Security and Privacy, SP 2007, IEEE, pp. 321–334, 2007.
17. J. Katz, A. Sahai, and B. Waters, “Predicate encryption supporting disjunctions,
polynomial equations, and inner products”, In Smart, N.P. LNCS, Springer, Vol. 4965,
pp. 146–162, 2008.
18. T. Nishide, K. Yoneyama, and K. Ohta, “Attribute-based encryption with partially hidden
encryptor-specified access structures”, In Proceedings of the 6th International Conference
on Applied Cryptography and Network Security, New York, pp. 111–129, 2008.
Attribute-Based Encryption for Secured Message Transmittal 21

19. C.-J. Wang and J.-F. Luo, “A key-policy Attribute-based encryption scheme with
constant size cipher text”, In Eighth International Conference on Computational
Intelligence and Security, 2012.
20. Y. Yan, M. B. M. Kamel, and P. Ligeti, “Attribute-based encryption in cloud com-
puting environment”, In 2020 International Conference on Computing, Electronics &
Communications Engineering (ICCECE), IEEE, pp. 63–68, 2020.
21. R. Ostrovsky, A. Sahai, and B. Waters, “Attribute-based encryption with non-mono-
tonic access structures”, In Proceedings of the 14th ACM Conference on Computer and
Communications Security, pp. 195–203, 2007.
22. S. Yu, K. Ren, J. Li, and W. Lou, “Defending against key abuse attacks in KPAE
enabled broadcast systems”, In Proc. of the Security and Privacy in Communication
Networks, pp. 311–329, 2009.
23. A. R. Nimje, V. T. Gaikwad, and H. N. Datir, “Attribute-Based Encryption Techniques
in Cloud Computing Security: An Overview”, International Journal of Computer
Trends and Technology, Vol. 4, Iss. 3, pp. 419–423, 2013.
24. J. Leea, S. Oha, and J. Janga, “A work in progress: Context based encryption scheme
for internet of things”, In The 10th International Conference on Future Networks and
Communications (FNC 2015) Procedia, pp. 271–275, 2015.
25. K. Emura, A. Miyaji, A. Nomura, K. Omote, and M. Soshi, “A cipher text-policy attri-
bute-based encryption scheme with constant cipher text length”, In ISPEC, Springer-
Verlag, pp. 13–23, 2009.
26. P. P. Tsang, S. W. Smith, and A. Kapadia, “Attribute-based publishing with hidden
credentials and hidden policies”, In Proc. Network & Distributed System Security
Symposium (NDSS), pp. 179–192, 2007.
27. L. Cheung and C. Newport, “Provably secure cipher text policy ABE”, In Proceedings
of the 14th ACM Conference on Computer and Communications Security, pp. 456–465,
2007.
28. V. Goyal, A. Jain, O. Pandey, and A. Sahai, “Bounded cipher text policy attribute
based encryption”, In Proceedings of the 35thinternational colloquium on Automata,
Languages and Programming, (ICALP ‘08), Springer, Vol. 5, Iss. 125, pp. 579–591,
2008.
29. B. Waters, “Cipher text-policy attribute-based encryption: An expressive, efficient,
and provably secure realization”, In Proceedings of the International Conference
on Practice and Theory in Public Key Cryptography, Springer, Vol. 65, pp. 53–70,
2011.
30. L. Touati, Y. Challal, and A. Bouabdallah, “C-CPAE: Cooperative cipher text policy
attribute-based encryption for the internet of things”, In International conference on
Advanced Networking Distributed Systems and Applications (INDS), IEEE, pp. 64–69,
2014.
31. W. Susilo, J. Han, and Y. Mu, “Improving Privacy & Security in Decentralized
Cipher Text-Policy Attribute-Based Encryption”, IEEE, Vol. 10, Iss. 3, pp. 665–678,
2015.
32. F. Feng, Y. Xhafal, and Zhang, “Privacy-Aware Attribute-Based PHR Sharing With
User Accountability in Cloud Computing”, Journal of Super Computing, Vol. 71, Iss. 5,
pp. 1607–1619, 2015.
33. Q. Liu, G. Wang, and J. Wu, “Hierarchical attribute-based encryption for fine-grained
access control in cloud storage services”, In Proc. of the 17thACM Conference on
Computer and Communications Security, pp. 735–737, 2010.
34. D. Sangeetha and V. Vijayakumar, “Enhanced security of PHR system in cloud using
prioritized level based encryption”, In Proc. of International Conference on Security in
Computer Networks and Distributed Systems, pp. 57–69, 2014.
22 AI-Driven IoT Systems for Industry 4.0

35. S. B. Priyadarshini, A. B. Bagjadab, and B. K. Mishra, “Digital Signature and Its Pivotal
Role in Affording Security Services”, In eBusiness Security, Auerbach Publications,
New York, pp. 422–442, 2018, ISBN 9780429468254.
36. A. Sahani, J. Arya, A. Patro, and S. B. B. Priyadarshini, “Blockchain: Applications and
Challenges in the Industry”, Intelligent and Cloud Computing, Proceedings of ICICC,
Vol. 1, pp. 813–818, 2019.
37. M. P. Nath, S. B. B. Priyadarshini, and D. Mishra, “A Comprehensive Study on Security
in IoT and Resolving Security Threats Using Machine Learning (ML)”, Advances in
Intelligent Computing and Communication, Vol. 5, pp. 545–553, 2021.
2 Object Detection Using
Deep Learning (DL) and
OpenCV Approach
Ajit Kumar Mahapatra, Sushree Bibhuprada B.
Priyadarshini, Lokesh Kumar Nahata, Smita Rath,
Nikhil Singh, Shatabdi Chakraborty,
Jyotirmayee Pradhan, Sachi Nandan Mohanty,
and Prabhat Sahu

2.1 INTRODUCTION
2.1.1 Background Study
The hand is the human organ used to manipulate physical objects, and for this reason it is also the organ most commonly used by humans to communicate and to operate machines. The process of comprehending and classifying significant hand movements made by humans is known as hand gesture recognition. The mouse and keyboard are the basic input devices of a computer, and both require the use of the hands. The most important and immediate exchange of information between humans and machines takes place through visual and audio aids, but this communication is one-way. Hand gestures support everyday communication, helping to convey our message clearly [1–5].
Hand gestures are essential for sign language communication: the hand is of paramount importance to mute and hearing-impaired people, who communicate using hand gestures. If a computer could translate and understand hand gestures, it would be an advance in human-computer interaction. The difficulty is that modern images carry a great deal of information and require extensive processing to perform this task. Each gesture has characteristics that distinguish it from other gestures. Hu invariant moments are used to extract these features from the gesture, which is then classified using the K-Nearest Neighbour (KNN) algorithm. Practical applications of gesture-based human-computer interaction include interaction with virtual objects, control of robots, translation of body and sign language, and control of machines by gestures [6, 7]. Hand detection and recognition is one of the technologies that can recognize hand motions in real-time video. Hand gestures fall within a certain category of interest. Designing the hand gesture recognition system in this study is a challenging task that combines two key problems: the first is hand detection itself; the second is recognizing which character a single hand is signing at a time. This research focuses on how the system detects, recognizes, and interprets hand gestures under challenging conditions, including variations in pose, orientation, position, and scale.
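A brief illustration of this classical feature-based pipeline (distinct from the TensorFlow approach used later in this chapter) is sketched below: Hu invariant moments are computed with OpenCV and classified with OpenCV's built-in KNN. The training arrays (train_images, train_labels) and the test image are assumed to be available; all names are illustrative.

import cv2
import numpy as np

def hu_features(binary_image):
    # Hu's seven invariant moments, log-scaled to compress their dynamic range.
    hu = cv2.HuMoments(cv2.moments(binary_image)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# train_images (binarized hand images) and train_labels are assumed given.
train_X = np.array([hu_features(img) for img in train_images], dtype=np.float32)
knn = cv2.ml.KNearest_create()
knn.train(train_X, cv2.ml.ROW_SAMPLE, np.array(train_labels, dtype=np.int32))

# Classify a new gesture by its k = 3 nearest neighbours.
sample = hu_features(test_image).reshape(1, -1).astype(np.float32)
_, result, _, _ = knn.findNearest(sample, k=3)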
To obtain good results, the system must be trained on various kinds of gestures, such as numbers and sign-language signs. The gesture images are captured from our own hands [8, 9]. In this project, hand detection is implemented using Python and the TensorFlow libraries. Applying the ideas of hand classification and hand detection allows a hand gesture recognition system to be developed using Python and OpenCV [7, 10–16].

2.1.2 Importance of Object Detection


This project, titled "Object Detection Using TensorFlow", is an AI-based project developed with TensorFlow, OpenCV, LabelImg, Python, and other tools. There are approximately 1.3 million hearing-impaired and mute people in our country, but only about 250 interpreters. This shortage leaves the majority of these people facing communication problems of all kinds: talking to people, getting an education, and seeing a doctor are all difficult for them. Our project aims to remove the daily hurdles the hearing-impaired face when communicating with each other, and to provide a foothold in an area where free access to other important resources is lacking [17].

2.1.2.1 Motivation
Humans can easily tell what objects are present in an image: with minimal conscious effort, the human visual system completes complex tasks such as identifying many objects at once, quickly and accurately. We can now train computers to capture and classify, with high accuracy, the many hand gestures used in daily life. Object detection using TensorFlow is a computer vision technique; as the name implies, it helps us detect, locate, and identify an object in an image or video. In video calling systems, subtitles based on recognized hand gestures can be added, making communication easier.

2.1.2.2 Object Detection Objective


We showcase how hand gestures can be detected using a trained dataset, testing on a predefined image to recognize what the gesture symbolizes. Our purpose is to provide communication tools for people with disabilities. An application built on the ideas of this project would let them communicate by themselves using gestures. The growth of the education sector for hearing-impaired and mute people is very slow, and our app aims to improve it. Physicians also face many challenges when interacting with these people, so our app can help them too [18–20].
The goal of our research is to create a gesture recognition framework that can swiftly identify motions under natural lighting. To this end, a gesture-based system is developed that identifies the motion of hands, using a technique for identifying hand motions based on multiple factors. The main objective of this system is to be simple, uncomplicated, and user-friendly, without needing any specialist gear: a single PC can perform all computation, with only a digital camera as specialized equipment for digitizing the picture. The main aim is to develop a framework to capture, recognize, and interpret hand gestures by computer vision. Furthermore, we aim to provide a new, efficient, high-speed gesture detection mechanism that can be implemented as an application for a variety of use cases [21–38].
The minimum hardware and software requirements to execute our project are as follows:

• Processor – 1.6 GHz or faster
• RAM – 8 GB
• System type – 64-bit operating system, x64-based processor
• GPU – NVIDIA GeForce GTX 800 or higher
• Processor used – Intel® Core™ i5-8250U CPU @ 1.60 GHz
• Operating system – Windows 10
• Programming language – Python
• Platforms used – Jupyter Notebook, Colab
• Dataset annotation – LabelImg
• Libraries/frameworks – TensorFlow, OpenCV 4.5.5.62

2.1.3 Assumptions and Constraints Considered for Object Detection


The assumptions and constraints considered are outlined in Tables 2.1 and 2.2, respectively.

TABLE 2.1
Assumptions Taken
Sl. No. Assumption
1. We assume that the photos taken from the camera are of high quality and allow the hand gestures to be detected with high accuracy.

TABLE 2.2
Constraints Taken
Sl. No. Constraint
1. It is very important for the hand gesture to be detected precisely and understood by the system.
2. The hand gesture must be present in the existing database; otherwise, the conversion to the corresponding text will not take place.
3. Each gesture must be photographed from various angles to make the system more accurate.

2.2 OBJECT DETECTION AND HAND GESTURE RECOGNITION


Research on hand gesture recognition falls into three categories. In the first, "glove-based analysis", gloves are fitted with mechanical or optical sensors that measure hand posture by converting finger flexion into electrical signals, while an additional sensor, typically a magnetic or acoustic device attached to the glove, measures hand position. To detect hand posture, several applications employ a look-up-table software toolset. The second method, "vision-based analysis", is probably the trickiest to put into practice properly because it mimics the way people acquire information from their surroundings. Many implementations have been evaluated thus far; one approach uses a 3-D model of the human hand, with several cameras providing parameters for matching images of the hand, palm position, and joint angles for hand gesture classification [21–23].
The third implementation uses the stylus as an input method for "analysis of drawing gestures"; such drawing studies lead to the recognition of written content. Mechanical sensing has been used extensively for hand gesture identification in both direct and virtual environment interaction, but mechanically sensing hand posture presents a number of issues, including electromagnetic noise, reliability, and accuracy. Visual sensing can make gesture interaction potentially feasible [3, 24, 25], but it is the most difficult challenge for machines to solve.
In computer vision, a physical item is mapped to a specific segmented region in the image, from which object characteristics or features can be generated. A feature is any measurable attribute of a picture or a region within it.

2.2.1 Hand Gesture Recognition and Detection Utilizing Simple Heuristic Rules
The combination of features may be regarded as a pattern, and objects with similar features can be categorized into classes. Object recognition can be thought of as the process of categorizing items based on their unique patterns; the software used to complete this task is referred to as a classifier [2, 26–28]. A summary of the general pattern recognition steps is portrayed in Figure 2.1.
Coaches, broadcasters, and sports fans are among the people who employ object recognition in the realm of sports. This work aimed to identify the most recent open-source solutions for object detection in sports, specifically for soccer players. Using the TensorFlow Object Detection API, an open-source framework for object recognition tasks, we trained and evaluated a single-shot multibox detector (SSD) with MobileNet models.

FIGURE 2.1 Pipeline in general pattern recognition.



TABLE 2.3
A Comparative Analysis of Different Gesture Recognition Models

Primary Methods of Recognition | Background to Gesture Images | Additional Markers Required (Like Wrist Band) | Number of Training Images | Frame Rate
Hidden Markov models | General | Multicoloured gloves | 7-hour signing | –
Hidden Markov models | General | No | 400 training sentences | 10
Linear approximation to linear point distribution on models | Blue screen | No | 7441 images | –
Finite state machine/model | Static | Markers on glove | 10 sequences of 200 frames each | 10
Hand gesture recognition | General | No | – | –

The model was tested as follows:

a. pre-trained and
b. fine-tuned using a dataset consisting of images extracted from video foot-
age of two soccer matches.

The following hypotheses were tested:

1. Pre-trained models will not work without fine-tuning on the data.
2. The fine-tuned model works reasonably well, given the data.

Table 2.3 summarizes the various gesture recognition systems.

2.3 PROPOSED WORK OUTLINE


Object detection by recognizing hand gestures is carried out by the following steps:

a. The user captures an image of his or her hand gesture.
b. The captured gesture image is pre-processed and matched against an existing database.
c. Only the text that corresponds to the gesture is displayed.
d. If the gesture performed does not exist in the database, the user is prompted to try again.

Today, people use Google Assistant and Siri to get answers to their questions, but the same is difficult for people with disabilities, mostly the hearing-impaired and mute. With the help of object recognition, they can do the same with hand movements, which would be of great benefit to them. Our project is useful for people with disabilities, for doctors, and for educational institutions dealing with people with disabilities: they no longer need to carry a card or rely on a human interpreter. The project aims to create a medium of communication for people who are deaf and mute. This would also bring relief to doctors and help increase the education rate of our country [1, 7, 8].

2.3.1 Methods/Technologies Observed
The following approaches were encountered during the literature survey:

i. Data analytics – Data analytics is the process of investigating raw data to discover patterns and derive summaries, applied in an algorithmic or mechanical way to extract insights. Converting the data files into an understandable form is difficult because they capture many different forms of hand gestures.
ii. Machine learning – Machine learning is essential for building computer programs capable of accessing information and using it to learn on their own. Once information is extracted from the raw data, it is fed into the best-suited machine learning framework while maintaining the desired accuracy. One aim of the project is to test different models and try to effectively raise the model's accuracy [5, 9–13].
iii. Deep learning – Deep learning is an important branch of machine learning that learns from data; it can be supervised, semi-supervised, or unsupervised. Deep learning models are implemented with neural networks and are also commonly combined with natural language processing (NLP) [3].

2.3.2 Software Requirements


Software used to execute this project is as follows:

• Anaconda – Anaconda is an open-source distribution of the Python and R programming languages, used in data science, machine learning, deep learning, and other fields. With more than 300 data science libraries available, working in Anaconda – for example, using a Jupyter notebook, as in our project – proves convenient for any developer.
• Jupyter Notebook – JupyterLab is a web-based interactive development environment for code, data, and notebooks.
• Colab – Colab is a free, cloud-based Jupyter notebook environment. Most significantly, no setup is required, and any user can update the notebooks.
• Python – Python is a general-purpose, interpreted programming language. It supports a variety of programming paradigms, including procedural, object-oriented, and functional programming, as well as (especially) structured programming.
• LabelImg – LabelImg is a tool for visual image annotation.
• TensorFlow – TensorFlow is an open-source software library for dataflow and differentiable programming across a range of tasks; it is also widely used for machine learning with neural networks.
• OpenCV – OpenCV is a vast open-source library for computer vision, machine learning, and image processing. It plays a significant part in real-time operation, which is crucial in modern systems.

Figure 2.2 outlines the brief stages involved in our proposed work, and Figure 2.3 depicts the testing of images, where an image can either match or mismatch. Figure 2.4 gives a diagrammatic representation of how the object detection model works. Similarly, Figure 2.5 lists the working steps involved in our proposed approach [20–32].

FIGURE 2.2 Brief stages followed in the proposed framework.



FIGURE 2.3 Testing of image.

FIGURE 2.4 Applying object recognition model.

2.3.3 Working Steps Involved in Proposed Approach

Step 1. Import dependencies.
Step 2. Define the images to collect, e.g. labels = ['thumbsup', 'thumbsdown', 'thankyou', 'livelong'].
Step 3. Set up folders under IMAGES_PATH and capture images.
Step 4. Do the image labelling with LabelImg (LABELIMG_PATH).
Step 5. Compress the dataset for Colab training.
Step 6. Set up the paths and import the OS module.
Step 7. Move the images into training and testing partitions (TRAIN_PATH, TEST_PATH, ARCHIVE_PATH) and expand the dataset for the model.
Step 8. Download the TF models and pretrained models from the TensorFlow Model Zoo and subsequently install TFOD.
Step 9. Use OpenCV 4.5.5.62.
Step 10. Create the label map: labels = [{'name':'thumbsup', 'id':1}, {'name':'thumbsdown', 'id':2}, {'name':'ok', 'id':3}, {'name':'victory', 'id':4}, {'name':'namaste', 'id':5}].
Step 11. Next, create the TensorFlow records and copy the model config to the training folder.
Step 12. Update the config for transfer learning.
Step 13. Train the model for 2000 steps.
Step 14. Load the trained model from a checkpoint.
Step 15. Detect which hand gesture an image contains.

FIGURE 2.5 Working steps involved in the proposed approach.
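To make Step 10 concrete, the short sketch below writes the label map in the pbtxt format expected by the TensorFlow Object Detection API; the output path is a placeholder, and the label list is the one given in Step 10.

labels = [{'name': 'thumbsup', 'id': 1}, {'name': 'thumbsdown', 'id': 2},
          {'name': 'ok', 'id': 3}, {'name': 'victory', 'id': 4},
          {'name': 'namaste', 'id': 5}]

# Each entry becomes an item { name: ... id: ... } block in label_map.pbtxt.
with open('annotations/label_map.pbtxt', 'w') as f:
    for label in labels:
        f.write("item {\n")
        f.write("  name: '%s'\n" % label['name'])
        f.write("  id: %d\n" % label['id'])
        f.write("}\n")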

2.4 PERFORMANCE EVALUATION


2.4.1 Experimental Setup and Training Dataset
In our chapter, we identified a small number of gestures, as illustrated in the subsequent figures. A query image taken from a different perspective can be compared against the images in the database, which holds various images in a variety of orientations. We keep only a few gesture images in the database in order to maintain performance, and it can be expanded with new gesture photos without any pre-processing. The training stage uses a dataset that includes variants: there are variations of each gesture in the training dataset, which trains the system to perform more accurately on different versions of the same motion and makes it easier to detect the gesture in various settings. A few samples from the suggested dataset are shown in the figures [32–34].

2.4.2 Raw Images and Various Gestures


The first stage in creating our project is gathering images from various sources. In order for detection to function under various lighting and angle settings, we made sure that images were collected from a variety of angles, brightness levels, scales, etc. Overall, around 100–150 images were used in the experiments. The first gesture captured was the Namaste gesture, illustrated in Figure 2.6; similarly, the OK gesture is shown in Figure 2.7. Likewise, Figures 2.8 and 2.9 illustrate the thumbs down and thumbs up gestures, and Figure 2.10 shows the victory gesture.

FIGURE 2.6 Namaste gesture.

FIGURE 2.7 OK gesture.

FIGURE 2.8 Thumbs down gesture.

FIGURE 2.9 Thumbs up gesture.

FIGURE 2.10 Victory gesture.
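A minimal sketch of this collection step is given below, assuming a default webcam; the folder layout, shot count, and delays are illustrative choices rather than the exact ones used in the project.

import os
import time
import uuid

import cv2

labels = ['namaste', 'ok', 'thumbsdown', 'thumbsup', 'victory']
cap = cv2.VideoCapture(0)  # default webcam
for label in labels:
    os.makedirs(os.path.join('images', label), exist_ok=True)
    time.sleep(3)  # time to strike the pose
    for _ in range(30):  # several shots per gesture, varying angle and lighting
        ok, frame = cap.read()
        if ok:
            name = '{}.{}.jpg'.format(label, uuid.uuid1())
            cv2.imwrite(os.path.join('images', label, name), frame)
        time.sleep(1)
cap.release()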

2.4.2.1 Labelling the Image


The images were annotated using LabelImg. Annotations are created in the Pascal Visual Object Classes (VOC) format, which is advantageous later on. LabelImg uses Qt for its interface and is developed in Python; we used Python 3 and Qt 5 without any issues. In essence, we record the object's xmin, ymin, xmax, and ymax values and pass them, along with the image, to the model for training. The following steps are involved for this purpose:

• The folder containing the images we need to label can be found by clicking
“Open Dir” and choosing the folder.
• Then select the location where we want to save the label file by clicking
on “Change Save Dir”. The image directory should not be the same as this
directory.
• We can now draw boxes over the photos using the "Create RectBox" command.
• To save, click the Save button. A file containing the box coordinates will
be created.
Finally, we have a data folder in which each label file shares the name of its image, and we can now train the object detector with this data. After annotating the images, we construct a label map with each item's name, ID, and display name; one label entry is created per object class.
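For illustration, the sketch below reads one such Pascal VOC annotation back into Python; the file name is hypothetical, but the tag names (object, bndbox, xmin, ymin, xmax, ymax) are the standard ones produced by LabelImg.

import xml.etree.ElementTree as ET

# Parse a single annotation file produced by LabelImg (name is illustrative).
root = ET.parse('thumbsup.0.xml').getroot()
for obj in root.findall('object'):
    name = obj.find('name').text
    box = obj.find('bndbox')
    xmin, ymin, xmax, ymax = (int(box.find(tag).text)
                              for tag in ('xmin', 'ymin', 'xmax', 'ymax'))
    print(name, xmin, ymin, xmax, ymax)  # class label and box corners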

2.4.2.2 Raw Images and XML Files


Figure 2.11 displays the images kept in the test and train folders; these pictures are used for training and evaluating the detector. Figure 2.12 illustrates changing the kernel to the virtual environment.

FIGURE 2.11 Images kept in the test and train folders.

FIGURE 2.12 Changing kernel to virtual environment.



2.4.3 TensorFlow Object Detection Walk-Through during Implementation

The steps involved in TensorFlow object detection are shown in Figure 2.13.

FIGURE 2.13 Sequence of steps followed during object detection using TensorFlow.

The hand gesture detection system has been tested with images of hands under various circumstances. This section details the system's overall performance: both situations highlighting the system's shortcomings and examples of accurate detection are provided, giving insight into the system's advantages and disadvantages.
When utilized against a plain background, the gesture identification system was reliable and functioned well. Irrespective of the backdrop colour, this accuracy was maintained as long as the background was a plain, solid colour free of any irregularities. Such an understanding of the system's boundaries indicates where further development should focus. Hand gesture recognition for a single gesture, irrespective of background, gives high detection accuracy; the accuracy achieved is higher than 80 per cent. Figures 2.14–2.18 represent the object detection outputs for the Namaste, OK, thumbs down, thumbs up, and victory gestures, respectively.

FIGURE 2.14 Object detection in Namaste gesture.

FIGURE 2.15 Object detection in OK gesture.

FIGURE 2.16 Object detection in thumbs down gesture.

FIGURE 2.17 Object detection in thumbs up gesture.
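To make Steps 14 and 15 of Figure 2.5 concrete, the sketch below restores a trained detector from a checkpoint and runs it on a single image using the TensorFlow Object Detection API; the pipeline config path, checkpoint name, and image file are placeholders for the artefacts produced during training.

import cv2
import numpy as np
import tensorflow as tf
from object_detection.builders import model_builder
from object_detection.utils import config_util

# Rebuild the detection model from its pipeline config and restore weights.
configs = config_util.get_configs_from_pipeline_file('pipeline.config')
model = model_builder.build(model_config=configs['model'], is_training=False)
ckpt = tf.compat.v2.train.Checkpoint(model=model)
ckpt.restore('ckpt-3').expect_partial()  # checkpoint name is illustrative

# Run detection on one gesture image (path is illustrative).
image = cv2.imread('test_gesture.jpg')
tensor = tf.convert_to_tensor(np.expand_dims(image, 0), dtype=tf.float32)
preprocessed, shapes = model.preprocess(tensor)
detections = model.postprocess(model.predict(preprocessed, shapes), shapes)

# Top detections: class indices (offset by 1 vs. the label map) and scores.
print(detections['detection_classes'][0][:3].numpy(),
      detections['detection_scores'][0][:3].numpy())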

2.5 CONCLUSION AND FUTURE SCOPE


In this chapter, we planned and built a gesture recognition system to operate various usability controls. During the analysis phase, we gathered data on the several gesture recognition systems now in use, the methods and pre-trained TensorFlow Zoo models they used, and the success/failure rates of these systems. As a result, we carefully compared various solutions and evaluated their effectiveness. During the design process, we created the project requirements, images, and the data flow diagram. We investigated and examined the various procedures, then evaluated the TensorFlow Zoo models for the same. During the implementation stage, we programmed the system to identify hand gestures and, accordingly, map the motions to particular system actions. To improve project performance, we carried out non-functional testing during the testing phase. Since we did not use conventional database mapping, the system was faster and more effective. The database could be improved, or more gesture photographs added, for the algorithm to recognize. The dataset may be carefully trained to recognize several gestures separately and to determine whether a person in the background is making the movements.

FIGURE 2.18 Object detection in victory gesture.
The project may readily broaden its utility by incorporating more effective techniques, since it has scope across several domains. The following are a few of the areas where our project can be applied. In medical diagnosis, object detection and recognition can be used to find brain cancers in X-ray reports. Similarly, such a technique can be used to recognize a shape within an entire region of an image. Moreover, map conception, creation, dissemination, and analysis are all topics covered in the field of cartography. Likewise, in robotics, object detection draws on the movement of body parts and motion sensors. The operation of other system apps, such as Explorer and Media Player, can also be managed via hand gesture detection technology. Gesture recognition logic, implemented through an object detection strategy, can also be applied in sensitive workplaces like hospitals, where maintaining sterility between equipment and people is essential.
To make the deaf and mute aware of their needs and of what they are missing from the modern world, we are concentrating on adopting various techniques. In future, our focus is on generating speech from the letters shown on the screen. We are also aiming to port the system to Android, since we want our product to be distributed for the benefit of people with disabilities worldwide. Researching cutting-edge mathematical techniques for image processing, and investigating hardware options that would lead to more precise hand detection, would be ideal next steps. In addition to illustrating the various gesture operations that users may perform, this study demonstrated possibilities for streamlining user interactions with desktop computers and system software.

REFERENCES
1. https://github.com/itseez/opencv (Accessed: 13 Oct 2017).
2. https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md.
3. https://iopscience.iop.org/article/10.1088/1757-899X/1045/1/012043.
4. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=object+detecting+with+tensorFlow&btnG=.
5. S. M. M. Roomi, R. J. Priya, and H. Jayalakshmi, “Hand gesture recognition for human-
computer interaction”, Journal of Computer Science, Vol. 6, Iss. 9, pp. 1002–1007,
2010.
6. S. Sharma and S. Singh, “Vision-based hand gesture recognition using deep learning
for the interpretation of sign language”, Expert Systems with Applications, Vol. 182,
pp. 115–157, 2021.
7. P. S. Neethu, R. Suguna, and D. Sathish, “An efficient method for human hand gesture
detection and recognition using deep learning convolutional neural networks”, Soft
Computing, Vol. 24, pp. 15239–15248, 2020.
8. S. Hussain, R. Saxena, X. Han, J. Khan, and H. Shin, “Hand gesture recognition using
deep learning”, 2017 International SoC Design Conference (ISOCC), IEEE, pp. 48–49,
2017.
9. P. Parvathy, K. Subramaniam, P. Venkatesan, et al., “Development of hand gesture
recognition system using machine learning”, Journal of Ambient Intelligence and
Humanized Computing, Vol. 12, pp. 6793–6800, 2021.
10. M. Al-Hammadi, G. Muhammad, W. Abdul, M. Alsulaiman, M. A. Bencherif, T.
S. Alrayes, and M. A. Mekhtiche, “Deep learning-based approach for sign language
gesture recognition with efficient hand gesture representation”, IEEE Access, Vol. 8,
pp. 192527–192542, 2020.
11. P. Trigueiros, F. Ribeiro, and L. P. Reis, “A comparison of machine learning algorithms
applied to hand gesture recognition”, 7th Iberian Conference on Information Systems
and Technologies (CISTI 2012), IEEE, pp. 1–6, 2012.
12. A. Mujahid, M. J. Awan, A. Yasin, M. A. Mohammed, R. Damaševičius, R. Maskeliūnas,
and K. H. Abdulkareem, “Real-time hand gesture recognition based on deep learning
YOLOv3 model”, Applied Sciences, Vol. 11, Iss. 9, pp. 41–64, 2021.
13. G. Devineau, F. Moutarde, W. Xi, and J. Yang, “Deep learning for hand gesture rec-
ognition on skeletal data”, 13th IEEE International Conference on Automatic Face,
Gesture Recognition (FG 2018), pp. 106–113, 2018.
14. A. A. Q. Mohammed, J. Lv, and M. S. Islam, “A deep learning-based end-to-end
composite system for hand detection and gesture recognition”, Sensors, Vol. 19, Iss. 23,
pp. 52–82, 2019.
15. R. M. Gurav and P. K. Kadbe, “Real time finger tracking and contour detection for
gesture recognition using OpenCV”, 2015 International Conference on Industrial
Instrumentation and Control (ICIC), IEEE, pp. 974–977, 2015.

16. V. Harini, V. Prahelika, I. Sneka, and P. Adlene Ebenezer, “Hand gesture recognition using OpenCV and Python”, New Trends in Computational Vision and Bio-inspired Computing: Selected Works Presented at the ICCVBIC. Springer, Coimbatore, pp. 1711–1719, 2020.
17. R. Shrivastava, “A hidden Markov model based dynamic hand gesture recognition sys-
tem using OpenCV”, 3rd IEEE International Advance Computing Conference (IACC),
IEEE, pp. 947–950, 2013.
18. D. H. Pal and S. M. Kakade, “Dynamic hand gesture recognition using Kinect sensor.
2016 International Conference on Global Trends in Signal Processing, Information
Computing and Communication (ICGTSPICC), IEEE, pp. 448–453, 2013.
19. M. Suresh, A. Sinha, and R. P. Aneesh, “Real-time hand gesture recognition using deep learning”, IJIIE – International Journal of Innovations and Implementations in Engineering (ISSN 2454-3489), Vol. 1, December 2019.
20. T. Sharma, S. Kumar, N. Yadav, K. Sharma, and P. Bhardwaj, “Air-swipe gesture rec-
ognition using OpenCV in Android devices”, International Conference on Algorithms,
Methodology, Models and Applications in Emerging Technologies (ICAMMAET),
IEEE, pp. 1–6, 2017.
21. J. Shukla and A. Dwivedi, “A method for hand gesture recognition”, Fourth
International Conference on Communication Systems and Network Technologies,
IEEE, pp. 919–923, 2014.
22. A. S. Ghotkar and G. K. Kharate, “Study of vision based hand gesture recognition
using Indian sign language”, International Journal on Smart Sensing and Intelligent
Systems, Vol. 7, Iss. 1, pp. 96–115, 2014.
23. V. V. Krishna Reddy, et al., “Hand gesture recognition using convolutional neural net-
works and computer vision”, Cognitive Informatics and Soft Computing: Proceeding
of CISC 2021. Springer Nature Singapore, Singapore, pp. 583–593, 2021.
24. N. H. Dardas and N. D. Georganas, “Real-time hand gesture detection and recognition
using bag-of-features and support vector machine techniques”, IEEE Transactions on
Instrumentation and Measurement, Vol. 60, Iss. 11, pp. 3592–3607, 2011.
25. A. Chaudhary, J. L. Raheja, K. Das, and S. Raheja, “Intelligent approaches to interact
with machines using hand gesture recognition in natural way: A survey”, International
Journal of Computer Science and Engineering Survey, Vol. 1303, pp. 22–92, 2013.
26. A. Mujahid, M. J. Awan, A. Yasin, M. A. Mohammed, R. Damaševičius, R. Maskeliūnas,
and K. H. Abdulkareem, “Real-time hand gesture recognition based on deep learning
YOLOv3 model”, Applied Sciences, Vol. 11, Iss. 9, pp. 41–64, 2021.
27. S. S. Kakkoth and S. Gharge, “Survey on real time hand gesture recognition”,
International Conference on Current Trends in Computer, Electrical, Electronics and
Communication (CTCEEC), IEEE, pp. 948–954, 2017.
28. S. Sharma and S. Jain, “A static hand gesture and face recognition system for blind
people”, 6th International Conference on Signal Processing and Integrated Networks
(SPIN), IEEE, pp. 534–539, 2019.
29. L. Brethes, P. Menezes, F. Lerasle, and J. Hayet, “Face tracking and hand gesture
recognition for human-robot interaction”, International Conference on Robotics and
Automation, IEEE, Vol. 2, pp. 1901–1906, 2004.
30. M. P. Nath, S. B. B. Priyadarshini, D. Mishra, and S. Borah, “A comprehensive study of
contemporary IoT technologies and varied machine learning (ML) schemes, soft com-
puting techniques and applications”, Advances in Intelligent Systems and Computing,
Vol. 1248. Springer, Singapore, 2021. https://doi.org/10.1007/978-981-15-7394-1_56
31. S. B. B. Priyadarshini, A. Mahapatra, S. N. Mohanty, et al., “myCHIP-8 emulator:
An innovative software testing strategy for playing online games in many platforms”,
EAI/Springer Innovations in Communication and Computing. Springer, Cham, 2022.
https://doi.org/10.1007/978-3-031-07297-0_9

32. D. Xu, X. Wu, Y. L. Chen, and Y. Xu, “Online dynamic gesture recognition for human
robot interaction”, Journal of Intelligent Robotic Systems, Vol. 77, Iss. 3–4, pp. 583–
596, 2015.
33. S. Rath, S. B. B. Priyadarshini, et al., “A real-time hybrid YOLOV4 approach for multi-
classification and detection of objects”, Journal of Theoretical and Applied Information
Technology, Vol. 101, Iss. 11, pp. 4314–4225, 2023.
34. J. Zhang, et al., “Multi-class object detection using faster R-CNN and estimation of
shaking locations for automated shake-and-catch apple harvesting”, Computers and
Electronics in Agriculture, Vol. 173, pp. 53–84, 2020.
35. S. H. Lee, C. H. Yeh, T. W. Hou, and C. S. Yang, “A lightweight neural network based
on AlexNet-SSD model for garbage detection”, Proceedings of the 2019 3rd High
Performance Computing and Cluster Technologies Conference, pp. 274–278, 2019.
36. K. Chaitanya and G. Maragatham, “Object and obstacle detection for self-driving cars
using GoogLeNet and deep learning”, Artificial Intelligence Techniques for Advanced
Computing Applications: Proceedings of ICACT. Springer, Singapore, pp. 315–322,
2021.
37. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep con-
volutional neural networks”, Communications of the ACM, Vol. 60, Iss. 6, pp. 84–90,
2017.
38. S. M. Abbas and S. N. Singh, “Region-based object detection and classification using
faster R-CNN”, 4th International Conference on Computational Intelligence &
Communication Technology (CICT), IEEE, pp. 1–6, 2018.
3 Enhancing Industrial
Operations through
AI-Driven Decision-
Making in the Era
of Industry 4.0
Prakruthi R Rai, Preethi Nanjundan,
and Jossy Paul George

3.1 INTRODUCTION
The dawn of the Fourth Industrial Revolution, colloquially termed Industry 4.0, marked a pivotal juncture in the annals of industrial development. The simulation of human intelligence in machines, called artificial intelligence (AI), has risen to play a vital role in this new era. AI refers to a broad field of science encompassing computer science, psychology, philosophy, linguistics, and other areas [1]. In today's rapidly evolving technological landscape, Industry 4.0 has emerged as an innovative concept integrating modern technologies such as AI, big data analytics, the Internet of Things (IoT), and cloud computing. This fourth industrial revolution aims to transform traditional manufacturing processes by leveraging smart systems to enhance productivity, performance, and profitability [2]. Rooted in a synergy of modern technologies, Industry 4.0 promotes a vision of interconnected and intelligent production systems. At the nexus of this revolution lies the all-powerful realm of AI, orchestrating a paradigm shift in how industries operate, evolve, and decide [3].
Historically, industrial revolutions have been catalyzed by technological breakthroughs, from the mechanization wrought by steam engines during the First Industrial Revolution to the automation delivered by electronics and IT during the Third. Industry 4.0, however, is distinctive in blurring the lines between the physical and digital spheres. This amalgamation is embodied by cyber-physical systems, decentralized decision-making frameworks, and the ever-present IoT. With their torrent of continuous data streams, such systems necessitate advanced computational mechanisms for interpretation, processing, and response [3].
Enter AI. With its large arsenal of machine learning (ML) algorithms, neural networks, and data analytics tools, AI has emerged as the linchpin of Industry 4.0's aspirations. It paves the way for systems that do not simply respond but predict, adapt, and optimize [4]. Whether forecasting machine wear and tear so that it can be addressed preemptively, optimizing supply chains using real-time global data, or customizing production strategies to meet shifting demands, AI-driven decisions are becoming the lifeblood of modern industrial ecosystems [2]. This chapter seeks to clarify the intricate interplay of AI technology within the overarching framework of Industry 4.0. Through a mix of technical insights, case studies, and forward-looking analyses, we aim to spotlight AI's transformative role in steering industrial decision-making processes toward remarkable performance and precision.
At the heart of modern industry's rapid transformation is AI, a technological juggernaut reshaping the contours of production, operation, and management [5]. The infusion of AI into industrial practices has revolutionized decision-making, transitioning from traditional, often reactive strategies to predictive, proactive, and highly optimized techniques; AI thus has a profound effect on modern industrial decision-making [3].

3.2 BACKGROUND
Within the paradigm of Industry 4.0, AI stands as the computational nucleus, analogous to the brain's role in complex organisms [1]. This industrial landscape is inundated with large datasets from an intricate mesh of sensors, interconnected IoT devices, and cyber-physical systems. AI's prowess in real-time data processing, pattern recognition, and predictive analytics becomes integral, remodeling this raw data deluge into actionable intelligence [3]. Moreover, AI-driven decentralized decision-making algorithms empower individual components within these environments to autonomously derive and execute decisions, ensuring an orchestrated and cohesive operational flow. Cognitive automation, underpinned by AI, seamlessly melds machine performance with human-like reasoning, enabling nuanced judgments beyond conventional automation paradigms [6].
Furthermore, intuitive human-machine interfaces, bolstered by advanced natural language processing (NLP) frameworks, strengthen the symbiotic interplay between human expertise and automated systems [3]. Concurrently, AI-driven security protocols protect the industrial network, ensuring data integrity and thwarting potential breaches in this hyper-connected environment. As Industry 4.0 evolves, machine learning mechanisms enable continuous adaptive learning, optimizing operations and refining decision matrices. AI both supports and governs the elaborate interaction of components within Industry 4.0, orchestrating a future characterized by unprecedented performance and adaptability [7].

3.2.1 Role of AI in Processing Vast Amounts of Data from Interconnected Devices

In today's age of relentless digital integration and pervasive connectivity, a burgeoning array of interconnected devices, encompassing industrial sensors, complex biomedical monitors, and multifaceted IoT nodes, continuously feeds an overwhelming reservoir of multi-modal data [3]. This ceaseless surge of data, while valuable, introduces formidable challenges, especially regarding the correct assimilation of the data, its nuanced analysis, and the imperative of making well-timed, strategic decisions. AI has ascended as the pivotal force that interfaces with this data cascade [2]. It methodically orchestrates sophisticated high-throughput data ingestion pipelines, adeptly handling the influx and ensuring a coherent aggregation of variegated data modalities. Beyond mere ingestion, AI streamlines the essential preprocessing stage with unparalleled automation, encompassing tasks such as meticulous data cleansing, normalization, and the intricate extraction of features, all foundational for subsequent analysis [2].
When faced with the vast landscapes of this processed data, AI, fortified by advanced ML architectures, pushes beyond the boundaries of conventional cognitive capability [4]. It discerns and deciphers the labyrinthine patterns woven within these vast data repositories. Through this lens of advanced cognition, AI becomes instrumental in detecting anomalies, acting as a vigilant sentinel that alerts industries to potential operational aberrations, security breaches, or the emergence of novel, previously uncharted data trends. In a world driven by immediacy, the imperative for real-time data analysis is paramount [8]. Here, AI frameworks, renowned for their prowess in parallel processing, rise to the challenge, facilitating analyses that are not merely prompt but profoundly insightful, thereby shaping rapid response strategies and driving the instantiation of automated actions. Beyond the present, AI dazzles in its foresight [4]. Through predictive analytics, AI models, sculpted and refined on historical data, cast their gaze forward, prognosticating approaching events, from equipment requiring maintenance in industrial settings to evolving customer preferences in dynamic markets [9].
Yet, as data volumes swell, so does the need for efficient data management. AI meets this requirement head-on, introducing nuanced data reduction techniques such as feature selection and dimensionality reduction, optimizing the very representation of the data. This ensures not just efficient storage but also expedited processing, crucial in real-time applications [10]. Additionally, in an era teeming with textual data, AI's linguistic capabilities, rooted in NLP, undertake the semantic dissection of voluminous textual datasets [3]. This capacity underpins myriad applications, from sentiment analysis to nuanced entity recognition and contextual categorization. In tandem with these capabilities, AI showcases its brilliance in data management: it does not just store data but also optimizes its storage footprint and retrieval mechanisms [8]. By producing sophisticated data utilization models, AI ensures that storage architectures align seamlessly with usage patterns, keeping the right data readily available. In encapsulation, AI's centrality in processing and harnessing the torrential output of interconnected devices is both transformational and unprecedented [7]. It stands as the cognitive epicenter, intricately weaving together data fusion, analytical depth, and predictive foresight, and, in doing so, steers modern data-intensive ecosystems toward a horizon replete with insights and operational excellence [6].
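As a concrete, minimal sketch of the preprocessing chain described above (cleansing, normalization, and dimensionality reduction), the following scikit-learn pipeline is illustrative only; the sensor array is randomly generated stand-in data, not from any real deployment.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for multi-modal IoT sensor readings (1000 samples, 50 channels).
sensor_readings = np.random.rand(1000, 50)

pipeline = Pipeline([
    ('clean', SimpleImputer(strategy='median')),  # data cleansing (missing values)
    ('scale', StandardScaler()),                  # normalization
    ('reduce', PCA(n_components=10)),             # dimensionality reduction
])
features = pipeline.fit_transform(sensor_readings)  # compact feature matrix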

3.2.2 The Shift from Reactive to Proactive (Predictive and Prescriptive) Decision-Making

The underpinnings of decision-making methodologies have undergone a transformative evolution, marking a distinct pivot from traditionally reactive models to decisively proactive frameworks. Historically rooted in a reactive stance, industries predominantly relied on post-event analyses, often leading to suboptimal resource allocation and latency in redressing problems, akin to post hoc remedies rather than preemptive interventions. However, with the advent and convergence of sophisticated data analytics, AI, and intricate machine learning architectures, a paradigm emerged in which patterns distilled from historical data could prognosticate future conditions [2]. This predictive modality allows for anticipatory strategies, exemplified in scenarios such as predictive maintenance in manufacturing domains, where potential machinery downtimes are preemptively identified, mitigating unscheduled operational halts. Yet the zenith of proactive decision-making is epitomized in prescriptive analytics [11]. Transcending mere forecasts, this advanced analytical model synthesizes data, algorithmic models, and computational heuristics to prognosticate and delineate optimal courses of action tailored to those predictions. Such prescriptive methodologies could, for instance, dynamically optimize supply chain processes by realigning resources in response to expected demand surges, ensuring efficiency and resilience [12]. The metamorphosis from reactive responses to a symbiosis of predictive and prescriptive techniques signifies an era in which industries are not merely responsive but strategically poised, harnessing data-driven insights to navigate complexities with unparalleled precision and foresight [13].
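As a deliberately tiny illustration of the prescriptive step (as opposed to merely predicting demand), the sketch below chooses how two production lines should split a forecast demand at minimum cost, using scipy's linear programming solver; all quantities are made-up numbers.

from scipy.optimize import linprog

# Per-unit production cost on line A and line B (illustrative figures).
cost = [2.0, 3.0]
# Capacities: line A can make at most 80 units, line B at most 60.
bounds = [(0, 80), (0, 60)]
# Meet the forecast demand of 100 units: x_A + x_B >= 100, expressed in
# linprog's A_ub @ x <= b_ub form as -x_A - x_B <= -100.
result = linprog(c=cost, A_ub=[[-1.0, -1.0]], b_ub=[-100.0], bounds=bounds)
print(result.x)  # optimal split, here [80., 20.]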

3.3 BENEFITS OF AI-DRIVEN DECISION-MAKING IN INDUSTRY 4.0

Within the evolving landscape of Industry 4.0, characterized by the integration of cyber-physical systems, IoT networks, and complex digital twin frameworks, the infusion of AI into decision-making processes emerges as a transformative lever [3]. This AI-augmented decision-making offers a plethora of multifaceted benefits. Central to these is a remarkable enhancement in operational performance: AI algorithms, adept at real-time data assimilation and processing, empower industries to seamlessly optimize operations, from automating routine tasks to dynamically modulating production schedules based on intricate demand predictions [7]. This capability extends to predictive maintenance, wherein sophisticated ML models meticulously analyze sensor-derived data to prognosticate equipment health, ushering in a drastic reduction in unanticipated downtimes and the associated maintenance expenditure [2]. Furthermore, the confluence of AI with advanced image recognition technologies revolutionizes quality control, ensuring meticulous defect detection with better precision than traditional human-led inspection. AI's profound impact on supply chain management is manifested through its capability to analyze vast data streams, ranging from logistical nuances to raw material metrics, enabling real-time adjustments and strategic forecasting that keep supply chain operations fluid [8]. Moreover, with their inherent adaptability, AI-driven systems pivot toward energy-efficient methodologies, calibrating consumption based on real-time needs and consistently pinpointing wastage vectors, aligning industrial processes with sustainability imperatives [2].
One of the more nuanced advantages lies in AI's capacity for bespoke production adaptation, wherein algorithms, attuned to market oscillations and customer feedback, tailor production paradigms to align with dynamic market demands. This AI-driven foresight further encompasses risk mitigation, offering preemptive strategies based on historical data trajectories, current operational trends, and external geopolitical or market stimuli [8]. From a safety perspective, deploying AI-empowered robotic entities in high-risk industrial domains minimizes human exposure to hazards, while real-time monitoring algorithms stand sentinel, providing immediate alerts for potential safety breaches. Beyond these tangible operational facets, AI integration offers a cerebral advantage, distilling vast data repositories into actionable strategic insights that inform and refine high-level organizational decisions [6]. Ultimately, these advantages culminate in tangible cost efficiencies, bolstering the financial robustness of industries navigating the complexities of Industry 4.0. The infusion of AI-driven decision-making within the industrial sector signifies a paradigmatic shift, intertwining intelligence with operational substrates and setting the trajectory for an adaptive, efficient, and growth-centric industrial future [14].

3.3.1 Real-Time Data Processing and Instant Decision-Making


In the contemporary digital arena, characterized by ceaseless data proliferation and heightened demands for instantaneous responses, the synthesis of real-time data processing with immediate decision-making emerges as a pivotal axis [5]. Leveraging the prowess of edge computing, advanced in-memory databases, and sophisticated stream processing platforms like Apache Kafka and Apache Flink, the processing paradigm has evolved to accommodate the immediate assimilation and interpretation of data as it inundates systems [6]. This instantaneous processing is further bolstered by the computational capabilities of cutting-edge microprocessors, which parse and analyze vast data streams with unparalleled alacrity. In parallel, decision-making has witnessed a paradigmatic shift driven by intricate ML models and AI algorithms [10]. These decision matrices, honed on vast historical data repositories, are adept at extrapolating insights from real-time processed data, facilitating split-second determinations. For instance, in high-stakes stock trading, these algorithms dynamically assess market nuances to execute timely trades [6], while in the healthcare sector, continuous patient monitoring systems leverage this immediacy to dispatch critical alerts. This symbiotic confluence of real-time processing and instant decision-making augments operational efficiency by eliminating systemic lags and enhances decision accuracy, underpinned by the comprehensive and contemporaneous nature of the data being processed [11]. Furthermore, such a synergistic approach imbues organizations with unprecedented agility, allowing them to navigate the complexities of the digital landscape with enhanced responsiveness. As the digital epoch evolves, the intertwined trajectories of real-time data processing and instantaneous decision-making stand at the forefront, setting the gold standard for operational efficiency, precision, and adaptability [13].
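As a minimal illustration of this ingest-then-decide pattern, the sketch below consumes machine telemetry from a Kafka topic with the kafka-python client and applies an instant rule to each record as it arrives. The broker address, topic name, field names, and threshold are all illustrative assumptions.

import json
from kafka import KafkaConsumer

# Subscribe to a (hypothetical) telemetry topic; each message is a JSON record.
consumer = KafkaConsumer(
    'machine-telemetry',
    bootstrap_servers='localhost:9092',
    value_deserializer=lambda m: json.loads(m.decode('utf-8')))

for record in consumer:
    reading = record.value
    # Decide the moment the reading arrives instead of waiting for a batch job.
    if reading.get('vibration_rms', 0.0) > 4.5:
        print('ALERT: abnormal vibration on machine', reading.get('machine_id'))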

3.3.2 Predictive Maintenance and Reduced Downtimes


Amidst the proliferation of Industry 4.0 methodologies, equipment maintenance has undergone a profound transformation, pivoting sharply from traditional reactive models to the advanced paradigm of predictive maintenance. At the core of this evolution lies the amalgamation of sophisticated machine-learning algorithms, intricate sensor technologies, and voluminous historical datasets [2]. Sensors, seamlessly integrated into machinery, incessantly capture many operational metrics, from temperature fluctuations and vibration patterns to acoustic emissions and electrical behaviors. This real-time data is continually funneled into advanced analytical frameworks, which juxtapose it against vast historical data repositories that meticulously chart each piece of equipment's life cycle and performance nuances. ML algorithms trained on these datasets exhibit the capability to discern subliminal patterns and anomalies, effectively prognosticating potential equipment failures or degradation long before they manifest overtly [2]. Such preemptive insights allow organizations to schedule maintenance activities proactively, aligning them squarely with predicted equipment health trajectories rather than reacting post-failure. The tangible consequence of this predictive stance is a marked reduction in unscheduled downtimes, ensuring continuous operational flow and minimizing resource wastage [2]. Furthermore, the ability to preemptively address wear and tear enhances the longevity of equipment, ensuring sustained optimal performance. In summation, the integration of predictive maintenance methodologies in modern industrial ecosystems epitomizes the convergence of data analytics and operational efficiency and stands as a testament to the industry's commitment to minimizing downtimes, optimizing resource allocation, and fostering prolonged equipment efficacy [9].
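A toy sketch of this idea follows: a model learns normal operating behaviour from historical sensor data and flags a live reading that deviates from it. The data are randomly generated stand-ins, and the feature set (temperature, vibration, acoustic level) and figures are illustrative assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

# Historical "healthy" readings: temperature, vibration, acoustic level.
rng = np.random.default_rng(0)
healthy = rng.normal(loc=[60.0, 1.2, 35.0], scale=[2.0, 0.1, 1.5], size=(5000, 3))

# Learn the envelope of normal behaviour from the historical data.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

latest = np.array([[74.0, 2.9, 41.0]])  # a suspicious live reading
if model.predict(latest)[0] == -1:      # -1 marks an outlier
    print('Schedule maintenance: reading deviates from learned normal behaviour')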

3.4 CASE STUDIES


3.4.1 Smart Manufacturing—How AI Optimizes
Production Lines in Real Time
In the dynamic arena of global automotive manufacturing, the integration of AI into
production line management has heralded a transformative shift in operational effi-
ciency and product quality [6]. A case in point is a leading automotive conglomerate’s
ambitious endeavor to harness AI for real-time optimization of its intricate produc-
tion processes [12]. Equipped with a sprawling manufacturing ecosystem character-
ized by multifaceted assembly lines, the brand grappled with challenges ranging
from fluctuating demand alignment to resource allocation inefficiencies. Their
pioneering solution entailed the deployment of a dense sensor network across the
production continuum, generating a ceaseless stream of data encompassing param-
eters such as equipment status, throughput rates, and intermediate product quality
[15]. This voluminous real-time data, when channeled into a centralized AI frame-
work fortified with advanced ML algorithms, birthed an adaptive system capable
of dynamic recalibrations [16]. Trained on vast repositories of historical production
data, the AI system exhibited a nuanced capability to not only predict imminent
bottlenecks and maintenance requisites but also to orchestrate on-the-fly adjustments
to the production parameters [8]. The automotive titan’s strategic integration of AI
into its manufacturing processes crystallizes the profound potential of melding com-
putational intelligence with industrial operations, signposting a future where real-
time AI-driven interventions become the bedrock of manufacturing excellence [8].

3.4.2 Case Study 2: Predictive Maintenance—Using AI to Anticipate and Prevent Machinery Failures

In the intricate tapestry of modern industrial operations, where equipment efficacy
is paramount, the role of predictive maintenance, powered by AI, has burgeoned as
a strategic linchpin [3]. A salient example is a renowned global steel manufactur-
ing company, historically beleaguered by substantial machinery downtimes, which
not only eroded productivity but also inflated operational costs. Confronted by the
inherent limitations of traditional maintenance routines—typically characterized
by scheduled check-ups or reactionary repairs—the company sought a more antici-
patory solution. Their chosen strategy revolved around harnessing AI’s prowess
in data-driven predictive modeling [10]. Initiating this transformative journey, the
company outfitted its machinery with a comprehensive array of sensors, continually
monitoring variables such as temperature, vibration spectra, and acoustic signatures
[12]. This incessant stream of real-time data was funneled into a sophisticated AI
system, equipped with ML algorithms trained on extensive historical maintenance
and failure records. The resultant model was adept at distilling patterns and nuances
from the sensor data, effectively prognosticating potential equipment malfunctions
with remarkable precision [16]. With the integration of this system, the steel manu-
facturer could preemptively identify wear and tear, subsequently orchestrating tar-
geted maintenance endeavors well before any catastrophic failure.

3.4.3 Case Study 3: AI in Quality Control—Ensuring Product Quality through AI-Powered Visual Inspections
Within the multifaceted arena of modern manufacturing, quality assurance remains
an indomitable cornerstone, with ramifications spanning brand reputation, customer
satisfaction, and operational efficiency [17]. A compelling illustration of innovation
in this domain emerges from a top-tier electronics manufacturing firm, perennially
grappling with the monumental challenge of ensuring product uniformity and quality
across millions of units. Traditional quality control mechanisms, predominantly reli-
ant on human inspectors or rudimentary automated systems, exhibited constraints
in scalability, accuracy, and consistency [18]. Recognizing the imperatives of tech-
nological advancement, the firm embarked on integrating AI into its quality control
procedures, specifically targeting visual inspections. The bedrock of this initiative
was the deployment of high-resolution cameras across production lines, meticulously
capturing detailed images of each product in real time [18]. These images were then
relayed to a centralized AI system, underpinned by sophisticated deep-learning
models. Trained rigorously on vast datasets comprising both flawless and defective
units, the AI system evolved to recognize an expansive spectrum of anomalies from
minute cosmetic blemishes to intricate functional discrepancies. The electronics
firm’s strategic pivot to AI-empowered visual inspections epitomizes the confluence
of technology with quality assurance, delineating a future where product perfection
is not an aspirational ideal but a consistently realized standard [17].
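A hedged sketch of such an inspection model is given below, using transfer learning in Keras: a frozen ImageNet backbone with a small binary head distinguishing flawless from defective units. The directory name, image size, and architecture are assumptions for illustration, not details of the firm's actual system.

```python
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "inspection_images/",        # assumed subfolders: flawless/, defective/
    image_size=(224, 224),
    batch_size=32,
)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False           # reuse ImageNet features; train only the head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # P(defective)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Freezing the backbone keeps training cheap enough to re-run as new defect categories appear, which matters on a production line where the anomaly spectrum evolves.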

3.5 CHALLENGES AND CONSIDERATIONS IN THE INTERSECTION OF AI AND INDUSTRY 4.0
The convergence of AI and Industry 4.0, emblematic of the digital evolution of the
twenty-first century, holds the potential for a paradigm shift. This fusion envisions
a landscape characterized by interconnected systems, smart automation, and remarkable
operational efficiencies. However, this transformative prospect is entwined with a
tapestry of demanding challenges and intricate considerations that require thorough
examination [5]. These hurdles span from the ethical quandaries posed by autonomous
decision-making to the intricate task of harmonizing legacy systems with modern AI
frameworks. While the trajectory toward an adept, AI-powered industrial sector is
promising, its course is shaped by the complexities inherent in this amalgamation.
This discourse navigates those challenges, shedding light on the multifaceted elements
that industry leaders, technologists, and policymakers must adeptly navigate as they
steer the industrial sphere toward this horizon of both promise and complexity [19].
In the realm of Industry 4.0, where the orchestration of cyber-physical systems,
IoT devices, and advanced computational infrastructures is expected, interoperability
emerges as a pivotal yet formidable obstacle [2]. Interoperability essentially refers
to the ability of disparate systems, devices, and applications to seamlessly
communicate, collaborate, and function cohesively, regardless of their divergent
origins and underlying architectures. In the pursuit of digital transformation,
industries often encounter an intricate mosaic of legacy systems, proprietary
platforms, and modern devices, each characterized by its particular communication
protocols, data structures, and operational intricacies. The integration of this
inherent heterogeneity becomes a challenging task [10]. For instance, a legacy sensor
system entrenched in a production facility may employ communication protocols
incompatible with those of a brand-new AI-driven analytics platform, invariably
leading to isolated data silos and operational inefficiencies [19]. Similarly, devices
from disparate manufacturers may adhere to divergent data transmission standards or
security protocols, thereby impeding the seamless flow of data and augmenting
vulnerability to security breaches. Furthermore, the absence of standardized norms
in IoT device fabrication and software development exacerbates these disparities,
culminating in complex system integrations that regularly necessitate intermediary
solutions or gateways to bridge the communication gaps [14]. This engenders escalated
complexity and financial overheads in integration efforts, while simultaneously
introducing latency and additional points of failure. Amidst the overarching vision
of Industry 4.0, which envisages harmonized interconnectivity, the underpinning
challenges associated with interoperability underscore the imperative of steadfast
industry standards, cooperative processes, and adaptive middleware solutions to
authentically materialize a cohesive industrial ecosystem [15].
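As a minimal illustration of the middleware role described above, the sketch below normalizes readings from two hypothetical device families into a common schema. Both payload formats and all field names are invented for illustration; a real gateway would also handle transport protocols, not just data shapes.

```python
def from_legacy_plc(raw: dict) -> dict:
    # Assumed legacy controller format: Fahrenheit and epoch seconds.
    return {
        "device_id": raw["plc_id"],
        "temperature_c": (raw["temp_f"] - 32) * 5 / 9,
        "timestamp": raw["ts"],
    }

def from_modern_iot(raw: dict) -> dict:
    # Assumed newer sensor format: Celsius and milliseconds.
    return {
        "device_id": raw["sensor"],
        "temperature_c": raw["celsius"],
        "timestamp": raw["ms"] / 1000.0,
    }

ADAPTERS = {"legacy_plc": from_legacy_plc, "modern_iot": from_modern_iot}

def normalize(source: str, payload: dict) -> dict:
    """Bridge the protocol gap: route each payload through its adapter."""
    return ADAPTERS[source](payload)

print(normalize("legacy_plc", {"plc_id": "A7", "temp_f": 140.0, "ts": 1700000000}))
print(normalize("modern_iot", {"sensor": "B2", "celsius": 60.1, "ms": 1700000000500}))
```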
In the sweeping landscape of Industry 4.0, characterized by ubiquitous connectivity
and data-driven operational paradigms, the cardinal concerns of data security and
integrity become critical imperatives [19]. Within this hyper-connected milieu,
defined by the complex interaction of devices, platforms, and systems, the canvas
for potential cyber threats expands dramatically, culminating in a spectrum of
multifarious vulnerabilities. The intricate web of sensors, IoT devices, cloud
architectures, and edge computing nodes, even as it facilitates instantaneous data
exchange and analytical prowess, simultaneously presents an array of potential
access points for malicious entities [6]. Data in transit, traversing this expansive
digital fabric, remains at risk of interception, eavesdropping, and
man-in-the-middle attacks, thereby undermining both its confidentiality and
integrity. Moreover, the heterogeneous spectrum of devices within such environments,
each harboring distinct firmware, operating systems, and communication protocols,
engenders challenges in implementing uniform security standards and timely
patching mechanisms [16]. Data at rest is further imperiled; with decentralized
storage solutions gaining traction, the assurance of data repository sanctity and
immutability attains paramount importance. Beyond external threats, the paradigm
of hyper-connectivity amplifies concerns linked to data integrity, wherein
inadvertent anomalies, misconfigurations, or systemic breakdowns can proliferate,
culminating in data aberrations or loss [2]. In this context, the demand for sturdy
encryption mechanisms, adaptive threat detection frameworks, continuous
vulnerability assessments, and rigorous access control intensifies. Furthermore, an
insistent need surfaces for the formulation of rigorous industry benchmarks and best
practices, ensuring a unified approach to safeguarding the expansive and intricate
hyper-connected industrial ecosystem [14]. In summation, even as the digital
convergence heralded by Industry 4.0 promises operational transcendence, it
concurrently mandates an augmented and sophisticated vigilance toward the realms of
data protection and integrity [3].
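As one concrete ingredient of such safeguards, the sketch below shows authenticated symmetric encryption of a sensor payload using the Fernet recipe from the third-party cryptography package (pip install cryptography). The payload is a made-up example and key handling is deliberately simplified; production key management is out of scope here.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, fetched from a secrets manager
cipher = Fernet(key)

reading = b'{"device": "press-12", "vibration_mm_s": 2.7}'
token = cipher.encrypt(reading)  # authenticated encryption: tampering is detected
print(token[:20], b"...")

# Decryption verifies integrity; a corrupted token raises InvalidToken.
assert cipher.decrypt(token) == reading
```

Because Fernet authenticates as well as encrypts, it addresses both the confidentiality and the integrity concerns raised above for data in transit and at rest.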

3.6 THE ROAD AHEAD: FUTURE TRENDS AND PROSPECTS


As the contours of Industry 4.0 continue to evolve, shaping the nexus of technology
and industrial operations, the horizon is illuminated with a plethora of trends
and potentialities that promise to redefine the very fabric of manufacturing and
innovation [5]. Foremost among these tendencies is the deepening integration of
cyber-physical systems, where the demarcation between digital simulations and
physical operations becomes increasingly blurred, fostering real-time feedback
loops and adaptive adjustments. Augmented reality (AR) and virtual reality (VR) are
poised to become crucial tools, revolutionizing training, maintenance,
and design processes by supplying immersive, interactive environments that bridge
the gap between digital designs and tangible products [8].
The concept of the digital twin, a real-time virtual replica of a physical
asset, will mature further, permitting industries to run simulations, anticipate wear
and tear, and optimize performance without physical interventions [2]. Moreover,
the adoption of edge computing is predicted to grow, decentralizing data
processing and placing it closer to the source, ensuring reduced latency
and more real-time decision-making [8]. As quantum computing moves from
the theoretical to the practical, its profound computational capabilities could
revolutionize areas from materials science to optimization problems, heralding
breakthroughs previously deemed impossible [11]. Yet, as these technological marvels
unfold, an equally significant trend will be the heightened emphasis on
sustainability and ethical concerns. Circular economy concepts, emphasizing resource
efficiency, waste reduction, and sustainable manufacturing, will become central
tenets of Industry 4.0. Concurrently, with AI playing a pivotal role, ethical AI
frameworks addressing biases, transparency, and decision accountability will gain
paramount importance [9].

3.6.1 Evolution of AI Algorithms for Even More Efficient Decision-Making
In the sprawling tapestry of AI that has been unfolding for decades, the algorithms
at its center have steadily metamorphosed, evolving in complexity, adaptability,
and performance. This evolutionary trajectory is especially palpable in
the realm of decision-making, a domain essential to industries, organizations, and even
societal frameworks [2]. Historically, AI decision-making was rooted in rule-based
systems, where choices were an immediate outcome of predefined, hard-coded rules.
However, the advent of machine learning ushered in an era in which algorithms could
learn from data, refining their decision-making processes based on patterns and
correlations. Neural networks, inspired by the structure of the human brain, further
extended this paradigm, introducing layers of interconnected nodes capable of
processing and learning from significant datasets [2]. The recent surge in deep
learning, a subset of machine learning, has been particularly transformative. Complex
architectures like convolutional neural networks (CNNs) for image recognition or
recurrent neural networks (RNNs) for sequence data have enabled AI to make decisions
based on complex data structures, from visual content to time series data [5].
Furthermore, the emergence of reinforcement learning, wherein algorithms iteratively
learn by interacting with an environment and receiving feedback, has unlocked
exceptional levels of adaptability and performance in decision-making, be it in game
playing, robotics, or optimization tasks [15]. Yet, the horizon beckons further
evolution. Transfer learning, in which models trained on one task are adapted for a
distinct yet related task, promises to lessen computational expense and accelerate
deployment. Quantum machine learning, though nascent, holds the potential to process
and analyze data at scales currently unimaginable, thereby refining decision-making
strategies at unprecedented granularities [20]. Additionally, explainable AI (XAI) is
paving the way for algorithms that not only make efficient decisions but also
elucidate the reasoning behind them, bridging the trust gaps regularly associated
with AI. In essence, as AI algorithms continue their relentless march of evolution,
the future of decision-making stands on the cusp of a revolution [20]. A confluence
of efficiency, adaptability, and transparency is poised to redefine how decisions,
both granular and macroscopic, are orchestrated across myriad domains.
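The reinforcement-learning loop mentioned above can be illustrated with a minimal tabular Q-learning sketch. The five-state "corridor" task, rewards, and hyperparameters are invented purely for illustration; real industrial tasks are far richer.

```python
import random

N_STATES, GOAL = 5, 4            # states 0..4, reward at the right end
alpha, gamma = 0.5, 0.9
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; 0 = left, 1 = right

random.seed(0)
for _ in range(300):                       # episodes
    s = 0
    while s != GOAL:
        a = random.randrange(2)            # uniform exploration (Q-learning is off-policy)
        s2 = min(max(s + (1 if a else -1), 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Update: nudge Q toward immediate reward plus discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(["left" if q[0] > q[1] else "right" for q in Q[:GOAL]])  # learned greedy policy
```

Even under a purely random behavior policy, the learned greedy policy points right in every state, illustrating how feedback from the environment alone shapes the decision rule.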

3.6.2 Integration of Augmented Reality (AR) and Virtual Reality (VR) in Industry 4.0 Driven by AI

In the epoch of Industry 4.0, where the convergence of technology and industrial
operations is rewriting traditional paradigms, the integration of AR and VR stands as
a transformative hallmark [19]. The profound fusion of these immersive realities is
further amplified by the driving force of AI, shaping a new paradigm that redefines
training, design, maintenance, and collaboration within industrial ecosystems.
Augmented reality, which overlays digital information onto the real-world
environment, and VR, which creates completely virtual immersive environments, have
traditionally been perceived as distinct realms [7]. However, AI serves as the
cohesive element that harmonizes these realities, rendering them not just
complementary but synergistic. AI-driven computer vision algorithms enable AR to
recognize and interact with real-world objects, facilitating context-aware
information overlays. Moreover, AI-powered natural language processing empowers AR
and VR interfaces to interpret and respond to spoken instructions, fostering
intuitive human-computer interactions [2]. In the realm of training and education,
this amalgamation shines. AI-enhanced VR simulations permit trainees to immerse
themselves in realistic scenarios, learning how to operate complex machinery,
troubleshoot problems, or navigate complicated manufacturing lines, all within a
safe, controlled environment. The adaptive capabilities of AI-driven VR ensure that
the training evolves alongside the trainee's progress, optimizing the learning curve
[9]. For maintenance and design, the integration is equally profound. AR interfaces
equipped with AI-driven object recognition can guide technicians through complex
repair procedures, superimposing step-by-step instructions onto real equipment. In
design processes, AI algorithms can interpret user gestures and sketches,
transforming rough concepts into detailed 3D models within a VR environment,
catalyzing innovation and iteration. Collaboration, too, is redefined [13].
Geographically distributed teams can virtually converge in AI-enhanced VR spaces,
interacting with dynamic 3D models, prototypes, and simulations, fostering
collaborative ideation and problem-solving. However, challenges persist [16].
Achieving seamless synchronization between AI-driven real-time data, AR/VR
interfaces, and industrial systems calls for meticulous engineering. Privacy issues,
ethical considerations, and data security in AI-mediated immersive environments also
necessitate stringent protocols [15]. In essence, the integration of AI-driven AR and
VR within Industry 4.0 is a harmonious symphony of technologies, redefining
human-machine interaction, knowledge dissemination, design ideation, and
collaborative innovation, shaping a future where reality and virtuality are
indistinguishably woven [12].

3.6.3 Ethical Considerations as AI Takes a Central Role in Decision-Making Processes
The relentless ascendancy of AI in the contemporary technological panorama,
particularly its deepening roots in decision-making processes across sectors, brings
to the fore a complex tapestry of ethical issues [6]. As algorithms begin to hold sway
over areas traditionally reserved for human discretion, from medical diagnoses
and financial lending to recruitment and criminal justice, the imperatives of
ethical, transparent, and responsible AI have never been more pronounced. Central to
these concerns is the challenge of bias and fairness [2]. ML models, the heart of many
AI systems, are trained on vast datasets, frequently reflective of historical or
societal biases. Without meticulous scrutiny and correction, those biases can be
unwittingly perpetuated and amplified by AI, leading to skewed, unjust, or
discriminatory decisions [10]. For instance, a recruitment AI trained on historically
biased data may inadvertently favor certain demographics over others, exacerbating
existing disparities. Transparency and interpretability form another important axis.
The rise of complex deep-learning models, regularly termed "black boxes" due to
their opaqueness, raises issues about the comprehensibility of AI decisions [7].
Stakeholders, from affected individuals to policymakers, require comprehensible
explanations for AI-driven decisions, especially in high-stakes scenarios like
healthcare or legal settings. The burgeoning field of XAI aims to address this,
striving to make AI models more interpretable and their decisions better elucidated
[2]. Furthermore, the autonomy endowed upon AI in decision-making amplifies concerns
about accountability. In scenarios wherein AI-driven decisions cause unfavorable or
dangerous results, assigning responsibility becomes difficult. Is the onus on the
developers, the operators, the data curators, or the AI itself? Regulatory frameworks
and robust governance models are needed to delineate clear lines of responsibility.
Lastly, privacy and data protection are paramount [7]. AI's voracious appetite for
data, often personal or sensitive, necessitates rigorous data protection measures.
Moreover, as AI systems begin to make predictions or inferences about people,
ethical quandaries about consent, data ownership, and the right to explanation arise
[10]. As AI's role in decision-making burgeons, a harmonized dance of technological
innovation and ethical diligence becomes vital. Balancing the transformative
potential of AI with the imperatives of justice, transparency, accountability, and
human dignity will shape not simply the future of AI but also the very fabric of our
data-driven society.

3.7 CONCLUSION
The ascent of AI within the digital revolution brings both immense potential and
important challenges, particularly in the context of Industry 4.0. This phase
of industrial evolution, marked by interconnected cyber-physical systems and
the IoT, sees AI as the driving force behind unprecedented transformations. AI's
cognitive abilities, fueled by elaborate algorithms and data-driven methods, bridge
the gap between raw data and actionable insights [16]. This is exemplified by neural
networks and deep learning, permitting real-time evaluation of sensor data to
optimize processes and predict equipment failures. Additionally, AI optimizes supply
chains in logistics, while generative AI models personalize product design [11].
However, realizing this potential requires addressing inherent complexities. The
need for comprehensible AI decisions necessitates XAI methods, especially in
critical scenarios [2]. Moreover, data safety and privacy must be upheld via robust
encryption and ethical frameworks. This transformative trajectory embodies not
simply technical evolution but a new era of industrial performance, adaptability,
and innovation underpinned by both computational intelligence and ethical
considerations [7]. In this complicated landscape, stakeholders hold the key to
shaping a harmonious future. Industry leaders, policymakers, workers, and consumers
all play pivotal roles. Embracing AI's benefits requires openness to innovation,
upskilling, and adapting business models. Yet, challenges like ethics, job
displacement, and data security demand equal attention. Industry leaders can set up
ethical AI committees and retraining programs [6]. Policymakers must create agile
policies that defend rights and privacy while promoting innovation. Frontline
workers must embrace upskilling for the evolving job market. Ultimately, Industry
4.0's AI-driven landscape requires proactive collaboration [2]. By capitalizing on
AI's benefits while actively addressing its challenges, stakeholders can ensure
balanced growth, inclusivity, and ongoing innovation in this new industrial era.

REFERENCES
1. Noreen, U., Shafique, A., Ahmed, Z., & Ashfaq, M. A. (2023). Banking 4.0: Artificial
Intelligence (AI) in Banking Industry & Consumer’s Perspective. Sustainability, 15,
3682. 10.3390/su15043682
2. Kagermann, H., Wahlster, W., & Helbig, J. (2013). Recommendations for implement-
ing the strategic initiative INDUSTRIE 4.0: Final report of the Industrie 4.0 Working
Group. Forschungsunion, 1–44.
3. Crowston, K., Allen, E. E., & Heckman, R. (2012). Using Natural Language Processing
Technology for Qualitative Data Analysis. International Journal of Social Research
Methodology, 15(6), 523–543.
4. Arinez, J. F., Chang, Q., Gao, R. X., Xu, C., & Zhang, J. (Nov. 2020). Artificial
Intelligence in Advanced Manufacturing: Current Status and Future Outlook.
ASME Journal of Manufacturing Science and Engineering, 142(11), Art. no. 110804.
10.1115/1.4047855
5. Lai, Z.-H., Tao, W., Leu, M. C., & Yin, Z. (Apr. 2020). Smart Augmented Reality
Instructional System for Mechanical Assembly Towards Worker-Centered Intelligent
Manufacturing. The Journal of Manufacturing Systems, 55, 69–81.
6. Wu, D., Jennings, C., Terpenny, J., Gao, R. X., & Kumara, S. (2017). A Comparative
Study on Machine Learning Algorithms for Smart Manufacturing: Tool Wear Prediction
Using Random Forests. The Journal of Manufacturing Science and Engineering,
139(7), 1–10.
7. Brynjolfsson, E., & McAfee, A. (2017). The Business of Artificial Intelligence. Harvard
Business Review, 95(1), 62–72.
8. Brynjolfsson, E., & McAfee, A. (2011). Race against the machine: How the digital
revolution is accelerating innovation, driving productivity, and irreversibly transform-
ing employment and the economy. Digital Frontier Press.
9. Li, X., Tao, F., & Zhang, L. (2018). A Survey of Artificial Intelligence for Industrial Big
Data Analytics. Journal of Manufacturing Systems, 48, 144–156.
10. Chen, Y., & Xie, J. (2018). Industry 4.0 and the Industrial Internet of Things: A Survey.
International Journal of Academic Research in Business and Social Sciences, 8(11),
2222–6990.
11. Chen, Y., Li, X., & Zhang, X. (2018). A Survey on Artificial Intelligence for Decision-
Making in Industry 4.0. Journal of Intelligent Manufacturing, 29(2), 411–422.
12. Resnik, D. B. (2011). What Is Ethics in Research and Why Is It Important. National
Institute of Environmental Health Sciences, 1(10), 49–70.
13. Wang, L., & Wang, X. (2018). A Survey on Industrial Artificial Intelligence for Industry
4.0. Journal of Manufacturing Systems, 48, 144–156.
14. Rojek, I., Jasiulewicz-Kaczmarek, M., Piechowski, M., & Mikołajewski, D. (2023). An
Artificial Intelligence Approach for Improving Maintenance to Supervise Machine
Failures and Support Their Repair. Applied Sciences, 13, 4971. 10.3390/app13084971
15. Libby, K. (2019). This Bill Hader Deepfake video is amazing. It’s also terrifying for our
future. Popular Mechanics.
16. Siau, K., & Wang, W. (2020). Artificial Intelligence (AI) Ethics: Ethics of AI and
Ethical AI. Journal of Database Management, 31, 74–87. 10.4018/JDM.2020040105
17. Almetwally, A. A., Bin-Jumah, M., & Allam, A. A. (2020). Ambient Air Pollution and
Its Influence on Human Health and Welfare: An Overview. Environmental Science and
Pollution Research, 27, 24815–24830. 10.1007/s11356-020-09042-2
18. Ghorani-Azam, A., Riahi-Zanjani, B., & Balali-Mood, M. (2016). Effects of Air
Pollution on Human Health and Practical Measures for Prevention in Iran. Journal of
Research in Medical Sciences, 21, 189646. 10.4103/1735-1995.189646
19. Wuest, T., Weimer, D., Irgens, C., & Klaus, D. T. (2016). Machine Learning in
Manufacturing: Advantages, Challenges, and Applications. Production & Manufacturing
Research, 4(1), 23–45.
20. Wang, W., & Siau, K. (2019). Artificial Intelligence, Machine Learning, Automation,
Robotics, Future of Work and Future of Humanity: A Review and Research Agenda.
Journal of Database Management, 30(1), 61–79.
4 Acne Detection Using Convolutional Neural Networks and Image-Processing Technique
Premanand Ghadekar, Aniket Joshi,
Atharv Vanjari, Mohammad Raza,
Shubhankar Gupta, and Anagha Gajaralwar

4.1 INTRODUCTION
Acne is a common skin ailment that affects many people, especially during
adolescence. Identifying and analysing acne lesions can consume a great deal of a
dermatologist's time, which has sparked growing interest in creating automated
approaches for acne identification. Convolutional neural networks (CNNs) have become
an effective method for image processing in the medical field. By accurately
recognizing and categorizing acne lesions in pictures, CNNs can be used for acne
detection in this context. Inspired by the structure of the human visual system,
CNNs are a type of deep learning algorithm. They extract information from the input
pictures through convolutional layers, which are made up of several layers of
coupled neurons. These layers are followed by fully connected layers that perform
classification using the retrieved features. The following are generally the
processes involved in acne detection using CNNs:

Data collection: Images with acne lesions are gathered into a dataset. These
photos might come from different sources or be gleaned via dermatological
research.
Data pre-processing: Before being entered into the CNN, the gathered pic-
tures undergo pre-processing to make sure they are acceptable. Resizing
the photos, normalizing the pixel values and enhancing the dataset by per-
forming transformations like rotation, scaling or flipping are examples of
pre-processing operations.
Training the model: Using the pre-processed dataset as training data, a CNN
model is built. The pictures are fed through the network, the loss (error) is
calculated, and the network's weights are adjusted using optimization methods
like gradient descent, with the process repeated over many iterations. This
phase teaches the CNN the characteristics and patterns that separate acne
lesions from healthy skin.
Model evaluation: The trained CNN model is evaluated using a different dataset
of labelled pictures not utilized during training. The effectiveness of the
model for detecting acne is calculated using metrics such as accuracy,
F1 score, recall and precision.
Model prediction and deployment: After the CNN model has been successfully
trained and evaluated, it can be used to predict the presence of acne
lesions in fresh, unseen photos. The model takes an input picture, runs it
through the network and outputs a probability or classification result that
indicates whether or not acne is present; a minimal end-to-end sketch of
this pipeline is shown after this list.
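The sketch below walks through the steps above in Keras. The folder layout, image size and architecture are illustrative assumptions rather than the exact configuration used in this chapter, although the 50% dropout mirrors the setting reported later.

```python
import tensorflow as tf

IMG = (128, 128)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG, batch_size=32)   # assumed subfolders: acne/, non_acne/
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/validation", image_size=IMG, batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),              # pre-processing: normalize pixels
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # convolutional feature extraction
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),                      # regularization against overfitting
    tf.keras.layers.Dense(1, activation="sigmoid"),    # P(acne)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10) # training via gradient descent

prob = model.predict(tf.zeros((1, *IMG, 3)))[0, 0]     # prediction on a new image tensor
print(f"P(acne) = {prob:.2f}")
```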

Dermatologists may be able to save time and labour by utilizing CNNs for acne
detection rather than manually examining large numbers of photos. Healthcare
practitioners can benefit from automated methods for detecting acne since they can
diagnose and arrange treatments more quickly.
It is important to note that creating an accurate CNN-based acne identification
system calls for rigorous model building, model optimization and a varied,
well-annotated dataset. Ongoing updates and enhancements may also be required for
the model to work more accurately and respond to changing trends in acne
presentation.
Acne, medically known as acne vulgaris, is a skin disorder that occurs when the
pores of the skin become blocked and filled with dead skin cells and oil. It is most
commonly seen in teenagers but also affects some older people. Acne develops when
sebum, an oily substance that lubricates human skin and hair, together with dead
skin cells, plugs the hair follicles. Growth of bacteria can multiply the number of
pimples on the face and can also increase inflammation and infection. Acne results
in inflammation and produces larger, dark red pimples. Medically, the examination
and assessment of acne is carried out by a dermatologist and requires a clinical
environment. The doctor's prescriptions must then be followed by the patient over a
long period of time to heal the spots, which requires considerable expense. Demand
in this area of medicine is growing tremendously, as the problem is very common.
Acne patients who need day-to-day or long-term treatment have to follow up with the
dermatologist and visit the clinic frequently just for a basic check-up and further
prescriptions. According to a survey, the average time worldwide for a patient to
wait for an appointment with a dermatologist is 32 days [1]. This causes great
frustration for acne patients, as it affects their daily routine, diet and schedule.
To fill this gap, a model has been created that detects acne from images. The basic
aims of this model are to (1) detect acne through images and estimate the amount of
acne using worldwide data; (2) accurately assess facial acne and other dermal
issues; (3) generate a predicted image of the likely outcome if plastic surgery or
medical treatment is performed in future; (4) give a possible range of the cost
required for the treatment; (5) classify between acne and non-acne images; and
(6) suggest appropriate medical treatments according to the type of dermal issue.

4.2 LITERATURE REVIEW


Acne is a common skin condition that occurs when hair follicles become blocked
under the skin. Acne is an inflammatory blockage of skin that bears sebaceous (oil)
glands connected to hair follicles containing fine hairs. In healthy skin, the
sebaceous glands produce sebum, which travels along the hair follicle to reach the
surface of the skin [1].
A dermatologist may recommend a treatment that combines one or more of the
following: oral antibiotics, topical antibiotics, topical retinoids, benzoyl
peroxide, etc. Depending on whether the medication is generic or brand name, the
dosage and how many vials are purchased, the price of antibiotics ranges between
$10 and $73. Prices are based on averages found online and may differ by pharmacy.
A course of five to seven days of antibiotics is suggested for men and women. This
is supported by a research paper showing no significant difference in outcomes
between three and seven days of antibiotics compared with courses of seven days or
longer. Benzoyl peroxide keeps the skin surface clean and limits the number of
germs on the top of the skin. It is usually one of the first ways of treating
mild-to-moderate acne. It is available as a gel or face wash containing 5% benzoyl
peroxide; Benzac AC 2.5% Gel 20 g is a topical treatment available in India for
Rs. 83. Triple antibiotics (bacitracin, neomycin and polymyxin B for topical use)
belong to a group of topical antibiotics commonly used for bacterial skin
infections [2].
Topical retinoids help to restore normal desquamation by inhibiting keratinocyte
hyperproliferation and promoting differentiation. Retinoids are of current interest
as they engage with several important inflammatory pathways that drive acne. The
general price of tretinoin 0.025% cream is $105.60 for 1.20 g. With a SingleCare
coupon, the price of tretinoin is $11.64 for a 1.20 g tube of 0.025% cream;
SingleCare's free coupons can be redeemed at participating pharmacies. Deep learning
has established itself as a truly potent tool over the previous few decades because
of its capability to handle huge amounts of data. Multi-layer networks perform far
better than standard methods, particularly for pattern recognition. CNNs are among
the most widely used deep neural networks. A standard neural network relies on
matrix multiplications; however, that is not the case with a ConvNet, which makes
use of a special technique known as convolution [3].
Before understanding the working of CNNs, let us cover the fundamentals, including
what an image is and how it is represented. An RGB image is nothing but a matrix of
pixel values with three planes, while a greyscale image is similar but has a single
plane. Taking a filter/kernel (a 3×3 matrix) and applying it to a sample image
yields the convolved feature, which is passed directly to the subsequent layer [4].
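The sliding 3×3 kernel can be made concrete with a short NumPy sketch; the toy image and edge kernel below are illustrative only.

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Valid' 2-D convolution: slide the kernel over every position it fits."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise multiply the patch by the kernel and sum the result.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 greyscale image
edge_kernel = np.array([[-1, 0, 1],                # simple vertical-edge filter
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)
print(convolve2d(image, edge_kernel))              # 3x3 convolved feature map
```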
Haar cascade is an algorithm that can detect objects in images, regardless of their
scale and location in the picture. The algorithm is not overly complicated and can
run in real time. A Haar-cascade detector can be trained to work on diverse objects
such as cars, bikes, buildings and fruits. Haar cascade uses sliding windows,
attempts to compute the features in each window and classifies whether or not it
contains an object. Haar cascade works as a classifier: it distinguishes positive
data points, which contain the object to be detected, from negative data points,
which do not contain the object. The algorithm can be described in four steps: Haar
feature calculation, integral image creation, AdaBoost training and cascading
classifiers. The algorithm requires numerous positive photographs of faces and
negative photographs of non-facial things to train the classifier, much as other
objects are trained in similar models [5].
For a face model, the first goal is to compute the Haar feature landmarks. A Haar
feature is essentially a calculation performed on adjacent rectangular regions
within a detection window. The computation involves summing the pixel intensities
of each region and then calculating the difference between the sums; Figure 4.1
shows some examples of Haar features. Without going too deep into the background
calculations, integral images significantly speed up the computation of these Haar
features: instead of doing the arithmetic for every pixel, sub-rectangles are
created with array references to each, and these are used to compute the feature
sums. AdaBoost then selects the important features and trains a classifier to use
them. A set of "weak classifiers" is combined to design a "strong classifier" that
the algorithm can use to detect objects. A weak learner is created by passing a
window over the input picture and computing the Haar feature for each subregion of
the photograph. This value is compared to a threshold that separates non-objects
from objects [6].
Because individual Haar features are "weak classifiers", a large number of them is
required to achieve accuracy, and they are organized into a cascade. A cascade
classifier consists of several stages, each of which is a group of weak learners.
The weak learners are trained using boosting, so that a highly accurate classifier
is obtained from the collective predictions of the weak learners. Based on this
prediction, the classifier either decides that it has found the object (positive)
or moves directly to the next region (negative) [7].
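The integral-image speed-up can be illustrated in a few lines of NumPy: after one cumulative-sum pass, the pixel sum of any rectangle, and hence any Haar feature, costs only four array lookups. The 6×6 image below is random toy data.

```python
import numpy as np

img = np.random.default_rng(0).integers(0, 256, (6, 6))

# Integral image, padded with a zero row/column to simplify the lookups:
# ii[i, j] holds the sum of img[:i, :j].
ii = np.zeros((7, 7), dtype=np.int64)
ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(top, left, bottom, right):
    """Sum of img[top:bottom, left:right] in O(1) via four corner lookups."""
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

# A two-rectangle (edge) Haar feature: left half minus right half of the window.
left_half = rect_sum(0, 0, 6, 3)
right_half = rect_sum(0, 3, 6, 6)
print(left_half - right_half, "==", int(img[:, :3].sum() - img[:, 3:].sum()))
```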
In this paper, Tingting Zhao, Hang Zhang and Jacob Spoelstra worked with Nestlé
Skin Health SHIELD. Their main aim was to develop a deep-learning model that helps
detect acne from self-taken images. To deal with the sensitivity of the images, they
used a CNN model for the training and testing of the images, built on a pre-trained
ResNet 152 network, and their system outperformed a human dermatologist on test
images [8].
OpenFace, a general-purpose face recognition library with mobile applications, is
described in the next paper. Brandon Amos and his team researched the combined
field of IoT and deep learning. They introduced the OpenFace library to fill the gap
between public and private face recognition systems. The paper is meant for
non-experts in the field of face and pattern recognition; moreover, it helps readers
become familiar with the various deep-learning techniques that have been used [9].
In this paper, the authors worked on the analysis of retinal diseases. They used
CNNs and MatConvNet for automated recognition of various retinal diseases from
fundus photos. They built a dataset with ten categories: one for the normal retina
and nine for different retinal diseases. The outcomes were based on the VGG-19
architecture. They obtained an overall accuracy of 52% and, with the
multi-categorical classifier, a precision of 72.8%; this can be improved further
with the help of better algorithms [10].
In this paper, an idea of the cost of treatment for removing acne from the face is
given. In the Chinese market, a treatment called chemical peel (CP) is widely
accepted, as it is used to remove hyperpigmentation and scarring. The factors
affecting willingness to pay were identified using general linear models, and an
approximate value was derived from them. The authors obtained a response rate of
almost 96% among 476 patients and, in conclusion, reported positive results after
three sessions of CP treatment for US$383.4 [11].
In this paper, the authors developed a system that detects acne on the face; it
detects the shapes and the amount of acne present. They used the image-processing
module of the MATLAB program. First, the system converts RGB colour images to
greyscale, then the absolute maximum value is calculated, and the greyscale images
are normalized by reducing the data to its simplest form. Brightness extraction is
performed; after that, image subtraction is applied to obtain the region of
interest. Then unwanted features, such as spots and noise, are eliminated. The
sensitivity and precision figures calculated for this system are typically high,
while the accuracy still requires improvement [12].
They provide an in-depth description of acne vulgaris (AV) in this work. AV is a
disorder of the pilosebaceous unit that causes both inflammatory and
noninflammatory lesions on the skin, including papules, pustules and nodules, as
well as scarring of varying degrees. AV is a relatively common condition that
mostly affects teens, with a lifetime incidence rate of roughly 85%. Acne frequency
among women aged 20–29 was 50.9%, whereas it was 26.3% among those aged 40–49,
indicating that AV can persist into adulthood. Women account for two-thirds of all
dermatological office visits for acne, with women over the age of 25 accounting for
one-third of these visits [13].
In this paper, Viola and Jones introduced a method to accurately and rapidly detect
faces within an image. This method can be adapted to accurately detect facial
features. However, the area of the image being analysed for a facial feature needs
to be localized to the region with the highest probability of containing the
feature. By regionalizing the detection area, false positives are eliminated, and
the speed of detection is increased owing to the reduction of the area examined [14].
Skin acne is a persistent inflammatory condition caused by the pilosebaceous gland
producing a greater amount of sebum than normal as a result of androgenic hormone
stimulation. Uncertainty still exists regarding the causes of acne and how therapy
influences the course of the condition. The most effective treatment is oral
isotretinoin, administered in the early stages of severe disease [15].
In this paper, the authors describe the problems related to acne in detail;
according to their research, an estimated 9.4% of the global population is impacted
by acne, making it the eighth most common disease in the world. Adolescents post
puberty are the most commonly affected group, with teenage boys being particularly
prone to more severe forms of acne. This review provides an updated understanding
of the prevalence of acne worldwide, where general and institutional studies show a
consistent prevalence globally, except in certain populations that are discussed.
However, there is a need for a standard, credible assessment scale for acne, as the
studies use a range of disparate measures. The review also delves into special
populations, such as those who do not have acne, and the effect of potential
determinants of acne on disease epidemiology [16].
In this paper, the authors mention that precise grading of the severity of skin con-
ditions is essential for effective patient treatment, and this is particularly true for AV,
the most prevalent skin disease in adolescence. Medical professionals typically use
a combination of lesion counting and experience-based global estimation to grade
acne. However, this can be challenging due to the similarity in appearance between
different severity levels of acne. To address this issue, this study explores the use of
label distribution learning (LDL) to accurately grade and count acne. The authors
propose a framework that takes into account the relationship between the number
of lesions and the severity of acne and optimize it using multi-task learning loss. A
new dataset, ACNE04, has been created and made publicly available, along with the
code, to evaluate the proposed framework [17].
A growing number of people are turning to computer-assisted diagnosis since it is
efficient as well as effective. Although deep learning has made great strides in the
identification of acne, a number of issues still need to be resolved, including
colour shifts brought on by erratic illumination, size variations and densely
packed lesions. The authors propose an acne detection network that combines
composite feature refinement, dynamic context enhancement and a Mask-Aware
Multi-Attention mechanism to address these issues. To enhance the feature
representation and lessen the negative effects of unbalanced illumination, the
composite feature refinement combines high-level semantic information with minute
details. To adjust to changes in size and improve contextual information, the
dynamic context enhancement makes use of multi-scale features. By suppressing
insignificant regions and emphasizing probable acne spots, the Mask-Aware
Multi-Attention mechanism improves the detection of densely packed and subtle
manifestations of acne. On the ACNE04 acne image dataset and the PASCAL VOC 2007
natural image dataset, their technique achieves state-of-the-art results, and on
PASCAL VOC 2007, it performs comparably to earlier state-of-the-art methods [18, 19].

4.3 APPROACH
In this chapter, a practical approach to acne detection from images using a
CNN-based transfer-learning regression model is discussed. The performance of this
model is not inferior to that of a specially trained dermatologist, and it also
helps to better predict the total cost required for the treatment of such acne,
including plastic surgery if it is performed [8]. The framework suggested here
combines the facial landmark model with the One Eye OpenCV system to retrieve skin
zones from multiple facial locations and eliminate unwanted noise. Addressing the
limited availability of labelled training data requires image rolling as an
innovative approach to data augmentation. The result is that the transfer-learning
regression model with corresponding data augmentation is one of the most effective
ways to train a model, regardless of the size of the training set.

4.3.1 Haar Cascade
Haar cascade was used in the pre-processing of the image data to crop faces out of
the images. It was observed that some of the images failed face detection; hence,
the model was trained with both the faces and the background as noise. It is
recommended to use another deep-learning model for face detection in the
pre-processing stage to improve the quality of the model (Figure 4.1). As for the
dataset, there is an imbalance in the skin tones represented, as darker skin tones
appear less often.
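A minimal OpenCV sketch of this face-cropping step is shown below, using the frontal-face cascade bundled with opencv-python. The file names are placeholders for images in the dataset.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("sample_face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) == 0:
    # This is exactly the failure mode noted above: the uncropped image
    # would enter training with the background as noise.
    print("No face detected; image kept with background noise.")
for i, (x, y, w, h) in enumerate(faces):
    cv2.imwrite(f"face_{i}.jpg", img[y:y + h, x:x + w])  # crop out the face
```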
A study from the University of California San Francisco suggests that dark skin has
a stronger skin barrier, which may explain why fewer acne photos of dark skin are
found. This is a limitation of the model. Alternatively, a proposed remedy is to add
synthetic data for dark skin in order to make up for the imbalanced dataset. The
model helps the company to identify consumers with acne-prone skin. It can be
deployed on a website to classify the consumer's skin and recommend a more
appropriate skincare product online without a salesperson. Moving forward, the
shopping experience can be improved to provide live predictions via video instead
of images. The electronic device can be placed in stores and can be helpful to
consumers, especially when the salesperson is away during store operations
(Figure 4.2).

FIGURE 4.1 Working of Haar Cascade. (a) Edge features. (b) Line features. (c) Four-
rectangle features [20].
FIGURE 4.2 Pre-processing on image.


4.4 WORKFLOW
Here the dataset consists of 2160 images in total, of which 1010 images contain
acne, whereas 1150 do not. The dataset is then divided in a 60-20-20 ratio for
training, validation and testing, respectively (see Table 4.1).
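A sketch of such a 60-20-20 split with scikit-learn follows; the file names and label ordering are placeholders, and the stratified split yields counts close to those in Table 4.1.

```python
from sklearn.model_selection import train_test_split

paths = [f"img_{i}.jpg" for i in range(2160)]          # stand-in file list
labels = [1] * 1010 + [0] * 1150                       # 1 = acne, 0 = non-acne

# First carve out 60% for training, then split the remainder evenly (20%/20%).
train_p, rest_p, train_y, rest_y = train_test_split(
    paths, labels, test_size=0.4, stratify=labels, random_state=42)
val_p, test_p, val_y, test_y = train_test_split(
    rest_p, rest_y, test_size=0.5, stratify=rest_y, random_state=42)

print(len(train_p), len(val_p), len(test_p))           # 1296 432 432
```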
The general flow of the project is as follows:

1. Data collection and pre-processing
2. Analysing preliminary information
3. Modelling and evaluation
4. Conclusion and recommendation
5. Limitation and future development

As Figure 4.3 shows, raw images will be taken as an input which will be pre-
processed, and after that, a series of deep learning algorithms will come into action

TABLE 4.1
Distribution of Images in Dataset
Folder Variable Type No. of Images
Train Non-acne 690
Train Acne 610
Validation Non-acne 230
Validation Acne 200
Test Non-acne 230
Test Acne 200

FIGURE 4.3 Proposed flow diagram of acne detection.



like CNN and Haar cascade, through which the unnecessary parts of the images will
be removed; the images can then go directly for training and testing, and at the
end the final evaluation will be done.

4.5 RESULTS
In the model, the Haar cascade classifier is used. It is an algorithm used in the
field of object detection, regardless of the size of the object (see Table 4.2).
The low complexity of the algorithm makes it friendly for real-time use. The
algorithm can be trained to recognize a wide range of items, including automobiles,
faces, animals and humans. Here it is used for the removal of the unnecessary parts
of the image, which is essentially the face cropping performed in this work
(Figures 4.4 and 4.5).
Once the training and testing of the dataset of 2160 images is done, the model
first obtains the classification of acne (Figure 4.6) and non-acne images
(Figure 4.7); it then pre-processes the data and, to remove the unnecessary
background, crops the images using the Haar cascade algorithm.
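Per-class figures such as those in Table 4.2 can be generated with scikit-learn's classification report; the predictions below are hypothetical placeholders chosen only to approximate the reported values on the 430 test images.

```python
from sklearn.metrics import classification_report

y_true = [0] * 230 + [1] * 200                        # 0 = non-acne, 1 = acne
y_pred = [0] * 223 + [1] * 7 + [0] * 12 + [1] * 188   # hypothetical model outputs

# Prints per-class precision, recall, F1 and support, plus macro/weighted averages.
print(classification_report(y_true, y_pred, target_names=["Non-acne", "Acne"]))
```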

TABLE 4.2
Accuracy Table
Precision Recall F1 Score Support
Non-acne 0.95 0.97 0.96 230
Acne 0.97 0.94 0.95 200
Accuracy – – 0.96 430
Macro avg 0.96 0.96 0.96 430
Weighted avg 0.96 0.96 0.96 430

FIGURE 4.4 (a) Input dataset image (I). (b) Input dataset image (II).
FIGURE 4.5 (a) Output results (I). (b) Output results (II).

4.6 CONCLUSION
The model has an accuracy of 97% on validation data and 98% on test data, while the
F1 scores for both validation and test data are 98%. Compared to previous models,
these are the most precise results acquired. The model has a train score of 99% for
both accuracy and F1 score. In order to prevent overfitting, a dropout ratio of 50%
was introduced in the CNN model.
FIGURE 4.6 Acne images in dataset.

FIGURE 4.7 Non-Acne images in dataset.

Haar cascade was used in the pre-processing of the image data to crop faces out of
the images. It was observed that some of the images failed face detection; hence,
the model was trained with both the faces and the background as noise. It is
recommended to use another deep-learning model for face detection in the
pre-processing stage to improve the quality of the model. The model helps the
company to identify consumers with acne-prone skin (Table 4.3). This can be
deployed on the website to

TABLE 4.3
Accuracy per Dataset

Dataset True Positive False Positive True Negative False Negative
Train 686 4 606 4
Test 225 5 195 6
Validation 228 2 192 8
classify the consumer's skin and better recommend a more appropriate skincare
product online without a salesperson.
In conclusion, the model has a high accuracy and F1 score; however, it does not
work well with darker skin tones, which can be further improved.

4.7 FUTURE SCOPE


As a further extension of this project, we aim to develop a bill prediction system
that will estimate the total cost required to remove acne by means of plastic
surgery. This will require some changes to the previous dataset so that the model
is able to distinguish between naturally non-acne faces and post-plastic-surgery
non-acne faces.

REFERENCES
1. Joshi, A., Khosravy, M., and Gupta, N. (eds) Machine Learning for Predictive
Analysis. Lecture Notes in Networks and Systems, vol 141. Springer, Singapore. doi:
10.1007/978-981-15-7106-0_52.
2. Singh, V., Shokeen, V., and Singh, B. (2013). Face detection by Haar Cascade clas-
sifier with simple and complex backgrounds images using openCV implementation.
International Journal Advanced Technology Engineering Science 1, no. 12:33–38.
3. Padilla, R., Filho, C., and Costa, M. (2012 April). Evaluation of Haar Cascade
Classifiers for Face Detection. Conference: ICDIP: International Conference on Digital
Image Processing At: Venice, Italy.
4. Suva, M. (2015). A brief review on acne vulgaris: Pathogenesis, diagnosis and treat-
ment. Research & Reviews Journal of Pharmacology 4:1–12.
5. Chovatiya, R. (2021). Acne treatment. JAMA 326, no. 20:2087. doi: 10.1001/jama.
2021.16599
6. Pathak, A. R., Pandey, M., and Rautaray, S. (2018). Application of deep learning
for object detection. Procedia Computer Science 132, no. 6:1706–1717. doi: 10.1016/
j.procs.2018.05.144
7. Chandan, G., Jain, A., and Mohana, M. (2018). Real Time Object Detection and Tracking
Using Deep Learning and OpenCV. 1305–1308. doi: 10.1109/ICIRCA.2018.8597266.
8. Zhao, T., Zhang, H., and Spoelstra, J. (2019). A computer vision application for assess-
ing facial acne severity from selfie images. arXiv preprint arXiv:1907.07901
9. Amos, B., Ludwiczuk, B., and Satyanarayanan, M. (2016). Openface: A general-
purpose face recognition library with mobile applications. CMU School of Computer
Science.
10. Choi, J. Y., Yoo, T. K., Seo, J. G., Kwak, J., Um, T. T., and Rim, T. H. (2017). Multi-
categorical deep learning neural network to classify retinal images: A pilot study
employing small database. PLoS One 12, no. 11:e0187336.
11. Xiao, Y., Chen, L., Jing, D., Deng, Y., Chen, X., Su, J., and Shen, M. (2019 Feb 22).
Willingness-to-pay and benefit-cost analysis of chemical peels for acne treatment
in China. Patient Prefer Adherence 13:363–370. doi: 10.2147/PPA.S194615. PMID:
30863024; PMCID: PMC6391120.
12. Chantharaphaichi, T., Uyyanonvara, B., Sinthanayothin, C., and Nishihara, A. (2015).
Automatic Acne Detection for Medical Treatment, 2015 6th International Conference
of Information and Communication Technology for Embedded Systems (IC-ICTES).
IEEE, 1–6.
13. Tan, A. U., Schlosser, B. J., and Paller, A. S. (2017 Dec 23). A review of diagnosis
and treatment of acne in adult female patients. The International Journal of Women's
Dermatology 4, no. 2:56–71. doi: 10.1016/j.ijwd.2017.10.006. PMID: 29872679;
PMCID: PMC5986265.
14. Wilson, P. I. and Fernandez, J. (2006). Facial feature detection using Haar classifiers.
Journal of Computing Sciences in Colleges 21, no. 4:127–133.
15. Williams, H. C., Dellavalle, R. P., and Garner, S. (2012). Acne vulgaris. The Lancet
379, no. 9813:361–372.
16. Tan, J. K. and Bhate, K. (2015 Jul). A global perspective on the epidemiology of acne.
British Journal of Dermatology 172, no. Suppl 1:3–12. doi: 10.1111/bjd.13462. PMID:
25597339.
17. Wu, X. et al. (2019). Joint Acne Image Grading and Counting via Label Distribution
Learning, 2019 IEEE/CVF International Conference on Computer Vision (ICCV),
Seoul, Korea (South). pp. 10641–10650, doi: 10.1109/ICCV.2019.01074
18. Min, K., Lee, G.-H., and Lee, S.-W. (2021). ACNet: Mask-Aware Attention with
Dynamic Context Enhancement for Robust Acne Detection. 2021 IEEE International
Conference on Systems, Man, and Cybernetics (SMC).
19. Ghadekar, P., Bongulwar, A., Jadhav, A., Ahire, R., Dumbre, A., and Ali, S. (2023).
Ensemble Approach to Solve Multiple Skin Disease Classification Using Deep
Learning, International IEEE Conference on Device Intelligence, Computing and
Communication Technologies, March 17–18, 2023, Dehradun India.
20. Haar Cascade Classifier Image. https://www.google.com/search?q=haar+cascade+
classifier+image&source=lnms&tbm=isch&sa=X&v.
5 Key Driving Technologies
for Industry 4.0
Anesh D Sundar ArchVictor and
C. Emilin Shyni

5.1 INTRODUCTION
The Fourth Industrial Revolution, commonly referred to as Industry 4.0, is a revo-
lutionary period in manufacturing and production where cutting-edge technologies
converge to produce intelligent, connected, and highly efficient systems. Industry
4.0’s core technologies, known as Key Driving Technologies, give organizations the
tools they need to increase productivity, streamline operations, achieve previously
unheard-of levels of automation, and make data-driven decisions.
The driving force behind Industry 4.0, as shown in Figure 5.1, is the seamless
integration of physical and digital systems, blurring the lines between the physical
and virtual worlds. A wide array of cutting-edge technologies enables this
convergence, allowing industries to usher in a new era of manufacturing and
production.
The Internet of Things (IoT) forms the backbone of Industry 4.0, interconnecting
devices and machines to collect and exchange data. It enables smart manufacturing
processes, predictive maintenance, and real-time monitoring, leading to increased
efficiency and reduced downtime. Recent research in the Journal of Manufacturing
Systems by Li et al. [1] discusses the application of IoT in industrial environments,
highlighting its role in enabling predictive maintenance, real-time monitoring, and
process optimization. The massive amounts of data generated by IoT and other
digital sources must be harnessed with advanced analytics to yield meaningful
insights. In the book “Big Data Analytics for Smart Manufacturing,” published
in 2021, Goh et al. [2] explore the application of big data analytics in Industry
4.0, covering topics such as data management, machine learning (ML), and predic-
tive maintenance. Advanced data analysis tools and algorithms assist businesses
in extracting useful insights, trends, and correlations from data, allowing them to
make data-driven decisions and optimize operations for improved performance and
cost-effectiveness. Artificial intelligence (AI) and ML are critical components of
Industry 4.0 because they enable machines and systems to learn from data and
improve their performance over time. AI-powered systems can automate complex
tasks, support predictive maintenance, optimize production schedules, and enhance
quality control, ultimately driving higher productivity and resource efficiency.

FIGURE 5.1 Industry revolution.

AI and ML also power autonomous decision-making and
process optimization. In the Journal of Manufacturing Systems, Rauschecker et al. [3]
demonstrate how AI-driven predictive maintenance can enhance equipment reli-
ability and reduce operational costs in manufacturing plants. Cyber-physical
systems (CPS) refer to the integration of physical elements with computational
and communication systems. This seamless fusion enables real-time interactions
between the physical and digital realms, creating agile and adaptive manufactur-
ing environments. CPS enables autonomous systems, collaborative robotics, and
flexible production processes that respond dynamically to changing demands.
In the book “Cyber-Physical Systems for Industry 4.0,” published in
2022, Azevedo et al. [4] explore the role of CPS in smart manufacturing, discussing
key concepts, challenges, and practical implementations.
Additive manufacturing is revolutionizing traditional production processes, allowing
for greater design flexibility and customization. A recent study in the journal Additive
Manufacturing by Gao et al. [5] presents advancements in 3D printing technology
and its integration into Industry 4.0. This technology allows the creation of intricate,
customized products by adding material layer by layer. It promotes rapid prototyping,
reduces material waste, and offers greater design freedom, allowing businesses to
manufacture products with higher complexity and precision. Augmented reality (AR)
and virtual reality (VR) technologies enhance human-machine interactions and
enable immersive training and visualization. In their recent book “AR and VR in Industry 4.0,” pub-
lished in 2023, Ivezic et al. [6] explore the uses of AR and VR in manufacturing,
maintenance, and training processes. AR can provide real-time information to work-
ers on the factory floor, while VR can simulate complex scenarios for training pur-
poses. These technologies improve workforce efficiency, reduce errors, and facilitate
maintenance and troubleshooting tasks. Cloud computing provides scalable storage
and processing capabilities, making it easier to manage the huge volumes of data
generated by Industry 4.0 technologies. Together, cloud and edge computing provide
the computational power required for data processing and storage in Industry 4.0.
A recent article in the journal Robotics and Computer-Integrated Manufacturing by
Guo et al. [7] examines the integration of cloud and edge computing in smart manu-
facturing. Furthermore, edge computing brings computing capacity closer to the data
source, allowing for real-time processing and lower latency, which is crucial for
applications that require quick responses.
Industry 4.0 relies on a diverse set of key driving technologies, as shown
in Figure 5.2, that work in synergy to transform the manufacturing landscape.
The convergence of IoT, big data analytics, AI, CPS, additive manufacturing,
AR/VR, and cloud/edge computing enables businesses to achieve new levels
of productivity, efficiency, and competitiveness, driving the Fourth Industrial
Revolution forward. As industries continue to embrace and integrate these tech-
nologies, they position themselves to thrive in a digitally interconnected and
data-centric future.

FIGURE 5.2 Industry 4.0 technologies.


5.2 COMPONENTS AND TYPES OF KEY ENABLING TECHNOLOGIES
5.2.1 Internet of Things
The IoT is a ground-breaking idea that refers to the linking of everyday physical
things and devices to the Internet, allowing them to gather, exchange, and act on
data. These objects, dubbed “smart” devices, are outfitted with sensors, software,
and communication capabilities that enable them to interact with the physical world
and communicate with one another over the Internet. The goal of IoT is to build a
network of interconnected devices that collaborate to improve efficiency, automation,
and convenience in all parts of our lives.

5.2.1.1 IoT Architecture


Figure 5.3 depicts the IoT architecture’s three primary layers: the perception layer,
the network layer, and the application layer.
Perception layer: Sensors, actuators, and other devices that collect data from the
physical world are included in this layer. Sensors detect environmental changes (such
as temperature, humidity, and light) and turn them into digital signals.
Network layer: The data from the perception layer is transmitted through various
communication protocols (Wi-Fi, Bluetooth, Zigbee, etc.) to the cloud or other con-
nected devices for processing and storage.
Application layer: At the top layer, data is analyzed, processed, and acted upon to
deliver meaningful insights and actions. Applications and services use the processed
data to provide user-centric services and automated responses.
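
To make the layered flow concrete, here is a minimal Python sketch in which a
simulated perception-layer sensor produces a reading, a stand-in network layer
serializes it as it would be sent over a protocol such as MQTT or Wi-Fi, and the
application layer parses and acts on it. The sensor values and the in-memory
hand-off are illustrative stand-ins, not a real device or broker.

import json
import random
import time

# Perception layer: a simulated temperature sensor digitizes a reading
# (a real deployment would sample actual hardware).
def read_sensor():
    return {"sensor_id": "temp-01",
            "celsius": round(random.uniform(18, 32), 2),
            "timestamp": time.time()}

# Network layer: stand-in for the transport; the payload is serialized
# exactly as it would be sent over the wire.
def transmit(reading):
    return json.dumps(reading)

# Application layer: parse the payload and act on it, e.g., raise an alert.
def handle(payload):
    reading = json.loads(payload)
    status = "ALERT" if reading["celsius"] > 30 else "OK"
    print(status, reading["sensor_id"], reading["celsius"])

for _ in range(3):
    handle(transmit(read_sensor()))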

FIGURE 5.3 IoT architecture.



5.2.1.2 How Is IoT Related to Industry 4.0?


IoT is critical to realizing the goal of Industry 4.0.
Here’s how IoT and Industry 4.0 are related:

Sensors and connected devices: Manufacturing equipment, machinery, and
tools are outfitted with IoT-enabled sensors and gadgets in Industry 4.0.
These sensors collect massive amounts of data in real time, monitoring
characteristics like temperature, pressure, humidity, vibration, and more.
These devices are internet-connected, allowing for easy data transmission
and communication.
Data collection and analysis: Data collected by IoT sensors is sent to cen-
tralized data repositories or cloud platforms. Advanced analytics and ML
algorithms are applied to process and analyze this data. Insights gained
from data analysis help manufacturers identify patterns, anomalies, and
opportunities for optimization.
Real-time monitoring and control: The IoT enables manufacturers to moni-
tor and control processes in real time. This real-time visibility allows for
faster problem resolution, less downtime, and improved overall operational
efficiency. For example, if a machine’s sensors detect an abnormal vibration
pattern, the system can automatically trigger a maintenance request before
a breakdown occurs.
Predictive maintenance: Predictive maintenance facilitated by IoT is a critical
component of Industry 4.0. Manufacturers can forecast when machinery or
equipment may fail by studying data from sensors. This enables scheduled
maintenance before a breakdown happens, minimizing disruptions and
reducing maintenance costs (a minimal sketch follows this list).
Supply chain optimization: IoT is used to track and monitor goods as
they move through the supply chain. Sensors on shipments can provide
real-time location, temperature, and condition data. This helps stream-
line logistics, reduce delays, prevent damage, and ensure the quality of
products.
Customization and flexibility: IoT technologies facilitate greater customiza-
tion and flexibility in manufacturing. Smart factories can quickly recon-
figure production lines and adjust processes based on changing demands
or product variations. This concept, known as “batch size of one,” allows
manufacturers to produce customized products efficiently.
Human-machine interaction: IoT devices and technologies enhance
human-machine interaction. Workers can use wearable devices or hand-
held terminals connected to IoT systems to receive real-time instruc-
tions, access information, and collaborate with machines in a more
intuitive manner.
Energy efficiency and sustainability: IoT allows manufacturers to improve
energy usage and cut expenses by monitoring energy consumption in real
time. Manufacturers can also uncover possibilities to improve sustainabil-
ity and reduce environmental effects by evaluating data from sensors and
processes.
Cybersecurity and data privacy: As Industry 4.0 relies heavily on IoT devices
and data exchange, cybersecurity and data privacy are critical concerns.
Ensuring the security of IoT devices, networks, and data transmission is
essential to preventing cyber threats and protecting sensitive information.
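
As a toy illustration of the predictive-maintenance idea above, the following
Python sketch flags a vibration reading that drifts far from its recent baseline;
the 3-sigma threshold and the sample values are illustrative assumptions, not a
recommended production rule.

from collections import deque
from statistics import mean, stdev

window = deque(maxlen=50)  # rolling baseline of recent readings

def check_vibration(reading_mm_s):
    # Flag a reading more than 3 standard deviations from the baseline.
    if len(window) >= 10:
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(reading_mm_s - mu) > 3 * sigma:
            print(f"Maintenance request: {reading_mm_s:.2f} mm/s "
                  f"vs baseline {mu:.2f} +/- {sigma:.2f}")
    window.append(reading_mm_s)

# Ten normal readings, then an abnormal spike that triggers the request.
for r in [2.0, 2.1, 1.9, 2.05, 2.0, 1.95, 2.1, 2.0, 1.98, 2.02, 6.5]:
    check_vibration(r)

In practice the trigger would create a work order in a maintenance system rather
than print to the console.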

In essence, IoT is a core technology that enables Industry 4.0’s interconnection,
data-driven decision-making, and automation. It enables producers to construct more
agile, efficient, and adaptable manufacturing systems, resulting in increased pro-
ductivity, quality, and competitiveness in the continually changing global industrial
landscape.

5.2.1.3 IoT Integration in the Context of Industry 4.0


Implementing IoT in the context of Industry 4.0 has various advantages, but it also
has a number of obstacles that must be addressed. Here are some of the major prob-
lems of deploying IoT in Industry 4.0:

Security concerns: The networked nature of IoT devices expands attackers’
potential attack surface. IoT device vulnerabilities can be used to obtain
unauthorized access to sensitive data, disrupt operations, or launch cyber-
attacks. It is critical to implement strong cybersecurity measures such as
encryption, authentication, and regular software updates.
Data privacy: IoT generates vast amounts of data, including sensitive informa-
tion. Collecting, storing, and transmitting this data can raise privacy con-
cerns, especially when personal or proprietary data is involved. Striking
a balance between data collection for operational insights and respecting
individual privacy rights is a challenge.
Interoperability: Industry 4.0 often involves a mix of legacy systems and new
IoT devices. Ensuring seamless communication and data exchange among
different devices, protocols, and platforms can be complex. Lack of stan-
dardized protocols and compatibility issues can hinder interoperability.
Complexity and integration: Implementing IoT solutions requires integrating
hardware, software, networking components, and data analytics platforms.
This complexity can lead to challenges in system design, deployment, and
maintenance. Integrating IoT with existing processes and systems might
also take time and resources.
Reliability and stability: IoT devices need to function reliably in various
environmental conditions and over extended periods. System failures or
malfunctions can disrupt operations and lead to downtime. Ensuring the
reliability and stability of IoT devices is crucial for maintaining continuous
operations.
Scalability: As IoT deployments expand, managing a large number of devices
and data streams becomes more challenging. Scalability issues can affect
data processing, analytics, and overall system performance. Planning for
scalability from the outset is essential to avoid future bottlenecks.
Data management and analytics: The IoT creates vast amounts of data that
must be collected, processed, and evaluated in real time. This necessitates
strong data management and analytics capabilities. Organizations need
to invest in suitable infrastructure and technologies to derive meaningful
insights from the data.
Costs and return on investment (ROI): While the IoT has the potential to pro-
vide benefits, the initial investment in IoT infrastructure, devices, software,
and training can be substantial. Calculating and achieving a positive ROI
from these investments can be challenging, especially in the short term.
Regulatory compliance: Different industries and regions have varying regu-
lations regarding data privacy, security, and environmental standards.
Ensuring compliance with these regulations while implementing IoT solu-
tions can be complex and time-consuming.
Skill gap: Industry 4.0 and IoT necessitate the availability of a professional
workforce capable of designing, implementing, and maintaining IoT systems.
Because of the quick pace of technological change, there may be a scarcity of
professionals with the requisite competence, resulting in a skills gap.
Change management: Introducing IoT-driven changes to existing workflows
and processes can meet resistance from employees accustomed to tradi-
tional methods. Managing this change and ensuring that the workforce
embraces and adapts to new technologies is a challenge.
Environmental impact: As the number of IoT devices grows, concerns about
electronic waste (e-waste) and the environmental impact of manufacturing,
using, and disposing of these devices become more relevant. Implementing
sustainable practices in IoT design, production, and disposal is crucial.

According to statistics [8], as shown in Figure 5.4, the number of active IoT device
connections is projected to far exceed that of non-IoT devices by 2025.

FIGURE 5.4 Comparison of IoT and Non-IoT devices.



Addressing these challenges requires careful planning, collaboration, investment,
and a proactive approach to risk management. Industry stakeholders must work
together to develop solutions that ensure the IoT’s successful and responsible inte-
gration in the context of Industry 4.0.

5.2.2 Big Data Analytics


The process of evaluating and analyzing vast and complex datasets to reveal impor-
tant insights, patterns, trends, and correlations that can inform business decisions,
strategies, and actions is referred to as big data analytics. The term “big data” refers
to the large volume, diversity, and velocity of data generated by numerous sources
such as social media, sensors, mobile devices, websites, and others. To process, man-
age, and analyze these large quantities, big data analytics employ advanced technol-
ogy and algorithms.
The process of big data analytics typically involves the following steps (a toy
sketch of steps 3-5 follows the list):

1. Data collection: Data is gathered from a variety of sources, both struc-
tured (organized in databases) and unstructured (such as text, photos, and
videos).
2. Data storage: The collected data is stored in data warehouses or data
lakes, which are capable of handling large volumes of data and various
data types.
3. Data processing: To make the data usable for analysis, it undergoes data
preprocessing, which involves cleaning, filtering, and transforming the data
to ensure its quality and consistency.
4. Data analysis: To analyze the data, uncover patterns, trends, and correla-
tions, and extract relevant insights, several big data analytics approaches
are applied. This can involve statistical analysis, ML algorithms, data min-
ing, and other analytical methods.
5. Data visualization: The insights gained from data analysis are presented
through data visualization tools, such as charts, graphs, dashboards, and
reports. Visualization helps stakeholders understand complex data patterns
more effectively.
6. Decision-making: The resulting insights are utilized to create data-driven
decisions, optimize operations, improve customer experiences, and develop
corporate growth strategies.
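
A toy Python sketch of steps 3-5, assuming the pandas library is available; the
dataset and column names are invented for illustration.

import pandas as pd

# Stand-in for collected machine data (steps 1-2).
df = pd.DataFrame({
    "machine": ["A", "A", "B", "B", "B", "A"],
    "temp_c":  [71.0, None, 69.5, 88.2, 70.1, 70.6],
    "defects": [1, 0, 0, 7, 1, 0],
})

# Step 3, preprocessing: drop incomplete rows, discard implausible values.
clean = df.dropna()
clean = clean[clean["temp_c"].between(0, 150)]

# Step 4, analysis: a simple correlation and a per-machine summary.
print(clean["temp_c"].corr(clean["defects"]))      # temperature vs. defects
print(clean.groupby("machine")["defects"].mean())  # mean defects by machine

# Step 5, visualization (if matplotlib is installed):
# clean.plot.scatter(x="temp_c", y="defects")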

Big data analytics provides advantages such as improved decision-making,
increased operational efficiency, better customer understanding, increased revenue,
and a competitive advantage. However, handling big data comes with challenges
such as data security, privacy concerns, and the need for specialized skills and infra-
structure to manage and analyze large datasets effectively. According to statistics
[9], big data industry revenue was 22.6 billion US dollars in 2015, and it is predicted
to reach 90 billion US dollars by 2025, as illustrated in Figure 5.5.

FIGURE 5.5 Big data market revenue.

5.2.2.1 Significant Applications of Big Data Analytics in Industry 4.0


Big data analytics is critical to Industry 4.0 because it harnesses the potential of
massive amounts of data created by contemporary manufacturing processes and
systems. It enables businesses to get useful insights, optimize operations, improve
decision-making, and drive innovation. Here are some important big data analytics
applications in Industry 4.0:

Predictive maintenance: Big data analytics can predict maintenance needs by
analyzing real-time sensor data from machinery and equipment. Companies
can improve maintenance schedules, decrease expenses, and increase over-
all equipment effectiveness (OEE) by identifying possible issues before
they cause downtime.
Quality control and defect detection: Analyzing data from production pro-
cesses helps in the real-time detection of defects or deviations from qual-
ity standards. This ensures that products meet specifications and reduces
waste, rework, and recalls (see the sketch after this list).
Supply chain optimization: By analyzing data from numerous sources such as
suppliers, logistics, and inventories, big data analytics may optimize supply
chain operations. As a result, demand forecasting, inventory management,
and logistical efficiency improve.
Energy management: Data analytics may track and analyze energy consump-
tion patterns in real time to find energy savings potential and manage
energy usage across production plants [10].
Process optimization: By analyzing data from various manufacturing pro-
cesses, companies can identify bottlenecks, inefficiencies, and opportuni-
ties for optimization. This can lead to increased production efficiency and
reduced cycle times.
Customization and personalization: Big data analytics enables companies to
analyze customer preferences and behavior to offer customized and person-
alized products and services, driving customer satisfaction and loyalty [11].
Real-time monitoring and control: Big data analytics facilitates real-time
monitoring and control of production processes, allowing manufacturers
to make quick adjustments based on data insights to ensure quality and
efficiency.
Workforce management: Data analytics can help in optimizing workforce
scheduling, performance management, and training by analyzing employee
data, leading to improved productivity and job satisfaction.
Risk management: Big data analytics may identify and reduce operational,
financial, and supply chain risks by analyzing data from many sources,
including historical and real-time data.
Innovation and product development: Analyzing customer feedback and mar-
ket trends using big data analytics can drive innovation in product develop-
ment and help companies stay competitive by creating products that meet
evolving customer needs.
Waste reduction and sustainability: By analyzing data related to resource con-
sumption and waste generation, organizations can identify opportunities
for waste reduction and sustainability improvements in their manufacturing
processes.
Collaborative robotics and automation: Big data analytics can improve
human-robot collaboration in smart industries, allowing for safer and more
efficient work automation.
Digital twins: Creating digital twins of physical assets or processes and ana-
lyzing their data can lead to insights that help improve performance, main-
tenance, and design.
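
To ground the quality-control item above, here is a hedged scikit-learn sketch
that learns the normal operating region from historical sensor readings and flags
a part produced outside it; the synthetic data, feature choice, and contamination
setting are illustrative assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 200 parts produced under normal conditions: (temperature, pressure).
normal = rng.normal(loc=[70.0, 1.2], scale=[0.5, 0.05], size=(200, 2))
suspect = np.array([[74.0, 1.6]])  # a part made well outside that region

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns 1 for inliers and -1 for outliers.
print(model.predict(suspect))    # expected: [-1], flagged for inspection
print(model.predict(normal[:3])) # mostly 1s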

According to the survey [12], global data volume usage in 2015 was 15.5
zettabytes, and this figure is expected to rise to 185 zettabytes by 2025, as shown
in Figure 5.6.

FIGURE 5.6 Volume of data usage worldwide.

Overall, big data analytics enables firms in Industry 4.0 to make more informed
decisions, improve operational efficiency, improve customer experiences, and drive
innovation, resulting in enhanced competitiveness and growth.

5.2.3 Artificial Intelligence and Machine Learning


By enabling automation, optimization, predictive analytics, and intelligent deci-
sion-making, AI and ML play critical roles in the evolution of Industry 4.0. These
technologies are revolutionizing industries by leveraging the power of data and algo-
rithms to develop more efficient, flexible, and inventive processes. Here are some
significant AI and ML applications in Industry 4.0:
Human-robot collaboration: AI enables safe and effective collaboration
between humans and robots, enhancing productivity and workplace safety.
Personalized production: ML algorithms analyze customer data to customize
and personalize products, leading to improved customer satisfaction.
Health and safety monitoring: AI monitors worker safety using sensor data,
identifying potential risks and ensuring compliance with safety regulations.
Supply chain transparency: Blockchain, coupled with AI, ensures trans-
parent and traceable supply chains, reducing fraud and ensuring product
authenticity.
Smart maintenance: AI-powered predictive analytics optimize maintenance
schedules, reducing costs and downtime.
Real-time decision-making: AI processes real-time data to support quick and
informed decision-making, enhancing operational efficiency.
Digital twin technology: AI-driven digital twins simulate real-world pro-
cesses, enabling optimization and experimentation without affecting physi-
cal systems (a toy twin sketch follows this list).
Smart product design: AI and ML assist in designing products by analyzing
customer feedback, market trends, and simulation data.
Data analytics and insights: AI-powered analytics reveal hidden patterns and
insights from massive datasets, aiding strategic planning and innovation.
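
As a toy version of the digital-twin item above, the sketch below runs a
first-order thermal model of a motor alongside simulated field measurements and
reports sustained divergence; the model constants, fault injection, and 2-degree
alarm band are all illustrative assumptions.

AMBIENT, HEAT_GAIN, COOL_RATE = 25.0, 0.8, 0.1  # illustrative constants

def twin_step(temp, load):
    # Predicted temperature after one time step under the given load.
    return temp + HEAT_GAIN * load - COOL_RATE * (temp - AMBIENT)

twin_temp = measured = 40.0
for step in range(30):
    twin_temp = twin_step(twin_temp, 1.0)
    # Simulated sensor: tracks the twin until a fault at step 20 adds
    # extra friction heating.
    measured = twin_step(measured, 1.0) + (0.5 if step >= 20 else 0.0)
    if abs(measured - twin_temp) > 2.0:
        print(f"step {step}: twin {twin_temp:.1f} C, measured "
              f"{measured:.1f} C -> inspect machine")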

These examples demonstrate the transformational power of AI and ML in
Industry 4.0. They enable organizations to achieve higher levels of efficiency, pro-
ductivity, and innovation, leading to competitive advantages in today’s rapidly evolv-
ing technological landscape.

5.2.3.1 How AI and ML Serve as Cornerstones of Industry 4.0


AI and ML serve as the cornerstones of Industry 4.0 by providing the intelligence,
automation, and adaptability needed to drive the transformation of modern industries.
They enable data-driven decision-making, predictive capabilities, and enhanced pro-
cess optimization, which are essential for realizing the full potential of Industry 4.0.
Here’s how AI and ML act as the cornerstones of Industry 4.0:

Data-driven insights: In real time, AI and ML algorithms process and evalu-
ate massive volumes of data created by sensors, devices, and systems. This
data-driven approach provides actionable insights that guide operational
improvements and strategic decision-making.
Personalization and customization: AI and ML enable mass customization by
analyzing customer data and preferences. This allows industries to produce
tailored products and services, enhancing customer satisfaction and loyalty.
Automation and autonomous systems: AI-powered automation and robotics
enable the development of autonomous systems capable of performing tasks
without the need for human involvement. This leads to higher production
efficiency, lower error rates, and safer operations.
Continuous learning and adaptation: ML algorithms learn from data and
adapt to changing conditions over time. This adaptability is crucial for
industries that need to respond quickly to market fluctuations, production
variations, and unforeseen challenges.
Enhanced human-machine collaboration: AI and ML facilitate seamless
collaboration between humans and machines. Workers can interact with
intelligent systems, receiving real-time guidance, insights, and recommen-
dations for better decision-making.
Real-time decision-making: AI processes data rapidly and generates insights
in real-time, enabling quick and informed decision-making. This is particu-
larly valuable in dynamic manufacturing environments.
Energy efficiency and sustainability: AI and ML optimize energy consump-
tion by analyzing data patterns and recommending energy-saving strate-
gies, contributing to sustainability goals.
Innovation and product development: AI and ML assist in product design and
innovation by analyzing market trends, customer feedback, and simulation
data, leading to the creation of new and improved products.
Remote monitoring and control: AI and ML enable remote monitoring and
control of industrial processes, allowing for more flexible and efficient
operations.

AI and ML empower Industry 4.0 by providing the intelligence and automation
necessary to create adaptive, data-driven, and efficient manufacturing systems.
They enable organizations to harness the full potential of interconnected
technologies and drive innovation across various industrial sectors.
According to statistics [13], the AI market was worth 17,267.75 million US dol-
lars in 2022, and it is expected to grow to 80,847.26 million US dollars by 2024, as
illustrated in Figure 5.7.

FIGURE 5.7 AI and ML market.

5.2.4 Cyber-Physical Systems (CPS)


CPS are an essential component of Industry 4.0, combining physical processes with
computing, communication, and control. CPS enables seamless interaction between
the physical and digital worlds, transforming industries through real-time monitor-
ing, control, and optimization.
CPS combine physical components (devices, machines, sensors, actuators) with
computational elements (software, algorithms, networks) to create intelligent sys-
tems capable of interacting with the physical world. In Industry 4.0, CPS enables
the confluence of technologies such as IoT, cloud computing, data analytics, and AI,
resulting in smarter and more efficient industrial processes [13].
According to the survey [14], in 2015, 8.8 billion devices were linked to the
Internet, increasing the risk of cyberattacks. As illustrated in Figure 5.8, the num-
ber of Internet-connected devices will reach 75.3 billion by 2025. As a result, more
cyberattacks are possible.

FIGURE 5.8 Connected devices on the Internet.
Key Characteristics of CPS in Industry 4.0 (a minimal control-loop sketch follows the list):

Real-time interaction: CPS enable real-time monitoring, control, and deci-
sion-making by continuously collecting data from physical processes and
responding with appropriate actions.
Interconnectivity: CPS are connected through communication networks,
enabling seamless data exchange and coordination between physical and
digital components.
Autonomy and adaptation: CPS can adapt to changing situations indepen-
dently and make judgments based on data analysis, reducing the need
for human intervention.
Physical feedback: CPS provides feedback from the physical world to digital
systems, allowing them to respond to real-world events and conditions.
Predictive capabilities: CPS uses data analytics and AI to predict and prevent
potential issues, optimizing processes and improving efficiency.
Safety and security: CPS employs security measures to safeguard the integrity,
confidentiality, and availability of data and control systems, hence reducing
cybersecurity threats.
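
The sense-decide-actuate loop at the heart of a CPS can be sketched in a few
lines of Python; the bang-bang controller, setpoint, and plant dynamics below
are simplified stand-ins for a real physical process and actuator.

import random

SETPOINT, BAND = 80.0, 2.0   # target temperature and tolerance band
temp, heater_on = 75.0, False

for tick in range(15):
    # Sense: the physical plant drifts, influenced by the actuator state.
    temp += (1.5 if heater_on else -1.0) + random.uniform(-0.2, 0.2)
    # Decide: the computational element evaluates the reading in real time.
    if temp < SETPOINT - BAND:
        heater_on = True
    elif temp > SETPOINT + BAND:
        heater_on = False
    # Actuate/log: the command feeds back into the physical process.
    print(f"t={tick:02d} temp={temp:5.1f} heater={'ON' if heater_on else 'OFF'}")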

5.2.4.1 CPS Applications in Industry 4.0


Smart manufacturing: CPS monitor production lines, adjust parameters, and
optimize operations to ensure quality, reduce defects, and enhance overall
efficiency.
Energy management: CPS regulates energy consumption based on real-time
demand and supply data, contributing to energy efficiency and cost savings.
Autonomous vehicles and drones: CPS enables self-driving vehicles and drones
to navigate and interact with their surroundings safely and efficiently.
Healthcare monitoring: CPS facilitates remote patient monitoring and per-
sonalized healthcare solutions by collecting and analyzing vital signs and
patient data.
Smart grids: CPS manages and optimizes energy distribution in smart grids,
responding dynamically to changing demand and supply conditions.

Based on survey data [15], Figure 5.9 depicts the origin of the greatest number
of cyberattacks. The values for the countries from which the highest numbers of
cyberattacks originate are shown in Table 5.1.

FIGURE 5.9 Geographical areas of cyber threats.

TABLE 5.1
Origin of the Highest Number of Cyber Threats

Countries       Origin for the Highest Number of Cyberattacks
Netherlands     2.2
Indonesia       2.41
Russia          2.46
Thailand        2.5
Vietnam         4.23
Germany         5.1
India           5.33
Brazil          5.63
USA             17.05

Cyber-physical systems are a key component of Industry 4.0, allowing for the
seamless integration of physical processes with digital information. They
enable industries to develop more responsive, efficient, and inventive systems,
resulting in higher production, sustainability, and competitiveness.

5.2.5 Robotic Process Automation (RPA)


Robotic process automation (RPA) is a game-changing technology that automates
repetitive, rule-based operations and processes in Industry 4.0. RPA entails the
use of software robots, or “bots,” to do regular tasks, allowing human workers to
focus on higher value activities. Here are some examples of how RPA contributes to
Industry 4.0:

Efficiency and accuracy: RPA automates manual and repetitive operations
with great precision, lowering the possibility of human error. This results
in increased process efficiency, decreased rework, and improved data
accuracy.
Cost savings: By automating tasks that would otherwise require human effort,
RPA helps organizations achieve cost savings by optimizing resource utili-
zation and improving productivity.
Data integration: RPA can integrate and synchronize data across various sys-
tems and applications, ensuring seamless data exchange and consistency
within the digital ecosystem of Industry 4.0.
Scalability: RPA bots may be quickly scaled up or down to fit changing
business needs, making it a versatile tool for dealing with workload
variations.
24/7 Operations: RPA bots can operate around the clock without the need for
breaks, leading to continuous and uninterrupted process execution.
Process optimization: RPA provides data-driven insights into process bot-
tlenecks and inefficiencies, enabling organizations to identify areas for
improvement and optimization.
Human-robot collaboration: RPA works alongside human employees, allow-
ing them to offload mundane tasks and focus on more creative, strategic,
and customer-centric activities.
Compliance and audit: RPA enforces standardized processes and
ensures compliance with regulations by following predefined rules and
guidelines.
Enhanced customer experience: By automating processes that involve cus-
tomer interactions, RPA can lead to faster response times and improved
customer satisfaction.
Quick implementation: RPA implementation typically involves minimal dis-
ruption to existing systems, allowing organizations to realize benefits rela-
tively quickly.
Interoperability: RPA can be combined with other Industry 4.0 technologies
like AI, IoT, and analytics to improve the entire digital transformation.

5.2.5.1 Examples of RPA in Industry 4.0


Manufacturing: RPA can automate data entry, inventory management, order
processing, and quality control tasks in manufacturing processes.
Supply chain management: RPA can automate order tracking, invoice pro-
cessing, and shipment coordination, enhancing supply chain visibility and
efficiency.
Customer service: RPA can handle routine customer inquiries, order status
checks, and issue resolution, freeing up customer service agents for more
complex interactions.
Finance and accounting: RPA can automate invoice processing, expense man-
agement, and financial reporting tasks, ensuring accuracy and compliance
(a minimal bot sketch follows this list).
Human resources: RPA can streamline employee onboarding, payroll process-
ing, and benefits administration, reducing administrative burden.
Healthcare: RPA can automate data entry, claims processing, and appointment
scheduling, enabling healthcare providers to focus on patient care.
Logistics: RPA can automate route optimization, shipment tracking, and cus-
toms documentation, improving logistics operations.
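
As a minimal flavor of the rule-based work RPA bots perform, the Python sketch
below screens a toy invoice feed and routes exceptions to a human; the field
names, records, and approval limit are invented for illustration.

import csv
import io

RAW = """invoice_id,vendor,amount,po_number
INV-001,Acme,1200.50,PO-77
INV-002,Globex,980.00,
INV-003,Acme,15000.00,PO-81
"""

APPROVAL_LIMIT = 10000.0
for row in csv.DictReader(io.StringIO(RAW)):
    amount = float(row["amount"])
    if not row["po_number"]:
        print(row["invoice_id"], "-> route to human (missing PO)")
    elif amount > APPROVAL_LIMIT:
        print(row["invoice_id"], "-> route to human (over limit)")
    else:
        print(row["invoice_id"], f"-> auto-posted for {amount:.2f}")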

Incorporating RPA into Industry 4.0 initiatives helps organizations streamline
operations, enhance efficiency, and embrace automation to drive digital transforma-
tion and remain competitive in the evolving technological landscape.

5.2.5.2 Risks and Challenges of Adoption of RPA in Industry


While RPA offers numerous benefits to industries, its adoption also comes with cer-
tain risks and challenges. To guarantee a successful RPA adoption, firms must be
aware of these potential downsides and address them appropriately. Here are some
of the key risks and challenges associated with the adoption of RPA in the industry:

1. High upfront costs: Implementing RPA requires initial investments in soft-
ware licenses, infrastructure, training, and consulting services. The costs
can be significant, especially for large-scale deployments, which might
impact the ROI in the short term.
2. Change management: RPA implementation frequently demands adjust-
ments to existing processes, job responsibilities, and workflows. Employees
may be resistant to these changes or fear job displacement, posing difficulties
in managing employee expectations and ensuring effective implementation.
3. Process complexity: Not all processes are suitable for automation. Complex
or variable processes may be challenging to automate effectively using
RPA, and significant process redesign may be required, adding complexity
to the implementation.
4. Scalability and maintenance: As RPA bots are scaled up, managing a
large number of bots can become complex. Maintenance, monitoring, and
updates to bots require ongoing effort, and ensuring their reliability and
performance at scale is crucial.
5. Data security and compliance: RPA requires the handling of sensitive data.
Organizations, particularly in heavily regulated industries, must employ
comprehensive security measures to avoid unauthorized access, data
breaches, and compliance violations.
6. Lack of standardization: Processes that lack standardization or involve
unstructured data can be challenging to automate using RPA. Extracting
and processing data from diverse sources may require manual intervention
or advanced AI capabilities.
7. Integration with legacy systems: Integrating RPA with current legacy sys-
tems and technologies can be difficult and time-consuming, resulting in
compatibility concerns.
8. Bot failures and errors: RPA bots can encounter errors due to changes in
application interfaces, data format variations, or system failures. Ensuring
error handling, monitoring, and recovery mechanisms is essential.
9. Over-reliance on RPA: Organizations might become overly reliant on RPA
for automation and neglect other technological advancements or process
optimization opportunities.
10. Cultural resistance: Employees might perceive RPA as threatening job
security or an unwanted change. Building a culture of automation and
addressing employees’ concerns is crucial for successful adoption.
11. Vendor lock-in: Organizations may become dependent on a specific RPA
vendor’s technology, making it challenging to switch vendors or adopt alter-
native solutions in the future.
12. Lack of skilled resources: Organizations need skilled personnel who
can design, develop, and maintain RPA solutions. The scarcity of these
resources in the job market can pose a challenge.

Lee et al.’s survey [16] also highlighted the pitfalls shown in Table 5.2, and
Figure 5.10 shows the approximate share of each risk in RPA adoption.

TABLE 5.2
Approximate Percentage of Risk in the Adoption of RPA

Challenges              Approx. Percentage of Risk in Adoption
Process ability         26
Technical complexity    23
Capacity                20
Ownership               16
Operational risk        14
Other                   1

FIGURE 5.10 Challenges of risk in adoption of RPA.

Addressing these risks and challenges requires careful planning, effective change
management, a thorough process assessment, and collaboration between IT,
operations, and business units. By taking a holistic approach and addressing these
challenges proactively, organizations can maximize the benefits of RPA adoption
in Industry 4.0 while minimizing potential drawbacks.

5.2.6 Augmented Reality (AR) and Virtual Reality (VR)


AR and VR are disruptive technologies that play a crucial role in Industry 4.0 by
enhancing human-machine interfaces, training, and visualization, and by enabling new
design, maintenance, and collaboration possibilities [17]. Here are some examples of
how AR and VR are used in Industry 4.0:

5.2.6.1 Augmented Reality (AR) in Industry 4.0


Remote assistance: AR enables real-time remote assistance by overlaying digital
information onto a technician’s field of view [18]. This allows experts to guide
on-site workers through complex tasks, reducing downtime and improving
problem-solving efficiency.

Maintenance and repair: AR provides maintenance personnel with step-by-
step visual instructions and real-time data overlays on equipment, simplify-
ing repair procedures and ensuring accurate execution.
Training and onboarding: AR enables immersive training experiences by superim-
posing digital data on real-world items. Before migrating to actual equipment,
new employees can learn processes and tasks in a safe virtual environment.
Design and prototyping: AR allows engineers and designers to visualize and
manipulate digital prototypes in real-world settings, enabling rapid design
iterations and improving the development process.
Data visualization: AR provides real-time data visualization on physical
equipment, helping operators monitor performance, track metrics, and
make informed decisions.
Logistics and warehousing: AR assists in picking and packing tasks by pro-
viding visual instructions and real-time inventory information, increasing
efficiency in logistics operations.

5.2.6.2 Virtual Reality (VR) in Industry 4.0


Design and simulation: VR enables engineers to create and simulate complex
models in a virtual environment, allowing them to visualize and test prod-
uct designs before physical production.
Training and skill development: VR training scenarios allow employees to
practice activities and operations in a controlled environment, improving
skills and lowering real-world dangers.
Collaboration and communication: VR facilitates remote collaboration by
enabling team members to meet and interact in virtual spaces, irrespective
of their physical locations.
Safety training: VR simulates hazardous scenarios, allowing workers to prac-
tice safety protocols and emergency response procedures without exposing
themselves to real danger.
Virtual factory tours: VR offers virtual tours of manufacturing facilities and
processes, allowing stakeholders to explore operations and gain insights
without being physically present.
Sales and marketing: VR creates interactive and immersive product demon-
strations, enhancing customer engagement and enabling potential clients to
experience products virtually.

AR and VR contribute to Industry 4.0 by providing immersive experiences,
boosting training and collaboration, and enabling more efficient and effective opera-
tions across multiple industries.

5.2.7 Cloud Computing
Cloud computing is a fundamental enabler of Industry 4.0, providing the necessary
infrastructure and capabilities to support the digital transformation of industries.
Cloud computing provides scalable and adaptable resources for data storage, pro-
cessing, and analytics, allowing businesses to use the power of new technologies
such as IoT, AI, and big data.

5.2.7.1 How Cloud Computing Is Applied in Industry 4.0


Scalable resources: Cloud computing enables enterprises to scale up or down
based on demand by providing on-demand access to computing resources.
This is critical for dealing with the large amount of data produced by IoT
devices and other Industry 4.0 technologies.
Data storage and management: Cloud platforms offer reliable and secure stor-
age solutions for vast amounts of data collected from sensors, devices, and
processes. This data can be easily managed, organized, and accessed for
analysis and decision-making (an upload sketch follows this list).
Data analytics and processing: Cloud-based analytics tools allow industries to
process and analyze large datasets quickly and efficiently. This capability
supports real-time insights, predictive analytics, and optimization of manu-
facturing processes.
AI and ML: Cloud-based AI and ML services allow businesses to design
and deploy AI models without requiring large computational resources.
Automation, predictive maintenance, and data-driven decision-making are
all improved by these technologies.
Collaboration and communication: Cloud platforms facilitate collaboration
among remote teams by providing centralized access to data, applications,
and communication tools. This is especially valuable for distributed design,
development, and collaboration efforts.
Remote monitoring and control: Cloud-based solutions offer remote monitor-
ing and management of industrial processes and equipment in real time,
allowing enterprises to manage operations from any location with an inter-
net connection.
Digital twin and simulation: Cloud-based digital twin platforms allow
industries to create virtual replicas of physical assets, enabling simula-
tion, testing, and optimization of processes without impacting real-world
operations.
Cost effectiveness: Cloud computing eliminates the need for large upfront hard-
ware and infrastructure investments. Organizations pay for the resources
they use, resulting in lower capital expenditures and more predictable oper-
ating costs.
Global reach: Cloud platforms have a global presence, enabling industries to
deploy applications and services closer to their target markets, reducing
latency and improving performance for users worldwide.
Flexibility and agility: Cloud computing facilitates the rapid deployment
of new apps and services, fostering innovation and experimentation in
Industry 4.0 efforts.
Security and compliance: Cloud providers provide powerful security features,
data encryption, and compliance certifications to assist industries in meet-
ing regulatory requirements and protecting sensitive data.
Energy efficiency: Cloud providers optimize data centers for energy effi-
ciency, which aligns with sustainability goals and reduces the environmental
impact of operations.

Cloud computing is a key technology for Industry 4.0, allowing firms to leverage
sophisticated technology capabilities, analyze data, and drive innovation in manu-
facturing, supply chain management, and other industrial processes.

5.3 ANALYSIS OF BARRIERS TO ADOPTING TECHNOLOGY IN INDUSTRY 4.0
The adoption of technology in Industry 4.0 is influenced by various barriers that can
impact the pace and success of implementation. Analyzing these barriers provides
insights into the challenges organizations face and helps develop strategies to over-
come them. Here’s a detailed analysis of the key barriers to adopting technology in
Industry 4.0:

1. Inadequate awareness and understanding:
• Analysis: Many firms are unaware of Industry 4.0 technologies and
their potential benefits. This lack of awareness can stymie
decision-making and investment in these technologies.
• Impact: Without proper understanding, organizations may overlook
valuable opportunities to enhance efficiency, productivity, and competi-
tiveness through technology adoption.
2. High initial investment costs:
• Analysis: Implementing Industry 4.0 technologies often requires sub-
stantial upfront investments in hardware, software, infrastructure,
training, and talent acquisition.
• Impact: High costs can discourage firms from embarking on technol-
ogy adoption journeys, particularly small and medium-sized enter-
prises (SMEs) with limited resources.
3. Legacy systems and integration challenges:
• Analysis: Existing legacy systems and infrastructure may not be com-
patible with new technologies, leading to integration complexities and
potential disruptions.
• Impact: Integration challenges can delay implementation, increase
costs, and hinder the seamless flow of data and processes across the
organization.
4. Data security and privacy concerns:
• Analysis: Concerns about data breaches, cyberattacks, and compliance
with data protection requirements arise as a result of Industry 4.0’s
greater connectivity and data sharing.
• Impact: Security and privacy concerns can erode trust, limit data shar-
ing, and impede the adoption of technologies that rely on data sharing
and analysis.
5. Resistance to change:
• Analysis: Employees may be hesitant to accept new technologies
because of fears about job displacement, a lack of knowledge, or skill
obsolescence.
• Impact: Resistance to change can stall acceptance, impede the devel-
opment of a technologically knowledgeable workforce, and limit the
benefits of technology implementation.
6. Lack of skilled workforce:
• Analysis: The demand for specialized skills in areas such as data sci-
ence, AI, and cybersecurity often outpaces the available talent pool.
• Impact: Skill shortages can hinder technology implementation, limit
innovation, and lead to increased competition for skilled professionals.
7. Uncertain ROI:
• Analysis: Demonstrating clear and tangible ROI for Industry 4.0 invest-
ments can be challenging, especially in the short term.
• Impact: Organizations may hesitate to invest in technologies without a
clear understanding of the potential benefits and financial returns.
8. Regulatory and compliance complexities:
• Analysis: Compliance with industry-specific regulations and data pro-
tection laws can be complex, particularly when adopting new digital
solutions.
• Impact: Regulatory challenges can lead to legal and financial risks, as
well as hinder cross-border collaboration and data sharing.
9. Lack of standardization:
• Analysis: The absence of standardized protocols and interfaces can
hinder interoperability between different technologies and systems.
92 AI-Driven IoT Systems for Industry 4.0

• Impact: Lack of standardization can lead to integration challenges,


limit compatibility, and impede the seamless exchange of data.
10. Cultural and organizational resistance:
• Analysis: Organizational culture may resist change, hindering the
adoption of new technologies and innovative practices.
• Impact: Cultural resistance can create internal friction, delay imple-
mentation, and limit the organization’s ability to embrace digital
transformation.
Addressing these roadblocks necessitates a multifaceted approach that includes
education and awareness campaigns, strategic planning, staff training and engage-
ment, collaboration with technology partners, and a focus on developing an innova-
tive and adaptable culture. Overcoming these challenges is critical for enterprises
to fully reap the benefits of Industry 4.0 technology and remain competitive in an
ever-changing digital market.

5.4 HOW TO IMPLEMENT INDUSTRY 4.0 TECHNOLOGIES EFFECTIVELY
Implementing Industry 4.0 in your organization requires a well-planned and com-
prehensive approach that takes into account a variety of factors such as technology
adoption, process optimization, workforce preparation, and change management.
Here’s a complete guide to efficiently implementing Industry 4.0:

1. Set clear objectives: Define specific goals and outcomes you want to
achieve through Industry 4.0 implementation. Align these objectives with
your organization’s overall vision and strategy.
2. Leadership commitment: Gain buy-in and commitment from top leader-
ship. Ensure they understand the importance of Industry 4.0 and are actively
involved in driving the transformation.
3. Examine the existing situation: Assess your company’s current techno-
logical infrastructure, processes, and workforce skills. Determine your
strengths, shortcomings, and areas for improvement.
4. Create a roadmap: Make a thorough implementation roadmap, including
the steps, deadlines, and resources needed for the Industry 4.0 journey.
Divide the blueprint into manageable stages.
5. Technology selection: Choose the right technologies (IoT, AI, big data, etc.)
that align with your objectives. Ensure they address your specific chal-
lenges and provide meaningful solutions.
6. Pilot projects: Begin with pilot projects to test and validate the technology
of choice. Choose areas where the impact can be quantified and concrete
advantages can be demonstrated.
7. Data strategy: Develop a comprehensive data strategy that includes data
collection, storage, analysis, and visualization. Ensure data quality, secu-
rity, and compliance.
8. Integration and connectivity: Establish a robust connectivity infrastructure
to link various devices, systems, and processes. Implement IoT devices and
sensors for data collection.
9. AI and analytics implementation: Integrate AI and analytics solutions to
process and analyze the collected data. Develop AI models for predictive
insights and data-driven decision-making.
10. Process optimization: Examine and improve existing processes to make
use of the possibilities of Industry 4.0 technologies. Workflows should be
redesigned for automation, efficiency, and agility.
11. Change management: Address workforce concerns and resistance to
change through effective change management strategies. Communicate the
benefits of Industry 4.0 and provide training.
12. Skill development: Invest in training and upskilling programs to give
your employees the skills they need to operate and manage Industry 4.0
technologies.
13. Collaboration and partnership: Work with technology partners, suppliers,
and industry experts to leverage their expertise and experience in
implementing Industry 4.0.
14. Monitor and evaluate: Continuously monitor the progress of your Industry
4.0 initiatives. Collect feedback, measure key performance indicators
(KPIs), and make adjustments as needed (an OEE sketch follows this list).
15. Scalability and expansion: Scale up successful pilot projects and expand
Industry 4.0 implementation across different departments and processes.
16. Continuous innovation: Foster a culture of innovation that encourages
ongoing exploration of new technologies and approaches to drive continu-
ous improvement.
17. Cybersecurity and data privacy: Implement strong cybersecurity safeguards
to safeguard data, devices, and systems against potential threats and breaches.
18. Continuous learning and improvement: Stay current on the latest
technology breakthroughs, industry trends, and best practices to keep your
company at the forefront of Industry 4.0.
19. Celebrate success: Recognize and celebrate achievements and milestones
reached during the Industry 4.0 implementation. Share success stories to
inspire and motivate your workforce.
20. Adapt and evolve: Industry 4.0 is a never-ending process. To remain com-
petitive in the quickly changing digital market, you must constantly adapt
and evolve your strategies.
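
As a small example of the kind of KPI tracked in step 14, overall equipment
effectiveness (OEE) is commonly computed as availability x performance x
quality; the shift numbers below are illustrative assumptions.

def oee(availability, performance, quality):
    # OEE is the product of the three factors, each in [0, 1].
    return availability * performance * quality

# Illustrative shift: 7.5 h runtime of 8 h planned, 950 parts against a
# 1,000-part ideal rate, 940 good parts out of 950 produced.
availability = 7.5 / 8.0    # 0.9375
performance = 950 / 1000    # 0.95
quality = 940 / 950         # ~0.9895
print(f"OEE = {oee(availability, performance, quality):.1%}")  # ~88.1%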

Effectively implementing Industry 4.0 requires a holistic approach that encom-
passes technology, processes, people, and culture. By following these steps and tai-
loring them to your organization’s specific needs, you can successfully navigate the
transformation and unlock the full potential of Industry 4.0.

5.5 SEIZING OPPORTUNITIES AMIDST CHALLENGES


While the promises of Industry 4.0 are boundless, the path to its realization is not
devoid of challenges. Organizations embarking on this transformative journey must
navigate through a maze of obstacles, from high initial investment costs and integra-
tion complexities to data security concerns and workforce readiness.

The challenge of upskilling the workforce to operate and manage these advanced
technologies looms large, demanding a commitment to continuous learning and
development. Legacy systems and regulatory hurdles often act as bottlenecks,
impeding the seamless integration of new technologies. Addressing these challenges
necessitates a holistic approach, where visionary leadership, strategic planning, and
adept change management converge to pave the way forward.
The symphony of Industry 4.0 is not orchestrated by a single maestro; rather, it is a col-
laborative endeavor that encompasses governments, industries, academia, and individuals.
Collaborative ecosystems and partnerships foster innovation and knowledge exchange,
accelerating the pace of technological advancement. Governments play a pivotal role in
shaping policies, regulations, and standards that nurture the growth of Industry 4.0.
Industries and businesses, driven by a spirit of innovation and adaptability, fuel
the engine of transformation, leveraging cutting-edge technologies to gain a com-
petitive edge. Academia becomes the crucible of knowledge creation, where research
and development pave the way for novel applications and breakthroughs.
As Industry 4.0 unfolds its transformative potential, ethical considerations and
sustainability come to the forefront. The sheer scale of data collection and process-
ing raises questions about data privacy, security, and ownership. The responsible
and ethical use of AI, particularly in decision-making processes, demands careful
consideration to ensure fairness, transparency, and accountability.
Moreover, the sustainable deployment of Industry 4.0 technologies is a moral
imperative. Organizations must embrace environmentally friendly practices, mini-
mize energy consumption, and harness technology to address pressing global chal-
lenges, such as climate change and resource scarcity.

5.6 CONCLUSION
Industry 4.0 technologies are transforming sectors and delivering unprecedented
levels of efficiency, productivity, and competitiveness. The adoption of Industry 4.0
technologies opens up a plethora of options across multiple industries. IoT integrates
the physical and digital worlds, allowing for real-time data collection, remote moni-
toring, and predictive analytics. AI enables machines to learn, reason, and make
intelligent decisions, transforming processes such as automation, data processing,
and decision-making. Data analytics unveil hidden patterns, trends, and correlations,
guiding strategic choices and unleashing innovation. CPS bridge the gap between
physical processes and digital systems, creating smart, interconnected environments
that optimize operations and enhance safety.
However, the path to embracing Industry 4.0 is not without challenges.
Organizations must navigate barriers like technological complexity, high initial
costs, data security concerns, and resistance to change. Skill shortages, integration
issues, and regulatory compliance complexities add further layers of complexity to
the adoption process. Yet, these challenges are opportunities for growth and innova-
tion. By addressing these hurdles head-on and implementing a well-crafted strategy,
organizations can harness the full potential of Industry 4.0. Success requires vision-
ary leadership, a commitment to continuous learning, and a culture that embraces
change and collaboration.

As we stand at the cusp of a new era defined by Industry 4.0, the fusion of
technology, data, and human ingenuity holds the promise of a brighter and more
interconnected future. The organizations that embrace these technologies, adapt
their operations, and empower their workforce will be the trailblazers in this trans-
formative journey, shaping industries and redefining what is possible in the digital
age.

LIST OF ABBREVIATIONS
AI artificial intelligence
AR augmented reality
CPS cyber-physical systems
IoT Internet of Things
ML machine learning
RPA robotic process automation
VR virtual reality

REFERENCES
1. Li, S., Li, X., & Ma, X. The application of IoT in industrial environments: A systematic
review. Journal of Manufacturing Systems, 64, 261–271, 2022.
2. Goh, M., Wu, D., & Kim, T. H. (Eds.). Big Data Analytics for Smart Manufacturing.
Springer, 2021.
3. Rauschecker, U., Shen, W., & Wang, L. AI-driven predictive maintenance for industrial
equipment. Journal of Manufacturing Systems, 72, 307–319, 2023.
4. Azevedo, A., Barata, J., & Tovar, E. (Eds.). Cyber-Physical Systems for Industry 4.0.
CRC Press, 2022.
5. Gao, W., Zhang, Y., & Zhu, W. Additive manufacturing: Technology, applications, and
opportunities in industry 4.0. Additive Manufacturing, 39, 101922, 2021.
6. Ivezic, N., Howe, A., & Fussell, D. Augmented Reality and Virtual Reality in Industry
4.0. CRC Press, 2023.
7. Guo, Q., Li, X., & Zhu, Z. Cloud and edge computing for smart manufacturing: A
review. Robotics and Computer-Integrated Manufacturing, 79, 101998, 2022.
8. Vailshery, L. S. Internet of Things (IoT) and Non-IoT Active Device Connections
Worldwide from 2010 to 2025. Technology & Telecommunications, 2022.
9. Taylor, P. Forecast Revenue Big Data Market Worldwide 2011–2027, www.statista.com,
2022.
10. Archenaa, J., & Mary Anita, E. A. A survey of big data analytics in healthcare and
government. 2nd International Symposium on Big Data and Cloud Computing, 50,
408–413, 2015
11. Di Gesualdo, D. Artificial Intelligence for Industry 4.0, www.eeweb.com, 2022.
12. Berisha, B., & Meziu, E. Big Data Analytics in Cloud Computing: An Overview,
www.researchgate.net/publication/348937287_Big_Data_Analytics_in_Cloud_
Computing_An_overview, http://dx.doi.org/10.13140/RG.2.2.26606.95048, 2021.
13. Mosterman, P. J., & Zander, J. Industry 4.0 as a cyber-physical system study., Software
and Systems Modeling, 15, 17–29, 2016
14. Ervural, B. C., & Ervural, B. Overview of Cyber Security in the Industry 4.0 Era,
Managing The Digital Transformation, Springer Series in Advanced Manufacturing,
http://dx.doi.org/10.1007/978-3-319-57870-5_16, 2018.

15. DavidPur, N. Which Countries Are Most Dangerous? Cyber Attack Origin – By
Country, www.cyberproof.com, 2022.
16. Lee, J., Lee, B., & Lee, D. The impact of robotic process automation (RPA) on busi-
ness process outsourcing (BPO). Technology Analysis & Strategic Management, 31(11),
1367–1381, 2019.
17. Schleicher, D., & Schöbel, A. Immersive Technologies in Industry 4.0—Current Status
and Future Prospects. Augmented and Virtual Reality in Operations Management,
15–35, 2020.
18. Sosa, R., & Oliveira, L. Industry 4.0 and Augmented Reality: Opportunities and
Challenges. Production Engineering Research and Development, 10, 23–32, 2020.
6 Opportunities and
Challenges of Digital
Connectivity for
Industrial Internet
of Things
Mahesh Visveshwarappa

6.1 INTRODUCTION
The adoption of the Internet of Things (IoT) in industry, particularly manufacturing, is
referred to as the Industrial IoT (IIoT). IoT refers to a network of smart systems and
administration platforms that work together for the benefit of society. Because industrial
equipment must cooperate and operate synchronously, interconnection of equipment is
not a novel idea. Such systems exchange valuable information and interconnect with each
other, but this interaction has traditionally been confined to a specific area of a factory or
other industrial setting [1]. An explosion of smart devices and technologies has made it
possible to turn the physical world into digital entities. Thanks to IIoT, the factors behind
this technological advancement are machine learning, robotics, and artificial
intelligence (AI), together with higher-speed internet. The IIoT, commonly referred to
as Industry 4.0 or the Industrial Internet, is one such technology driving trans-
formation in the manufacturing sector. IIoT can achieve previously unheard-of
levels of efficiency, production, and performance by fusing machine-to-machine inter-
action with big data analysis. According to Accenture’s analytical reports, the IIoT will
revolutionize numerous industries that produce almost two-thirds of the global economy,
resulting in 14.2 trillion dollars in economic growth by 2030. However, as it has devel-
oped, new opportunities have emerged, as well as new problems and challenges for
corporate executives.
As a result, businesses in a variety of sectors are under increasing pressure to
navigate and overcome the challenges associated with the development and adop-
tion of the IIoT [2]. In order to deliver interoperability, efficiency, and scalability,
industrial IoT uses sensor-driven computing, intelligent machines, and data analyt-
ics applications. This directly encourages automation in crucial infrastructure and
boosts business productivity.
The most significant difficulty is protecting the industrial infrastructure and all
of its components while working toward the objective of increased productivity.
Cyberattacks on critical infrastructure and industries raise concern because
of the enormous losses they cause. Therefore, it is important to take away from these
occurrences the fact that attackers are now targeting industries and that this problem
requires immediate attention. IIoT is frequently referred to as the integration of two
important technologies called operational technology (OT) and information technol-
ogy (IT), where OT is the plant network where manufacturing is done and IT is the
enterprise network. To avoid the compromise of IIoT infrastructure, these two men-
tioned technologies have diverse security requirements to be taken into account [1].
Therefore, both the business and academic worlds have been urged to create new
wireless communication technologies through which industrial contexts can achieve
all of the aforementioned needs in IIoT. For instance, the authors of [3] suggested a
5G-enabled IIoT architecture, which demonstrates how 5G, through massive
machine-type communication (mMTC), enhanced mobile broadband (eMBB), and
ultrareliable low-latency communication (URLLC), can greatly help in the
automation of various industries [4].

6.2 POSSIBLE ATTACKS IN IIoT


When discussing security in the IT industry, client-server models are frequently
used. In these models, interactions between the client and server take place using
well-known protocols such as HTTP, IP, TCP (transmission control protocol), and UDP
(user datagram protocol). A successful attack typically results in financial
or reputational harm, and only seldom in safety hazards. OT structures, however, were
built for the reliable and smooth running of industrial tasks. Because OT compo-
nents and subsystems were not developed with security in mind, security is provided
via the separation of OT networks and physical security measures (Figure 6.1).

FIGURE 6.1 IIoT architecture with layers separation [1].



6.3 CHALLENGES AND JUSTIFICATIONS


Let’s examine some significant industrial IoT implementation issues that industries
are currently facing [5].

1. Inadequate data management by employees: The data produced by intel-


ligent devices must be appropriately stored, processed, analyzed, and man-
aged in order for IIoT technology to truly help manufacturing companies.
This calls for knowledgeable personnel who can appropriately assess reports
that have already been set up by an IoT solution provider or transform raw
data sets into insightful business insights. It is worth emphasizing, however,
that maintaining the data management system of a well-designed IoT solution
does not require extremely skilled professionals. If the user chooses a reputable
supplier who develops the solution using cutting-edge technology, the bar for
the required data skills is fairly low, and the user can quickly locate the needed
expertise or train existing staff for the role using the most recent advances.
2. Requirement for uninterruptable network connectivity: The requirement
for continuous network connectivity is an additional difficulty in integrat-
ing industrial IoT systems in production. It can be very difficult to identify
the best effective and long-lasting solution in huge factories with various
machines and equipment. However, given the rapid advancement of new
technologies, network connectivity is not a concern at all. The COVID-19
epidemic has stimulated investments in cutting-edge, highly effective, and
scalable wireless network technologies, including 5G and Wi-Fi 6. These
technologies simplify and optimize the network connectivity of industrial
IoT devices.
3. Intensified threat of cyberattacks: The virtual world of today is always vul-
nerable to cyberattacks, regardless of the sector of the economy or the area
of life. Of course, some of these might involve IoT system security flaws.
According to Gartner, up to 25% of attacks could be connected to IoT, and
547 million USD would be spent on security in this industry. Indeed, cyber-
security is a crucial factor that must not be overlooked. However, numerous
contemporary data encryption techniques reduce the possibility of data leaks
or system hacks (a minimal sketch of one such technique follows this list).
Data will be better protected if the user chooses expert solutions and services
from seasoned providers rather than keeping it in the server rooms of their
organization.
4. Uncertainty over the ROI: The IIoT is known for having significant imple-
mentation, management, and maintenance expenses because of how innovative
it is. Many business owners worry that such an investment might be greater
than the potential profit. In the production sector, 50% of business owners
anticipate seeing their digitalization investments pay off within five years.
Keep in mind, too, that the opportunities provided by IoT go far beyond
mere cost recovery. It is an invaluable investment
in consumer satisfaction, environmental protection, employee satisfaction,
safety, health, and other areas. It is important to keep in mind that each

instance should be examined separately when estimating prospective costs.


Consult an experienced technology partner who can provide a solution
tailored to the demands, capabilities, and particularities of the organi-
zation if you do not have much understanding and experience in this area [6].
5. Integration issues with legacy systems and IoT: The IoT ecosystem as a
whole must be correctly set up and integrated with the systems that your
business utilizes in order for it to operate as intended. When using outdated
systems, this can be challenging. The best course of action in this situation
is to develop a specific IoT solution that is tailored to the systems of your
business.
6. Data storage problems caused by IoT devices: Issues with data storage can
also prevent users from deploying IoT systems in the industrial sector, since
servers and tools need to store all the data that industrial IoT devices produce.
Notably, cloud computing is now used to store the majority of corporate
data, which not only makes it possible to store enormous data sets but also
improves security.
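
As a concrete illustration of the encryption measures mentioned in point 3 above, the following is a minimal sketch, assuming the widely used Python "cryptography" package; the machine name and sensor reading in the payload are illustrative placeholders, not part of any specific IIoT product.

```python
# A minimal sketch: protecting an IIoT sensor reading in transit with
# symmetric authenticated encryption (Fernet). The payload below is an
# illustrative placeholder.
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # provision once per device, store securely
cipher = Fernet(key)

reading = b'{"machine": "press-7", "temp_c": 74.2}'
token = cipher.encrypt(reading)   # ciphertext safe to send over the network
print(cipher.decrypt(token))      # only holders of the key can read it
```

Fernet also authenticates the message, so a tampered token is rejected on decryption, which matters as much as confidentiality in an OT setting.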

6.4 OPPORTUNITIES IN IIoT


Currently, the IIoT is the largest and most significant component of the IoT, although
consumer applications are expected to eventually overtake it in terms of spending.
Within the entire IoT landscape, however, the IIoT continues to play a significant role
and remains the more advanced segment.

1. Industrial IoT methods for minimizing time: Saving time at work is one of
the significant advantages of IIoT for the industrial sector. The IoT can assist
with duties involving the configuration, registration, upkeep, updating,
and monitoring of machines, and some of these procedures can be managed
remotely. Industrial control systems are an essential component of IoT
development: according to the most recent Deloitte report, up to 30% of
control systems will have analytical capabilities powered by AI within the
next three years [5].
2. Operational accuracy and reducing the likelihood of errors: In the produc-
tion sector, even a minor error can cost a lot of money. Intelligent, auto-
mated methods have made it feasible to prevent human errors that could be
the consequence of tiredness or lack of concentration. A well-programmed
system can increase the productivity of the entire firm; because it operates
steadily, it cannot be overworked, and it is more exact and precise than a
human being.
3. Enhanced operational efficiency: German production companies anticipate
a 12% performance gain after deploying intelligent solutions, according to
a PwC (PricewaterhouseCoopers) poll. The adoption of IoT solutions in
the sector allows staff to focus on tasks that advance development while
machines handle repetitive duties. Relieving employees of tedious and
unsatisfying jobs increases retention and improves their overall dedication
and satisfaction [7].
Digital Connectivity for IIoT 101

4. Industrial IoT technologies improve work planning: IoT enables busi-
nesses to make smarter decisions by examining historical data and
external factors like weather or supply shortages. For production
organizations that trade in numerous products, accurate estimates are
essential so that operations can run without wasting resources.
Appropriate real-time forecasting can aid the planning of supply chain
operations or the work of particular departments.
5. The capacity to produce useful data for business: With the help of IIoT
technology, businesses can gather vast volumes of data from every phase
of the fabrication process. As a result, manufacturers can identify
problems that need to be fixed and optimize the production process for
maximum efficiency.
6. A potential to gain a competitive edge in manufacturing: Companies have
been able to expand their business opportunities thanks to the utilization
of cutting-edge technology like IoT and the blending of human and auto-
mated labor. This is an excellent way to broaden the scope of current
offerings and implement adjustments that effectively improve the production
process.
7. Improved employee well-being: IIoT solutions also contribute to safer work-
ing conditions. These remedies could be sophisticated monitoring systems
or tools that staff members carry around with them while working. In real
time, sensors included in industrial IoT devices identify and alert work-
ers to anomalies like machine vibrations, excessively high temperatures,
and even gas leaks. Cameras can identify when employees enter potentially
hazardous areas of the facility, alerting connected machines to change their
operating parameters to safer ones.
8. Cost savings from gadget maintenance and repair: Predictive mainte-
nance, as a strategy that enables you to monitor the status of devices while
they are in use and establish the ideal frequency of inspections, is a crucial
component of success in the manufacturing industry. Because maintenance
is performed only when it is actually needed, machine downtime is reduced,
which also lowers costs. This is supported by a report
released by Deloitte, which found that IoT technology boosts productiv-
ity by 25%, lowers malfunction rates by 70%, and lowers maintenance
expenses by 25% [7].

6.5 CONCLUSION
The security challenges must be taken into account from the very beginning given
the rise in potential threats to the IIoT. When introducing IIoT architecture or after
making changes to an existing architecture, a thorough penetration test is required
because the changes may open up new attack vulnerabilities that weren’t there before.
The various security risks, difficulties, and solutions discussed in this chapter will
aid in preventing attacks on industries and provide opportunities for new business
owners to contribute to the development of the IIoT.

REFERENCES
1. A. C. Panchal, V. M. Khadse and P. N. Mahalle, “Security Issues in IIoT: A
Comprehensive Survey of Attacks on IIoT and Its Countermeasures,” 2018 IEEE
Global Conference on Wireless Computing and Networking (GCWCN), Lonavala,
India, 2018, pp. 124–130, doi: 10.1109/GCWCN.2018.8668630
2. C. J. Turner, J. Oyekan, L. Stergioulas and D. Griffin, “Utilizing Industry 4.0 on the
Construction Site: Challenges and Opportunities,” in IEEE Transactions on Industrial
Informatics, vol. 17, no. 2, pp. 746–756, Feb. 2021, doi: 10.1109/TII.2020.3002197
3. E. T. Nakamura and S. L. Ribeiro, “A Privacy, Security, Safety, Resilience and
Reliability Focused Risk Assessment Methodology for IIoT Systems Steps to Build and
Use Secure IIoT Systems,” 2018 Global Internet of Things Summit (GIoTS), Bilbao,
Spain, 2018, pp. 1–6, doi: 10.1109/GIOTS.2018.8534521
4. S. S. A. Abbas and K. L. Priya, “Self Configurations, Optimization and Protection
Scenarios with Wireless Sensor Networks in IIoT,” 2019 International Conference on
Communication and Signal Processing (ICCSP), Chennai, India, 2019, pp. 0679–0684,
doi: 10.1109/ICCSP.2019.8697973
5. https://solwit.com/en/posts/industrial-iot-opportunities-and-threats/ website accessed
on Aug. 2023.
6. A. Artemenko, “Keynote: Advances and Challenges of Industrial IoT,” 2021 IEEE
International Conference on Pervasive Computing and Communications Workshops
and other Affiliated Events (PerCom Workshops), Kassel, Germany, 2021, pp. 526–526,
doi: 10.1109/PerComWorkshops51409.2021.9431146
7. A. Chowdhury and S. A. Raut, “Benefits, Challenges, and Opportunities in Adoption of
Industrial IoT (March 28, 2019),” in International Journal of Computational Intelligence
& IoT, vol. 2, no. 4, 2019, Available at SSRN: https://ssrn.com/abstract=3361586
7 Malicious QR Code
Detection and Prevention
Premanand Ghadekar, Faijan Momin,
Tushar Nagre, Sanika Desai,
Prathamesh Patil, and Vinay Aher

7.1 INTRODUCTION
Nowadays, everyone has a smartphone, and UPI has transformed the banking sys-
tem; every other link is shared in the form of a quick response (QR) code. After
2020, due to the pandemic, demand increased for a standard that could carry large
amounts of information without humans touching physical items. The QR code
provides that solution.
A QR code consists of black and white modules arranged in a square grid on a
plain background. The information to be encoded must be in one of the four
standard modes.
However, people are also committing fraud using malicious QR codes, and one
cannot tell which QR code is malicious and which is not just by looking at it.

7.2 PROBLEM STATEMENT


Quick response codes are everywhere nowadays. Whenever customers visit a shop
or hotel, they scan QR codes directly for payment, but while scanning a QR code
[1], customers do not know whether it is safe or where it will take them if the
linked website is malicious. To tackle this problem, ordinary users need a way to
protect their data from malicious sites and QR codes.

7.3 LITERATURE SURVEY


In their paper, Kharraz et al. [1] analyzed the use of QR codes on the internet by
crawling and studying QR codes across 14 million web pages over a one-year span.
The study’s findings indicate that attackers frequently employ QR code technology
to disseminate malware or point people toward bogus websites. At the same time,
the relatively small number of malicious QR codes discovered during the
experiments indicates that, at internet scale, consumers are only infrequently
exposed to the threats carried by QR codes, and the frequency of these attacks is
not very significant.
In the proposed paper [2], the authors give a solution for determining whether a
JPEG image or QR code is malicious or benign. They propose a machine
learning-based system using a convolutional neural network (CNN): to distinguish
between good and bad QR code/JPEG images, 10 discriminative characteristics are
extracted and fed to a machine learning classifier.
In this paper, the authors [3] designed a new two-level QR code to increase QR code
security. It has two levels: the public level, which can be read by any QR code reader,
and the private level, which employs textured patterns as black modules and is
encoded for private message transfer. The private level does not affect or interfere
with reading of the public level, so the two-level QR code functions smoothly. This
study [4] demonstrates how to identify perspective distortion in QR codes using
edge direction and edge projection analysis.
The authors [5] of this work developed a QR code-based blind digital image water-
marking scheme with attack detection. The suggested method provides a framework
through which a modified binary form of the data is embedded into the DWT
(discrete wavelet transform) domain of the cover image and used to authenticate
images or QR codes.
This paper [6] shows how QR codes can be used to attack systems. The authors also
discuss different phishing techniques from the attacker’s point of view and suggest
possible solutions.
The authors [7] of this study have suggested a method for reading QR codes based
on the correlation of histograms between the reference image for the QR code and
the input picture. The paper also proposes a new algorithm which provides good
accuracy for the trained model.
This paper [8] describes the different worm attacks on the QR code and also pro-
vides the countermeasures for the same.
This paper [9] discusses the study of different malicious QR codes and also dem-
onstrates how to identify perspective distortion in QR codes using edge direction.
The study [10] provides the specification of the Data Matrix barcode symbol.
This study [11] discusses QR codes in libraries and provides a case study of their
application in the NITK central library.
In this paper [12], the authors analyze QR code perspective distortion on the basis
of edge projections and edge directions.
The proposed paper [13] presents different QR code patterns along with their
invariant moments and localization.
In the proposed paper [14], the authors analyzed the BRISK (Binary Robust
Invariant Scalable Keypoints) algorithm.
This paper [15] presents quick QR code detection and recognition in high-reso-
lution photos.

7.4 STRUCTURE OF QR CODE


The QR code contains different areas that contain different patterns (Figure 7.1).
They are as follows:

1. Finder Pattern: This pattern enables a reader to detect and identify the QR
code so that it can be read at any angle.

FIGURE 7.1 QR code structure.

2. Timing Pattern: This pattern differentiates black and white modules or


squares alternatively.
3. Alignment Pattern: This pattern aligns the data points that are encoded in
the data region.
4. Separators: Separators are used to separate the finder patterns from the rest of the code.
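
As a brief illustration of how these patterns are used in practice, the following is a minimal sketch with OpenCV’s built-in QR detector, which relies on the finder and alignment patterns internally; the file name "sample_qr.png" is a placeholder for any image containing a QR code.

```python
# A minimal sketch: locating and decoding a QR code with OpenCV.
import cv2

img = cv2.imread("sample_qr.png")
detector = cv2.QRCodeDetector()

# detectAndDecode returns the decoded payload, the four corner points
# of the symbol, and a rectified binary image of the code itself.
data, points, straight_qr = detector.detectAndDecode(img)

if points is not None and data:
    print("Decoded payload:", data)
    print("Corner points:", points.reshape(-1, 2))
else:
    print("No QR code found in the image.")
```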

7.5 QR CODE ATTACKS


QR codes have grown in popularity over the past few years, and as a result they have
become a target for hackers to spread malware and steal vital information. Hackers
can use QR codes for different malicious purposes. There are two main types of QR
code attacks or exploitations, as follows.

7.5.1 Quishing Attack
The first type of attack is quishing [16], and it is somehow similar to phishing attacks.
In this attack, a phishing page that hackers have designed to steal information like
user credentials, personal data, or some sensitive information is sent to a victim [17].
This attack can be avoided by making the QR code dynamic because it has more
security than the static ones. Also aging time can be added, which means if the user
does not use or activate the QR code in a specified time then it will become invalid.
It can be prevented by encrypting the data as well [18].
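
The following is a minimal sketch of the aging-time idea just described, assuming the Python "qrcode" package; the secret key, URL, and validity window are illustrative assumptions, not a prescribed scheme.

```python
# A minimal sketch: the QR payload carries an expiry timestamp and an
# HMAC signature, so a verifying server can reject codes scanned after
# the validity window. SECRET_KEY and the URL format are assumptions.
import hashlib
import hmac
import time

import qrcode  # pip install qrcode[pil]

SECRET_KEY = b"server-side-secret"
VALIDITY_SECONDS = 300  # the code becomes invalid after 5 minutes

def make_expiring_payload(base_url: str) -> str:
    expires = int(time.time()) + VALIDITY_SECONDS
    msg = f"{base_url}|{expires}".encode()
    sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return f"{base_url}?exp={expires}&sig={sig}"

def verify_payload(base_url: str, exp: int, sig: str) -> bool:
    if time.time() > exp:
        return False  # aged out: the QR code is no longer valid
    msg = f"{base_url}|{exp}".encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

payload = make_expiring_payload("https://pay.example.com/order/42")
qrcode.make(payload).save("dynamic_qr.png")  # render the dynamic QR code
```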

7.5.2 QRLjacking
The second type of attack is QRLjacking, short for QR code login jacking [19, 20].
In this attack, the victim scans a QR code on a fake website provided by hackers,
after which the hackers gain full access to the user’s account and can spread malware
to the victim’s device. This attack can be mitigated by service providers sending
email/SMS confirmations to users [21].

Additional authentication methods can be added to user login. For example, sound-
based authentication can be used.

7.6 ALGORITHMS USED


7.6.1 Random Forest
Random forest is a supervised machine learning algorithm frequently used to solve
classification and regression problems [22]. It builds decision trees from various
data samples, using their majority vote for classification and their average for
regression (Figures 7.2 and 7.3). Random forest performs the following procedure
[23]:

Step 1: Scanning the QR code


Step 2: Preprocessing the QR code
Step 3: Loading the training and testing dataset
Step 4: Feature extraction with the help of the BRISK algorithm
Step 5: In random forest, a number of records are randomly selected from the
dataset
Step 6: An individual decision tree is constructed for every sample
Step 7: Each decision tree produces an output
Step 8: The final result is obtained by majority vote (classification) or averaging
(regression), labeling the QR code as malicious/benign

FIGURE 7.2 Random forest implementation.


Malicious QR Code Detection and Prevention 107

FIGURE 7.3 Random forest ROC curve.

7.6.2 Decision Tree
The decision tree algorithm is used for classification and regression analysis,
splitting the data at the nodes of a tree (Figure 7.4).

Step 1: Scanning the QR code


Step 2: Preprocessing the QR code
Step 3: Load the training and testing dataset
Step 4: Feature extraction with the help of the BRISK algorithm
Step 5: Applying decision tree
Step 6: Train the model
Step 7: Detecting the type of QR code (malicious/benign)

FIGURE 7.4 Decision tree ROC curve.


108 AI-Driven IoT Systems for Industry 4.0

FIGURE 7.5 KNN ROC curve.

7.6.3 KNN Algorithm


KNN (k-nearest neighbors) is a supervised machine learning algorithm in which data
points are classified on the basis of the classes of their closest neighbors (Figure 7.5).

Step 1: Scanning the QR code


Step 2: Preprocessing the QR code
Step 3: Loading the training and testing dataset
Step 4: Feature extraction with the help of the BRISK algorithm
Step 5: Choosing the value of ‘K’, i.e., the number of nearest neighbors
Step 6: Based on the value of K and the distances to the data points, KNN
assigns each point to a class
Step 7: Training the model
Step 8: On the basis of the classes, the model classifies the QR code as malicious/benign.
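
A minimal sketch of Steps 5–8 follows, reusing the BRISK feature matrices (X_tr, X_te, y_tr, y_te) built in the random forest sketch above; k = 5 is an illustrative choice, not a tuned value.

```python
# A minimal sketch: KNN on the same BRISK features, assigning each QR
# code the majority class of its K nearest training points.
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=5)  # K = number of nearest neighbors
knn.fit(X_tr, y_tr)
print("KNN accuracy:", knn.score(X_te, y_te))  # malicious vs. benign
```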

7.7 DIAGRAMS
In the proposed system (Figure 7.6) [24], a QR code image is first scanned using the
system camera. For data preprocessing, the model uses techniques such as scaling
and translation.
Training and Testing the Model [25]: For training and testing, the dataset was split
into a 70% training set and a 30% testing set across the two classes. After
preprocessing, features are extracted from the QR code images [26]. Selected
patterns serve as the features: the finder pattern, the alignment pattern, and the
timing pattern. For feature extraction, the BRISK algorithm is implemented [19];
it selects these patterns, finds relations between them, and chooses the best
features using scale and rotational invariance [27]. Once the features are selected,
separate files of the extracted features are created and converted [28] into CSV files.

FIGURE 7.6 Flow diagram for the proposed system.

Once feature extraction is over [29], the system checks the QR code type: whether
the encoded link uses SSL (Secure Sockets Layer), whether it redirects the user to a
malicious website, or whether it uses the UPI protocol, and classification into the
respective types is done on that basis (a sketch of this check follows at the end of
this section).
After the QR code type is detected, a random forest algorithm classifies the codes
into two classes, malicious and normal (benign), which was our main objective [16].
The features extracted by the BRISK algorithm serve as input parameters, and the
decision tree [30] classifiers within the random forest label each QR code as
malicious or normal by majority vote. The proposed model selected random forest
as the classifying algorithm because its accuracy was the highest. Finally, the result
is printed as a malicious or normal QR code.
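
The following is a minimal sketch of the type-checking step described above; the redirector list and the exact warnings are illustrative placeholders, not the actual data used by the system.

```python
# A minimal sketch: heuristics over the decoded payload to flag
# non-SSL links, known redirectors, and UPI payment URIs.
from urllib.parse import urlparse

KNOWN_REDIRECTORS = {"bit.ly", "tinyurl.com", "t.co"}  # assumed examples

def classify_payload(payload: str) -> str:
    if payload.lower().startswith("upi://"):
        return "UPI payment request"
    parsed = urlparse(payload)
    if parsed.scheme == "http":
        return "non-SSL link (warn user)"
    if parsed.netloc in KNOWN_REDIRECTORS:
        return "possible redirect to another site (warn user)"
    if parsed.scheme == "https":
        return "SSL link"
    return "unknown payload type"

for p in ["upi://pay?pa=merchant@bank",
          "http://shop.example.com",
          "https://bit.ly/3abcxyz"]:
    print(p, "->", classify_payload(p))
```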

7.8 IMPLEMENTATION
Our solution addresses two problems. The first is the limited efficiency of existing
QR code scanners; the proposed model uses an improved random forest algorithm
to increase accuracy and efficiency. The second is the detection of problematic or
malicious QR codes [6]; because QR code phishing is a big issue nowadays, the
proposed model protects the user during QR scanning by showing a proper security
warning so that the user can make a better decision.

TABLE 7.1
Experimental Results

Sr. No   Algorithm        Existing Accuracy   Proposed System Accuracy
1        KNN              69.61%              71.61%
2        Decision tree    65.83%              76.76%
3        Random forest    70.45%              81.46%

7.9 COMPARATIVE ANALYSIS


In the proposed system, the accuracy of the random forest algorithm is 81.46%,
which is greater than that of other algorithms such as the decision tree classifier
(76.76%) and SVM (72%). One can therefore conclude that the random forest
classifier performs best for the proposed system, as it uses ensemble learning
techniques. The experimental analysis is given in Table 7.1 and Figures 7.7–7.9.

7.10 EXPERIMENTAL RESULTS

FIGURE 7.7 Accuracy of random forest algorithm.



FIGURE 7.8 Accuracy of decision tree algorithm.

FIGURE 7.9 Accuracy of KNN algorithm.



7.11 FUTURE SCOPE


In the future, there is scope to improve the proposed system model for more accu-
rate detection and classification of malicious QR codes by implementing improved
machine learning algorithms. Overall, the emphasis will be on making the scanning
and detection process more efficient, as well as providing the user with more detailed
and accurate information regarding malicious QR codes.

7.12 CONCLUSION
With the help of the proposed model, the detection of malicious QR codes becomes
easy. Suspicious QR codes can be detected, which is helpful for authentication,
digital information, and especially payment transactions. The model provides strong
security, and, most importantly, people can readily adopt it: every time they scan a
QR code, they can check it through the model, keeping the number of digital and
transaction frauds very low.

REFERENCES
1. Kharraz, A., Kirda, E., Robertson, W., uBalzarotti, D., and Francillon, A. 2014. Optical
delusions: A study of malicious QR codes in the wild. In 2014 44th Annual IEEE/IFIP
International Conference on Dependable Systems and Networks, Atlanta, GA, USA,
2014, pp. 192–203, doi: 10.1109/DSN.2014.103.
2. Shaikh, A., Kotavadekar, R., Sawant, S., and Landge, S. 2021. Machine learning-based
solution for the detection of malicious JPEG images. IEEE Access, doi: 10.1109/
ACCESS.2020.2969022.
3. Tkachenko, I., Guichard, C., Strauss, O., Gaudin, J.-M., Puech, W., Senior Member,
IEEE, and Destruel, C. 2016. Two-level QR code for private message sharing and docu-
ment authentication, IEEE transactions on information forensics and security, vol. 11(2).
4. Singh, P. K., Zhu, J., Huo, L., and Pavlovich, P. A. 2021. Research on QR image code
recognition system based on artificial intelligence algorithm, Research on QR image
code recognition system based on artificial intelligence algorithm, vol. 30.
5. Thulasidharan, P. P., and Nair, M. S. 2015. QR code based blind digital picture water-
marking with attack detection code. AEU – International Journal of Electronics and
Communications, 69(7):1074–1084.
6. Kieseberg, P., Leithner, M., Mulazzani, M., Munroe, L., Schrittwieser, S., Sinha, M.,
and Weippl, E. 2010. Security of the QR code. In MoMM’2010 – The Eighth
International Conference on Advances in Mobile Computing and Multimedia, 8–10
November 2010, Paris, France.
7. ISO 24778:2008. Specification for an Aztec Code barcode symbol. Geneva,
Switzerland: ISO.
8. Sharma, V. November 2011. An analytical survey of recent worm attacks. International
Journal of Computer Science and Network Security, 11.
9. Sharma, V. 2011. A study of malicious QR codes. International Journal of Computer
Science and Network Security, 9(5).
10. ISO 16022:2006. Specification for the DataMatrix barcode symbol.
11. Shettar, I. M. 2016. Quick response (QR) codes libraries: Case study on applications in
the NITK Library, National Conference on Future Librarianship, TIFR BOSLA; pages
129, vol 34.

12. Karrach, L., Pivarčiová, E., and Božek, P. 2020. Identification of QR code perspective
distortion based on edge directions and edge projections analysis. Journal of Imaging, 6(7):67.
13. Rajan, R., and Vibhor, S. 2017. Hu invariant moments-based localisation of QR code
patterns by tribal and ZAZ. International Journal of Advanced Computer Science and
Applications(IJACSA), 8(9):162–72.
14. Leutenegger, S., Chli, M., and Siegwart, R. Y. 2011. BRISK: Binary Robust Invariant
Scalable Keypoints. In Proceedings of the IEEE International Conference on Computer
Vision (ICCV). Autonomous Systems Lab, ETH Zurich.
15. Szentandrási, I., Herout, A., and Dubská, M. May 2012. Quick QR code detection
and recognition in high-resolution photos. In The 28th Spring Conference on Computer
Graphics Proceedings. Association for Computing Machinery, New York, US, pp. 129–36.
16. Jain, A., and Chen, Y. 1993. Bar code localization using texture analysis. In Second
International Conference on Document Analysis and Recognition, pp. 41–44.
17. Alfthan, J. 2008. Robust Detection of Two-Dimensional Barcodes in Blurry Images.
Master’s thesis, KTH Computer Science and Communication, Stockholm, SE.
18. Liu, Y., Yang, J., and Liu, M. 2008. Recognition of QR code with mobile phones. In
Chinese Control and Decision Conference, CCDC 2008, 203–206.
19. Mikolajczyk, K., and Schmid, C 2005. A performance evaluation of local descriptors.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10):1615–1630.
20. Dalal, N., and Triggs, B. 2005. Histograms of oriented gradients for human detection.
In IEEE Computer Society Conference on Computer Vision and Pattern Recognition
(CVPR 2005).
21. Arnould, S., Awcock, G., and Thomas, R. 1999. Remote bar-code localisation using
mathematical morphology. In Seventh International Conference on Image Processing
and Its Applications, vol. 2, pp. 642–646.
22. Chang, J. 2014. An introduction to using QR codes in scholarly journals. Science
Editing, 1(2):113–117, doi: 10.6087/kcse.2014.1.113.
23. Muniz, R., Junco, L., and Otero, A. 1999. A robust software barcode reader using
the Hough transform. In International Conference on Information Intelligence and
Systems, 313–319.
24. Duda, R. O., and Hart, P. E. 1972. Use of the Hough transformation to detect lines and
curves in pictures. Communications of the ACM, 15(1):11–15.
25. Dubská, M., Herout, A., and Havel, J. 2011. PClines – Line detection using parallel
coordinates. In Proceedings of CVPR 2011.
26. Haindl, M., and Mikes, S. 2005. Colour texture segmentation using modelling approach.
In: Singh, S., Singh, M., Apte, C., Perner, P. (eds), Pattern Recognition and Image
Analysis. ICAPR 2005. Lecture Notes in Computer Science, vol 3687, pp. 484–491.
Springer, Berlin, Heidelberg. https://doi.org/10.1007/11552499_54
27. Parvu, O., and B, A. 2009. A method for fast detection and decoding of specific 2d
barcodes. In 17th Telecommunications forum TELFOR 2009, 1137–1140.
28. Belussi, L., and Hirata, N. 2011. Fast QR code detection in arbitrarily acquired images.
24th SIBGRAPI Conference on Graphics, Patterns and Images, Sibgrapi 2011, Alagoas,
Maceió, Brazil, August 28–31, 2011.
29. Herout, A., Dubská, M., and Havel, J. 2012. Real-time precise detection of regular grids
and matrix codes. Journal of Real-Time Image Processing, 11, doi: 10.1007/s11554-013-0325-6.
30. Hu, H., Xu, W., and Huang, Q. 2009. A 2D barcode extraction method based on texture
direction analysis. In Fifth International Conference on Image and Graphics, ICIG ‘09.,
pp. 759–762.
8 Integration of Advanced
Technologies for
Industry 4.0
Tanmay Paliwal, Aditya Sikdar, and Zidan Kachhi

8.1 INTRODUCTION
Industry 4.0 is the term used to describe the fourth industrial revolution, charac-
terized by the integration of advanced technologies such as the Internet of Things
(IoT), artificial intelligence (AI), big data analytics, robotics, and automation in
the manufacturing sector. Industry 4.0 aims to create intelligent factories capa-
ble of producing high-quality products with minimal human intervention while
being flexible, efficient, and responsive to customer needs. One of the critical
drivers of Industry 4.0 is the IoT, which refers to the network of physical devices,
sensors, and actuators that are connected to the Internet and can communicate with
each other and with cloud-based services. IoT enables data collection and sharing
across the entire value chain, from product design and development to production
and distribution. IoT also facilitates real-time monitoring and control of the man-
ufacturing process, predictive maintenance, and quality assurance. Another key
driver of Industry 4.0 is AI, which refers to the ability of machines and systems
to perform tasks that usually require human intelligence, such as learning, rea-
soning, and decision-making. AI can enhance the automation and optimization
of the manufacturing process by analyzing large amounts of data, recognizing
patterns, and generating insights. AI can also enable intelligent decision-making
by providing recommendations, suggestions, and solutions based on data-driven
models [1–4].

8.2 INDUSTRY 4.0 FRAMEWORK AND KEY TECHNOLOGIES


Industry 4.0 is based on a framework comprising four main components: cyber-
physical systems (CPS), the IoT, big data analytics, and AI. These components cre-
ate an intelligent manufacturing environment to optimize production and deliver
customer value.

8.2.1 Cyber-Physical Systems
CPS are systems that integrate physical components with computational compo-
nents. This integration allows for real-time communication between the physical
and digital worlds. For example, a CPS can monitor the status of a machine through

sensors, send the data to a cloud service for analysis, and receive instructions from
an AI system to adjust the machine parameters or perform maintenance tasks. CPS
is essential for Industry 4.0, which is the fourth industrial revolution. Industry 4.0
is characterized by using automation, data, and connectivity to create smart facto-
ries. CPS can help manufacturers achieve higher levels of automation, efficiency,
quality, and flexibility in their production processes. They can also enable new
functionalities such as self-configuration, self-optimization, self-diagnosis, self-
healing, and self-learning. CPS are already used in various industries, including
manufacturing, transportation, healthcare, and energy. They have the potential to
revolutionize the way we produce goods and services, and their impact will only
grow in the future [3, 5, 6].
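
The sense–analyze–actuate loop just described can be made concrete with a minimal sketch; read_temperature() and set_spindle_speed() are hypothetical stand-ins for real sensor and actuator interfaces, and the temperature threshold is an illustrative assumption.

```python
# A minimal sketch of a CPS feedback loop: sense a machine parameter,
# apply a simple analysis rule (standing in for a cloud/AI service),
# and actuate an adjustment. All device interfaces are placeholders.
import random
import time

def read_temperature() -> float:          # placeholder sensor read
    return 60 + random.uniform(-5, 25)

def set_spindle_speed(rpm: int) -> None:  # placeholder actuator command
    print(f"actuator: spindle speed set to {rpm} rpm")

NORMAL_RPM, REDUCED_RPM, TEMP_LIMIT = 3000, 1500, 80.0

for _ in range(5):                        # one monitoring cycle per second
    temp = read_temperature()
    print(f"sensor: temperature {temp:.1f} C")
    set_spindle_speed(REDUCED_RPM if temp > TEMP_LIMIT else NORMAL_RPM)
    time.sleep(1)
```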

1. Benefits of CPS
a. Increased visibility and control: Manufacturers can get real-time
visibility into their production processes using CPS. This enables
them to make better judgments about maximizing their outputs and
identifying and fixing issues more rapidly. For instance, CPS can
keep track of the effectiveness and quality of items and machin-
ery and notify the operators if any flaws or defects are found. This
can lower expenses associated with waste, downtime, and rework.
Additionally, CPS may gather and analyze information from various
sources, including sensors, cameras, radio frequency identification
(RFID) tags, and Global Positioning System (GPS) gadgets. This can
assist manufacturers in streamlining their logistics, energy use, and
inventory levels [7, 8].
b. Improved collaboration: In a manufacturing organization, it may make
it possible for various stakeholders and departments to work together.
This may result in more effective information flow and better decision-
making. CPS, for instance, can make coordinating and communicating
easier for teams working on design, engineering, production, mainte-
nance, and customer service. This can shorten product development,
reduce mistakes, and increase client happiness. Collaboration between
manufacturers and their partners, suppliers, and clients can also be
made possible via CPS. This may result in a more responsive and trans-
parent supply chain [7, 8].
c. New business models: CPS may make new business models possible
that were not before. For instance, manufacturers can now provide their
clients with services for predictive maintenance. This means that they
can utilize CPS to check on the health of their products remotely and
do necessary maintenance or repairs before malfunctioning. This can
improve client retention, cut warranty expenses, and open fresh rev-
enue streams. Mass customization is another illustration of a brand-new
business model made possible by CPS. Producers may create products
tailored to each customer’s unique requirements and tastes by using
CPS. This may improve differentiation, competitiveness, and consumer
pleasure [7, 8].

Despite these challenges, the benefits of CPS outweigh the risks. CPS has the
potential to revolutionize the way we produce goods and services. They are already
significantly impacting several industries, and their impact will only grow.

8.2.2 Internet of Things (IoT)


The IoT is a network of physical things that are linked to the Internet and can gather
and share data. This data can be used to enhance the effectiveness, productivity, and
safety of industrial processes. IoT devices range from simple sensors to sophisticated
machinery and communicate with one another and with cloud platforms. Depending
on the application, IoT devices can be remotely controlled or operated automatically.

1. Uses of IoT devices in a variety of manufacturing applications


a. Machine monitoring: IoT devices can monitor the status of machines
and equipment in real time. This data can be used to identify potential
problems before they cause a breakdown. For example, IoT devices can
measure the temperatures, vibrations, pressure, and power consump-
tion of machines and equipment and send alerts if any abnormality is
detected. This can help to prevent damage, reduce waste, and optimize
performance. Machine monitoring can also help to improve worker
safety by detecting hazardous situations and notifying the operators or
managers [9].
b. Predictive maintenance: It can identify potential failure points for
machinery or other equipment. Preventive maintenance can be planned
using this information, preventing expensive downtime. IoT devices
collect data on operating variables as machines and equipment age and
their behavior changes over time. Predictive maintenance then employs
advanced analytics and machine-learning algorithms to spot patterns
and trends. Based on this analysis, predictive maintenance can determine
the remaining useful life of machinery and equipment and suggest the
ideal time for maintenance (see the sketch after this list) [9].
c. Quality control: It can gather information about product quality as it
is being produced. Problems can be found and fixed early in the pro-
duction process using this data. IoT devices, for instance, can utilize
sensors, cameras, or scanners to check a product’s size, appearance,
or performance at various stages of manufacture and compare it to
predetermined criteria. IoT devices can initiate remedial measures or
alert the quality control staff if any deviations or flaws are discovered.
Increasing customer happiness, lowering rework costs, and adhering to
laws can all be accomplished with quality control [10].
d. Logistics: It can track how items and supplies are moved around
throughout production. Using this data, the materials flow may be opti-
mized, and on-time product delivery can be guaranteed. IoT devices,
for instance, can use RFID tags, GPS units, or barcode scanners to
continuously track the position, condition, and stock of resources and
goods. This could lower the cost of inventory, prevent delays or losses,

and enhance customer service. Logistics throughout the supply chain


can also facilitate coordination with suppliers, partners, and clients [11].
e. Increased visibility: It gives producers immediate access to informa-
tion on their manufacturing processes. This enables them to make
better judgments about maximizing their outputs and identifying
and fixing issues more rapidly. IoT devices, for instance, can gather
and send data from various manufacturing-related sources, including
tools, machinery, goods, personnel, and the environment. Dashboards
or reports that display key performance indicators (KPIs), such as
output, quality, efficiency, or utilization, can be used to visualize this
data. Increased visibility can thus enhance operational intelligence
and agility [12].
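
Returning to the predictive maintenance point above, the following is a minimal sketch of the remaining-useful-life idea: fit a trend to a degradation signal and project when it crosses a failure threshold. The vibration data and threshold are synthetic illustrations, not a validated maintenance model.

```python
# A minimal sketch: linear-trend extrapolation of a degradation signal
# (e.g., bearing vibration) to estimate remaining useful life (RUL).
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(0, 200, 10.0)                     # operating hours logged
vibration = 2.0 + 0.015 * hours + rng.normal(0, 0.05, hours.size)
FAILURE_LEVEL = 6.0                                 # mm/s RMS, assumed limit

slope, intercept = np.polyfit(hours, vibration, 1)  # fit the degradation trend
hours_to_failure = (FAILURE_LEVEL - intercept) / slope
print(f"Projected failure at ~{hours_to_failure:.0f} h "
      f"(remaining life ~{hours_to_failure - hours[-1]:.0f} h)")
```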

Another challenge is the integration of IoT data with existing systems: IoT data
can be generated from various sources, and it can be difficult to integrate this data
with existing systems. Despite the challenges, the benefits of using IoT in manu-
facturing outweigh the risks. IoT has the potential to revolutionize the way we pro-
duce goods and services. It has already had a significant impact on several industries,
and its impact will only grow in the future.

8.2.3 Big Data Analytics


Big data analytics mines large and complicated datasets for value, which in turn
drives decision-making, optimization, innovation, and customer satisfaction. The
fourth industrial revolution, known as Industry 4.0, depends heavily on big data
analytics. Industry 4.0 incorporates digital technologies into production, including
cloud computing, AI, the IoT, and CPS, and big data analytics allows manufacturers
to use these technologies to their advantage and obtain a competitive edge [13].

1. Uses of big data analytics


a. Demand forecasting: It can be used to forecast future product demand.
Utilizing this data will improve inventory control and production plan-
ning. Big data analytics is used in demand forecasting to examine past
sales data, industry trends, consumer behavior, and external factors like
weather, seasonality, or events. Based on this study, demand forecast-
ing can produce precise and timely projections of future demand for
various items, regions, or market segments. This can assist manufac-
turers in improving customer service, lowering inventory costs, avoid-
ing stockouts or overstock, and aligning their production capacity with
anticipated demand (a small forecasting sketch follows this list) [14].
b. Inventory management: It can monitor inventory levels and spot issues
like stockouts or excess inventory. In inventory management, big data
analytics continuously tracks the availability and caliber of raw mate-
rials, works-in-progress, and finished commodities. Big data analyt-
ics also improve safety stocks, reorder points, and other inventory
replenishment procedures. Inventory management, which uses big data

analytics, can assist firms in maintaining ideal inventory levels, lower-


ing waste and obsolescence, boosting inventory turnover, and improv-
ing product availability [15].
c. Supply chain optimization: It can streamline the movement of goods
and commodities along the supply chain. Costs can be cut, and pro-
ductivity can be increased as a result. Optimization uses big data ana-
lytics to gather and combine data from several supply chain sources,
including suppliers, manufacturers, distributors, retailers, and custom-
ers. In addition, big data analytics is used to solve complicated supply
chain issues by applying cutting-edge optimization methods, including
genetic algorithms, mixed-integer programming, and linear program-
ming. These issues include demand fulfillment, inventory allocation,
supplier selection, production scheduling, and transportation planning.
Supply chain optimization can assist manufacturers in enhancing cus-
tomer satisfaction, lowering lead times, and increasing service levels
through big data analytics [16].
d. Fraud detection: It can be used to identify fraudulent behavior, including
insurance fraud and credit card fraud. In fraud detection, big data ana-
lytics finds abnormalities, patterns, or behaviors that point to fraudulent
claims or transactions. Big data analytics also confirm the legitimacy of
partners, suppliers, and clients. Big data analytics, for instance, can be
used in fraud detection to examine transactional data, such as quantity,
frequency, or location, to identify odd or suspicious transactions. To con-
firm a person’s identity or creditworthiness, it is also possible to employ
big data analytics to compare customer or supplier profiles with those of
external databases like social media or credit bureaus. Fraud detection
may help manufacturers avoid financial losses, safeguard their reputa-
tion, and adhere to rules by leveraging big data analytics [17].
e. Risk management: It can be used to assess and manage risks such as finan-
cial risks or operational risks. Big data analytics is used in risk manage-
ment to quantify and track the exposure to and effects of different hazards
on manufacturing operations. Big data analytics is also used to assess and
put into practice risk mitigation techniques, including hedging, diversi-
fication, and contingency planning. For instance, risk management can
estimate the financial risks of currency swings or price volatility by using
big data analytics to analyze market data such as exchange rates, interest
rates, or commodity prices. Data analytics can also estimate the opera-
tional risks of equipment failure or product flaws by analyzing operational
data such as machine performance, product quality, or safety events.
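
As a concrete illustration of the demand forecasting use in point (a) above, the following is a minimal sketch using simple exponential smoothing; the monthly demand figures are synthetic examples, not real sales data.

```python
# A minimal sketch: one-step-ahead demand forecast with simple
# exponential smoothing over synthetic monthly sales data.
monthly_demand = [120, 132, 128, 141, 150, 147, 158, 163]

def exp_smooth_forecast(series, alpha=0.4):
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level  # blend new data with history
    return level  # the smoothed level is the next-period forecast

print(f"Next-month forecast: {exp_smooth_forecast(monthly_demand):.0f} units")
```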
2. Benefits of using big data analytics in manufacturing
a. Improved decision-making: Big data analytics can aid in better deci-
sion-making by giving manufacturers insights into their data. This can
boost revenue, lower expenses, and boost client happiness. By examin-
ing the demand, supply, and competition for their products, big data
analytics, for instance, can assist businesses in optimizing their pric-
ing strategy. They can use this information to determine the pricing

to maximize their profits and market share. By examining client com-


ments, preferences, and behavior, big data analytics can assist manufac-
turers in bettering the design and development of their products. They
may be able to produce goods as a result that meet or surpass consumer
requirements and expectations [7, 8, 18].
b. Increased efficiency: It can assist producers in locating and getting rid
of processing inefficiencies. This may result in lower expenses and more
production. By examining the energy demand and utilization of their
machinery and equipment, for instance, firms can reduce the amount of
energy they use. They may be able to lower their energy expenses and
carbon footprint. By examining the processing parameters and results,
big data analytics can assist manufacturers in increasing the quality
and dependability of their processes. They can use this to identify and
stop flaws, mistakes, or failures. By analyzing the processing variabil-
ity and uncertainty, big data analytics may also assist manufacturers
in improving their processing flexibility and agility. This can facilitate
their quick and effective customer-change adaptation [7, 8, 18].
c. Improved customer satisfaction: It can aid businesses in better under-
standing their clients and giving them a better experience. Increased con-
sumer loyalty and repeat business may result from this. For instance, by
examining client personas, segments, and profiles, big data analytics can
assist manufacturers in personalizing their goods or services. This can
assist them in providing specialized or personalized solutions that meet
the demands and preferences of the customer. By examining consumer
contacts, feedback, and complaints, big data analytics can assist manu-
facturers in improving their customer service and support. They may
be able to solve problems more quickly and successfully. By examining
customer happiness, loyalty, and advocacy, big data analytics can help
manufacturers boost their customer retention and acquisition [7, 8, 18].

The adoption of big data analytics in manufacturing is still in its early stages, but it
is growing. The benefits of big data analytics are clear, and manufacturers who adopt
big data analytics are likely to see significant improvements in their performance.

8.2.4 Artificial Intelligence (AI)


The field of AI, which is constantly developing, significantly impacts the manufac-
turing sector. AI can streamline production procedures, enhance decision-making,
and automate operations. AI can also adapt to shifting circumstances and settings
by learning from data and experience. AI can be used in manufacturing for several
purposes, including:

1. Uses of AI in a variety of applications


a. Predictive maintenance: AI can anticipate when machinery or other
equipment will break down. Preventive maintenance can be planned
using this information, preventing expensive downtime. AI is used in
predictive maintenance to gather and examine data from sensors, cam-
eras, and other devices that monitor the health and efficiency of machin-
ery and equipment. AI also applies machine-learning algorithms that can find patterns, trends, or anomalies in these data. Predictive
maintenance can determine the remaining useful life of machinery and
equipment based on this study and suggest the ideal time for mainte-
nance. AI can also be used in predictive maintenance to streamline the
maintenance process, such as choosing the optimal approach, planning
activities, or assigning resources [7, 8, 18]. A minimal anomaly-detection sketch follows this list.
b. Quality control: AI can be used to check for flaws in items. This may
contribute to a decrease in recalls and an improvement in product qual-
ity. AI is used in quality control to automate inspection tasks, including
taking pictures of products, finding flaws, and categorizing them. AI is
also used to improve the inspection process, for example, by increasing
the consistency, speed, or accuracy of flaw detection. Quality control can
also employ AI to learn from inspection data to update defect detection
models, give production process feedback, or produce quality reports.
Manufacturers may ensure product compliance, customer satisfaction,
and brand reputation by employing AI in quality control [7, 8, 18].
c. Product design: AI can be utilized to create brand-new goods that are more
effective, user-friendly, and efficient. AI is used in product design to help
create design concepts, assess design options, and optimize design param-
eters. Additionally, it leverages AI to improve the design process by fos-
tering designers’ originality, creativity, and teamwork. AI can be used in
product design to incorporate user feedback, enhance design performance,
and develop new design features by learning from the design process out-
comes. Product design may assist producers in producing goods that meet
or surpass consumer expectations and needs by utilizing AI [7, 8, 18].
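As referenced in item (a), the following is a minimal sketch of the anomaly-detection step behind predictive maintenance; it assumes the scikit-learn library is installed, and the sensor readings are hypothetical:

from sklearn.ensemble import IsolationForest

# Hypothetical machine readings, one row per time step:
# [vibration (mm/s), temperature (deg C)].
readings = [
    [0.31, 62.0], [0.29, 61.5], [0.33, 62.4], [0.30, 61.8],
    [0.32, 62.1], [1.45, 78.9],  # the last row simulates a fault
]

# Fit an isolation forest; contamination is the expected fault rate.
model = IsolationForest(contamination=0.15, random_state=42)
labels = model.fit_predict(readings)  # -1 = anomaly, 1 = normal

for row, label in zip(readings, labels):
    status = "ANOMALY - schedule maintenance" if label == -1 else "normal"
    print(row, status)

In practice, the flagged readings would feed a remaining-useful-life model and a maintenance scheduler rather than a print statement.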

AI is a powerful tool that can help manufacturers to improve their performance and competitiveness. However, it is essential to note that AI is not a silver bullet. Manufacturers need a clear understanding of the problem they are trying to solve before deploying AI, along with the right data and the right tools, to be successful.

8.2.5 Robotics and Automation


Technologies like robotics and automation are used to automate processes in produc-
tion. Robotics is the study of the design, manufacture, and use of machines that can
move, manipulate, or sense physical objects. Automation uses tools such as software,
hardware, or systems to carry out activities automatically. Automation and robotics
can be used in tandem to design effective production procedures that boost output,
quality, safety, and innovation.

1. Uses of robotics and automation to perform a variety of tasks


a. Picking and placing: Robots are capable of picking and placing objects
precisely. This is helpful for jobs like product assembly or product
packaging. Robots can, for instance, pick up components or items from
a conveyor belt or a bin and arrange them in the proper orientation or
position on another belt or a pallet using grippers, suction cups, or mag-
nets. Robots can also recognize the objects they pick up or place using
sensors, vision systems, or barcode scanners. Manufacturing companies can improve their manufacturing lines' efficiency, precision, and adaptability by using pick-and-place robots [13, 18]. A minimal sorting sketch follows this list.
b. Inspection: It can be used to look for flaws in products. This may con-
tribute to a decrease in recalls and an improvement in product qual-
ity. Robots can check products for faults like cracks, scrapes, dents, or
holes using cameras, lasers, or ultrasonic devices. Robots can also ana-
lyze inspection data using AI to categorize the problems based on their
nature or severity. Manufacturers can use inspection robots to guaran-
tee product compliance, client happiness, and brand reputation [13, 18].
c. Testing: It can be used to check the functionality of products. This can
ensure that goods live up to consumer expectations. For instance, robots
can evaluate performance, dependability, or durability using mechani-
cal, electrical, or environmental devices. Robots can also compare test
results to specifications or standards using AI and offer feedback or
suggestions for improvement. Robotic testing can assist manufacturers
in improving product quality, decreasing warranty costs, and boosting
customer loyalty [13, 18].
d. Packaging: It can be applied to packaging goods for shipping. Cost sav-
ings and efficiency gains may result from this. For instance, robots can
package goods in boxes, bags, or containers using wrappers, sealers, or
labels. Robots can also utilize sensors, visual systems, or scales to con-
firm the weight, size, or shape of the packages. Manufacturers can use
packaging robots to maximize their space, time, and materials [13, 18].
e. Welding: It can be applied to join components. This may aid in enhanc-
ing weld quality and lowering the possibility of worker injuries. For
instance, robots can join metal parts using arc, spot, or laser weld-
ing procedures. Using sensors, controls, or vision systems, robots
can also change welding parameters like speed, current, or pressure.
Manufacturing companies may improve the quality, consistency, and aesthetics of their welds using welding robots [13, 18].
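As referenced in item (a), the routing logic of a pick-and-place cell can be sketched as a toy simulation in plain Python; the product codes and pallet names are hypothetical:

# Hypothetical items arriving on a conveyor, identified by barcode prefix.
conveyor = ["A-1001", "B-2044", "A-1002", "C-3090", "B-2045"]

# Destination pallet for each product family.
routing = {"A": "pallet_left", "B": "pallet_right", "C": "reject_bin"}

pallets = {dest: [] for dest in routing.values()}

for barcode in conveyor:
    family = barcode.split("-")[0]          # simulated barcode scan
    destination = routing.get(family, "reject_bin")
    pallets[destination].append(barcode)    # simulated place action

for destination, items in pallets.items():
    print(destination, items)

A real cell replaces the simulated scan and place actions with vision-system and gripper commands, but the decision logic is essentially this lookup.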
2. AI can enable robots to
a. Perceive their environment: Robots can benefit from AI to see, hear,
and feel their environment. They can decide how to engage with their
environment more effectively. Robots can utilize computer vision to
recognize objects, faces, and gestures, for instance, with AI. They can use this to locate target items, avoid hazards, or interact with people. Robots can
also benefit from AI by using speech recognition to comprehend spoken
instructions or queries. They can use this to follow directions, respond
to inquiries, or give comments. Robots can benefit from AI by using
haptic sensors to detect temperature, force, and touch. They can use
this to control items, change their grasp, or prevent damage [7, 18, 19].
b. Plan their actions: Robots can use it to organize their actions. They
can be more effective and less prone to errors as a result. With the aid of AI, robots can utilize path planning, for instance, to identify the best route from one place to another (see the sketch after this list). They may be able to save time,
energy, or resources. Robots can utilize task planning to break down a
problematic activity into smaller, more straightforward tasks with the
aid of AI. This can assist them in carrying out the task in a systematic
and orderly manner. Robots can also benefit from AI by using con-
tingency planning to foresee and deal with unforeseen circumstances.
This can assist them in adapting to setbacks and recovering from mis-
takes [7, 18, 19].
c. Coordinate with other robots or humans: Robots can use AI to help
them coordinate their behaviors with those of humans or other robots.
They can collaborate more successfully as a result. For instance, AI can enable robots to use multi-agent systems to collaborate with other robots in a shared environment. This can help them achieve a shared objective, like assembling a product or clean-
ing a space. AI can also assist robots in collaborating socially with
people through human-robot interaction. They can then modify their
behavior due to a better understanding of human emotions, intents,
and preferences. Robots can utilize swarm intelligence to mimic the
group behavior of natural systems like ants, bees, or birds with the
use of AI [7, 18, 19].
d. Learn from their outcomes: It can help robots to learn from their mis-
takes and improve their performance over time. Robotics and automa-
tion are essential for Industry 4.0 because they can help manufacturers
reduce labor costs, increase productivity, improve quality, and meet
customer expectations. Robotics and automation can also enable new
possibilities, such as collaborative robots, autonomous vehicles, and
additive manufacturing. Collaborative robots, also known as cobots,
can work safely alongside humans. This makes them ideal for tasks
that require a combination of human and machine skills. For example,
cobots can assist human workers with assembly tasks or perform dan-
gerous tasks that would be unsafe for humans. Autonomous vehicles
are vehicles that can operate without human input. This technology
is still in its early stages, but it has the potential to revolutionize the
way we transport goods and people. For example, autonomous vehi-
cles could deliver goods to customers or transport workers to and from
work. Additive manufacturing, also known as 3D printing, is a process
that uses a computer-controlled machine to create three-dimensional
objects from a digital file. This technology is becoming increasingly
popular in manufacturing because it allows manufacturers to create
complex parts with less waste and time. The adoption of robotics and
automation in manufacturing is growing. The benefits of these tech-
nologies are clear, and manufacturers who adopt them will likely see
significant performance improvements [20–23].
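As referenced in item (b), path planning can be illustrated with a breadth-first search over a toy grid map; real robots use richer planners (for example, A* with kinematic constraints), so this is only a minimal sketch:

from collections import deque

# 0 = free cell, 1 = obstacle; a toy map of a shop floor.
grid = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def shortest_path(start, goal):
    """Breadth-first search returning the shortest obstacle-free path."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no route found

print(shortest_path((0, 0), (3, 3)))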

8.3 BENEFITS AND ADVANTAGES OF TECHNOLOGY INTEGRATION

Integrating advanced technologies such as IoT, AI, big data analytics, robotics, and
automation can bring significant benefits and advantages to the manufacturing sector
in Industry 4.0. Some of these benefits and advantages are:

8.3.1 Higher Operational Effectiveness and Productivity


Technology integration can help manufacturers achieve higher operational effective-
ness and productivity by monitoring, controlling, and optimizing their production
processes in real-time. Technology integration can also help manufacturers increase
their output, reduce waste, and improve quality by using data-driven insights and
intelligent decision-making [7, 18, 19].

8.3.2 Cost Reduction through Optimized Resource Allocation


Technology integration can help manufacturers reduce their costs by enabling them
to allocate their resources more efficiently and effectively. Technology integration
can also help manufacturers save energy, materials, and time by using predictive
maintenance, quality assurance, and demand forecasting [7, 18, 19].

8.3.3 Smooth Production Procedures and Real-Time Data-Driven Insights

Technology integration can help manufacturers achieve smooth production proce-
dures by enabling them to coordinate and synchronize their activities across the
value chain. Technology integration can also help manufacturers gain real-time
data-driven insights by enabling them to collect, share, and analyze data from vari-
ous sources, such as machines, products, customers, and suppliers [7, 18, 19].

8.3.4 Adaptability to Changing Demands through Flexible Manufacturing Lines

Technology integration can help manufacturers adapt to changing demands by creat-
ing flexible manufacturing lines that produce different products with minimal setup
time. Technology integration can also help manufacturers customize their products
according to customer preferences using product configuration, personalization, and
recommendation systems [7, 18, 19].

8.4 IoT AND AI: TRANSFORMING THE INDUSTRY


IoT is a crucial enabler of Industry 4.0, the fourth industrial revolution. Industry 4.0
is characterized by using automation, data, and connectivity to create smart facto-
ries. IoT is essential for intelligent factories because it gives manufacturers the data
they need to improve their processes. The adoption of IoT in manufacturing is still
in its early stages, but it is growing. The benefits of IoT are clear, and manufacturers
who adopt IoT are likely to see significant improvements in efficiency, productivity,
and quality. They can also achieve higher automation, optimization, and innovation
levels in their production processes [19, 24, 25].
Intelligent factories integrating IoT and AI can dramatically increase performance, efficiency, and creativity. Intelligent factories, also known as smart factories, are a hallmark of Industry 4.0, the fourth industrial revolution propelled by digital technologies. IoT devices are used in intelligent factories to gather and send
data from various sources during the manufacturing process, including machines,
equipment, products, employees, and the environment. They also employ AI to
analyze and understand the data to deliver insights, ideas, or actions that can
optimize the production process. There are several uses for intelligent factories,
including:

1. Uses of smart factories


a. Reducing costs: Automation of jobs and production optimization are
two ways smart factories can cut expenses. For instance, robots can be
used in intelligent factories to carry out monotonous or risky activities
like selecting and placing, welding, or packing. This can save labor
costs, boost output, and lessen human error. AI can be used in intelligent factories to optimize logistics, inventory control, and energy
usage. This may lower their waste production, pollution, and operating
expenses [19, 24, 25].
b. Improved quality: Intelligent factories can improve quality by spotting issues early through monitoring of equipment and procedures (see the sketch after this list). For instance, intelligent factories may check product quality while products are being created
using sensors, cameras, or scanners. They can utilize AI to find errors
or irregularities in the data and start warnings or remedial actions.
Customer satisfaction, brand reputation, and product compliance can
all be enhanced as a result [19, 24, 25].
c. Increased flexibility: Smart factories can quickly switch between
producing multiple goods, enabling firms to adapt to shifting con-
sumer demands. For instance, modular, adaptive, or reconfigurable
machinery and equipment that can transition between various jobs
or functions can be used in smart factories. Additionally, they can utilize AI to modify production factors like volume, pace, or diversity in line with market or customer demand. Their product
variety, customization, and competitiveness may all benefit from
this [19, 24, 25].
d. Innovate faster: By using data and intelligence to make better deci-
sions, intelligent factories can innovate faster and bring new products
to market more quickly. Adopting IoT and AI is essential for man-
ufacturers who want to remain competitive in the Industry 4.0 era.
These technologies are transforming the manufacturing industry, and
manufacturers who embrace them will be well-positioned for success
[26–29].
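As referenced in item (b), the monitor-and-alert pattern at the heart of smart-factory quality control can be sketched in a few lines of plain Python; the specification limits and measurements are hypothetical:

# Hypothetical quality specification for a machined part (mm).
SPEC_MIN, SPEC_MAX = 49.95, 50.05

# Simulated in-line measurements streaming from a sensor.
measurements = [50.01, 50.02, 49.98, 50.09, 50.00, 49.91]

for i, value in enumerate(measurements, start=1):
    if not SPEC_MIN <= value <= SPEC_MAX:
        # In a real factory this would trigger an alert or a
        # corrective action on the production line.
        print(f"Part {i}: {value} mm OUT OF SPEC - trigger alert")
    else:
        print(f"Part {i}: {value} mm ok")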

8.4.1 Case Studies Showcasing Successful Integration of IoT Solutions


IoT solutions have been successfully integrated into various industries and sectors,
such as automotive, aerospace, healthcare, and agriculture.

1. How IoT has transformed the industry in Industry 4.0


a. Automotive: The IoT enables the development of connected and autono-
mous vehicles that can communicate with each other and the infrastruc-
ture and provide enhanced safety, comfort, and convenience for drivers
and passengers. For example, Tesla is a leading company in electric and
self-driving cars that uses IoT to collect data from its vehicles, update
their software, and improve performance. Tesla also uses IoT to provide
remote diagnostics, maintenance, and customer service [30, 31].
b. Aerospace: It enables the creation of intelligent aircraft that can moni-
tor their health, performance, and environment and optimize their fuel
consumption, flight routes, and maintenance schedules. For example,
Boeing is a leading commercial and military aircraft company that uses
IoT to collect data from its planes, analyze it in real-time, and provide
insights and recommendations for its pilots, engineers, and operators.
Boeing also uses IoT for predictive maintenance, quality assurance, and
customer satisfaction [30, 31].
c. Healthcare: The IoT enables personalized and preventive healthcare
services to improve patients’ quality of life, wellbeing, and outcomes.
For example, Philips is a leading company in the field of health tech-
nology that uses IoT to provide connected devices, such as wearable
sensors, smart inhalers, and smart pills that can monitor patients’ vital
signs, medication adherence, and health conditions. Philips also uses
IoT to provide cloud-based platforms, such as HealthSuite, that can
store, share, and analyze data from various sources, such as electronic
health records, medical images, and genomic data, and provide insights
and solutions for patients, caregivers, and healthcare providers [30, 31].

8.4.2 Leveraging AI for Advanced Analytics in the Manufacturing Process

AI is a technology that enables machines and systems to perform tasks that usu-
ally require human intelligence, such as learning, reasoning, and decision-making.
AI can enhance the automation and optimization of the manufacturing process by
analyzing large amounts of data, recognizing patterns, and generating insights. AI
can also enable intelligent decision-making by providing recommendations, sugges-
tions, and solutions based on data-driven models. It has had a transformative impact
on Industry 4.0 because it allows manufacturers to create intelligent systems that
can adapt to changing conditions and demands, learn from their own experiences,
and improve their performance over time. AI can also enable new capabilities, such
as cognitive computing, machine learning, natural language processing, computer
vision, and speech recognition [7, 8].

1. Applications of AI in Industry 4.0


a. Adaptive manufacturing
AI can be used to create adaptive manufacturing systems that adjust
their parameters and operations according to feedback from sensors or
customers. This allows manufacturers to create more efficient, flexible,
and responsive production systems. For example, Siemens is a leading
company in industrial automation that uses AI to create self-optimiz-
ing production systems. These systems can learn from their data and
improve efficiency, quality, and flexibility. Siemens also uses AI to cre-
ate digital twins, virtual simulations of physical systems [19].
Before they are used in the real world, digital twins can test, validate,
and optimize production systems. Digital twins are computer-generated
simulations of real-world systems or things that are updated in real time
and use logic, machine learning, and simulation to aid decision-making.
Manufacturers can use digital twins to make a digital replica of their
production system and run different scenarios to assess performance,
spot potential problems, and make adjustments [32].
The following are some advantages of implementing AI in adaptive
manufacturing systems:
i. Improved efficiency
Adaptive manufacturing systems can automatically change some settings to increase production. This can significantly increase efficiency, particularly in high-volume manufacturing.
The speed, temperature, pressure, or flow of the production process,
for instance, can be monitored and controlled by adaptive manufac-
turing systems using AI, and they can be optimized based on prod-
uct specifications, quality standards, or environmental conditions.
This can improve output and profitability while lowering waste,
energy use, and emissions [33].
ii. Enhanced quality
Real-time monitoring and management of production processes
are possible. This can aid in early problem detection and solution,
resulting in higher product quality. For instance, adaptive manufac-
turing systems can utilize AI to scan items with sensors, cameras,
or scanners for flaws or anomalies. Additionally, they can employ
AI to examine the inspection data and start warnings or corrective
actions when problems are found. This can support maintaining
brand reputation, customer happiness, and product compliance [34].
iii. Reduced costs
Automation of processes, increased effectiveness, and production
optimization can all contribute to cost savings. AI, for instance, can
be used in adaptive manufacturing systems to automate processes
like selecting and putting, welding, and packaging. Additionally,
they can employ AI to enhance their logistics, maintenance, and
inventory control. Costs of labor, operation, and maintenance may
all be decreased as a result [35].
b. AI algorithms
AI programs can handle challenging tasks, including analysis, pattern
recognition, and prediction. As a result, producers can improve their
products and services and make better operational decisions. As an
illustration, IBM is a pioneer in cognitive computing and offers Watson, an AI platform. With the ability to handle audio, images, and other types of data, the platform can offer insights and solutions for a variety of fields and sectors [7, 19, 36].
Manufacturers can use Watson for the following purposes:
i. Analyze data
Watson can use big data analytics to examine historical and cur-
rent data from industrial processes, including output, quality, effi-
ciency, and utilization. Watson may also employ natural language
processing to examine text data, including reviews, complaints, and
consumer feedback. Watson can assist manufacturers in improv-
ing their decision-making by using data analysis to provide insights
into their operations, goods, and customers [7, 19, 36].
ii. Recognize patterns
It helps find patterns in data. This can be used to find issues,
streamline procedures, and develop forecasts. Watson, for
instance, may utilize machine learning to identify patterns of
flaws or faults in goods or procedures and recommend fixes or
enhancements. Additionally, Watson may utilize deep learn-
ing to identify trends in client behavior or preferences and provide tailored goods or services. Using computer vision, Watson
can interact with humans and robots by recognizing patterns in
things, faces, or movements. Watson can assist manufacturers in
problem-solving, process optimization, and prediction by seeing
trends [7, 19, 36].
iii. Make predictions
It can be used to anticipate what will happen in the future. This
can be applied to production planning, inventory optimization, and
customer demand fulfillment. Watson, for instance, may forecast
future product demand using predictive analytics based on market
trends, seasonality, or events. In addition, Watson may employ pre-
scriptive analytics to suggest the best courses of action or fixes in
light of expected outcomes. Watson can also employ reinforcement
learning to learn from its own actions and results and gradually
improve. Watson can assist manufacturers in production planning,
inventory optimization, and meeting consumer demand by making
predictions [7, 19, 36].
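As referenced in item (iii), demand forecasting can be reduced to a minimal sketch: fitting a trend line to past sales and extrapolating it. This is not how Watson works internally; it only illustrates the idea, using the Python standard library (statistics.linear_regression requires Python 3.10 or later) and hypothetical figures:

from statistics import linear_regression

# Hypothetical monthly demand for a product (units sold).
months = [1, 2, 3, 4, 5, 6]
demand = [120, 135, 150, 160, 178, 190]

# Fit a simple trend line: demand ~ slope * month + intercept.
slope, intercept = linear_regression(months, demand)

# Forecast the next two months from the fitted trend.
for month in (7, 8):
    forecast = slope * month + intercept
    print(f"Month {month}: forecast {forecast:.0f} units")

Production-grade forecasting would add seasonality, promotions, and external market signals, but the extrapolation principle is the same.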

The adoption of AI in manufacturing is still in its early stages, but it is growing. The benefits of AI are clear, and manufacturers who adopt AI are likely to see sig-
nificant improvements in their performance.

8.4.3 AI-Driven Predictive Maintenance and Potential Production Problem Identification

AI enables manufacturers to use predictive maintenance techniques to help them
prevent breakdowns, malfunctions, and failures in their machines and equipment
by using data from sensors or historical records to predict when they need main-
tenance or repair. AI also enables manufacturers to identify potential production
problems, such as defects, errors, or deviations by using data from cameras or other
sources to detect anomalies or outliers. AI-driven predictive maintenance and
potential production problem identification can help manufacturers reduce their
downtime, costs, and risks by enabling them to take proactive actions before prob-
lems occur or escalate. AI-driven predictive maintenance and potential production
problem identification can also help manufacturers improve their quality, reliability,
and customer satisfaction by ensuring that their products meet the standards and
expectations [7].
Some examples of how AI-driven predictive maintenance and potential produc-
tion problem identification are used in Industry 4.0 are:

1. Bosch: Bosch is a leading company in engineering and technology that uses AI to provide predictive maintenance solutions that can monitor the condi-
tion and performance of machines and equipment and provide alerts and
recommendations for maintenance or repair. Bosch also uses AI to provide
anomaly detection solutions to identify defects or errors in products or pro-
cesses and provide feedback and correction [37].
2. General Electric (GE): GE is a leading company in the field of industrial
innovation that uses AI to provide Predix. This platform can collect and
analyze data from industrial assets and provide insights and solutions for
predictive maintenance and optimization. GE also uses AI to provide Digital Ghost, a solution that can create digital replicas of physical assets and use
them to detect and prevent cyberattacks or physical damage [38].

8.5 INDUSTRY 4.0 FRAMEWORKS AND ADAPTIVE SUPPLY CHAINS

Industry 4.0 frameworks are models or guidelines that can help manufacturers
implement Industry 4.0 transformation in their production processes. Industry 4.0
frameworks can provide a vision, a strategy, a roadmap, and a set of best practices
for achieving Industry 4.0 goals and objectives. It can vary depending on the con-
text, the scope, and the manufacturers' perspective. However, some common elements usually included in Industry 4.0 frameworks are: the definition and description of Industry 4.0 and its components; the identification and assessment of the current and desired states of the production process; the formulation and prioritization of the goals and objectives of the Industry 4.0 transformation; the selection and integration of the appropriate technologies and solutions; and the measurement and evaluation of the performance and impact of the Industry 4.0 transformation [39, 40].

Some examples of world-leading Industry 4.0 frameworks are as follows:

1. Industrie 4.0: Industrie 4.0 is a framework developed by the German government and industry associations with the aim of creating smart factories
that can produce customized products with high efficiency and quality by
using CPS, IoT, AI, and other technologies. Industrie 4.0 defines six design
principles for intelligent factories: interoperability, virtualization, decentral-
ization, real-time capability, service orientation, and modularity [39, 40].
2. Made in China 2025: Made in China 2025 is a framework developed by the
Chinese government that aims to transform China into a global leader in man-
ufacturing innovation by using IoT, AI, big data analytics, and other technolo-
gies. Made in China 2025 defines ten critical sectors for development: new
information technology, high-end numerical control machinery and robotics,
aerospace equipment, marine engineering equipment and high-tech ships,
advanced rail transport equipment, energy-saving vehicles and new energy
vehicles, electrical equipment, agricultural machinery equipment, new mate-
rials, and biomedicine and high-performance medical devices [39, 40].
3. Smart Manufacturing Leadership Coalition (SMLC): Smart Manufacturing
Leadership Coalition (SMLC) is a framework developed by a consortium
of US industry, academic, and government organizations aiming to create
intelligent manufacturing systems that can optimize production by using
IoT, AI, big data analytics, and other technologies. SMLC defines five
critical capabilities for intelligent manufacturing systems: sensing, control,
modeling, analysis, and decision-making [39, 40].

8.5.1 Adaptive Supply Chains


Adaptive supply chains are supply chains that can adjust their operations according
to the changes in the market, customer, or environmental conditions. Adaptive sup-
ply chains can use IoT solutions and edge components to enable seamless commu-
nication, coordination, and collaboration among the various actors involved in the
supply chain, such as suppliers, manufacturers, distributors, retailers, and customers.
Adaptive supply chains are essential for Industry 4.0 because they can help manu-
facturers achieve higher levels of responsiveness, agility, efficiency, and customer
satisfaction by enabling them to [19]:

i. Monitor the demand and supply signals in real time.
ii. Optimize the inventory levels and the logistics routes.
iii. Coordinate the production schedules and delivery times.
iv. Customize the products according to customer preferences.
v. Track the product quality and customer feedback.

Some examples of how IoT solutions and edge components are used for adaptive
supply chains are:
1. RFID tags: RFID tags are devices that can store information, such as prod-
uct specifications, history, location, and status, and transmit it to RFID
readers through radio waves. RFID tags can help manufacturers track their
products throughout the supply chain, from raw materials to finished goods, and optimize their inventory management, quality control, and customer service [41, 42]. A minimal tracking sketch follows this list.
2. Smart containers: Smart containers are equipped
with sensors, actuators, GPS devices, and cameras that can monitor their
environment, such as temperature, humidity, and pressure as well as their
location, movement, and security, and communicate with cloud services or
edge devices through wireless networks. Smart containers can help manu-
facturers ensure their product’s safety, quality, and efficiency during trans-
portation and optimize logistics operations [41, 42].
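As referenced in item 1, the tracking idea behind RFID can be sketched in plain Python: each read event updates a product's last known location. The tag IDs, reader locations, and timestamps are hypothetical:

# Hypothetical RFID read events: (tag_id, reader_location, timestamp).
events = [
    ("TAG-001", "raw-material-store", "08:02"),
    ("TAG-002", "raw-material-store", "08:05"),
    ("TAG-001", "assembly-line-3", "09:41"),
    ("TAG-001", "finished-goods-dock", "13:17"),
]

last_seen = {}
for tag, location, time in events:
    last_seen[tag] = (location, time)  # the latest read wins

for tag, (location, time) in last_seen.items():
    print(f"{tag}: last seen at {location} ({time})")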

8.5.2 Edge Computing

A distributed computing paradigm known as edge computing moves computation
and data storage closer to the network’s edge or the location where data is generated.
As a result, applications’ efficiency, dependability, and security may all be enhanced.
Additionally, edge computing can enable brand-new applications like augmented
reality, self-driving cars, and smart cities that are not viable or practical with cloud
computing [43, 44].

1. In manufacturing, edge computing can be used in the following ways


i. Reduce latency: Edge computing can lower latency by processing data
closer to the point of data production. This is crucial for applications
that demand instantaneous action, such as quality control and predictive
maintenance. Predictive maintenance monitors the health and function-
ality of machines and equipment and looks for any indications of failure
or deterioration using sensors and AI. Quality control uses cameras and
AI to check their quality as things are being made. These applications
can deliver quicker feedback and actions by processing data at the edge,
aiding breakdown prevention, increasing product quality, and saving
costs [43, 44].
ii. Reduce bandwidth: It can minimize bandwidth by processing data
locally and delivering only the most crucial data to the cloud. For pro-
grams like machine vision and robotics that produce a lot of data, this
can reduce expenses and boost performance. Machine vision uses cam-
eras and AI to recognize objects, faces, or gestures and enable human-
robot interaction. To carry out activities like picking and placing,
welding, or packaging, robotics uses sensors and AI. These apps can
increase their accuracy and efficiency by processing data at the edge, which lowers network traffic and latency [43, 44]. A minimal edge-filtering sketch follows this list.
iii. Increase security: Security can be improved by processing data locally
and transmitting the least important data to the cloud. This can aid
in defending sensitive data against online threats. For instance, edge
computing can support the security of internet-connected industrial
IoT equipment, including sensors, actuators, and controllers. The func-
tionality or security of these devices may be jeopardized by hacking,
malware, or denial-of-service attacks. These devices can protect their
data from tampering or unauthorized access by processing it at the edge
before sending it to the cloud [43, 44].
iv. Improve reliability: Edge computing can increase reliability by process-
ing information locally and delivering just the most crucial information
to the cloud. Thus, apps can still be made available even if the network
is down. Edge computing, for instance, can support continued produc-
tion in the event of network outages or other issues. This is important
for real-time data or control-dependent applications, including process
automation or machine control. These applications may operate inde-
pendently from the cloud thanks to the data processing at the edge,
which helps prevent production delays or mishaps [43, 44].
v. Enable local data analysis and decision-making: Edge computing
makes local data analysis and decision-making possible, increasing the
effectiveness and responsiveness of manufacturing operations. Edge
computing, for instance, can support dispersed intelligence in manu-
facturing processes. This implies that each system component, such as
a machine, piece of equipment, or product, can function independently
and autonomously and interact with other system components via peer-
to-peer networks. These components can analyze their data and make
local decisions, such as modifying their parameters, coordinating their
actions, or improving their performance by processing the data at the
edge. Manufacturing systems may become more flexible, adaptable,
and durable [43, 44].
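As referenced in item (ii), the bandwidth-saving pattern amounts to filtering locally and uploading only a compact summary. Here is a minimal sketch in plain Python, with hypothetical readings and the cloud upload simulated by a print statement:

# Simulated raw sensor stream on the edge device (temperature, deg C).
stream = [61.8, 62.0, 61.9, 74.6, 62.1, 62.0, 75.2, 61.7]

THRESHOLD = 70.0   # only readings above this are worth sending
forwarded = []

for reading in stream:
    if reading > THRESHOLD:
        forwarded.append(reading)   # would be sent to the cloud

summary = {
    "samples": len(stream),
    "mean": round(sum(stream) / len(stream), 2),
    "alerts": forwarded,
}
# Instead of eight raw readings, the edge device uploads one small summary.
print("uploading to cloud:", summary)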

Edge computing is a promising technology that has the potential to revolutionize the manufacturing industry. By bringing computation and data storage closer to the
network’s edge, edge computing can improve manufacturing applications’ perfor-
mance, reliability, security, and flexibility. The adoption of edge computing in manu-
facturing is still in its early stages but is growing. The benefits of edge computing
are clear, and manufacturers who adopt edge computing are likely to see significant
improvements in their performance [43, 44].

8.6 AI IN THE CONSTRUCTION OF DIGITAL TWINS


Digital twins are digital representations of real-world systems or objects. They can
imitate an object or system’s behavior and functionality in real time. AI can be used
to update digital twins from real-time data and provide insights, ideas, or actions that
can improve the physical system or object [19].
Digital twins can be employed in a variety of fields and sectors for things like the
following:

1. Uses of digital twins


a. Predictive maintenance: Using digital twins, it is possible to anticipate
when a physical system or object may break down. Preventive main-
tenance can be planned using this information, preventing expensive
downtime. For instance, digital twins can be used to track the operation
and state of various pieces of machinery, including turbines, engines,
and pumps. AI can also identify failure or deterioration indicators,
including vibration, temperature, and pressure. Digital twins can deter-
mine the machinery and equipment’s remaining usable life based on
this analysis and suggest the best time for maintenance. Digital twins
can use AI to optimize the maintenance procedure, including choosing
the optimal maintenance approach, planning the maintenance chores,
and allocating the necessary resources [45, 46].
b. Quality control: It can be utilized to model the manufacturing process.
Using this, possible issues can be found before they arise. For instance,
digital twins can be used to model the production of products like cars,
planes, and smartphones. The production data, such as output, qual-
ity, efficiency, or utilization, can also be analyzed using AI. Based on
this study, digital twins can pinpoint flaws or mistakes in the product
or process and provide fixes or enhancements. Digital twins can also
use AI to validate or provide feedback by comparing the simulation
results with the requirements or standards. Manufacturers may guaran-
tee product compliance, client satisfaction, and brand reputation using
digital twins [45, 46].
c. Design optimization: It can be applied to improve the layout of real-
world systems or items. Performance, effectiveness, and safety can all
be enhanced by doing this. For instance, digital twins can be utilized
to improve the design of a bridge, stadium, or skyscraper. Additionally,
they can utilize AI to assess the design choices following several stan-
dards, such as price, robustness, or beauty. Digital twins can provide
the optimal design solution that matches or exceeds the demands and
expectations of the consumer based on this evaluation. AI can also
be used with digital twins to test and validate the design under vari-
ous conditions, such as weather, load, or natural disasters. Architects
and engineers may use digital twins to produce better, more effective
designs [45, 46].
d. Training and simulation: It can be used to mimic various scenarios and
train operators. This could help to increase security and effectiveness.
Digital twins, for instance, can be used to train operators of intricate
systems, such as those in airplanes, ships, or power plants. AI can also
be used to develop realism and immersion in simulations that mirror
the conditions and circumstances of the actual world. Digital twins can
give operators feedback and direction on operating the systems safely
and effectively based on these simulations. Digital twins can also use
AI to evaluate the abilities and performance of the operators and offer
individualized training and suggestions for development. Digital twins
allow operators to learn more quickly and effectively [45, 46].
e. Research and development: It can be utilized for product and process
development and research. This may speed up and lower the price of
launching new items. Digital twins can be employed, for instance, in
the study and development of new medications or vaccinations. AI
can also be used to mimic how medications or vaccines will affect
human cells or organs. Digital twins can offer information about the
effectiveness and safety of treatments or vaccines based on these sim-
ulations. Digital twins can also employ AI to speed up the regulatory
approval process, choose the best candidates for clinical trials, and
improve the drug discovery and development process. Pharmaceutical
companies can develop better medications or vaccines by employing
digital twins [45, 46].
Digital twins are a powerful tool that can be used to improve the
performance, efficiency, and safety of physical objects or systems. They
are becoming increasingly popular in various industries, including
manufacturing, healthcare, and transportation.
2. Benefits of using digital twins in manufacturing
a. Improved decision-making: Manufacturers may receive real-time data
regarding the performance of their assets from digital twins. Better
decisions about production, upkeep, and other elements of operations
can be made using this knowledge. Digital twins, for instance, can
assist manufacturers in streamlining their manufacturing schedules,
lowering their inventory levels, and enhancing the quality of their prod-
ucts. Digital twins can assist producers in locating and resolving prob-
lems, such as equipment breakdowns, ineffective processes, or quality
flaws, before they become more serious. Manufacturers can improve
their decision-making and achieve operational excellence by utilizing
digital twins [45, 46].
b. Reduced costs: Avoiding unnecessary downtime and increasing effi-
ciency can help manufacturers cut expenses. Digital twins, for instance,
can assist firms in determining when their machines or equipment need
maintenance and planning it appropriately. This may assist in prevent-
ing expensive failures, repairs, or replacements. Additionally, digital
twins can assist manufacturers in optimizing their logistics, waste
management, and energy use. Their operating expenses, environmental
effects, or emissions may all be decreased. Manufacturers can cut their
total cost of ownership and boost profitability by utilizing digital twins
[45, 46].
c. Enhanced customer experience: It can assist manufacturers in
improving the client experience by giving them personalized rec-
ommendations and real-time updates on the status of their orders.
Digital twins, for instance, can assist manufacturers in tracking and
monitoring the progress of their orders from production to deliv-
ery and delivering accurate and timely notifications to customers.
Manufacturers can use digital twins to analyze consumer comments,
preferences, and behavior to create products and services that are
unique to each customer and meet or exceed their expectations.
Manufacturers can increase consumer pleasure, loyalty, and reten-
tion by utilizing digital twins [45, 46].

The adoption of digital twins in manufacturing is still in its early stages but grow-
ing. The benefits of digital twins are clear, and manufacturers who adopt digital
twins are likely to see significant improvements in their performance.
Digital twins are essential for Industry 4.0 because they can help manufactur-
ers accelerate their product development lifecycles, optimize their production pro-
cesses, and improve their product quality and customer satisfaction (a minimal digital-twin sketch follows the list below) by enabling them to [19]:

i. Test and validate their designs or processes before implementation.
ii. Monitor and control their systems or processes remotely.
iii. Detect and diagnose problems or anomalies in their systems or processes.
iv. Suggest design or process changes based on data-driven insights.
v. Experiment with different configurations or parameters for their systems
or processes.
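At its core, a digital twin is a software object kept in sync with its physical counterpart. The following toy class, referenced above, mirrors a machine's state under a deliberately simplistic wear model; the machine name, wear limit, and update figures are hypothetical:

class MachineTwin:
    """Toy digital twin mirroring one machine's sensor state."""

    def __init__(self, machine_id, wear_limit=100.0):
        self.machine_id = machine_id
        self.wear_limit = wear_limit
        self.cumulative_wear = 0.0

    def update(self, load, hours):
        # Simplistic wear model: wear grows with load and run time.
        self.cumulative_wear += load * hours

    def maintenance_due(self):
        return self.cumulative_wear >= self.wear_limit

twin = MachineTwin("press-07")
for load, hours in [(2.5, 8), (3.0, 8), (4.5, 8), (3.5, 8)]:  # daily updates
    twin.update(load, hours)
    status = "maintenance due" if twin.maintenance_due() else "ok"
    print(f"{twin.machine_id}: wear={twin.cumulative_wear:.0f} -> {status}")

A production twin would ingest live sensor streams and run physics-based or learned models, but the keep-in-sync-and-predict loop is the same.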

Some examples of how AI is used for constructing digital twins are as follows:

1. Siemens: Siemens is a leading company in the field of industrial automation that uses AI to create digital twins that can simulate the behavior and
performance of physical systems, such as machines, plants, and factories.
Siemens also uses AI to create digital twins that can simulate the behavior
and performance of products, such as turbines, trains, and cars. Siemens
uses digital twins to enable testing, validation, optimization, and innovation
before implementation [45, 46].
2. GE: GE is a leading company in the field of industrial innovation that uses
AI to create digital twins that can simulate the behavior and performance
of physical assets, such as jet engines, wind turbines, and power plants. GE
also uses AI to create digital twins that can simulate the behavior and per-
formance of industrial processes, such as oil and gas extraction and power
generation. GE uses digital twins to enable predictive maintenance, quality
assurance, and customer satisfaction [38].

8.7 CHALLENGES IN THE INTEGRATION OF NEW TECHNOLOGY IN INDUSTRY 4.0

Industry 4.0’s technology integration presents several difficulties that need to be
resolved by producers, researchers, decision-makers, and other stakeholders. Some
of these difficulties include the following:

1. Technology integration challenges


The difficulties or obstacles manufacturers encounter when choosing,
integrating, and deploying technologies and solutions for the Industry 4.0
revolution are called “technology integration challenges.” These difficul-
ties include technological complexity, expense, risk aspects, compatibility,
interoperability, scalability, dependability, and usability issues [47, 48].
For instance:
a. Compatibility problems: when combining several technologies or systems with various standards, protocols, or architectures, manufacturers may encounter compatibility problems. This could lead to data interchange or communication issues, as well as decreased functionality or performance [47, 48].
b. Interoperability problems: when merging technologies or systems with various functionalities, capabilities, or interfaces, manufacturers may run into interoperability problems. Problems with coordination or collaboration and an increase in complexity or redundancy may follow [47, 48].
c. Scalability problems: when combining technologies or systems with varying capacities, demands, or requirements, manufacturers may run into scalability problems. Performance decline, resource limitations, or system instability could result from this [47, 48].
d. Reliability problems: when merging technologies or systems with varying degrees of quality, robustness, or resilience, manufacturers may run into reliability problems. System faults, mistakes, or failures could result from this [47, 48].
e. Usability problems: manufacturers may run into usability problems when integrating technologies or systems with varying degrees of user-friendliness, accessibility, or flexibility. Users may become angry, frustrated, or perplexed [47, 48].
f. Technical complexity: manufacturers may run into technical complexity when integrating technologies or systems with various levels of sophistication, invention, or novelty. This could make choosing, deploying, or using the technologies or systems more challenging or unpredictable [47, 48].
g. Cost considerations: when integrating technologies or systems that have varying degrees of affordability, availability, or maintainability, manufacturers may have to consider costs. This could lead to higher costs or investments for purchasing, setting up, or updating the technologies or systems [47, 48].
h. Risk factors: when integrating technologies or systems that have various levels of security, safety, or compliance, manufacturers may encounter risk issues. This may result in increased exposure to cyberattacks, data breaches, legal obligations, or regulatory fines [47, 48].
2. Cybersecurity and data privacy challenges
The hazards or weaknesses manufacturers confront while working with
massive amounts of data and connected devices in Industry 4.0 are called
“cybersecurity and data privacy challenges.” In addition to legal, ethical,
and societal ramifications, these difficulties include cyberattacks, data
breaches, data theft, data manipulation, data loss, and data misuse problems
[47, 48]. For instance:
a. Cyberattacks: malicious actors wishing to disrupt, harm, or destroy manufacturers' systems, data, or business activities may launch cyberattacks. These assaults include ransomware, phishing, or denial-of-service attacks [47, 48].
b. Data breaches can occur when competitors, insiders, or hackers get
unauthorized access to a manufacturer’s systems or data. These hacks
may cause data to leak, be exposed, or be disclosed [47, 48].
c. Manufacturers may experience data theft due to the malicious theft of
their data by thieves, spies, or rivals. These thefts could cause data loss,
misuse, or exploitation [47, 48].
d. Manufacturers may experience data manipulation due to hackers, rivals,
or insiders intentionally altering their data. Data corruption, distortion,
or falsification may result from these manipulations [47, 48].
e. Manufacturers may experience data loss due to system mistakes, human
error, or natural calamities that result in unintentional data loss. Due to
these losses, data may become unavailable, inaccessible, or unrecover-
able [47, 48].
f. Manufacturers may be subject to data misuse due to unethical use of
their data by third parties, including marketers, advertisers, and gov-
ernments. These abuses could lead to data breaches, invasions, or
infringements [47, 48].
g. Legal repercussions: failure to comply with the rules and legislation
governing manufacturers’ data and systems in various jurisdictions and
domains may result in legal repercussions for manufacturers. These
repercussions could lead to legal action, penalties, or fines [47, 48].
h. Ethical and social repercussions: manufacturers may be subject to moral and social repercussions as a result of the impact of their data and systems on their stakeholders and society at large [47, 48].
3. Upskilling and training challenges
The gaps or mismatches manufacturers encounter when adjusting to the
new skills and competencies required for Industry 4.0 are called upskilling
and training issues. These difficulties include the requirement for educa-
tion, training, and development and issues with skill gaps, shortages, obso-
lescence, and mismatches [49, 50]. For instance:
a. Skill shortages: a lack of sufficient or suitably qualified people with the knowledge and abilities needed for Industry 4.0 may result in skill shortages for manufacturers. These shortfalls may result in reduced productivity, quality, or innovation [49, 50].
b. Skill obsolescence: manufacturers may experience skill obsolescence if their current
employees possess out-of-date or obsolete skills and abilities that are no
longer required or appreciated in the context of Industry 4.0. This obso-
lescence could result in reduced performance, satisfaction, or employ-
ability [49, 50].
c. Skill gaps: Manufacturers may experience skill gaps if their current
workforce lacks the knowledge, abilities, and competencies required
or desired for Industry 4.0. These gaps could result in reduced efficacy,
efficiency, or safety [49, 50].
d. Skill mismatches: manufacturing companies may experience skill mismatches if their
current employees do not have the appropriate or compatible skills and
competencies for Industry 4.0. Reduced fit, engagement, or retention
could occur due to these mismatches [49, 50].
e. Education needs: manufacturers may have education needs when they must give their employees the formal training or certification necessary to give them
the skills and competencies needed for Industry 4.0. These require-
ments could necessitate more money, time, or effort [49, 50].
f. Training needs: manufacturers may encounter training needs when giving their
employees informal training or coaching to improve their skills and
competencies for Industry 4.0. These requirements could necessitate
more money, time, or effort [49, 50].

8.8 FUTURE RESEARCH DIRECTIONS


Technology integration in Industry 4.0 also opens up new opportunities for future
research directions to help manufacturers overcome these challenges and achieve
their goals and objectives [51, 52].
Some of these research directions are:

1. Technology integration research


Technology integration research is the process of developing new technolo-
gies or solutions that can improve the performance, efficiency, and inno-
vation of Industry 4.0 transformation. This includes developing new IoT
devices, AI algorithms, big data analytics techniques, robotics, and auto-
mation systems as well as improving their compatibility, interoperability,
scalability, reliability, and usability [51, 52].
a. The goal of technology integration research is to help manufacturers
achieve the following:
i. Increased productivity
ii. Enhanced quality
iii. Improved decision-making
iv. New product development
Adopting new technologies is essential for manufacturers who want
to remain competitive in the Industry 4.0 era. Technology integration
research is helping to accelerate the adoption of new technologies by
developing solutions that are compatible, interoperable, scalable, reli-
able, and easy to use [51, 52].
2. Cybersecurity and data privacy research
Cybersecurity and data privacy research is the process of developing new methods or
mechanisms to protect data and systems involved in Industry 4.0 trans-
formation. This includes developing new encryption, authentication,
authorization, verification, and validation techniques and establishing
new standards, regulations, policies, and ethics for data governance.
Cybersecurity and data privacy research aim to help manufacturers
protect their data and systems from cyberattacks and data breaches.
This is important because Industry 4.0 systems generate and collect
large amounts of sensitive data, which could be valuable to hackers
[11, 12].
a. Key research directions in cybersecurity and data privacy for Industry 4.0
i. Encryption: Encryption converts data into a code that only authorized users can decode. This is a critical security measure for protecting data from unauthorized access (see the sketch after this list).
ii. Authentication: Authentication verifies a user’s or device’s identity. This
is important for preventing unauthorized access to systems and data.
iii. Authorization: Authorization grants users or devices access to spe-
cific resources. This is important for controlling who can access
what data.
iv. Verification: Verification is the process of ensuring that data is
accurate and complete. This is important for preventing data cor-
ruption and fraud.
v. Validation: Validation is the process of ensuring that data meets spe-
cific requirements. This is important for ensuring the quality of data.
In addition to developing new technical solutions, cyberse-
curity and data privacy research also focuses on establishing new
standards, regulations, policies, and ethics for data governance.
This ensures data is used responsibly and ethically in Industry
4.0. Adopting new technologies is essential for manufacturers who
want to remain competitive in the Industry 4.0 era. However, the
increasing connectivity and complexity of Industry 4.0 systems
make them more vulnerable to cyberattacks. Cybersecurity and
data privacy research is helping to address this challenge by devel-
oping new methods and mechanisms to protect data and systems in
Industry 4.0 [11, 12].
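As referenced in item (i), symmetric authenticated encryption can be sketched with the widely used Python cryptography package and its Fernet recipe; this assumes the package is installed, and the sensor record is hypothetical:

from cryptography.fernet import Fernet

# Generate a symmetric key; in practice this would be stored in a
# secure key-management system, never hard-coded.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a hypothetical sensor record before it leaves the device.
record = b'{"machine": "press-07", "temp": 62.1}'
token = cipher.encrypt(record)

# Only holders of the key can decrypt; tampering raises an error.
assert cipher.decrypt(token) == record
print("round trip ok; ciphertext length:", len(token))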
3. Upskilling and training research
It is the process of developing new strategies or programs to prepare the
workforce for Industry 4.0 transformation. This includes developing new
curricula, courses, modules, and certifications and designing new learning
methods, tools, and platforms. The goal of upskilling and training research
is to help manufacturers ensure that their workforce has the skills and
knowledge they need to succeed in the Industry 4.0 era. This is important
because Industry 4.0 technologies require a more skilled workforce than
traditional manufacturing technologies [19, 50].
a. Key research directions in upskilling and training for Industry 4.0
i. Curriculum development: Upskilling and training researchers
are developing new curricula that cover the skills and knowledge
needed for Industry 4.0. This includes topics such as data analytics,
robotics, and AI.
ii. Course development: They are developing new courses that cover
the skills and knowledge needed for Industry 4.0. These courses can
be offered in various formats, such as online, in-person, or blended.
iii. Module development: They are developing new modules that cover
specific topics related to Industry 4.0. These modules can be used to
supplement existing courses or to provide targeted training.
iv. Certification development: They are developing new certifica-
tions that validate the skills and knowledge of workers in Industry
4.0. These certifications can be used to demonstrate proficiency to
employers and to advance in one’s career.
v. Learning methods: They are designing new effective learning
methods for Industry 4.0. This includes methods such as gamifica-
tion, experiential learning, and blended learning.
vi. Tools: They are developing new tools that can be used to support
learning in Industry 4.0. This includes tools such as virtual reality,
augmented reality, and simulation software.
vii. Platforms: They are developing new platforms that can be used to
deliver learning in Industry 4.0. This includes MOOCs, corporate
learning platforms, and social learning platforms.
Adopting new technologies is essential for manufacturers who
want to remain competitive in the Industry 4.0 era. However, the
increasing complexity of Industry 4.0 technologies also means
that workers must be upskilled and trained to use them effectively.
Upskilling and training research is helping to address this challenge
by developing new strategies and programs to prepare the work-
force for Industry 4.0 [19, 50].

8.9 CONCLUSION
Industry 4.0 is a term that describes the fourth industrial revolution, which is charac-
terized by the integration of advanced technologies such as IoT, AI, big data analyt-
ics, robotics, and automation in the manufacturing sector. Industry 4.0 aims to create
intelligent factories capable of producing high-quality products with minimal human
intervention while being flexible, efficient, and responsive to customer needs.
Integrating IoT and AI technologies is essential for achieving the full potential of
Industry 4.0 transformation. By combining the data collection and communication
capabilities of IoT with the data analysis and decision support capabilities of AI,
manufacturers can create CPS that can interact with each other and with humans in
real time. These systems can also adapt to changing conditions and demands, learn
from their own experiences, and improve their performance over time.
Advanced technologies such as IoT, AI, big data analytics, robotics, and automa-
tion can bring significant benefits and advantages to the manufacturing sector in
Industry 4.0. These benefits and advantages include higher operational effective-
ness and productivity, cost reduction through optimized resource allocation, smooth
production procedures, real-time data-driven insights, and adaptability to changing
demands through flexible manufacturing lines.
However, technology integration in Industry 4.0 also poses several challenges
that need to be addressed by manufacturers, researchers, policymakers, and other
stakeholders. These challenges include technology integration, cybersecurity, data
privacy, and upskilling and training challenges.
It also opens up new opportunities for future research directions to help manu-
facturers overcome these challenges and achieve their goals and objectives. These
research directions include technology integration, cybersecurity and data privacy,
and upskilling and training research.
In conclusion, technology integration is a critical factor for Industry 4.0 transforma-
tion that can enable manufacturers to create smart factories that can leverage data and
intelligence to achieve higher performance, efficiency, and innovation levels. Technology
integration is also a complex and dynamic process that requires continuous improve-
ment and adaptation to the industry’s and society’s changing needs and expectations.
Therefore, manufacturers must embrace the potential of Industry 4.0 transformation
by adopting a holistic, strategic, and collaborative approach to technology integration.

ACKNOWLEDGMENTS
My thanks go to the editor of the volume, Dr. Preethi Nanjundan, for giving me the
chance to write a chapter on this area of AI forecasting and making helpful sugges-
tions and to Zidan Kachhi for guiding me on the right path.

LIST OF ABBREVIATIONS
AI Artificial intelligence
CPS Cyber-physical system
GPS Global Positioning System
Industry 4.0 The Fourth Industrial Revolution
IoT Internet of Things
QR code Quick-response code
RFID Radio frequency identification
SMLC Smart Manufacturing Leadership Coalition

REFERENCES
1. Roblek, V., Meško, M., & Krapež, A. “A Complex View of Industry 4.0”, SAGE Open, 6(2), 2158244016653987, 2016. doi: 10.1177/2158244016653987
2. Kumar, A., & Singh, R. K. “Integrating Industry 4.0 and Circular Economy: a Review”,
Journal of Enterprise Information Management, 2021. doi: 10.1108/JEIM-11-2020-0465
3. Alshamrani, A., Alshamrani, M., & Alshamrani, H. “Industry 4.0 and Its
Implementation: A Review”, Information Systems Frontiers, 1–18, 2021.
4. Wang, Y., Liang, H., & Zhang, Y. “The Current Status and Developing Trends of
Industry 4.0: a Review”, Information Systems Frontiers, 1–16, 2021.
5. Eercan, E. “What is Industry 4.0 and Cyber Physical Systems?”, Retrieved from https://
eercan.com/post/what-is-industry-40-and-cyber-physical-systems/, 2020.
6. Radanliev, P., De Roure, D., Nicolescu, R., Huth, M., & Santos, O. “Digital Twins:
Artificial Intelligence and the IoT Cyber-Physical Systems in Industry 4.0”,
International Journal of Intelligent Robotics and Applications, 171–185, 2021. doi:
10.1007/s41315-021-00180-5
7. Lee, J., Bagheri, B., & Kao, H. A. “A Cyber-Physical Systems Architecture for Industry
4.0-Based Manufacturing Systems”, Manufacturing Letters, 3, 18–23, 2015.
8. Liang, X., Shetty, S., Tosh, D., Kamhoua, C., Kwiat, K., & Njilla, L. “Provchain: A
blockchain-based data provenance architecture in cloud environment with enhanced
privacy and availability”, In 17th IEEE/ACM International Symposium on Cluster,
Cloud and Grid Computing, CCGRID, pp. 468–477, 2017.
9. Sakovich, N. “IoT in Manufacturing: Ultimate Guide and Use Cases”, SaM Solutions,
2021. https://www.sam-solutions.com/blog/iot-in-smart-manufacturing/
10. Chalishazar, T. “IoT in Manufacturing: The Ultimate Guide”, Peerbits, 2022.
11. “IoT Applications in Manufacturing Industry”, TechVidvan, 2021.
12. “How to Use the Internet of Things (IoT) in Manufacturing”, PixelPlex, 2021.
13. Kaur, S., & Singh, S. “Robotic automation in manufacturing”, In Industry 4.0: Trends
in Management of Intelligent Manufacturing System, S. Singh & S. Kaur, Eds.,
pp. 35–54, 2023.
14. Kumar, V. “Big Data Analytics in Manufacturing”, HCL Blogs, Hcltech.com, 2019.
15. Littlefield, M. “What Is Big Data Analytics in Manufacturing?”, Blog.lnsresearch.com,
2015.
16. Srivastava, S. “Big Data in Manufacturing – Importance and Use Cases”, Appinventiv.
com, 2022.
17. Consoli, R. “Using Big Data Analytics to Improve Production”, Manufacturing.net.,
2018.
18. Auschitzky, E., Hammer, M., & Rajagopaul, A. “How Big Data Can Improve
Manufacturing”, McKinsey, 2014.
19. Zhou, J., Liu, X., & Zhou, Z. “Industry 4.0: Towards future industrial opportunities
and challenges”, In Industry 4.0: Trends in Management of Intelligent Manufacturing
Systems, Springer, Singapore, pp. 3–20, 2019.
20. ROBOTNIK. “Mobile Robots in Industry 4.0: Automation & Flexibility”, Robotnik,
2021.
21. Goel, R., & Gupta, P. “Robotics and Industry 4.0”, In A Roadmap to Industry 4.0:
Smart Production, Sharp Business and Sustainable Development, pp. 157–169, 2019.
22. Violino, B. “Robotics and Industry 4.0: Reshaping the Way Things Are Made”, ZDNet.,
2016.
23. POLLY. “The Role of Collaborative Mobile Robots in Industry 4.0”, Robotics &
Automation News, 2021.
24. L2L. “The Role of IoT and Industry 4.0 in Creating Digital Factories of Tomorrow”, IoT
for All, 2022.
25. Gregolinska, E., Khanam, R, Lefort, F., & Parthasarathy, P. “Industry 4.0: Digital
Transformation in Manufacturing”, McKinsey, www.mckinsey.com, 2022.
26. Eranda, K. “IoT in Industry 4.0 – Deep Explanation – JFrog Connect”, JFrog, 2021.
27. Rejig. “Top Five Benefits of Industry 4.0 for Modern Enterprises”, Rejig Digital, 2021.
28. Saribardak, E. “How IoT Reshapes Industry 4.0 and Benefits of IoT for SMEs”,
ReadWrite, 2020.
29. Mendoza, M. “IoT in Manufacturing: Applications Today and the Future”, Hitachi Solutions, 2020.
30. Lowe, A., Adiraju, P., Fuzellier, M., Janakiraman, P., & Cummins, T. “Connected
Factory Solution Based on AWS IoT for Industry 4.0 Success”, Amazon Web Services,
2020.
31. Aheleroff, S., Xu, X., Lu, Y., Aristizabal, M., Velásquez, J. P., Joa, B., & Valencia,
Y. “IoT-Enabled Smart Appliances under Industry 4.0: A Case Study”, Advanced Engineering Informatics, 43, 101043, 2020. doi: 10.1016/j.aei.2020.101043
32. IBM. “What Is a Digital Twin”, www.ibm.com, 2022.
33. Wikipedia Contributors. “Digital Twin”, Wikipedia, Wikimedia Foundation, 2019.
34. Collie, B., Parker, G., Haimes, Y., and Dubno, D. “Adaptive Manufacturing and
Homeland Security”, White Paper for HSSTAC Quadrennial Homeland Security
Review (QHSR), 2017.
35. Gosselin, S.. “Adaptive Manufacturing Technology and the Modern Factory”, Integris,
2021.
36. Gnamm, J., Frost, T., Lesmeister, F., & Ruehl, C, eds. “Factory of the Future: How
Industry 4.0 and AI Can Transform Manufacturing”, Bain, 2023.
37. Keleko, A. T., Kamsu-Foguem, B., Ngouna, R. H., & Tongne, A. “Artificial Intelligence
and Real-Time Predictive Maintenance in Industry 4.0: A Bibliometric Analysis”, AI
and Ethics, 553–577, 2022. doi: https://doi.org/10.1007/s43681-021-00132-6
38. Achouch, M., Dimitrova, M., Ziane, K., Karganroudi, S. S., Dhouib, R., Ibrahim,
H., & Adda, M. “On Predictive Maintenance in Industry 4.0: Overview, Models, and
Challenges”, Applied Sciences, 12(16), 8081, 2022.
39. Hofmann, E., Sternberg, H., Chen, H., Pflaum, A., & Prockl, G. “Supply Chain
Management and Industry 4.0: Conducting Research in the Digital Age”, International
Journal of Physical Distribution & Logistics Management, 49(10), 945–955, 2019.
40. Caiado, R. G. G., Scavarda, L. F., Azevedo, B. D., de Mattos Nascimento, D. L., &
Quelhas, O. L. G. “Challenges and Benefits of Sustainable Industry 4.0 for Operations
and Supply Chain Management—A Framework Headed toward the 2030 Agenda”,
Sustainability 14(2), 830, 2022.
41. Gaur, V. “Bringing Blockchain, IoT, and Analytics to Supply Chains”, Harvard
Business Review, 2021. https://hbsp.harvard.edu/product/H06RVC-PDF-ENG
42. Zhu, X. N., Peko, G., Sundaram, D., & Piramuthu, S. “Blockchain-Based Agile Supply
Chain Framework with IoT”, Information Systems Frontiers, 23(4):1023–1039, 2021.
43. Ouyang, C. “Edge Computing and Hybrid Cloud: Scaling AI within Manufacturing”,
IBM Blog, 2021.
44. Pelizzo, G. “Council Post: How Edge Computing Enables Industry 4.0”, Forbes, 2021.
45. Downey, J. “What Is Digital Twin Technology and How It Benefits Manufacturing
in the Industry 4.0 Era?”, SL Controls, 2020.
46. Tao, F., Zhang, M., Liu, Y., & Nee, A. Y. C. “Digital Twin Driven Smart Manufacturing:
Connotation, Reference Model, Applications and Research Issues”, Robotics and
Computer-Integrated Manufacturing, 61, 101837, 2019
47. Peraković, D., Periša, M., & Zorić, P. “Challenges and Issues of ICT in Industry 4.0”,
Lecture Notes in Mechanical Engineering, 259–269, 2019. https://doi.org/10.1007/
978-3-030-22365-6_26
48. TXM Lean Solutions. “The Advantages & Challenges: Implementing Industry 4.0”,
TXM Lean Solutions, 2020.
49. Ellingrud, K., Gupta, R., & Salguero, J. “Reskilling Workers for Industry 4.0”, McKinsey, www.mckinsey.com, 2020.
50. Li, L. “Reskilling and Upskilling the Future-Ready Workforce for Industry 4.0 and
Beyond”, Information Systems Frontiers, 24(3), 2022. doi: https://doi.org/10.1007/
s10796-022-10308-y
51. Moraes, E. B., Kipper, L. M., Kellermann, A. C. H., Austria, L., Leivas, P., Moraes,
J. A. R., & Witczak, M. “Integration of Industry 4.0 Technologies with Education
4.0: Advantages for Improvements in Learning”, Interactive Technology and Smart
Education, ISSN 17415659, 2022. doi: 10.1108/ITSE-11-2021-0201
52. Tabim, V. M., Ayala, N. F., & Frank, A. G. “Implementing Vertical Integration in the Industry 4.0 Journey: Which Factors Influence the Process of Information Systems Adoption?”, Information Systems Frontiers, 1–18, 2021. doi: 10.1007/s10796-021-10220-x
9 Challenges in Digital
Transformation
and Automation
for Industry 4.0
Manjari Sharma, Tanmay Paliwal,
and Payashwini Baniwal

9.1 INTRODUCTION
Throughout history, industrial revolutions have consistently been more than merely
small changes. They have signified enormous strides that have altered social struc-
tures, production paradigms, and the global economic environment. Each transition has
embodied the pinnacle of human ingenuity and an unyielding drive for advancement,
from the thunderous clangs of the first steam engines in the late 18th century, heralding
the beginning of mechanized production, to the quiet digital murmurs echoing through
contemporary global supply chains. As civilization enters the era of Industry 4.0, it
stands at the intersection of the physical and digital, where the revolutionary nature of
digital technologies and the tangible elements of industrial processes collide [1].
Industry 4.0 defines this Fourth Industrial Revolution and represents a fundamental change in how industrial goods are conceived, produced, and improved; it is not merely another stage in the continual development of industry. This digital transformation is supported by a network of connected hardware, sensors, and systems known
as the Internet of Things (IoT), as well as the reasoning abilities of artificial intelligence
(AI). However, the smooth and well-coordinated integration of various technologies is
what sets Industry 4.0 apart from its predecessors, not only its technological armament.
Envision a factory where cyber-physical systems communicate in real time,
where an anomaly detected by a sensor on the assembly line instantaneously triggers
a recalibration of processes, ensuring optimal productivity and minimal downtime.
Imagine a scenario where data analytics platforms, fed by a constant stream of data,
not only detect patterns but also predict future trends, offering strategic insights that
drive not just reactive but proactive decision-making. This is the promise of Industry
4.0 – an industrial landscape that is dynamically responsive, inherently intelligent,
and perpetually evolving.
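As a deliberately simplified illustration of such a closed loop, the Python sketch below applies a rolling three-sigma rule to a stream of readings and triggers a recalibration hook when a value falls outside it. The recalibrate function and the window sizes are hypothetical placeholders; a real line would use richer models and safeguards.

    import statistics
    from collections import deque

    window = deque(maxlen=50)            # most recent sensor readings

    def recalibrate(line_id: str) -> None:
        print(f"recalibrating line {line_id} ...")   # stand-in for the real control action

    def on_reading(line_id: str, value: float) -> None:
        window.append(value)
        if len(window) < 10:             # wait for enough history first
            return
        mean = statistics.fmean(window)
        stdev = statistics.stdev(window)
        if stdev and abs(value - mean) > 3 * stdev:  # simple three-sigma anomaly rule
            recalibrate(line_id)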
However, as with any profound transformation, the journey toward achieving the
full potential of Industry 4.0 is fraught with challenges. The intricate mesh of con-
nected devices, while offering unprecedented levels of communication and control,
also introduces vulnerabilities. Cybersecurity, thus, emerges as a prime concern,
necessitating robust protocols and safeguards. Moreover, while automation and AI
promise enhanced efficiencies, they also engender debates about the future of human
labor in industries. Will the factory of the future be devoid of human intervention?
Or will human ingenuity and machines’ efficiency strike a symbiotic balance, forg-
ing a partnership where each complements the other?
The vast data repositories, often hailed as the new oil, also raise pertinent ques-
tions about data privacy, ownership, and ethical considerations. Who owns the data?
How is it leveraged, and more importantly, safeguarded? In an era where data-driven
insights can offer a competitive edge, these considerations take center stage, neces-
sitating a reevaluation of not just technological but also ethical frameworks.
Moreover, the harmonization of these technologies requires significant capi-
tal investments, skilled workforces, and infrastructural overhauls, challenges that
especially resonate with small and medium enterprises (SMEs) that might lack the
resources of their larger counterparts. Thus, ensuring that the dividends of Industry
4.0 are equitably distributed becomes paramount.

9.2 DEFINITION AND SIGNIFICANCE OF INDUSTRY 4.0
Industry 4.0, often referred to as the Fourth Industrial Revolution, delineates the
current trend of automation and data exchange in manufacturing technologies. This
transformative phase is marked by the convergence of cyber-physical systems, the
IoT, cloud computing, and cognitive computing. Essentially, Industry 4.0 encap-
sulates a smart industrial environment where machinery and devices can enhance
operations and production processes through automation and self-optimization, ben-
efiting from real-time data analytics while continually optimizing performance [2, 3].

9.2.1 Real-Time Data and Proactive Decision-Making
With the integration of IoT and advanced data analytics, manufacturing processes
are no longer solely about production but are interlaced with real-time data process-
ing. This facilitates more informed and quick decision-making, ensuring that the
manufacturing process remains uninterrupted and efficient [4].
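One possible shape of this real-time ingestion, sketched against the paho-mqtt 1.x client API; the broker address and topic layout are hypothetical, and a real deployment would hand each message to an analytics pipeline rather than print it.

    import json
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        reading = json.loads(msg.payload)     # e.g. {"machine": "cnc-3", "vibration": 0.12}
        print(msg.topic, reading)             # placeholder for downstream analytics

    client = mqtt.Client()                    # paho-mqtt 1.x style constructor
    client.on_message = on_message
    client.connect("broker.factory.local", 1883)
    client.subscribe("plant/+/telemetry")     # '+' matches any single machine segment
    client.loop_forever()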

9.2.2 Enhanced Productivity and Efficiency
Through automated monitoring and the introduction of smart machines that can pre-
dict failures and autonomously trigger maintenance processes, there’s a pronounced
increase in productivity and efficiency. Such advancements mean reduced downtime,
leading to increased production and, thus, profitability [5].
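A toy version of such failure prediction can be sketched with scikit-learn. The feature values, labels, and the 0.7 risk threshold below are invented for illustration; a real model would be trained on historical maintenance records.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Columns: vibration RMS, bearing temperature (C), hours since last service
    X = np.array([[0.11, 55, 120], [0.45, 78, 900], [0.12, 57, 200],
                  [0.52, 81, 1100], [0.09, 52, 60], [0.48, 84, 950]])
    y = np.array([0, 1, 0, 1, 0, 1])          # 1 = failed within the following week

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    risk = model.predict_proba([[0.50, 80, 1000]])[0, 1]
    if risk > 0.7:                             # threshold is a maintenance-policy choice
        print("schedule maintenance before the next shift")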

9.2.3 Flexibility and Customization
The cyber-physical systems in Industry 4.0 allow for modular structured smart fac-
tories. This flexibility means products can be more tailored to individual customer
requirements. Manufacturers can shift seamlessly from creating one product to
another based on real-time demands [6].
9.2.4 Resource Optimization
With a more connected and integrated supply chain, Industry 4.0 brings about an
optimal use of resources, including raw materials, energy, and manpower. The effi-
cient use of resources not only reduces costs but also minimizes waste, fostering
sustainable and eco-friendly manufacturing practices [7].
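Resource optimization of this kind is often cast as a linear program. The sketch below, using SciPy, maximizes profit over two products under invented material and energy budgets; all coefficients are illustrative assumptions.

    from scipy.optimize import linprog

    # Maximize 40*x1 + 30*x2 (profit); linprog minimizes, so negate the objective.
    c = [-40, -30]
    A_ub = [[2, 1],    # kg of raw material consumed per unit of each product
            [1, 2]]    # kWh of energy consumed per unit of each product
    b_ub = [100, 80]   # available material (kg) and energy (kWh)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)   # optimal mix (40, 20) with profit 2200 under these numbers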

9.2.5 Human-Machine Collaboration
Contrary to the popular belief that automation would replace human jobs, Industry
4.0 emphasizes collaboration between humans and machines. With smart devices
aiding human decision-making processes and humans guiding machines in more
complex tasks, a synergistic relationship emerges [8].

9.3 TECHNOLOGICAL DRIVERS – IoT, AI, AUTOMATION, CLOUD COMPUTING
The combination of four technical drivers – the IoT, AI, automation, and cloud com-
puting – accelerates industrial and operating paradigms to unprecedented heights in
the ever-evolving landscape of Industry 4.0. These drivers form the foundation of
the revolutionary journey that defines Industry 4.0, underlying its significance and
ushering in a new era of possibilities [9].

9.3.1 Internet of Things
IoT is at the forefront of Industry 4.0, redefining how gadgets, sensors, and machin-
ery interact and communicate. The IoT connects physical things into a seamless
digital fabric, allowing for real-time data transmission and cooperation. This inter-
connection serves as Industry 4.0’s nervous system, providing producers with a full
perspective of their activities. From smart sensors that monitor manufacturing pro-
cesses to connected logistics systems that optimize supply chains, the IoT boosts
operational efficiency, improves decision-making, and ushers in previously unthink-
able levels of real-time responsiveness.

9.3.2 Artificial Intelligence
AI is the cognitive engine that will bring Industry 4.0 from concept to reality.
AI gives machines the ability to analyze data, learn from patterns, and make
informed decisions on their own. This mutually beneficial interaction between
AI and IoT powers predictive analytics, allowing manufacturers to anticipate and
minimize possible hazards. Algorithms for machine learning optimize processes,
detect abnormalities, and enhance tactics. Furthermore, AI enhances human
capacities by encouraging creative problem-solving, facilitating human-robot
collaboration, and lifting the entire production ecosystem to new levels of adapt-
ability and efficiency.
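As one small example of the abnormality detection mentioned above, an isolation forest can be fitted on normal operating data and asked to score new readings. The sensor distributions below are fabricated purely for illustration.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Fabricated normal operation: (vibration, temperature) pairs
    normal = rng.normal(loc=[0.2, 60.0], scale=[0.02, 1.5], size=(500, 2))
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    batch = np.array([[0.21, 60.4],   # typical reading
                      [0.90, 75.0]])  # clearly abnormal reading
    print(detector.predict(batch))    # 1 = normal, -1 = anomaly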
9.3.3 Automation
The cornerstone of Industry 4.0’s promise of accuracy, consistency, and agility
is automation. Smart factories use automation to orchestrate industrial processes
with little human interaction. Robotic systems that are AI-infused and led by real-
time data from IoT devices perform activities ranging from ordinary assembly to
intricate customization. Automation reduces human error, boosts throughput, and
maintains consistent product quality. It speeds up production, shortens lead times,
and frees human workers from repetitive duties, allowing them to focus on higher
value tasks.

9.3.4 Cloud Computing
Cloud computing is the digital nexus that connects the many components of Industry
4.0. This technology provides scalable computing resources, storage capacities, and
powerful analytics tools for processing the massive amounts of data created by IoT
devices. Cloud systems lay the groundwork for real-time data analysis, promot-
ing informed decision-making and collaboration across geographically distributed
teams. The flexibility of the cloud enables manufacturers to extend their operations
and experiment with new technologies without the limits of physical infrastructure.
These technology forces, when combined, enable Industry 4.0 to surpass old pro-
duction paradigms. They establish the framework for intelligent, linked ecosystems
in which robots, data, and humans work in tandem. This convergence boosts opera-
tional efficiency, feeds innovation, and promotes a future in which manufacturing is
defined by agility, customization, and long-term growth. As organizations navigate
this technology terrain, they discover new opportunities while confronting new dif-
ficulties, leading them toward a digitally transformed and automated future.

9.4 EMERGING TRENDS – BIG DATA AND QUANTUM TECHNOLOGIES
9.4.1 Big Data
Big Data refers to the enormous volume of structured and unstructured data that is
generated at high speeds and is too complex to be understood and processed by tra-
ditional data processing applications [10].

Significance: Big Data is reshaping industries by providing insights that were
previously unattainable. It has opened up a plethora of opportunities for
business innovation, optimization, and transformation. For instance, Big
Data analytics, through pattern recognition and predictive analytics, offers
businesses the capability to anticipate consumer preferences, detect anoma-
lies in real time, and make data-informed decisions, thus driving competi-
tiveness and innovation. Moreover, its application in fields like healthcare
is enabling personalized treatments, predictive diagnostics, and improved
patient outcomes [11, 12].
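Even before distributed platforms enter the picture, data too large for memory can be handled by streaming it in chunks. A minimal pandas sketch, assuming a hypothetical sensor_log.csv with machine_id and fault_flag columns:

    import pandas as pd

    totals = {}
    for chunk in pd.read_csv("sensor_log.csv", chunksize=1_000_000):
        counts = chunk.groupby("machine_id")["fault_flag"].sum()
        for machine, n in counts.items():
            totals[machine] = totals.get(machine, 0) + int(n)

    # Five machines with the most recorded faults
    print(sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:5])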
9.4.2 Quantum Technologies
Quantum technologies leverage the principles of quantum mechanics, especially
superposition and entanglement, to develop advanced systems and methods for com-
puting, communication, and sensing, among other applications [13].
Quantum technologies, particularly quantum computing, promise to revolution-
ize fields that are currently restricted by the capabilities of classical computers.
Problems that are computationally intensive for classical machines, such as factor-
ing large integers or simulating complex molecular structures, can be addressed
more efficiently using quantum computers. This breakthrough has vast implications
for cryptography, drug discovery, and optimization problems, among other areas.
Furthermore, quantum sensors and quantum communication technologies can pro-
vide unprecedented precision in measurements and ultra-secure communication
channels, respectively [14].
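The superposition and entanglement just described can be demonstrated in a few lines with Qiskit (assumed installed); using Statevector avoids the need for a simulator backend. The circuit prepares a Bell pair, the canonical entangled state.

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(2)
    qc.h(0)        # Hadamard: put qubit 0 into superposition
    qc.cx(0, 1)    # CNOT: entangle qubit 1 with qubit 0

    state = Statevector.from_instruction(qc)
    print(state.probabilities_dict())   # {'00': 0.5, '11': 0.5}: a Bell pair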

9.5 TECHNOLOGICAL CHALLENGES IN ADOPTION AND MODERNIZATION OF INDUSTRY 4.0
Industry 4.0 stands at the nexus of a revolutionary change in manufacturing and
operations, marking a seismic shift from conventional methods to a world under-
pinned by automation, interconnectivity, and real-time data. However, with trans-
formative potential comes an array of technological challenges that industries must
grapple with. It is imperative to understand that addressing these challenges is not
solely an exercise in technical proficiency. Instead, it encompasses a broader spec-
trum – the confluence of technological advancements, human adaptability to these
novelties, and the dynamism of organizational structures that are now tasked with
integrating these innovations. This chapter endeavors to unpack the complexities of
this terrain, honing in on the technological obstacles that define the transition toward
the Industry 4.0 paradigm [2, 15].
The first and foremost challenge lies in the integration of diverse technologies
that come with their own legacy constraints and unique operational functionalities.
Industry 4.0 envisions a seamlessly interconnected ecosystem, where machines, sys-
tems, and processes “talk” to each other, thereby optimizing operational efficiency.
Achieving this interconnectedness demands interoperability and standardized com-
munication across platforms, which, in many instances, were not originally designed
to interact [16].
Furthermore, while the world marvels at the potential of real-time data, the sheer
volume, velocity, and variety of this data present formidable challenges. Data man-
agement, storage, security, and analysis become paramount, with organizations
needing to pivot toward more robust and scalable solutions, like cloud computing
and advanced analytics, to harness the true potential of this data influx.
Lastly, the human element cannot be overlooked. As industries tread the path
of automation, ensuring the workforce is skilled and adaptable to these changes is
pivotal. The gap between the existing workforce’s capabilities and the demands of
Industry 4.0 technologies necessitates continuous training, reskilling, and upskilling
initiatives [17].
In essence, the journey toward the full realization of Industry 4.0 is rife with chal-
lenges. However, these challenges, when viewed through the lens of opportunity, can
pave the way for innovations that not only enhance operational efficiency but also
foster a culture of continuous learning and adaptation [18].

9.5.1 Ensuring Interoperability and Addressing Compatibility in Modern Systems
In the expanding digital domain, the ability of systems to work cohesively is crucial.
Both interoperability and compatibility stand as pillars ensuring smooth, efficient,
and effective digital interactions.

1. Ensuring interoperability among interconnected systems: Interoperability
is vital for the seamless functioning of a variety of software and hardware
components from different vendors.
a. Significance of interoperability: The essence of interoperability is not
just connectivity but the ability of systems to understand and effectively
use the exchanged data.
b. Challenges to interoperability: Challenges include a lack of standard-
ization across industries, proprietary designs, and issues of data secu-
rity and privacy.
c. Advancements aiding interoperability: The push for open standards
and the increasing adaptation of cloud platforms provides an avenue for
enhancing interoperability.
2. Compatibility issues and solutions: With frequent updates and evolving
technologies, ensuring compatibility remains a persistent concern in the
digital space.
a. Root causes of compatibility issues: These arise due to evolving soft-
ware versions, changing hardware capabilities, and discrepancies
between software-software interfaces.
b. Solutions to compatibility issues: Middleware solutions have been pivotal in bridging the gap between different software applications, ensuring they communicate efficiently; a minimal adapter sketch follows this list [19].
c. Virtualization: Virtual environments can be utilized to run software
that isn’t inherently compatible with certain hardware or other software.
d. Frequent updates: Developers provide patches to tackle compatibility
challenges that emerge due to system or third-party software updates.
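As a minimal illustration of the middleware idea in item 2b, the adapter sketch below maps two hypothetical vendor payload formats onto a single common schema; every field name here is invented.

    from typing import Callable, Dict

    def from_vendor_a(p: dict) -> dict:
        return {"device": p["dev_id"], "temp_c": p["tmp"], "ts": p["time"]}

    def from_vendor_b(p: dict) -> dict:
        # Vendor B reports Fahrenheit; normalize to Celsius.
        return {"device": p["serial"], "temp_c": (p["temp_f"] - 32) / 1.8, "ts": p["timestamp"]}

    ADAPTERS: Dict[str, Callable[[dict], dict]] = {"A": from_vendor_a, "B": from_vendor_b}

    def normalize(vendor: str, payload: dict) -> dict:
        return ADAPTERS[vendor](payload)

    print(normalize("B", {"serial": "b-9", "temp_f": 140.0, "timestamp": 1700000000}))

New vendors then require only a new adapter function, not changes to downstream consumers.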

9.5.2 Managing Massive Data Influx
The digital age, especially under the aegis of Industry 4.0, is characterized by a
deluge of data. Managing this data is not just about storage but optimizing its poten-
tial for actionable insights. Traditional storage mechanisms buckle under the sheer
volume, velocity, and variety of this data influx. As such, cloud computing emerges
as a beacon, offering not just storage solutions but also processing capabilities, par-
ticularly beneficial for real-time analytics.
Data governance morphs from a best practice to a critical imperative, ensur-
ing data quality, security, and compliance. In an interconnected realm, where data
flows freely, cybersecurity measures become the bulwark against potential breaches.
Coupled with this is the ever-present need to respect privacy, demanding compliance
with stringent regulations and the adoption of innovative techniques like anonymiza-
tion [20].
The path to fully realizing the potential of Industry 4.0 is paved with techno-
logical challenges. However, with collaborative effort, innovative thinking, and a
commitment to continuous learning and adaptation, these challenges can be trans-
formed into stepping stones toward a brighter, more efficient, and integrated indus-
trial future.

9.6 UPSKILLING AND RESKILLING FOR THE WORKER SKILL GAP
Industry 4.0 marks the dawn of the Fourth Industrial Revolution, redefining the limits of contemporary business and manufacturing processes through the rapid convergence of automation, data analytics, and networked systems. As this transition
spreads across many industries, it presents a variety of difficulties, with the talent
landscape serving as the primary emphasis [1].
The growing gap between the skill set of the current workforce and the competen-
cies required by this new industrial paradigm is at the heart of these difficulties. According to the World Economic Forum, roughly 54% of all employees would need significant upskilling and reskilling by 2022. This urgency is brought on by the emergence of AI, sophisticated robots, and the IoT, which, while opening up new opportunities, also render a number of old occupations obsolete [21].
The need for upskilling and reskilling activities has grown more urgent as a result
of this significant shift. The practice of teaching new skills to the current workforce
so they can handle more complicated job needs is known as upskilling. On the other
hand, reskilling refers to preparing people for completely new employment positions
after their existing roles become obsolete. Both of these strategies are crucial for
giving the world’s workforce the resources it needs to prosper in the Industry 4.0
age [22].
This enormous shift has already been visible in a number of industries. For
instance, manufacturing, which once relied primarily on physical labor, is moving
toward automation and smart factories. Workers who have historically performed repetitive activities are increasingly seeing their roles marginalized. Yet rather than a simple reduction in jobs, the shift is creating demand for more specialized positions such as systems analysts, robotics technicians, and data scientists.
In this evolving landscape, corporations, academic institutions, and governments
hold the collective responsibility to initiate proactive upskilling and reskilling
strategies. For corporations, this isn’t merely a matter of altruism but an imper-
ative for sustainability. Companies that actively engage in upskilling initiatives
are better poised to retain top talent, maintain operational efficiency, and ensure
long-term profitability. Such investments in human capital foster a resilient and
adaptable workforce, capable of steering organizations through technological dis-
ruptions [23, 24].
Academic institutions, on their part, need to reconceptualize curricula, ensuring
they align with real-world industrial requirements. Traditional educational models,
which often focus on rote learning, must give way to more experiential and project-
based learning methodologies, fostering critical thinking, creativity, and adaptability
among students [25].
Governments play a pivotal role as well. Beyond the macroeconomic perspective
of ensuring employment, they need to facilitate an ecosystem conducive to continu-
ous learning. This can be achieved through public-private partnerships, tax incen-
tives for corporations investing in upskilling, and the establishment of national skill
development programs [26].

9.6.1 Addressing the Changing Workforce Dynamics
The world stands at the precipice of an unprecedented technological transformation,
characterized by the digital metamorphosis of every sector. One of the most impacted
spheres in this revolution is the global workforce. This transformation brings forth
the pressing need for consistent adaptation, requiring both individuals and organiza-
tions to frequently refresh their skills to stay relevant. This chapter delves into the
dynamics of the rapidly altering workforce landscape, highlighting the urgency of
continuous learning and the strategies needed to bridge the widening skill gap.

1. Importance of upskilling and reskilling: As the Fourth Industrial Revolution,
or Industry 4.0, gains momentum, many skills that were once considered
crucial are on the verge of becoming obsolete. This is not just speculative.
The World Economic Forum, in its 2018 report, underscored the dramatic
shift by noting that approximately 65% of today’s primary school children
would eventually engage in jobs that are yet to be created [21].
However, upskilling – the process of acquiring new and relevant com-
petencies – and reskilling – learning new skills for a new position – are not
just about individual growth and preparedness. They have emerged as the
lifeblood of organizations aiming to harness the full potential of the digital
revolution. Deloitte’s Global Human Capital Trends report of 2019 offers
empirical evidence of this effect. The report highlights that organizations
that prioritize upskilling are more adept and better positioned to benefit
from digital transformations.
2. Bridging the skill gap through training programs: Recognizing the need
for upskilling and reskilling is one facet of the equation. The more signifi-
cant challenge is addressing the conspicuous skill gap that persists. A 2018
report by McKinsey starkly illustrates this gap, revealing that a meager 16%
of companies are confident about possessing the requisite skills for their
envisioned future [27]. Addressing this gap necessitates the execution of a
multifaceted strategy:
a. Collaboration with academia: The synergy between academic institu-
tions and industry players is indispensable. Academic curricula should
evolve to echo the demands of the real world, ensuring graduates are
primed to contribute productively from day one.
b. Lifelong learning platforms: With the half-life of skills shrinking, con-
tinuous learning isn’t a luxury; it’s a necessity. Organizations should
tap into the vast reservoirs of knowledge available on platforms such
as Coursera and edX. These platforms offer myriad courses ranging
from niche technological skills to broad-based management principles,
ensuring that employees remain conversant with emerging trends [28].
c. Holistic skill development: While the lure of technical skills, like AI
expertise or data analytics, is undeniable, it would be a gross oversight
to neglect soft skills. A 2019 PwC study underlines the importance of
a balanced skill set. Skills like effective communication, adaptability,
and critical thinking are as paramount as their technical counterparts.
Modern training programs should, therefore, be structured to foster this
dual growth [29].

9.6.2 Navigating the Human-Machine Collaboration
As the world stands at the precipice of Industry 4.0, a significant misconception
frequently confronted is the overemphasis on machines supplanting human roles.
Contrary to the looming fear of obsolescence, the heart of this industrial revolution
lies not in replacing, but rather harmonizing human and machine capabilities. A
deeper exploration of this collaborative paradigm offers insights into future work-
force dynamics and strategies.

1. Roles of workers in an automated landscape: The canvas of an automated
world often paints a grim picture of joblessness and redundancy. While it’s
undeniable that certain tasks are destined for automation, it’s an oversim-
plification to suggest a mass extinction of roles. In reality, as automation
technologies mature, the contours of job roles evolve.
One needn’t look further than the assembly lines of the past to under-
stand this evolution. With the introduction of automated systems, manual
assembly roles diminished. However, parallelly, new roles in system main-
tenance, quality assurance, and control emerged. Machines, as powerful as
they are, still require supervision, calibration, and troubleshooting – areas
innately suited for human expertise.
Arntz, Gregory, and Zierahn, in their seminal 2016 research, examined this issue in depth. Their findings challenge the overarch-
ing fears of total automation. According to them, less than 10% of jobs
bear the potential for full automation. This statistic is revelatory, emphasiz-
ing the fact that the overwhelming majority of roles, even in an advanced
technological landscape, necessitate a human touch. The conclusion isn’t
that humans will be obsolete, but that their roles will transition. Instead of
merely operating machinery, humans might find themselves in roles where
they oversee algorithms, supervise robotic processes, or troubleshoot com-
plex systems [30].
2. Emphasizing creativity and problem-solving skills: Distinguishing the
competencies of humans and machines provides a roadmap for future
training and education paradigms. Machines, by design, are excellent at
tasks that are repetitive, well-defined, and data-driven. They can process
vast amounts of information at speeds incomprehensible to humans. Yet,
when it comes to creativity, abstract thinking, empathy, and nuanced prob-
lem-solving, machines lag significantly behind.
Humans, by nature, are innately creative, adaptable, and capable of lat-
eral thinking. These attributes are difficult, if not currently impossible, to
replicate in machines. Many commentators have extolled these unique human capabilities, arguing that as machines take over data-driven and repeti-
tive tasks, humans should double down on their strengths in creativity and
problem-solving [31].
Training and education, therefore, should pivot in this direction. Instead
of focusing on rote learning and repetitive skill acquisition, there should
be a concerted effort to nurture critical thinking, creativity, and problem-
solving skills. This not only future-proofs the workforce but also ensures
that the collaboration between humans and machines is truly symbiotic.
Machines can handle vast data sets and execute repetitive tasks with unpar-
alleled accuracy, while humans can tackle unprecedented challenges, ideate
novel solutions, and drive innovation.

9.7 CHANGE MANAGEMENT AND CYBERSECURITY IN TRANSITION
In the current era of digital transformation, businesses are constantly grappling with
the dual challenges of effectively managing organizational change and ensuring
robust cybersecurity. This dual focus ensures not only the seamless adoption of novel
technologies but also the security of assets in an increasingly digital environment.

9.7.1 Importance of Effective Change Management
As the saying goes, the only constant is change. Yet, in many organizations, the
introduction of new technologies is often met with resistance.

1. Overcoming resistance to new technologies: Employee resistance to tech-
nological changes can stem from a myriad of sources, be it the perceived
threat to job security, concerns of redundancy, or simply the discomfort
associated with unfamiliarity. Successfully navigating these concerns
becomes paramount for smooth transitions [32].
Change management, at its core, is about communication. Engaging
employees early on, offering them a clear understanding of the reasons
behind technological shifts, and emphasizing the benefits can significantly
alleviate concerns. Moreover, when employees are provided with the neces-
sary training and resources, their perceived self-efficacy in handling new
technologies improves, reducing resistance. Pilot programs, which offer
a phased approach to technological adoption, further allow employees to
adapt at a comfortable pace, ensuring higher success rates [33, 34].
2. Aligning organizational culture with digital transformation: Beyond the
tangible aspects of technology, the cultural fabric of an organization sig-
nificantly influences the outcome of digital transformations. Organizational culture encompasses shared values, beliefs, and norms; as companies digitize, these shared constructs often need realignment [35].
Leadership plays an indispensable role in steering cultural shifts. By setting
a clear vision for the future, leading by example, and ensuring consistent
communication, leaders can bridge the gap between current organizational
ethos and the demands of a digitized environment [36].

9.7.2 Ensuring Robust Cybersecurity Measures
Alongside effective change management, the increasing interconnectedness of the
digital era necessitates heightened cybersecurity measures.

1. Protecting against cyber threats in an interconnected environment: Today’s
business environments are characterized by interconnected systems, from
internal communication networks to broader supply chains. While these
connections drive efficiency and collaboration, they also increase vulnera-
bility to cyber threats. The potential threats, including distributed denial of
service (DDoS) attacks, ransomware, and phishing, to name a few, neces-
sitate multifaceted defense strategies.
An effective cybersecurity strategy encompasses both technical defenses
like firewalls and intrusion detection systems and human-centered strate-
gies. Comprehensive training ensures that employees, often the first line
of defense, can recognize and respond to potential threats. Periodic audits
and penetration testing provide a real-time assessment of vulnerabilities,
enabling organizations to bolster defenses proactively [37, 38].
2. Balancing convenience and security in automation: The allure of automa-
tion is undeniable. Automated processes offer unparalleled efficiency, accu-
racy, and scalability. Yet, each automated and interconnected system, each
IoT device, also presents potential entry points for cyber adversaries [39].
To safeguard against this heightened risk, organizations must strike a
balance. Zero Trust architectures have emerged as a promising solution in
this domain. Rooted in the principle of “never trust, always verify”, these
architectures eschew traditional perimeter defenses for a more holistic
security approach, ensuring each access request, irrespective of its source,
is thoroughly vetted.
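A tiny sketch of the "never trust, always verify" principle, using only the Python standard library: every request carries an HMAC signature that is checked regardless of origin. The hard-coded key is for illustration only; a real system would fetch per-service keys from a vault.

    import hashlib
    import hmac

    SECRET = b"per-service key from a vault"   # illustrative; never hard-code secrets

    def sign(body: bytes) -> str:
        return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

    def verify(body: bytes, signature: str) -> bool:
        # Constant-time comparison; every request is checked, whatever its source.
        return hmac.compare_digest(sign(body), signature)

    msg = b'{"cmd": "open_valve", "line": 4}'
    tag = sign(msg)
    assert verify(msg, tag)
    assert not verify(b'{"cmd": "open_valve", "line": 9}', tag)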

9.8 RESPONSIBLE IMPLEMENTATION OF AUTOMATION AND AI
The integration of automation and AI within the dynamic landscape of Industry
4.0 has unveiled new dimensions of efficiency and innovation. However, this chap-
ter delves into the pivotal importance of responsible implementation, shedding
light on ethical intricacies, societal impacts, and the delicate equilibrium between
human intervention and automation. As automation and AI gain prominence,
ethical concerns pertaining to accountability and justice come to the fore. The soci-
etal implications, including potential job displacement, underscore the need for a
human-centered approach, focused on workforce reskilling. The chapter under-
scores the harmonious coexistence of human and machine, harnessing their respec-
tive strengths while advocating proactive measures to address unforeseen outcomes
through robust testing and continuous vigilance. Ultimately, responsible automation
and AI integration align with ethical principles and societal well-being, catalyzing
the transformative potential of Industry 4.0 [40].

9.8.1 Ethical Considerations in AI and Automation
The advent of Industry 4.0 ushers in profound ethical considerations surrounding
automation and AI, with transparency, justice, accountability, and bias mitigation at
the forefront. Transparency, notably, plays a pivotal role in the context of integrat-
ing automation and AI into Industry 4.0. The opacity of complex AI models often
gives rise to decisions that seem impenetrable and bewildering. This opacity not only
erodes confidence but also hampers comprehension, leaving stakeholders puzzled
about the rationale behind AI-driven conclusions. Responsible AI implementation
calls for algorithms that are both accurate and interpretable. This necessitates the
development of models that offer insights into the factors and patterns that influence
their decisions. Organizations can enhance comprehensibility by shedding light on
the decision-making process, ensuring AI-generated outcomes are comprehensible
to human operators and open to scrutiny for fairness and accountability. Enabling
the examination and understanding of automated judgments fosters trust, nurtures
ethical decision-making, and establishes a framework for ethical AI integration
aligned with human values [31, 41].

Addressing bias and ensuring fairness: The imperative to mitigate biases
within AI systems gains centrality as Industry 4.0 propels the integration of
automation and AI. AI systems learn from historical data, and if this data
harbors biases, algorithms may perpetuate unjust or discriminatory out-
comes. Proactively identifying and rectifying biased data points through-
out the training process are indispensable remedies. Furthermore, adapting
algorithms to accommodate imbalances and ensuring that AI models are
trained on diverse and representative datasets are essential. Integrating fair-
ness assessments during algorithm development aids in identifying and rec-
tifying potential biases prior to deployment. Regular audits of AI systems
are equally essential in detecting and mitigating biases that might evolve
over time. This proactive approach, aimed at preventing automated deci-
sions from inadvertently reinforcing cultural biases, necessitates consistent
monitoring and unwavering commitment. Organizations can champion AI
systems that strive for equitable and unbiased decision-making by acknowl-
edging and rectifying biases. This approach not only adheres to ethical
norms but also advances the development of AI technologies as instruments
for societal transformation, enhancing the legitimacy and societal impact of
AI within the scope of Industry 4.0.
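A fairness assessment can start as simply as comparing outcome rates across groups. The sketch below, on fabricated data, computes a demographic-parity-style disparity; the 0.2 tolerance is an illustrative assumption, not a legal or regulatory standard.

    import pandas as pd

    df = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0],
    })
    rates = df.groupby("group")["approved"].mean()
    disparity = rates.max() - rates.min()
    print(rates.to_dict(), f"disparity={disparity:.2f}")
    if disparity > 0.2:            # illustrative tolerance
        print("flag model for bias review")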
In the context of Industry 4.0, transparency, justice, accountability,
and bias mitigation emerge as pivotal ethical considerations in the realm
of automation and AI integration. These core principles collectively guide
responsible AI adoption, enabling organizations to harness the revolution-
ary potential of technology while upholding ethical standards. Prioritizing
transparency ensures comprehensible decisions, fostering trust and com-
prehension. Fairness guarantees that AI procedures yield impartial
outcomes, bolstering equity. Accountability mandates that automated deci-
sions adhere to ethical norms and human values. Addressing biases necessi-
tates the intentional identification and rectification of biased data, ensuring
impartiality. By infusing these principles into AI adoption, organizations
navigate the transformative landscape of Industry 4.0 while cultivating pub-
lic trust, charting a future where technology serves humanity ethically and
responsibly.

9.8.2 Achieving Balance between Human Oversight and Autonomous Decision-Making in Industry 4.0
The delicate equilibrium between human intervention and autonomous decision-
making is a paramount consideration in the context of Industry 4.0. This chapter
delves into the intricacies of harmonizing the roles of human operators and auto-
mated systems, with a focus on collaborative engagement, oversight, and adapt-
ability. Achieving this balance is essential for effectively navigating the challenges
posed by modern manufacturing.

1. Human oversight in critical processes: The growing prominence of
Industry 4.0 highlights the critical role of human oversight in pivotal oper-
ations. Automation and AI excel in routine tasks but often face limita-
tions in complex, uncertain, and ethical contexts. Human operators bring
nuanced judgment, contextual understanding, and the ability to address
ethical dilemmas – qualities that automated systems may lack. Authorizing
human operators to modify, amend, or override automated decisions, par-
ticularly in vital processes, enhances accountability and mitigates the risk
of “automation bias”. Human oversight fosters transparency and confi-
dence in AI systems, contributing to their effective integration [42, 43].
2. Designing AI systems with fail-safe mechanisms: Ensuring the responsible
and secure integration of automation and AI necessitates the incorporation
of fail-safe mechanisms. These mechanisms enable AI systems to handle
unexpected failures and exceedances gracefully. An effective approach
involves identifying potential failure scenarios through comprehensive
risk assessment and scenario modeling. This proactive strategy anticipates
instances where automated systems might produce unfavorable outcomes.
Developing graceful shutdown protocols ensures that AI systems transition
to safe states in the event of breakdowns, reducing the risk of collateral dam-
age [44]. Real-time monitoring and anomaly detection serve as crucial tools
for identifying deviations and initiating fail-safe actions promptly [45].
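One minimal realization of such a fail-safe wrapper in Python: run the controller, but fall back to a conservative safe state whenever it raises an exception or emits an implausible command. All names and bounds here are hypothetical.

    SAFE_SPEED = 0.0   # conservative fallback: halt the actuator

    def fail_safe(controller, reading: float, lo: float = 0.0, hi: float = 100.0) -> float:
        # Graceful degradation: any failure or out-of-bounds output yields the safe state.
        try:
            cmd = controller(reading)
        except Exception:
            return SAFE_SPEED        # breakdown: transition to the safe state
        if not (lo <= cmd <= hi):
            return SAFE_SPEED        # anomaly: reject an implausible command
        return cmd

    print(fail_safe(lambda r: r * 1.5, 40.0))   # 60.0, within bounds
    print(fail_safe(lambda r: r * 9.9, 40.0))   # 0.0, clamped to the safe state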
3. Harmonizing human intervention and autonomous decision-making in
Industry 4.0:
a. A dual approach of oversight and fail-safe mechanisms: The pursuit of
harmoniously blending human intervention and autonomy in decision-
making has emerged as a pivotal concern within the Industry 4.0 land-
scape. As people navigate this technological revolution, achieving a
delicate balance between human judgment and automated precision is
essential. This chapter sheds light on the intricate equilibrium required to
optimize the roles of both human workers and automated systems, with
a distinct focus on collaborative engagement, oversight, and adaptability.
b. Collaborative human-machine synergy: Central to this equilibrium is the
concept of collaborative human-machine synergy, which recognizes the
expertise of automated systems in mundane tasks and data processing.
Simultaneously, it acknowledges the irreplaceable human traits of context
awareness, intuition, and creativity. This harmonization paves the way for
a symbiotic relationship that maximizes the strengths of both entities.
c. Communication enhancement: To facilitate effective communication
between humans and automated systems, the development of user-
friendly interfaces and the integration of natural language processing
are crucial. This combination enables human insights to complement
data-driven conclusions, fostering a holistic decision-making process.
d. Oversight and accountability measures: In the quest for a balanced
Industry 4.0 environment, oversight and accountability measures play a
pivotal role. These mechanisms empower human operators to intervene,
amend, or counter automated decisions, particularly in scenarios of
ambiguity or critical importance. Such interventions ensure that ethical
standards are upheld and foster trust in the capabilities of AI systems.
e. Fail-safe mechanisms ensuring responsible integration: Given the
prevalence of automation and AI, the integration of fail-safe mecha-
nisms gains paramount importance. Fail-safe techniques are designed
to minimize the impact of unforeseen failures or unprecedented cir-
cumstances. These mechanisms ensure that AI systems respond appro-
priately even when confronted with novel data patterns, technical faults,
or ethical dilemmas beyond their training data.

9.9 STRATEGIES FOR OVERCOMING CHALLENGES IN INDUSTRY 4.0
A diverse set of strategies has emerged to address the many difficulties of this transformational era. This chapter examines digital transformation, creativity and cybersecurity, and recruitment and retention methods. Managing digital transformation requires
thorough assessments, well-defined roadmaps for integrating technology, and adapt-
able designs. Cultivating an innovative culture necessitates encouraging experimen-
tation, cooperation, and accepting failure as a driver for growth. To protect against
emerging cyber threats, robust cybersecurity solutions include network security,
data encryption, and workforce education. Upskilling, engagement with educational
institutions, and increasing diversity for improved problem-solving are all part of
developing a skilled workforce. These strategies collectively equip organizations to
manage the complexity of Industry 4.0, assuring they capitalize on its revolutionary
potential while confronting difficulties, supporting innovation, and fostering long-
term growth.

9.9.1 Formulating Comprehensive Digital Transformation Strategies
Crafting effective digital transformation plans is critical in the context of Industry
4.0 in order to reap the benefits of technology breakthroughs. The process starts
with a thorough examination of present procedures, which identifies inefficiencies
and gaps; this diagnostic phase informs the decisions that follow. Organizations then prioritize technologies based on impact and feasibility in order
to create technology adoption roadmaps. These roadmaps ensure that technology
is integrated in an organized manner, hence minimizing disruption. Flexibility in
architecture is crucial to harmoniously blend existing systems with emerging tech-
nologies. Adaptable architectures, including edge computing, facilitate real-time
data processing. These strategies collectively guide organizations in their journey
toward digital transformation, enabling them to optimize operations, remain respon-
sive to industry trends, and capitalize on the opportunities that Industry 4.0 offers.
Through well-planned strategies that align with objectives, organizations can navi-
gate the intricate landscape of technological evolution and emerge as leaders in the
era of digital innovation.
The act of identifying and grasping issues is a foundational step in designing
effective strategies in the Industry 4.0 world, where digital transformation deter-
mines the destiny of enterprises. Organizations embark on an introspective and ana-
lytical journey in order to identify current bottlenecks, inefficiencies, and hurdles
that impede optimal performance. This diagnostic phase lays the groundwork for
the upcoming digital transformation efforts. Identifying all difficulties necessitates
a thorough analysis of the overall operational scene. This entails studying processes,
systems, workflows, and interactions in order to uncover hidden pain points that
would otherwise stymie development. Organizations can use this method to identify
places where manual procedures may be slowing down operations, data silos may
be impeding information flow, or old technology no longer serves evolving require-
ments. Furthermore, identifying difficulties aids in the development of successful
change management techniques. It assists organizations in anticipating the effects
of transformation on various departments, roles, and processes. This understanding
enables the development of customized training programs, communication plans,
and support mechanisms to ensure that staff are adequately prepared to manage
the changes. Finally, identifying and comprehending issues is critical in developing
comprehensive digital transformation strategies. This introspective process not only
exposes where digital technologies can have the most impact but also helps organiza-
tions prepare for the complexities of change. Armed with this knowledge, firms can
deliberately target areas in need of transformation, overcome resistance, and develop
a cohesive plan that meets difficulties head-on, ultimately enabling a successful voy-
age into the revolutionary world of Industry 4.0.
The alignment of organizational goals with technical capabilities is a critical approach in the era of Industry 4.0, when digital transformation is a driving factor.
This alignment ensures that technology adoption is intentional, strategic, and in line
with the organization’s overall goals. It entails a careful assessment of available tech-
nologies as well as a rigorous mapping of how these technologies might be used to
achieve certain goals. The notion that technology should serve as a facilitator of busi-
ness objectives, rather than as a standalone endeavor, is at the heart of this strategy.
Organizations must first declare their strategic aims, whether they are to improve
operational efficiency, improve customer experience, or expand market reach. With
these objectives in mind, the next stage is to investigate the technology landscape
for tools and solutions that are compatible with these objectives. The capabilities of
technology range from IoT and AI to cloud computing and data analytics. Effective
alignment necessitates a thorough awareness of these competencies as well as their
potential impact on other elements of the organization. For example, if the goal is to
improve customer interaction, AI-powered chatbots or personalized recommendation
engines can be investigated. Fostering alignment also necessitates continual commu-
nication between technology and business teams. Open channels of communication
ensure that technology decisions are guided by a comprehensive understanding of
business requirements and that technology projects contribute tangibly to strategic
goals. Organizations that link their aims with their technological capabilities not only
optimize their use of technology but also improve their overall competitiveness and
agility. This strategy promotes a laser-like focus on digital transformation, allowing
organizations to traverse the challenges of Industry 4.0 with purpose, precision, and a
clear vision of how technology can create long-term success and innovation.
Therefore, creating effective digital transformation strategies necessitates a two-
pronged approach – thoroughly identifying obstacles and connecting organizational
goals with technical capabilities. The first aspect comprises the rigorous examina-
tion of existing processes to identify bottlenecks and inefficiencies, offering critical
insights for targeted digital solutions. The second dimension emphasizes the align-
ment of strategic goals with accessible technological tools, resulting in purpose-
driven technology adoption. Organizations optimize their transformative efforts by
connecting goals with attainable technological solutions, increasing competitiveness
and agility. This strategic approach enables firms to overcome problems, capitalize
on opportunities, and thrive in the rapidly changing landscape of Industry 4.0. Through a well-balanced blend of resolving limitations and maximizing technology's promise, these strategies encourage innovation and operational efficiency and position organizations at the vanguard of the digital revolution.

9.9.2 Leveraging Industry 4.0 Transformative Potential


1. Unleashing the power of Industry 4.0 – strategies for effective technologi-
cal integration: The potential of Industry 4.0 hinges on the adept utilization
of key technologies. This chapter delves into two strategic imperatives that
empower organizations to harness the capabilities of Industry 4.0 effectively.
The first approach revolves around real-time connectivity and control within
automation, cultivating a networked ecosystem where data-driven insights
enable agile decision-making and predictive maintenance. The second strategy involves the symbiotic amalgamation of IoT, AI, and cloud technologies. This synergy fosters innovation through insights extraction, predictive
analysis, and personalized experiences. The rise of Industry 4.0 necessitates
strategic adoption of these techniques, driving operational excellence and
innovation to the forefront. This confluence of technologies creates an envi-
ronment where efficiency, responsiveness, and innovation converge, guiding
organizations toward the transformative horizon of Industry 4.0.
2. Real-time connectivity and control within automation: At the heart of
Industry 4.0’s transformative potential lies the approach of real-time com-
munication and control within automated processes. This method seamlessly
links devices, sensors, and machinery to form an integrated network, enabling
instantaneous communication and collaboration. Consequently, an ecosys-
tem characterized by free-flowing data and real-time insights emerges, revo-
lutionizing decision-making and operational efficiency. By synchronizing
sensors and IoT devices, vast amounts of data are collected across the manu-
facturing process, encompassing machine performance, energy consump-
tion, production rates, and environmental variables. This data is transmitted
and processed instantaneously due to real-time connectivity, providing orga-
nizations with previously unattainable actionable insights. The pivotal advan-
tage of real-time networking is evidenced in data-driven decision-making,
empowering managers and operators with real-time information to identify
bottlenecks, anomalies, and avenues for efficiency enhancement. Predictive
maintenance becomes a reality, as data patterns indicate potential machinery
failures, allowing preemptive repairs. Furthermore, real-time communication
facilitates remote monitoring and control, enhancing operational agility and
accommodating flexible work arrangements [46].
3. Symbiotic integration of IoT, AI, and cloud technologies: Realizing Industry
4.0’s transformative potential hinges on the symbiotic fusion of IoT, AI, and
cloud technologies. This convergence forms a dynamic milieu fostering
innovation, efficiency, and data-centric decision-making. IoT, characterized
by interconnected devices and sensors, serves as the conduit for data col-
lection. These devices furnish real-time data from diverse sources across
the production process, supply chain, and end-user interactions. This data
influx enables nuanced insights into operational performance, product uti-
lization, and consumer behavior, facilitating informed strategic decisions.
AI elevates this data deluge by employing intricate algorithms and machine
learning to identify concealed patterns, anomalies, and correlations imper-
ceptible to human observation. This technology unlocks predictive ana-
lytics, enabling trend anticipation, process optimization, and equipment
breakdown prediction, thus enhancing operational efficiency. Cloud tech-
nologies provide the infrastructure for handling the substantial data gener-
ated by IoT and analyzed by AI, offering scalable storage and processing
capabilities. This scalability accommodates varying data volumes during
peak and off-peak operational periods while fostering collaboration and
innovation through a centralized platform [47].

9.10 CONCLUSION
The journey through the complex world of Industry 4.0 is defined by the junction of
technology innovation, labor dynamics, and ethical considerations. The adoption of
Industry 4.0 appears to be a continuous pursuit of operational dominance and lasting
growth based on the conundrums and strategies presented in this chapter.
The necessity for a comprehensive viewpoint on digital metamorphosis is high-
lighted by an assessment of the major roadblocks and their pertinent solutions as
this investigation into Industry 4.0’s difficulties and potential comes to a close. This
requires the intricate task of managing the data influx as well as the delicate blending
of many technologies. The chapter stresses the importance of employee reskilling to
bridge the skills gap and the crucial role that change stewardship plays in ensuring
a smooth transition. Cybersecurity worries have brought attention to the precarious
balance between accessibility and security. A shift toward openness, fairness, and
lessening bias has been prompted by ethical considerations in AI and mechanization.
The solutions put forth here emphasize creating comprehensive digital metamorphosis plans that take into account difficulties and align goals with technological prowess. Harnessing the revolutionary potential of Industry 4.0 hinges on immediate connectivity, mastery of mechanization, and the harmonic marriage of IoT, AI, and cloud infrastructures to foster innovation.
As this chapter’s analysis of Industry 4.0’s problems and solutions comes to a
close, it is becoming clear that this revolutionary era is more like an odyssey than a
conclusion. Companies are urged to adopt a pliable mentality in recognition of the
fact that new horizons will open up and that the landscape is constantly changing.
Understanding that transformation is an ongoing process rather than a passing phase
is necessary in order to view Industry 4.0 as an ongoing journey. Such acceptance
inspires the development of a society that values adaptability and creativity and is
supported by ongoing learning. This necessitates close monitoring of developing trends, technology, and potential disruptions. Businesses may put themselves at the forefront of innovation by recognizing the continual nature of Industry 4.0. This enables them to anticipate problems, grab
emerging opportunities, and maintain a competitive edge. Businesses that internal-
ize change as constant, navigate it strategically, and persistently forge new ground
in their pursuit of distinctiveness will prosper as technology advances and societal
needs change.
The promises Industry 4.0 makes – of competency, dependability, and creativ-
ity – mark the end of this chapter’s investigation of its problems and tactics. These
commitments are concrete goals, not just abstract aspirations, which organizations
can achieve by steadfastly navigating the terrain. By strategically implementing
technologies, streamlining processes, and making the most of available resources,
proficiency takes shape. As infrastructures become more durable, resilient to distur-
bances, and flexible to changing requirements, trustworthiness emerges. When orga-
nizations encourage creativity, embrace change, and use cutting-edge technology to
develop novel solutions, innovation thrives. To keep these promises, an integrated approach that combines technology, labor, and modalities is required. It demands a commitment to morally sound endeavors, responsible automation, and continual
improvement. Anchoring corporate activities to these commitments as they start
their Industry 4.0 journey directs their actions and decisions toward operational bril-
liance, increased reliability, and ground-breaking innovation. By doing this, they
don’t only get around obstacles; instead, they put themselves on a path to unlock
Industry 4.0’s full potential and experience its disruptive effects on many industries
and social structures.

ACKNOWLEDGMENTS
My thanks go to the editor of the volume, Dr. Preethi Nanjundan, for giving me the
chance to write a chapter on this area of AI forecasting and making helpful sugges-
tions, and to Dr. Manjari and Zidan Kachhi for guiding me on the right path.

LIST OF ABBREVIATIONS
AI Artificial intelligence
DDoS Distributed denial of service
Industry 4.0 The Fourth Industrial Revolution
IoT Internet of Things
SMEs Small and medium enterprises

REFERENCES
1. Schwab, K. “The Fourth Industrial Revolution (Industry 4.0) A Social Innovation
Perspective”, Tạp Chí Nghiên Cứu Dân Tộc, pp. 23, 2016.
2. Lu, Y. “Industry 4.0: A Survey on Technologies, Applications and Open Research
Issues”, Journal of Industrial Information Integration, pp. 1–10, 2017.
3. Papulová, Z., Gažová, A., & Šufliarský, Ľ. “Implementation of Automation Technologies
of Industry 4.0 in Automotive Manufacturing Companies”, Transportation Research
Board, 2022. doi: 10.1016/j.procs.2022.01.350. https://www.sciencedirect.com/science/article/pii/S1877050922003593
4. Zhong, R. Y., Newman, S. T., Huang, G. Q., & Lan, S. “Big Data for Supply Chain
Management in the Service and Manufacturing Sectors: Challenges, Opportunities,
and Future Perspectives”, Computers & Industrial Engineering, pp. 572–591, 2017.
5. Lasi, H., Fettke, P., Kemper, H.-G., Feld, T., & Hoffmann, M. “Industry 4.0”, Business
& Information Systems Engineering, pp. 239–242, 2014.
6. Wang, S., Wan, J., Li, D., & Zhang, C. “Implementing Smart Factory of Industry 4.0: An
Outlook”, International Journal of Distributed Sensor Networks, 2016. doi: 10.1155/2016/3159805. https://journals.sagepub.com/doi/full/10.1155/2016/3159805
7. Ghobakhloo, M. “The Future of Manufacturing Industry: A Strategic Roadmap Toward
Industry 4.0”, Journal of Manufacturing Technology Management, pp. 910–936, 2018.
8. Kagermann, H., Wahlster, W., Helbig, J., Kagermann, H., Wahlster, W., & Helbig,
J. “Securing the Future of German Manufacturing Industry Recommendations for
Implementing the Strategic Initiative Industrie 4.0. Final Report of the Industrie 4.0
Working Group”, Acatech – National Academy of Science and Engineering, Scientific
Research Publishing, p. 678, 2013.
162 AI-Driven IoT Systems for Industry 4.0

9. Lampropoulos, G., Siakas, K., & Anastasiadis, T. “Internet of Things in the Context
of Industry 4.0: An Overview”, International Journal of Entrepreneurial Knowledge,
pp. 4–19, 2019.
10. Chen, M., Mao, S., & Liu, Y. “Big Data: A Survey”, Mobile Networks and Applications,
pp. 171–209, 2014.
11. Raghupathi, W., & Raghupathi, V. “Big Data Analytics in Healthcare: Promise and
Potential”, Health Information Science and Systems, 2014. doi: 10.1186/2047-2501-2-3
12. Wang, G., Gunasekaran, A., Ngai, E. W. T., & Papadopoulos, T. “Big Data Analytics
in Logistics and Supply Chain Management: Certain Investigations for Research and
Applications”, International Journal of Production Economics, pp. 98–110, 2016.
13. Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., & Lloyd, S. “Quantum
Machine Learning”, Nature, pp. 195–202, 2017.
14. Preskill, J. “Quantum Computing in the NISQ Era and Beyond”, Quantum, p. 79, 2018.
15. Kagermann, H. “Change through digitization—Value creation in the age of Industry
4.0”, In Management of Permanent Change, Springer, Wiesbaden, pp. 23–45, 2015.
16. Hermann, M., Pentek, T., & Otto, B. “Design principles for Industrie 4.0 scenarios”,
49th Hawaii International Conference on System Sciences (HICSS), 2016.
17. Rüßmann, M., Lorenz, M., Gerbert, P., Waldner, M., Justus, J., Engel, P., & Harnisch,
M. “Industry 4.0: The Future of Productivity and Growth in Manufacturing Industries”,
BCG Global, 2015.
18. Geissbauer, R., Schrauf, S., Koch, V., & Kuge, S. “Industry 4.0 – Opportunities and
Challenges of the Industrial Internet”, PwC, 2014.
19. Huhns, M. N., & Singh, M. P. “Service-Oriented Computing: Key Concepts and
Principles”, IEEE Internet Computing, pp. 75–81, 2005.
20. Emaminejad, N., & Akhavian, R. “Trustworthy AI and Robotics: Implications for the
AEC Industry”, Automation in Construction, 2022. doi: 10.1016/j.autcon.2022.104298
21. World Economic Forum. “The Future of Jobs Report 2018 Insight Report Centre for the
New Economy and Society”, World Economic Forum, 2018.
22. Acquire Media. Robots Need Not Apply: New ManpowerGroup Research Finds
Human Strengths are the Solution to the Skills Revolution, ManpowerGroup Inc, 2023.
23. Bontas, R. “Human Capital Trends Report 2019”, Deloitte Romania, 2019.
24. Mazor, A. H. “Leading the Social Enterprise: Reinvent With a Human Focus”, Deloitte
Insights, 2019. https://www2.deloitte.com/us/en/insights/focus/human-capital-trends/2019/leading-social-enterprise.html
25. Bessen, J. E. “AI and Jobs: The Role of Demand”, SSRN Electronic Journal, 2017. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3106676
26. OECD. “Getting Skills Right: Future-Ready Adult Learning Systems”, OECD, 2014.
27. McKinsey. Skill Shift: Automation and the Future of the Workforce, McKinsey &
Company, 2018.
28. J, F., & M, R. “Building From the Bottom Up – Managing the Future of Work”, Harvard
Business School, 2019.
29. Partner, F. “PwC Report: ‘Upskilling for a Digital World’ Finds Lack of Skills a Global
Concern”, Fair360, 2019.
30. M, A., T, G., & U, Z. “Risk Capital in OECD Countries”, Financial Market Trends,
pp. 113–151, 2006.
31. Biran, O., Cotton, C., & Nohl, C. “Explanation and Justification in Machine Learning:
A Survey”, Foundations and Trends® in Information Retrieval, pp. 105–307, 2021.
32. Kotter, J., & Schlesinger, L. “Choosing Strategies for Change”, Harvard Business
Review, 2008. https://hbr.org/2008/07/choosing-strategies-for-change
33. Armenakis, A. A., & Bedeian, A. G. “Organizational Change: A Review of Theory and
Research in the 1990s”, Journal of Management, 25(3), pp. 293–315, 1999.
Challenges in Digital Transformation and Automation for Industry 4.0 163

34. Lewin, K. “Frontiers in Group Dynamics: Concept, Method and Reality in Social Science; Social Equilibria and Social Change”, Human Relations, pp. 5–41, 1947.
35. Schein, E. H. “Organizational Culture and Leadership”, SSRN, 1985.
36. Kouzes, J. M., & Posner, B. Z. “The Leadership Challenge”, John Wiley & Sons, 1995.
37. Mitnick, K., & Simon, W. “The Art of Deception: Controlling the Human Element of Security”, Hyperion, 2002. https://archive.org/details/artofdeceptionco0000mitn
38. West-Brown, M., Stikvoort, D., Kossakowski, K., Killcrece, G., Ruefle, R., & Zajicek,
M. T. “Handbook for Computer Security Incident Response Teams (CSIRTs)”, Semantic
Scholar, 2003.
39. Schneier, B. “Data and Goliath: The Hidden Battles to Collect Your Data and Control
Your World”, New York, Ny [U.A.] Norton, 2015.
40. Ribeiro, J., Lima, R., Eckhardt, T., & Paiva, S. “Robotic Process Automation and
Artificial Intelligence in Industry 4.0 – A Literature Review”, Procedia Computer
Science, pp. 51–58, 2021.
41. Floridi, L. “The Logic of Design as a Conceptual Logic of Information”, SSRN Electronic
Journal, 2017. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3849209
42. Lee, J. D., & See, K. A. “Trust in Automation: Designing for Appropriate Reliance”,
Human Factors: The Journal of the Human Factors and Ergonomics Society,
pp. 50–80, 2004. doi: 10.1518/hfes.46.1.50_30392. https://journals.sagepub.com/doi/abs/10.1518/hfes.46.1.50_30392
43. Parasuraman, R., & Riley, V. “Humans and Automation: Use, Misuse, Disuse, Abuse”,
Human Factors: The Journal of the Human Factors and Ergonomics Society, pp. 230–
253, 1997.
44. Knight, J. C., & Leveson, N. G. “An Experimental Evaluation of the Assumption
of Independence in Multiversion Programming”, IEEE Transactions on Software
Engineering, pp. 96–109, 2012.
45. Amodei, D., Olah, C., Steinhardt, J., Christiano, P. F., Schulman, J., & Mane, D.
“Concrete Problems in AI Safety”, ArXiv, 2016.
46. Grieves, M., & Vickers, J. “Digital Twin: Mitigating Unpredictable, Undesirable
Emergent Behavior in Complex Systems”, ResearchGate, 2017.
47. Riazul Islam, S. M., Kwak, D., Humaun Kabir, M., Hossain, M., & Kwak, K.-S. “The
Internet of Things for Health Care: A Comprehensive Survey”, IEEE Access, pp. 678–
708, 2015.
10 Design and Analysis of Embedded Sensors for IIoT: A Systematic Review

Kogila Raghu and Macharla Mounika

10.1 INTRODUCTION
In today’s automated environment, embedded systems have grown in significance.
Embedded systems are crucial for accelerating production and managing factory sys-
tems, especially in the area of automation. Systems that are embedded have become
a vital part of many businesses in recent years, revolutionizing the automation of
industrial processes. Manufacturing procedures are streamlined, performance is
improved, and efficiency is optimized. This rise is being fueled by the growing need for automation, the IoT, and smart electronic devices across various industries. This chapter addresses embedded systems in industrial automation as well as some exciting possibilities. A maintenance worker receives real-time data from an Industrial Internet of Things (IIoT) sensor that keeps an eye on equipment and
systems. IIoT sensors offer 24/7 “eyes” on vital assets as opposed to depending on
sporadic inspections by maintenance specialists. As a result, machinery runs more
consistently, minor issues are found quickly, and major breakdowns are avoided.
What separates IoT and IIoT is the level of rigor in the standards. The term “Internet
of Things” refers to any item that can be connected to the Internet, like a conven-
tional smartphone. The IIoT is a more technical subset of IoT that adheres to tight
standards. The security of data flowing from complex machinery or proprietary sys-
tems is ensured by these standards.
Business applications for embedded systems come with a wide range of benefits
and capabilities. Process optimization, well-informed decision-making, and prompt response to anomalies are all made possible by real-time control and monitoring capabilities, as shown in Figure 10.1. By automating routine tasks and streamlining procedures, embedded systems boost efficiency and productivity. By fusing
cutting-edge technologies like artificial intelligence (AI), machine learning (ML),
and IoT, the embedded system provides proactive monitoring, early danger identifi-
cation, and efficient risk management. Additionally, it opens up new possibilities for
complex analytics, preventative maintenance, and autonomous judgment.
Generally, the systems that are embedded are a crucial part of engineering auto-
mation, fostering innovation and allowing companies to prosper in a fast-paced,

cutthroat environment.

FIGURE 10.1 Embedded system roles in IIoT.

Machine control and machine monitoring are the two main
subcategories of embedded industrial automation systems.

• Machine control: It is one of the primary industrial automation applications for embedded systems since it allows control over a variety of machinery
and procedures. Manufacturing tools, sensors, and devices may be accurately
controlled and coordinated when embedded systems operate as the hub.
These systems collect input from sensors, process the data, and generate out-
put signals to regulate the functioning of motors, valves, actuators, and other
components. By controlling and optimizing the control systems, embedded
systems enable the exact and effective execution of industrial operations.

Let’s examine a few use scenarios for machine control:

i. Manufacturing: Machine control with embedded systems is widely employed in manufacturing processes. They control and plan how machinery, including assembly lines, CNC (computer numerical control) machines, and industrial robots, operates. For effective and exact production, embedded systems ensure accurate control over variables including speed, timing, synchronization, and position.
ii. Robotics: Controlling robotic systems requires the use of embedded sys-
tems. They control how robots move, behave, and interact in fields including
healthcare, logistics for warehouses, and the production of cars. Robots can
carry out activities like packaging, welding, and inspection with extreme
accuracy.
iii. Energy management: Embedded systems are used in energy management systems to regulate and monitor energy usage in industrial facilities. Based on demand and efficiency, they manage energy consumption, control power distribution, and optimize energy use. Businesses may track and analyze
energy data using embedded systems, find potential for energy savings, and
put energy conservation measures into action. Various energy consumption
indicators, including equipment efficiency, operational patterns, and power
usage, are continuously monitored by these systems. Embedded systems
can identify patterns and trends in the acquired data that point to potential
opportunities for energy savings.

For instance, they can spot instances of excessive energy use during particular
times of the day or energy-wasting device problems. Businesses can optimize energy
use and cut waste thanks to this information.
Several illustrations of machine monitoring:

i. Predictive maintenance: Smart embedded technologies make it possible to track the performance and health of a machine in real time. These systems
may analyze machine factors such as temperature, vibration, and operat-
ing conditions by gathering data from sensors built into the machinery.
The gathered data is used to spot anomalies and foresee potential prob-
lems, enabling preventive maintenance procedures and reducing unplanned
downtime.
ii. Quality control: In machine monitoring, embedded systems put a strong
emphasis on the consistency and quality of manufactured goods. To uphold
constant quality standards, they keep an eye on factors like pressure, speed,
size, and other pertinent characteristics. For instance, to verify that the
components produced adhere to the necessary criteria, an embedded sys-
tem may monitor pressure levels throughout the injection molding proce-
dure. The system may sound an alarm or take corrective action to address
the problem if the pressure deviates from the permissible range. By doing
this, the high level of product quality will be preserved.
iii. Safety and fault detection: In production settings, machine monitoring sys-
tems can identify potential problems or harmful circumstances. To spot
departures from typical operating norms, they regularly monitor machine
performance and operational circumstances. For instance, the embedded
system can activate an alert or safety precaution if an abnormal temperature
rise is noticed in a motor, signaling a potential failure or overheating. This
will stop additional harm or mishaps. Here, maintaining a secure work-
ing environment and safeguarding both persons and equipment are the key
concerns.

The IoT, which was created because of the improvement of wireless technology,
has recently undergone major development to improve its usability for both current
and upcoming real-life applications. The term IoT was first introduced in 1999 by a corporate group working on radiofrequency identification (RFID). Electronic devices, AI, ML, and data analytics then gained acceptance and started to draw interest for practical applications [1]. IoT nodes are physical objects with sensors that are utilized for online data communication.

FIGURE 10.2 IoT and IIoT.

In light of this, there is no doubt about this network system, which is built on a variety of well-established technologies, including sensor networks, embedded systems, and ubiquitous informatics. Besides its many advantages, IoT has already been deployed in healthcare, smart home, and transportation settings, significantly benefiting enterprises by enabling operations to be better organized, controlled, monitored, and optimized at a lower price. Businesses can run more cheaply thanks to
the IIoT, which cuts operational and capital expenditures (CAPEX) [2]. The term
“computer-linked, networked equipment” also applies to production and energy
management systems in the industrial sector. A smart network system for large- and
small-scale networks is used in Figure 10.2 to show how industrial IoT and consumer
IoT interact to maintain the digital link. IIoT is a crucial step in improving the opera-
tional performance of manufacturers in this context because it is tied to the industrial
sector while IoT is active in the retail sector.
IIoT, a sophisticated subcategory of IoT that merges interconnected devices and technology with automation applications, is specifically created for industrial automation communication. Before the industrial IoT, most industrial wireless technologies were built on ad hoc solutions, but more adaptable and cost-effective technologies are now being developed, notably WirelessHART and ISA100.11a [3]. The IIoT is an inspiring concept that links numerous environmental sensors through the Internet, offering incredible prospects for a better way of life [4]. The IIoT may merge the information technology (IT) and operational technology (OT) domains, resulting in lower operational expenses and greater significance for system-to-system communication, making it the most effective technique for achieving production performance.
In this situation, it is possible to carry out applications for industry, including
industrial machinery production and energy operation in distribution systems,
more affordably. IIoT use cases support the creation of “Smart Factories,” which link all sensor-equipped equipment via the Internet to provide advanced functionality and visibility over real-time data. IoT environmental monitoring in industry
aids in preserving a secure working location by sensing smoke and other impuri-
ties and evaluating the quality of the air and water. Each cluster uses Bluetooth to
send information gathered from the working environment to the main server [4].
Industrial automation, smart grid, and smart power are examples of IIoT applications
that handle massive volumes of data rapidly and effectively to raise production at low cost. Due to IIoT's promise of enhanced customer practices, data privacy, increased GDP, continuous data flow, and many other benefits, industry sectors
are growing faster than they would on their own. Industrial machines and systems
will be able to detect their surroundings and other factors and react accordingly as
this technology advances. By 2025, it is anticipated that business-to-business (B2B)
solutions will bring in $11 trillion in annual income, or 70% of the total amount [5].
With the help of smart IIoT gadgets, machines can continuously change their modu-
lation strategy in order to operate and address mechanical faults without the need
for manual labor.
The IIoT technology is still in its early days; hence, the problems listed above are
to be anticipated, despite the recent emergence of several IIoT applications. A num-
ber of communication-related issues have been mentioned in prior IIoT assessments,
but they didn’t go into detail about how these issues might be resolved by IIoT-
enabled technology. Furthermore, a lot of studies have only focused on the problems
and applications that now exist, ignoring the chances that this expertise presents for
corporate growth and amplified consumer happiness. In order to achieve its objec-
tive, this research intends to shed light on the infrastructure and technology that
enable the IIoT, with an emphasis on their relevance in fostering worldwide indus-
trial progress, their applications, challenges, and future proposals for fixing present issues and opening up new prospects. The present review's precise goals are as follows:

i. Describe the current state of knowledge in the domain to aid academics in understanding IIoT theories, models, practical research, and advancements.
ii. By combining and condensing the findings of pertinent studies on IIoT, it is
possible to provide a thorough outline of the modern expertise in the sector
and spot trends.
iii. Highlight the unanswered issues, unexplored regions, and emerging trends
related to IIoT by identifying research gaps and chances for additional
investigation.
iv. Research findings, gaps, and an assessment of the current level of knowl-
edge suggest prospective tactics and approaches for advancing the topic.

This analysis will help company leaders understand how to use IIoT technology
to grow their industries, create new business models, and become more competitive
in the marketplace.

10.1.1 Industrial Automation’s Use of Embedded Systems in the Future


The demand for automation is expected to increase, and embedded systems are positioned to play a significant part in industrial automation. These solutions could boost production and efficiency and stimulate innovation in industrial processes. Additionally, it is anticipated that combining embedded systems with cutting-edge technologies like AI and IoT would further improve their capabilities. In general, embedded systems are crucial for giving firms the chance to
prosper in the dynamic and cutthroat world of industrial automation.
Recent industrial systems are not complete without feeding sensor information to the controllers, monitors, and other operating technologies powering the industry. Although sensor networking has been used for a while, the advent of internet connectivity has increased both the opportunities and challenges of employing sensor systems. Sensors are now part of the IIoT, which has expanded both the design potential and the challenges.
Sensors serve a variety of purposes in modern manufacturing. Along with providing information for process control, they also assist with quality assurance, asset monitoring, worker safety, and other tasks. Additionally, the advent of potent AI- and cloud-based analytical software tools has made it possible to utilize sensor data to reduce production costs through process improvement and preventive maintenance. After being transferred to the internet, that data can be utilized for a number of purposes, including supply management and international resource coordination.
There are many different sensor types that can be used for these various applica-
tions, and new and better models are always being developed. Among the most popu-
lar sensor kinds are those for light, motion, temperature, position, presence, vision,
etc., which are shown in Figure 10.3.

FIGURE 10.3 Various sensors used in IoT.



The information provided by this variety of sensor categories has typically been a limited resource, restricted to a certain facility or possibly a single appliance. However, sensor data is now accessible worldwide because of the development of low-cost, wide-area connection technologies. This accessibil-
ity has substantially increased the potential applications for sensor data and infor-
mation. Predictive maintenance, which observes sensor data to track the status of
equipment almost instantly and spot critical maintenance needs before equipment
fails, sparked immediate and widespread attention. However, there are a ton of other
ways to make use of the ability to connect any industrial sensor to every resource
accessible. Unfortunately, connecting a sensor to the internet and using the data is
not always simple, especially when working with older sensor systems. Industrial
sensors convey their data via several protocols to their typical consumers. However, at a minimum, protocol conversion is required because the internet carries data in the TCP/IP (Transmission Control Protocol/Internet Protocol) format. If numerous sensors require connectivity, a router or gateway could be needed to concentrate and aggregate the data. The sensor layer architecture is shown in Figure 10.4.
Additionally, the data must be stored somewhere for subsequent retrieval and, fre-
quently, processing, which requires employing some sort of cloud service.
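To make the protocol-conversion step concrete, below is a minimal gateway loop sketch in Python that polls a local sensor and republishes readings over MQTT, a transport commonly used in IIoT systems. It assumes the widely used paho-mqtt client library (1.x-style constructor); the read_sensor() function, broker address, and topic name are hypothetical placeholders for whatever fieldbus driver and naming scheme a real deployment would use.

```python
import json
import random
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER_HOST = "broker.example.local"   # hypothetical broker address
TOPIC = "plant1/line3/temperature"     # hypothetical topic naming scheme


def read_sensor() -> float:
    """Placeholder for a real driver (e.g., a Modbus or 4-20 mA read).
    Here it simulates a temperature in degrees Celsius."""
    return 60.0 + random.uniform(-2.0, 2.0)


def main() -> None:
    client = mqtt.Client()                 # paho-mqtt 1.x style constructor
    client.connect(BROKER_HOST, 1883)      # plain TCP; production would use TLS
    client.loop_start()                    # background thread handles network I/O
    while True:
        payload = json.dumps({
            "ts": time.time(),             # timestamp for cloud-side analysis
            "temp_c": read_sensor(),       # value converted from the native protocol
        })
        client.publish(TOPIC, payload, qos=1)  # at-least-once delivery
        time.sleep(5)                      # polling interval


if __name__ == "__main__":
    main()
```

In practice, a gateway would run one such loop per connected sensor and concentrate the resulting streams toward the cloud service mentioned above.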
Naturally, since the link is bidirectional, connecting the sensor to the web offers
the possibility of remote configuration and operation. As a result, the security of both the data provided and the incoming control commands must be ensured. Additional requirements resulting from connectivity include authenticating users, setting up new devices, and keeping track of a large number of sensors. Because of this, an IIoT sensor device essentially comprises a composite integration of software and hardware that connects the device at the edge with the cloud services. The pro-
cess for generating this collection varies greatly depending on the circumstance.

FIGURE 10.4 Sensor system layers.



10.1.2 Most Common Sensors for Industrial Automation


There are hundreds of sensors available for various industrial uses. Many of these
fall into broad categories, and there are particular sensors available for different pur-
poses. Here is a list of the most popular sensors on the market right now.

a. Vibration sensors: Certain assets’ vibration levels are monitored by vibration sensors. Vibration frequently serves as a precursor to impending fail-
ure. A maintenance worker can be sent out to inspect vibrating equipment
and perform any necessary adjustments or repairs right away.
b. Temperature sensors: Typically, temperature sensors make sure that a piece
of equipment remains within a safe temperature range. This can stop a
boiler or other asset from overheating. Products must be kept in a freezer or
refrigeration equipment within a safe range for the food business.
c. Proximity sensors: These sensors help to alert an operator when one piece
of apparatus is too close to another. One environment where they are widely employed is fleet vehicles. If a vehicle or forklift is about to collide with something, a sensor will alert the operator.
d. Gas sensors: If smoke or any unpleasant gas leaks into an area, these sen-
sors can notify the maintenance team. This modification can significantly
improve the health and safety of workers when it comes to hazardous gases.
e. Security sensors: To monitor motion in specific places, security sensors can
be installed next to important windows and doors. Security sensors can aid
in the detection of unauthorized visitors if an organization demands high-
security requirements.
f. Humidity sensors: A facility might need to keep an eye on the humidity
levels in the immediate area depending on the sensitivity of the nearby
equipment. When levels deviate from a predetermined acceptable range,
notifications can be sent right away.
g. Pressure sensors: A maintenance specialist can specify the maximum pres-
sure permitted for a specific asset during installation. As a safety measure,
the asset may be programmed to automatically shut off if the equipment
exceeds that pressure (a minimal threshold-alert sketch follows this list). The repair can then be made by releasing a work order.
h. Level sensors: These sensors can keep an eye on a piece of equipment’s
fluid level. A warning can be provided when low levels endanger an asset’s
functionality so that fluids can be replaced. Level sensors can also detect
powder, other materials, or even rubbish in a skip to initiate a maintenance
procedure.
i. Infrared sensors: These sensors measure released heat or emit or detect
infrared radiation. Infrared sensors can be used to detect things like remote
control signals or blood flow in medical applications.
j. Theft sensors: Theft sensors, which are widely used in retail, can be fas-
tened to pricey objects to make sure they remain within a predetermined
area. Theft sensors may be installed in facilities on pricey tools or other
goods that are prone to disappearing.
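Many of the sensor categories above (vibration, temperature, pressure, level) reduce in software to the same pattern: compare each reading against configured limits, then raise an alert or shut the asset down. The sketch below is a minimal, generic illustration of that pattern in Python; the limit values, check interval, and the read_pressure(), notify_maintenance(), and shutdown_asset() hooks are hypothetical stand-ins for a real driver and control interface.

```python
import random
import time
from dataclasses import dataclass


@dataclass
class Limits:
    warn: float      # level at which maintenance is notified
    shutdown: float  # level at which the asset is stopped as a safety measure


def read_pressure() -> float:
    """Hypothetical driver hook; simulates a pressure reading in bar."""
    return 8.0 + random.uniform(-1.0, 4.0)


def notify_maintenance(msg: str) -> None:
    """Hypothetical hook, e.g., releasing a work order in a maintenance system."""
    print("ALERT:", msg)


def shutdown_asset() -> None:
    """Hypothetical hook into the asset's control interface."""
    print("Asset shut down")


def monitor(limits: Limits, interval_s: float = 1.0) -> None:
    """Compare each reading against configured limits and act accordingly."""
    while True:
        value = read_pressure()
        if value >= limits.shutdown:
            shutdown_asset()  # safety action comes first
            notify_maintenance(f"Shutdown triggered at {value:.1f} bar")
            break
        if value >= limits.warn:
            notify_maintenance(f"Pressure high: {value:.1f} bar")
        time.sleep(interval_s)


monitor(Limits(warn=10.0, shutdown=11.5))
```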

10.1.3 Sectors Requiring Industrial Sensors


Industrial sensors are used across a wide range of sectors. Agriculture- and health-
care-related businesses need equipment and asset monitoring. Sensors can be used
in manufacturing, fleet management, and oil and gas facilities to monitor equipment
performance.

• Agriculture: IIoT sensors are frequently used by businesses in the agriculture sector to keep an eye on a range of machinery. Weather stations, irriga-
tion systems, and crop machinery can all benefit from sensors.
• Healthcare: Healthcare organizations can use IIoT sensors to monitor
employees, patients, and inventories in addition to equipment and inventory.
IoT devices are widely used in the medical sector to track patients’ vital
signs and overall health. For long-term care and treatment, this information
may occasionally be sent to a healthcare provider.
• Manufacturing: A wide range of IIoT sensors can be used in manufactur-
ing facilities to monitor assets, machinery, and production lines. To avoid
malfunctions, costly damages, or accidents, machines can occasionally be
configured to automatically turn off when sensors approach thresholds.
• Fleet control: Sensors can support effective tracking of everything, whether
you’re managing a fleet of cars, trucks, or forklift equipment. The need for
maintenance may be indicated by sensors when specific usage or mileage
criteria are reached. Machines can be protected from collisions with other
vehicles or machinery by using proximity sensors.
• Gas and oil: IIoT sensors that can find leaks and track assets are useful
for oil and gas rigs, pipelines, and other infrastructure. Additionally, warn-
ings can be set up if any number of measurements go outside of acceptable
bounds, requesting necessary preventive maintenance.

10.2 METHODOLOGY
The development of embedded sensors for IIoT applications is a crucial undertaking
that requires careful attention to a number of important factors. These sensors are
the primary data collectors in industrial settings; thus, reliability, precision, efficiency, and connectivity should be prioritized in their design. Here is a step-by-step guide for creating embedded sensors for the IIoT:

i. Determine the sensing needs: Begin by comprehending the precise specifications of the industrial process or application. Decide what measurements
are required, such as those of chemical characteristics, vibration, tempera-
ture, pressure, humidity, or any other relevant variables.
ii. Selecting a sensor: Based on the variables you must measure, select the
right kind of sensor. Take into account elements like accuracy, precision,
range, and environmental circumstances (such as extremes in temperature,
humidity, or the presence of corrosive substances).
iii. Power efficiency: When designing the sensor system, consider how much power it will use, especially if the sensors will be installed in inhospitable or distant areas. Use energy-efficient components and low-power microcontrollers whenever possible. To increase the lifespan of sensor batteries, use power management strategies such as sleep modes, duty cycling, or energy harvesting (such as solar panels).
iv. Sensor integration: Make sure the sensor can be seamlessly incorporated
into the machinery or procedure used in the industry. To make the sensor
durable and resistant to environmental conditions like dust, water, or vibra-
tions, mechanical design considerations may be necessary.
v. Data acquisition and processing: Analog-to-digital conversion (ADC)
circuitry should be used for data acquisition and processing to transform
analog sensor signals into digital information. To enhance signal quality
and lower noise, use signal conditioning techniques (such as amplification
and filtering). For data processing and preliminary analysis, if necessary,
microcontrollers or microprocessors should be employed.
vi. Protocols for communication: To send sensor data to the main IIoT system,
use the best communication protocols. Common choices include cellular
networks, Wi-Fi, ethernet, Bluetooth, Zigbee, and LoRaWAN. Encryption
and authentication techniques ensure secure data transmission.
vii. Data compression: Reduce the amount of data transmission overhead by
effectively compressing and packing sensor data (see the sketch after this list). Data transmission is sped up, and bandwidth requirements are decreased.
viii. Security precautions: Put in place strong security mechanisms to safeguard
both the sensor and the data it collects. This covers data encryption, firm-
ware updates, and secure boot. Control access to sensor data using authen-
tication and authorization procedures.
ix. Maintenance and calibration: To keep the sensors’ accuracy, calibrate
them frequently. Use self-diagnostic tools to find sensor drift or failures.
Create sensors that are simple to maintain and replace, especially in harsh
or distant situations.
x. Scalability and interoperability: When designing sensors, keep in mind
that the IIoT network may eventually grow or receive additional sensors. In
order to improve interoperability with other components and systems, make
sure the sensors adhere to industry standards and protocols.
xi. Testing and validation: To verify the embedded sensors’ dependability and
performance, put them through a thorough testing process in actual indus-
trial settings. To make sure the sensors match the requirements, conduct
field tests and gather data.
xii. Data processing at the edge: Take into account integrating edge computing
capabilities into the sensors to carry out early data processing and filtering.
This will ease the load on the central IIoT system and reduce latency for
crucial decisions.
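As a concrete illustration of steps (v) and (vii), the sketch below shows, under simplifying assumptions, how raw ADC counts might be converted to engineering units, smoothed with a simple moving-average filter, and packed into a compact binary record before transmission. The 12-bit ADC range, scaling constants, and record layout are hypothetical examples, not a prescribed format.

```python
import struct
import time
from collections import deque

ADC_MAX = 4095          # assumed 12-bit ADC
V_REF = 3.3             # assumed reference voltage
DEG_PER_VOLT = 100.0    # hypothetical sensor scaling: 10 mV per degree C


def counts_to_celsius(counts: int) -> float:
    """Step (v): convert a raw ADC reading into engineering units."""
    volts = counts / ADC_MAX * V_REF
    return volts * DEG_PER_VOLT


class MovingAverage:
    """Simple signal-conditioning filter to suppress measurement noise."""

    def __init__(self, window: int = 8):
        self.samples = deque(maxlen=window)

    def update(self, value: float) -> float:
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)


def pack_record(timestamp: float, temp_c: float) -> bytes:
    """Step (vii): pack a reading into a fixed 8-byte record
    (uint32 epoch seconds + int32 centi-degrees) instead of verbose JSON."""
    return struct.pack("<Ii", int(timestamp), int(round(temp_c * 100)))


# Example: condition and pack one simulated reading.
filt = MovingAverage()
smoothed = filt.update(counts_to_celsius(2048))
record = pack_record(time.time(), smoothed)
print(len(record), "bytes:", record.hex())
```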

10.2.1 IoT in Industry and Industry 4.0


IIoT is known as Industry 4.0 or the fourth industrial revolution as a result of the
development of IoT, automation systems, internet of services, incorporation of
cyber-physical systems (CPSs), wireless technologies, cloud computing, augmented reality, and more [6].

FIGURE 10.5 History of industry revolutions.

The idea of Industry 4.0 was established in Germany’s indus-
trial sector and is a fragment of the government’s technological evolution objective.
The earlier industrial revolutions introduced mechanization (Industry 1.0), electricity (Industry 2.0), and IT (Industry 3.0). In the fourth industrial revolution, known as Industry 4.0, the growth of IoT technology is bringing about digital transformation.
Figure 10.5 shows how industrial digitization has changed over time. In essence,
Industry 4.0 is an advancement in digital signal transmission technology built on
state-of-the-art CPSs created for manufacturing techniques used in industry. As a
result of the use of intelligent machines constructed by CPS for industry produc-
tion goals, Industry 4.0 is a concept that is also identified as the “Smart Factory” or
the “Factory of the Future” (FoF). It establishes a corporate ecosystem that helps
tie up the resources, infrastructure, and other marketing strategies employed by
many companies so they can cooperate and exercise decentralized control over
one another. Intelligent manufacturing has taken the place of the conventional pro-
duction system as a result of this industrial revolution, and as a consequence, the
countrywide economy is advancing significantly by means of the aid of futuristic
devices. IIoT is one of the key elements used to achieve such advancement, whereas
Industry 4.0 is a comprehensive notion of industrial development in terms of digital
transformation. According to Figure 10.5, whereas Industry 4.0 seeks to address
very industrial interacting challenges and open up new business prospects, IIoT
is merely among the crucial components helping Industry 4.0 to do this. Industry
4.0, as described by Pereira et al. [7], has transitioned from centralized industrial domains to providing ubiquitous connectivity, efficient data analytics, cloud storage, and security. Without the IIoT, which was made expressly to equip machinery with distributed intelligence, the shift from centralized to decentralized networks cannot function. This evidences that IIoT, a crucial aspect of Industry 4.0, considerably contributes to advanced industrialization.

10.2.2 Technology-Enabled Industrial IoT


The newly developed industrial setting, frequently referred to as Industry 4.0, dem-
onstrates flexible, stretchy, and interactive operations by combining digital and phys-
ical technology. Businesses use the internet and other related technologies to quickly
decide on actions throughout smart factories and the supply chain. IIoT solutions use
connected sensors with edge technologies to boost production efficiency and product
quality in real time. The globalization, openness, interoperability, and socialization
of the internet all support the IoT notion. IoT systems emphasize the relationships
between the various sectors, whether they are commercial or industrial products
at physical addresses, as opposed to the industrial net, which forecasts industry
trends based on emerging technology. Cloud computing, the IoT, and big data ana-
lytics should be the main topics of future research. Data management and archiving
are made incredibly simple and effective by the use of AI and data mining. AI uses
neural networks and fuzzy logic. Data mining involves the pre-processing, filtering,
selection, conversion, clustering, and pattern analysis of data. Finding, seeking for,
and exchanging information and resources account for about 40% of all industrial
applications’ IoT benefits [8]. Instantaneous data mining using a variety of methods, tools, and technologies, including data analytics, ML, blockchain, and edge and fog computing, can help control IoT while maximizing its value.
However, it is obvious that the IIoT cannot realize its fullest extent without sufficient
security.

10.2.2.1 Cyber-Physical System


One of the core technologies of Industry 4.0, the CPS leverages embedded data
mining systems in manufacturing machines to link the industrialized sector to the
real world [9]. It integrates with IoT to connect the physical and digital industrial
worlds. CPS enables an interactive industrial environment by integrating network-
ing, computing, and storage to develop smart factories. From this vantage point, smart items may be identified and tracked more readily. CPSs are programmable distributed systems that connect to communication grids, computing infrastructures, and the real world. Because they prioritize Industry 4.0 device network-
ing more than traditional embedded systems, they are unique from those systems.
They consequently feature a controller unit that can handle a sensor as well as
an actuator that allows interaction with the outside world. Once processed, they
use a communication route to share the data with other applications and services.
During service, these systems must be reliable, dependable, secure, and efficient.
One of Industry 4.0’s goals is to provide the best security maintenance possible
across all CPS network layers for this reason, protecting sensitive data while guar-
anteeing data anonymity [10].
Thakur and Kumar Seghal [11] developed a heterogeneous model for a smart
CPS system that includes diverse electrical, pneumatic, and hydraulic processes in
order to facilitate the execution of hybrid process dynamics. In order to anticipate
potential hazards and delays in the process, the proposed architecture might segre-
gate the moving parts of a process, such as computation, communication, control,
and actuation.

10.2.2.2 Block Chain Technology


Sharing data securely across every participant in a network of IIoT is difficult. For
the IIoT, blockchain technology is perfectly suited, due to its distributed features like
certainty, durability, trackability, reliability, and data origin. A specific type of data
structure called a blockchain uses public key cryptography (PKC) to enable safe network transactions between peers and implements the distributed ledger technology underlying Bitcoin. Each block in the chain references the hash of the preceding block (a minimal sketch of this chaining follows below).
Because of the powerful benefits of blockchain, IoT security is enhanced and decen-
tralized access to IoT data is made possible, enhancing the data’s ability to expose
fraud. The major Blockchain technology’s inability to scale is a hurdle to adop-
tion, a problem that will get shoddier as more blockchain initiatives are launched.
Conflicts over transactions could consequently increase. This leads to higher latency.
Furthermore, it takes work to keep encryption keys and hash codes up to date. To
ensure dependable and safe operations in the sector, Latif et al. [11] created a block-
chain-based architecture. Real-time cryptographic algorithms were used, and the blockchain is outfitted with a proof-of-authenticity consensus approach that is rapid, energy-efficient, and highly scalable. Numerous tests and analyses have been conducted on the architecture created for this project, proving its value. Finally, the researchers implemented the recommended architecture and achieved outstanding outcomes, converting a standard goods-processing resource into a secure and smart industrial platform. This work covered a variety of processes in intelligent industry systems, as opposed to other studies that were more concerned with the storage of device data using blockchain technology. The study also significantly improved data security beyond the blockchain domain.
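To make the hash-chaining idea concrete, here is a minimal illustrative sketch in Python of a block structure in which each block stores the hash of its predecessor, so tampering with any record invalidates the rest of the chain. It is a toy model only: real IIoT blockchain deployments add consensus, signatures (PKC), and networking, none of which are shown here.

```python
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Hash the block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def make_block(prev_hash: str, sensor_data: dict) -> dict:
    """Each block references the hash of the preceding block."""
    return {"ts": time.time(), "prev_hash": prev_hash, "data": sensor_data}


# Build a tiny chain of sensor readings.
chain = []
prev = "0" * 64  # genesis placeholder
for reading in [{"temp_c": 61.2}, {"temp_c": 61.9}, {"temp_c": 80.5}]:
    block = make_block(prev, reading)
    prev = block_hash(block)
    chain.append(block)

# Verify integrity: recompute each hash and compare with the successor's link.
for earlier, later in zip(chain, chain[1:]):
    assert later["prev_hash"] == block_hash(earlier), "chain tampered"
print("chain intact,", len(chain), "blocks")
```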
A blockchain technique was developed by Umran et al. [12] to be employed in the
cement industry. Methods and machinery can efficiently control access to crucial
data generated by sensors and actuators, consume little power, are scalable, and offer
excellent security. A low-power ARM Cortex-M processor was employed because of
its efficiency in executing cryptographic algorithms, greatly boosting the computa-
tional performance of the suggested architecture. In addition, the blockchain design
replaces proof of work (PoW) with proof of authentication as a consensus method to
ensure safe authentication, scalability, speed, and energy efficiency.
Rathee et al. [13] created a hybrid blockchain solution to guarantee security for a
corporation employing IIoT with branches located across many countries. The sug-
gested method was thoroughly tested against conventional techniques using a range
of security factors. According to the simulation results, the suggested method lessens the risk of distributed denial-of-service (DDoS) attacks, authentication problems, etc., by 94%. This approach involves various consumers from various zones. In view of
the fact that user information is stored in many locations, there may be a variety of
security risks, including data loss in transit, the creation of false data, and additional
costs.
10.2.2.3 Fog Computing
The majority of commercial apps and activities may now connect with IoT thanks to
fog computing, which is thought of as an extension of cloud computing. When wire-
less and decentralized devices work together with a network to complete necessary
tasks in business applications devoid of the interference of other parties, this circum-
stance is referred to as fog computing. Fog computing is the term for web service
delivery across IT infrastructure. The storage, virtual network connectivity, and pro-
cessing power provided by the fog nodes, which also include other components, are
essential to the computation. System optimization, ML-driven trials, and system fail-
ure prediction can all be done using these nodes. If a link breaks anywhere between
the local fog nodes and the remote cloud, all fog nodes that are located before that
hop are always reachable and can thus keep providing local system support. Fog
computation acts as a bridge between static and dynamic computation domains for many industrial manufacturing processes and appliances. A specific study was
directed by Tange et al. [14] to determine how the IIoT’s security requirements might
be satisfied using the newly emerging concept of fog computing. The chapter, which
offers a wealth of research opportunities in the IIoT security field, shows how fog
computing, as an emergent computing paradigm, might become a powerful tool in
protecting a number of connected industrial environments.
To support predictive maintenance in fog computing, Teoh et al. [15] presented a
scheme for managing multiple resources with machine learning and a genetic
algorithm (GA). A fog workflow simulator was used to measure the GA's time, cost,
and energy against the FCFS (First Come First Serve), Round Robin, Max-Min, and
Min-Min methods. The maintenance prediction model was built from real-time data
using two kinds of logistic regression; a minimal sketch of such a predictor follows.
The results showed that the technique outperformed FCFS, Round Robin, Max-Min,
and Min-Min in terms of execution time, cost, and energy usage, and the prediction
model reached an accuracy of 95.1% during training and 94.5% during testing.
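The sketch below trains a single logistic-regression failure predictor on synthetic sensor features; the feature names, the failure rule, and the accuracies it prints are invented for illustration and are not the model or data of [15].

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # columns: temperature, vibration, runtime hours
# Synthetic ground truth: failures driven mostly by temperature and vibration
y = (0.8 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(0, 0.5, 1000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

print(f"train accuracy: {clf.score(X_tr, y_tr):.3f}")
print(f"test accuracy:  {clf.score(X_te, y_te):.3f}")
```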
Benomar et al. [16] created a schematic design based on the fog computing idea
with industrial monitoring needs in mind. A fog layer was formed to provide a
singular-value-decomposition-based strategy for data aggregation. Using the
OMNET++ simulator, the advantages of the recommended method for monitoring
performance, including latency and packet delivery rate, were investigated, with a
networked monitoring system for a real-world industrial motor serving as the
illustration. In the future, additional domains that are essential to industry could be
actuated using the recommended approach.

10.2.2.4 Cloud Computing


The massive volume of information produced by IIoT must be processed, analyzed,
and stored using high-speed, widely spread computer systems. Technology for cloud
computing provides storage, processing capabilities, and networking across all IIoT
system components. All relevant hardware and software are directly interfaced with
backend clouds. Rendering and computational capacity have risen greatly with the
introduction of cloud computing, and processing units and visualization tools have
become increasingly popular. Additionally, AI and the IIoT are continuously
integrating; 5G is seeing wider use in business, education, and healthcare training;
and other emerging technologies such as virtual reality (VR), augmented reality
(AR), and mixed reality (MR) are becoming more prevalent. There are three types of
cloud service options: public, private, and hybrid. Due to the high cost of setting up
data centers and hiring staff, operating a private cloud is not a realistic option for
newly established or small/medium-sized businesses.
Recurrent neural networks (RNN) and model-based engineering (MBE) were
coupled in the work done by Senthilkumar and Rajesh [17] to control the flow
of jobs in a cloud system. The RNN predicts upcoming workloads in the cloud
environment and provisions the required resources to maintain workflow balance;
the forecasts depend heavily on how quickly IIoT devices ingest data. In the cloud
environment, input data is stored and processed by virtual machines, while on the
control plane, resources are assigned and tasks are scheduled using the RNN to
enhance the procedure. The RNN is rerun iteratively to refine the optimization. The
methodology developed in this study successfully balances cloud workflow and data.
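As a loose illustration of the forecasting step, the sketch below trains an LSTM (a common RNN variant) to predict the next workload value from a sliding window. The synthetic load series, window size, and architecture are assumptions made for the example, not the design of [17].

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=24):
    # Build (samples, timesteps, features) windows and next-step targets
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None], series[window:]

# Synthetic workload: a cyclic pattern plus noise
load = np.sin(np.linspace(0, 40, 1000)) + np.random.normal(0, 0.1, 1000)
X, y = make_windows(load)

model = tf.keras.Sequential([
    tf.keras.layers.Input((X.shape[1], 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# The forecast for the next interval could drive virtual-machine allocation
next_load = model.predict(X[-1:], verbose=0)[0, 0]
print(f"forecast workload for next interval: {next_load:.3f}")
```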
Liu et al. [18] created a virtual surgical education system powered by AR and
motivated by brain-tumor neurosurgery, which includes an immersive VR simulator.
The tool enables remote surgical instruction in real time via 5G transmission. Six
experts and 18 novices participated in an experiment to validate the system, and the
two simulators were evaluated using construct- and face-validation methods. The
results of the face and content validation demonstrated that the AR simulator was
superior to the VR simulator in terms of vision and scene authentication for
enhancing surgical abilities, although the approach still has considerable
disadvantages.

10.2.2.5 Edge Computing


Edge computing is a novel concept that evaluates and manages data using computing,
network, and storage resources dispersed along the channels that connect data
sources to cloud computing centers. To process data locally, make rapid decisions,
and limit data transfer, edge computing makes use of edge devices with adequate
processing power to provide computation results. It significantly improves system
performance by reducing overall latency and bandwidth requirements. The smart
industry's use of edge computing enables companies to put strong security measures
in place close to the data source, reducing security risks by lowering the possibility
of breaches of the data stored in the cloud center.
Deep reinforcement learning (deep RL) was employed in Wu et al.'s approach [19]
to the scheduling problem in edge computing to raise the caliber of services offered
to customers in IIoT applications. They considered the different central-edge
computing frameworks and developed a hierarchical scheduling mechanism, then
proposed a deep intelligence scheduling algorithm (DISA) to make communication-
scheduling choices that match the traits of the model. Several performance measures
were used in the study to compare DISA with other standard systems; in terms of
scheduling, bandwidth utilization, and latency, the proposed solution performs
noticeably better than the current methodologies.
Deng et al. [20] recommended an autonomous partial-offloading system for tasks
requiring delay-sensitive computation in multi-user IIoT mobile edge computing
systems. The aim was to provide offloading services with minimal delay in order to
increase quality of service (QoS). The method supplied two RL-based offloading
approaches to automatically enhance delay performance. First, a Q-learning method
was created to provide discrete partial-offloading choices; the system's performance
was then further optimized with a continuous offloading scheme built on the deep
deterministic policy gradient (DDPG). The simulation findings show that the DDPG
approach and the Q-learning scheme reduce latency by 30% and 23%, respectively.
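For intuition, the sketch below applies tabular Q-learning to a toy discrete offloading decision. The state and action spaces, the made-up delay model, and all hyperparameters are illustrative assumptions, not the formulation of [20].

```python
import numpy as np

n_states, n_actions = 10, 5          # e.g., channel/queue levels x offload ratios
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Toy environment: reward is the negative of an invented task delay."""
    delay = 1.0 + 0.2 * abs(action - state % n_actions) + 0.1 * np.random.rand()
    return np.random.randint(n_states), -delay

for episode in range(2000):
    s = np.random.randint(n_states)
    for _ in range(50):
        # Epsilon-greedy action selection over offloading ratios
        a = np.random.randint(n_actions) if np.random.rand() < eps else int(Q[s].argmax())
        s_next, r = step(s, a)
        # Standard Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("greedy offloading choice per state:", Q.argmax(axis=1))
```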

10.2.2.6 Big Data Analytics


IIoT systems and devices generate enormous amounts of data, making highly
complex, high-performance computing platforms necessary for big data analysis.
Traditional data processing methods cannot cope with the greatly increased data
volume of the IoT. Because many entities of interest (EoIs) and smart things (SThs)
are connected to the cloud, the IoT relies on AI and big data to form inferences and
judgments from sensory input. IoT big data has special qualities compared with the
usual big data problems. IIoT systems make possible a wide range of technologies
and methods for gathering enormous amounts of data and for storing, organizing,
processing, and compiling them in numerous big data analytics processes. The data
collection technologies provide connectivity to the many data sources, sensors, and
digital devices in the IIoT.
For the manufacturing industry, Ren et al. [21] developed and implemented an
intelligent IIoT strategy that uses edge computing to monitor sensors together with
AI and big data computing. Inside the industrial manufacturing sector, IoT sensors
collect data on environmental factors like temperature and vibration. This condition
data plays a large part in enabling the IIoT's adoption of intelligent and scalable
manufacturing: it is acquired for further systematic analysis and investigation and
can inform plant managers of any faults that may be about to appear. The system
comprises both the alert described above and a forecast of the proportion of
defective items the industrial machine will yield. Despite these advancements,
certain security and protection gaps remain.

10.3 INDUSTRIAL IoT: CHALLENGES, ADVANTAGES, AND FUTURE DIRECTIONS
The idea of the IIoT introduces a network of sensors, machines, and devices linked
together for industrial applications, including energy management and automated
equipment monitoring. IIoT makes use of IoT communications and focuses on
machine interoperability. However, several challenges brought on by the
heterogeneity and volume of data essential to industrial applications can cause IoT
services to become inconsistent. According to recent studies, enabling techniques
like big data, data fusion, ML, cloud computing, fog computing, and blockchain can
have positive effects in this field. Furthermore, current developments in big data
analytics (BDA) may have an impact on the creation of mechanical and intelligent
embedded-IoT systems for business applications. Implementing IIoT has many
advantages, but there are also disadvantages. Numerous academics contend that
professional labor and user knowledge can solve these issues and open doors for
addressing industrial needs.
Alabadi et al.'s [22] creative taxonomy of industrial IoT challenges considers
not only the issues at hand but also the terminology and approaches employed to
solve them. According to the authors, these problems are real-time processing,
heterogeneity, mobility, connectivity, resource limitations, and scalability. AI and
BDA can have a substantial impact in a variety of IIoT application areas; however,
BDA- and AI-based technologies in these revolutionary applications have drawbacks
in terms of integrity, availability, interpretability, security/privacy, and auditing. In
IoT infrastructures, decentralized communication, also known as peer-to-peer (P2P)
communication, can also be advantageous for well-organized industrial
infrastructures.

10.3.1 Real-Time Performance
By utilizing AI, blockchain technology, big data analytics, edge computing, and
cloud computing, sensors and IoT devices can collect data from many sources
and act in response. They are sophisticated enough to automate procedures,
anticipate issues, and recognize security concerns in a modern IoT-based system.
Remote real-time control of heating and cooling equipment is possible with the
aid of a monitoring and controlling system equipped with humidity and temperature
sensors, and cloud and Android applications can show users the outcomes of the
sensed data. A phone-enabled application and an Arduino device can be used to
implement a real-time scheduling approach in an IoT-based smart automated
system; high performance can be obtained because the approach can modify the
system settings using the available switching features.
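The gateway-side logic of such a system can be as simple as a threshold control loop. The sketch below simulates sensor readings and toggles a cooling relay; the setpoints, the simulated sensor, and the sampling interval are assumptions made for illustration, and on real hardware the relay switching and app reporting (e.g., over MQTT) would replace the print statement.

```python
import random
import time

TEMP_HIGH, TEMP_LOW = 26.0, 18.0   # assumed setpoints in degrees Celsius

def read_sensor():
    # Stand-in for a real temperature/humidity sensor driver
    return random.uniform(15, 30), random.uniform(30, 70)

cooling_on = False
for _ in range(10):                 # would be an endless loop in deployment
    temp, humidity = read_sensor()
    if temp > TEMP_HIGH and not cooling_on:
        cooling_on = True           # e.g., drive a GPIO pin to close the relay
    elif temp < TEMP_LOW and cooling_on:
        cooling_on = False
    print(f"T={temp:.1f}C H={humidity:.0f}% cooling={'ON' if cooling_on else 'OFF'}")
    time.sleep(0.1)                 # shortened sampling interval for the demo
```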

10.3.2 Energy Efficiency
IIoT will continue to support sustainability activities in two areas: improved resource
management and increased energy efficiency. To monitor processes keenly and
manage various operations across applications, current IoT systems contain wireless
radio sensors that collect data. Energy efficiency must be taken into account when
developing IoT devices if they are to maintain functionality over time, and the
energy consumption of IoT devices is often optimized in order to increase system
effectiveness. Well-known wireless technologies take energy efficiency, system
integrity, communication capabilities, and safety measures into account.
Additionally, mixed integer linear programming (MILP) [23] used in a built-in IoT
context can reduce power dissipation for processing and communication by 27%
and 36%, respectively; a toy formulation is sketched below. However, using fog
computing in a large IoT network comes with some energy consumption concerns,
and future research should take these issues into account when building energy-
saving solutions for such frameworks. Future IIoT applications should make use of
automation and real-time data insights to boost overall sustainability, minimize
waste, and increase energy efficiency. IoT imagines a time when a smart
environment's sensing, processing, and networking capabilities will require little to
no human interaction; in such intelligent surroundings, with battery-operated
sensors and gadgets, energy efficiency is a big challenge.
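The following toy MILP, written with the PuLP library, places three tasks on two nodes to minimize total power draw. The power figures, capacities, and node names are invented for the example; the formulation in [23] is far richer.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

tasks, nodes = ["t1", "t2", "t3"], ["edge", "cloud"]
power = {("t1", "edge"): 2, ("t1", "cloud"): 5,      # assumed watts per placement
         ("t2", "edge"): 3, ("t2", "cloud"): 4,
         ("t3", "edge"): 6, ("t3", "cloud"): 3}
capacity = {"edge": 2, "cloud": 3}                   # max tasks per node

prob = LpProblem("service_embedding", LpMinimize)
x = {(t, n): LpVariable(f"x_{t}_{n}", cat=LpBinary) for t in tasks for n in nodes}

prob += lpSum(power[t, n] * x[t, n] for t in tasks for n in nodes)  # objective
for t in tasks:                                      # each task placed exactly once
    prob += lpSum(x[t, n] for n in nodes) == 1
for n in nodes:                                      # respect node capacity
    prob += lpSum(x[t, n] for t in tasks) <= capacity[n]

prob.solve(PULP_CBC_CMD(msg=False))
placement = {t: n for (t, n), var in x.items() if var.value() == 1}
print("minimum-power placement:", placement)
```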

10.3.3 Industrial Internet of Things (IIoT) Security and Safety


Compared with the small number of devices in consumer IoT installations, IIoT
networks are typically much larger in scale. As a result, security and safety become
more important as IIoT adoption grows. The network architecture becomes more
sophisticated because of the growing and dynamic interaction between IIoT devices,
so a comprehensive awareness of the shortcomings is crucial to building security
into contemporary IoT deployments. To keep an IIoT network secure and to ensure
that connection protocols precisely fulfill safety standards, it is crucial to take the
unique needs of industry into account from a standards standpoint. The IIoT
wireless service ecosystem is gaining some traction thanks to the current 5G
architecture's industrial-focused capabilities. The IIoT-edge-cloud ecosystem's
various components, including transformational technologies, data analytics, and
real-time offloading, have been covered, and research difficulties related to current
design trends must be addressed in order to develop value-centric industrial wireless
networks. Future IIoT systems should prioritize safeguarding sensitive data,
preventing unauthorized access, and upholding the integrity of business transactions.
Jiang et al. [24] presented an experimental study of the potential dangers of typical
gadgets in this field. Using devices frequently employed in this industry, they
determined the points of entry into an embedded IoT system. Additionally, an
examination of the security-related problems with existing IIoT technologies was
carried out, with a focus on the vulnerability of recent production systems, and the
report included several recommendations for further IIoT research on potential
mitigation techniques to avoid unforeseen outcomes. The main flaws in this effort,
however, were that the unsecured server interface was not properly analyzed,
privacy issues and buffer-overflow vulnerabilities remained, and system security
was not adequately taken into account. Secure autonomous functioning and the
prevention of intermediary intrusions in IIoT necessitate an efficient security strategy.

10.3.4 Convergence of Operational Technology and ICT Has Received Limited Research
Recent studies indicate that the IIoT is the result of the fusion of operational
technology (OT) with conventional information and communication technologies
(ICT). To integrate these two distinct sectors, benchmarks must be well specified
and scalable, and it is crucial to ensure that the benchmarks are secure; otherwise,
important and precious operational assets may be exposed. Although there are
distinctions between them, OT has not developed as quickly as IT, and it can be
instructive to be aware of the variations surrounding industrial challenges and
privacy issues. Future IIoT research has to address issues both with conventional
automation frameworks and with the distinction between present and upcoming
networking system architectures. Because IIoT is a new idea in the automated
factory, interoperability problems and cyber threats are a serious concern there.
In their work, Dhirani et al. [25] examined these problems with regard to the
application of cybersecurity standards and norms. Their research also offered
valuable advice on how to minimize the disparities between security standards and
controls and on the fusion of compatible standards for OT and ICT security
architectures. They suggested the essential steps for protecting the diverse
production environment, addressing the new cyber-security threats, the different
degrees of protection being enforced, and the establishment of unified cybersecurity
benchmarks to support OT/ICT convergence. This study also included guidelines for
mapping and putting into practice a variety of standards, including those for 5G and
machine-to-machine networks, platforms enabling cloud computing, and IoT-based
ecosystems, and for lowering security gaps and problems.
In conclusion, the IIoT field is evolving fast and has a wide range of potential future
paths and developments. The future applications of IIoT that hold the most promise
are edge computing, AI integration, and enhanced security measures. AI algorithms
can examine data gathered by IIoT devices to detect abnormalities, recognize
patterns, and make sound predictions.

10.4 CONCLUSION
Designing embedded sensors for IIoT involves a multidisciplinary approach,
encompassing electrical engineering, mechanical engineering, software devel-
opment, and data science. Collaboration among experts in these domains is
essential to create sensors that meet the demands of industrial applications while
maintaining high reliability and security. Embedded sensors are the backbone
of IIoT systems, enabling industries to harness the power of data for improved
efficiency and competitiveness. Designing and analyzing these sensors require
careful consideration of factors such as sensor selection, power efficiency, data
analysis, and security. As IIoT continues to evolve, addressing challenges and
embracing emerging technologies will be crucial for its success in various indus-
trial sectors.

REFERENCES
1. Patel KK, Patel SM, Scholar PG. Internet of Things-IOT: Definition, Characteristics,
Architecture, Enabling Technologies, Application & Future Challenges. Int J Eng Sci
Comput. 2016;6:1–10.
2. Khan WZ, Rehman MH, Zangoti HM, Afzal MK, Armi N, Salah K. Industrial Internet
of Things: Recent Advances, Enabling Technologies and Open Challenges. Comput
Electr Eng. 2020;81. https://doi.org/10.1016/j.compeleceng.2019.106522.
3. Kiesel R, van Roessel J, Schmitt RH. Quantification of Economic Potential of 5G for
Latency Critical Applications in Production. Procedia Manuf. 2020;52:113–20. https://
doi.org/10.1016/j.promfg.2020.11.021.
4. Malik PK, Sharma R, Singh R, Gehlot A, Satapathy SC, Alnumay WS, et al. Industrial
Internet of Things and Its Applications in Industry 4.0: State of The Art. Comput
Commun. 2021;166:125–39.
5. Signé L, Heitzig C. Effective Engagement with Africa: Capitalizing on Shifts in Business, Technology, and Global Partnerships; 2022.
6. ur Rehman MH, Yaqoob I, Salah K, Imran M, Jayaraman PP, Perera C. The Role
of Big Data Analytics in Industrial Internet of Things. Futur Gener Comput Syst.
2019;99:247–59. https://doi.org/10.1016/j.future.2019.04.020.
7. Gebremichael T, Ledwaba LPI, Eldefrawy MH, Hancke GP, Pereira N, Gidlund M,
et al. Security and Privacy in the Industrial Internet of Things: Current Standards and
Future Challenges. IEEE Access 2020;8:152351–66. https://doi.org/10.1109/ACCESS.
2020.3016937.
8. Younan M, Houssein EH, Elhoseny M, Ali AA. Challenges and Recommended
Technologies for the Industrial Internet of Things: A Comprehensive Review. Meas J
Int Meas Confed. 2020;151. https://doi.org/10.1016/j.measurement.2019.107198.
9. Abikoye OC, Bajeh AO, Awotunde JB, Ameen AO, Mojeed HA, Abdulraheem M,
et al. Application of Internet of Thing and Cyber Physical System in Industry 4.0 Smart
Manufacturing. Adv Sci Technol Innov. 2021. https://doi.org/10.1007/978-3-030-
66222-6_14.
10. Thakur P, Kumar Sehgal V. Emerging Architecture for Heterogeneous Smart Cyber-
Physical Systems for Industry 5.0. Comput Ind Eng. 2021;162. https://doi.org/10.1016/j.
cie.2021.107750.
11. Latif S, Idrees Z, Ahmad J, Zheng L, Zou Z. A Blockchain-Based Architecture for
Secure and Trustworthy Operations in the Industrial Internet of Things. J Ind Inf
Integr. 2021;21. https://doi.org/10.1016/j.jii.2020.100190.
12. Umran SM, Lu S, Abduljabbar ZA, Zhu J, Wu J. Secure Data of Industrial Internet of
Things in a Cement Factory Based on a Blockchain Technology. Appl Sci. 2021;11.
https://doi.org/10.3390/app11146376.
13. Rathee G, Ahmad F, Sandhu R, Kerrache CA, Azad MA. On the Design and
Implementation of a Secure Blockchain-Based Hybrid Framework for Industrial Internet-
of-Things. Inf Process Manag. 2021;58. https://doi.org/10.1016/j.ipm.2021.102526.
14. Tange K, De Donno M, Fafoutis X, Dragoni N. A Systematic Survey of Industrial
Internet of Things Security: Requirements and Fog Computing Opportunities. IEEE
Commun Surv Tutorials 2020;22. https://doi.org/10.1109/COMST.2020.3011208.
15. Teoh YK, Gill SS, Parlikad AK. IoT and Fog Computing Based Predictive Maintenance
Model for Effective Asset Management in Industry 4.0 Using Machine Learning. IEEE
Internet Things J. 2021. https://doi.org/10.1109/JIOT.2021.3050441.
16. Benomar Z, Campobello G, Segreto A, Battaglia F, Longo F, Merlino G, et al. A Fog-
Based Architecture for Latency-Sensitive Monitoring Applications in Industrial Internet
of Things. IEEE Internet Things J. 2021. https://doi.org/10.1109/JIOT.2021.3138691.
17. Senthilkumar P, Rajesh K. Design of a Model Based Engineering Deep Learning
Scheduler in Cloud Computing Environment Using Industrial Internet of Things (IIOT).
J Ambient Intell Humaniz Comput. 2021. https://doi.org/10.1007/s12652-020-02862-7.
18. Liu J, Qian K, Qin Z, Alshehri MD, Li Q, Tai Y. Cloud Computing-Enabled IIOT
System for Neurosurgical Simulation Using Augmented Reality Data Access. Digit
Commun Networks. 2022;11(2): 347–57. https://doi.org/10.1016/j.dcan.2022.04.019
19. Wu J, Zhang G, Nie J, Peng Y, Zhang Y. Deep Reinforcement Learning for Scheduling
in an Edge Computing-Based Industrial Internet of Things. Wirel Commun Mob
Comput. 2021;2021. https://doi.org/10.1155/2021/8017334.
20. Deng X, Yin J, Guan P, Xiong NN, Zhang L, Mumtaz S. Intelligent Delay-Aware Partial
Computing Task Offloading for Multi-User Industrial Internet of Things through Edge
Computing. IEEE Internet Things J. 2021. https://doi.org/10.1109/JIOT.2021.3123406.
21. Ren S, Kim JS, Cho WS, Soeng S, Kong S, Lee KH. Big Data Platform for Intelligence
Industrial IoT Sensor Monitoring System Based on Edge Computing and AI. 3rd
Int. Conf. Artif. Intell. Inf. Commun. ICAIIC 2021;2021. https://doi.org/10.1109/
ICAIIC51459.2021.9415189.
22. Alabadi M, Habbal A, Wei X. Industrial Internet of Things: Requirements, Architecture, Challenges, and Future Research Directions. IEEE Access. 2022;10:66374–66400.
23. Al-Shammari HQ, Lawey A, El-Gorashi T, Elmirghani JMH. Energy Efficient Service Embedding in IoT Networks. 2018 27th Wirel. Opt. Commun. Conf. WOCC 2018. https://doi.org/10.1109/WOCC.2018.8372741.
24. Jiang X, Lora M, Chattopadhyay S. An Experimental Analysis of Security Vulnerabilities in Industrial IoT Devices. ACM Trans Internet Technol. 2020;20. https://doi.org/10.1145/3379542.
25. Dhirani LL, Armstrong E, Newe T. Industrial IoT, Cyber Threats, and Standards
Landscape: Evaluation and Roadmap. Sensors. 2021;21. https://doi.org/10.3390/s21113901.
11 AI for Optimal
Decision-Making
in Industry 4.0
Ravichandran Bargavi

11.1 INTRODUCTION
Artificial intelligence (AI) has emerged as a transformational technology in the
context of Industry 4.0, revolutionising decision-making processes in a variety of
industries [1, 2]. This overview aims to introduce the role of AI in Industry 4.0 and
investigate its potential for optimising decision-making in complicated industrial
situations.
The integration of digital technologies is what gives rise to “Industry 4.0,” which
signifies a paradigm shift in production and technology. With the use of the Internet
of Things (IoT), big data analytics, cloud computing, and cyber-physical systems,
among other major concepts and technologies, this overview seeks to provide read-
ers with a thorough grasp of Industry 4.0 [3].
AI is the umbrella term for a variety of tools and methods that let computers
simulate human intellect, learn from experience, and take independent actions [4].
AI is essential to Industry 4.0 because it turns data into insightful knowledge and
helps decision-making in a variety of fields [5]. By transforming decision-making
through data analysis, predictive analytics, process optimisation, decision support
systems (DSSs), and autonomous systems, AI plays a critical role in Industry 4.0
[6, 7]. Organisations can achieve higher levels of productivity, efficiency, and com-
petitiveness in the ever-changing world of Industry 4.0 by utilising AI to its full
potential [8].
Knowledge of the main concepts and technology of Industry 4.0 is necessary.
Industry 4.0 is built on a combination of interconnectedness, information open-
ness, technical support, decentralised decision-making, and technologies, includ-
ing the IoT, big data analytics, cloud computing, and cyber-physical systems [9].
Organisations may promote innovation, streamline operations, and boost competi-
tiveness in the fast-changing environment of Industry 4.0 by utilising these ideas
and technologies [10]. A new wave of opportunities and developments for industries
around the world is brought about by the Industry 4.0 framework [11]. To be success-
ful in this rapidly changing environment, organisations must overcome a number of
problems that are presented by technology [12]. The main difficulties that industries
confront when making decisions within the context of Industry 4.0 are highlighted
in this overview of issues [13].

DOI: 10.1201/9781003432319-11



Organisations must overcome a number of obstacles in order to make decisions
inside the Industry 4.0 framework [14]. The key challenges that industries must
overcome include resolving issues with data management and quality, tackling data
complexity and analysis, ensuring integration and interoperability, enhancing cyber-
security and data privacy, developing the necessary workforce and skills, manag-
ing change, and managing costs and investments. Organisations may make use of
Industry 4.0’s full potential and make wise choices for sustained growth and com-
petitiveness by proactively tackling these issues [15].
AI approaches, such as machine learning, deep learning, and natural language
processing (NLP), have the potential to significantly improve decision-making in the
context of Industry 4.0 [13]. Organisations may gain useful insights from enormous
amounts of data using their data analysis and pattern recognition capabilities, enabling
data-driven decision-making [16]. Additionally, predictive analytics and maintenance
are strengthened by AI techniques, which optimise workflows and lower unsched-
uled downtime. AI improves productivity and quality in manufacturing processes
through process optimisation and control. Additionally, AI-driven DSSs offer timely
recommendations and insights, simplifying complex decisions in fields like supply
chain optimisation and production planning [17]. AI systems can learn from data and
generate precise predictions thanks to machine-learning algorithms, whereas deep-
learning algorithms are superior at challenging pattern recognition tasks [18]. In the
Industry 4.0 environment, DSSs are now necessary instruments. Decision makers
can negotiate complexity, handle real-time data, optimise resource allocation, and
apply predictive and prescriptive analytics thanks to the integration of data manage-
ment, models, analytical methodologies, and user interfaces provided by DSS. DSS
equips decision makers with current data, insights, and interactive visualisations to
enable well-informed and effective decision-making in Industry 4.0’s dynamic and
data-driven environment. The importance of DSS will increase as more organisations
embrace Industry 4.0’s transformative potential, enabling them to prosper in an indus-
trial environment that is becoming more complicated and linked [19]. AI-driven deci-
sion-making in Industry 4.0 requires ethical practices, including addressing biases,
transparency, privacy, accountability, and governance frameworks. By proactively
addressing these challenges, organisations can harness AI’s transformative power,
uphold ethical standards, promote fairness, and build trust among stakeholders [19].

11.2 INDUSTRIES FACE CHALLENGES IN INDUSTRY 4.0 DECISION-MAKING
The challenges faced by industries in making effective decisions within the
Industry 4.0 framework are shown in Figure 11.1.

1. Data management and quality: Numerous sources, including sensors,
equipment, and processes, provide enormous amounts of data as part of
Industry 4.0. Managing and assuring the quality of this enormous volume
of data is one of the major issues. Industries face hurdles from problems
including data integration, storage, and security. Effective decision-making
depends on data that is accurate, reliable, and consistent [20].

FIGURE 11.1 Industries face challenges in Industry 4.0 decision-making.

2. Data complexity and analysis: Organisations are challenged with Industry
4.0's complicated data. To derive relevant insights from data gathered from
many sources, in various formats, and with varying structures, advanced
analysis techniques are needed. Finding patterns, correlations, and anoma-
lies can be tough due to the diversity, velocity, and amount of data. For
organisations to successfully analyse and interpret the data for decision-
making reasons, they need advanced data analytics skills and qualified data
scientists.
3. Integration and interoperability: Systems, devices, and platforms from many
industries are being integrated and connected to create Industry 4.0. The
varying protocols, data formats, and standards used by various technologies
and vendors make it difficult to achieve seamless integration and interoper-
ability. For precise and efficient decision-making, it is crucial to guarantee
compatibility and smooth communication between various systems.
4. Cybersecurity and data privacy: Industry 4.0’s growing connection and
interconnectedness expose businesses to cybersecurity risks and threats.
It is a tremendous task to defend against cyberattacks while protecting
sensitive data, intellectual property, and essential infrastructure. To pro-
tect their systems and data, businesses must put strong cybersecurity mea-
sures in place, such as encryption, access controls, and ongoing monitoring.
Industry players in international markets face additional difficulties in
assuring compliance with data privacy laws like General Data Protection
Regulation (GDPR).
5. Skills and workforce development: Industry 4.0 requires a trained and flex-
ible workforce that can use cutting-edge technologies while making wise
decisions. To meet the evolving demands of the digital world, organisations
must face challenges in reskilling and upskilling their employees. It can be
difficult to find and keep personnel with expertise in data analytics, AI, IoT,
and cybersecurity because there is frequently a shortage of qualified work-
ers in these fields.
6. Change management and cultural shift: Major organisational transforma-
tions and a cultural shift are required to make the transition to Industry 4.0.
Employee resistance may be encountered since it necessitates modifications
to workflows, processes, and mindsets. To meet this challenge, organisa-
tions must promote a culture of innovation and continuous learning, suc-
cessfully manage change, and explain the advantages of Industry 4.0.
7. Cost and investment: Adopting Industry 4.0 technology and setting up the
required infrastructure might be expensive. Securing the requisite financial
resources and making significant expenditures in infrastructure, technol-
ogy, and training are difficult tasks for organisations. Making decisions
within the framework of Industry 4.0 presents a significant challenge in
balancing costs and anticipated returns on investment.

11.3 ROLE OF AI IN INDUSTRY 4.0


AI plays a pivotal role in transforming data into valuable insights and supporting
decision-making across multiple domains as shown in Figure 11.2.

FIGURE 11.2 Role of AI in Industry 4.0.

1. Data analysis and pattern recognition: One of AI's primary advantages in
Industry 4.0 is its capacity to analyse enormous volumes of data and spot
patterns, correlations, and anomalies that human operators would miss. AI
algorithms can automatically extract useful insights from a variety of data
sources, such as sensor readings, production logs, and customer feedback,
using techniques like machine learning and deep learning. Organisations
can use this analysis to optimise many elements of their operations and
make data-driven decisions (a minimal anomaly-detection sketch follows
this list).
2. Predictive analytics and maintenance: Predictive analytics is made pos-
sible by AI, giving businesses the ability to foresee problems before they
arise and take preventative action. AI models can forecast equipment break-
downs, maintenance requirements, and quality problems using historical
data and machine-learning algorithms, enabling prompt intervention. By
minimising unplanned downtime, lowering maintenance costs, and opti-
mising resource allocation, this strategy improves efficiency.
3. Process optimisation and control: AI-based systems that continuously
analyse data and make real-time adjustments to parameters might optimise
difficult manufacturing processes. To increase productivity and product
quality, AI can dynamically optimise production factors like temperature,
pressure, and speed through adaptive control methods. AI-driven systems
can adapt to changing circumstances and improve performance by auto-
matically modifying process variables based on real-time feedback.
4. Decision support systems: DSSs driven by AI are becoming important
instruments in Industry 4.0. To offer real-time advice and insights to human
decision makers, these systems mix AI algorithms, data analytics, and
domain knowledge. Supply chain optimisation, resource allocation, demand
forecasting, and production planning are just a few complicated decision-
making scenarios that DSS can help with. Utilising AI capabilities, DSS
improves decision-making efficiency and accuracy, helping organisations
to successfully navigate settings that are dynamic and complex.
5. Autonomous systems and robotics: Industry 4.0 is revolutionising produc-
tion processes using AI-driven autonomous systems and robotics. These
systems can carry out tasks precisely, consistently, and effectively, mini-
mising human involvement and any errors. AI-enabled autonomous robots
can manage difficult assembly procedures, quality assurance, material han-
dling, and logistics, resulting in increased productivity and adaptability.
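The sketch below illustrates the anomaly-spotting idea from item 1 with an isolation forest over synthetic temperature/vibration readings; the data, the injected faults, and the contamination rate are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=[70.0, 0.2], scale=[2.0, 0.02], size=(500, 2))  # temp, vibration
faults = rng.normal(loc=[90.0, 0.6], scale=[3.0, 0.05], size=(10, 2))   # injected anomalies
readings = np.vstack([normal, faults])

detector = IsolationForest(contamination=0.02, random_state=1).fit(readings)
flags = detector.predict(readings)   # -1 marks an anomalous reading, 1 a normal one
print(f"flagged {np.sum(flags == -1)} suspicious readings out of {len(readings)}")
```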

11.4 DECISION SUPPORT SYSTEMS (DSSs) TO INDUSTRY 4.0


Decision-making procedures have grown more complicated in the era of Industry 4.0
as a result of the abundance of data, interconnected systems, and dynamic operating
contexts. DSSs have become essential instruments for navigating these challenges. The
notion of DSS is explained in this section, along with how they apply to Industry 4.0.

11.4.1 Understanding Decision Support Systems


A DSS is a collaborative computer-based technology that helps decision makers nav-
igate complicated issues and reach well-informed choices. To give decision makers
pertinent information and help throughout the decision-making process, DSS com-
bines data, models, analytical approaches, and user interfaces. DSSs are intended
to improve decision quality, increase the capacity for human decision-making, and
improve organisational performance [21].

11.4.2 Key Components of DSS


1. Data management: Decision-making in DSS is mostly dependent on data.
They combine and manage information from many sources, including inter-
nal databases, outside sources, and real-time data streams from Industry 4.0
technologies like IoT devices and sensors. To give decision makers current
and reliable information, data management entails data collection, storage,
processing, and retrieval [22].
2. Models and analytical techniques: DSSs use models and analytical meth-
ods to analyse data and produce insights for decision-making [23]. These
models might range from straightforward statistical models to complex
machine-learning methods. To extract patterns, detect trends, and simulate
scenarios, analytical tools including data mining, optimisation, simulation,
and predictive analytics are used, allowing decision makers to weigh risks
and probable outcomes.
3. User interfaces and visualisation: Decision makers can interact with DSS
and visually explore data because of its user-friendly interfaces. Charts,
graphs, dashboards, and interactive maps are examples of visualisations
that are used to convey data and insights in an easily comprehensible way.
Decision makers can modify data, run simulations, perform “what-if” anal-
ysis, and base choices on the outcomes thanks to user interfaces [24].

11.4.3 Relevance of DSS to Industry 4.0


1. Complexity management: An operational environment that is dynamic and
complex, with connected systems and a wealth of data, is introduced by
Industry 4.0. By integrating, analysing, and presenting data from diverse
sources, DSSs give decision makers the tools they need to handle this com-
plexity. DSSs give decision makers the ability to grasp the effects of their
choices and take well-informed actions in Industry 4.0 by demystifying
complex information [25].
2. Real-time decision support: Real-time data streams are produced by
Industry 4.0 technologies from sensors, machines, and other sources. This
data can be processed and analysed by DSSs in real time, giving decision
makers the most recent information for effective decision-making. In the
fast-paced Industry 4.0 environment, real-time decision support enables
organisations to adapt swiftly to changing circumstances, optimise opera-
tions, and grasp opportunities [26].
3. Optimisation and resource allocation: DSSs can improve decision-making
by using analytical methods like optimisation algorithms. These algorithms
assist businesses in effectively allocating resources, streamlining produc-
tion schedules, managing inventories, and balancing workloads. In the
context of Industry 4.0 [27], DSSs empower decision makers to take into
account a variety of factors, restrictions, and goals in order to make choices
that maximise efficiency and cut costs.
4. Predictive and prescriptive analytics: Decision-making in Industry 4.0 is
supported by DSSs using predictive and prescriptive analytics. Organisations
can estimate upcoming events, spot trends, and spot potential dangers
thanks to predictive analytics tools like machine-learning models [28].
Prescriptive analytics methods go beyond forecasts to offer decision makers
advice and the best courses of action based on simulated situations, restric-
tions, and corporate goals.

11.4.3.1 Potential for Optimising Decision-Making


Industry 4.0 has enormous promise for using AI to improve decision-making. The
following advantages can be attained by organisations by utilising AI technologies:

• Real-time data analysis, preventative maintenance, and process optimisation
have all increased operational efficiency.
• Better product quality and fewer flaws thanks to real-time tracking and
adaptive controls.
• Decision-making that is quicker and more accurate thanks to insights and
suggestions derived from data from an AI-powered DSS.
• Better inventory management, demand planning, and resource allocation
thanks to AI-driven forecasting.
• Using automation and autonomous systems, reduced expenses, and boosted
productivity.

11.5 AI TECHNIQUES ENHANCING DECISION-MAKING


1. Machine learning: AI systems may learn from data and make predictions
or take actions without explicit programming thanks to machine-learning
techniques [29]. In Industry 4.0, machine-learning models can be trained
on historical data to find patterns, categorise data, and generate precise
predictions [30]. With this capacity, decision makers can use machine-
learning algorithms to improve decision-making processes for tasks like
demand forecasting, predictive maintenance, quality control, and anomaly
detection.
2. Deep learning: Deep learning, a kind of machine learning, processes
and analyses complicated data structures using artificial neural networks.
Deep-learning algorithms are excellent at tasks in Industry 4.0 such as
complicated pattern recognition, audio and picture recognition, and NLP
[31]. Deep learning can be used to enhance decision-making processes with
sophisticated features, including real-time camera visual data analysis,
automated voice processing for customer interactions, and sentiment analy-
sis for interpreting consumer feedback [31].
3. Natural language processing (NLP): NLP is focused on making it possible
for computers to comprehend and process human language [32]. In Industry
4.0, decision-making systems use NLP approaches to interpret and extract
meaning from unstructured textual data such as emails, reports, and
customer evaluations. NLP's automated text summarisation, sentiment
analysis, information retrieval, and automated language translation
capabilities improve decision-making by extracting insightful information
from textual input; a minimal sentiment-analysis sketch follows this list.
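The sketch pairs a TF-IDF representation with logistic regression, one common way to build a small sentiment classifier; the four toy reviews and their labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "machine stops constantly, terrible reliability",
    "excellent build quality and fast support",
    "delivery was late and parts were missing",
    "works flawlessly, very satisfied with throughput",
]
labels = [0, 1, 0, 1]  # 0 = negative, 1 = positive

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reviews, labels)

# Classify a new piece of customer feedback
print(clf.predict(["support was fast and the machine works flawlessly"]))
```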

11.6 AI-BASED DATA COLLECTION AND PREPROCESSING IN INDUSTRY 4.0
Data gathering and preprocessing are essential if businesses are to benefit from
Industry 4.0 through relevant insights, wise decisions, and process optimisation.
The following factors help to clarify how important these steps are:

1. Quality decision-making: Careful data collection gives organisations
confidence that they have access to current and accurate information. By
gathering data from numerous sources in the industrial environment, such
as sensors, machines, and production processes, organisations can make
data-driven decisions. Preprocessing improves data quality further by
cleaning and organising the data, ensuring that decision makers have access
to accurate and consistent information for wise decision-making.
2. Predictive analytics and optimisation: Industry 4.0 data gathered from
many sources can offer insightful information for predictive analytics and
optimisation. Data can be integrated and transformed using pre-processing
procedures, resulting in the generation of a single dataset. The cornerstone
for building AI-driven models is this unified and well-organised data,
which enables precise forecasting and optimisation in fields like predictive
maintenance, demand forecasting, and resource allocation.
3. Improved efficiency and productivity: Efficient data collection and pre-
processing enable organisations to identify bottlenecks, inefficiencies, and
opportunities for improvement within their processes. By collecting data in
real time and preprocessing it to remove outliers and irrelevant information,
organisations can gain insights into process optimisation, reduce downtime,
minimise waste, and enhance overall operational efficiency. This ultimately
leads to improved productivity and cost savings.

11.6.1 Various Data Sources and Challenges in Data Collection


Industry 4.0 refers to a diverse set of data sources that contribute to the overall digital
transformation. Some of the important data sources are as follows:

1. Sensors and IoT devices: Sensors incorporated in machinery and gadgets
produce real-time data on a variety of factors, including temperature,
pressure, vibration, and energy consumption. However, obstacles occur
when it comes to managing the huge number and variety of sensor data,
assuring data correctness, and dealing with issues like sensor failure and
calibration [33].
2. Production systems and processes: Data acquired from manufacturing
systems, such as manufacturing execution systems (MES) and supervisory
control and data acquisition (SCADA) systems, gives information about
production performance, cycle times, downtime, and quality. Integrating
data from disparate production systems, coping with data incompatibility,
and preserving data integrity and consistency are all challenges.
3. Supply chain and logistics: Data from supply chain and logistics activi-
ties, such as inventory levels, order monitoring, and transportation data,
is critical for making informed decisions. Integrating data from multiple
stakeholders and systems, synchronising data across the supply chain, and
guaranteeing data security and privacy are all challenges.
4. Customer interactions and feedback: Understanding consumer behaviour
and making wise company decisions depend heavily on customer data, which
includes purchase history, comments, and preferences. Managing massive
amounts of customer data, protecting data privacy and compliance, and getting
useful insights from unstructured data sources like social media are difficult.

11.6.2 Pre-Processing Techniques for Data Cleaning and Aggregation


Pre-processing techniques are used to guarantee data quality and prepare it for
AI-driven decision-making algorithms, as shown in Figure 11.3.
Pre-processing procedures that are frequently used include:

1. Data cleaning: Data cleaning involves eliminating duplicate records, fixing
errors, handling missing values, and dealing with outliers. To guarantee the
accuracy and consistency of the data, methods like data imputation, outlier
detection, and data validation are used [34] (see the sketch after this list).

FIGURE 11.3 Pre-processing techniques for data cleaning and aggregation.


2. Data aggregation: Data aggregation is the process of fusing information
from several sources or levels of resolution to produce an integrated and
complete dataset [35]. Data can be compressed while retaining its key
information by using aggregation techniques like averaging, summarising,
or grouping data based on important properties.
3. Data transformation: Data transformation is the process of transforming
data into a format that is acceptable for modelling and analysis [36]. To
standardise data and bring it to a similar scale, methods like normalisation,
scaling, and encoding are utilised. This ensures fair comparisons and com-
patibility across various variables.
4. Feature engineering: The process of developing new features or altering
existing ones to capture pertinent data for modelling decision-making is
known as feature engineering [37]. Techniques like feature selection,
dimensionality reduction, or developing derived features based on domain
knowledge can be used for this.
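A hedged pandas sketch of the four steps above on a made-up sensor log follows; all column names, thresholds, and weights are assumptions made for the example.

```python
import pandas as pd

df = pd.DataFrame({
    "machine": ["m1", "m1", "m2", "m2", "m2"],
    "temp_c": [71.2, None, 69.8, 70.4, 250.0],   # one missing value, one outlier
    "vibration": [0.21, 0.19, 0.22, 0.20, 0.23],
})

# 1. Cleaning: impute the missing value, drop implausible outliers
df["temp_c"] = df["temp_c"].fillna(df["temp_c"].median())
df = df[df["temp_c"].between(0, 120)]

# 2. Aggregation: per-machine summary features
agg = df.groupby("machine").agg(temp_mean=("temp_c", "mean"),
                                vib_max=("vibration", "max"))

# 3. Transformation: min-max scaling to a common [0, 1] range
scaled = (agg - agg.min()) / (agg.max() - agg.min())

# 4. Feature engineering: a derived stress indicator from domain knowledge
scaled["stress_index"] = 0.7 * scaled["temp_mean"] + 0.3 * scaled["vib_max"]
print(scaled)
```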

11.7 AI MODELS FOR DECISION-MAKING IN INDUSTRY 4.0


AI models and algorithms have transformed decision-making processes in the con-
text of Industry 4.0. This section introduces several AI models suited for decision-
making, covers the usage of supervised, unsupervised, and reinforcement learning
methods, and investigates case studies where AI models have been successfully
deployed in Industry 4.0.
There are several AI models used for decision-making across various domains.
Here are some commonly used models, shown in Figure 11.4.

FIGURE 11.4 AI models for decision-making in Industry 4.0.

Decision trees: When making decisions based on input features, hierarchical
models called decision trees adopt a tree-like structure. They can manage
both numerical and categorical data and are simple to interpret. Decision
trees are widely utilised for classification and regression problems in a
variety of sectors, including banking, health care, and marketing.
Random forest: Random forest is an ensemble learning model that combines
multiple decision trees: it generates many decision trees and aggregates their
forecasts to arrive at a final choice. Random forest is well known for its
robustness, precision, and aptitude for handling high-dimensional data, and
it is frequently used in e-commerce, finance, and fraud detection (a minimal
sketch follows this list).
Support vector machines (SVM): SVM is a supervised learning model applied
to regression and classification tasks. It functions by identifying the best
hyperplane to use for dividing data points into several classifications. SVM
is used in fields like bioinformatics, image classification, and text classifi-
cation because it performs well in situations where there are distinct class
borders.
Neural networks: The structure and operation of the human brain served as the
inspiration for the class of models known as neural networks. They are made
up of interconnected nodes (neurons) arranged in layers. Deep-learning
models are deep neural networks with numerous hidden layers. Many deci-
sion-making processes, including speech and image recognition, NLP, and
recommendation systems, rely on neural networks.
Bayesian networks: Bayesian networks use directed acyclic graphs to represent
probability interactions between variables. They can describe both causal and
conditional interdependence between variables. Bayesian networks are used for
uncertain decision-making tasks like risk assessment, diagnosis, and prediction.
Reinforcement learning: Decision-making in reinforcement learning mod-
els develops through engagement with the environment. By experiment-
ing with various activities and taking feedback into account, they seek to
maximise a reward signal. Applications for reinforcement learning include
robotics, gaming, and autonomous systems.
Genetic algorithms: Natural selection serves as the inspiration for a certain
kind of optimisation method known as genetic algorithms. They employ a
population-based strategy that reflects the development of biological spe-
cies. In fields like scheduling, resource allocation, and logistics, genetic
algorithms are employed to solve complicated optimisation problems.
Fuzzy logic: Fuzzy logic models take into account and make sense of ambigu-
ity and imperfect information. To deal with subjective or ambiguous data,
they employ linguistic variables and fuzzy rules. Expert systems, control
systems, and pattern recognition are a few examples of decision-making
processes that make use of fuzzy logic.
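The minimal random-forest sketch referenced above trains a quality-control classifier on synthetic process data; the generated features and defect labels are stand-ins for real telemetry.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic dataset: 8 process features, label 1 = defective part (assumed)
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)
print(f"hold-out accuracy: {forest.score(X_te, y_te):.3f}")
print("feature importances:", forest.feature_importances_.round(2))
```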

11.8 IMPROVING DECISION-MAKING EFFECTIVENESS, EFFICIENCY, AND ACCURACY WITH AI-POWERED DSS
Decision-making procedures have been revolutionised by AI-powered DSSs,
which provide businesses with previously unattainable capabilities to increase
accuracy, efficiency, and effectiveness. This section examines how an AI-powered
DSS may improve decision-making through advanced analytics, automation, and
beneficial insights.

11.8.1 Improved Decision-Making Accuracy


1. Advanced analytics and data analysis: DSSs driven by AI use machine
learning, deep learning, and other advanced analytics methods to analyse
massive volumes of data precisely. DSSs can find patterns, connections, and
insights that people would miss by analysing and interpreting complex data
from diverse sources. As a result, projections are more accurate, trends are
better understood, and decision-making is more accurate.
2. Predictive analytics and forecasting: AI models linked with DSSs can carry
out predictive analytics and forecasting on real-time and historical data. By
spotting patterns and trends in the data, these models can forecast future
results and anticipate potential risks or opportunities. AI-powered DSSs
enable proactive decision-making and lessen dependency on reactive tactics
by giving decision makers accurate forecasts.
3. Real-time insights and decision support: DSS is able to deliver current and
relevant insights because AI algorithms continuously analyse real-time
data streams. Decision makers receive timely information, enabling them
to make data-driven decisions in changing contexts. Real-time insights give
businesses a competitive edge by enabling them to react swiftly to changing
circumstances, optimise processes, and capture opportunities before they
are no longer viable.

11.8.2 Enhanced Decision-Making Efficiency


1. Automation and streamlined processes: Decision makers may concen-
trate on more strategic, high-level duties thanks to AI-powered DSS,
which automate repetitive, time-consuming procedures. Data collection,
preparation, and analysis are all automated by DSS, freeing up time and
resources. By minimising errors, expediting the decision-making process,
and eliminating manual work, this automation increases the efficiency of
decision-making.
2. Optimisation and resource allocation: AI algorithms implemented in DSSs
can optimise decision-making processes by taking into account many fac-
tors, limitations, and objectives. Optimisation algorithms, for example, can
aid in the effective allocation of resources, the optimisation of production
schedules, and the balancing of workloads. AI-powered DSSs boost effi-
ciency and maximise resource utilisation by streamlining resource alloca-
tion and optimising operational operations.
3. Personalised recommendations and adaptive support: Based on unique
user profiles and preferences, AI-powered DSSs can offer personalised
recommendations. DSSs can provide information, insights, and decision
assistance that are customised to each decision maker by comprehending
their unique needs. Additionally, AI systems may adjust and learn from
user interactions, progressively enhancing the accuracy and usefulness of
recommendations over time.

11.8.3 Increased Decision-Making Effectiveness


1. Simulations and scenario analysis: Simulations and scenario analysis are
made possible by AI-powered DSS, allowing users to assess the effects of
various options and predict future outcomes. Decision makers can investi-
gate alternative methods, evaluate risks, and select the best course of action
by modelling numerous situations. By taking into account a variety of pos-
sibilities and their implications, this proactive strategy improves decision-
making efficiency.
2. Intelligent insights and interpretability: AI-powered DSSs offer insightful
data that help decision makers better comprehend the underlying elements
driving outcomes. These insights, supported by AI models' capacity to
analyse complicated relationships within data, improve decision makers'
understanding and interpretation of information. Transparent and
interpretable AI models further aid successful decision-making by offering
concise justifications and explanations for suggestions.
3. Collaboration and collective decision-making: AI-powered DSSs encour-
age collaboration and group decision-making by easing information
exchange and communication amongst decision makers. DSSs promote
collaboration and ensure that decisions are based on collective intelligence
by offering a shared platform for accessing and analysing data. Utilising
the knowledge and viewpoints of numerous stakeholders, this collaborative
approach improves the effectiveness of decision-making.

11.9 ETHICAL CONSIDERATIONS AND CHALLENGES IN AI-DRIVEN DECISION-MAKING IN INDUSTRY 4.0
Industry 4.0’s incorporation of AI-driven decision-making systems has many advan-
tages, but it also presents ethical problems and issues that need to be resolved. This
section examines the ethical ramifications of Industry 4.0’s AI-driven decision-mak-
ing and underlines the difficulties that organisations encounter in implementing ethi-
cal and responsible AI practices.

11.9.1 Ethical Implications of AI-Driven Decision-Making


1. Bias and discrimination: AI models can reinforce and magnify biases in
the data if they are trained on biased or inadequate datasets. This may lead
to unjust loan approvals, biased hiring practices, or unequal treatment of
persons, among other discriminatory decision-making results. Addressing
bias in AI systems is essential, as is ensuring fairness and equitable chances
for all parties involved.
2. Lack of explainability and transparency: Understanding how decisions are
made by AI algorithms can be difficult because the algorithms can be complex
and opaque, especially those based on deep learning. This lack of transparency
and explainability raises accountability issues, since decision makers may be
unable to justify the reasoning behind AI-driven decisions. For stakeholders
to develop a sense of confidence and understanding, AI models must be
transparent and easy to interpret.
3. Privacy and data protection: AI-driven decision-making systems rely on
enormous volumes of data, including sensitive and personal information.
To avoid unauthorised access, misuse, or breaches that may jeopardise
people’s right to privacy, organisations must place a high priority on data
privacy and protection. To keep the trust and confidence of users and
stakeholders, compliance with pertinent regulations, such as the GDPR, is
crucial.
4. Accountability and responsibility: AI-powered decision-making systems
can blur responsibility for outcomes. Organisations must establish clear
lines of authority and ensure that decision makers are aware of the
limitations and risks of AI models. Establishing appropriate governance
structures and holding people accountable for AI-driven outcomes are
essential for ethical decision-making practices.

11.9.2 Challenges in Ensuring Responsible and Ethical AI Practices


1. Data quality and bias mitigation: A fundamental problem in AI-driven
decision-making is ensuring high-quality and impartial data. To reduce
biases and guarantee that AI models are trained on representative and var-
ied data, organisations must carefully choose, clean, and validate their data-
sets. To recognise and eliminate any bias that may emerge during system
deployment, AI systems must undergo continuous monitoring and auditing.
2. Interpretable and explainable AI: It can be difficult to create AI models
that are understandable and interpretable, especially in sophisticated deep-
learning architectures. To produce transparent AI models that can offer
comprehensive justifications for their choices, organisations must spend
money on research and development. By ensuring interpretability, decision
makers may comprehend and question AI-driven judgements, encouraging
moral decision-making.
3. Governance and regulation: The creation of suitable governance structures
and rules is being outpaced by the speed at which AI technology is develop-
ing. To regulate the use of AI in decision-making, it is essential to establish
ethical standards, business norms, and legal frameworks. To develop strong
and enforceable legislation that supports ethical AI practices, cooperation
between legislators, industry professionals, and ethicists is essential.
4. Human oversight and decision-making: While AI algorithms can offer
insightful analysis, human oversight is essential to ensure ethical decision-
making. Decision makers require training and education on the limitations,
biases, and ethical ramifications of AI-driven systems. Building a strong
human-AI collaboration that allows humans to use AI as a tool, rather
than depending solely on automated decisions, encourages moral and
responsible decision-making.

11.9.3 Privacy, Bias, Transparency, and Accountability in Decision-Making Systems

A number of important problems require attention as decision-making systems
depend more and more on AI and advanced analytics. This section discusses
privacy, bias, transparency, and accountability in decision-making systems and
emphasises how critical it is to address these issues in order to promote ethically
sound and accountable decision-making.
Privacy concerns in decision-making systems:

1. Data privacy and security: Systems for making decisions depend on enor-
mous volumes of data, including sensitive and private data. When people’s
data is gathered, saved, and analysed without their knowledge or agree-
ment, privacy concerns arise. To safeguard information from unauthorised
access or breaches, organisations must prioritise data privacy and have
strong security measures in place.
2. Informed consent and data usage: In decision-making processes, open
communication and securing people’s informed consent are crucial. The
collection, usage, and sharing of a person’s data should be under their con-
trol. Organisations must acquire informed consent, make sure data protec-
tion laws are followed, and provide clear explanations of how data will be
used.

Bias in decision-making systems:

1. Bias in training data: Decision-making systems may unintentionally con-
tain bias if the training data is biased or not representative. As a result, there
may be biased effects that worsen disparities already present in society. To
ensure fair and objective decision-making processes, organisations must
critically assess and correct biases in training data.
2. Algorithmic bias: AI algorithms can introduce bias if they are not
carefully designed and trained. Skewed training data, poor algorithm design,
or poor feature selection can all lead to bias. In order to reduce discriminatory
effects in decision-making systems, it is critical to regularly moni-
tor and evaluate algorithms for bias and implement remedial measures.

Transparency in decision-making systems:

1. Explainability and interpretability: Decision-making processes ought
to be open and offer justifications for their results. Decisions may be
difficult to comprehend when they rely on complex AI algorithms, such as
deep-learning models, because these algorithms can be difficult to interpret.
In order to provide openness and give people access to the elements
influencing decisions, organisations should work to create explainable
AI models.
2. Access to information: Giving decision makers and those who will be
impacted access to pertinent information is another aspect of transparency.
Making data sources, methodology, and algorithms available to stake-
holders so they may assess and contest decisions is crucial. Openness and
accountability in decision-making processes promote trust.

Accountability in decision-making systems:

1. Human oversight and decision makers’ responsibility: Human oversight
and responsibility are still essential in decision-making, despite the grow-
ing importance of AI. Decision makers need to use critical thinking and
keep AI-driven systems’ limits, biases, and ethical consequences in mind.
They are accountable for the final choices made in light of the results of
decision-making systems.
2. Auditing and governance: Strong governance procedures must be
established by organisations, and decision-making systems need to be
regularly audited. By doing this, responsibility is guaranteed, potential
biases or inaccuracies are exposed, and chances for remedial action are
presented. Organisations can exhibit ethical decision-making proce-
dures and uphold accountability with the aid of auditing and governance
systems.

11.9.4 Strategies and Best Practices for Addressing Ethical Concerns in Decision-Making Systems

Organisations need to implement strategies and best practices to successfully man-
age these issues as ethical concerns regarding decision-making systems driven by
AI and advanced analytics grow increasingly prevalent. This section explores
effective tactics and practices for addressing privacy issues and bias, promoting
transparency, and ensuring accountability in decision-making systems.

1. Privacy protection strategies:


a. Data minimisation: Organisations should adopt a data minimisation
approach, collecting only the necessary data required for decision-
making processes. This reduces the risk of unauthorised access, data
breaches, and potential privacy infringements.
b. Anonymisation and aggregation: Utilise methods to protect individual
privacy, such as aggregation and anonymisation. Organisations can pre-
serve privacy while still gaining useful insights by aggregating data and
deleting individually identifiable information.
c. Privacy by design: Privacy considerations should be addressed from the
outset when developing decision-making systems. Data privacy must be
built into the system design through privacy-preserving technologies and
practices, such as differential privacy.
2. Bias mitigation strategies:
a. Diverse and representative data: Make sure that the training datasets
are representative, varied, and bias-free. Balance data sources to lessen
underrepresentation and discriminatory consequences by actively mon-
itoring biases in the data collection process.
b. Regular algorithmic auditing: To find and reduce biases, conduct
routine audits and reviews of algorithms. Implement methods to mea-
sure and address bias throughout the lifecycle of the system, such as
fairness metrics and algorithmic impact assessments (a minimal
fairness-audit sketch follows this list).
c. Ethical AI frameworks: Create ethical AI frameworks that direct the
creation, implementation, and oversight of AI models. These frame-
works should place a strong emphasis on fairness, equality, and the
proper application of AI to reduce biases in decision-making processes.
3. Transparency promotion strategies:
a. Explainable AI: Make the creation of AI models that offer justifica-
tions and explanations for their choices a top priority. In order to help
decision makers and those who will be impacted by them comprehend
the variables impacting decisions, explainable AI emphasises interpret-
ability and transparency.
b. Openness and access: Encourage an open culture and give stakeholders
access to pertinent data, processes, and algorithms related to the deci-
sion-making process. Encourage participants to question and reevaluate
decisions using transparent and easily accessible information.
c. Algorithmic documentation: Keep thorough records of all the algo-
rithms used in decision-making systems, including design decisions
and the thinking behind them. Understanding, accountability, and
transparency are all aided by this material.
4. Accountability enhancement strategies:
a. Human-in-the-loop approach: Adopt a “human in the loop” strategy to
ensure human monitoring and responsibility. When making decisions
based on the results of AI-driven systems, decision makers should use
critical judgement and take responsibility for their choices.
b. Auditing and evaluation: Conduct routine reviews and audits of deci-
sion-making processes to look for any potential biases, mistakes, or
unintended consequences. Create systems for tracking and resolving
problems that are discovered during auditing procedures.
c. Ethical governance frameworks: Establish solid ethical governance
frameworks that offer rules, regulations, and processes for ethical deci-
sion-making. Clarify the roles, duties, and procedures for addressing
ethical issues and maintaining responsibility.
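As one concrete form the algorithmic auditing in strategy 2b can take, the sketch below computes a simple demographic-parity gap over a batch of model decisions; the decision data, protected attribute, and 0.1 threshold are purely illustrative, and a real audit would combine several fairness metrics.

import numpy as np

def demographic_parity_gap(y_pred, group):
    # Difference in positive-outcome rates between the two groups.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1), rate_0, rate_1

# Hypothetical audit data: model decisions and a protected attribute.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
protected = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap, r0, r1 = demographic_parity_gap(decisions, protected)
print(f"Positive rate group 0: {r0:.2f}, group 1: {r1:.2f}, gap: {gap:.2f}")
if gap > 0.1:  # illustrative threshold; set per organisational policy
    print("Warning: parity gap exceeds audit threshold; investigate for bias.")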
11.10 FUTURE DIRECTIONS AND CONCLUSION OF AI-DRIVEN DECISION-MAKING IN INDUSTRY 4.0
As AI-driven decision-making advances within the context of Industry 4.0, it is criti-
cal to investigate the future trends, emerging technologies, and research initiatives
that will define this subject. This section highlights the significant areas of progress,
summarises the main ideas covered in this chapter, and offers insights into the
future of AI-driven decision-making.

Future directions of AI-driven decision-making in Industry 4.0:


Augmented intelligence: AI-driven decision-making’s future depends on
enhancing human intelligence rather than displacing it. The use of aug-
mented intelligence systems, which incorporate the advantages of both
humans and AI algorithms, will enable decision makers to reach better
informed, practical, and moral decisions. These technologies will improve
human capacities, offer real-time insights, and facilitate group decision-
making in Industry 4.0.
Ethical AI governance: In order to achieve ethical and responsible AI-driven
decision-making, the future calls for strong ethical frameworks and gov-
ernance structures. Organisations and policymakers will need to use legal
frameworks, industry standards, and ethical principles to address issues
including privacy, bias, transparency, and accountability. This will
foster societal acceptance of AI-driven decision-making systems and pro-
mote trust and fairness in them.
Trustworthy AI: The creation of reliable AI systems will be given priority in
decision-making that is driven by AI in the future. This involves develop-
ing AI models that are open, comprehensible, equitable, and impartial. AI
that is trustworthy upholds moral behaviour, conforms to social norms, and
inspires confidence in users, stakeholders, and the general public.

11.10.1 Emerging Trends, Technologies, and Research Directions


Federated learning: AI models can be trained on distributed datasets via feder-
ated learning without sending raw data [38]. This method permits models to
learn from decentralised sources while maintaining data privacy. Federated
learning will be essential to AI-driven decision-making, especially in sensi-
tive industries like finance and health care (a minimal federated-averaging
sketch follows this subsection).
Responsible AI and explainability: In order to overcome the “black box” prob-
lem, research will concentrate on creating more comprehensible and
explainable AI models. Efforts will be made to reveal the decision-making
processes of sophisticated AI models so that users and stakeholders can
understand and contest their judgements. Addressing bias, fairness, and
accountability in AI systems will also be a primary emphasis of responsible
AI research [39].
Hybrid decision-making systems: The combination of AI and human deci-
sion makers will be a key area of study. Decision-making systems that
combine AI and human expertise will make use of their complementary
qualities. These technologies will allow humans to exert more control over
AI-assisted judgements while also utilising AI to strengthen decision-making
procedures [40].
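To make the federated learning idea above concrete, the following is a minimal federated-averaging sketch in pure NumPy, assuming a simple logistic-regression model and synthetic client data; production systems would add secure aggregation, real communication, and more capable models.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's local training: logistic-regression gradient steps.
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient descent step
    return w

def federated_average(client_weights, client_sizes):
    # Server step: weight each client's model by its dataset size.
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Three clients whose private data never leaves the client.
clients = [(rng.normal(size=(40, 3)), rng.integers(0, 2, 40)) for _ in range(3)]

for round_ in range(10):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
print("Global model after federated training:", global_w)

Only model parameters cross the network in each round, which is what preserves data privacy in this scheme.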

11.11 CONCLUSION
AI-driven decision-making in Industry 4.0 has enormous potential to transform organ-
isations’ decision-making capacities. The creation and use of decision-making
systems will be shaped by augmented intelligence, ethical AI governance, and
trustworthy AI, while federated learning, responsible AI, and hybrid decision-making
systems will develop as important research directions. In conclusion, this chapter
examined the use of AI in Industry 4.0, the incorporation of AI models into DSS
frameworks, and the ethical issues surrounding AI-driven decision-making. We cov-
ered the significance of data gathering and preprocessing, highlighted several AI
models appropriate for making decisions, and looked at the difficulties and solutions
for resolving ethical issues. In order to make sure that AI-driven decision-making
benefits society in the era of Industry 4.0, it is critical to keep advancing ethical AI
governance frameworks, fostering transparency, and pushing responsible AI prac-
tices. Organisations may embrace the revolutionary power of AI and make wise
decisions that lead to success in the dynamic and data-rich environment of Industry
4.0 by embracing these future directions.

ACKNOWLEDGEMENTS
My thanks go to the editor of the volume, Professor Arun Kumar Sangaiah, for
giving me the chance to write a chapter on the area of AI-driven IoT systems for
Industry 4.0 and making helpful suggestions.

AUTHOR BIOGRAPHIES
R. Bargavi is a research scholar at the Faculty of Management at SRM Institute of
Science and Technology, Kattankulathur Campus, pursuing doctoral studies in edu-
cational technology and sustainability.

REFERENCES
1. Mhlanga, D., “Artificial intelligence in the Industry 4.0, and its impact on poverty, inno-
vation, infrastructure development, and the sustainable development goals: Lessons
from emerging economies?” Sustainability, vol. 13, no. 11, p. 5788, 2021.
2. Jan, Z. et al., “Artificial intelligence for Industry 4.0: Systematic review of applications,
challenges, and opportunities,” Expert Syst. Appl., vol. 32, p. 119456, 2022.
3. Haleem, A., Javaid, M., Singh, R. P. and Suman, R., “Medical 4.0 technologies for
healthcare: Features, capabilities, and applications,” Internet Things Cyber-Physical
Syst., vol. 2, pp. 12–30, 2022.
4. Sodhi, H., “When Industry 4.0 meets Lean Six Sigma: A review,” Ind. Eng. J., vol. 13,
no. 1, pp. 1–12, 2020.
5. Gill, S. S. et al., “AI for next generation computing: Emerging trends and future direc-
tions,” Internet Things, vol. 19, p. 100514, 2022.
6. Ustundag, A. and Cevikcan, E., Industry 4.0: Managing the digital transformation.
Springer, 2017.
7. Terziyan, V., Gryshko, S. and Golovianko, M., “Patented intelligence: Cloning human
decision models for Industry 4.0,” J. Manuf. Syst., vol. 48, pp. 204–217, 2018.
8. Miśkiewicz, R. and Wolniak, R., “Practical application of the Industry 4.0 concept in a
steel company,” Sustainability, vol. 12, no. 14, p. 5776, 2020.
9. Pereira, A. C. and Romero, F., “A review of the meanings and the implications of the
Industry 4.0 concept,” Procedia Manuf., vol. 13, pp. 1206–1214, 2017.
10. Tseng, M.-L., Tran, T. P. T., Ha, H. M., Bui, T.-D. and Lim, M. K., “Sustainable indus-
trial and operation engineering trends and challenges toward Industry 4.0: A data
driven analysis,” J. Ind. Prod. Eng., vol. 38, no. 8, pp. 581–598, 2021.
11. Kagermann, H., “Change through digitization—Value creation in the age of Industry
4.0,” in Management of permanent change, Springer, 2014, pp. 23–45.
12. Bakthavatchalam, K. et al., “IoT framework for measurement and precision agriculture:
Predicting the crop using machine learning algorithms,” Technologies, vol. 10, no. 1,
2022, https://doi.org/10.3390/technologies10010013.
13. Da Xu, L., Xu, E. L. and Li, L., “Industry 4.0: State of the art and future trends,” Int. J.
Prod. Res., vol. 56, no. 8, pp. 2941–2962, 2018.
14. Oztemel, E. and Gursev, S., “Literature review of Industry 4.0 and related technolo-
gies,” J. Intell. Manuf., vol. 31, pp. 127–182, 2020.
15. Zizic, M. C., Mladineo, M., Gjeldum, N. and Celent, L., “From Industry 4.0 towards
industry 5.0: A review and analysis of paradigm shift for the people, organization and
technology,” Energies, vol. 15, no. 14, p. 5221, 2022.
16. Sarker, I. H., “Deep learning: A comprehensive overview on techniques, taxonomy,
applications and research directions,” SN Comput. Sci., vol. 2, no. 6, p. 420, 2021.
17. Jarrahi, M. H., “Artificial intelligence and the future of work: Human-AI symbiosis in
organizational decision making,” Bus. Horiz., vol. 61, no. 4, pp. 577–586, 2018.
18. Janiesch, C., Zschech, P. and Heinrich, K., “Machine learning and deep learning,”
Electron. Mark., vol. 31, no. 3, pp. 685–695, 2021.
19. Zhou, L., Pan, S., Wang, J. and Vasilakos, A. V., “Machine learning on big data:
Opportunities and challenges,” Neurocomputing, vol. 237, pp. 350–361, 2017.
20. Huh, Y. U., Keller, F. R., Redman, T. C. and Watkins, A. R., “Data quality,” Inf. Softw.
Technol., vol. 32, no. 8, pp. 559–565, 1990.
21. Zhong, R. Y., Newman, S. T., Huang, G. Q. and Lan, S., “Big data for supply chain
management in the service and manufacturing sectors: Challenges, opportunities,
and future perspectives,” Comput. Ind. Eng., vol. 101, pp. 572–591, 2016, https://doi.
org/10.1016/j.cie.2016.07.013.
22. Huberman, A. M. and Miles, M. B., “Data management and analysis methods,” 1994.
23. Sun, G.-D., Wu, Y.-C., Liang, R.-H. and Liu, S.-X., “A survey of visual analytics tech-
niques and applications: State-of-the-art research and future challenges,” J. Comput.
Sci. Technol., vol. 28, pp. 852–867, 2013.
24. Hearst, M. A., “User interfaces and visualization,” Mod. Inf. Retr., vol. 15, no. 16, pp.
257–323, 1999.
25. Fang, C. and Marle, F., “A simulation-based risk network model for decision support in
project risk management,” Decis. Support Syst., vol. 52, no. 3, pp. 635–644, 2012.
26. Lilien, G. L., Rangaswamy, A., Van Bruggen, G. H. and Starke, K., “DSS effective-
ness in marketing resource allocation decisions: Reality vs. perception,” Inf. Syst. Res.,
vol. 15, no. 3, pp. 216–235, 2004.
27. Dahl, S. and Derigs, U., “Cooperative planning in express carrier networks—An
empirical study on the effectiveness of a real-time decision support system,” Decis.
Support Syst., vol. 51, no. 3, pp. 620–626, 2011.
28. Deshpande, P. S., Sharma, S. C., Peddoju, S. K., Deshpande, P. S., Sharma, S. C. and
Peddoju, S. K., “Predictive and prescriptive analytics in big-data era,” Secur. Data
Storage Asp. Cloud Comput., vol. 1025, pp. 71–81, 2019.
29. Pawar, V. and Jose, D. V., Machine Learning Methods to Identify Aggressive Behavior in
Social Media BT. In: Dutta, P., Chakrabarti, S., Bhattacharya, A., Dutta, S., Shahnaz, C.
(eds) Emerging Technologies in Data Mining and Information Security, 2023, vol. 490,
pp. 507–513. Springer, Singapore. https://doi.org/10.1007/978-981-19-4052-1_50
30. Zhou, Z.-H., Machine learning. Springer Nature, 2021.
31. LeCun, Y., Bengio, Y. and Hinton, G., “Deep learning,” Nature, vol. 521, no. 7553,
pp. 436–444, 2015.
32. Kang, Y., Cai, Z., Tan, C.-W., Huang, Q. and Liu, H., “Natural language processing
(NLP) in management research: A literature review,” J. Manag. Anal., vol. 7, no. 2,
pp. 139–172, 2020.
33. Abooraghavan, R., Perumal, G. A., Rahuman, K. K., Suneel, C. and Jose, D., “Smart
driving license system in RTO using IoT technology,” AIP Conf. Proc., vol. 2444,
no. 1, p. 70004, Mar. 2022, doi: 10.1063/5.0078325.
34. Rahm, E. and Do, H. H., “Data cleaning: Problems and current approaches,” IEEE
Data Eng. Bull., vol. 23, no. 4, pp. 3–13, 2000.
35. Clark, W. A. V. and Avery, K. L., “The effects of data aggregation in statistical analy-
sis,” Geogr. Anal., vol. 8, no. 4, pp. 428–438, 1976.
36. Osborne, J., “Notes on the use of data transformations,” Pract. Assessment, Res. Eval.,
vol. 8, no. 1, p. 6, 2002.
37. Turner, C. R., Fuggetta, A., Lavazza, L. and Wolf, A. L., “A conceptual basis for feature
engineering,” J. Syst. Softw., vol. 49, no. 1, pp. 3–15, 1999.
38. Li, L., Fan, Y., Tse, M. and Lin, K.-Y., “A review of applications in federated learning,”
Comput. Ind. Eng., vol. 149, p. 106854, 2020.
39. Arrieta, A. B. et al., “Explainable artificial intelligence (XAI): Concepts, taxonomies,
opportunities and challenges toward responsible AI,” Inf. Fusion, vol. 58, pp. 82–115,
2020.
40. Akyuz, E. and Celik, M., “A hybrid decision-making approach to measure effective-
ness of safety management system implementations on-board ships,” Saf. Sci., vol. 68,
pp. 169–179, 2014.
12 Challenges in Lunar Crater Detection for TMC-2 Obtained DEM Image Using Ensemble Learning Techniques
Sanchita Paul, Chinmayee Chaini,
and Sachi Nandan Mohanty

12.1 INTRODUCTION
Deep learning is a branch of machine learning (ML) that focuses on training
multi-layered artificial neural networks (ANNs) to learn from and make sense of
enormous volumes of data. Neurons, the interconnected nodes that make up these
networks, mimic the behavior of real neurons. Each neuron in the network receives
input signals, carries out calculations, and then generates an output signal that
can be transmitted to other neurons. The influence of one neuron on another is
determined by the strength of the connections between neurons, which are rep-
resented by weights. It draws inspiration from the way that the neural networks
(NNs) in the human brain are organized and operate. The input and output layers
of deep learning models sometimes referred to as deep NNs (DNNs), are separated
by several hidden layers. These complex designs enable feature extraction from
unstructured data and hierarchical learning. Each layer of the network represents a
different degree of abstraction and is capable of learning. Deeper layers learn more
sophisticated and abstract properties, such as shapes or objects, whereas the top
layers collect low-level features like edges or corners in images. The network’s last
layers deliver the required results, like classification or prediction. Future-looking,
the deep learning discipline is developing quickly. To enhance performance and
address the issues, researchers are investigating novel designs, optimization meth-
ods, and learning algorithms. Research is ongoing in the domain of unsupervised
learning, which involves training models using unlabeled data. Deep learning algo-
rithms have proven to perform exceptionally well in a variety of challenging tasks,
including speech recognition, computer vision, and natural language processing
(NLP). The outstanding successes of deep learning in applications for artificial
intelligence have been considerably aided by this learning of hierarchical represen-
tations straight from data [1, 2].

DOI: 10.1201/9781003432319-12
However, lunar craters hold paramount importance in lunar surface explora-
tion, facilitating our understanding of lunar geology and aiding in the identifica-
tion of suitable landing sites for future missions. Nonetheless, the recognition of
lunar craters in digital elevation model (DEM) images acquired by the Terrain
Mapping Camera-2 (TMC-2) presents formidable challenges, owing to factors such
as data noise, variable lighting conditions, and data complexity. To surmount these
obstacles, this research endeavors to capitalize on the potency of ensemble learning
strategies. Ensemble learning techniques have garnered significant acclaim for their
ability to enhance the accuracy and robustness of classification tasks. In this study,
we introduce an innovative ensemble learning framework designed explicitly for the
detection of lunar craters. By employing this ensemble approach, we seek to over-
come the limitations of individual classifiers and enhance the overall performance
of crater detection. Our research delves into the design and implementation of the
ensemble learning system, exploring how it can effectively handle the intricacies of
lunar crater recognition in challenging environmental conditions. In the subsequent
sections, we delve into the ensemble learning framework, the integration of fuzzy
systems with deep learning classifiers (DLCs), and the utilization of real-time sensor
data with a Raspberry Pi [3].
The structure of this chapter is as follows: Section 12.2 provides an Overview
of Deep Learning and existing Machine Learning; Deep Learning Classifier and
Challenges in Deep Learning and Lunar Crater Detection; and Advancement in
Deep Learning for Lunar Crater Detection. Section 12.3 describes the Basics of
Fuzzy Learning, Types of Fuzzy Learning, Challenges of Fuzzy Learning and
Advancement in Fuzzy Learning, Section 12.4 contains a keen description of
Ensemble Learning. Section 12.5 gives a clear idea about the Annotation of DEM
Images and the Basics of IoT. Section 12.6 contains a description of the Basics of
TMC-2. Section 12.7 describes the IoT-Based Fusion Model. Finally, Section 12.8
gives the Conclusion and Future Perspective of the chapter.

12.2 OVERVIEW OF DEEP LEARNING


Numerous architectures for deep learning have emerged, but the convolutional NN
(CNN) stands out as the most popular because it resembles traditional NNs. A CNN
receives an image as input and arranges its neurons in a three-dimensional (3D) con-
figuration, as opposed to a traditional NN, which connects every neuron in one layer
to every neuron in the next. Instead of connecting to the complete previous layer,
each neuron only attaches to a small portion of it. The convolutional layer, nonlinear
activation layer (such as the rectified linear unit, ReLU), pooling layer, and fully con-
nected layer are some of the layers that make up a CNN. Convolution operations are
carried out between pixels in the input image and a filter by the convolutional layer,
producing volumes of feature maps that include the features that were extracted.
To strengthen the network’s nonlinearity and accelerate training, the ReLU layer
uses a nonlinear activation function. The pooling layer improves computing effi-
ciency and guards against overfitting by downsampling the input data to lower spa-
tial dimensionality. Because calculations are dependent on nearby pixels, this layer
is translation invariant. Each neuron in the fully connected layer, which is often the
last layer, is connected to every other neuron in the layer above it, like the hidden
layers of CNNs. CNNs can be modified for semantic segmentation even though they
are frequently employed for classification problems. The input image is separated
into equal-sized, tiny patches for semantic segmentation. Each patch’s center pixel is
classified by the CNN, and then the patch is moved to categorize the following cen-
ter pixel. The loss of spatial information as features travel into the final fully linked
network layers makes this method ineffective because the overlapping features of
the sliding patches are not reused. The fully CNN (FCNN) was developed to address
this problem. Transposed convolutional layers are used in FCNNs in place of the
final fully connected layers of the CNN. These layers conduct semantic segmenta-
tion and up-sampling on the low-resolution feature maps, recovering the original
spatial dimensions at the same time. The backpropagation algorithm is frequently
combined with an optimization algorithm-like gradient descent for training DNNs.
The first step in the procedure is to establish the gradient of a loss function, which
gauges the difference between the desired and expected outputs. The optimization
technique then uses this gradient data to iteratively update the network weights to
minimize the loss function’s value. The network is optimized iteratively until it per-
forms at a satisfactory level [3, 4].
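As a minimal illustration of the layer types and training procedure just described, the sketch below defines a small CNN and runs one backpropagation step, assuming PyTorch is available; the 64 x 64 input size, channel counts, and crater/non-crater labels are illustrative choices, not the architecture of any particular study.

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    # Minimal CNN with the layer types described above.
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),                                   # nonlinear activation
            nn.MaxPool2d(2),                             # pooling (downsampling)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # fully connected

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One training step: backpropagation with gradient descent on a toy batch.
model = SmallCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 64, 64)     # stand-in for 64x64 image patches
labels = torch.randint(0, 2, (8,))     # crater / non-crater labels

logits = model(images)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()                        # backpropagation computes gradients
optimizer.step()                       # gradient descent updates the weights
print("training loss:", loss.item())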

Key Concepts:
• Neural networks (NNs): ANNs, which are made up of interconnected nodes
called neurons, are a key component of deep learning. Together, neurons
from several layers process and transform data.
• Deep neural networks (DNNs): DNNs are NNs that include several hidden
layers and are used in deep learning models. These complex designs enable
feature extraction from unstructured data and hierarchical learning.
• Feature learning: It is also known as representation learning, which is the
automatic learning of high-level, abstract representations of data features
by deep learning models.
• Training and optimization: With the use of backpropagation algorithms
and big datasets, deep-learning models are trained. Updates to the network
parameters and the reduction of the discrepancy between expected and
actual outputs are achieved using optimization techniques like stochastic
gradient descent.
• Big data and computing power: Big data is essential for deep learning since
it needs a ton of labeled training data to pick up complicated patterns. It also
gains from the availability of strong hardware accelerators, such as graph-
ics processing units (GPUs), to handle the intensive computations required.

Machine learning: An ML-based image segmentation approach’s typical use is
to categorize the region of interest (ROI), such as a diseased region or a healthy
region. The preprocessing stage, which may involve the use of a filter to remove any
noise or for the goal of enhancing contrast, is the first step in the process of build-
ing such an application. The image is segmented using a segmentation technique
such as thresholding, a clustering-based approach, or edge-based segmentation after
the preprocessing stage. Following segmentation, features are taken from the ROI
based on size, texture, contrast, and color information. Following that, dominating
characteristics are identified using statistical analysis or feature selection methods
like principal component analysis (PCA). The chosen characteristics are then fed
into a support vector machine (SVM) or NN-based ML classifier as inputs. The ideal
boundary separating each class is chosen by the ML classifier using the input feature
vector and target class labels. After being trained, the ML classifier can be used
to categorize fresh, ambiguous data to establish its class. The selection of suitable
features, the length of the feature vector, the type of classifier, and the necessary pre-
processing needs depending on raw image attributes are examples of typical issues.
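The classical pipeline described above (feature extraction from segmented ROIs, PCA-based feature reduction, and SVM classification) might be sketched as follows with scikit-learn; the random feature matrix stands in for real hand-crafted ROI features and is purely illustrative.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature vectors extracted from segmented ROIs
# (e.g., size, texture, contrast, and colour statistics per region).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))               # 200 regions, 12 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # illustrative target labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Pipeline: scale features, reduce dimensionality with PCA, classify with SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))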
Deep learning classifiers: Since DLCs can process raw images directly, prepro-
cessing, segmentation, and feature extraction should not be necessary. The fixed
input size of most deep-learning architectures, however, necessitates resizing the
input images. Using data augmentation techniques during training can reduce
the need for steps such as intensity normalization and contrast enhancement.
DLC provides a greater level of classification accuracy since it may prevent mistakes
brought on by inaccurate feature vectors or sloppy segmentation. The fact that DLC
networks frequently include numerous hidden layers means that, in comparison to
ML-based techniques, more mathematical operations are carried out, making the
models for DLC networks more computationally costly. While the DLC uses the
image as input and outputs the object class, the ML classifier uses the feature vec-
tor as input. Because deep learning has more layers than traditional ANNs, theo-
retically it can be stated to be an improvement over ANN. As each layer translates
the input data from the preceding layer into a new representation at a higher and
slightly greater abstraction level, it is regarded as a type of representational learn-
ing. Multiple layers of NNs are used in deep-learning models to identify both local
and global relationships in data. Early layers extract features like edges and their
positions, by using nonlinear functions to alter the data in each layer. Deeper lay-
ers ignore small differences and take into account edge arrangements to discover
more complex patterns. These patterns come together to create intricate combina-
tions that depict related objects. Due to CNNs’ efficiency in processing picture data,
deep learning has seen a rise in their use. CNNs are well suited for tasks like image
classification and semantic segmentation because of the hierarchical design of their
layers, which enables the extraction of useful characteristics from images. Due to the
network-wide preservation of spatial information brought about by the development
of FCNNs, semantic segmentation has been further enhanced. Utilizing the back-
propagation algorithm and optimization methods, weights are iteratively adjusted dur-
ing the training of DNNs. Deep learning has significant promise for solving complex
challenges across a variety of disciplines with further research and breakthroughs.
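As an aside on the data augmentation mentioned above, a typical training-time augmentation pipeline might look like the following, assuming torchvision; the specific transforms and parameter values are illustrative choices.

from torchvision import transforms

# Augmentations applied on the fly during training; each epoch sees
# slightly different variants of the same images, which also reduces
# the need for separate contrast-enhancement preprocessing.
train_transforms = transforms.Compose([
    transforms.Resize((64, 64)),            # DLCs expect a fixed input size
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),                  # converts a PIL image to a tensor
])

Such a transform object would typically be passed to a dataset wrapper (for example, torchvision.datasets.ImageFolder) so that augmented variants are generated on the fly during training.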

12.2.1 Challenges in Deep Learning and Detecting Lunar Craters


Deep learning has its share of problems and restrictions, despite its achievements.
Interpretability is one difficulty. Because it can be challenging to comprehend how
DNNs make their predictions, they are frequently referred to as “black boxes.” Deep-
learning model interpretation and justification strategies are now being intensively
researched.
a. Interpretability: The inability of deep-learning models to be interpreted
is one of their key problems. DNNs are frequently referred to as “black
boxes” since it can be difficult to comprehend how they make predictions.
This lack of interpretability makes it difficult to trust and fully
understand how these models reach their decisions, and researchers are
actively investigating methods to interpret and explain them. To illuminate
the inner workings of these intricate models, tools such as feature
visualization, attribution methods, and model-agnostic interpretability
approaches have been developed.
b. Labeled data: For efficient training, deep learning models often need a lot
of labeled data. Such dataset acquisition and labeling can be costly, time-
consuming, and occasionally unfeasible, especially in fields with a dearth
of labeled data. This is a problem, particularly when working with special-
ized or niche fields where it could be difficult to collect labeled data. To
reduce the requirement for large amounts of labeled data, researchers are
investigating methods like transfer learning and few-shot learning, which
would allow models to learn from smaller, more varied datasets.
c. Computational requirements: Deep-learning models require a significant
amount of computer resources, including processing speed and memory.
DNN training can be laborious and time-consuming when applied to
huge datasets. Although GPUs have proved crucial in speeding up train-
ing, memory limitations may still be an issue. High-performance computer
infrastructure and effective parallel processing strategies are needed to
scale deep-learning models to accommodate larger datasets and more com-
plicated architectures. To meet these computing hurdles, hardware technol-
ogy and optimization techniques must continue to progress.

Let’s examine the difficulties encountered specifically when utilizing deep learning
to recognize lunar craters. Due to the unique properties of the lunar surface and the
dearth of labeled data, deep learning for the detection of lunar craters faces particu-
lar difficulties. Among the difficulties are:

a. Dataset availability: It can be difficult to acquire extensive and varied data-
sets specifically for lunar crater detection. The size of labeled lunar crater
databases may be constrained, and there may be significant variations in the
location of craters on the lunar surface. There are logistical and resource
issues associated with obtaining high-resolution lunar pictures and profes-
sional annotations for deep-learning model training.
b. Variability of crater characteristics: The surrounding landscape, as well as
the size, form, and depth of lunar craters, vary. Robust feature representa-
tions and generalization skills are necessary for designing a deep-learning
network that can effectively detect craters across this heterogeneity. On the
Moon, there are various lighting conditions, surface textures, and geologi-
cal differences that models must learn and take into account.
c. Generalization to unseen data: Deep-learning models must generalize suc-
cessfully to parts of the Moon that have not been observed. This is crucial
for precise crater recognition in less-explored or researched lunar regions.
A significant problem is making sure that models can adjust and find craters
in various lunar landscapes.
d. Labeling consistency and subjectivity: As the definition and identification
of craters might vary among experts, the process of annotating lunar cra-
ters may include subjective judgments. Training trustworthy deep-learning
models requires ensuring labeling consistency across various datasets and
annotators. To solve this problem, precise standards and rules for crater
markings must be established.
e. Preprocessing and noise handling: Due to issues with picture acquisition,
resolution, and sensor limits, lunar imagery may have noise, artifacts, and
fluctuations. For effective training, preprocessing techniques must be used
to improve image quality, eliminate noise, and normalize the data. To over-
come these obstacles, it is important to carefully study the unique qualities
of lunar photography.

To overcome these difficulties in lunar crater detection using deep learning, a mix
of cutting-edge methods, teamwork between researchers and subject-matter special-
ists, and the accessibility of extensive and carefully curated datasets are essential.
Ongoing research and developments in deep-learning methodology will enable
improved crater detection techniques, providing deeper insights into the lunar
surface and its geological history [5, 6].

12.2.2 Advancements in Deep Learning for Lunar Crater Detection
Deep learning has significantly improved several sectors, including the detection of
lunar craters. The application of deep-learning models to this task provides several
benefits and the ability to get around some of the issues that standard feature-based
classifiers have. Here are some significant developments in deep learning specifically
for finding lunar craters:

1. Improved performance: CNN-based deep-learning models, in par-
ticular, have outperformed more conventional feature-based classifi-
ers at spotting lunar craters. Deep-learning models can reduce errors
and information loss by using pixel values directly instead of having
to calculate and segment features. This improves detection results and
increases accuracy.
2. End-to-end learning: This is where the complete pipeline from raw image
data-to-crater identification is automatically learned and is made possible
by deep-learning models. This streamlines the development process and
reduces human bias by doing away with the requirement for manual feature
engineering and segmentation.
3. Feature representation learning: When learning hierarchical representa-
tions from data, deep-learning models shine. Deep-learning algorithms
can learn complex patterns and features that are essential for precise lunar
crater recognition. These models may capture both local and global cor-
relations in the data through many layers of NNs, improving the ability to
recognize craters in various lunar landscapes.
4. Transfer learning: Transfer learning is a method that applies models that
have already been trained on large-scale datasets to new problems with
little data. Deep-learning models can learn general features by being pre-
trained on large datasets, such as general image collections, and then being
fine-tuned on smaller lunar crater datasets. This strategy can enhance per-
formance and lessen the issue of scarce labeled data that is unique to lunar
craters.
5. Ensemble techniques: Multiple deep-learning models are combined using
ensemble learning to increase the reliability and accuracy of predictions.
Ensemble approaches can overcome the constraints of a single model and
improve generalization and classification performance by combining the
results of individual models. By capturing various crater traits and boost-
ing overall detection capability, ensemble approaches have demonstrated
promise in the detection of lunar craters (a minimal sketch combining
transfer learning and ensembling follows this list).
6. Domain-specific architectures: To detect lunar craters, researchers are
using specialized deep-learning architectures. To maximize performance,
these systems take into account particular aspects of lunar imaging and
include domain expertise. The accuracy and dependability of lunar cra-
ter detection can be further improved by developing models that take into
consideration elements like illumination conditions, surface textures, and
geological fluctuations.
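Combining points 4 and 5, the sketch below builds copies of an ImageNet-pretrained ResNet-18 with a replaced output layer (transfer learning) and averages their softmax outputs (ensembling), assuming a recent torchvision; the fine-tuning loop itself is omitted, and the input patches are random stand-ins for real DEM-derived image patches.

import torch
import torch.nn as nn
from torchvision import models

def build_finetuned_model(num_classes=2):
    # Start from ImageNet weights and replace the final layer for fine-tuning.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Ensemble: average the softmax outputs of several independently
# fine-tuned models to obtain a more robust prediction.
ensemble = [build_finetuned_model() for _ in range(3)]
for m in ensemble:
    m.eval()  # inference mode (fine-tuning on crater data would precede this)

def ensemble_predict(models_, images):
    with torch.no_grad():
        probs = [torch.softmax(m(images), dim=1) for m in models_]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

patches = torch.randn(4, 3, 224, 224)   # stand-in for DEM-derived patches
print("ensemble predictions:", ensemble_predict(ensemble, patches))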

A deeper understanding of the lunar surface’s geological history and surface fea-
tures could be obtained thanks to these developments in deep learning for lunar cra-
ter detection. We should anticipate further improvement in precisely detecting and
analyzing lunar craters as researchers continue to push the limits of deep-learning
algorithms and improve their applicability to crater detection [7, 8].

12.2.3 Real-Time Application of Deep Learning in Lunar Crater Detection


Deep learning has revolutionized various domains by achieving state-of-the-art per-
formance in numerous applications:

a. Computer vision: In image classification, object identification, picture seg-
mentation, and facial recognition applications, deep learning has achieved
outstanding results. In this field, CNNs are frequently employed.
b. Natural language processing: Sentiment analysis, machine translation,
text production, and question-answering systems have all considerably
improved thanks to deep-learning models. Transformers and recurrent NNs
(RNNs) are often employed as architectural types.
c. Speech and audio processing: Deep learning has made voice-controlled sys-
tems and virtual assistants more advanced by improving speech recognition,
speaker identification, speech synthesis, and audio categorization tasks.
d. Recommender systems: Deep learning has improved personalized sugges-
tions in e-commerce, social networking, and multimedia streaming plat-
forms by understanding customer preferences and trends from massive
amounts of data.
e. Healthcare and biomedicine: Deep learning is advancing the diagnosis and
treatment of diseases, the discovery of new drugs, the analysis of medical
images, and genomics.

Deep-learning methods have been also used in multiple real-time lunar crater
recognition applications, advancing our knowledge of the lunar surface and assist-
ing numerous lunar exploration missions. Here are some significant real-time deep-
learning applications in this area:

a. Autonomous lunar exploration: For real-time crater detection and classifi-
cation in autonomous lunar exploration missions, deep-learning algorithms
can be used. Deep-learning algorithms are used with onboard cameras and
sensors on lunar rovers or landers to create a system that can swiftly ana-
lyze photos and give feedback on the presence and characteristics of craters
in real time. This makes navigation, risk mitigation, and decision-making
during lunar exploration missions efficient.
b. Lunar crater mapping: Real-time mapping of lunar craters can be done
using deep-learning techniques. Deep-learning algorithms can accurately
detect and locate craters by analyzing photos taken by orbiting satellites or
lunar missions. Real-time crater mapping offers useful data for construct-
ing thorough maps of the lunar surface, analyzing the terrain, and selecting
probable landing locations for upcoming missions.
c. Lunar geology research: For the geological study, deep learning can help
with real-time analysis of lunar crater properties. Deep-learning algorithms
can shed light on the geological history and impact processes on the lunar
surface by automatically identifying and analyzing crater properties like
size, shape, distribution, and depth. Studying the lunar evolution, impact
dynamics, and surface composition is made easier thanks to this real-time
analysis.
d. Lunar resource exploration: Real-time identification of possible lunar
resources, including water ice within craters, can be supported by deep-
learning approaches. Deep-learning algorithms can find signs suggestive
of significant resources by examining multi- or hyperspectral data gathered
from lunar missions. Real-time resource discovery promotes sustainable
lunar exploration and helps future lunar resource utilization.
e. Planetary defense: The real-time detection and tracking of near-Earth
objects (NEOs) and possible impactors can benefit from deep-learning
methods. Deep-learning models can help early warning systems for
monitoring NEOs and determining potential impact risks by analyzing
astronomical imagery and spotting crater-like formations. Real-time
detection enables effective mitigation and reaction measures to safe-
guard our planet.
f. Lunar habitat planning: Deep-learning methods can help with the real-
time evaluation and choice of viable lunar habitats. Deep-learning models
can help determine the best locations for lunar colonies by examining pho-
tos and data relating to topographical features, radiation levels, and geologi-
cal stability. Future Moon colonization and human exploration are made
possible by real-time habitat planning.

The objective of this discussion [9] is to provide insights into various change
detection techniques and their limitations. It emphasizes that a good change detec-
tion technique should provide information about change areas, categorize change
types effectively, and offer an accurate assessment of change detection outcomes.
The advantage lies in the categorization of change detection techniques into those
conveying change information through binary maps and those showing detailed
transitional changes. The disadvantage is the challenge of determining appropriate
threshold values for segmentation, which can affect the accuracy of change detec-
tion results. It is acknowledged that post-classification-based techniques tend to offer
higher accuracy compared to algebra-based approaches, but the selection of the best
technique remains challenging and depends on factors such as remote sensing data
and the characteristics of the area under study. Researchers continue to explore dif-
ferent techniques and base their decisions on performance evaluations and accuracy
assessments. These real-time deep-learning applications for lunar crater recognition
show how important this technology is to the advancement of lunar exploration,
scientific inquiry, and future missions. Deep learning advances space exploration by
enabling accurate and effective processing of data from the lunar surface, furthering
our understanding of the Moon [6, 10].

12.3 BASICS OF FUZZY LEARNING


The process of training and adapting fuzzy systems based on data is known as
fuzzy learning. Fuzzy systems are mathematical models that can handle ambiguous
or inaccurate information. They are based on the fuzzy set theory. These systems can
enhance their performance by learning from examples and modifying their param-
eters thanks to fuzzy learning. There are several ways to go about fuzzy learning, but
supervised learning is one widely used method. In supervised learning, input-output
pairs that describe the expected behavior of the system are provided as a training
dataset. Finding a fuzzy system that correctly translates the inputs to the associated
outputs is the objective [11, 12].
The following steps are commonly included in the fuzzy learning process:

a. Fuzzification: The input data is transformed into fuzzy sets to express the
degree of membership of each input value in the fuzzy linguistic terms. This
process helps capture the data’s inherent ambiguity and imprecision.
b. Rule generation: Based on the training data, fuzzy rules are constructed.
Both antecedents (conditions) and consequents (outputs) make up these
rules. The consequents define the desired output fuzzy sets, whereas the
antecedents define the fuzzy sets to which the input values belong.
c. Rule evaluation: To calculate the overall output fuzzy set based on the input
fuzzy sets and their accompanying rule weights, the fuzzy rules are evalu-
ated using inference techniques. Combining the fuzzy sets and using fuzzy
logic operations like conjunction, disjunction, and implication are part of
this step.
d. Defuzzification: Through a defuzzification procedure, the resulting fuzzy
set is transformed back into a crisp value or set of crisp values. The final
output of the fuzzy system is generated in this stage. To determine whether
or not a region is a crater, the output of the fuzzy inference method is
defuzzified to produce a clear classification determination.
e. Training and adaptation: A learning algorithm is used to modify the fuzzy
system’s parameters, including the membership functions and rule weights.
The learning algorithm reduces the error between the system’s output and
the expected output using the training data.
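Steps (a) through (d) can be illustrated with a minimal Mamdani-style inference sketch in pure NumPy; the single "circularity" input, the triangular membership functions, and the two rules are hypothetical simplifications of a real crater-detection rule base.

import numpy as np

def trimf(x, a, b, c):
    # Triangular membership function with feet at a and c, peak at b.
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

out = np.linspace(0.0, 1.0, 101)         # output universe: crater likelihood
low_out = trimf(out, -0.5, 0.0, 0.5)     # 'low likelihood' output set
high_out = trimf(out, 0.5, 1.0, 1.5)     # 'high likelihood' output set

def crater_likelihood(circularity):
    # (a) Fuzzification of the input feature.
    mu_low = float(trimf(circularity, -0.6, 0.0, 0.6))
    mu_high = float(trimf(circularity, 0.4, 1.0, 1.6))
    # (b)-(c) Rule evaluation: IF circularity IS high THEN likelihood IS high,
    # and symmetrically for low; clipping implements the implication,
    # the elementwise maximum aggregates the rules.
    activated = np.maximum(np.minimum(mu_high, high_out),
                           np.minimum(mu_low, low_out))
    # (d) Defuzzification by the centroid of the aggregated fuzzy set.
    return float((out * activated).sum() / activated.sum())

print("likelihood for circularity 0.8:", round(crater_likelihood(0.8), 2))

In step (e), a learning algorithm would adjust the membership-function parameters and rule weights to reduce the error against labelled training examples.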

The benefits of fuzzy learning are numerous. It is suitable for modeling and
control in a variety of disciplines since it can manage complicated, ambiguous,
and inaccurate data. Fuzzy systems are interpretable and valuable for knowl-
edge representation because they can capture human-like decision-making and
reasoning processes. Fuzzy learning also enables continual optimization by
allowing the fuzzy system to adjust to new information or changing situations.

12.3.1 Fuzzy Learning
“In human cognition, almost all classes have unsharp (fuzzy) boundaries,” claims
Lotfi Zadeh (informally known as the fuzzifier of the crisp domain). We can there-
fore conform to Zadeh’s claim by coupling, embedding, or meshing fuzzy compo-
nents into NNs with bivalent logic. Fuzzy NNs were created through the union of
fuzzy logic’s knowledge representation capabilities and NNs’ learning capabilities.
As a result, the weakness of learning in fuzzy logic as well as the problem of NNs
being a “black box”—the inability to explain decisions (lack of transparency)—
have been overcome. The fundamental idea behind this neuro-fuzzy system (NFS)
is that it blends the learning and connectionist structure of NNs with the human-
like reasoning style of fuzzy systems. NFS offers strong and adaptable universal
approximations with the capacity to investigate comprehensible IF-THEN rules. In
many spheres of our social and technological existence, NFS usage is expanding.
Techniques for fuzzy learning can be used for a variety of feature extraction and
image processing tasks. These methods deal with uncertainty and imprecision in
picture data by using fuzzy logic and fuzzy sets. Here are a few common approaches
for fuzzy learning in this field:

a. Fuzzy clustering for image segmentation: Fuzzy clustering techniques
such as fuzzy c-means (FCM) can be used for image segmentation. FCM
assigns each pixel to every cluster with a varying degree of membership,
which allows flexible boundaries between regions and accommodates
imprecision in the segmentation. Meaningful regions can then be extracted
for later analysis and feature extraction (a minimal FCM sketch appears
after this list).
b. Fuzzy rule-based classification: For picture recognition and classification
problems, fuzzy rule-based classification is a well-liked method. The links
between the characteristics and class labels are captured by fuzzy rules that
are constructed based on features retrieved from images. The rules are then
assessed, and classification choices are made using fuzzy inferences. For
tasks like emotion recognition, face recognition, and object classification,
this strategy is helpful.
c. Fuzzy feature selection: To minimize dimensionality and improve classifi-
cation accuracy, feature selection is a crucial stage in image analysis. When
choosing which features to use, fuzzy feature selection approaches take into
account the features’ relevance, redundancy, and uncertainty. The level of
relevance and redundancy is represented by fuzzy sets, and the utility of
features is assessed using fuzzy measures. To choose the most informa-
tive features for upcoming processing and classification tasks, fuzzy feature
selection is helpful.
d. Fuzzy-based noise reduction: Noise reduction techniques for photos can be
used fuzzy logic. The uncertainty and imprecision of noise-affected visual
data can be modeled by fuzzy systems. Denoising techniques and fuzzy
filters can successfully reduce noise while retaining crucial image features.
These methods make adaptive adjustments to the filter parameters based on
the properties of the image and the noise levels. Applications like medical
image processing and the enhancement of blurry images benefit greatly
from fuzzy-based noise reduction techniques.
e. Fuzzy compression and reconstruction: Image compression and reconstruc-
tion can also make use of fuzzy-learning techniques. Fuzzy systems are
capable of capturing an image’s perceptual properties and optimizing
the compression process accordingly. Fuzzy-based compression tech-
niques use fuzzy sets to represent image attributes and exploit the
redundancy and irrelevance in the image data for effective compres-
sion. By using fuzzy inference and interpolation methods, fuzzy logic
can also help in the reconstruction of high-quality images from com-
pressed versions.
f. Handling uncertainty: Fuzzy logic makes it possible to encode and manipulate
uncertainty in image data. Unlike classical binary logic, which only accepts
values of 0 or 1, fuzzy logic allows the assignment of partial memberships to
various classes or concepts. Uncertainty in image analysis can arise from changing
illumination conditions, occlusions, or ill-defined object borders. Fuzzy learning
approaches can model and reason about this uncertainty, allowing for more flexible
and reliable analysis and recognition.
g. Enhanced analysis and recognition: By capturing the intricate dependencies and
interactions between image attributes and class labels, fuzzy learning techniques
can enhance analysis and recognition tasks. For instance, fuzzy rule-based
classification enables the creation of fuzzy rules that capture the ambiguous
connections between features and classes. These rules can capture the inherent
uncertainties and variances in image data, enabling more reliable and precise
categorization. Tasks that benefit from fuzzy learning techniques include object
classification, face recognition, and emotion recognition.
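
As referenced in item (a), the following is a minimal sketch of fuzzy c-means
image segmentation using the scikit-fuzzy library; the file name lunar_dem.png and
the choice of three clusters are illustrative assumptions, not values from this
chapter.

import numpy as np
import skfuzzy as fuzz
from PIL import Image

# Load a grayscale image (hypothetical file name) and flatten it into the
# (features, samples) layout expected by skfuzzy.cluster.cmeans.
img = np.asarray(Image.open("lunar_dem.png").convert("L"), dtype=float)
data = img.reshape(1, -1)

# Fuzzy c-means with 3 clusters (illustrative); m > 1 controls fuzziness.
cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(
    data, c=3, m=2.0, error=1e-5, maxiter=200, seed=42)

# Every pixel holds a membership degree in each cluster; a hard segmentation
# is recovered by taking the maximum membership per pixel.
segmentation = np.argmax(u, axis=0).reshape(img.shape)
print("Fuzzy partition coefficient:", fpc)

The membership matrix u, rather than the hard labels alone, is what provides the
flexible region borders described in item (a).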

This is a method that entails developing a set of fuzzy rules that can
differentiate between crater and non-crater regions based on particular
properties. These fuzzy learning methods for crater detection on the Moon have
the benefit of handling ambiguous and imprecise data, enabling more reliable and
exact identification of craters on the lunar surface. Fuzzy logic and learning
algorithms allow these techniques to adapt to changing situations and to enhance
performance over time. In general, the adaptability of fuzzy learning approaches
comes from their capacity to deal with ambiguity, noise, and imprecision in
visual data. Because they make use of fuzzy logic and fuzzy sets, these
strategies offer more adaptable and flexible solutions than conventional
approaches. This enables enhanced image processing and feature extraction
capabilities, resulting in more precise and significant outcomes across a range
of applications and domains [13, 14]. A minimal fuzzy rule sketch follows.
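
To make the IF-THEN rule idea concrete, here is a minimal sketch using the
scikit-fuzzy control API; the features (depth and circularity) and the rule base
are invented for illustration and are not the rules used in this chapter.

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Illustrative input features for a candidate region (hypothetical ranges).
depth = ctrl.Antecedent(np.arange(0, 11, 1), "depth")
circ = ctrl.Antecedent(np.arange(0, 1.01, 0.01), "circularity")
crater = ctrl.Consequent(np.arange(0, 101, 1), "crater_likelihood")

depth["shallow"] = fuzz.trimf(depth.universe, [0, 0, 5])
depth["deep"] = fuzz.trimf(depth.universe, [3, 10, 10])
circ["low"] = fuzz.trimf(circ.universe, [0, 0, 0.6])
circ["high"] = fuzz.trimf(circ.universe, [0.4, 1, 1])
crater["unlikely"] = fuzz.trimf(crater.universe, [0, 0, 50])
crater["likely"] = fuzz.trimf(crater.universe, [50, 100, 100])

# IF-THEN rules differentiating crater from non-crater regions.
rules = [
    ctrl.Rule(depth["deep"] & circ["high"], crater["likely"]),
    ctrl.Rule(depth["shallow"] | circ["low"], crater["unlikely"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["depth"] = 7.0
sim.input["circularity"] = 0.85
sim.compute()
print("Crater likelihood:", sim.output["crater_likelihood"])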

12.3.2 Challenges in Fuzzy Learning


Fuzzy learning also has its difficulties. To establish appropriate membership
functions and rules, fuzzy system designers need domain knowledge and skill.
When working with huge datasets or intricate systems, the learning process can be
computationally demanding. Fuzzy systems can also perform poorly due to
overfitting and underfitting, both of which are significant problems in ML. The
difficulties in detecting lunar craters, and those faced by the fuzzy model in
particular, are described in further detail below:

a. Variability in the lunar surface: Lunar craters differ greatly in size, shape,
depth, and surroundings. This diversity makes it difficult for fuzzy models to
effectively represent and recognize the various crater types. To get reliable
detection results, fuzzy membership functions must be created that capture the
diverse traits of craters and their surroundings.
b. Data availability and quality: High-resolution imaging and elevation
information are necessary for lunar crater discovery. Gathering such data can be
difficult due to restrictions on data availability and quality. The resolution,
noise, and artifacts found in lunar images can impact the performance of fuzzy
models. To enhance the quality of the input data, preprocessing methods such as
noise reduction and image enhancement are frequently required.
c. Handling imbalanced data: Because lunar craters are relatively rare compared
with the surrounding lunar surface, datasets are skewed such that positive crater
samples are vastly outnumbered by negative samples. This class imbalance may bias
fuzzy models in favor of the dominant class during training. Strategies such as
oversampling, undersampling, or class weighting can address this problem and
increase detection precision for the minority class (see the class-weighting
sketch after this list).
d. Generalization to new environments: Models for detecting lunar craters must
generalize adequately to various lunar topographies and lighting situations.
However, due to variations in lighting angles, surface textures, or illumination
circumstances, models developed on particular datasets may have trouble adjusting
to other surroundings. Fuzzy model resilience and adaptation in varied lunar
conditions therefore remain a difficulty.
e. Design complexity: Developing fuzzy systems requires subject knowledge and
skill to define proper membership functions and rules. The right variables, the
right number and form of fuzzy sets, and the right rule base must all be decided
upon when designing an efficient fuzzy model. This procedure can be difficult
because it necessitates a thorough comprehension of the problem domain and the
capacity to faithfully represent the underlying relationships.
f. Computational intensity: Learning in fuzzy systems can be computationally
demanding, particularly when working with large datasets or intricate systems.
Fuzzy models are frequently optimized with iterative methods that consume
considerable processing power. Training and tuning fuzzy models can therefore be
time-consuming, so it is important to balance model complexity against computing
efficiency.
g. Overfitting and underfitting: Like other ML models, fuzzy models are prone to
both overfitting and underfitting. Overfitting occurs when a model becomes too
complicated and performs well on training data but not on unseen data.
Underfitting, on the other hand, occurs when the model is too simple and fails to
identify the underlying trends in the data. To minimize these issues and obtain
the best performance, it is essential to strike a balance between the complexity
of the fuzzy model and the quantity of training data available.
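
As referenced in item (c), here is a minimal sketch of class weighting with
scikit-learn; the label vector is a hypothetical stand-in for a real
crater/background dataset.

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical label vector: 1 = crater sample, 0 = background.
# Craters are deliberately rare to mimic the imbalance described above.
labels = np.array([0] * 950 + [1] * 50)

# 'balanced' weights are inversely proportional to class frequency.
weights = compute_class_weight(class_weight="balanced",
                               classes=np.array([0, 1]), y=labels)
class_weight = {0: weights[0], 1: weights[1]}
print(class_weight)  # roughly {0: 0.53, 1: 10.0}

# Such a dictionary can be passed to many training APIs, for example
# Keras: model.fit(X, y, class_weight=class_weight)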

Overcoming these obstacles takes a mix of domain knowledge, thorough data
preprocessing, model building, and evaluation methodologies. The availability of
increasingly diverse and high-quality lunar data, along with ongoing improvements
in fuzzy-learning algorithms and approaches, can assist in overcoming these
obstacles and boost the precision and dependability of lunar crater detection
systems [15, 16].

12.3.3 Advancements in Fuzzy Learning


Fuzzy learning developments for lunar crater identification have concentrated on
enhancing the detection systems’ precision, effectiveness, and adaptability. Some
significant developments include the following:

Enhanced feature extraction: Fuzzy-learning techniques have been used to extract
more informative and discriminative features from lunar photographs. By combining
fuzzy sets and fuzzy rules, these methods can capture the small variations in
crater properties, such as size, form, texture, and context. As a result, feature
representation and discrimination are enhanced, allowing for more precise lunar
crater recognition.
Adaptive fuzzy systems: Adaptive fuzzy systems have been designed to
automatically modify the membership functions and rule base of fuzzy models based
on the features of the input data. These techniques enable the models to
generalize to new surroundings more effectively because they can adjust to
different lunar terrains, lighting conditions, and image quality. Adaptive fuzzy
algorithms keep the detection algorithm reliable and accurate across various
lunar missions and datasets [17, 18].
Integration of multiple data sources: For more thorough crater recognition,
fuzzy-learning algorithms have been expanded to combine several data sources,
including photographs, elevation maps, and spectral data. By fusing information
from several sources with fuzzy fusion techniques, the models can exploit the
complementary capabilities of each data type, resulting in better accuracy and
dependability. This integration allows the detection algorithms to locate lunar
craters using both optical and topographic cues.
Fuzzy clustering and segmentation: Lunar images have been divided into use-
ful parts, like craters and non-crater areas, using fuzzy clustering and seg-
mentation techniques. These methods take into account the ambiguity and
uncertainty in the data and use the fuzzy membership notion to categorize
pixels or image patches. The accurate localization and delineation of lunar
craters are made possible by fuzzy grouping and segmentation, which sim-
plifies tasks involving their subsequent detection and analysis.
Ensemble and hybrid approaches: Fuzzy learning combined with other ML
approaches has been proposed in ensemble and hybrid models to enhance
the overall performance of lunar crater recognition. These strategies aim
to improve the accuracy, robustness, and computational efficiency of the
detection systems by integrating the strengths of various algorithms, such
as fuzzy logic, NNs, or evolutionary algorithms. Due to their ability to extract
more varied and complementary information from the data, ensemble and hybrid
models also aid in addressing the issues of overfitting and underfitting.
Deep fuzzy learning: Deep fuzzy learning approaches have recently been developed
by integrating deep-learning techniques with fuzzy systems. These strategies
combine the interpretability and uncertainty management of fuzzy logic with the
representation learning capabilities of DNNs. Deep fuzzy learning models can
automatically identify crater existence from input data and extract hierarchical
characteristics from lunar photographs. Integrating these technologies makes it
possible to perform lunar crater identification tasks at the state of the art.
Transfer learning and pre-trained models: Transfer-learning techniques have been
applied to fuzzy learning for lunar crater detection. By using pre-trained models
from similar image recognition tasks, such as object detection or semantic
segmentation, an initial fuzzy model can benefit from the learned features and
knowledge. This method increases the effectiveness of the detection system and
helps to lessen the requirement for huge labeled datasets specific to lunar
crater detection.
Real-time advancement: Fuzzy learning advancements for lunar crater recognition
have also focused on real-time and onboard implementation methodologies. To
enable autonomous crater identification and analysis during lunar missions, these
strategies seek to place the detection systems directly on lunar rovers or
spacecraft. To meet the hardware constraints of space exploration, real-time and
onboard implementation requires efficient processing, memory management, and low
power consumption [19, 20].

Together, these developments improve the precision, effectiveness, adaptability,
handling of uncertainty, real-time implementation, integration of auxiliary data,
and incorporation of human knowledge in fuzzy learning for lunar crater
recognition. By taking these factors into account, fuzzy-learning algorithms
continue to advance and offer helpful insights into the lunar surface and its
geological features. Overall, fuzzy learning for lunar crater detection has
emphasized improvements in feature extraction, adaptability, integration of
various data sources, clustering and segmentation, ensemble and hybrid models,
and the incorporation of deep-learning methods. These developments are intended
to improve the precision, effectiveness, and robustness of lunar crater-detection
technologies, thereby advancing knowledge of the lunar surface and its geological
processes.
Based on the work "Landing site selection using Fuzzy Rule-Based Reasoning,"
some of the advances in fuzzy learning employed there are as follows:

Linguistic fuzzy rule-based reasoning: To assess the quality of landing sites and
the safety of the terrain, the work uses a linguistic fuzzy rule-based reasoning
engine. Fuzzy logic allows approximate reasoning and decision-making, which is
especially helpful when dealing with uncertainty and imprecision in sensor
measurements.
Integration of multiple onboard sensors: The system includes several onboard
sensors, including a scanning lidar, a descent camera, and a phased-array terrain
radar. These sensors provide measurements of the landscape that are used to judge
the safety of the terrain and identify potential risks during landing.

Spatial and temporal fuzzy reasoning: The reasoning process accounts for spatial
and temporal dependence. Fuzzy rule sets take into account landing scores from
nearby sites and at various points during the descent. The integration of spatial
and temporal information enhances the precision and adaptability of the landing
site selection process.
Simulation studies: The study provides simulation tests to demonstrate how
the fuzzy reasoning approach can be used to choose landing sites. These
simulations aid in assessing the efficacy and functionality of the fuzzy
learning system in various planetary environments.
The system integrates data from several onboard sensors using a fuzzy rule-based
methodology to evaluate terrain safety and identify reachable zones based on fuel
use. Additionally, the decision-making process is improved by taking scientific
returns into account, letting mission scientists choose locations of interest in
advance. The addition of spatial and temporal information makes the reasoning
process more robust. Fuzzy learning has been successfully used in simulation
tests to choose landing sites that optimally combine safety, fuel efficiency, and
scientific criteria. It is clear from a careful reading of the presented
explanation that the fuzzy rule-based approach has great utility when applied to
remote sensing data. The model's use, as reported in the chapter, illustrates how
well it can analyze and extract useful information from remote sensing data,
especially for lunar crater detection and landing site selection for autonomous
spacecraft descent. Further validation through exact simulations and tests will
increase the model's dependability and broaden its applicability in remote
sensing and related domains as technology develops and more data becomes
accessible.
Case studies for validating the fuzzy-learning techniques: This category of
change detection involves utilizing a combination of remote sensing tech-
nology, NN, and fuzzy modeling to evaluate change areas. Asokan and
Anitha [9] proposed a fuzzy clustering method for SAR image change
detection, effectively addressing noise removal and detailed preservation.
Tian and Gong introduced an edge-weighted fuzzy clustering method for
change detection [21]. Garcia-Jimenez et al. proposed a fuzzy logic sys-
tem for forest fire change detection [22]. Huang et al. (2019) introduced
a tensor and deep-learning-based change detection method, offering high
accuracy [23].

12.4 ENSEMBLE LEARNING


The ensemble learning method uses several ML algorithms to produce weak
predictions based on features extracted from several projections of the data.
These weak predictions are then aggregated using a variety of voting processes to
obtain performance superior to what any one algorithm can produce alone. However,
as the complexity of the model rises, the total error initially decreases until
it reaches a minimum, after which it grows quickly. Two components of the total
error, bias and variance, show contrasting trends: bias declines sharply before
stabilizing, while variance stays steady before growing noticeably. This
observation emphasizes that, when increasing model complexity to boost
performance, bias and variance must be carefully balanced, each playing a vital
role.
Ensemble learning also seeks to address the drawbacks that typical ML techniques
exhibit when working with complicated data, such as imbalanced, high-dimensional,
and noisy data. Traditional approaches frequently perform poorly because they are
unable to adequately capture the complex properties and underlying structures of
such data. In response, ensemble learning combines data fusion, data modeling,
and data mining into a single framework to build an effective knowledge discovery
and mining model. The first step in the ensemble learning process is the
extraction of a diverse range of features using different transformations. These
learned features are used as inputs for several learning algorithms, each of
which produces weak predicted outcomes. The main principle of ensemble learning
is to combine the outputs of these weak learners and to take advantage of the
valuable information they provide [24, 25]. This combination is frequently
accomplished using voting processes, in which each learner contributes its
respective forecast, and the final prediction is produced by adaptively
aggregating the individual predictions; a minimal voting sketch follows this
paragraph. The authors of this study review the common approaches to ensemble
learning and categorize them according to their various traits. They discuss the
developments in each strategy and point out problems that still need to be solved
[18]. The research also considers how ensemble learning might be combined with
other ML technologies such as deep learning and reinforcement learning. These
combinations provide opportunities to improve the functionality and performance
of ensemble learning models as well as new directions for future research. In
general, ensemble learning provides a viable framework for knowledge exploration
and mining in scenarios involving complicated data. By utilizing the advantages
of many learning algorithms and efficiently combining their outputs, ensemble
learning can enhance predictive performance and capture the complex patterns and
structures present in the data. The chapter gives a thorough overview of ensemble
learning, emphasizing its importance, the state of the art in research, and
probable prospects for developments in this area. The goal of ensemble learning
is to seamlessly combine various ML algorithms into a single framework, making
use of the complementary information from each approach to enhance the
performance of the overall model. Its inherent extensibility enables the blending
of many ML models for various applications, such as classification and
clustering, and its long research history provides flexibility when combining
different models for different ML applications.
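
As a concrete illustration of the voting processes described above, the following
minimal sketch aggregates three diverse learners by majority (hard) voting with
scikit-learn; the synthetic dataset is a stand-in, not crater data.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data as a placeholder for real features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Three diverse base learners; each alone may be a comparatively weak predictor.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(max_depth=3)),
                ("rf", RandomForestClassifier(n_estimators=50))],
    voting="hard")  # majority vote over the individual predictions

ensemble.fit(X_tr, y_tr)
print("Ensemble accuracy:", ensemble.score(X_te, y_te))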

12.4.1 Fusion of Deep Learning and Fuzzy Model


We now turn to the combination of deep learning with fuzzy systems to deal with
ambiguous and imprecise situations. Although deep learning has demonstrated
enormous promise for artificial intelligence, its conventional models have
difficulty dealing with uncertainty. Fuzzy systems, on the other hand, can handle
the ambiguous and uncertain concepts found in the real world. The integration of
fuzzy systems and deep learning is justified for three main reasons: (1)
effectively representing ambiguity and uncertainty in practical applications; (2)
handling boundary points and low-frequency points in image processing; and (3)
representing ambiguity and imprecision in natural language through linguistic
variables. To improve a variety of applications, including classification,
prediction, NLP, and autonomous control, deep learning and fuzzy systems are
combined. Fuzzy approaches come in a variety of forms and are frequently used in
conjunction with deep learning:

a. The first type of fuzzy set deals with uncertain information, duplicated
informative blocks, boundary point categorization, and overfitting. It has been
applied effectively to deep-learning models such as CNNs, RNNs, and deep-belief
networks. The first type of fuzzy set with deep belief network (DBN) has been
used in medical image fusion, high-order fuzzy time series forecasting, big data
categorization, decision support systems, facial expression identification, and
other applications.
b. The second type of fuzzy set offers more flexibility because its membership
functions have a larger footprint of uncertainty. It has been employed alongside
deep-learning models such as DBN, CNN, and RNN, in applications including image
fusion, facial expression detection, deep learning for financial prediction, and
uncertain boundary identification.
c. Linguistic variables are used to deal with ambiguous language-based
information. Deep-learning models in conjunction with linguistic variables have
been used in fields such as information retrieval, text-to-speech systems,
emotion analysis, sentiment analysis, financial trading, and remote sensing image
analysis.

The combination of fuzzy systems and deep learning is effective for a variety of
reasons:

1. Classification: The fusion improves classification accuracy by overcoming
uncertainty and obtaining deep-level, useful information. It has been used for
sentiment analysis, human action recognition, image categorization, and many
other tasks.
2. Prediction: Fuzzy systems and deep learning improve inference while managing
voluminous and imprecise assessment data. They have been used in time-series
forecasting, sentiment analysis, economic and financial forecasting, and
transportation forecasting.
3. Natural language processing: Fusion improves fuzzy identification and logical
judgment in natural language understanding and generation. It has been used in
clinical decision support systems, text retrieval, sentiment analysis,
text-to-SQL, and machine translation.
4. Automatic control: The fusion improves parameter fitting and feature
extraction for automatic control systems, and numerous control applications have
used it.

Overall, the combination of fuzzy systems and deep learning offers enhanced
performance and robustness in a variety of applications [26]. The flowchart in
Figure 12.1 illustrates the process of selecting features for the proposed fusion
model, which integrates fuzzy feature extraction and a deep-learning architecture
using DEM images. Additionally, the model incorporates an IoT component to
display the location of the crater on the lunar surface. By combining fuzzy
feature extraction, deep-learning architecture, and IoT technology, the proposed
fusion model presents a robust and efficient approach for accurate lunar crater
detection and location visualization (see Figure 12.1).

FIGURE 12.1 Flowchart of the proposed fusion model utilizing DEM image.
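
To give a small taste of the fusion idea in code, here is a minimal sketch,
assuming the scikit-fuzzy library, in which crater-likeness scores from a deep
model (faked here with random numbers) are mapped onto fuzzy linguistic terms;
this is an illustrative toy, not the chapter's actual pipeline.

import numpy as np
import skfuzzy as fuzz

# Hypothetical: a deep model has produced a "crater-likeness" score in [0, 1]
# for each candidate region (faked here with random numbers).
scores = np.random.rand(10)

# Fuzzy linguistic terms over the score range, as in a fuzzy rule base.
x = np.linspace(0, 1, 101)
low = fuzz.trimf(x, [0.0, 0.0, 0.5])
medium = fuzz.trimf(x, [0.25, 0.5, 0.75])
high = fuzz.trimf(x, [0.5, 1.0, 1.0])

for s in scores:
    # Degree to which this score belongs to each linguistic term.
    m = {"low": fuzz.interp_membership(x, low, s),
         "medium": fuzz.interp_membership(x, medium, s),
         "high": fuzz.interp_membership(x, high, s)}
    print(f"score={s:.2f} -> {max(m, key=m.get)}")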

12.4.2 Pseudocode of Fusion Model


A high-level description of a computer program or algorithm that employs a
condensed syntax resembling a programming language is known as pseudocode. It
aids in outlining the reasoning and procedures needed to put a given algorithm or
model into practice but is independent of any particular programming language.
Instead of being constrained by the restrictions and specifics of a particular
programming language, pseudocode allows programmers to concentrate on the
conceptual features of an algorithm and its logical flow. This flexibility lets
designers iterate and make changes easily before turning the pseudocode into
actual code, better equipping them to create and improve the algorithm. To
produce pseudocode, designers need a thorough understanding of the algorithm or
process they are attempting to represent. This information can come from direct
observation, existing documentation, or the problem definition and analysis
phases of the system development life cycle. By capturing the key actions and
phases in the pseudocode, designers can make sure that crucial elements such as
file handling, variable initialization, and conditional statements are explicitly
coded. Before any actual code is developed, pseudocode acts as a communication
tool between designers and developers, enabling them to work together and walk
through the structure and functioning of the algorithm. Overall, pseudocode is a
useful tool for software development because it eases algorithm design,
stimulates logical thinking, fosters teamwork, and increases process efficiency.
We begin by loading the remote sensing image dataset and importing the required
libraries. The images are then preprocessed to get them ready for feature
extraction. The ResNet architecture is then defined and the model is initialized
with pre-trained weights. By freezing the majority of the layers, we can reuse
the previously learned information while still fine-tuning the model for our
particular purpose. In the main loop, we cycle over each image in the dataset and
run the ResNet model on it to extract high-level features. Then, using these
extracted features, we apply fuzzy linguistic algorithms to identify and
categorize lunar features using linguistic terms. The outcomes are kept for later
review and analysis. Finally, to obtain insights and evaluate the effectiveness
of the feature extraction approach, we visualize the retrieved lunar
characteristics. The pseudocode is shown in Figure 12.2 for reference.

(Source code written in Python)

# Calculate the average of a list of numbers
def calculate_average(numbers):
    total = 0
    count = 0
    for num in numbers:
        total += num
        count += 1
    average = total / count
    return average

numbers_list = [5, 10, 15, 20, 25]
result = calculate_average(numbers_list)
print("The average is:", result)

(Pseudocode written using simple English)

# Calculate the average of a list of numbers
Function calculate_average(numbers):
    Total = 0
    Count = 0
    For each number in numbers:
        Add the number to the total
        Increment count by 1
    Average = divide the total by count
    Return average

Numbers_list = [5, 10, 15, 20, 25]
Result = calculate_average(numbers_list)
Display "The average is:", result

FIGURE 12.2 An example of Python source code and pseudocode written in English.

12.4.3 Challenges Involved in the Fusion Model

Researchers and practitioners need to overcome several issues that the deep
learning and fuzzy learning fusion paradigm presents. The following are some of
the main difficulties with this fusion strategy:

a. Data integration: Different kinds and forms of data are frequently needed for
deep learning and fuzzy learning. It can be difficult to combine these various
data sources and portray them coherently. Careful preprocessing, feature
extraction, and data transformation procedures are needed to make sure that the
two methodologies work together.
b. Model integration: Deep learning and fuzzy learning use different modeling
approaches and architectures. Successfully integrating these models to exploit
the complementary characteristics of each strategy is a difficult challenge. How
to effectively combine deep-learning NNs with fuzzy systems, or how to embed
fuzzy logic into deep-learning frameworks, is an active research question.
c. Interpretability: Fuzzy systems are renowned for their transparent rule-based
design, while deep-learning models are frequently criticized for their lack of
interpretability and explainability. Combining the two methods can produce models
that are more challenging to understand and explain. Developing techniques to
improve the interpretability and explainability of fusion models remains
difficult.
d. Training and optimization: Large-scale training datasets and sophisticated
optimization methods like backpropagation and gradient descent are fre-
quently used in deep-learning models. On the other hand, fuzzy systems
frequently employ heuristic-based learning methods. To combine these
methods, it is necessary to overcome the difficulties associated with param-
eter training, optimization, and tuning.
e. Computational complexity: Both fuzzy learning and deep learning are
computationally demanding, and combining them can increase computational
complexity considerably. Efficient algorithms and hardware optimizations are
required to manage the increased computational demands and guarantee real-time or
near-real-time performance.
f. Dataset bias and generalization: While fuzzy systems may have trouble
generalizing to new data, deep-learning algorithms are vulnerable to data-
set bias. Combining the two strategies may make these problems worse.
When creating fusion models, it is essential to carefully take dataset bias,
data distribution, and generalization capabilities into account.

g. Scalability and adaptability: Deep-learning models are renowned for their
scalability and capacity to handle sizable and intricate datasets, while the
versatility of fuzzy systems has led to their usage in a variety of fields.
Ensuring that fusion models retain these qualities of scalability and
adaptability is a problem that still needs to be solved.
h. Limited integration with advanced techniques: It is still difficult to com-
bine fuzzy systems with cutting-edge deep-learning methods like attention
mechanisms, transformers, and NLP algorithms. To merge advanced fuzzy
techniques with cutting-edge deep-learning algorithms, more investigation
and research are required.
i. Lack of standardization: At present, there is no established methodology or
framework for combining deep learning and fuzzy systems. Comparing and
reproducing results across studies can be difficult because different researchers
may use different procedures and methodologies.

Continuous research and development into the merging of deep learning and fuzzy
learning are necessary to meet these challenges. By addressing them, researchers
can use the advantages of both methodologies to build more reliable and effective
models for a variety of applications [27].

12.4.4 Advancements in Fusion Model


Fusion model advancements refer to the developments, enhancements, and
innovations made possible by fusing various procedures or techniques to produce
models that are more accurate and dependable. In the field of deep learning and
fuzzy systems, such advancements concern the creation and improvement of models
that integrate the benefits of both methodologies. The combination of fuzzy
systems with deep learning yields several significant improvements. First, these
fusion models have greatly enhanced image segmentation, classification, and
interpretation in the context of image processing. More precise and reliable
results can be obtained by combining the strong feature extraction skills of deep
learning with the capacity of fuzzy systems to deal with uncertainties and fuzzy
features in images. This development is especially pertinent when images contain
nuanced and unclear information. Second, fusion models have produced encouraging
outcomes in decision-making procedures. Deep-learning approaches aid data fusion,
assessment, and optimization, while fuzzy systems determine membership functions
and fuzzy rules. Combining these approaches yields decision-making models that
are more flexible and trustworthy, and it also enables the interpretation of
qualitative and ambiguous data, which is essential in real-world decision-making
situations. Fusion model improvements also address issues of fuzziness and
ambiguity in image processing.
By adding fuzzy systems to deep-learning algorithms, fusion models can handle
uncertain and ambiguous information well, which improves feature extraction,
segmentation, and interpretation of images. Fusion model improvements represent
the ongoing development and refinement of methods that integrate deep learning
with fuzzy systems. These developments are meant to alleviate problems and
increase the applicability and accuracy of both approaches while also enhancing
their interpretability. Future developments and advancements in this area are
very likely with further research. The developments in the merging of deep
learning and fuzzy learning are briefly summarized below:

a. Improved image processing: Deep learning and fuzzy systems working together
have improved image segmentation and classification. By integrating the
advantages of both strategies, it is possible to handle uncertainties and
complicated information in images while producing more reliable segmentation
results. For image classification tasks, fuzzy classifiers such as the TSK
(Takagi-Sugeno-Kang) fuzzy classifier and CNNs with fuzzy c-means clustering have
been used successfully.
b. Enhanced decision-making: The data fusion, assessment, and optimization stages
of decision-making have all benefited from the integration of deep learning and
fuzzy approaches. Fuzzy systems can be built by learning membership functions and
fuzzy rules through deep-learning techniques, and deep-learning models can also
be used to derive general weights for attributes or schemes in fuzzy
decision-making processes.
c. Application in various fields: Numerous fields, including computer science,
NLP, healthcare, smart energy management systems, and the industrial sector, have
found uses for the combination of deep learning and fuzzy systems. By
incorporating fuzzy systems, fuzzy qualities in typical applications can be
efficiently characterized, uncertain or imprecise information can be communicated
using linguistic variables, and the interpretability of models can be improved.
d. Handling fuzziness and ambiguity: Combining deep learning with fuzzy systems
addresses the problems caused by ambiguity and fuzziness in image processing.
Deep-learning models offer strong nonlinear mapping skills, while fuzzy
techniques help handle the uncertain and fuzzy aspects of images. This
combination makes improved feature extraction, segmentation, and image
interpretation possible.
e. Advancements in group decision-making: Large-scale group decision-mak-
ing and consensus-building procedures have advanced as a result of the
integration of deep learning and fuzzy systems. In situations like block-
chain technology, deep-learning algorithms help build self-adaptive deci-
sion-making models and consensus-achieving methods.
f. Interpretable models: Compared with conventional deep-learning models, fusion
models that embed fuzzy systems in deep-learning algorithms have an advantage in
interpretability. The membership functions of fuzzy systems assign linguistic
term labels to feature points, which can then be further analyzed using guided
backpropagation and heat maps.

These developments show the potential of merging deep learning and fuzzy systems
to solve problems, enhance accuracy and interpretability, and broaden the scope
of both approaches' possible applications. Further study and development will
likely bring future improvements to this fusion technique.

12.5 ANNOTATION OF DEM IMAGE AND BASICS OF IoT


DEM image labeling relies heavily on annotation tools. To guarantee precise and
effective labeling when annotating DEM images, annotation tools must have several
essential features:

a. Support for multiple annotation forms: To accommodate the variety of DEM
images, annotation tools should offer a variety of annotation forms, including
the capability to add polygons, rectangles, circles, lines, and points as
annotations. These annotation forms enable the accurate tagging of particular
elements and objects in DEM images.
b. Compatibility with image formats: Annotation tools should be able to load
binary images in commonly used DEM image formats, such as JPG and PNG. This
ensures that the tools can handle the input data without further conversion or
preparation.
c. Integration with data encoding: Annotation tools should be able to load JSON
data that contains encoded image strings. This makes it possible to seamlessly
incorporate existing data sources or encoded data related to the DEM images, and
it enables pertinent metadata to be used and retained throughout the annotation
process.
d. Robustness and reliability: When working with large-scale datasets, anno-
tation tools should be strong, stable, and dependable. They ought to be able
to manage big image sizes and high-resolution DEM images without sac-
rificing usefulness or performance. This guarantees effective annotation
workflows, even for large or complicated datasets.
e. User-friendly interface: An annotation tool’s user-friendly interface should
make the annotation process easier. The tools should have dynamic and
intuitive features that make it simple for annotators to browse through the
images, add annotations, and edit them as needed. The learning curve for
annotators is decreased with an intuitive interface, which also boosts over-
all productivity.
f. Flexibility for customization: Annotation tools should ideally provide
customization options to address particular needs or domain-specific aspects of
DEM images. This includes the capability to create unique annotation types,
modify the tool's behavior to meet the needs of a given project, and incorporate
extra features or algorithms made specifically for DEM data.
g. Efficient time-saving features: Especially for huge datasets, annotation
tools should have capabilities that speed up the annotation process. This
could include initial annotations that are automated or semi-automatic, sug-
gestions based on ML algorithms, or real-time feedback to improve annota-
tion efficiency and cut down on manual labor.

For DEM image labeling, annotation tools should support a variety of annotation
forms, be compatible with a range of image formats and data encodings, execute
consistently well, have an intuitive user interface, allow for customization, and
offer time-saving features. Together, these characteristics support precise,
effective, and simplified annotation operations for DEM images. Annotating a high
spatial resolution dataset for the preprocessing stage typically involves
assigning labels or annotations to specific features or regions of interest
within the dataset. When it comes to annotating DEM images, there are a few
different approaches to consider.

1. Manual annotation: One option is to manually annotate the DEM images by
identifying and labeling the features or regions of interest. This can be done
using image annotation tools or software, where you can draw polygons or outlines
around the desired features and assign them specific labels. Manual annotation
allows for precise labeling but can be time-consuming and may require expertise
in interpreting the DEM data.
2. Binary mask creation: Another approach is to create binary masks for fea-
ture extraction filters. In this case, instead of manually annotating the entire
dataset, you can create binary masks that represent the areas of interest
or specific features you want to extract. These binary masks act as filters,
allowing you to focus on those particular regions during the feature extrac-
tion process. Binary masks can be generated using techniques like thresh-
olding, segmentation, or edge detection algorithms.
3. Image augmentation: Image augmentation techniques can also be employed to
generate additional annotated data for training purposes. This involves applying
various transformations to the original DEM images, such as rotation, scaling,
translation, or adding noise. By augmenting the dataset with these variations,
you can increase its diversity and improve the robustness and generalization
ability of your models. Augmentation can be particularly useful when dealing with
limited annotated data; a minimal sketch follows this list.
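
As referenced in item 3, here is a minimal augmentation sketch using the Keras
ImageDataGenerator; the random array stands in for real DEM patches.

import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical batch of DEM-derived patches (values in [0, 1]).
images = np.random.rand(4, 128, 128, 1)

# Rotation, scaling (zoom), translation (shifts), and flips, matching the
# transformations described above.
augmenter = ImageDataGenerator(rotation_range=30,
                               zoom_range=0.2,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               horizontal_flip=True)

# Each call yields a newly transformed batch.
batch = next(augmenter.flow(images, batch_size=4, seed=1))
print("Augmented batch shape:", batch.shape)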

The choice between manual annotation, binary mask creation, or image augmen-
tation depends on the specific requirements of your application, the complexity of
the features you want to extract, and the availability of labeled data. Manual annota-
tion offers the highest level of accuracy but may be time-consuming and labor-inten-
sive. Binary masks can provide a more efficient way to focus on specific features,
while image augmentation can help increase dataset size and model robustness. The
binary mask techniques are described below for better understanding:

a. Thresholding: Thresholding is a simple and commonly used technique for image
segmentation. It involves converting a grayscale or color image into a binary
image by assigning pixels to one of two classes based on a threshold value. The
threshold value determines the intensity level at which the image is divided into
foreground (object of interest) and background. There are various types of
thresholding methods, such as global thresholding, adaptive thresholding, and
Otsu's thresholding. Global thresholding uses a fixed threshold value that
applies to the entire image, while adaptive thresholding adjusts the threshold
value locally, taking into account the local pixel neighborhood. Otsu's
thresholding automatically calculates the optimal threshold value based on the
image's histogram. In the context of annotating a high spatial resolution
dataset, thresholding can be used to create binary masks by segmenting the image
into regions of interest (foreground) and non-interest regions (background) based
on intensity or other image attributes; a short OpenCV sketch follows this list.
b. Segmentation: Segmentation refers to the process of partitioning an image
into multiple meaningful and homogeneous regions or objects. It aims
to group together pixels or regions that have similar characteristics, such
as color, intensity, texture, or motion. Image segmentation is crucial for
annotating datasets, as it enables the identification and separation of spe-
cific features or regions for further analysis or labeling. There are several
segmentation techniques available, including region-based segmentation,
edge-based segmentation, and clustering-based segmentation. Region-
based segmentation identifies regions based on similarity criteria, such as
color or texture, and forms connected regions within the image. Edge-based
segmentation focuses on detecting boundaries or edges between different
regions based on gradients or other edge-detection algorithms. Clustering-
based segmentation groups pixels into clusters based on similarities in fea-
ture space. In the context of annotating a high spatial resolution dataset,
segmentation algorithms can be utilized to identify and separate specific
features of interest, such as lunar craters or other geological formations.
This enables accurate annotation and analysis of the desired regions.
c. Edge detection: Edge-detection algorithms aim to locate and highlight
boundaries or edges within an image. Edges represent significant changes
in intensity or color and are useful for identifying object boundaries or
boundaries between different regions in an image. Edge detection plays a
crucial role in feature extraction and image analysis tasks. Common edge
detection techniques include gradient-based methods, such as the Sobel
operator or the Canny edge detector, and Laplacian-based methods. These
algorithms compute the gradient or the second derivative of the image
to identify areas of rapid intensity change, which typically correspond to
edges. In the context of annotating a high spatial resolution dataset, edge-
detection algorithms can be employed to locate and highlight the edges or
boundaries of specific features, such as lunar craters. This can facilitate
the creation of binary masks or provide guidance for manual annotation by
identifying the regions of interest more precisely.
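
As referenced in item (a), the following minimal sketch, assuming the OpenCV
library and a hypothetical file name crater_dem.png, shows how Otsu thresholding,
adaptive thresholding, and Canny edge detection can produce candidate binary
masks:

import cv2

# Load a DEM-derived image as 8-bit grayscale (hypothetical file name).
img = cv2.imread("crater_dem.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks the global threshold automatically from the histogram.
_, otsu_mask = cv2.threshold(img, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Adaptive thresholding adjusts the threshold per local pixel neighborhood.
adaptive_mask = cv2.adaptiveThreshold(img, 255,
                                      cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                      cv2.THRESH_BINARY, 31, 2)

# Canny edge detection highlights candidate crater rims.
edges = cv2.Canny(img, threshold1=50, threshold2=150)

cv2.imwrite("otsu_mask.png", otsu_mask)
cv2.imwrite("adaptive_mask.png", adaptive_mask)
cv2.imwrite("edges.png", edges)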

Overall, thresholding, segmentation, and edge-detection algorithms are
fundamental techniques in image processing and annotation. They enable the
partitioning of images into meaningful regions, the creation of binary masks, and
the detection of boundaries or edges. These algorithms play a crucial role in
extracting relevant information from high spatial resolution datasets, aiding
annotation, analysis, and further processing tasks.

12.5.1 Types of Annotation Tools


The annotation of DEM and ORTHO (orthorectified) images created from TMC-2
photographs is possible using a variety of tools. These tools let users mark and
name various features and objects of interest in the images, aiding the
annotation process. Here are a few examples:

a. QGIS: The popular geographic information system (GIS) program QGIS offers
several tools for working with spatial data. It can classify and annotate
features in DEM and ORTHO images. For annotating and displaying geographic data,
QGIS offers a user-friendly interface and accepts a variety of file types.
b. ArcGIS: Esri's ArcGIS is a comprehensive GIS program. It provides a variety of
tools for marking up and examining DEM and ORTHO images. Labeling, symbolization,
and annotation editing are among the sophisticated annotation features offered by
ArcGIS. Additionally, it enables the analysis and 3D visualization of elevation
data.
c. ERDAS IMAGINE: ERDAS IMAGINE is a remote sensing software package with
facilities for handling and analyzing geographical data. It offers annotation
capabilities for marking and labeling features in DEM and ORTHO images, including
several annotation methods such as text, line, and polygon annotation.
d. Global Mapper: The flexible GIS program Global Mapper supports annotating DEM
and ORTHO images. It enables users to mark particular aspects or areas of
interest by adding text labels, symbols, lines, and polygons. Global Mapper
provides a user-friendly interface and a variety of annotation choices for quick
and accurate labeling.
e. ENVI: ENVI is a software package for analyzing and visualizing remote sensing
data. It has annotation tools for marking features in DEM and ORTHO images,
offering interactive capabilities such as point, line, polygon, and text
annotation.
f. PDS4 View: A PDS4 viewer tool is used to view, analyze, and annotate remote
sensing images in the PDS4 (Planetary Data System version 4) data format. PDS4 is
a standardized data format used for sharing and archiving planetary scientific
data, including remote sensing data obtained by space missions.

These are only a few examples of tools for annotating DEM and ORTHO images
created from TMC-2 photographs. The tool selected will depend on the precise
criteria, compatibility with data formats, and degree of capability required for
the annotation activity.

12.5.2 Enhancing Spatial Data with Annotations Tools


Annotation tools are software programs that improve and enrich data by enabling
users to annotate images or other media with graphics or text. These instruments
are widely utilized across many industries, including GIS, where they are
essential for annotating and analyzing geographical data such as DEM and ORTHO
images. To open a DEM image with an annotation tool, the software must be able to
import and display raster-based DEM files. It should handle widely used file
types including GeoTIFF, ASCII Grid, USGS DEM (.dem), and other formats
specialized for DEM data. Users should be able to load the DEM file into the
program with accurate georeferencing and elevation data visualization. A variety
of tools are available for opening DEM images, each with a unique set of
characteristics and functionalities. For instance, users can convert the DEM file
to a raster format using a conversion tool provided by ArcMap Desktop 10.8.
ArcGIS Pro 3.0 offers options to input data and choose the DEM file directly. The
"Open Data File" option in Global Mapper 22 enables users to open the DEM file,
and QGIS 3.16 provides capabilities to create a raster layer by selecting the
TIFF file. Similarly, annotation tools should support popular formats such as
TIFF when it comes to opening ORTHO images. The tools should reliably open and
display ORTHO images while preserving georeferencing data and guaranteeing proper
alignment with other spatial data layers. It is important to remember that the
selection of an annotation tool depends on the preferences, needs, and particular
activities of the user. ArcGIS, QGIS, Global Mapper, and numerous other GIS
software programs are some of the frequently used tools for opening and
annotating DEM and ORTHO images. In addition to opening and annotating images,
these tools allow users to execute intricate geospatial analysis, produce
visualizations, and create maps with annotated data.
Table 12.1 briefly explains how to open DEM and ORTHO images using various
annotation tools; each tool offers a different set of features and processes for
handling these spatial datasets. In conclusion, annotation tools are essential
for annotating DEM and ORTHO images because they allow users to see and interpret
these spatial datasets. The tools should support the file types frequently used
with DEM and ORTHO images, give precise georeferencing, and offer user-friendly
annotation and analysis options. Depending on personal preferences and the
precise capabilities needed for the task at hand, different annotation tools may
be selected.

TABLE 12.1
Annotation Tool Required to Open DEM and ORTHO Image

Annotation Tool      | DEM Image                                                | ORTHO Image
Arc Map Desktop 10.8 | Go to Conversion Tools > To Raster > DEM to Raster, select the DEM file | Use the Copy Raster tool, open the ORTHO image file
ArcGIS Pro 3.0       | Use the Add Data option, select the TIFF file            | Go to the Map ribbon, select Add Data, choose the TIFF file
Global Mapper 22     | File > Open Data File, choose the DEM file               | File > Open Data File, select the ORTHO image file
QGIS 3.16            | Layer > Add Layer > Add Raster Layer, select the TIFF file | Layer > Add Layer > Add Raster Layer, select the ORTHO image file
CVAT                 | File > rasterio > Access properties > read the data      | File > PIL > Access properties > read the data
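
As a programmatic counterpart to the tools in Table 12.1, here is a minimal
sketch, assuming the rasterio and Pillow libraries and hypothetical file names,
of reading a DEM GeoTIFF and an ORTHO image:

import rasterio
from PIL import Image

# Open a DEM GeoTIFF (hypothetical file name) and read its properties.
with rasterio.open("tmc2_dem.tif") as dem:
    print("CRS:", dem.crs)        # georeferencing information
    print("Bounds:", dem.bounds)  # spatial extent
    elevation = dem.read(1)       # first band as a NumPy array
    print("Elevation grid shape:", elevation.shape)

# Open an ORTHO image (hypothetical file name) with PIL.
ortho = Image.open("tmc2_ortho.tif")
print("ORTHO size:", ortho.size)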

12.6 BASICS OF TMC-2


The instruments aboard Chandrayaan-2, India's second lunar exploration mission,
produced many PDS4 data packages. The PDS4 data products are classified into
three categories: raw, calibrated, and derived. The first instrument, the Terrain
Mapping Camera (TMC-2), produces data products in all three categories, as does
the second instrument, the Imaging Infra-Red Spectrometer (IIRS). In contrast,
the Orbiter High-Resolution Camera
(OHRC) and Dual Frequency Synthetic Aperture Radar (DFSAR) only produce raw
and calibrated data. The Solar X-ray Monitor (XSM) instrument, on the other hand,
generates raw and calibrated data but no derived data, whereas the Chandrayaan-2
Large Area Soft X-Ray Spectrometer (CLASS) instrument only produces raw data
products. Finally, the Chandrayaan-2’s Atmospheric Compositional Explorer 2
(CHACE-2) instrument only gives raw data; it does not produce calibrated or derived
data outputs. For researchers and scientists researching the lunar environment and
performing investigations based on the data gathered by the equipment aboard
Chandrayaan-2, these multiple PDS4 data sets offer various levels of processing and
analysis. TMC-2, the Terrain Mapping Camera-2, is an advanced instrument employed
onboard the Chandrayaan-2 mission. It serves as a compact version of the original
Terrain Mapping Camera used during the Chandrayaan-1 mission. The primary
objective of TMC-2 is to meticulously map the lunar surface using the panchro-
matic spectral band, which ranges from 0.5 to 0.8 microns. This specific spectral
range allows TMC-2 to capture detailed images of the Moon’s topography. Equipped
with high spatial resolution capabilities, TMC-2 can discern surface features with
remarkable clarity, down to a resolution of 5 meters. This means it can capture even
subtle variations and minute details of the lunar landscape. Additionally, TMC-2
covers a wide swath of 20 kilometers from the 100-kilometer lunar polar orbit, pro-
viding extensive coverage of the Moon’s surface. One significant outcome of TMC-2’s
data is the creation of 3D maps of the lunar surface. The high-resolution images
captured by TMC-2 enable scientists to generate detailed and accurate topographic
models, which can be transformed into 3D representations of the Moon’s terrain.
These 3D maps contribute to a comprehensive understanding of the lunar landscape,
aiding in mission planning, site selection for future lunar missions, and geological
studies. TMC-2 plays a pivotal role in Chandrayaan-2’s mission objectives by pro-
viding crucial data for lunar exploration and scientific research. Its ability to capture
high-resolution images and map the lunar surface aids in unraveling the Moon’s
geological history and assists in creating accurate 3D models, ultimately expanding
our knowledge of Earth’s nearest celestial neighbor.

12.6.1 Pseudocode to Implement Model on TMC-2


Obtained DEM Image Data
We are trying to implement a model based on the Faster R-CNN architecture with a
ResNet-50 backbone pre-trained on ImageNet to classify craters and background in
the TMC-2 orbiter payload data, specifically the obtained DEM and ORTHO images.
We also incorporate fuzzy extraction rules to enhance the classification process.
To begin, we import essential libraries and modules such as TensorFlow, Keras,
NumPy, and
scikit-fuzzy. Then, we load and preprocess the image data, performing necessary
steps like reading the dataset, splitting it into training and testing sets, and apply-
ing data augmentation techniques like rotation, scaling, and flipping to the training
data. Next, we build the ResNet50 model with ImageNet pre-trained weights as the
backbone. This architecture is known for its effectiveness in object detection
and classification tasks. We customize the top layers of the model to suit our
specific classification task and compile it with appropriate loss functions,
optimizers, and metrics. We then train the model using the training image data,
specifying the number of epochs and batch size. During training, we monitor
progress and evaluate the model's performance on the validation set. Here is the
pseudocode (a minimal Keras sketch of steps 1-4 follows the list):

1. Import the necessary libraries and modules (e.g., TensorFlow, Keras, NumPy,
and scikit-fuzzy)
2. Load and preprocess the image data:
• Read and preprocess the image dataset
• Split the dataset into training and testing sets
• Apply any necessary data augmentation techniques (e.g., rotation, scal-
ing, and flipping) to the training image data
3. Build the ResNet50 model:
• Instantiate the ResNet50 model:
• Customize the top layers of the model based on your specific task
• Compile the model by specifying the loss function, optimizer, and
metrics
4. Train the model:
• Fit the model to the training image data
• Specify the number of epochs and batch size
• Monitor the training progress and evaluate the model’s performance on
the validation set
5. Evaluate the model:
• Evaluate the trained model on the testing set
• Classification based on the crater and background on the lunar surface
• Calculate and display relevant evaluation metrics (e.g., accuracy, preci-
sion, and recall)
6. Define fuzzy extraction rules:
• Identify the fuzzy variables and their linguistic terms based on the
problem domain
• Define membership functions for each linguistic term
• Implement fuzzy extraction rules using the fuzzy logic framework
7. Apply fuzzy extraction rules to the model’s predictions:
• For each prediction made by the model, apply the fuzzy extraction rules
to determine the final output.
• Use fuzzy logic operators (e.g., AND, OR) to combine the fuzzy outputs
and make a final decision

8. Test the model with new data:
• Preprocess the new data in the same way as the training image data
• Use the trained model to make predictions on the new image data
• Apply the fuzzy extraction rules to the model’s predictions to obtain
the final outputs
9. Evaluate the performance of the fuzzy extraction model:
• Calculate and compare the performance metrics of the fuzzy extraction
model with the original deep-learning model.
10. Fine-tune the model:
• Analyze the results and fine-tune the model based on the insights
gained.
• Adjust the fuzzy extraction rules or the deep-learning model architec-
ture as needed.
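
To make steps 1–5 concrete, the following minimal Python sketch illustrates one
way the classification stage could be set up with TensorFlow/Keras. It covers
only the ResNet50 classification head (the Faster R-CNN detection stage and the
fuzzy rules are omitted here), and the tile directory layout is a hypothetical
assumption, not the project's actual file structure:

    # Minimal sketch of steps 1-4: ResNet50 (ImageNet weights) fine-tuned to
    # separate crater tiles from background tiles. Paths are hypothetical.
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import ResNet50

    IMG_SIZE = (224, 224)

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "tmc2_tiles/train", image_size=IMG_SIZE, batch_size=32)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "tmc2_tiles/val", image_size=IMG_SIZE, batch_size=32)

    # Step 2: data augmentation (rotation, scaling, flipping) for training data
    augment = tf.keras.Sequential([
        layers.RandomFlip("horizontal_and_vertical"),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ])

    # Step 3: ResNet50 backbone with ImageNet weights, customized top layer
    base = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                    input_shape=IMG_SIZE + (3,))
    base.trainable = False  # freeze the backbone; fine-tune later if needed

    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = augment(inputs)
    x = tf.keras.applications.resnet50.preprocess_input(x)
    x = base(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # crater vs. background
    model = models.Model(inputs, outputs)

    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])

    # Steps 4-5: train, monitoring performance on the validation set
    model.fit(train_ds, validation_data=val_ds, epochs=10)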

Once the model is trained, we will evaluate its performance on the testing set,
focusing specifically on classifying craters and background on the lunar surface.
Afterwards, we will calculate relevant evaluation metrics such as accuracy,
precision, and recall to assess the model's effectiveness and identify the best
candidate model for automatic lunar crater detection. In addition to the deep-learning model,
we will incorporate fuzzy extraction rules into the classification process. We identify
fuzzy variables and linguistic terms based on the problem domain and define mem-
bership functions for each term. Using the fuzzy logic framework, we can implement
fuzzy extraction rules to refine the model’s predictions. To test the model with new
data, we will have to preprocess the data similarly to the training set and use the
trained model to make predictions. Then we will be able to apply the fuzzy
extraction rules to the model's predictions and obtain final outputs with enhanced
classification results. We will evaluate the performance of the fuzzy extraction
model by comparing its metrics with those of the original deep-learning model.
Based on the results and insights gained, we will fine-tune the model, making
adjustments to the fuzzy extraction rules or the deep-learning model architecture as needed.
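
As a hedged illustration of steps 6 and 7, the sketch below expresses fuzzy
extraction rules with scikit-fuzzy's control API. The chosen variables (network
confidence and a geometric circularity score), their membership functions, and
the sample values are illustrative assumptions, not the project's actual rule base:

    # Illustrative fuzzy refinement of the classifier's output (steps 6-7).
    # Variable names and membership ranges are assumed for demonstration.
    import numpy as np
    import skfuzzy as fuzz
    from skfuzzy import control as ctrl

    confidence = ctrl.Antecedent(np.linspace(0, 1, 101), "confidence")
    circularity = ctrl.Antecedent(np.linspace(0, 1, 101), "circularity")
    crater = ctrl.Consequent(np.linspace(0, 1, 101), "crater")

    # Triangular membership functions for each linguistic term
    for var in (confidence, circularity, crater):
        var["low"] = fuzz.trimf(var.universe, [0.0, 0.0, 0.5])
        var["medium"] = fuzz.trimf(var.universe, [0.2, 0.5, 0.8])
        var["high"] = fuzz.trimf(var.universe, [0.5, 1.0, 1.0])

    # Fuzzy extraction rules combined with AND (&) and OR (|) operators
    rules = [
        ctrl.Rule(confidence["high"] & circularity["high"], crater["high"]),
        ctrl.Rule(confidence["medium"] & circularity["high"], crater["medium"]),
        ctrl.Rule(confidence["low"] | circularity["low"], crater["low"]),
    ]

    system = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
    system.input["confidence"] = 0.82   # classifier score for one tile
    system.input["circularity"] = 0.74  # shape feature from the DEM tile
    system.compute()
    print(system.output["crater"])      # defuzzified crater likelihood
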
Overall, this implementation of the Faster R-CNN model with an ImageNet-
pretrained ResNet-50 backbone, combined with fuzzy extraction rules, allows for
accurate classification of craters and background in the TMC-2 orbiter payload
data. This approach enhances the analysis of DEM and ORTHO images and contributes
to a better understanding of lunar surface features; it is currently under
implementation and will play a vital role in categorizing uncatalogued lunar crater datasets.

12.7 IoT-BASED FUSION MODEL


The chapter also covers the use of a Raspberry Pi with light-emitting diode (LED)
sensors linked for real-time data preprocessing. With the use of this ground-break-
ing technique, lunar crater images captured by the TMC-2 sensor can be used to
determine latitude and longitude. The outcomes show how well this method works
for detecting lunar craters by fusing deep learning with fuzzy systems and incorpo-
rating real-time sensor data. These discoveries make a substantial contribution to
our knowledge of the Moon’s geological past and offer insightful information about

planetary surfaces. To fabricate the Raspberry Pi setup with LED sensors for
real-time data preprocessing, the following components and steps are typically needed:

a. Raspberry Pi: Start by obtaining a Raspberry Pi board, which is the central
component of the setup. There are different models available, each with
varying specifications. You would need to choose a model that suits your
requirements.
b. LED sensors: Acquire LED sensors capable of capturing the necessary data
for your specific application. These sensors may need to be sensitive to
specific wavelengths of light to capture the desired information from the
lunar crater images.
c. Circuit connections: Connect the LED sensors to the GPIO (general-
purpose input/output) pins on the Raspberry Pi. The GPIO pins allow the
Raspberry Pi to interact with external components. You would need to refer
to the specifications and documentation of both the Raspberry Pi and the
LED sensors to determine the appropriate pin connections and wiring.
d. Programming: Write or adapt the necessary software programs to interface
with the LED sensors and process the captured data. The Raspberry Pi sup-
ports various programming languages, such as Python, which is commonly
used for its ease of use and extensive library support.
e. Data fusion and processing: Develop algorithms or utilize existing meth-
ods to fuse the captured data from the LED sensors with deep-learning and
fuzzy systems. This process involves preprocessing the data, training deep-
learning models, and incorporating fuzzy logic techniques to extract mean-
ingful information about the lunar craters, such as latitude and longitude.

The purpose of combining Raspberry Pi with LED sensors here is to enable real-
time data preprocessing and analysis. By capturing and processing data from the
LED sensors linked to the Raspberry Pi, it becomes possible to detect lunar craters
accurately. This fusion of technologies allows for a more advanced and automated
approach to studying the Moon’s geological past and gaining insights into planetary
surfaces.
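
As a minimal illustration of items (c) and (d), the Python sketch below polls a
digital light-sensor module wired to a GPIO pin and timestamps each sample before
handing it to the preprocessing stage. The pin number, polling rate, and sensor
module are assumptions for demonstration only:

    # Minimal Raspberry Pi sketch: poll a digital light-sensor module on GPIO 17
    # (BCM numbering; the pin choice is an assumption) and timestamp each reading.
    import time
    import RPi.GPIO as GPIO

    SENSOR_PIN = 17  # adjust to the actual wiring

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SENSOR_PIN, GPIO.IN)

    try:
        while True:
            reading = GPIO.input(SENSOR_PIN)             # 0 or 1 from the module
            sample = {"t": time.time(), "value": reading}
            print(sample)  # here the sample would go to the fusion pipeline
            time.sleep(0.1)                              # 10 Hz polling
    finally:
        GPIO.cleanup()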

12.8 CONCLUSION AND FUTURE PERSPECTIVE


In conclusion, the research presented in this chapter demonstrates the effectiveness
of ensemble learning strategies, fuzzy systems, and real-time data preprocessing
using a Raspberry Pi with LED sensors for the detection of lunar craters. By combin-
ing the strengths of various base classifiers and incorporating fuzzy rule-based rea-
soning, the system achieves improved accuracy and robustness in identifying lunar
craters from TMC-2 acquired DEM images. The fusion of deep learning with fuzzy
systems addresses challenges such as data noise, resource limitations, and varying
lighting conditions, which can hamper the performance of deep learning models
alone. The results demonstrate the effectiveness of ensemble learning techniques in
improving the accuracy and robustness of lunar crater detection. By leveraging mul-
tiple base classifiers, including random forests, support vector machines, and NNs,

the ensemble learning framework collectively recognizes lunar craters, enabling
more reliable identification for lunar exploration missions. This approach provides
a significant contribution to the field by enhancing the quality of the classification
task and facilitating the selection of appropriate landing sites. The incorporation
of fuzzy feature extraction methods further enhances the performance of DLCs.
This research demonstrates the potential of these techniques for on-the-fly lunar
exploration and offers valuable insights into the Moon’s geological past and plan-
etary surfaces. In terms of future works, further advancements in sensor technology
can lead to higher resolution imaging and enhanced spectral sensitivity, enabling
more detailed and comprehensive data capture. Exploring advanced deep-learning
architectures, such as CNNs or RNNs, tailored specifically to lunar crater identifi-
cation, can further improve the accuracy of detection. Real-time decision-making
frameworks can be developed to utilize the detected lunar craters for autonomous
lunar exploration missions, incorporating trajectory planning algorithms, collision
avoidance strategies, and landing site selection methodologies. Integrating data from
multiple sources, such as DEM images, spectral data, and multi-angle imagery, can
provide a more comprehensive understanding of lunar craters and their geological
context. Moving forward, there are several potential directions for future work in
real-time applications of lunar crater detection:

a. Improved sensor technology: Continual advancements in sensor technology,
including higher resolution imaging and enhanced spectral sensitivity,
can further enhance the accuracy and precision of lunar crater detection.
Future research could explore the integration of state-of-the-art sensors to
capture more detailed and comprehensive data.
b. Advanced deep-learning architectures: Exploring advanced deep-learning
architectures, such as CNNs, RNNs, or attention mechanisms, could lead
to further improvements in lunar crater detection. These architectures may
be specifically tailored to handle the unique characteristics and challenges
of lunar crater identification.
c. Real-time decision-making: Developing real-time decision-making frame-
works based on the detected lunar craters can be valuable for autonomous
lunar exploration missions. This could involve incorporating trajectory plan-
ning algorithms, collision avoidance strategies, and landing site selection
methodologies to enable safe and efficient navigation on the lunar surface.
d. Integration of multi-modal data: Combining data from multiple sources,
such as DEM images, spectral data, and multi-angle imagery, can provide
a more comprehensive understanding of lunar craters and their geological
context. Future work could focus on integrating and fusing data from differ-
ent sensors to improve the accuracy and reliability of lunar crater detection.
e. Deployment on spacecraft: The research conducted on the Raspberry Pi
platform with LED sensors can serve as a stepping stone for developing
similar real-time data processing systems that can be deployed directly
on spacecraft. This would enable onboard analysis of lunar crater images,
allowing for immediate decision-making and reducing reliance on commu-
nication delays with Earth.

Overall, the research presented in this chapter serves as a thorough reference for
researchers and practitioners interested in utilizing the potential of deep-learning
techniques, sensor technologies, and ensemble-learning strategies for lunar explora-
tion. In summary, by exploring these future directions, we can further advance the
field of lunar crater detection, enabling more accurate mapping of the Moon’s sur-
face, aiding in the selection of landing sites for future missions, and supporting our
understanding of lunar geology.

ACKNOWLEDGMENTS
The authors would like to acknowledge the financial support received by the second
author from ISRO CHANDRAYAAN-2 AO under the sponsored project entitled
“Machine learning and Deep learning Based Automatic Lunar Crater detection
using Terrain Mapping Camera-2 DEM images.” We thank SSPO, Indian Space
Research Organisation (ISRO), Bengaluru, for support and for help in using the data
obtained from TMC-2. We thank the ISRO team for inspiration and giving us finan-
cial support for conducting this project. This chapter is an outcome of a
collaborative research project of the Principal Investigator, the Research Fellow,
and the Department of Computer Science and Remote Sensing, BIT Mesra, Ranchi.

13 A Framework of
Intelligent Manufacturing
Process by Integrating
Various Functions
T. Rajasanthosh Kumar, Laxmaiah G.,
and S. Solomon Raj

13.1 INTRODUCTION
Manufacturers in the modern era have to juggle a number of conflicting priorities in
order to keep their operations running smoothly, such as responding quickly to shop
floor faults, maintaining an optimal inventory level while striving for customized pro-
duction, and adapting their work schedules to accommodate fluctuations in product
availability and production needs [1, 2]. Though enterprise resource planning (ERP)
systems are helpful in facilitating communication between internal and external par-
ties, ERP by itself is insufficient since it focuses on higher-level concerns rather than
the shop floor [3–5]. The aforementioned difficulties may be overcome by integrating
data from the shop floor with information from enterprise systems like ERP to make
informed decisions that take into account the needs of both the shop and the company
as a whole [6]. Based on the production plan and the current shop floor scenario, the
manufacturing execution system (MES) was implemented to manage shop floor oper-
ations [7, 8]. When it comes to optimizing the whole manufacturing process, MES is
the system that does the heavy lifting from work order to final product [9]. Events like
quality issues and equipment breakdowns on the work floor must be addressed imme-
diately. There is a shortage of substructure for data collection, integration of produc-
tion data, and analysis on each capability of MES systems, which makes it difficult
for many modern manufacturing organizations to handle manufacturing management
operations utilizing MES [10]. Workers who rely on phone, email, and other electronic
forms of communication for cooperation must be well-versed in manufacturing man-
agement difficulties and their solutions. Many businesses still have trouble gathering
data from the factory floor, mostly because of restrictions on accessing the detailed
protocol of control equipment for free. This implies that sensor technologies, such as
Open Platform Communications Unified Architecture (OPC-UA) or MTConnect, are
required if one is to get usable data from the shop floor [11, 12].
To find a solution, we first perform a literature review on ways to enhance MES
systems. AUTO-ID and sensor network technologies are often used in data-col-
lecting studies. Radio-frequency identification (RFID)-based data collection has
been proposed; it attempts to conform to preexisting MES, with reference to a
cotton-spinning business, and advocates a vendor-neutral, scalable MES built
on networked object computing. It integrated data and objects using the Common
Object Request Broker Architecture (CORBA) object-computing framework [13].
Production management is the primary example used [14, 15]. RFID tag manage-
ment is proposed for workshops. Information on works-in-progress inventories is
the primary topic. It identifies the present issue with job shop management as data-
collecting paperwork [16, 17]. WSN (wireless sensor network) is used in the automo-
tive industry to monitor air quality via the integration of sensors with a web-based
environment and RFID technology [18]. Many studies only discuss employing one or
two sensor types to collect information. There has to be a way to combine the many
kinds of sensors present in a typical production environment [19, 20]. To get over
the difficulty of integrating disparate sensors, it is recommended to use a web-based
open sensor web architecture. Shop floor data may be integrated using ontology [21].
An example of an ontology-based model of a manufacturing system is shown, in this
instance, a picking system.
The following is the outline for this chapter. In Section 13.2, we outline the needs
that will be met by the Smart MES. The requirements are used in Section 13.3 to
inform the design. Considering design factors, Section 13.4 proposes an architecture
for Smart MES. In Section 13.5, we provide a scenario for the company’s operations
that may be used while assembling automobile doors.

13.2 MES ISSUE


Smart MES design requires the identification and incorporation of issues arising
from both the state of the art in MES technology and the findings of prior studies. We
used this method to better understand the significance of field issues and incorporate
them into the planning and construction of buildings. Our problem-solving method-
ology is grounded on a thorough literature analysis and interviews with factory work-
ers, engineers, and executives at a vehicle door manufacturer and a refractory brick
producer. These are large organizations where the majority of facility automation
has been achieved but where information management and handling still leave much
to be desired. Here is a quick rundown of the most important prerequisites [22, 23].

• [PR #1] Workers must manually enter machine controller data into the
MES or database (DB). The resulting time lag makes the data less useful.
• [PR #2] Sensors collect shop floor data. However, there are several sen-
sor communication standards and the PLC (programmable logic controller)
supports multiple sensor kinds depending on suppliers, making data inte-
gration difficult. It prevents data optimization.
• [PR #3] Some organizations invest in controller protocols to collect data
from facilities; however, diverse controller protocols make data integration
difficult. It also prevents data optimization.
• [PR #4] Once the production schedule is set, it is hard to adjust for stake-
holder needs or shop floor conditions. If anything occurs, MES operators and
shop floor personnel have to communicate, which takes time to reschedule.

• [PR #5] MES operators have trouble with real-time tracking WIP. Manual
search is done when pieces move the wrong way, but it takes time.
• [PR #6] Due to resource conditions and manufacturing operating schedule,
scheduling preventive maintenance is difficult.
• [PR #7] Identifying machine failures is difficult. Troubleshooting relies on
employees’ tacit knowledge. Without expertise and insight, problem-solving will
take a long time, and results vary from worker to worker.
• [PR #8] Many manufacturing businesses process information using facil-
ity on/off checks or the x̄-R control chart. In other circumstances, the difficulty is
compositely generated by several variables, making basic data processing
difficult. Worker understanding is key.
• [PR #9] It is difficult to identify product quality issues, their causes, and
their onset. Manual inspection relies on employees’ knowledge and compe-
tence; therefore, it might vary.
• [PR #10] Estimating production material consumption is scarcely con-
nected to SCM (supply chain management); thus, demand forecast depends
on manager expertise, and feedback work is mostly done by phone, which
increases communication load and reduces work efficiency.

13.3 MES ARCHITECTURE


The design-derived system idea that forms the basis of Smart MES’s architecture is
laid out here.

13.3.1 Smart MES Architecture and Key Components


We develop a system concept for the reference architecture before making a formal
architecture proposal. The design considerations are the building blocks of the
structure, and the system concept is derived from them. More information is provided below.

• [SC #1] Embracing a time-sensitive communication environment: Smart MES,
enterprise resource planning (ERP), and shop floor users all use the same
infrastructure. This infrastructure is required in order to have dynamic
flows of data between the factory floor and the enterprise system (derived
from DC #1, #2, #3, #4, and #9). In order to effectively collect
information from many types of systems and conduct in-depth analysis, it is
also necessary to set up data integration and transformation (derived from
DC #2, #3, #5, #10, #11, and #12).
• [SC #2] Big data module adoption: As a platform for storing, analyzing, and
visualizing massive amounts of data, it plays a crucial role for Smart MES,
since many MES operations rely on the module’s output to carry out their
respective tasks. This section provides the means to modify the current
analysis model, making it possible to adapt it to new shop floor data behav-
iors. In addition, the related analysis model may be updated to account for
new circumstances, such as the incorporation of analysis-derived input data
into previously established applications (as in DC5, DC6, and DC7) [24, 25].

• [SC #3] Uses for the intelligent MES system: Modules of Smart MES
capability analyze data to determine what actions to take. In addition, all
departments may coordinate their efforts to boost output on the factory
floor. Whenever new features are needed in MES, we can easily configure
and reflect them by modeling them as apps, so that the system is always up-
to-date. Overall manufacturing operation management may be improved
with the help of data synchronization and a module to facilitate collabora-
tive application development [12, 25, 26].

Figure 13.1 depicts the overall traceability across PR, DC, and SC, and our
reference architecture is based on the aforementioned system idea and design
considerations.
ERP unifies the many incompatible communication protocols used in manufac-
turing. Figure 13.2 depicts the many shop floor communication options available to
workers. In order to pool information from many channels of communication, some
method of integrating these channels is required. Data from the corporate informa-
tion system and data generated by the Smart MES application are both saved and

FIGURE 13.1 Tracing PR-DC-SC.



FIGURE 13.2 Smart MES architecture.

converted into the enterprise information database. It serves a dual purpose: if none
of them can immediately process the message, it may act as a temporary storage
space until they can. It has the ability to convert data for use in both systems [26, 27].
These days, most databases also have a data transformation tool. Since most busi-
ness systems are built separately, facilitating communication between them requires
extra effort and a tool called an enterprise service bus. Information gathered on the
factory floor must be stored and merged into a single database for effective analysis.
Transforming often occurs in a separate module from the one responsible for storing
data. However, for the speed of computation, methods have been developed to enable
the processing of data directly in the repository. After data has been stored and
integrated, it will be sent to the data analysis engine, where it will be processed in
order to extract actionable insights. The analytical procedure as a whole is shown in
Figure 13.3. The model builder decides what kinds of information are sent. A
model builder constructs an analysis model and defines its output by coupling the
data mining and pre-processing approaches most suitable to the analysis challenge
at hand with the input parameters. The pre-processing module yields pre-processing meth-
ods, whereas the data mining module yields data mining techniques. The behavior
of shop floor data might be affected by the use of different quality, manufacturing
processes, and WIP (work-in-process) monitoring criteria for new products [28].

FIGURE 13.3 Data analysis.

In
addition, they may be modified as needed in light of the addition of additional sen-
sors or infrastructure. Modelers may fix this by making new or revised models.
Afterwards, we create a visualization and report.
Data analysis results are processed by the module so that the user may interpret
them. Every user may have access to any visual content or report by uploading it to
a web interface.
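
To make the analysis-model idea concrete, the hedged Python sketch below couples
a pre-processing method with a data-mining technique into one analysis model, in
the spirit of the model builder described above. The feature names, algorithm
choice, and synthetic data are illustrative assumptions:

    # Illustrative "analysis model": a pre-processing step coupled with a data
    # mining technique, as assembled by the model builder. Features are assumed
    # shop floor signals (e.g., current, vibration, temperature).
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.ensemble import RandomForestClassifier

    analysis_model = Pipeline([
        ("preprocess", StandardScaler()),                    # pre-processing module
        ("mine", RandomForestClassifier(n_estimators=100)),  # data mining module
    ])

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 3))        # synthetic sensor feature matrix
    y_train = (X_train[:, 0] > 0).astype(int)  # stand-in normal/abnormal labels

    analysis_model.fit(X_train, y_train)          # train the analysis model
    print(analysis_model.predict(X_train[:5]))    # output defined by the builder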

• Event processing: This module receives the results of data analysis. Each
analytical outcome is treated as a discrete occurrence. The event filter mon-
itors every activity and keeps track of whether or not it represents atypical
behavior. When this happens, event abstraction takes control. The filtered
event is then abstracted into a form that Smart MES applications may use.
The application’s collaboration manager receives the abstracted event via
the application interface. Filtering rules and abstraction rules are defined in
the rule repository. Figure 13.4 depicts the format for event data.

FIGURE 13.4 Data format.
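
A hedged sketch of this filter-then-abstract flow is given below; the event
fields, metric name, and threshold rule are assumptions for demonstration rather
than the exact data format of Figure 13.4:

    # Illustrative event processing: filter atypical analysis results, then
    # abstract them into an application-level event. Fields and rules assumed.
    FILTER_RULES = {"weld_current": lambda v: v > 0.9}  # 0.9 = anomaly cut-off

    def filter_event(event):
        # event filter: keep only results that represent atypical behavior
        rule = FILTER_RULES.get(event["metric"])
        return rule is not None and rule(event["value"])

    def abstract_event(event):
        # event abstraction: map the raw result to a form Smart MES apps can use
        return {"type": "QUALITY_ALERT", "source": event["source"],
                "timestamp": event["timestamp"], "severity": "high"}

    raw = {"source": "spot_welder_3", "timestamp": 1700000000.0,
           "metric": "weld_current", "value": 0.95}
    if filter_event(raw):
        app_event = abstract_event(raw)
        print(app_event)  # forwarded to the application collaboration manager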



• The MES management module and the Smart MES application: Each sub-
system in this module makes decisions about what to do on the shop floor
and what data to send up to the corporate IT based on the results of data
analyses. Enterprise information systems may be used for a variety of pur-
poses, including the thorough scheduling of operations.

The user may control and monitor the application from their own device, such as
a smartphone. Only approved users are permitted. Data Sync. Manager is in charge
of keeping everything in sync. Application cooperation manager is responsible for
the initial event reception, application cooperation, and data synchronization. Both
of these will be discussed in further detail in the next section. There are three steps
that must be taken whenever a program is updated or a new one is installed. (1) Any
data characteristics used by the new application that were not included in the old
master data, or that had several names, are registered in the schema master data. We
place a higher value on adaptability than on speed and design simplicity, despite
the fact that it is simple to set up an application such that it adheres to a pre-defined
master data model via a unified data model. (2) The collaboration information of
newly installed applications should be registered with the application collaboration
manager. (3) New or altered data properties should be included in the
synchronization process.

• Data transformation rule repository enumerates the rules that convert
between various file types. The rules specify how information from the
integration of the enterprise service bus, the Smart MES application,
and shop floor data stored in a distributed database enables the seamless
exchange of information among these systems, facilitating their utilization
in analytics. This integration adheres to the machine manufacturing data
standard.

13.3.2 Data Synchronization and Application Cooperation Managers


Data synchronization and application cooperation support decisions on both the
shop floor and in the enterprise system. Data synchronization is necessary because
applications working from inconsistent data are useless for decision-making. A
shared central DB would prevent access collisions, but it also concentrates all
communication, which slows the system down. To navigate this trade-off, each
application is given its own database, and a data synchronization manager keeps them consistent.
Because not one application can perform all of the tasks outlined in the design,
cooperation between them must be considered. To improve production management
on the shop floor as a whole, it is essential that all departments work together. We
created a centralized collaboration manager, but the architecture can be set up such
that each app uses its own logic to decide where data should be sent. Adopting a
model in which each application is responsible for its own forwarding logic might
result in a large number of interface updates, if and when certain apps need them.
Here, we create the overarching framework for the app, collaboration, and the Data
Sync. Manager, including a sequence diagram.

FIGURE 13.5 Data sync. and application cooperation management architecture.

A sequence diagram is a common means of elucidating processes across several
platforms. The high-level architecture of the Data Sync. Manager and the
application cooperation manager is shown in Figure 13.5.
A conflict resolution module, a data-collecting database, a data attribute
repository, and a data grouping module make up the data synchronization manager.
Through the data interface, fresh data may enter, and after being processed
by the data grouping module and the conflict resolution, the modified data can exit.
Temporarily stored information is sent to the Data Sync Manager from the data-
gathering DB. The data-grouping module collects and organizes records that have
the same properties but have been updated at various rates or with varying values.
If there are more than two items in a set, it indicates that many apps are synchro-
nizing their updates of the same data. The attributes created in the app master data
model are kept in the data attribute repository. Data sets with more than two com-
ponents are sent to the conflict resolution module. The most recently updated data
is chosen for transmission. In other words, the newest information is reflected in
the application, and an incremented update count is recorded for each data
attribute in each MES application DB. The data routing module, which comes
after conflict resolution, decides where data should be sent depending on the rules
given in it. Data flow is shown in a sequence diagram in Figure 13.6. Applications
1, 2, and 3, after checking for update conflicts, all transmit their revised data
to Application 4. Two cases are shown: (1) when there is no conflict and (2) when
a conflict exists.
Each MES application’s DB has its own update count to ensure that the most
recent data is represented inside the application and that the total update count
keeps rising with use. The data routing module is the second step in data
processing after conflict resolution, forwarding data according to the rules laid
down in it.

FIGURE 13.6 Data sync. manager sequence diagram.

The application collaboration manager consists of a message interface, a message
queue, a message content extraction module, a message generator, and a rule-based
engine for facilitating collaboration. When an application determines that it
needs to communicate with other applications, it generates a message and transmits
it to the application collaboration manager, which decides where to forward it.
The message interface receives incoming messages, passes them on for processing,
and dispatches outgoing ones; the message queue serves as an intermediate storage system.
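
The conflict-resolution step can be sketched as follows: updates are grouped by
data attribute, and when several applications have updated the same attribute,
the most recently updated value wins. The record layout, application names, and
timestamps below are assumptions for demonstration:

    # Illustrative conflict resolution for the Data Sync. Manager: group updates
    # by data attribute and keep only the most recent one. Record layout assumed.
    from collections import defaultdict

    updates = [
        {"attribute": "resource_status", "app": "quality_mgmt", "ts": 101,
         "value": "faulty"},
        {"attribute": "resource_status", "app": "maintenance", "ts": 105,
         "value": "under_repair"},
        {"attribute": "wip_count", "app": "production", "ts": 99, "value": 42},
    ]

    groups = defaultdict(list)              # data grouping module
    for u in updates:
        groups[u["attribute"]].append(u)

    resolved = {}
    for attr, items in groups.items():
        # several items => several apps updated the same attribute (a conflict)
        items.sort(key=lambda u: u["ts"])   # conflict resolution: latest wins
        resolved[attr] = items[-1]

    for attr, winner in resolved.items():
        # the data routing module would forward the winner to subscribing app DBs
        print(attr, "->", winner["value"], "from", winner["app"])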

13.4 ARCHITECTURE-BASED OPERATION AND PROTOTYPE


While implementation would be ideal for demonstrating the soundness of the
architecture proposed in the preceding section, we take a different track due to
the sheer complexity of the MES. So, we describe an operational scenario and a
prototype built on top of that architecture. Among the several that may be provided
and derived, we choose a scenario involving work-in-process quality diagnosis and
repair services at a car door assembly plant.

FIGURE 13.7 AS-IS/TO-BE process comparison.

It takes a lot of welding to make a vehicle door. Robots often perform spot
welding. However, there are situations when a WIP (work-in-process) is not welded
in the appropriate location. Incorrect place-
ment of the jig or spot robot is occasionally to blame. However, in this operational
context, the issue is likely the result of a facility failure, such as a worn bearing or
a faulty servo driver.
A discussion with an IT engineer revealed that the majority of the shop floor
is wired into the MES using a 1:n architecture. MES also has 1:n connections to
enterprise systems like ERP. This stems from the many protocols, interfaces, and
data formats in use today. The AS-IS process in Figure 13.7 depicts the current
state of the problem. The MES system relies on human labor, with quality inspection and
documentation being highly dependent on the skills of the workers. It takes time for
judgment to be made since there are so many factors to consider, even if employees
receive some insight from gathered shop floor data like the current value. If the spot-
welding robot is broken and requires fixing immediately, for example, because of a
faulty servo driver, a repair technician should be called in right away. Now, we will
go through the system’s benefits and the new, better situation shown in Figure 13.7.
The prototype’s implementation architecture for the operating scenario is shown in
Figure 13.8. After data collection, data mining is used to categorize quality defects.
ECMiner, a data mining program, was employed. In this case, we used a method
called support vector machine (SVM).
In the context of data analysis and machine learning, “classification” refers to
the process of categorizing or assigning data points to predefined classes. SVMs
can perform classification tasks by constructing hyperplanes in an n-dimensional
vector space, where n represents the number of input variables. Figure 13.7
depicts the interface of ECMiner. Following a thorough analysis, event processing
is undertaken in order to judge whether a quality issue is present or not.

FIGURE 13.8 Operation scenario prototype architecture.

When a product quality issue arises, it is communicated to the application
collaboration manager for resolution. The collaboration manager receives the
message and interprets it. If the quality faults can be attributed to
facility-related issues, it then transmits a fresh message to the quality
management department.
The quality management application then composes a new message and transmits it
to the application collaboration manager, which forwards it to the maintenance
management team. In quality management, the resource status is updated
simultaneously, and the change is reflected in resource allocation and status
management through the data synchronization manager. The flow is demonstrated in
the prototype: once the maintenance request is received, a modification to the
maintenance schedule is implemented. Table 13.1 illustrates the alteration in the
schedule as a result of these circumstances. As for the part-grasping robot to be
deployed thereafter at the device commonly referred to as a “press machine,”
initial inspections of the press machine must be conducted first.
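
Although the case study used the commercial ECMiner tool, an equivalent
open-source sketch of the SVM quality-defect classification step with
scikit-learn might look as follows; the feature names and synthetic labels are
assumptions, not the plant's actual data:

    # Hedged open-source equivalent of the SVM defect classifier (ECMiner was
    # used in the chapter). Features per weld are assumed: current, voltage,
    # electrode force; labels mark defective (1) vs. good (0) welds.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))                    # synthetic weld features
    y = (X[:, 0] + 0.5 * X[:, 2] > 1.0).astype(int)  # stand-in defect labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # SVM constructs a separating hyperplane in the n-dimensional feature space
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))
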
Table 13.1 shows the benefits of the proposed Smart MES over the present MES
operating scenario.
The MES architecture is used to highlight the data flow between components. Since
the data model reflects physical reality, a Smart MES data model can elaborate the
system's functionalities in terms of entities, data, and concepts. Because the
Smart MES system is large, it takes time to implement; a full Smart MES is
therefore still under development.

TABLE 13.1
Smart MES Advantages over Existing MES

Task: Inspection report
  Current MES: The inspector manually writes the result to the MES database.
  Smart MES: The result may be written automatically, and the event is recorded.
  Advantages: The Smart MES module automatically records the work; MES can
  detect quality issues quicker.

Task: Find the issue
  Current MES: The inspector uses product knowledge and graphed data to
  determine the WIP issue.
  Smart MES: The inspector does not need to rely on skills alone; data
  analysis may help.
  Advantages: A non-skilled inspector can tackle WIP defects in real time;
  detection time is diminished.

Task: Inform maintenance
  Current MES: The MES operator should contact maintenance to fix the machine;
  maintainers often refuse to alter the schedule if the facility failure is
  not urgent.
  Smart MES: The quality management service may notify the maintenance
  management service through an application.
  Advantages: The maintenance management service is notified automatically.

13.5 CONCLUSION
In this study, we highlighted the issue with today’s MES systems by explaining how
they lack an environment conducive to analyzing and understanding shop floor data
in real time. Despite the efforts of several studies, they neglected to address
coordination between the different MES capabilities. For these reasons, the Smart
MES system is specified, together with the requirements and design considerations
for its architecture. An application case for Smart MES in an operational setting
is presented to demonstrate the viability and value of intelligent MES against the
MES of today. Using a case study of an actual operation, we demonstrated that
Smart MES is capable of fully and efficiently handling the current situation on
the factory floor. This is only one possibility,
and there is a plethora of derived situations. The research and methods presented in
this chapter may be used as a foundation upon which to build. The need for further
study requires a data model to be established and applied to the Smart MES archi-
tecture to guarantee the visibility of information exchange between all of its parts.
Smart MES’s data model development allows for the elaboration of its operations
in more depth, since the data model itself reflects the physical notion and data. In
addition, the Smart MES is a massive system; therefore, it takes some time before it
can be put into action. A fully working Smart MES system is therefore now under development.

REFERENCES
1. Tian, G. Y., Yin, G., and Taylor, D. “Internet-based manufacturing: A review and a
new infrastructure for distributed intelligent manufacturing.” Journal of Intelligent
Manufacturing 13, (2002): 323–338.

2. Zhang, Luyao, Feng, Lijie, Wang, Jinfeng, and Lin, Kuo-Yi. “Integration of design,
manufacturing, and service based on digital twin to realize intelligent manufacturing.”
Machines 10, no. 4 (2022): 275.
3. Gu, Ai, Yin, Zhenyu, Fan, Chao, and Xu, Fulong. “Safety framework based on block-
chain for intelligent manufacturing cyber physical system.” In 2019 1st International
Conference on Industrial Artificial Intelligence (IAI), pp. 1–5. IEEE, 2019.
4. Hung, Min-Hsiung, Lin, Yu-Chuan, Hsiao, Hung-Chang, Chen, Chao-Chun, Lai, Kuan-
Chou, Hsieh, Yu-Ming, and Tieng, Hao et al. “A novel implementation framework of
digital twins for intelligent manufacturing based on container technology and cloud
manufacturing services.” IEEE Transactions on Automation Science and Engineering
19, no. 3 (2022): 1614–1630.
5. Oztemel, Ercan. “Intelligent manufacturing systems.” In Artificial intelligence tech-
niques for networked manufacturing enterprises management, pp. 1–41. London:
Springer London, 2010.
6. Wang, Zilin, Cui, Lizhen, Guo, Wei, Zhao, Lei, Yuan, Xin, Gu, Xiaosong, Tang,
Weizhong, Bu, Lingguo, and Huang, Weiming. “A design method for an intelligent
manufacturing and service system for rehabilitation assistive devices and special
groups.” Advanced Engineering Informatics 51, (2022): 101504.
7. Li, Ruiqi, Wei, Sha, and Li, Jia. “Study on the application framework and standardiza-
tion demands of AI in intelligent manufacturing.” In 2019 International Conference
on Artificial Intelligence and Advanced Manufacturing (AIAM), pp. 604–607. IEEE,
2019.
8. Trappey, Amy JC, Trappey, Charles V., Chao, Min-Hua, and Wu, Chun-Ting.
“VR-enabled engineering consultation chatbot for integrated and intelligent manufac-
turing services.” Journal of Industrial Information Integration 26 (2022): 100331.
9. Zhou, Ji, Zhou, Yanhong, Wang, Baicun, and Zang, Jiyuan. “Human–cyber–physi-
cal systems (HCPSs) in the context of new-generation intelligent manufacturing.”
Engineering 5, no. 4 (2019): 624–636.
10. Zhou, Dongdong, Xu, Ke, Lv, Zhimin, Yang, Jianhong, Li, Min, He, Fei, and Xu, Gang.
“Intelligent manufacturing technology in the steel industry of China: a review.” Sensors
22, no. 21 (2022): 8194.
11. Devedzic, V., and Radovic, D. “A framework for building intelligent manufacturing
systems.” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications
and Reviews) 29, no. 3 (1999): 422–439.
12. Liu, Zhi-feng, Zhang, Yue-ze, Yang, Cong-bin, Huang, Zu-guang, Zhang, Cai-xia, and
Xie, Fu-gui. “Generalized distributed four-domain digital twin system for intelligent
manufacturing.” Journal of Central South University 29, no. 1 (2022): 209–225.
13. Annanth, V. Kishorre, Abinash, M., and Rao, Lokavarapu Bhaskara. “Intelligent manu-
facturing in the context of industry 4.0: A case study of siemens industry.” Journal of
Physics: Conference Series 1969, no. 1 (2021): 012019. IOP Publishing.
14. Sharma, P., and Bhargava, M. “Designing, implementation, evolution and execution
of an intelligent manufacturing system.” International Journal of Recent Advances in
Mechanical Engineering 3, no. 3 (2014): 159–167.
15. Aljarrah, O., Li, J., Heryudono, A., Huang, W., and Bi, J. “Predicting part distortion
field in additive manufacturing: A data-driven framework.” Journal of Intelligent
Manufacturing 34, no. 4 (2023): 1975–1993.
16. Zhang, Zhongyu, Zhu, Zhenjie, Zhang, Jinsheng, and Wang, Jingkun. “Construction
of intelligent integrated model framework for the workshop manufacturing system
via digital twin.” The International Journal of Advanced Manufacturing Technology
118, (2022): 3119–3132.
17. Guo, Qinglin, and Zhang, Ming. “Multiagent-based scheduling optimization for intel-
ligent manufacturing system.” The International Journal of Advanced Manufacturing
Technology 44, (2009): 595–605.

18. Kamble, Sachin S., Gunasekaran, Angappa, Parekh, Harsh, Mani, Venkatesh, Belhadi,
Amine, and Sharma, Rohit. “Digital twin for sustainable manufacturing supply chains:
Current trends, future perspectives, and an implementation framework.” Technological
Forecasting and Social Change 176, (2022): 121448.
19. Shi, Miaoyuan. “Knowledge graph question and answer system for mechanical intel-
ligent manufacturing based on deep learning.” Mathematical Problems in Engineering
2021, (2021): 1–8.
20. Marques, Maria, Agostinho, Carlos, Zacharewicz, Gregory, and Jardim-Gonçalves,
Ricardo. “Decentralized decision support for intelligent manufacturing in industry
4.0.” Journal of Ambient Intelligence and Smart Environments 9, no. 3 (2017):
299–313.
21. Guo, W., Wang, Y., Chen, X., and Jiang, P. “Federated transfer learning for auxiliary
classifier generative adversarial networks: Framework and industrial application.”
Journal of Intelligent Manufacturing 35, (2024): 1439–1454.
22. Liu, Yefeng, Zhao, Yuan, Tao, Lin, Zhao, Kexue, and Li, Kangju. “The application
of digital flexible intelligent manufacturing system in machine manufacturing indus-
try.” In 2018 IEEE 8th Annual International Conference on CYBER Technology in
Automation, Control, and Intelligent Systems (CYBER), pp. 664–668. IEEE, 2018.
23. Lin, Yu-Ju, Wei, Shih-Hsuan, and Huang, Chin-Yin. “Intelligent manufacturing control
systems: The core of smart factory.” Procedia Manufacturing 39, (2019): 389–397.
24. Padovano, Antonio, Longo, Francesco, Nicoletti, Letizia, Gazzaneo, Lucia, Chiurco,
Alessandro, and Talarico, Simone. “A prescriptive maintenance system for intelligent
production planning and control in a smart cyber-physical production line.” Procedia
CIRP 104, (2021): 1819–1824.
25. Romero, David, Jardim-Goncalves, Ricardo, and Grilo, Antonio. “Factories of the
future: Challenges and leading innovations in intelligent manufacturing.” International
Journal of Computer Integrated Manufacturing 30, no. 1 (2017): 1–3.
26. Chen, Gaige, Wang, Pei, Feng, Bo, Li, Yihui, and Liu, D. “The framework design of
smart factory in discrete manufacturing industry based on cyber-physical system.”
International Journal of Computer Integrated Manufacturing 33, no. 1 (2020):
79–101.
27. Anton, Florin, Borangiu, Theodor, Răileanu, Silviu, and Anton, Silvia. “Cloud-based
digital twin for robot integration in intelligent manufacturing systems.” In Advances in
service and industrial robotics: Results of RAAD, pp. 565–573. Springer International
Publishing, 2020.
28. Sulhi, Ahmad. “Data mining technology used in an internet of things-based decision
support system for information processing intelligent manufacturing.” International
Journal of Informatics and Information Systems 4, no. 3 (2021): 168–179.
14 Adaptive Supply
Chain Integration
in Smart Factories
Deepak Mathivathanan and
Sivakumar Kirubanandan

14.1 INTRODUCTION
The advent of Industry 4.0 has facilitated a transformative shift in manufacturing by
changing the way products are designed, produced, and delivered leveraging digi-
tal technologies to create intelligent, interconnected, and adaptive manufacturing
ecosystems [1]. Advances in terms of Internet of Things (IoT), big data analytics,
artificial intelligence (AI), and cyber-physical systems (CPS) enable manufacturers
to achieve increased efficiency, flexibility, and productivity and deliver custom-
ized products and services. Smart factories are the heart of this transformation
that drives innovation and efficiency through advanced technologies, data ana-
lytics, and automation [2, 3]. Smart factories are a step towards next-generation
manufacturing and thus represent a paradigm shift from traditional manufacturing
approaches by introducing a higher level of flexibility, agility, and intelligence
in the production process [4]. Smart factories embody integrated cyber-physical
systems, IoT devices, AI, and data analytics creating interconnected and intelli-
gent production systems enabled by real-time monitoring, predictive capabilities,
and autonomous decision-making, leading to increased productivity at reduced
costs, improved quality, and enhanced sustainability [5]. In addition to production,
with real-time data, AI-driven analytics, and automation, smart factories assist in
delivering goods faster to market, improve the operational efficiency, and enhance
customer experiences [6]. Furthermore, smart factories enable real-time monitor-
ing and predictive maintenance, minimizing downtime and optimizing equipment
performance to facilitate data-driven decision-making, enhancing supply chain
visibility and responsiveness, efficient inventory management, and reducing wast-
age [7]. Furthermore, smart factories drive the implementation of lean and agile
manufacturing principles, fostering continuous improvement and increased adapt-
ability through supply chain integration (SCI) [8].
SCI plays a vital role in enabling seamless coordination and collaboration across
the entire supply chain ecosystem, fostering efficient and synchronized operations in
a smart factory scenario [9, 10]. Real-time visibility into various stages of the pro-
duction process, including inventory levels, production status, and customer demand,
enables accurate forecasting, demand planning, and inventory management, leading




to improved efficiency and reduced costs [11]. Furthermore, transparency allows


stakeholders to monitor and track the movement of goods, ensuring compliance and
reducing the risk of counterfeit or substandard products [12]. Through SCI, smart
factories can respond rapidly to changing customer demands, market conditions, and
disruptions based on real-time data sharing and collaboration between suppliers [13].
The agility in integrated smart factories helps in minimizing stock-outs, reducing
lead times, and meeting customer expectations, ultimately enhancing customer sat-
isfaction and loyalty [14]. SCI also allows for better coordination and utilization of
resources by sharing information and insights throughout the supply chain to opti-
mize production capacity, reduce wastage, and allocate resources efficiently [15].
Coordinating raw material orders, managing production schedules, and optimizing
transportation routes hence lead to cost savings and improved resource utilization.
Integration across the supply chain eliminates silos and enables seamless data flow
for better communication among stakeholders minimizing manual interventions,
reducing errors, and improving overall operational efficiency [16]. Furthermore, SCI
fosters collaboration and innovation among partners in the ecosystem by sharing data
and knowledge, with suppliers, customers, and other stakeholders [17]. Joint product
development, improved supply chain planning, and the identification of new business
opportunities create a competitive advantage by driving continuous improvement
leading to the development of innovative solutions [18]. Furthermore, SCI enhances
risk management capabilities by enabling real-time monitoring of inventory levels,
demand patterns, and external factors [19].
However, the transition to smart factories requires significant investments in tech-
nology infrastructure, workforce upskilling, and change management. Furthermore,
across the entire supply chain, integration of diverse systems, ensuring data security
and privacy, and addressing interoperability issues are critical for a successful transi-
tion. The objective of this chapter is to explore the importance and implementation of
SCI in the context of smart factories within the Industry 4.0 framework. This chapter
provides insights into how smart factories can leverage adaptive supply chain man-
agement (ASCM) practices, enabled by AI-driven IoT systems, to enhance opera-
tional efficiency, responsiveness, and competitiveness.

14.2 SMART FACTORIES


Smart factories are a key component of Industry 4.0 which represents the significant
advancements in the manufacturing industry, leveraging cutting-edge technologies
to create highly connected, intelligent, and efficient production systems [20]. The
smart factories are characterized by the integration of cyber-physical systems, IoT
connectivity, advanced data analytics, autonomous robots, and digitally enabled SCI
in the existing traditional setup. The key characteristic components of smart facto-
ries are presented in Figure 14.1.

14.2.1 Key Component of Smart Factories


Integrated CPS: Integration of CPS is a fundamental aspect of smart factories
that plays a crucial role in their functioning [21]. CPS integrates digital

FIGURE 14.1 Key components of smart factories.

and physical elements using computational algorithms and communication
networks with physical components such as machines, sensors, actuators,
and control systems [22]. In smart factories, the physical components, i.e.,
machines, robots, and production equipment, are equipped with sensors
and actuators that collect and transmit data in real time [23]. Through the
integration of CPS, the collected data is processed and analysed by the soft-
ware systems, AI algorithms, and data analytics platforms.
IoT connectivity: To enable seamless communication and data exchange
between devices, sensors, and machines on the factory floor, IoT connectiv-
ity is critical [24]. In smart factories, IoT connectivity facilitates real-time
data collection, communication, and analysis, enabling enhanced visibility,
predictive maintenance, and intelligent decision-making [25]. Beyond the
factory floor, IoT-enabled devices can track and monitor goods and mate-
rials throughout the supply chain, providing real-time visibility into their
movements, locations, and conditions, thereby improving supply chain
coordination, demand forecasting accuracy, and efficient inventory man-
agement [26].
Advanced data analytics: Through connected systems, organizations extract
valuable insights from the vast amounts of data generated within the

production process and across supply chains [27]. By leveraging data-
driven approaches like big data analytics, machine learning (ML), and
AI techniques, smart factories can gain real-time monitoring capabilities,
predictive analytics, and optimization of manufacturing operations [28].
Furthermore, continuous learning capability associated with advanced data
analytics provides scope for improving operations in response to dynamic
market demands, emerging trends and technologies.
Autonomous and collaborative robots: Collaborative robots, or “cobots”, are
capable of operating autonomously or in cooperation with humans, enabled
by AI and ML algorithms, to improve precision, safety, and efficiency in
manufacturing processes [29]. Unlike traditional industrial robots, cobots
are equipped with advanced sensors and safety features that enable them
to operate safely in close proximity to humans [30]. Cobots assist humans
in tasks that require strength, precision, or repetitive actions, augmenting
human capabilities and improving productivity [31]. Thus, the integration of
autonomous and collaborative robots contributes towards achieving higher
productivity, adaptability, and innovation with safety in the ever-evolving
manufacturing landscape.
Real-time monitoring and control: In the context of smart factories, real-time
monitoring and control systems are crucial in enabling manufacturers to
optimize their manufacturing processes and achieve higher efficiency and
productivity [32]. With the help of sensors and connected devices, smart
factories monitor the equipment performance, energy consumption, and
quality metrics on a real-time basis [33]. Using these insights, operators
can make data-driven decisions, proactively address issues, and optimize
production parameters, leading to improved efficiency, quality, and produc-
tivity in the manufacturing process (a minimal monitoring sketch is given below).
Flexibility and customization: Smart factories enable agile production sys-
tems that can rapidly adapt to changing market demands, customer pref-
erences, and production requirements due to their built-in flexibility and
customization capability [34]. The interconnected machines, flexible
automation, and advanced planning systems lead to efficient product
customization while maintaining cost-effectiveness [35]. Thus, flex-
ibility and customization in smart factories contribute towards catering
to individual customer preferences leading to customer satisfaction and
loyalty.
Enhanced safety and sustainability: Smart factories prioritize worker safety
through advanced safety systems and technologies [30]. With AI-enabled
safety monitoring and data analytics-based safety optimization, smart
factories can proactively address safety concerns and continuously
improve safety performance [36]. On the other hand, smart factories
employ energy management systems that monitor and analyse energy
consumption patterns to identify opportunities to reduce energy waste,
optimize energy usage, and lower carbon emissions, thereby extending
the focus on sustainability in factory operations to the broader supply
chain [37].
Continuous improvement and innovation: Fostering a culture of advance-
ment and driving maximum optimization in manufacturing processes are
by-products of continuous improvement and innovation [38]. Smart tech-
nologies like virtual reality (VR), augmented reality (AR), and digital
twins facilitate virtual prototyping, process simulation, and real-time per-
formance monitoring, supporting continuous improvement and innovation
[39]. Hence, by embracing advanced technologies and promoting a proac-
tive improvement approach, smart factories can drive efficiency, productivity,
and competitiveness.
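
To make the real-time monitoring component above concrete, the following minimal
Python sketch flags sensor readings that drift away from recent history. It is an
illustration only: the sensor, window size, and threshold are hypothetical assumptions,
not details taken from this chapter.

```python
from collections import deque
from statistics import mean, stdev

class RollingMonitor:
    """Toy real-time monitor: flag a reading that deviates from the
    rolling mean by more than k standard deviations."""

    def __init__(self, window: int = 100, k: float = 3.0):
        self.values = deque(maxlen=window)
        self.k = k

    def update(self, reading: float) -> bool:
        """Record a reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.values) >= 10:  # wait for a minimal history
            mu, sigma = mean(self.values), stdev(self.values)
            anomalous = sigma > 0 and abs(reading - mu) > self.k * sigma
        self.values.append(reading)
        return anomalous

# Hypothetical usage: a spindle-temperature stream in which the last
# value simulates a fault condition.
monitor = RollingMonitor()
stream = [72.0, 71.9, 72.1, 72.0, 71.8, 72.2, 72.0, 71.9, 72.1, 72.0, 95.4]
for temp in stream:
    if monitor.update(temp):
        print(f"ALERT: temperature {temp} deviates from recent history")
```

In a production setting, logic of this kind would typically run close to the equipment
and feed the operator dashboards and data-driven decisions described above.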

14.2.2 Role of IoT and AI in Enabling Smart Factory Operations


Though there are several components associated with smart factories, the role of the
IoT and AI is fundamental in enabling and optimizing factory operations. They have
revolutionized the traditional manufacturing process by enabling smart factories
to achieve heightened levels of automation, connectivity, and intelligent decision-
making. Let us discuss their role in detail in the following subsections.

1. Role of IoT: IoT forms the backbone of smart factory operations by
connecting devices, sensors, and machines throughout the manufacturing
environment [40]. Here is how IoT enables smart factory operations:
a. Data collection and real-time monitoring: IoT sensors embedded in
machinery and equipment in smart factories constantly collect data
on several control parameters, such as temperature, pressure, humid-
ity, and performance metrics [41]. IoT-enabled real-time data collection
hence enables factories to implement continuous and effective moni-
toring systems for equipment health, energy consumption, and overall
production efficiency [42].
b. Connectivity and communication: IoT enables connected devices and
machines to communicate seamlessly, exchanging information and data
across the factory floor [43]. The enhanced con-
nectivity offered by IoT thus enables real-time collaboration, synchro-
nization, and coordination of production processes, leading to better
efficiency and responsiveness [44].
c. Predictive maintenance: IoT-enabled sensors monitor equipment con-
ditions and performance to provide valuable insights into maintenance
needs [45]. Predictive maintenance algorithms hence can effectively
analyse data patterns and generate alerts or predictions regarding poten-
tial failures and other maintenance requirements [46]. This minimizes
unplanned downtime, optimizes maintenance schedules, and improves
overall equipment reliability (a minimal detector sketch is given below).
d. SCI: IoT connectivity extends beyond the factory walls and is key in
enabling integration with suppliers, customers, and logistics providers
[47]. Real-time data sharing and collaboration assist in streamlining
supply chain operations to ensure timely delivery, accurately man-
age inventory, and respond to changing demand effectively.
2. AI: Advanced AI technologies, such as ML, deep learning (DL), and cogni-
tive computing along with IoT capabilities, enable effective processing and
analysis of huge volumes of real-time data generated by IoT devices [48].
Here is how AI enables smart factory operations:
a. Data analysis and insights: AI algorithms help in analysing large data-
sets collected by IoT devices to detect patterns, anomalies, and correla-
tions, providing valuable insights for improving production efficiency,
quality control, and decision-making [49]. AI-powered analytics enable
high-precision predictive and prescriptive capabilities, enhancing effi-
cient production planning, demand forecasting, and inventory manage-
ment [50].
b. Autonomous decision-making: Machines and robots within smart
factories integrated with AI can make autonomous decisions based
on real-time data and predefined rules [10]. AI-powered systems can
optimize production parameters, adjust schedules, and dynamically
respond to changes with minimum human interventions and maximum
efficiency [51].
c. Robotics and automation: AI-driven robots play a crucial role in
automating various tasks within smart factories. Advanced robot-
ics supported by AI algorithms can perform complex operations and
collaborate with human workers in a safe and efficient manner [52].
AI-enabled robots learn from experience (past data), improve their
performance, and become able to handle diverse future manufacturing
processes [53].
d. Cognitive systems and natural language processing (NLP): AI-powered
cognitive systems improve human-machine interactions within smart
factories [50]. NLP models enable voice- or text-based communication
that allows operators to interact with machines and systems intuitively
[54]. Thus, AI-enabled capability enhances operational efficiency and
simplifies training to facilitate better decision-making.
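
As a concrete illustration of the predictive maintenance capability in item 1c (and
the AI-driven analysis in item 2a), the following sketch trains an unsupervised
anomaly detector on readings gathered while a machine is known to be healthy. It
assumes the scikit-learn library; the feature names and values are hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical training data: [vibration_rms, bearing_temp_C] collected
# by IoT sensors while the machine is known to be healthy.
healthy = np.column_stack([
    rng.normal(0.5, 0.05, 500),   # vibration
    rng.normal(60.0, 2.0, 500),   # temperature
])

# Fit an unsupervised detector on healthy behaviour only.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings: one normal, one degraded (high vibration and heat).
readings = np.array([[0.52, 61.0],
                     [1.40, 85.0]])
for x, label in zip(readings, model.predict(readings)):
    status = "maintenance alert" if label == -1 else "ok"
    print(x, status)
```

Predictions of -1 mark readings that do not resemble the healthy baseline, which is
one simple way to trigger the maintenance alerts discussed above.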

The integration of IoT and AI technologies drives efficiency, enhances production
quality, optimizes resource utilization, enables predictive capabilities, and ultimately
leads to a more agile smart factory.

14.3 SUPPLY CHAIN INTEGRATION IN SMART FACTORIES


SCI is the coordination and collaboration among various stakeholders, processes,
and systems within a supply chain ecosystem. The primary focus of SCI is to achieve
a seamless flow of information, real-time visibility, and efficient decision-making.
This requires the integration of internal processes within an organization and the
integration of external partners say suppliers, manufacturers, distributors, and cus-
tomers. In smart factories, SCI becomes even more critical as it enables the syn-
chronization and optimization of production processes, inventory management, and
logistics operations.

14.3.1 Significance of Supply Chain Integration


The key significance of SCI is presented in Figure 14.2 and detailed next.

FIGURE 14.2 Significance of supply chain integration in smart factories.

a. Seamless flow of information: Real-time exchange of data and informa-
tion across the entire supply chain network is the key outcome of SCI. IoT
sensors, connected devices, and cyber-physical systems in the smart facto-
ries generate and transmit data throughout the production and supply chain
processes [55]. By integrating the data throughout the supply chain, smart
factories can achieve visibility into inventory levels, production status,
customer demands, and other critical information. Thus, a seamless flow
of information enables accurate demand forecasting and improved supply
chain decision-making [56].
b. Improved coordination and collaboration: SCI facilitates close collabora-
tion and coordination among stakeholders in the manufacturing and supply
chain ecosystem [57]. By sharing real-time data, insights, and plans, smart
factories can align their operations in line with the suppliers, customers,
and logistics partners [58]. Thus, a collaborative approach enables effec-
tive demand management, synchronized production scheduling, optimized
inventory management, and efficient logistics management. Reducing
delays and lead times and ensuring smooth coordination throughout the
supply chain result in increased operational efficiency and customer sat-
isfaction [59].
c. Enhanced responsiveness and agility: With effective SCI, smart factories
can quickly respond and adapt to changing market conditions, customer
demands, and even supply chain disruptions [60]. With real-time data
available, collaborative planning of operations in smart factories enables
dynamic adjustments in production schedules, inventory levels, and distri-
bution strategies [61]. This agility enables smart factories to efficiently fulfil
changing customer needs, minimize stock-outs, and mitigate risks [62].
d. Optimal resource utilization: SCI in smart factories facilitates optimal
utilization of resources, including raw materials, production capacity,
and transportation. Integrating data from suppliers, production systems,
and logistics allows smart factories to accurately forecast demand, adjust
production schedules, and optimize inventory levels accordingly [63, 64].
Thus, SCI leads to reduced waste, lower carrying costs, improved resource
allocation, and better utilization of production assets, enabling smart fac-
tories to achieve cost savings, improve operational efficiency, and even
enhance sustainability.
e. Continuous improvement and innovation: SCI in smart factories pro-
motes continuous improvement and innovation by sharing data and insights
with supply chain partners to collaboratively identify areas for improve-
ment, drive innovation, and develop new solutions [65]. The collaborative
approach hence enables the implementation of best practices, adoption
of emerging technologies, and exploration of new business opportunities
[66]. By leveraging SCI, smart factories can foster a culture of continu-
ous improvement to stay ahead in a competitive market and drive industry
innovation [67].

Hence, SCI plays an important role in enabling a seamless flow of information to
improve coordination and collaboration among stakeholders. Furthermore, SCI cre-
ates responsive and agile supply chains that operate with optimal resources and drive
continuous improvement with innovation for a competitive market.

14.3.2 Challenges in Facilitating Seamless SCI in Smart Factories


SCI from a manufacturing perspective is generally faced with several traditional
challenges that can hinder the smooth flow of information, coordination, and col-
laboration among stakeholders. Some of the challenges are listed below. The ASC
framework from the challenges perspective is presented in Figure 14.3.

a. Information silos: Siloed information within departments impedes the flow
of data, hinders real-time visibility, and creates inconsistencies, leading to
delays, errors, and inefficiency [67].
b. Traditional systems and infrastructure: Outdated software, hardware, com-
munication technologies, and lack of standardization and interoperability
lead to data incompatibility, hindering the smooth exchange of information
[68].
c. Data quality and accuracy: Incomplete, outdated, or inconsistent data from
traditional systems impact decision-making and planning processes in the
supply chain [69].
d. Resistance to change: Employees accustomed to established processes and
ways of working resist changes and hinder effective integration [67].
e. Lack of trust and collaboration: Establishing trust and fostering collabora-
tive relationships are major barriers to effective integration [70].
f. Security and privacy concerns: Ensuring robust cybersecurity, data encryp-
tion, access controls, and compliance with privacy regulations is a major
challenge to be addressed for successful integration [71].
g. Supplier and partner readiness: Traditional manufacturing environments
encounter challenges in aligning their systems, processes, and data-sharing
capabilities with integration initiatives [72].
FIGURE 14.3 SCI challenges.

IoT and AI technologies play a pivotal role in addressing these challenges and
facilitating seamless SCI in smart factories. IoT enables real-time data collection,
connectivity, and visibility. Real-time data capture and transmission enhance sup-
ply chain visibility and enable better tracking of products and services. Customer data on
purchase behaviours, usage patterns, and preferences enable accurate forecasting.
Similarly, AI processes and analyses the gathered data to derive valuable insights,
enable predictive capabilities, and enhance decision-making. Although SCI offers
these potential benefits, analysing the barriers and challenges to successful SCI
requires further research into developing holistic approaches and strategic road maps
for successful integration.
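
As a minimal sketch of the AI-driven demand forecasting mentioned above, the
following Python fragment fits a linear autoregressive model on lagged demand
values. The demand series and lag depth are illustrative assumptions, and real
deployments would use richer features and validation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical weekly demand history for one product.
demand = np.array([100, 104, 110, 108, 115, 121, 119, 126, 131, 129,
                   136, 142, 140, 147, 151], dtype=float)

LAGS = 3  # predict next week from the three preceding weeks

# Build a supervised dataset of (previous LAGS weeks -> next week).
X = np.array([demand[i:i + LAGS] for i in range(len(demand) - LAGS)])
y = demand[LAGS:]

model = LinearRegression().fit(X, y)

# One-step-ahead forecast from the most recent window.
next_week = model.predict(demand[-LAGS:].reshape(1, -1))[0]
print(f"forecast for next week: {next_week:.1f} units")
```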

14.4 ADAPTIVE SUPPLY CHAIN MANAGEMENT AND SMART FACTORIES

ASC management is a concept that emphasizes the ability of supply chains to proac-
tively respond to the dynamically changing market conditions, customer demands,
and supply chain disruptions [73]. In addition to agile practices, ASCs are integrated
with flexible practices enabled by advanced digital technologies and real-time data
analytics [74]. Hence, ASCs are a combination of digitally transformed manufactur-
ing systems enabled by IoT, AI, and automation. ASCs emphasize resilience and
responsiveness and can manage unforeseen market disruptions.

14.4.1 Concept of Adaptive Supply Chain Management


Ivanov et al. [75, p.411] define ASC as

a networked organization wherein a number of various enterprises (i) collaborate (coop-
erate and coordinate) along the entire value chain and product life cycle to: acquire
raw materials, convert these raw materials into specified final products, deliver these
final products to retailers, design new products and ensure post-production services;
(ii) apply all modern concepts and technologies to make supply chains responsive, flex-
ible, robust, sustainable, cost-effective, and competitive in order to increase customer
satisfaction and decrease costs, resulting in increasing supply chain profitability.

Thus, ASCs are complex multistructural systems that have the capability to adapt
to changing markets, operating environments, and internal changes in the supply
chain by leveraging information technology capabilities like IoT and AI. The
ASC strategy adopted from [75] is presented in Figure 14.4. The ASCs are driven by
three value chain drivers that consist of the main elements of traditional supply chain
management along with the elements of agility and sustainability.

FIGURE 14.4 ASCM strategy framework adopted from [75].

ASCs are highly relevant in Industry 4.0 and smart factories due to the current rap-
idly changing market landscape characterized by shorter product lifecycles, volatile
customer demands, and increased product customization [76]. Sensing and respond-
ing to the dynamic market conditions promptly involve leveraging real-time data,
predictive analytics, and agile practices to adjust production schedules, optimize
inventory levels, and align supply with demand, which are the core characteristics of
ASCs. The goal tree of ASCM presented in Figure 14.5 articulates the hierarchy of
goals and objectives within specific domains of the supply chain.
By leveraging the combination of AI and IoT technologies, smart factories can
benefit from ASCM. The AI-driven IoT systems in smart factories provide real-time
data insights, enable accurate demand forecasting, optimize supply chain opera-
tions, enhance risk management, and facilitate autonomous decision-making [77].
The result is a responsive and agile supply chain that can quickly adapt to chang-
ing conditions, optimize resources, and deliver superior operational performance.

FIGURE 14.5 ASCM goal tree adopted from [79].


AI technologies play a crucial role in enabling ASCI by providing advanced capa-
bilities for data analysis, prediction, optimization, and decision-making, facilitating
seamless integration across the supply chain [78]. AI-driven demand forecasting,
predictive analytics, and optimization techniques enable organizations to optimize
operations, enhance efficiency, and improve the overall supply chain performance.

14.4.2 Emerging Trends in Adaptive SCI


Emerging trends in ASCI for smart factories focus on leveraging advanced technologies, data-
driven insights, and collaborative approaches to enable agile, responsive, and effi-
cient supply chains. Some key emerging themes are presented below.

AI/ML: The integration of AI and ML techniques is a prominent trend in ASC
integration. AI algorithms analyse the data generated by IoT devices, sen-
sors, and other sources to uncover patterns, trends, and anomalies while ML
models predict demand patterns, optimize inventory levels, and enhance
decision-making in real time [80]. AI/ML hence enables predictive analyt-
ics, autonomous decision-making, and continuous optimization across the
supply chain.
Advanced analytics and big data: The use of advanced analytics and big data
in ASC integration is another emerging theme that has gained traction [77].
Leveraging sophisticated data analytics tools and techniques to extract
valuable insights from large and diverse datasets allows enhanced demand
forecasting, supply chain visibility, risk management, and decision support
[81]. Advanced analytics hence enable data-driven decision-making and
facilitate real-time monitoring and analysis of supply chain operations.
IoT and connectivity: IoT plays a vital role in ASC integration by providing real-
time data collection, connectivity, and visibility through connected devices,
sensors, and connected systems [82]. This enables continuous monitoring
and data capture from various points in the supply chain, facilitating real-
time decision-making and agility. For instance, Hasan in [83] analysed how
Industry 4.0 could transform conventional industries into smart industries
through IoTs in the context of effective biomedical waste disposal in health-
care facilities. Thus, seamless integration of IoT devices with supply chain
management systems enables end-to-end visibility, improved coordination,
and proactive responsiveness.
Blockchain technology: Blockchain technology is a theme that is increasingly
being explored by researchers in the field of ASC integration as block-
chains offer secure and transparent data sharing across the supply chain,
enabling traceability, provenance verification, and real-time updates. Smart
contracts on blockchain platforms help to automate and streamline sup-
ply chain processes, facilitating trust, efficiency, and seamless collabora-
tion among stakeholders [84]. Thus, blockchains potentially enhance supply
chain integrity, reduce fraud, and improve compliance (a toy hash-chain sketch is given below).
Collaborative networks and digital platforms: Collaboration among SC part-
ners through digital platforms and collaborative networks is an emerging
trend in ASC integration [57]. Smart factories integrate their systems with
suppliers, customers, logistics providers, and other stakeholders to share
real-time data, coordinate activities, and optimize operations while collab-
orative platforms enable end-to-end visibility, synchronized planning, and
efficient decision-making that enhance SCI.
Sustainability and circular economy: Research on sustainable practices and the
circular economy has delved into ASC integration [85]. Smart factories focus-
ing on reduced waste, optimized energy usage, and adopting eco-friendly
processes are emerging trends. Supply chains embrace closed-loop systems,
recycling, and product life extension strategies by adopting circular econ-
omy concepts. Sustainability considerations in supply chain planning
and decision-making enable greener operations, enhancing brand repu-
tation while meeting evolving environmental regulations.
Predictive maintenance and asset optimization: AI-driven predictive main-
tenance and asset optimization techniques supported by analytics and IoT
sensors enable real-time monitoring of equipment health, performance, and
maintenance needs [49]. Hence, predictive maintenance algorithms that anal-
yse data to predict potential failures, optimize maintenance schedules,
and minimize downtime are emerging trends.
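
To make the traceability property attributed to blockchains above tangible, the toy
Python sketch below builds an append-only hash chain of supply chain events. It is
not a real blockchain (there is no consensus mechanism and no smart contracts), and
the batch identifiers are hypothetical.

```python
import hashlib
import json
import time

def add_record(chain, event):
    """Append a tamper-evident record: each entry stores the SHA-256
    hash of the previous entry, so editing any record breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "ts": time.time(), "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash and check the backward links."""
    for i, rec in enumerate(chain):
        body = {k: rec[k] for k in ("event", "ts", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False
        if i > 0 and rec["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical provenance events for one batch of parts.
chain = []
add_record(chain, {"batch": "B-1042", "step": "raw material received"})
add_record(chain, {"batch": "B-1042", "step": "machined at cell 3"})
add_record(chain, {"batch": "B-1042", "step": "shipped to distributor"})
print("chain valid:", verify(chain))

chain[1]["event"]["step"] = "tampered"   # any edit invalidates the chain
print("after tampering:", verify(chain))
```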

These emerging trends in ASC integration for smart factories focus on enabling
organizations to build agile, resilient, and customer-centric supply chains. By lever-
aging the capabilities of blockchains, smart factories can enhance supply chain trans-
parency, traceability, security, and collaboration. Edge computing, which enables
real-time data processing, decision-making, and resilience at the edge, improves
operational efficiency, security, and predictive capabilities. The combination of
these technologies can drive ASC integration, enabling smart factories to build agile,
transparent, and efficient supply chains.
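
The edge computing point above can be illustrated with a minimal sketch in which a
burst of raw sensor readings is reduced locally to a compact summary, so that only
aggregates cross the network. The sample counts and statistics are illustrative
assumptions.

```python
import random
import statistics
import time

def edge_summarize(samples):
    """Reduce a burst of raw readings to a compact summary; the
    usual edge-computing trade-off of bandwidth against detail."""
    return {
        "n": len(samples),
        "mean": statistics.fmean(samples),
        "min": min(samples),
        "max": max(samples),
        "ts": time.time(),
    }

# Hypothetical: 100 vibration samples gathered locally in one second.
burst = [random.gauss(0.5, 0.05) for _ in range(100)]
summary = edge_summarize(burst)
print(summary)   # this small dict, not 100 raw floats, is sent upstream
```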

14.5 CONCLUSION
This chapter delved into the concept of ASCI in the context of smart factories within
the Industry 4.0 framework. Smart factories help the manufacturing industry by
leveraging digital technologies to create intelligent, interconnected, and adaptive
manufacturing ecosystems. Integrating cyber-physical systems, IoT devices, AI, and
data analytics into smart factories enables real-time monitoring, predictive capabili-
ties, and autonomous decision-making. As a result, smart factories can function with
increased productivity, improved quality, and enhanced sustainability. Furthermore,
SCI facilitates real-time visibility, accurate demand forecasting, planning, and effi-
cient inventory management. Hence, smart factories deliver improved efficiency at
lower costs. This integration also enables supply chains to rapidly respond to ever-
changing customer demands, market conditions, accommodate disruptions, and
ultimately enhance customer satisfaction and loyalty. Additionally, SCI also fosters
collaboration and innovation among partners that are focused towards optimizing
resource utilization and improving risk management capabilities. However, the
implementation of SCI in smart factories is also faced with several challenges such
as information silos, traditional systems and infrastructure, data quality and accu-
racy concerns, resistance to change, lack of trust and collaboration, and security and
privacy concerns. To overcome these challenges, IoT and AI technologies play a piv-
otal role by enabling real-time data collection, connectivity, and visibility. In addi-
tion, AI provides advanced capabilities for data analysis, prediction, optimization,
and decision-making. Furthermore, this chapter has also discussed the concept of
ASCM, emphasizing its ability to proactively respond to changing market conditions
and disruptions by integrating agile practices along with flexible practices supported
by advanced digital technologies and real-time data analytics. This chapter also
sheds light on the emerging trends in ASCI for smart factories which include AI/
ML, advanced analytics and big data, IoT and connectivity, blockchain technology,
collaborative networks and digital platforms, sustainability and circular economy,
and predictive maintenance and asset optimization. These trends mainly focus on
enabling agile, resilient, and customer-centric supply chains by leveraging advanced
technologies, data-driven insights, and collaborative approaches.

REFERENCES
1. Oztemel, E., & Gursev, S. (2020). Literature review of industry 4.0 and related tech-
nologies. Journal of Intelligent Manufacturing, 31, 127–182.
2. Dahmani, N., Benhida, K., Belhadi, A., Kamble, S., Elfezazi, S., & Jauhar, S. K. (2021).
Smart circular product design strategies towards eco-effective production systems: A
lean eco-design industry 4.0 framework. Journal of Cleaner Production, 320, 128847.
3. Rao, S. K., & Prasad, R. (2018). Impact of 5G technologies on industry 4.0. Wireless
Personal Communications, 100, 145–159.
4. Mantravadi, S., Møller, C., Chen, L. I., & Schnyder, R. (2022). Design choices for
next-generation IIoT-connected MES/MOM: An empirical study on smart factories.
Robotics and Computer-Integrated Manufacturing, 73, 102225.
5. Zarte, M., Pechmann, A., & Nunes, I. L. (2022). Knowledge framework for production
planning and controlling considering sustainability aspects in smart factories. Journal
of Cleaner Production, 363, 132283.
6. Morrar, R., Arman, H., & Mousa, S. (2017). The fourth industrial revolution (indus-
try 4.0): A social innovation perspective. Technology Innovation Management Review,
7(11), 12–20.
7. Pech, M., Vrchota, J., & Bednář, J. (2021). Predictive maintenance and intelligent sen-
sors in smart factory. Sensors, 21(4), 1470.
8. Ding, B., Ferras Hernandez, X., & Agell Jane, N. (2023). Combining lean and agile
manufacturing competitive advantages through Industry 4.0 technologies: An integra-
tive approach. Production Planning & Control, 34(5), 442–458.
9. Liu, Z., Sampaio, P., Pishchulov, G., Mehandjiev, N., Cisneros-Cabrera, S., Schirrmann,
A., Jiru, F., & Bnouhanna, N. (2022). The architectural design and implementation of
a digital platform for industry 4.0 SME collaboration. Computers in Industry, 138,
103623.
10. Maddikunta, P. K. R., Pham, Q.-V., Prabadevi, B., Deepa, N., Dev, K., Gadekallu, T. R.,
Ruby, R., & Liyanage, M. (2022). Industry 5.0: A survey on enabling technologies and
potential applications. Journal of Industrial Information Integration, 26, 100257.
11. Zhong, R. Y., Li, Z., Pang, L. Y., Pan, Y., Qu, T., & Huang, G. Q. (2013). RFID-enabled
real-time advanced planning and scheduling shell for production decision making.
International Journal of Computer Integrated Manufacturing, 26(7), 649–662.
12. Omar, I. A., Debe, M., Jayaraman, R., Salah, K., Omar, M., & Arshad, J. (2022).
Blockchain-based supply chain traceability for COVID-19 personal protective equip-
ment. Computers & Industrial Engineering, 167, 107995.
13. Oh, J., & Jeong, B. (2019). Tactical supply planning in smart manufacturing supply
chain. Robotics and Computer-Integrated Manufacturing, 55, 217–233.
14. Büchi, G., Cugno, M., & Castagnoli, R. (2020). Smart factory performance and indus-
try 4.0. Technological Forecasting and Social Change, 150, 119790.
15. Huang, G. Q., Zhang, Y. F., Chen, X., & Newman, S. T. (2008). RFID-enabled real-
time wireless manufacturing for adaptive assembly planning and control. Journal of
Intelligent Manufacturing, 19, 701–713.
16. Wang, T., Zhang, Y. F., & Zang, D. X. (2016). Real-time visibility and traceability frame-
work for discrete manufacturing shopfloor. In Proceedings of the 22nd International
Conference on Industrial Engineering and Engineering Management 2015: Core Theory
and Applications of Industrial Engineering (Volume 1) (pp. 763–772). Atlantis Press.
17. Soosay, C. A., Hyland, P. W., & Ferrer, M. (2008). Supply chain collaboration:
Capabilities for continuous innovation. Supply Chain Management: An International
Journal, 13(2), 160–169.
18. Leuschner, R., Rogers, D. S., & Charvet, F. F. (2013). A meta-analysis of supply chain
integration and firm performance. Journal of Supply Chain Management, 49(2), 34–57.
19. Zhang, Y., Qu, T., Ho, O., & Huang, G. Q. (2011). Real-time work-in-progress man-
agement for smart object-enabled ubiquitous shop-floor environment. International
Journal of Computer Integrated Manufacturing, 24(5), 431–445.
20. Shrouf, F., Ordieres, J., & Miragliotta, G. (2014, December). Smart factories in Industry
4.0: A review of the concept and of energy management approached in production
based on the Internet of Things paradigm. In 2014 IEEE International Conference on
Industrial Engineering and Engineering Management (pp. 697–701). IEEE.
21. Napoleone, A., Macchi, M., & Pozzetti, A. (2020). A review on the characteristics
of cyber-physical systems for the future smart factories. Journal of Manufacturing
Systems, 54, 305–335.
22. Jiang, J. R. (2018). An improved cyber-physical systems architecture for industry 4.0
smart factories. Advances in Mechanical Engineering, 10(6), 1687814018784192.
23. Kure, H. I., Islam, S., & Razzaque, M. A. (2018). An integrated cyber security risk
management approach for a cyber-physical system. Applied Sciences, 8(6), 898.
24. Soori, M., Arezoo, B., & Dastres, R. (2023). Internet of things for smart factories in
industry 4.0, a review. Internet of Things and Cyber-Physical Systems, 3, 192–204.
25. Ahmad, M., Ishtiaq, A., Habib, M. A., & Ahmed, S. H. (2019). A review of internet of
things (IoT) connectivity techniques. Recent Trends and Advances in Wireless and IoT-
Enabled Networks, 25–36.
26. Bana, A.-S., De Carvalho, E., Soret, B., Abrao, T., Marinello, J. C., Larsson, E. G., &
Popovski, P. (2019). Massive MIMO for internet of things (IoT) connectivity. Physical
Communication, 37, 100859.
27. Fan, C., Yan, D., Xiao, F., Li, A., An, J., & Kang, X. (2021). Advanced data analytics
for enhancing building performances: From data-driven to big data-driven approaches.
Building Simulation, 14, 3–24.
28. Wang, J., Ma, Y., Zhang, L., Gao, R. X., & Wu, D. (2018). Deep learning for smart
manufacturing: Methods and applications. Journal of Manufacturing Systems, 48,
144–156.
29. Kragic, D., Gustafson, J., Karaoguz, H., Jensfelt, P., & Krug, R. (2018). Interactive, col-
laborative robots: Challenges and opportunities. IJCAI, 18–25.
30. Evjemo, L. D., Gjerstad, T., Grøtli, E. I., & Sziebig, G. (2020). Trends in smart manu-
facturing: Role of humans and industrial robots in smart factories. Current Robotics
Reports, 1, 35–41.
31. Kousi, N., Koukas, S., Michalos, G., & Makris, S. (2019). Scheduling of smart intra-
factory material supply operations using mobile robots. International Journal of
Production Research, 57(3), 801–814.
32. Orellana, F., & Torres, R. (2019). From legacy-based factories to smart factories
level 2 according to the industry 4.0. International Journal of Computer Integrated
Manufacturing, 32(4–5), 441–451.
33. Yin, S., Rodriguez-Andina, J. J., & Jiang, Y. (2019). Real-time monitoring and control
of industrial cyberphysical systems: With integrated plant-wide monitoring and control
framework. IEEE Industrial Electronics Magazine, 13(4), 38–47.
34. Chung, S. H., Byrd, T. A., Lewis, B. R., & Ford, F. N. (2005). An empirical study of the
relationships between IT infrastructure flexibility, mass customization, and business
performance. ACM SIGMIS Database: The DATABASE for Advances in Information
Systems, 36(3), 26–44.
35. Wang, S., Wan, J., Li, D., & Zhang, C. (2016). Implementing smart factory of Industrie
4.0: An outlook. International Journal of Distributed Sensor Networks, 12(1), 3159805.
36. Ceccato, V., & Lukyte, N. (2011). Safety and sustainability in a city in transition: The
case of Vilnius, Lithuania. Cities, 28(1), 83–94.
37. Wang, L. (2015). Collaborative robot monitoring and control for enhanced sustainabil-
ity. The International Journal of Advanced Manufacturing Technology, 81, 1433–1445.
38. Wiktorsson, M., Do Noh, S., Bellgran, M., & Hanson, L. (2018). Smart factories: South
Korean and Swedish examples on manufacturing settings. Procedia Manufacturing,
25, 471–478.
39. Buckler, B. (1996). A learning process model to achieve continuous improvement and
innovation. The Learning Organization, 3(3), 31–39.
40. Aryal, A., Liao, Y., Nattuthurai, P., & Li, B. (2020). The emerging big data analytics
and IoT in supply chain management: A systematic review. Supply Chain Management:
An International Journal, 25(2), 141–156.
41. Plageras, A. P., Psannis, K. E., Stergiou, C., Wang, H., & Gupta, B. B. (2018). Efficient
IoT-based sensor BIG data collection–processing and analysis in smart buildings.
Future Generation Computer Systems, 82, 349–357.
42. Tao, H., Bhuiyan, M. Z. A., Abdalla, A. N., Hassan, M. M., Zain, J. M., & Hayajneh, T.
(2018). Secured data collection with hardware-based ciphers for IoT-based healthcare.
IEEE Internet of Things Journal, 6(1), 410–420.
43. Vijayaraghavan, A., Sobel, W., Fox, A., Dornfeld, D., & Warndorf, P. (2008). Improving
machine tool interoperability using standardized interface protocols: MT connect.
44. Qiu, X., Luo, H., Xu, G., Zhong, R., & Huang, G. Q. (2015). Physical assets and service
sharing for IoT-enabled supply hub in industrial park (SHIP). International Journal of
Production Economics, 159, 4–15.
45. Liu, K., & Shi, J. (2015). IoT-enabled system informatics for service decision making.
IEEE Intelligent Systems, 30(6), 18–21.
46. Shah, S. A., Seker, D. Z., Hameed, S., & Draheim, D. (2019). The rising role of big data
analytics and IoT in disaster management: Recent advances, taxonomy and prospects.
IEEE Access, 7, 54595–54614.
47. Agrawal, M., Eloot, K., Mancini, M., & Patel, A. (2020). Industry 4.0: Reimagining
manufacturing operations after COVID-19. McKinsey & Company, 1–11.
48. Chugh, G., Kumar, S., & Singh, N. (2021). Survey on machine learning and deep learn-
ing applications in breast cancer diagnosis. Cognitive Computation, 1–20.
49. Corallo, A., Crespino, A. M., Lazoi, M., & Lezzi, M. (2022). Model-based big data
analytics-as-a-service framework in smart manufacturing: A case study. Robotics and
Computer-Integrated Manufacturing, 76, 102331.
50. Sahoo, S., & Lo, C. Y. (2022). Smart manufacturing powered by recent technological
advancements: A review. Journal of Manufacturing Systems, 64, 236–250.
51. Javaid, M., Haleem, A., Khan, I. H., & Suman, R. (2023). Understanding the potential
applications of artificial intelligence in agriculture sector. Advanced Agrochem, 2(1),
15–30.
52. Goel, R., & Gupta, P. (2020). Robotics and industry 4.0. In A roadmap to industry 4.0:
Smart production, sharp business and sustainable development (pp. 157–169). Springer,
Cham. https://doi.org/10.1007/978-3-030-14544-6_9
53. Ahmed, I., Jeon, G., & Piccialli, F. (2022). From artificial intelligence to explain-
able artificial intelligence in industry 4.0: A survey on what, how, and where. IEEE
Transactions on Industrial Informatics, 18(8), 5031–5042.
54. Varaprasad, R., & Mahalaxmi, G. (2022). Applications and techniques of natural lan-
guage processing: An overview. IUP Journal of Computer Sciences, 16(3), 7–21.
55. Meneguette, R. I., Bittencourt, L. F., & Madeira, E. R. M. (2013). A seamless flow
mobility management architecture for vehicular communication networks. Journal of
Communications and Networks, 15(2), 207–216.
56. Simatupang, T. M., Wright, A. C., & Sridharan, R. (2002). The knowledge of coor-
dination for supply chain integration. Business Process Management Journal, 8(3),
289–308.
57. Herczeg, G., Akkerman, R., & Hauschild, M. Z. (2018). Supply chain collaboration in
industrial symbiosis networks. Journal of Cleaner Production, 171, 1058–1067.
58. McNamara, M. (2012). Starting to untangle the web of cooperation, coordination,
and collaboration: A framework for public managers. International Journal of Public
Administration, 35(6), 389–401.
59. Peng, D. X., & Lu, G. (2017). Exploring the impact of delivery performance on cus-
tomer transaction volume and unit price: Evidence from an assembly manufacturing
supply chain. Production and Operations Management, 26(5), 880–902.
60. Lu, Y., Morris, K. C., & Frechette, S. (2016). Current standards landscape for smart
manufacturing systems. National Institute of Standards and Technology, NISTIR,
8107(3), 1–35.
61. Zhou, T., Tang, D., Zhu, H., & Wang, L. (2020). Reinforcement learning with compos-
ite rewards for production scheduling in a smart factory. IEEE Access, 9, 752–766.
62. Brusset, X. (2016). Does supply chain visibility enhance agility? International Journal
of Production Economics, 171, 46–59.
63. Nanjappan, M., Natesan, G., & Krishnadoss, P. (2021). An adaptive neuro-fuzzy infer-
ence system and black widow optimization approach for optimal resource utilization
and task scheduling in a cloud environment. Wireless Personal Communications,
121(3), 1891–1916.
64. Zhou, B., Hu, H., Huang, S.-Q., & Chen, H.-H. (2013). Intracluster device-to-device
relay algorithm with optimal resource utilization. IEEE Transactions on Vehicular
Technology, 62(5), 2315–2326.
65. Radziwon, A., Bilberg, A., Bogers, M., & Madsen, E. S. (2014). The smart factory:
Exploring adaptive and flexible manufacturing solutions. Procedia Engineering, 69,
1184–1190.
66. Ashurst, C., Freer, A., Ekdahl, J., & Gibbons, C. (2012). Exploring IT-enabled innovation:
A new paradigm? International Journal of Information Management, 32(4), 326–336.
67. Sjödin, D. R., Parida, V., Leksell, M., & Petrovic, A. (2018). Smart factory implementa-
tion and process innovation: A preliminary maturity model for leveraging digitaliza-
tion in manufacturing moving to smart factories presents specific challenges that can
be addressed through a structured approach focused on people, processes, and tech-
nologies. Research-Technology Management, 61(5), 22–31.
68. O’Donovan, P., Bruton, K., & O’Sullivan, D. T. (2016). Case study: The implementation
of a data-driven industrial analytics methodology and platform for smart manufactur-
ing. International Journal of Prognostics and Health Management, 7(3), 1–22.
69. Illa, P. K., & Padhi, N. (2018). Practical guide to smart factory transition using IoT, big
data and edge analytics. IEEE Access, 6, 55162–55170.
70. Cooke, P. (2021). Image and reality: “Digital twins” in smart factory automotive pro-
cess innovation – Critical issues. Regional Studies, 55(10–11), 1630–1641.
71. Ghobakhloo, M. (2020). Determinants of information and digital technology imple-
mentation for smart manufacturing. International Journal of Production Research,
58(8), 2384–2405.
72. Davis, J., Edgar, T., Porter, J., Bernaden, J., & Sarli, M. (2012). Smart manufactur-
ing, manufacturing intelligence and demand-dynamic performance. Computers &
Chemical Engineering, 47, 145–156.
73. Ivanov, D., Dolgui, A., & Sokolov, B. (2012). Applicability of optimal control theory
to adaptive supply chain planning and scheduling. Annual Reviews in Control, 36(1),
73–84.
74. Urciuoli, L., & Hintsa, J. (2017). Adapting supply chain management strategies to secu-
rity–an analysis of existing gaps and recommendations for improvement. International
Journal of Logistics Research and Applications, 20(3), 276–295.
75. Ivanov, D., Sokolov, B., & Kaeschel, J. (2010). A multi-structural framework for adap-
tive supply chain planning and operations control with structure dynamics consider-
ations. European Journal of Operational Research, 200(2), 409–420.
76. Malhotra, A., Gosain, S., & El Sawy, O. A. (2007). Leveraging standard electronic
business interfaces to enable adaptive supply chain partnerships. Information Systems
Research, 18(3), 260–279.
77. Dolgui, A., & Ivanov, D. (2022). 5G in digital supply chain and operations manage-
ment: Fostering flexibility, end-to-end connectivity and real-time visibility through
internet-of-everything. International Journal of Production Research, 60(2), 442–451.
78. Modgil, S., Singh, R. K., & Hannibal, C. (2022). Artificial intelligence for supply
chain resilience: Learning from Covid-19. The International Journal of Logistics
Management, 33(4), 1246–1268.
79. Ivanov, D. (2018). New drivers for supply chain structural dynamics and resilience:
Sustainability, industry 4.0, self-adaptation. In: Structural dynamics and resilience in
supply chain risk management, 293–313. International Series in Operations Research
& Management Science, vol 265. Springer, Cham. https://doi.org/10.1007/978-3-319-69305-7_10
80. Sharp, M., Ak, R., & Hedberg, T. Jr (2018). A survey of the advancing use and develop-
ment of machine learning in smart manufacturing. Journal of Manufacturing Systems,
48, 170–179.
81. Tiwari, S., Wee, H. M., & Daryanto, Y. (2018). Big data analytics in supply chain
management between 2010 and 2016: Insights to industries. Computers & Industrial
Engineering, 115, 319–330.
82. Rana, A., Rawat, A. S., Afifi, A., Singh, R., Rashid, M., Gehlot, A., & Alshamrani, S. S.
(2022). A long-range Internet of Things-based advanced vehicle pollution monitoring
system with node authentication and blockchain. Applied Sciences, 12(15), 7547.
83. Hasan, M. A., Raghuveer, K., Pandey, P. S., Kumar, A., Bora, A., Jose, D., … &
Khanapurkar, M. M. (2021). Internet of Things and its application in industry 4.0 for
smart waste management. Journal of Environmental Protection and Ecology, 22(6),
2368–2378.
84. Bhaskar, P., Tiwari, C. K., & Joshi, A. (2020). Blockchain in education management:
Present and future applications. Interactive Technology and Smart Education, 18(1),
1–17. https://doi.org/10.1108/ITSE-07-2020-0102
85. Cioffi, R., Travaglioni, M., Piscitelli, G., Petrillo, A., & Parmentola, A. (2020). Smart
manufacturing systems and applied industrial technologies for a sustainable industry:
A systematic literature review. Applied Sciences, 10(8), 2897.
15 Implementation of
Intelligent CPS
for Integrating
the Industry and
Manufacturing Process
T. Rajasanthosh Kumar, Mahesh. M. Kawade,
Gaurav Kumar Bharti, and Laxmaiah G.

15.1 INTRODUCTION
Today's marketplaces are being reshaped by revolutionary technologies, increasing
the pressure on the sector to become more agile and adaptive. As the global market
becomes more competitive, businesses must significantly adjust their production strategies,
technology, and management to maintain profitability [1–3]. Automation, digitali-
zation, artificial intelligence (AI), powerful computers, advanced sensors and data
acquisition systems, intelligent robots, information and communication technology
(ICT), the Internet of Things (IoT), and big data, along with cloud computing,
have all helped many different industries increase their performance and
productivity [4, 5]. Industry 4.0, also known as the Fourth Industrial Revolution,
resulted from the shift from traditional production to intelligent manufacturing
through digitalization and other forms of cutting-edge technology. The term origi-
nated in Germany and refers to cutting-edge production methods that bring digital
technology to traditional industries through the IoT [6, 7]. The efficient implementa-
tion of key enabling technologies (KETs) is a cornerstone of Industry 4.0’s revolution
and crucial to its success [8]. The most widely accepted tenets of Industry 4.0 are
shown in Figure 15.1.

FIGURE 15.1 Accepted tenets.

Cyber-physical systems (CPSs), the IoT, big data, AI, cloud-based computing,
robots, and the smart factory will be discussed in more detail below.

15.1.1 Cyber-Physical Systems
Aiming to bridge the gap between the digital and physical worlds via the integra-
tion of computing, networking, and physical assets, CPSs blur the lines between the
two. It is generally accepted that CPS refers to the combination of hardware (e.g.,
actuators, sensors, and other components in robotics) and cybernetic software (com-
munication, networking, and the Internet), albeit the precise meaning of this term
may vary depending on the reader’s viewpoint and background [9, 10]. The success
of Industry 4.0, of which CPS is a fundamental component, relies on intelligent
oversight of linked systems spanning human elements and computational capabili-
ties, modernizing both domains using cutting-edge technology [11, 12]. Since its
inception, CPSs have been at the forefront of academic study and practical use in
industries around the globe. In the manufacturing business, the convergence of new
developments in computer science and information and communication technologies
with manufacturing science and technology is referred to as cyber-physical produc-
tion systems (CPPS) [13, 14]. The automobile, healthcare, and energy sectors are just
a few that have used CPPSs. The references within provide the following:

• Comprehensive analysis of why smart manufacturing’s long-term viability
must be investigated further using CPPS [15].
• Accomplishing a complete life cycle [16].
• Enhancing transportation, design, performance, and maintenance.

15.1.2 Internet of Things
According to the International Telecommunications Union (ITU), the IoT is a world-
wide framework that facilitates the information society and enables the provision
of advanced services through the interconnection of physical and virtual entities.
This interconnection is established using both existing and evolving interoperable
information and communication technologies. “Internet of Things” refers to the
network infrastructure that facilitates the interconnection of various devices and
assets [17, 18]. The term “Internet of Things” has been widely utilized across vari-
ous domains, adopting corresponding field-specific names such as the Industrial
IoT and the Internet of Services [19, 20]. IoT-aided robots, also known as the Internet
of Robotic Things, are innovative ideas made possible by merging robots and IoT
technology. Every part of modern industrial systems is integrated with hardware
and linked to the Internet; this includes actuators, sensors, and embedded software
[21, 22]. Through the IoT, manufacturing equipment may communicate and share
information with other manufacturing devices, service providers, and con-
sumers [23]. Many researchers have recently focused on the industrial IoT (IIoT) and
its potential benefits in many sectors [24, 25]. Research into combining an IoT
platform with CPSs for intelligent electronics production and additive manufacturing
(AM) in digital factories and smart manufacturing has uncovered, and contributed to
resolving, issues with procedures and standardization, structures and communication,
and conceptual and ontological concerns for Industry 4.0 [26, 27].
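
As a small illustration of how an IIoT device of this kind might publish telemetry,
the sketch below uses the third-party paho-mqtt package. The broker address, topic
layout, and payload fields are hypothetical assumptions, not details from the works
cited above.

```python
import json
import random
import time

import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt

BROKER = "broker.example.local"          # hypothetical plant-floor broker
TOPIC = "factory/line1/drill/telemetry"  # hypothetical topic layout

client = mqtt.Client()  # paho-mqtt 1.x-style constructor
client.connect(BROKER, 1883)
client.loop_start()

# Publish a few simulated sensor samples, as an IIoT device would.
for _ in range(5):
    sample = {
        "spindle_rpm": 3000 + random.uniform(-25, 25),
        "temp_c": 45 + random.uniform(-1, 1),
        "ts": time.time(),
    }
    client.publish(TOPIC, json.dumps(sample), qos=1)
    time.sleep(1)

client.loop_stop()
client.disconnect()
```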

15.1.3 Analytics of Big Data


Big data analytics can give worldwide feedback and a high level of coordination
within the context of Industry 4.0; both are essential for effective production to func-
tion at a high level [28, 29]. Sensors, motors, network activity, and log files are just
a few examples of the sources and channels that can be mined for valuable informa-
tion that can be used for dynamic system reconfiguration and optimization and for
performing direct supervision and control tasks, giving businesses a much-needed
boost. The development of intelligent analytical models and algorithms, made pos-
sible by combining big data analysis and mining technologies, contributes to the
success of intelligent manufacturing [30]. Industry 4.0’s virtual twin represents one
of the most visible applications of big data. Predictive and proactive manufacturing,
driven by big data, is another area where it has had an enormous impact.

15.1.4 Cloud Computing
The cloud can serve as a massive infrastructure on which a company stores, processes,
and analyses information and runs services that optimize its manufacturing process
chain, boost manufacturing efficiency, and reduce costs, because it provides a hub for
data transfer and allows storage, processing, and analysis assets to be shared. Cloud
manufacturing (CMfg) is the term for cloud
computing’s use in manufacturing. CMfg’s capacity to include and use cutting-edge
technologies like cloud computing, big data, the CPS, and the IoT, has led to its
rapid rise and widespread use. Consequently, it allows for the simple administration
and virtualization of big data, interfaces, and production resources. The IoT enables
real-time equipment status monitoring in CMfg environments, RFID-enabled
shop-floor logistics, and big data visualization. Because of its flexibility, scalability,
multi-tenancy, innovative decision-making instruments, and automated on-demand
production, CMfg has found widespread use in applications such as tracking
procedures and controlling factory control loops. Additional information and
examples of CMfg's use may be found in the literature.

15.1.5 Artificial Intelligence
Increasing degrees of connections, nonlinear behaviour, unpredictability, and data
volume have all arisen due to Industry 4.0, making today’s industrial settings more
complicated than ever. As AI develops and machine learning/deep learning-based
approaches become more widely used across industries, they will be in a prime posi-
tion to take on a central role in bringing Industry 4.0 to fruition. Industry 4.0 relies
heavily on AI, leading to commercial AI via the development, testing, and deploy-
ment of several AI algorithms for industrial applications. Industry 4.0 benefits from
AI-based techniques because of the enhanced agility, efficiency, and sustainability
that come from intelligent equipment doing tasks that include monitoring them-
selves, analysis and diagnosis, and analysis without human intervention. Several
recent industrial applications have explored the best ways to integrate AI methods
and ideas within the framework of Industry 4.0. Intelligent systems' data analysis
and real-time supervisory duties are one such application; this work provides an
extensive framework and a comprehensive overall strategy for AI in the sector,
illustrated with a specific use case, outlining the steps necessary to implement
adaptable, pluggable data analysis and real-time monitoring systems in industrial
environments.

15.1.6 Robotic Technology
Robotics technology is essential to the success of Industry 4.0. Commercial manipu-
lators, mobile robots, and collaborative robots are a few instances of robotics tech-
nology employed in manufacturing processes and Industry 4.0 applications. Modern
technical advancements in robotics have increased smart factories’ efficiency, flex-
ibility, versatility, reconfigurability, and safety, making industrial robots the central
technology in automation and industry. Industry 4.0 settings now include cooperative
automated vehicles for more flexibility and intelligence in manufacturing; this is just
one example of the many industrial uses of state-of-the-art robotics, such as developing
intelligent manufacturing cells from electronic factory processes.
The Internet of Robotic Things (IoRT) is a new field that emerged from the convergence
of robotics and IoT technology. Integrating robots with IoT will boost the capabilities
of both current IoT and robotics infrastructures. Autonomous robots and portable
machines, with a wide range of potential uses, would be made possible by advances in
robotics, paving the way for the introduction of Industry 4.0. More research and case
studies have been provided on robots’ applications in Industry 4.0.

15.1.7 The Intelligent Factory


Industry 4.0, which seeks to achieve brilliant production via connected manufactur-
ing facilities and vertical integration of the production processes, relies heavily on
Implementation of Intelligent CPS 277

the smart factory. With an eye on Industry 4.0, the intelligent factory strategically
uses technologies like robots, automation, embedded technology, and information
networks. Many new technologies, including CPS, the IoT, big data, cloud comput-
ing, and AI, are aiding in the shift from traditional to intelligent manufacturing,
which will ultimately result in automated production and digital twin systems. The
intelligent factory uses modern technology and production models to enable fully
autonomous production processes that attain peak performance via ongoing self-
tuning, situational adaptation, and experience learning. The term “smart factory”
has been explored in the academic literature from various perspectives. In this
literature, the fundamentals of intelligent factory design are discussed, including
the technology that makes it possible, the difficulties it faces, and sample
applications. An extensive literature review on intelligent factories was done to
summarize current research accomplishments and developments in this area, as well
as possible prospects. Another in-depth literature review based on a multi-stage methodology
and selected criteria has been conducted to establish background information and
assess emerging theories and practices related to Industry 4.0, which includes smart
manufacturing.
Figure 15.2 depicts how the four tiers of the smart factory are interconnected
through a hierarchical pyramid of intelligent automation. Unlike the traditional
pyramid, this one allows for modern computers, cloud storage, networking, and
intelligent devices to provide two-way knowledge interaction between numerous
components of the industrial process's hierarchy.

FIGURE 15.2 Smart factory architecture.

Smart factories have been implemented across several industries according to the
Industry 4.0 framework. Under Industry 4.0, for example, the pharmaceutical industry's
process for medication packaging has been integrated into an intelligent factory.
The agents in this smart factory can self-organize thanks to a dialogue mechanism
powered by AI. Additionally, a CPS-based cutting-edge plant using digital twin
technology is being used to produce circuit breakers. Also, AM has started using
intelligent factories. In addition, real-time optimization under the Industry 4.0
paradigm has turned a conventional chemical plant into an intelligent plant.
Finally, IIoT has been used to transform the tannery sector from a traditional factory
into an intelligent factory. Design, machining, monitoring, and other processes are
some of the many areas that might benefit from Industry 4.0’s proposed architecture
for intelligent production systems. According to another work that focuses on
intelligent factories and unique machining processes, the leading construction and
its process elements are used as feedback data, and the utility's outputs help with
decision-making and machining process planning. In the linked article, several
different uses for smart factories are discussed.

15.1.8 Industry 5.0
For the last decade, “Industry 4.0” has been the most often used phrase when dis-
cussing business transformation in the manufacturing sector. It has had a significant
influence on social and economic advancement and is now a widely held belief sys-
tem across the world. Intelligent and self-sufficient manufacturing processes are a
hallmark of this kind of economy. Recently, in response to concerns that such automation-centred models sideline the human worker, a concept known as Industry 5.0 has been proposed, which places the worker at the core of digital transformation rather than automation. The five main topics of Industry 5.0 identified in the published literature are supply chain assessment and optimization; business innovation and digitalization; intelligent and sustainable manufacturing; the IoT, AI, and big data; and human-machine interaction. Many academics see Industry 5.0 as a doorway to the peaceful cohabitation of humans and machines. The end goal
is a collaborating industrial system with holistic human-centred growth, long-term
dedication to environmental protection, and a dependable source of resilience, all
achieved via reimagining the interaction between people and intelligent components.
This chapter overviews the technologies necessary to create an intelligent factory,
such as those used in manufacturing, computing, and communication. An advanced
CPS conforming to the one-of-a-kind, cutting-edge manufacturing framework for
Industry 4.0 is mapped out. It describes the potential interplay between the intel-
ligent factory’s core components to create an automated, data-driven manufacturing
environment. A simple smart factory concept is demonstrated through a real-life examination of smart manufacturing involving a drilling operation.

15.2 TECHNIQUES AND SYSTEM DESIGN


This section describes the hardware components of the digital production platform, including a robot arm, a Siemens S7-1200 PLC, web cameras, sensors, a cutting unit, and a personal computer equipped with various communication technologies (see Figure 15.3), which will be covered in a later section. Smart
manufacturing in this context refers to moving the workpiece to a specific slot, where
the controller (PLC) and the web camera can detect its arrival and begin the machin-
ing (drilling) operation. The robot’s gripping point coordinates are determined by
the machine vision system’s calculations of the workpiece’s location and orientation.

FIGURE 15.3 Physical assets of the part of the digital manufacturing platform.

The pick-and-place robot receives the coordinates of the slot in the drilling unit, grabs the part, and deposits it there. Before the drilling process begins, a proximity sensor confirms that the part has arrived at the table. When the drilling is done, a limit switch is activated and the robot moves the piece to a holding location. The process restarts once a piece has returned to its starting location.
The sensors are linked to the PLC, and information is sent from the robot and the PLC to the cloud using the LabVIEW application programming interface (API). The following sections provide a more in-depth explanation of the system's framework.

15.2.1 Mechanism of Construction
This project included the development of a vertical drill to showcase a miniature
manufacturing process, with the robot being instructed to retrieve the sample from a
predetermined slot and deliver it to the platform for drilling. A drill is a device used
to create holes in various materials. As seen in Figure 15.4, it was suggested that the
drilling platform be modelled in CAD software. Components of the drilling system
include the following:

1. A base, supporting framework, posts, lead screw, and bearings make up the mechanical components of the drilling machine.
2. The power distribution system includes a stepper motor and the components that provide power to the motor and regulate its operation.
3. The drill’s tip is moved to the desired location, and the desired depth of cut
is determined by a mechanism under the operator’s control.

FIGURE 15.4 Drilling unit.

A grasping mechanism was added to secure the workpiece and keep the drilling
unit steady, as illustrated in Figure 15.4. The workpiece may be secured from both
sides by mounting the hydraulic pistons and steel gripper (clamp) to the table.

15.2.2 Manage the Process


An Arduino microcontroller was used to activate the mechanism sequentially in
communication with the PLC. Three sensors are connected to the PLC: two proximity sensors detect the arrival of the workpiece at the first stage of the procedure and at the drilling unit, respectively, while the third sensor, a limit switch, is engaged by the linear upward movement of the drill. The PLC relays the signal from the second proximity sensor to the Arduino.

15.2.3 IoT Platform Setup


The IBM Watson IoT platform serves as an intermediary between the many components of an IoT system, including sensors, gateways, and software. Applications, devices, gateways, event processing, and administrative tasks may use this framework's REST and MQTT protocols. This system connects devices in the field with
cloud-based user apps or IBM’s data analytics services. The platform’s user-friendli-
ness and the simplicity of the MQTT protocol made it ideal for this use case. Devices
may be added and maintained under the devices tab, as illustrated in Figure 15.5.
A unique identifier, category, and authentication token must be set up for each con-
nected device before it can communicate with the IoT platform. The characteristics
of external devices determine how they are linked to the platform.

FIGURE 15.5 IBM Watson menu.

Under the Boards menu, users may generate customizable visualization cards to
examine data in real time. Individual API keys may be generated for any user app
that wants to link to the IoT platform to either receive information from devices or
send out commands to devices. Information about the MQTT client connection uti-
lized in this study is as follows:

• Contact email: [email protected]
• Different messaging domains
• QoS: at most once (QoS0); MQTT client type: device; device verification: token authentication; connection: non-secure MQTT, port number 1883.

A topic and an event describing the data are published to the platform at the same time.
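As a rough illustration of this device-side connection, the following Python sketch uses the paho-mqtt library to authenticate with a token and publish one event at QoS0 over the non-secure port 1883. The organization ID, device type, device ID, and token shown are placeholders, not values from this implementation.

import json
import paho.mqtt.client as mqtt

ORG = "myorg"            # hypothetical Watson IoT organization ID
DEVICE_TYPE = "plc"      # hypothetical device type
DEVICE_ID = "s7-1200"    # hypothetical device ID
TOKEN = "my-auth-token"  # hypothetical authentication token

# paho-mqtt 1.x style constructor; device client ID format is d:<org>:<type>:<id>
client = mqtt.Client(client_id=f"d:{ORG}:{DEVICE_TYPE}:{DEVICE_ID}")
client.username_pw_set("use-token-auth", TOKEN)  # token authentication
client.connect(f"{ORG}.messaging.internetofthings.ibmcloud.com", 1883)

# Publish one sensor reading as a device event at QoS 0 ("at most once")
payload = json.dumps({"d": {"temperature": 36.4, "gsr": 2.1}})
client.publish("iot-2/evt/status/fmt/json", payload, qos=0)
client.disconnect()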

15.2.4 Interface and Communication


Various approaches for system extension were explored during the implementation
process in the context of CPSs to guarantee hardware compatibility with Industry
4.0. One such approach involved the integration of an embedded system board, which
combined the S7-1200 PLC and the KUKA KRC4 robot controller using the LabVIEW API. This integration
facilitated bidirectional communication between manufacturing hardware and cloud
devices, enabling interactions between the two. Figure 15.6 depicts the interaction
of the system.
Integrating the PLC's S7 protocol with the KRC4's external applications presents a significant technical hurdle, which is why the network was designed in the first place. Neither KUKA nor Siemens has publicly disclosed a standard protocol for use by third parties. A library that can access and modify PLC data blocks, KRC4 variables, and data structures can be developed by examining the protocol used in machine-to-machine conversations with a tool like Wireshark.

15.2.5 PLC Model


The Siemens S7-1200 was chosen for this implementation, and TCP functional com-
ponents in Siemens’ STEP 7 programming environment were used as a starting
point for establishing a connection with the S7-1200. These function blocks are used in ladder programmes to establish TCP/IP connections to other Ethernet devices. Within this architecture, Siemens employs the S7 protocol for communication between PLCs and for access to PLCs through SCADA. The TCP/IP standard is met by implementing the block-oriented ISO-on-TCP protocol. Because the S7 system is command- and function-oriented, all interactions are commands or responses to commands; data reads and writes are performed with the relevant commands. Using a TCP socket in the ladder programmes is unnecessary for this interfacing choice. C, C++, and Python all have wrappers for Snap7, and a LabVIEW VI library can read and write the S7 protocol. However, this library was adapted with specific tweaks to help achieve the process's ultimate goal.
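To make this interfacing choice concrete, here is a minimal sketch using the python-snap7 wrapper mentioned above to read and write an S7 data block. The PLC address, rack/slot numbers, data block number, and bit layout are illustrative assumptions, not the values used in this system.

import snap7
from snap7.util import get_bool, set_bool

plc = snap7.client.Client()
plc.connect("192.168.0.10", 0, 1)    # hypothetical IP address, rack 0, slot 1

DB_NUMBER = 1                        # hypothetical data block holding process flags
data = plc.db_read(DB_NUMBER, 0, 4)  # read 4 bytes starting at offset 0

# Assumed layout: byte 0, bit 0 = workpiece-arrived flag; bit 1 = start-drill command
part_arrived = get_bool(data, 0, 0)
print("Workpiece at drilling table:", part_arrived)

set_bool(data, 0, 1, True)
plc.db_write(DB_NUMBER, 0, data)     # write the modified bytes back to the PLC
plc.disconnect()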

FIGURE 15.6 Interaction of the system.



15.2.6 LabVIEW API


The LabVIEW API uses the traditional producer/consumer design paradigm. This design style aims to avoid data loss when processing data with varying production and consumption rates. The JOpenShowVar API, for instance, has "an unpredictable duration with a mean of 5 ms" for transferring data. Depending on the connection speed, transferring information to the connected device might take anywhere from a few hundred milliseconds to a couple of seconds. A minimal code sketch of this pattern follows the state list below.
Figure 15.7 depicts an eight-state machine representing the producer loop.

• Initialize sets up the TCP client sockets.
• Test_Internet checks internet connectivity and reliability.
• Connect_IoT links the devices to IBM's Watson IoT hub.
• Check_Status ensures that the programme has exited successfully.
• From_PLC retrieves the data block from the S7-1200 and enqueues the read variables.
• From_Kuka performs a read from the KRC and enqueues the read variables.
• To_Kuka sends a message to the KRC.
• The final state terminates the current programme.
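The sketch below re-expresses this producer/consumer pattern in Python with a thread-safe queue. The read functions are placeholders standing in for the From_PLC and From_Kuka states, and the timings are illustrative.

import queue
import threading
import time

data_queue: "queue.Queue[dict]" = queue.Queue()

def read_plc() -> dict:
    # Placeholder standing in for the From_PLC state (e.g., a Snap7 db_read)
    return {"source": "plc", "ts": time.time()}

def read_kuka() -> dict:
    # Placeholder standing in for the From_Kuka state (a KRC4 variable read)
    return {"source": "krc4", "ts": time.time()}

def producer(iterations: int) -> None:
    for _ in range(iterations):
        data_queue.put(read_plc())   # enqueue PLC variables
        data_queue.put(read_kuka())  # enqueue robot variables
        time.sleep(0.1)              # roughly the 100 ms loop of Table 15.1
    data_queue.put(None)             # sentinel: tells the consumer to stop

def consumer() -> None:
    # Dequeues at its own pace, so slow cloud uploads do not lose samples
    while (item := data_queue.get()) is not None:
        print("publish to cloud:", item)  # e.g., an MQTT publish

threading.Thread(target=producer, args=(5,)).start()
consumer()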

FIGURE 15.7 Producer loop.



15.3 DISCUSSION AND APPLICATION


Integration of a KUKA robot and Siemens S7 PLC is shown during a drilling opera-
tion in an intelligent industrial application. To allow communication between the two, a TCP connection is established. An ultrasonic sensor is used to begin the pro-
cess, which begins with a workpiece being placed in a predetermined slot. Through
the controller and sensor/camera setup, the robot is informed of the work piece’s
location. After locating the workpiece, the robot picks it up and sets it on the table
for drilling. The controller then activates an air-pressurized cylinder to secure the
component and begins a lead screw mechanism with a drill once the workpiece has
been positioned. When the drilling is done, the drill travels vertically down and
drills the workpiece. The robot then stacks the workpiece where instructed and waits
for the next one. Figure 15.8 depicts the connections between the system’s primary
components.

15.3.1 The System's Response


Table 15.1 reports the PLC-robot reaction times, averaged over multiple experiments. The producer loop that exchanges data across devices has a reaction time of roughly 100 ms.

FIGURE 15.8 Connections between the system’s primary components.

TABLE 15.1
PLC-Robot Reaction Time
Variable                              Time (ms)
Average KRC4-LabVIEW API              7.955
Average PLC data block read time      2.600
Producer loop iteration average       103.9

15.3.2 Integrated Systems
Table 15.2 shows the typical MQTT message delivery times to the IoT platform, measured by publishing a speed-test JSON payload. The programme used MQTT's QoS0 level since it sent information to the cloud fastest; however, as additional acknowledgement iterations increase data transmission time, QoS2 is needed most where delivery must be guaranteed.
Various measured metrics are visualized in IBM Watson IoT, as shown in
Figure 15.9. When the information reaches the IoT platform, users may make use
of tools like IBM’s Watson Analytics and Data Science Experience. Cloudant is a
NoSQL database (see Figure 15.10) that may also be used to store information. IBM
Watson also supports integrating third-party apps into the IoT infrastructure.

TABLE 15.2
Normal MQTT Message Delivery Time to the IoT Platform
Variable                                  Time (ms)
Avg. publish time, QoS0: at most once     263.9
Avg. publish time, QoS1: at least once    0.8415
Avg. publish time, QoS2: exactly once     218.1
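For readers who want to reproduce this kind of timing measurement, here is a minimal sketch that times publishes at the three QoS levels with paho-mqtt. The broker and topic are placeholders (not the Watson IoT setup used here), and absolute numbers will vary with the network, so they will not match Table 15.2.

import time
import paho.mqtt.client as mqtt

client = mqtt.Client()                      # paho-mqtt 1.x style constructor
client.connect("test.mosquitto.org", 1883)  # hypothetical public broker
client.loop_start()                         # background loop for QoS 1/2 handshakes

for qos in (0, 1, 2):
    start = time.perf_counter()
    info = client.publish("cps/speedtest", b'{"probe": 1}', qos=qos)
    info.wait_for_publish()                 # returns once the QoS handshake completes
    print(f"QoS{qos}: {(time.perf_counter() - start) * 1000:.2f} ms")

client.loop_stop()
client.disconnect()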

FIGURE 15.9 Visualization in IBM Watson IoT.



FIGURE 15.10 NoSQL database model.

15.4 CONCLUSIONS
As a result of the proliferation of digitalization and intelligent manufacturing, "Industry 4.0" has become a focal point in the industrial sector. In the era of Industry 4.0, an intelligent factory is a crucial component of the digital revolution. It paves the way for an easy transition from traditional (standard) to intelligent production by leveraging state-of-the-art developments in intelligent machines, CPSs, intelligent sensor technology and instrumentation, the IoT, big data, the cloud, and AI. This chapter provides a hierarchical structure for intelligent manufacturing facilities incorporating recent findings. This design, which includes all of the cutting-edge technologies connected with the different levels of the intelligent factory, has been implemented and verified to demonstrate the efficacy of an intelligent factory in production operations. Using this technique, one can observe how implementing the high-tech solutions of the intelligent manufacturing paradigm has improved production, communication, and autonomy. The results showed that the framework encouraged collaboration and information sharing across the various parts of the system, leading to a more efficient manufacturing procedure (lower reaction time) and greater output. The suggested technique has certain restrictions, one of which is that the model can only be used with a single production process. The intelligent factory aims to offer consumers more customizable goods through reconfigurable production.

REFERENCES
1. Cheng, Guo-Jian, et al. “Industry 4.0 development and application of intelligent man-
ufacturing.” 2016 International Conference on Information Systems and Artificial
Intelligence (ISAI).
2. Zhou, Keliang, Taigang Liu, and Lifeng Zhou. “Industry 4.0: Towards future industrial
opportunities and challenges.” 2015 12th International Conference on Fuzzy Systems
and Knowledge Discovery (FSKD).

3. Liu, Chang, et al. “Integrated application in intelligent production and logistics man-
agement: Technical architectures concepts and business model analyses for the cus-
tomised facial masks manufacturing.” April 2019 International Journal of Computer
Integrated Manufacturing 32.4–5.
4. Chen, Yubao. “Integrated and intelligent manufacturing: Perspectives and enablers.”
2017 Engineering 3.5.
5. Tao, Fei, et al. “Digital twins and cyber–physical systems toward smart manufacturing
and industry 4.0: Correlation and comparison.” 2019 Engineering 5.4.
6. Zhang, Yingfeng, et al. “A framework for smart production-logistics systems based on
CPS and industrial IoT.” 2018 IEEE Transactions on Industrial Informatics 14.9.
7. Thoben, Klaus-Dieter, Stefan Wiesner, and Thorsten Wuest. ““Industrie 4.0” and
smart manufacturing-a review of research issues and application examples.” 2017
International Journal of Automation Technology 11.1.
8. Chen, Zhen, and Mingjie Xing. “Upgrading of textile manufacturing based on Industry
4.0.” 2015 5th International Conference on Advanced Design and Manufacturing
Engineering. Atlantis Press.
9. Shi, Zhan, et al. “Smart factory in Industry 4.0.” 2020 Systems Research and Behavioral
Science 37.4.
10. You, Zhijia, and Lingjun Feng. “Integration of industry 4.0 related technologies in the
construction industry: A framework of A cyber-physical system.” 2018 IEEE Access 8.
11. Pathak, Pankaj, et al. “Fifth revolution: Applied AI & human intelligence with
cyber-physical systems.” 2019 International Journal of Engineering and Advanced
Technology 8.3.
12. El Zant, Chawki, et al. “Enhanced manufacturing execution system “MES” through
a smart vision system.” 2021 Advances on Mechanics, Design Engineering and
Manufacturing III: Proceedings of the International Joint Conference on Mechanics,
Design Engineering & Advanced Manufacturing, JCM.
13. Wang, Chengcheng, and Kunlun Wei. “Construction of intelligent manufacturing digi-
tal workshop ability assessment model for CPS.” 2019 2nd World Conference on
Mechanical Engineering and Intelligent Manufacturing (WCMEIM).
14. Liu, Chao, and Pingyu Jiang. “A cyber-physical system architecture in shop floor for
intelligent manufacturing.” 2016 Procedia CIRP 56.
15. Li, Suisheng, et al. “Blockchain dividing based on node community clustering in
intelligent manufacturing cps.” 2019 IEEE International Conference on Blockchain
(Blockchain).
16. Atieh, Anas Mahmoud, Kavian Omar Cooke, and Oleksiy Osiyevskyy. “The role of
intelligent manufacturing systems in the implementation of Industry 4.0 by small and
medium enterprises in developing countries.” 2023 Engineering Reports 5.3.
17. Qi, Quansong, Zhiyong Xu, and Pratibha Rani. “Big data analytics challenges to imple-
menting the intelligent Industrial Internet of Things (IIoT) systems in sustainable man-
ufacturing operations.” 2023 Technological Forecasting and Social Change 190.
18. Chen, Gaige, et al. “The framework design of smart factory in discrete manufactur-
ing industry based on cyber-physical system.” International Journal of Computer
Integrated Manufacturing 33.1.
19. Tupa, Jiri, Jan Simota, and Frantisek Steiner. “Aspects of risk management implemen-
tation for Industry 4.0.” 2019 Procedia Manufacturing 11.
20. Wang, Lin, Jinfeng He, and Songjie Xu. “The application of industry 4.0 in customized
furniture manufacturing industry.” 2017 Matec Web of Conferences. Vol. 100. EDP
Sciences.
21. Zawra, Labib M., et al. “Utilizing the Internet of Things (IoT) technologies in the imple-
mentation of Industry 4.0.” 2020 International Conference on Advanced Intelligent
Systems and Informatics. Cham: Springer International Publishing.

22. Li, Suisheng, Hong Xiao, and Jingwei Qiao. “Multi-chain and data-chains partitioning
algorithm in intelligent manufacturing CPS.” 2021 Journal of Cloud Computing 10.1.
23. Barbosa, José, et al. “Cross benefits from cyber-physical systems and intelligent
products for future smart industries.” 2016 IEEE 14th International Conference on
Industrial Informatics (INDIN).
24. Abikoye, Oluwakemi Christiana, et al. “Application of Internet of Thing and cyber-
physical system in Industry 4.0 smart manufacturing.” 2021 Emergence of Cyber-
Physical System and IoT in Smart Automation and Robotics: Computer Engineering in
Automation. Cham: Springer International Publishing, 203–217.
25. Leitão, Paulo, Armando Walter Colombo, and Stamatis Karnouskos. “Industrial auto-
mation based on cyber-physical systems technologies: Prototype implementations and
challenges.” 2016 Computers in Industry 81.
26. Xu, Li Da, Eric L. Xu, and Ling Li. “Industry 4.0: State of the art and future trends.”
International Journal of Production Research 56.8.
27. Chen, Zhehan, Xiaohua Zhang, and Ketai He. “Research on the technical architecture
for building CPS and its application on a mobile phone factory.” 2017 5th International
Conference on Enterprise Systems (ES).
28. Lv, Zhihan, et al. “Trustworthiness in industrial IoT systems based on artificial intel-
ligence.” 2018 IEEE Transactions on Industrial Informatics 17.2.
29. Habib, Maki K., and Chukwuemeka Chimsom. “CPS: Role, characteristics, architec-
tures and future potentials.” 2014 Procedia Computer Science 200: 1347–1358.
30. Ahmed, Rania Salih, Elmustafa Sayed Ali Ahmed, and Rashid A. Saeed. “Machine
learning in cyber-physical systems in Industry 4.0.” 2020 Artificial Intelligence
Paradigms for Smart Cyber-Physical Systems. IGI Global, 20–41.
16 Machine-Learning-
Enabled Stress Detection
in Indian Housewives
Using Wearable
Physiological Sensors
Shruti Gedam and Sanchita Paul

16.1 INTRODUCTION
Mental stress has emerged as a major concern affecting people's well-being, particu-
larly among housewives, who frequently confront various tasks and pressures in
their everyday lives. Detecting and monitoring mental stress levels in this population
can aid in the development of effective intervention techniques and the promotion of
mental health [1]. The introduction of wearable physiological sensors in recent years
has provided a promising avenue for real-time monitoring of physiological signals
related to stress.
Among the different physiological signals that can be measured, electrocardio-
gram (ECG), galvanic skin response (GSR), and skin temperature (ST) have been
identified as useful indicators of a person’s stress level [2]. Each of these signals
gives important information on the physiological changes that occur during stress,
shedding light on the underlying mechanisms and improving stress detection and
monitoring. Wearable physiological sensors are devices that continually monitor an
individual’s physiological parameters such as heart rate, skin conductance, ST, and
others. When used to identify mental stress, these sensors act as a non-invasive, real-
time data source, providing significant insights into an individual’s physiological
responses to stressors. The idea is to use these sensors to collect data, analyse it, and
identify trends or deviations related to stress, which can subsequently be used for
early intervention or stress management.
The ECG, which measures electrical activity in the heart, provides critical infor-
mation about heart rate variability (HRV). HRV, which measures the time intervals
between consecutive heartbeats, has been extensively researched as a key indicator
of stress levels. The autonomic nervous system alters with stress, resulting in varia-
tions in HRV rhythms. Increased sympathetic nervous system activity and decreased
parasympathetic nervous system activity are specifically seen, resulting in altera-
tions in heart rate and HRV measurements [3]. Valuable elements such as heart rate, HRV, and certain frequency domain metrics can be retrieved from ECG data, assist-
ing in the precise assessment of mental stress levels.
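As an illustration of how the time-domain HRV features named here (and listed later in Table 16.1) can be derived, the sketch below computes mean RR, SDNN, RMSSD, and pNN50 from a series of RR intervals. The interval values are synthetic examples, not study data.

import numpy as np

rr = np.array([812.0, 790, 845, 830, 798, 910, 875])  # illustrative RR intervals (ms)

mean_rr = rr.mean()                       # average beat-to-beat interval
sdnn = rr.std(ddof=1)                     # overall HRV
diff = np.diff(rr)                        # successive differences
rmssd = np.sqrt(np.mean(diff ** 2))       # short-term HRV
pnn50 = 100 * np.mean(np.abs(diff) > 50)  # % of successive differences > 50 ms

print(f"meanRR={mean_rr:.1f} ms, SDNN={sdnn:.1f} ms, "
      f"RMSSD={rmssd:.1f} ms, pNN50={pnn50:.1f}%")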
GSR, also known as electrodermal activity (EDA), monitors changes in the skin’s
electrical conductivity. It is affected by sympathetic nervous system activity, which
is linked to emotional and psychological responses such as stress. During times of
stress, GSR signals exhibit different patterns, which are characterised by higher skin
conductance levels. Sweat gland activity, which is controlled by sympathetic arousal,
is responsible for the rise in conductance. Skin conductance level, skin conductance
response, and temporal dynamics can be derived from GSR data, providing useful
information for mental stress detection [4].
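A minimal sketch of deriving two of the GSR features described here, the skin conductance level and the count/amplitude of skin conductance responses, using peak detection in SciPy. The sampling rate, prominence threshold, and signal are illustrative assumptions rather than the study's processing pipeline.

import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
fs = 4                                  # assumed GSR sampling rate (Hz)
n = 60 * fs                             # one minute of data
gsr = 5 + 0.1 * rng.standard_normal(n)  # illustrative conductance trace (µS)
gsr[120:136] += np.hanning(16) * 2      # injected SCR-like bump for demonstration

scl = gsr.mean()                                # skin conductance level (tonic)
peaks, props = find_peaks(gsr, prominence=0.5)  # candidate SCRs (phasic events)
print(f"SCL={scl:.2f} µS, SCRs={peaks.size}, "
      f"mean amplitude={props['prominences'].mean():.2f} µS")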
Changes in blood flow and sympathetic nervous system activity influence ST,
another important physiological marker. Vasoconstriction occurs under stress as a
result of heightened sympathetic arousal, resulting in decreased blood flow to the
skin and a corresponding drop in ST [5]. Changes in ST patterns might be sugges-
tive of the body’s physiological reaction to stimuli, making this thermoregulatory
response a potential stress marker. Capturing ST data allows for the extraction of
parameters such as average ST, temperature variability, and thermal response, which
aids in the understanding and detection of mental stress levels.
The combination of ECG, GSR, and ST readings gives a multi-modal tech-
nique for detecting mental stress. By taking into account numerous physiological
signs at the same time, researchers can make use of the complimentary nature
of these signals to improve the accuracy and reliability of stress evaluation. The
combination of these signals enables a more thorough investigation of the phys-
iological changes that occur during stress, capturing many components of the
body’s stress response.
The study’s contributions include investigating the applicability of wearable phys-
iological sensors and the efficacy of feature selection approaches in capturing mental
stress changes in Indian housewives. Furthermore, we hope to provide insights into
the most relevant strategies for stress detection in this specific population by analys-
ing and comparing multiple machine-learning (ML) algorithms.

16.1.1 Common Challenges Faced by Housewives


Despite their essential functions within the family structure, Indian housewives
encounter a distinct set of challenges that can significantly contribute to their stress
levels. These difficulties are firmly founded in cultural, sociological, and economic
variables, which frequently cross to form a complex network of pressures [6]. Here
are some common problems that Indian housewives deal with that can cause stress:

1. Lack of recognition: Housewives frequently deal with a lack of acknowledgement for their contributions to the household and society. Their work is frequently unappreciated and undervalued, contributing to feelings of
invisibility and low self-esteem.
2. Unpaid labour: Housewives execute a wide range of tasks, from childcare
and cooking to cleaning and financial management, all without pay. The
lack of financial freedom can cause stress and dependency.

3. Limited career opportunities: Many housewives had to give up their careers
or studies to care for their families. This lack of financial independence
might result in feelings of dependency and frustration.
4. Gender expectations: Society typically expects housewives to complete tra-
ditional gender roles, complying with conventions that limit their autonomy
and opportunities for personal growth.
5. Work-life balance: Balancing domestic duties, childcare, and personal needs
can be difficult. The constant juggling act can lead to tiredness and tension.
6. Lack of me-time: A housewife’s daily routine generally revolves around the
requirements of the family, leaving little time for self-care and personal
hobbies.
7. Financial stress: Managing a household budget on a single salary can be
difficult, especially with increased living costs and inflation.
8. Lack of help: Housewives who are overwhelmed by their responsibilities
may get limited or no help from their extended family or spouse.
9. Marital strain: Stress can be exacerbated by marital problems and damaged
relationships with spouses. Expectations and misunderstandings regarding
family roles and obligations can cause conflict.
10. Mental health stigma: Discussing mental health difficulties has been looked
down upon in many Indian societies. Housewives may be hesitant to seek
help for stress-related issues out of fear of being judged.
11. Health concerns: Housewives frequently neglect their own health in order
to prioritise the well-being of family members. This selflessness may result
in health issues and excessive stress.
12. Personal fulfilment: Some housewives may become dissatisfied with their
lives as they sense a lack of purpose and opportunity for personal growth.

16.1.2 Need to Monitor the Mental Stress of Housewives


Monitoring housewives’ mental stress is critical for a variety of compelling reasons.
First of all, persistent and mismanaged stress can be harmful to both physical and
mental health. Housewives who experience chronic stress are more likely to suffer
health problems such as cardiovascular disease, depression, anxiety disorders, and
reduced immunological function. Monitoring stress levels allows for the identifica-
tion of those at risk and early action to avert these health concerns. Also, the mental
health of housewives has a direct impact on the dynamics of the entire family. Stress
can cause tension within the home, harming relationships with spouses and children.
Stress monitoring can assist in identifying sources of stress within the family and
implementing ways to improve family harmony, benefitting all family members in
the end.
Stress monitoring can shed information on the specific issues that housewives
confront as a result of gender inequity and traditional gender norms. Recognising and
addressing these issues is a step towards minimising gender gaps and fostering gen-
der equality in the home and society as a whole. It might spark discussions within the
family about altering gender roles and expectations. Furthermore, stress monitoring
can aid in the identification of housewives who are at risk of experiencing extreme

stress and emotional discomfort. This information can be utilised to create support
systems and programmes that are tailored to their individual needs. Counselling,
peer support groups, or access to mental health services are all examples of initia-
tives that might equip housewives with the resources they need to properly manage
their stress.
Providing insights into housewives’ stress levels might empower them to take
charge of their mental health. Individuals who are aware of their stress triggers and
patterns can practise self-care strategies and seek help when necessary, thereby
enhancing their overall quality of life. This type of empowerment benefits house-
wives’ autonomy and self-efficacy. Additionally, good stress management can boost
a housewife’s productivity and engagement in daily activities. Stress reduction can
lead to better decision-making, enhanced concentration, and increased excitement
for home activities and personal pursuits. This, in turn, can have a favourable impact
on the household’s general well-being.
Another key advantage of stress monitoring is the prevention of burnout.
Housewives frequently work nonstop, handling various obligations with no breaks.
Continuous stress monitoring can aid in the prevention of burnout by detecting early
indicators of excessive stress and allowing for appropriate modifications to daily
routines and lifestyle choices. This helps to improve their long-term well-being and
resilience. Addressing stress among housewives has the potential for long-term posi-
tive consequences in a broader societal context. If individuals choose to re-enter the
workforce or pursue personal hobbies, stress reduction can lead to improved mental
health, better family connections, and enhanced job satisfaction. When housewives
are emotionally healthy and nurtured, they may take a more active role in their com-
munities, benefiting society as a whole.

16.2 DATA COLLECTION AND STRESSORS


To collect the essential data, we created a custom-built Internet of Medical Things
(IoMT) gadget for our research. This gadget had three critical sensors: an ECG sen-
sor (AD8232 heart rate monitor), a GSR module, and an ST sensor (DS18B20 tem-
perature sensor). Figure 16.1 depicts this IoMT device. Our device also included a

FIGURE 16.1 IoMT device made of the described low-cost sensors: (a) outside of the device; (b) inside structure of the device.

USB power supply for device operation, a real-time clock (RTC) module for precise
timekeeping, a microSD card module for data storage, Arduino Mega and UNO
units for sensor connectivity, and a TFT LCD display for real-time visualisation of
ECG, GSR, and ST measurements.
During the data collection phase, the ECG sensor was strategically placed on the
chest with three electrodes or pads to accurately capture the heart’s electrical sig-
nals. The GSR sensor was placed on the palmar surface of the distal phalanx of the
index and middle fingers, which was chosen due to the increased density of sweat
glands, allowing exact measurements of changes in skin conductance. Because of its
accessibility, stability, and consistent temperature readings, the DS18B20 tempera-
ture sensor was carefully inserted under the armpits of the participants. Special care
was taken to ensure appropriate skin contact and proper positioning of the sensor tip
in the centre of the armpit.

16.2.1 IoMT Device and Subjects


Our study included 50 housewives carefully selected from households in CMPDI, Ranchi. These participants were chosen based on particular criteria: they were
married, between the ages of 25 and 40, and worked largely in the home. Individuals
with known cardiovascular or respiratory disorders that could affect the accuracy
of physiological data as well as those using stress management therapy during the
study period were excluded. Prior to participation, interested participants were given
detailed information about the study’s objectives, procedures, potential risks and
benefits, and informed consent was acquired.
Participants were given particular instructions in order to ensure consistency and
control over experimental settings. For a minimum of 24 hours previous to the data
collecting sessions, they were required to refrain from ingesting caffeinated bever-
ages, engaging in strenuous physical activities, or taking drugs known to impact
cardiovascular or autonomic responses. Additionally, efforts were made to ensure
that the sample was diverse in terms of socio-demographic characteristics such as
age, educational background, and household size.

16.2.2 Stressors
A stressor is a stimulation or condition that causes an individual to experience stress.
Stressors can be external or internal, and their nature varies greatly. They can elicit
physiological, psychological, and emotional responses, which constitute the body’s
response to stress [7].
Participants in the study were exposed to a variety of stress-inducing cir-
cumstances in order to investigate their physiological reactions. These stressors
were divided into three categories: “No Stress,” “Acute Stress,” and “Chronic
Stress.”

• No stress: It required participants to listen to relaxing music for 5 minutes. This phase served as a baseline, allowing us to observe the physiological
reactions of participants when they were relaxed.

• Acute stress: It required participants to complete a puzzle within a specific time restriction. This time-sensitive exercise increased the sense of urgency
and mental stress, providing insight into instantaneous stress reactions.
• Chronic stress: It required participants to complete a 5-minute serial sub-
traction exercise. This intellectually taxing work demanded constant con-
centration and effort, resulting in a protracted state of stress.

16.3 METHODOLOGY
16.3.1 Pre-Processing of Dataset and Extraction of Features
A set of pre-processing measures were carried out in order to improve the model’s
performance. These steps were critical in getting the data ready for ML processing. To begin, the ECG data was filtered using a notch filter with a cut-off fre-
quency of 0.05, resulting in the extraction of 15 HRV components. Simultaneously,
15 different features were obtained from the GSR signal. This was accomplished
by passing the signal through a low-pass Butterworth filter and calculating the first
derivative of the filtered data. The discrete wavelet transform (DWT) [8] was used to
address the inherent randomness in the GSR signal. This transformation efficiently
captured various frequency components by separating the signal into approximation
and detailed coefficients. To extract the ST features, a Butterworth filter was used,
but this time with a lower frequency cut-off of 5 Hz and a sampling rate of 1000 Hz.
The model’s inputs were the retrieved characteristics from the ECG, GSR, and ST
modalities. The dataset was partitioned for model training and testing, with 75% set
aside for training and 25% set aside for testing. The flowchart of the methodology of
the whole study is illustrated in Figure 16.2.
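The sketch below illustrates these pre-processing steps: a low-pass Butterworth filter, a first derivative, and a DWT decomposition into approximation and detail coefficients, using SciPy and PyWavelets. The filter order, wavelet choice (db4), and decomposition level are assumptions for illustration; the 5 Hz cut-off and 1000 Hz rate echo the values quoted above.

import numpy as np
import pywt
from scipy.signal import butter, filtfilt

fs = 1000                                     # sampling rate (Hz), as stated above
raw = np.random.randn(5 * fs)                 # illustrative raw signal (5 s)

b, a = butter(N=4, Wn=5, btype="low", fs=fs)  # 5 Hz low-pass Butterworth
smooth = filtfilt(b, a, raw)                  # zero-phase filtering
first_derivative = np.gradient(smooth)        # rate of change of the signal

# The DWT splits the signal into approximation (low-frequency) and detail
# (high-frequency) coefficients, as used to handle the GSR's randomness
approx, *details = pywt.wavedec(smooth, wavelet="db4", level=3)
print(len(approx), [len(d) for d in details])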
The StandardScaler [9] technique was used to provide consistent and predict-
able model performance throughout training and testing. The features taken from
the ECG, GSR, and ST data were standardised using this technique, yielding fea-
tures with a mean of 0 and a standard deviation of 1 (Table 16.1). This procedure of
standardisation was critical in optimising the model’s performance throughout the
investigation.
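A minimal sketch of the 75/25 split and StandardScaler standardisation described above. The feature matrix and labels here are synthetic stand-ins for the extracted ECG, GSR, and ST features.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.randn(200, 45)           # stand-in for the extracted features
y = np.random.randint(0, 3, size=200)  # 3 labels: no/acute/chronic stress

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

scaler = StandardScaler().fit(X_train)  # fit on the training split only
X_train = scaler.transform(X_train)     # each feature: mean 0, std 1
X_test = scaler.transform(X_test)       # reuse the training statistics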

16.3.2 Feature Selection and Dimensionality Reduction


The process of feature selection is critical in our effort to construct an accurate
stress detection model employing wearable physiological sensors. The proper fea-
tures can have a big impact on the model’s performance, interpretability, and effi-
ciency. Here, an effective feature selection strategy is used for determining the most
relevant physiological indicators for stress detection: principal component analysis
(PCA).

16.3.2.1 Principal Component Analysis


Principal Component Analysis (PCA) [10] is a dimensionality reduction approach
that converts our high-dimensional physiological dataset into a lower dimensional
space, capturing the most important information. PCA allows us to minimise

FIGURE 16.2 Flowchart of the proposed system.

complexity while keeping key traits by identifying principal components that explain
the most variance in the data. PCA assists us in identifying the most relevant physi-
ological variables that lead to stress changes in the context of stress detection. Based
on cumulative explained variance or other criteria, we identify the ideal number of
principal components to retain, ensuring the preservation of crucial stress-related
information.
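The following sketch shows one common way to choose the number of principal components by cumulative explained variance, in the spirit of the criterion mentioned above. The 95% threshold and the data are assumptions, not values from the study.

import numpy as np
from sklearn.decomposition import PCA

X_train = np.random.randn(150, 45)  # stand-in for standardised features

pca_full = PCA().fit(X_train)
cumvar = np.cumsum(pca_full.explained_variance_ratio_)
n_components = int(np.searchsorted(cumvar, 0.95) + 1)  # keep ~95% of variance

pca = PCA(n_components=n_components).fit(X_train)
X_train_pca = pca.transform(X_train)
print(f"retained {n_components} of {X_train.shape[1]} dimensions")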

TABLE 16.1
All the Extracted Features (15 Each) from the Three Signals – ECG, GSR, and ST

Signal Feature Description of Feature


ECG Mean RR The average duration between successive R-peaks
(R-R intervals) in milliseconds
SDNN Reflects overall HRV by measuring the standard deviation of
RR intervals, indicating the variability in heart rate
RMSSD Captures short-term HRV by calculating the square root
of the mean of the squared differences between successive
RR intervals
pNN25 Indicates parasympathetic activity by quantifying the percentage
of RR intervals with differences greater than 25 milliseconds
pNN50 Indicates parasympathetic activity by quantifying the percentage
of RR intervals with differences greater than 50 milliseconds
VLF Represents long-term regulatory mechanisms in HRV, typically
in the frequency range of 0.0033–0.04 Hz
LF Reflects both sympathetic and parasympathetic activity, found
in the frequency range of 0.04–0.15 Hz.
HF Reflects parasympathetic (vagal) activity and respiratory
influences, typically in the frequency range of 0.15–0.4 Hz
TP Represents overall HRV across all frequencies, providing a
global assessment of variability
LF/HF Ratio Indicates the balance between sympathetic and parasympathetic
activity, with higher values suggesting sympathetic dominance
and lower values indicating parasympathetic dominance
SD1 SD1 measures short-term HRV by assessing the shape and
spread of RR intervals in a scatterplot
SD2 SD2 represents long-term HRV by assessing the shape and
spread of RR intervals in a scatterplot
SampEn Measures the irregularity or complexity of HRV patterns, with
higher values indicating greater irregularity
ApEn Evaluates the complexity of HRV patterns, quantifying
predictability, with lower values indicating greater regularity
TI Represents overall HRV using the area of a triangle formed by
RR intervals on a histogram plot
GSR SCL Skin conductance level – Represents the baseline conductance
of the skin, indicating the tonic level of skin conductance
SCR Skin conductance response – Measures the magnitude of rapid
changes in skin conductance, reflecting phasic arousal or
emotional responses
Mean Amp of SCRs Calculates the average amplitude of all detected SCRs,
providing an overall measure of skin conductance response
magnitude
Number of SCRs: Indicates the total count of skin conductance responses during a
specific time period, reflecting arousal events

Rise Time Measures the time it takes for the skin conductance to increase
from the baseline to the peak of an SCR, indicating the speed of
response
Recovery Time Reflects the time it takes for the skin conductance to return to
the baseline following an SCR, indicating the recovery rate
Phasic Activity Quantifies the changes in skin conductance activity,
encompassing both the number and amplitude of SCRs
SC Amplitude Skin conductance amplitude – Represents the absolute
magnitude of the skin conductance response
SCT Level Skin conductance tonic level – Indicates the continuous
conductance level during non-response periods, providing
information about baseline skin conductance
SCV Skin conductance variability – Reflects the fluctuation in skin
conductance over time, offering insights into autonomic nervous
system activity
SCH Skin conductance habituation – Measures the reduction in the
magnitude of responses when exposed to repetitive stimuli,
indicating adaptation or habituation
SCR Skin conductance reactivity – Evaluates the extent of change in
skin conductance in response to specific stimuli, stressors, or
events
SC Recovery Skin conductance recovery – Assesses the time it takes for the
skin conductance to return to baseline levels following a
stimulus, reflecting emotional regulation
SCA Skin conductance asymmetry – Compares the conductance
levels between two skin sites (e.g., left and right hands) to detect
potential differences in physiological responses
EDL Electrodermal lability – Indicates the degree of variability in
skin conductance levels, offering insights into emotional
reactivity and stability
ST Mean ST The average temperature value over a specified time period,
indicating the central tendency of skin temperature
SDST Measures the degree of variability or dispersion in skin
temperature readings, providing insights into temperature
fluctuations
Min ST Indicates the lowest recorded skin temperature during the observation period
Max ST Indicates the highest recorded skin temperature during the
observation period
ST Range Calculated as the difference between the maximum and minimum
skin temperature values, reflecting the overall temperature span

ST Variability Assesses the degree of variation in skin temperature values, which
can be calculated using metrics like the coefficient of variation
Mean ST Gradient Represents the rate of change of skin temperature over time,
indicating temperature trends
Frequency of ST Measures how often skin temperature reaches local maximum
Peaks values, providing insights into temperature oscillations
Frequency of ST Measures how often skin temperature reaches local minimum
Troughs values, indicating temperature dips
ST Amplitude Reflects the magnitude of temperature fluctuations, typically
calculated as the difference between peak and trough
temperatures
ST Slope Represents the rate of change in skin temperature over time,
indicating the temperature’s overall trend
ST Baseline Indicates the average temperature value that skin temperature
tends to return to during periods of stability
ST Entropy Measures the randomness or unpredictability in skin
temperature patterns over time
ST Autocorrelation Evaluates the degree of correlation between skin temperature
values at different time lags, providing insights into temporal
dependencies
ST Spectral analysis Utilises frequency-domain analysis techniques such as fast
Fourier transform (FFT) to extract frequency components and
patterns in skin temperature data

16.3.3 Machine-Learning Algorithms
In this study, a variety of ML methods were applied in the pursuit of effective stress detection among Indian housewives using wearable physiological sensors. These algorithms formed the framework of our stress-detection model, each with its own set of benefits and capabilities. A brief overview of the ML algorithms utilised follows:

16.3.3.1 Random Forest


Random forest (RF) is a decision tree-based ensemble learning technique. During
training, it creates numerous decision trees and aggregates their predictions to
improve accuracy and reduce overfitting. RF grows decision trees in a forest [11].
Each tree in the forest is trained on a random subset of the data and features (bagging
and feature bagging). This randomisation aids in the reduction of overfitting and the
improvement of generalisation. RF collects predictions from each tree and takes a
majority vote for classification tasks during prediction. This strategy often yields
more robust and accurate predictions. RF gives a measure of feature importance,
demonstrating which features have the greatest influence on prediction.

16.3.3.2 Support Vector Machines


Support vector machines (SVMs) are powerful classifiers that excel at locating a
hyperplane that maximises the margin between different classes of data points [12].
SVM seeks to find a hyperplane that has the largest margin, which is the distance
between the hyperplane and the nearest data points from each class. This margin
maximisation facilitates good generalisation of unseen data. Non-linear data can be
handled by SVM by mapping it to a higher dimensional space with kernel functions
(e.g., polynomial and radial basis functions). A linear hyperplane can separate non-
linearly distributed data in this higher dimensional space. Support vectors are data
points that are closest to the hyperplane and play an important role in establishing
the margin. They define the SVM’s decision boundary.

16.3.3.3 Multi-Layer Perceptron Neural Networks


Multi-layer perceptron neural networks (MLPNN) are a type of artificial neural net-
work (ANN) that can simulate complex data relationships. MLPNNs are made up
of interconnected layers of neurons (nodes) [13]. In most cases, layers consist of an
input layer, one or more hidden layers, and an output layer. Each layer’s neurons
apply activation functions to their inputs, introducing non-linearity. ReLU (Rectified
Linear Unit), sigmoid, and tanh are examples of common activation functions.
During training, MLPNNs use backpropagation to update the network’s weights.
Calculating gradients and modifying weights to minimise the discrepancy between
projected and actual outputs is part of this process. Because MLPNNs are well-
suited for capturing detailed patterns in data, they are useful for tasks requiring
physiological signals such as ECG and GSR.

16.3.3.4 Radial Basis Function Neural Networks


Radial basis function neural networks (RBFNN) [14] are a form of ANN in which
the hidden layers use radial basis functions as activation functions. In RBFNNs, the
activation function is often a radial basis function that generates an output based on the
distance between the input and a centre point. As radial basis functions, Gaussian func-
tions are often utilised. RBFNNs excel at estimating functions and may approximate
complex, non-linear data relationships. The centres of radial basis functions are found
using K-means clustering on the input data in some RBFNN implementations. The final
predictions or classifications are often produced by a linear output layer in RBFNNs.
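Since scikit-learn has no built-in RBFNN, the sketch below assembles one from the pieces described here: K-means to locate the centres, Gaussian radial basis activations, and a linear output layer. The number of centres, the gamma value, and the data are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def rbf_features(X, centres, gamma=1.0):
    # Gaussian RBF activations: exp(-gamma * squared distance to each centre)
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

X = np.random.randn(200, 10)           # illustrative inputs
y = np.random.randint(0, 3, size=200)  # illustrative labels

# K-means locates the radial basis centres; a linear model forms the output layer
centres = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
clf = LogisticRegression(max_iter=1000).fit(rbf_features(X, centres), y)
print("train accuracy:", clf.score(rbf_features(X, centres), y))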

16.3.3.5 Linear Discriminant Analysis


Linear discriminant analysis (LDA) [15] is both a dimensionality reduction approach
and a classification tool. It seeks a linear combination of attributes that best separates
different classes of data. LDA decreases the dimensionality of data by projecting it
onto a lower dimensional space while maximising class separation. LDA uses the
covariance matrices of the classes to identify the optimal linear discriminants. It
is assumed that the classes have the same covariance matrix. LDA computes linear
transformations that maximise between-class scatter while minimising within-class
scatter, resulting in a set of discriminant axes. After dimensionality reduction, LDA
can be used as a classifier by allocating data points to the class with the closest cen-
troid on the discriminant axis.

16.3.3.6 Quadratic Discriminant Analysis


Quadratic discriminant analysis (QDA) [16], like LDA, is a dimensionality reduc-
tion and classification algorithm. In contrast to LDA, which assumes that all classes
have the same covariance matrix, QDA assumes that each class has its own covari-
ance matrix. This means that QDA may simulate more complex decision boundaries,
making it suited for situations involving classes with varying variances or non-linear
interactions with features. QDA, like LDA, tries to reduce dimension by projecting
data onto lower dimensional regions. Because it takes into account class-specific
covariance matrices, the resulting discriminant axes can be curved and non-linear.
After dimensionality reduction, QDA is used as a classifier. It categorises data points
according to their closeness to class centroids in the converted space. Because of
QDA’s versatility in modelling diverse covariance structures, it can be useful for
situations where LDA’s assumptions may not apply in the altered space.

16.3.3.7 Gaussian Naive Bayes


Gaussian Naive Bayes [17] is a Bayesian-based probabilistic classification tech-
nique. Gaussian Naive Bayes models the likelihood of each feature given each
class as a Gaussian distribution (thus, the name “Gaussian”). It is assumed that the
characteristics are continuous and distributed normally within each class. The term
“Naive” refers to the class label’s assumption of conditional independence between
features. In other words, it presumes that the existence or value of one feature has
no effect on the presence or value of another.
Using Bayes’ theorem, the technique computes the posterior probabilities of each
class for a given collection of feature values. As the projected class, it chooses the
class with the highest posterior probability. Gaussian Naive Bayes is a computa-
tionally efficient method for dealing with high-dimensional data. It is frequently
employed in text classification and spam detection.

16.3.3.8 LightGBM
LightGBM [18] is an ML technique of the gradient boosting family. It sequentially
constructs an ensemble of decision trees, with each tree learning from the mistakes
of the previous ones. LightGBM, as opposed to standard depth-wise tree growth,
employs a leaf-wise growth technique. At each stage, it chooses the leaf node that
produces the greatest reduction in the loss function. This strategy frequently results
in shorter training times. LightGBM can directly accept categorical features, elimi-
nating the need for one-hot encoding and making it useful for datasets with mixed
data types. Because of its speed and efficiency, LightGBM is a popular choice for
contests and applications where model training time is critical.
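As a rough template for the comparison reported in Section 16.4, the sketch below trains several of the classifiers described above and reports accuracy and macro F1 on a held-out split. The data here is synthetic, so the scores will not match Table 16.2, and the hyperparameters are library defaults rather than the study's settings.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.naive_bayes import GaussianNB
from lightgbm import LGBMClassifier

X = np.random.randn(400, 20)           # stand-in for PCA-reduced features
y = np.random.randint(0, 3, size=400)  # 3 stress classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
    "MLPNN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "Gaussian NB": GaussianNB(),
    "LightGBM": LGBMClassifier(),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, y_pred):.3f}, "
          f"macro-F1={f1_score(y_te, y_pred, average='macro'):.3f}")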

16.4 RESULTS AND DISCUSSION


In this section, we show and analyse the findings of our work on ML-enabled stress
detection in Indian housewives using wearable physiological sensors. To minimise
the dimensionality of the dataset while maintaining the most significant information,
we used PCA as the feature selection technique in our work. PCA successfully trans-
lated the high-dimensional data into a lower dimensional space, thereby capturing

TABLE 16.2
Performance of ML Algorithms in Terms of Accuracy, Precision,
Recall, and F1-Score
ML Algorithm Accuracy Precision Recall F1-Score
RF 92.68 92.11 93.57 92.83
SVM 91.81 91.84 92.01 91.92
MLPNN 93.29 92.07 94.92 93.47
RBFNN 85.98 85.89 86.51 86.20
LDA 53.87 54.49 53.91 54.20
QDA 54.54 55.17 54.54 54.85
Gaussian Naïve Bayes 92.81 91.43 94.69 93.03
LightGBM 92.31 91.31 93.73 92.50

the data’s variance. This dimensionality reduction was critical for improving the
performance and interpretability of our stress-detection model. Based on wearable
physiological sensor data, we investigated multiple ML methods to detect mental
stress in Indian housewives. The performance metrics of these algorithms are sum-
marised in Table 16.2.
According to the results, the MLPNN algorithm had the highest accuracy (93.29%)
and F1-score (93.47%), showing its efficiency in detecting mental stress in housewives.
MLPNN’s precision and recall values also show a well-balanced performance. RF
and Gaussian Naive Bayes also performed well, with accuracy values of 92.68% and
92.81%, respectively. These algorithms displayed great precision and recall, implying
that they have the capacity to identify stress reliably. LDA and QDA, on the other
hand, demonstrated lower accuracy and F1-scores, indicating limits in their ability to
properly capture the intricacies of stress-related physiological changes (Figure 16.3).

FIGURE 16.3 The graph shows all performance metrics of all ML algorithms; the X-axis shows performance metrics and the Y-axis shows values in percentage.

The success of MLPNN, RF, and Gaussian Naive Bayes in our stress-detection
model demonstrates the utility of wearable physiological sensors in detecting men-
tal stress in Indian housewives. These discoveries have important implications for
research as well as practical applications. Accurate stress identification can lead to
prompt interventions and support systems for housewives dealing with a variety
of issues. Personalised treatment and stress management programmes, for exam-
ple, can be adapted to individual stress levels, contributing to greater well-being.
Furthermore, the use of IoMT devices allows for continuous monitoring, increasing
the possibility of real-time interventions and personalised activities. Furthermore,
the study highlights the significance of feature selection approaches such as PCA in
improving the performance of stress-detection models. We can improve model effi-
ciency and interpretability by identifying the most relevant physiological indicators.
To generalise the findings, future studies should try to include a larger and more
diverse sample. The model’s applicability can be improved by broadening the range
of stressors and including real-life scenarios.

16.5 CONCLUSION AND FUTURE WORK


Using wearable physiological sensors and ML approaches, this study explored the
critical field of mental stress detection among Indian housewives. Monitoring and
addressing the mental well-being of women who face a variety of home challenges
is critical. Our investigation provided important results. Wearable physiological
sensors such as ECG, GSR, and ST sensors have proven invaluable for real-
time stress monitoring, providing a non-invasive insight into stress-related physi-
ological responses. PCA was critical in developing our stress-detection model. PCA
improved model accuracy and interpretability by reducing data dimensionality while
keeping essential stress-related information. MLPNN had the highest accuracy and
F1-score among the various ML methods tested. Notably, RF and Gaussian Naive
Bayes performed well as well.
In application, our findings emphasise the possibility of personalised stress man-
agement and support systems enabled by IoMT devices. These discoveries have the
potential to dramatically improve the well-being of housewives. Despite study con-
straints such as sample size and stressor diversity, our findings highlight the revolu-
tionary potential of wearable sensors and ML in treating mental stress among Indian
housewives, changing mental health care and support.

REFERENCES
1. Singh, V., Kumar, A., & Gupta, S. (2022). Mental Health Prevention and Promotion-A
Narrative Review. Frontiers in Psychiatry, 13, 898009. https://doi.org/10.3389/fpsyt.2022.898009.
2. Kyriakou, K., Resch, B., Sagl, G., Petutschnig, A., Werner, C., Niederseer, D.,
Liedlgruber, M., Wilhelm, F., Osborne, T., & Pykett, J. (2019). Detecting Moments
of Stress from Measurements of Wearable Physiological Sensors. Sensors (Basel,
Switzerland), 19(17), 3805. https://doi.org/10.3390/s19173805.
3. Shaffer, F., & Ginsberg, J. P. (2017). An Overview of Heart Rate Variability Metrics and
Norms. Frontiers in Public Health, 5, 258. https://doi.org/10.3389/fpubh.2017.00258.

4. Subramanian, S., Barbieri, R., & Brown, E. N. (2020). Point Process Temporal
Structure Characterizes Electrodermal Activity. Proceedings of the National Academy
of Sciences of the United States of America, 117(42), 26422–26428. https://doi.org/10.1073/pnas.2004403117.
5. Herborn, K. A., Graves, J. L., Jerem, P., Evans, N. P., Nager, R., McCafferty, D. J., &
McKeegan, D. E. (2015). Skin Temperature Reveals the Intensity of Acute Stress. Physiology
& Behavior, 152(Pt A), 225–230. https://doi.org/10.1016/j.physbeh.2015.09.032.
6. Malhotra, S., & Shah, R. (2015). Women and Mental Health in India: An
Overview. Indian Journal of Psychiatry, 57(Suppl 2), S205–S211. https://doi.org/
10.4103/0019-5545.161479.
7. Schneiderman, N., Ironson, G., & Siegel, S. D. (2005). Stress and Health: Psychological,
Behavioral, and Biological Determinants. Annual Review of Clinical Psychology, 1,
607–628. https://doi.org/10.1146/annurev.clinpsy.1.102803.144141.
8. Van Fleet, P. J. (2019). Discrete wavelet transformations: An elementary approach with
applications. Wiley.
9. Abbasi, A., Javed, A. R., Chakraborty, C., Nebhen, J., Zehra, W., & Jalil, Z. (2021).
ElStream: An Ensemble Learning Approach for Concept Drift Detection in Dynamic
Social Big Data Stream Learning. IEEE Access, 9, 66408–66419.
10. Kurita, T. (2019). Principal component analysis (PCA). Computer Vision: A Reference
Guide, 1–4.
11. Fratello, M., & Tagliaferri, R. (2018). Decision trees and random forests. In
Encyclopedia of bioinformatics and computational biology: ABC of bioinformatics
(Vol. 374). Elsevier.
12. Byun, H., & Lee, S. W. (2002, July). Applications of support vector machines for pat-
tern recognition: A survey. In International workshop on support vector machines
(pp. 213–236). Springer Berlin Heidelberg.
13. Jambukia, S. H., Dabhi, V. K., & Prajapati, H. B. (2015, March). Classification of ECG
signals using machine learning techniques: A survey. In 2015 International Conference
on Advances in Computer Engineering and Applications (pp. 714–721). IEEE.
14. Yingwei, L., Sundararajan, N., & Saratchandran, P. (1997). A Sequential Learning
Scheme for Function Approximation Using Minimal Radial Basis Function Neural
Networks. Neural Computation, 9(2), 461–478.
15. Varatharajan, R., Manogaran, G., & Priyan, M. K. (2018). A Big Data Classification
Approach Using LDA with an Enhanced SVM Method for ECG Signals in Cloud
Computing. Multimedia Tools and Applications, 77, 10195–10215.
16. Ghojogh, B., & Crowley, M. (2019). Linear and quadratic discriminant analysis:
Tutorial. arXiv preprint arXiv:1906.02590.
17. Jahromi, A. H., & Taheri, M. (2017, October). A non-parametric mixture of Gaussian
naive Bayes classifiers based on local independent features. In 2017 Artificial
Intelligence and Signal Processing Conference (AISP) (pp. 209–212). IEEE.
18. Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., … & Liu, T. Y. (2017).
Lightgbm: A Highly Efficient Gradient Boosting Decision Tree. Advances in Neural
Information Processing Systems, 3149–3157.
17 Rising of Dark Factories due to Artificial Intelligence
Anjali Mathur

17.1 INTRODUCTION
An industry can be considered an economic activity concerned with the production
of goods, the extraction of materials and the provision of services. Steel
manufacturing is an example of the production of goods, while coal mining belongs
to the extraction of materials. The tourism industry is all about the provision of
services. Industries came into existence in the mid-eighteenth century; before
industrialization, the economy was fully dependent on agriculture and handicrafts.
Industrialization grew with the Industrial Revolution.
The Industrial Revolution has taken place four times, and the most recent is Industry
4.0. The first industrial revolution was driven by the creation of the steam engine in
the eighteenth century. Thereafter, a series of developments took place in the fields
of electricity, chemicals, oil, steel, aviation, mechanical refrigeration, etc.; this
was the era of the second industrial revolution. The development of such industries
opened a path to plenty of opportunities in the field of management, beyond the
production of goods, and this was the beginning of the third industrial revolution.
Gradually, industry developed in horizontal and vertical directions on the two pillars
of management and production. The third revolution was well known for its
technological development, especially in the fields of biotechnology, microelectronics,
telematics and computer science. Slowly, it stepped toward the fourth revolution:
Industry 4.0 is an advanced manufacturing model with an extensive set of technologies,
and it marked the beginning of the socio-technical interaction era. Gradually,
autonomous and self-organized production systems came into existence, developing a
value-chain system between organizations. Industry 4.0 evolved with multiple versions,
including the development of smart products, individualized production, autonomous
control and data-centric management.

17.2 SMART FACTORY


The Industrial Revolutions brought three major changes: (1) the invention of
machines to do the work of hand tools, (2) the use of various power sources such as
water, steam and electricity, and (3) the adoption of the factory system. The factory
system was developed by combining machinery and technology with the purpose of
minimizing the cost of production and maximizing efficiency. With the development
of industrialization, the factory system grew in the direction of smart plants, smart
production, smart logistics and, finally, smart factories.

FIGURE 17.1 Various types of sensors [1].

The smart factory is a new approach consisting of smart sensors, computer technologies
and predictive analytics. Various types of sensors (Figure 17.1) are commonly used in
different applications. Some of the most common sensors are as follows:

• Temperature sensor
• Proximity sensor
• Accelerometer
• IR sensor (infrared sensor)
• Pressure sensor
• Light sensor
• Ultrasonic sensor
• Smoke, gas and alcohol sensor
• Touch sensor
• Color sensor
• Humidity sensor
• Position sensor
• Magnetic sensor (Hall effect sensor)
• Microphone (sound sensor)
• Tilt sensor
• Flow and level sensor
• PIR sensor
• Strain and weight sensor

Internet of Things (IoT) and Industrial IoT (IIoT) are the basic building blocks of
smart factories. Their combination maximizes control over the manufacturing process
and enhances the ability to acquire, transfer, interpret and analyze information.
Smart factories are flexible systems that can self-optimize performance across a
broader network, self-adapt to new conditions in real time, learn from the new
environment and execute whole production processes in autonomous mode. Smart
factories can be characterized by five major features: connected, optimized,
transparent, proactive and agile.

17.3 DIGITAL TWINS


Industry 3.0 is about automation, the reduction of human intervention in processes.
Industry 4.0 is about cognition, the process of acquiring knowledge and understanding.
Industry 4.0 brought the concept of the "digital twin", a simulation approach used for
smart manufacturing with industrial equipment. A digital twin works as a tool to
collect data from sensors (Figure 17.2) and facilitates the cyber-physical integration
of manufacturing on real-time data. This simulation model acts as a twin of the real
world in cyberspace and behaves in the same way as the physical space. A digital twin
consists of a virtual representation of a production system that is able to use sensory
data, connected smart devices, mathematical models and real-time data elaborations.
With its capabilities of conceptualization, comparison and collaboration, the digital
twin frees us from the physical realm, where humans operate relatively inefficiently.

FIGURE 17.2 Digital twin framework for a holistic view of building [2].
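To make the idea concrete, the following is a minimal Python sketch of a digital twin:
a virtual object that mirrors the state of a physical machine from sensor readings and
compares it against operating limits. The class name, its fields and the thresholds are
illustrative assumptions, not part of any specific digital-twin framework.

```python
# A minimal digital-twin sketch: a virtual object mirrors a physical machine's
# state from (simulated) sensor readings and flags deviations from limits.
# MachineTwin and the 80 degC / 4.5 mm/s limits are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class MachineTwin:
    machine_id: str
    temperature_c: float = 0.0
    vibration_mm_s: float = 0.0
    history: list = field(default_factory=list)

    def sync(self, reading: dict) -> None:
        """Update the virtual state from one sensor reading."""
        self.temperature_c = reading["temperature_c"]
        self.vibration_mm_s = reading["vibration_mm_s"]
        self.history.append(reading)

    def is_healthy(self, max_temp: float = 80.0, max_vib: float = 4.5) -> bool:
        """Compare the mirrored state against simple operating limits."""
        return self.temperature_c <= max_temp and self.vibration_mm_s <= max_vib


twin = MachineTwin("press-07")
twin.sync({"temperature_c": 85.2, "vibration_mm_s": 3.1})
print(twin.is_healthy())  # False: the twin flags the physical machine
```

In a real deployment the sync() call would be driven by live sensor streams rather
than a hand-written dictionary, and the twin would also expose simulation and
what-if analysis on top of the mirrored state.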

17.4 DARK FACTORY


As digital technologies advance, dark factories are rising. There is a possibility that
the fully automated dark factory, a completely "lights-out" environment, will overtake
the smart factory and become the "factory of the future". The increasing use of robots
and automated machines makes the dark factory possible. The automated machines work
with manufacturing operations management (MOM) software that can manage automated
manufacturing processes without any assistance. The software supports unattended
production and monitors lights-out operations from a distance, and advancements in
the software provide alarms or warning messages for better control. One of the major
advantages of an autonomous environment is its scalability: the dark factory concept
can be applied to a limited area of a plant or scaled up to a complete lights-out
factory. Therefore, many industries are taking advantage of dark factories.

17.5 RISING OF TRUE DARK FACTORIES


The existence of a fully dark factory is questionable. A fully automated environment
that requires no human labor and can operate in the dark without heat and light is
still a fantasy. The concept of the "no-man factory" cannot be achieved completely, as
manpower is required to control and monitor the fully automated environment. Instead
of a full version of true dark factories, "dark-shift" factories are more likely to be
possible, where the autonomous environment is applied to any one shift; for example,
factory orders could be pre-picked in a warehouse and be ready for processing and
shipping by the next shift. Very few factories around the world can be considered dark
factories or, to use a better term, smart factories. A few examples are

1. "Schneider Electric", France, which works with augmented reality to speed up
operations and maintenance.
2. "Johnson & Johnson DePuy Synthes", Ireland, which is fully functional with IoT.
3. "Bosch", China, which makes use of IIoT and Big Data for digital transformations.
4. "Tesla Gigafactory", Germany, which uses smart manufacturing methods.
5. "Haier", China, which is the world's first manufacturer-independent industry.
6. "Infineon", Germany, which has the most powerful intelligent networked
manufacturing.

Recently, during the COVID-19 pandemic, there was high demand for fully autonomous
systems in industry, which led down a path toward dark factories. Advancements in
technology, especially in artificial intelligence (AI), pave the way for true dark
factories. Decreased prices of robots, increased labor costs in manufacturing,
scarcity of labor, the need to expand production capacity, the requirement for more
flexible environments, energy-saving opportunities, efficiency with improved-quality
products and sustainability are a few of the factors behind the rise of dark
factories. Advancements in technology change the needs and working culture of
industries; precise, meticulous and more accurate work is in demand. Industrial
robotics changes the perception of work: more production at lower cost was always
required, but now industrialists want to cut labor wages and replace human labor
with automated machines.

17.6 ROLE OF ARTIFICIAL INTELLIGENCE


Acquiring and applying knowledge in an intelligent manner is the specialty of
"intelligent systems". These systems employ AI to think like a person and execute
tasks without direct supervision. To acquire this intelligence, two subsets of AI
are used: machine learning (ML) and natural language processing (NLP). AI can be
defined as the science of making machines do things that would require intelligence
if done by humans. AI is the creation of computer programs that emulate acting and
thinking like a human, as well as acting and thinking rationally. It can perform
cognitive functions such as learning and decision-making. Mathematics, statistics
and logical functions help simulate the learning and decision-making processes. ML
uses data-driven mathematical models to make a machine learn. With the combination
of AI, ML and NLP, intelligent systems have the capabilities of knowledge acquisition,
pattern learning, pattern recognition, pattern classification, perception, inductive
reasoning, prediction, etc. Intelligent systems are a great help to manufacturers
(Figure 17.3). Generation of massive amounts of data, classification of patterns,
prediction of features and decision-making can easily be carried out with AI and ML.
The factory without humans takes shape with the advancements in AI.

FIGURE 17.3 Intelligent health care system [3].



17.6.1 Predictive Analytics


One of the advanced forms of analytics is predictive analytics, which draws
conclusions about future events based on prior data. ML and data science have the
capability to find patterns on the basis of previous experience. Manufacturers make
use of ML to identify faults and anomalies present in massive data and operations,
which helps them escape the huge expense of machine failures. Predictive analytics
can be divided into two broad model families: classification and regression.
Classification is used to assign data to categories, while regression models are
used to predict numeric values. Predictive analytics can be used for risk reduction,
maintenance forecasting, operational improvement and customer segmentation
(Figure 17.4).

FIGURE 17.4 Predictive analysis using machine learning [4].
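As a concrete illustration of the two model families just described, the following
hedged sketch trains a classifier (to categorize readings into failure/no-failure
classes) and a regressor (to predict a numeric value) with scikit-learn. The synthetic
features, labels and the feature meanings are illustrative assumptions.

```python
# A minimal sketch of the two predictive-analytics model families named above:
# classification (categorize) and regression (predict a value), on synthetic
# sensor data. Feature meanings and labels are purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                 # e.g., temperature, vibration, load
fail = (X[:, 0] + X[:, 1] > 1).astype(int)    # synthetic failure label
hours = 100 - 20 * X[:, 0] + rng.normal(size=500)  # synthetic remaining hours

clf = RandomForestClassifier(random_state=0).fit(X, fail)   # categorize
reg = RandomForestRegressor(random_state=0).fit(X, hours)   # predict a value

x_new = [[0.9, 0.4, -0.2]]
print("failure class:", clf.predict(x_new)[0])
print("predicted remaining hours:", round(reg.predict(x_new)[0], 1))
```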

17.6.2 Predictive Maintenance


Predictive maintenance aims to reduce costly and unscheduled failures by allowing
manufacturers to plan maintenance around their manufacturing schedule.
Predictive-maintenance techniques that monitor an asset's condition, status and
performance can be applied through AI (Figure 17.5), and future errors on the
production line can be predicted by ML. Manufacturers make use of ML-based
predictive maintenance to reduce malfunctions and unscheduled failures at very low
cost.

FIGURE 17.5 Data science and predictive analysis in industry [5].

Traditionally, there are two types of maintenance: corrective (or reactive) and
preventive. The process of repairing or restoring equipment to its original working
condition is known as corrective maintenance, while the process of maintaining
equipment in order to prevent failures is known as preventive maintenance. AI
introduces a new branch, predictive maintenance, which uses data analytics to
identify and predict equipment failures. It helps industrialists reduce the cost of
finding and repairing equipment failures.
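A minimal sketch of ML-based predictive maintenance is shown below: an IsolationForest
from scikit-learn learns normal machine behavior from historical readings and flags
anomalous ones that may precede a failure. The sensor values and contamination rate
are synthetic and purely illustrative.

```python
# A minimal predictive-maintenance sketch: an IsolationForest learns "normal"
# machine behavior (temperature, vibration) and flags anomalous readings that
# may precede a failure. All numbers here are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=[60.0, 2.0], scale=[2.0, 0.3], size=(1000, 2))
model = IsolationForest(contamination=0.01, random_state=1).fit(normal)

readings = np.array([[61.0, 2.1],    # ordinary operation
                     [78.5, 5.9]])   # overheating and vibrating
print(model.predict(readings))       # 1 = normal, -1 = flag for maintenance
```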

17.6.3 Industrial Robotics
Mobile, programmable and automated robots used for manufacturing are considered
industrial robots. Lower mobility, parallel manipulators and concomitant motion are
the major features of industrial robots. The first industrial robot, introduced in
1937, was a crane-like device powered by a single electric motor (Figure 17.6) that
was used to carry goods from one place to another. After a few modifications, the
robotic manipulator came into existence: a mechanical device used for moving goods
without direct physical contact by the operator, whose specialty was its
reprogrammable and multifunctional nature. The programming of motions and sequences
requires connectivity with a computer system. Gradually, evolution in industrial
robotics brought in AI and made industrial robots an alternative to human labor. The
involvement of robotics increases processing speed, quality and production.
Industrial robotics has successfully been applied to assembling and disassembling
products, welding, painting, packaging and labeling, palletizing, and product
inspection and testing.

FIGURE 17.6 Basic types of industrial robots [6].



17.6.4 Computer Vision
One of the major domains of AI is machine vision, or computer vision, which works on
visual inspection. The camera is an essential tool for computer vision and, with AI,
can do jobs like the human eye. Visual inspection by machine vision provides
swiftness and precision, and for industry, rapid and accurate information matters a
lot. Visual inspection is required in various areas such as defect detection, product
assembly, barcode analysis, packing standards, safety and security standards and the
pruning of items. Compared to the human eye, the "computer eye" can find defects more
precisely, and with the inclusion of AI, these defects can further be classified on
the basis of chosen parameters. Detecting foreground and background objects, tracking
moving or rotating objects, and differentiating images on the basis of color and
brightness are a few of the jobs of computer vision (Figure 17.7).

FIGURE 17.7 Artificial intelligence-based smart quality inspection methodology [7].
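The following is a minimal visual-inspection sketch with OpenCV, illustrating how a
"computer eye" might flag candidate surface defects by thresholding and contour
counting. The file name part.png and the area threshold are illustrative assumptions,
not a production inspection pipeline.

```python
# A minimal visual-inspection sketch with OpenCV: threshold a grayscale image
# of a part and count blob contours as candidate surface defects. The file
# name "part.png" and the 25-pixel area threshold are illustrative only.
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)
# Otsu's method separates dark defect pixels from the brighter background.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

defects = [c for c in contours if cv2.contourArea(c) > 25]  # ignore specks
print(f"{len(defects)} candidate defects found")
```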

17.6.5 Inventory Management
Management of the flow of goods from manufacturers to warehouses and from warehouses
to end users is considered inventory management. It is a complex process that manages
the ordering, storing, using and selling of goods; essentially, it is a series of
processes for managing things from raw materials to finished products. Inventory
management also carries many risks, such as spoilage, theft, damage and shifts in
demand. For every industry, inventory management is a big role player and needs
meticulous monitoring. AI helps industrialists perform inventory management in a
systematic and intelligent way (Figure 17.8). ML has the ability to process massive
data and can perform classification and pattern recognition on those data, which
helps an industry in supply chain management.

FIGURE 17.8 Artificial intelligence for inventory management [8].
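As a small illustration of data-driven inventory management, the sketch below
forecasts next-period demand with a moving average and computes a reorder point with
safety stock. The demand figures, lead time and service-level factor are illustrative
assumptions rather than any standard library API.

```python
# A minimal sketch of data-driven inventory control: forecast next-period
# demand with a moving average and compute a reorder point. The demand
# history, lead time and service factor are illustrative assumptions.
import statistics

weekly_demand = [120, 135, 128, 140, 150, 149, 160, 158]  # units sold per week
lead_time_weeks = 2
z = 1.65  # service-level factor for roughly 95% availability

forecast = statistics.mean(weekly_demand[-4:])          # moving-average forecast
sigma = statistics.stdev(weekly_demand)                 # demand variability
safety_stock = z * sigma * lead_time_weeks ** 0.5
reorder_point = forecast * lead_time_weeks + safety_stock

print(f"forecast/week: {forecast:.0f}, reorder at {reorder_point:.0f} units")
```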

17.7 CHALLENGES FOR DARK FACTORIES


The industrial revolution started with the mechanization of work. The concept of
"factories" comes from industrialization; the factory gradually evolved toward
digital twins and smart factories and is now moving in the direction of dark
factories. A fully functional dark factory would be the identity of Industry 5.0.
As of now, the implementation of dark factories is somewhat challenging. Some of the
major challenges are as follows:

• Transformation of architecture: Traditionally, industries have manufacturing units
that operate as large factories to produce goods, where many people work together in
different departments in a collaborative manner to ensure the quality of production.
These manufacturing plants can be identified by their features: large space, a
massive number of workers, the product line, the quality-control area, the storage
area and the warehouse or distribution center. Dark factories, on the other hand, are
"no-man" factories with a fully autonomous environment and no human labor. To
establish a dark factory, architectural standards must depend on appropriate
technology. Labor wages are transformed into the cost of technology, such as
industrial robotics, sensors, software and other equipment; this cost can be
prohibitive for small-scale industries. The production line needs to be identified by
its autonomous processes, which should discard repetitive, dangerous and
time-consuming steps. The dark factory architecture must also comply with regulations
related to safety, health and environmental standards.
• Requirement of a technical environment: The technical environment includes product,
production and procedure. The working of "lights-out" factories is complex in terms
of production and procedure, and working in a completely autonomous environment is
quite challenging. Tacit knowledge and skillful workers are required to manage the
environment. The autonomous processes work with advanced technologies such as AI,
Big Data analysis and machine vision. These technologies evolve rapidly, and staying
competitive requires adopting current updates; the cost of these techniques is very
high. As production by autonomous labor is much faster than by human labor, managing
and responding to production on time is also a bigger challenge. Identification and
correction of errors, tracking of defects and quality control are some of the crucial
tasks for "no-man" factories. Dark factories may suit a few specific types of
manufacturing but not all others.
• Security threats: High dependence on technology creates security vulnerabilities
for dark factories. Data breaches, production disruptions and malicious technical
failures are the major challenges to the security of dark factories, and the
interconnected systems are more vulnerable to cyber-attacks.
• Workforce displacement: Automation affects the human workforce. Skillful and
well-trained workers are required for the autonomous environment, which makes way
for new jobs, but the number of technical experts required is usually far smaller
than the number of human laborers replaced.

17.8 CONCLUSION
In spite of multiple challenges, dark factories are growing rapidly. High-quality
products with less human involvement are the major advantage of dark factories. AI
makes things much easier and increases quality by providing precise, accurate and
in-depth views of products. ML, deep learning, computer vision, NLP and Big Data
analysis are the major wheels driving AI-based systems. The advancements in AI will
transform smart factories into dark factories.

REFERENCES
1. https://dataconomy.com/2022/05/05/artificial-intelligence-in-industry-4-0/
2. https://facilityexecutive.com/optimize-with-digital-twin-technology/
3. Muzammal, M., Talat, R., Sodhro, A. H., & Pirbhulal, S. A Multi-Sensor Data Fusion
Enabled Ensemble Approach for Medical Data from Body Sensor Networks. Information
Fusion. 2020, 53, 155–164. doi:10.1016/j.inffus.2019.06.021
4. Noureddine, R., Solvang, W., Johannessen, E., & Yu, H. Proactive Learning for
Intelligent Maintenance in Industry 4.0. 2020. doi:10.1007/978-981-15-2341-0_31
5. https://subscription.packtpub.com/book/data/9781788831307/1/ch01lvl1sec04/crisp-dm
6. https://www.tthk.ee/inlearcs/1-basic-about-industrial-robots/
7. Sundaram, S., & Zeid, A. Artificial Intelligence-Based Smart Quality Inspection for
Manufacturing. Micromachines. 2023, 14, 570. doi:10.3390/mi14030570
8. Sustrova T. An Artificial Neural Network Model for a Wholesale Company’s Order-
Cycle Management. International Journal of Engineering Business Management. 2016,
8. doi:10.5772/63727

FURTHER READINGS
1. Ortiz, J. H. Industry 4.0 – Current Status and Future Trends. doi:10.5772/intechopen.86000
2. Jacić, L. Re: Is Industry 4.0 Always Correspond with Automation? 2022. Retrieved
from: www.researchgate.net/post/Is_Industry_40_always_correspond_with_automation/
62f386cc3061927d1e04d950/citation/download
3. https://issuu.com/techfastly/docs/feb_2021/s/11659740
4. https://www.grainger.com/know-how/industry/manufacturing/kh-what-is-a-dark-factory
5. https://www.linkedin.com/pulse/dark-factories-likely-reality-shirish-kulkarni/
6. https://www.planettogether.com/blog/balancing-the-dark-side-of-automation-exploring-
dark-factories-and-finding-the-optimal-production-line-efficiency
7. https://www.i-scoop.eu/industry-4-0/lights-out-automation-manufacturing/
8. https://www.cnbc.com/2023/04/19/from-generative-ai-to-digital-twins-how-tech-will-
transform-factories.html
9. https://www.threadinmotion.com/blog/smart-factories-all-around-the-globe
10. https://www.electronicshub.org/different-types-sensors/
18 Deep Learning for Real-Time Data Analysis from Sensors
Sagar C V, Harshit Bhardwaj, and Anupama Bhan

18.1 INTRODUCTION
According to the World Health Organization, cancer is the second biggest cause of
death from medical problems and has a substantial influence on the world, caus-
ing more than 10 million deaths yearly. According to estimates, 1,500 people lose
their lives to cancer every day. Cancer recurrence is a constant concern, despite
the fact that treatment and prevention can increase the likelihood of survival. For
instance, breast cancer treatment might impair the immune system, which can
cause malignant cells to reappear in the brain and other organs. As researchers and
data scientists look for methods to use artificial intelligence (AI) in anything from
weather forecasting to judicial procedures, AI has entered many industries, includ-
ing medicine. The diverse range of AI applications is significantly influenced by
the medical industry. Cervical cancer is a rapidly progressing and highly danger-
ous form of cancer that has a significant impact on the lives of women world-
wide. The World Health Organization reports that the incidence of cervical cancer
is particularly increasing among Indian women, affecting approximately 1 in 53
women compared to 1 in 100 globally. One of the common symptoms experienced
by patients is abnormal vaginal discharge or bleeding. The Pap smear test is a
commonly employed method in medical diagnosis and treatment for identifying
abnormalities in cervical cells. It helps detect variations in cell size, color, mucus,
and cell integrity. Nevertheless, the precise and manual segmentation of Pap smear
cells presents a significant challenge for cytologists. This is primarily because of
cell overlap, which complicates the identification of inflammatory cells and the
presence of mucus in cell images. To address these challenges, the effectiveness of
medical imaging modalities such as MRI scans, CT scans, X-rays, and others has
been demonstrated through the application of machine learning and deep learn-
ing algorithms. Cervical cancer is one of the most frequent malignancies among women.
The Pap smear test includes removing cells from the cervical region and staining the
microscope slide in order to look for abnormalities; the translucent cells are then
examined by skilled cytologists. Manual examination takes time, though, and for many
ailments the outcome depends heavily on time. It is essential to develop
techniques that can aid human-based diagnostic systems, because their influence is
only going to expand and the number of cases is going to increase every year. The
objective of this project is to build a categorization model based on machine learn-
ing. However, pre-processing like filtering and region-of-interest segmentation is
required before Pap smear images can be fed directly into a classification system.
As having too many features can result in a longer processing time, the features
extracted from the pre-processed images are further improved to remove redun-
dant features. This study employs principal component analysis (PCA) for this
purpose. The resulting features that have been filtered and decorrelated are then
fed into multiple Support Vector Machine (SVM) models that categorize cells into
various stages or levels of cancer cell growth. To avoid mortality, cervical cancer
must be identified early and treated properly. Cervical cytology, also known as a
Pap smear, is a standard technique for detecting cervical cancer. It entails examin-
ing cervical cells under a microscope for any abnormal alterations that can point
to the existence of precancerous or cancerous diseases. The manual evaluation of
Pap smear slides by human experts takes time and is subject to variation across
observers. The primary cause of death in female patients stems from the chal-
lenge of detecting cervical cancer in its early stages, as symptoms do not typically
appear until the disease has advanced. The mortality rate among women could be
significantly reduced if cervical cancer could be identified earlier. Thus, this chap-
ter proposes a procedure for early detection of cervical cancer in order to prevent
the deaths of female patients. Traditional methods heavily rely on human experts,
resulting in a time-consuming process. Furthermore, these methods are not fea-
sible in developing countries with a large population due to a scarcity of trained
specialists or radiologists in this particular field.

18.2 LITERATURE REVIEW


The purpose of this research is to determine whether cancer is present in a sample by
analyzing the cells, selecting features, and classifying the cells as normal or
contaminated. Using five-fold cross-validation, previous studies [1–3] on this problem
used K-Nearest Neighbor (KNN) to classify images of Pap smears with a classification
accuracy of over 82%. Earlier work [4–6] built an automated system for categorization
and detection utilizing biopsy images and achieved accuracy levels of over 92%; KNN
was also employed as the classifier in that proposal. Fuzzy C-means (FCM) clustering
was utilized in a model for cell image segmentation and classification described in
[7–9] that achieved 93.78% accuracy, with artificial neural networks used for
validation. Furthermore, models for cell segmentation and classification that combine
an SVM with morphological operations have also been developed [10, 11]. The authors
of [12, 13] presented a unique model centered on the automated detection of cervical
cancer cells in liquid-based cytology slides. In order to classify cells as normal or
epithelial, the study used 20,000
pictures of cells from 120 separate slides using a two-level cascaded integration
approach. Prior to utilizing KNN to identify the cells, the cells were segmented to
separate their nuclei from their cytoplasm [14–17]. Without validation, the model
had a classification accuracy of 84.3%, while with five-fold cross-validation, it had
an accuracy of 82.9%. Following a phase of k-means segmentation, a KNN algorithm was
utilized to classify healthy and malignant cells on microscopic biopsy
pictures [18]. FCM clustering has been utilized by researchers to segment images of
Pap smears [19]; however, FCM clustering has the disadvantage of not being able to
identify all legitimate clusters in color image segmentation. William et al. [19, 20]
created a Trainable Weka Segmentation classifier with an improved FCM method to
increase the segmentation of cervical cells. In a variety of fields, including cancer
research, deep learning has proven successful. Digital images of Pap smears, for
instance, have been segmented using deep learning in previous studies [21, 22]. Liu
et al. [23] proposed a strategy to segment cervical cytoplasm and nuclei utilizing
superpixels and convolutional neural networks (CNNs). Z. Hu et al. [24] segmented
cervical nuclei automatically using Mask RCNN and LFCCRF. To address the limi-
tations of traditional machine learning techniques, the concept of deep learning has
been introduced, which aims to extract meaningful information from images and
utilize various classifiers for classification purposes. In recent years, deep learning
models, particularly CNNs with different kernel parameters, have achieved signifi-
cant success in biomedical image processing and healthcare systems. These models
have been utilized for detecting tumor cells, segmenting breast nodes, classify-
ing lung nodules, identifying skin diseases, and classifying immune cells, among
others. To improve recognition and classification accuracy, a combination of indi-
vidual CNN performances can be leveraged through the use of transfer learning.
Additionally, integrating different pretrained models with context-based learning
not only enhances the CNN architecture but also yields a stable and efficient model
for the recognition and classification of cancerous cells [24].

18.3 METHODOLOGY
The diagnosis of cervical cancer based on AI applications on images involves training
a supervised machine learning (ML) algorithm with images labeled by an expert. During
this training, the ML systems (intelligent agents) are taught to differentiate between
images of different classes (normal or pathologic), and the parameters of the
mathematical models describing these methods are optimized. After successful training,
the algorithms are evaluated on validation images to measure their identification and
generalization capabilities using established classification metrics such as accuracy,
sensitivity, specificity, area under the receiver operating characteristic curve
(AUC), and F1 score, among others.
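The sketch below shows how the evaluation metrics listed above can be computed with
scikit-learn on illustrative validation labels (1 = pathologic, 0 = normal); the label
and score vectors are made up for demonstration only.

```python
# A minimal sketch of the evaluation metrics named above, computed with
# scikit-learn on illustrative validation labels and model probabilities.
from sklearn.metrics import (accuracy_score, recall_score, f1_score,
                             roc_auc_score)

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                    # expert labels
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]                    # model decisions
y_score = [0.2, 0.6, 0.9, 0.8, 0.4, 0.1, 0.7, 0.3]   # model probabilities

sensitivity = recall_score(y_true, y_pred)               # true-positive rate
specificity = recall_score(y_true, y_pred, pos_label=0)  # true-negative rate
print("accuracy:", accuracy_score(y_true, y_pred))
print("sensitivity:", sensitivity, "specificity:", specificity)
print("F1:", f1_score(y_true, y_pred), "AUC:", roc_auc_score(y_true, y_score))
```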

18.4 INTERNET OF THINGS


The Internet of Things (IoT) is a transformative technology that has fundamentally
changed the way we interact with our surroundings. It comprises a network of inter-
connected devices, objects, and systems equipped with sensors, software, and com-
munication capabilities. This empowers them to amass, exchange, and analyze data.
Within the IoT ecosystem, an expansive array of devices coexists, ranging from per-
sonal smartphones and household appliances to large-scale industrial machinery and
critical infrastructure, as shown in Figures 18.1 and 18.2.

FIGURE 18.1 Internet of Things.

FIGURE 18.2 Cloud computing diagram.

At the heart of the IoT are the devices themselves, equipped with sensors and
actuators designed to perceive
and interact with their surroundings. These sensors are adept at monitoring diverse
variables such as temperature, humidity, light, motion, and location. Subsequently,
the data collected from these sensors is relayed across networks, employing both
wired and wireless connections, for subsequent processing and analysis. Central to
the IoT landscape is connectivity, which plays a pivotal role in facilitating devices’
communication and data sharing among themselves and with cloud platforms. The
underlying objective of the IoT concept is to establish an interconnected ensemble of
devices that collaborate harmoniously to enhance efficiency, automation, and con-
venience across various facets of our daily existence. This connectivity is achieved
through a spectrum of protocols and technologies, including but not limited to
Wi-Fi, Bluetooth, Zigbee, RFID, and cellular networks. The expansion of wireless
communication technologies and the proliferation of energy-efficient, cost-effective
connectivity solutions have undeniably contributed to the burgeoning realm of IoT
applications.
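As a minimal illustration of a sensor node relaying a reading upstream, the Python
sketch below posts a JSON measurement to a cloud ingestion endpoint over HTTP. The
endpoint URL and device ID are hypothetical placeholders; real deployments frequently
use MQTT, Zigbee or cellular links instead of plain HTTP.

```python
# A minimal IoT-node sketch: package one sensor reading as JSON and relay it
# to a cloud platform over HTTP. The URL and device ID are hypothetical.
import json
import time
import urllib.request

reading = {
    "device_id": "thermo-42",          # hypothetical device identifier
    "timestamp": time.time(),
    "temperature_c": 36.8,
    "humidity_pct": 54.0,
}
req = urllib.request.Request(
    "https://example.com/iot/ingest",   # placeholder ingestion endpoint
    data=json.dumps(reading).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)            # send the measurement upstream
```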

18.5 HYPERTEXT PREPROCESSOR


Hypertext Preprocessor (PHP) is a popular scripting language known for its
versatility, predominantly used in web development to generate dynamic web
pages. It was initially designed to be seamlessly integrated into HTML source
documents. PHP code is interpreted by web servers equipped with PHP processor
modules, leading to the generation of web page documents. PHP also functions
as a general-purpose programming language, capable of executing commands in
a command-line interpreter and producing output on standard output channels. It
can even be used for graphical applications. Modern web servers extensively sup-
port PHP, and it is accessible as a standalone interpreter on a variety of operating
systems and computer platforms. Rasmus Lerdorf created it in 1995, and it has
undergone continual evolution since then. Due to the lack of a formal specifica-
tion, the PHP Group is presently responsible for the principal implementation of
PHP, which has become the unofficial standard for the language. PHP is distrib-
uted as open-source software under the PHP License, which has distinct terms
compared to the GNU General Public License (GPL). One notable difference is
that the PHP License places specific restrictions on the usage of the term “PHP.”
The term “Hypertext” in the context of “Hypertext Preprocessor” pertains to files
that are interconnected via hyperlinks, as seen in HTML (HyperText Markup
Language) files.

18.6 IoHT CATEGORIZATION SCHEME FOR CERVICAL PAP SMEAR CELLS
A deep learning-based system, known as the Cervical IoHT system, was developed
and utilized for application development, specifically to enhance the processing of
Pap smear images. This system consisted of both a front end and a back end. The
front end utilized a Python-based image-processing package for its functionality.
This module built a convolutional encoder network using the TensorFlow and Keras
frameworks. A system call was produced from the .NET framework through APIs to
execute the module, allowing the deployment of the appropriate algorithms at any
layer or node of the model. The system saves the images in either RGB or grayscale
format for pre-processing before analyzing them using trained or pretrained models to
produce effective results. The image data is imported and separated into training,
testing, and validation subsets before being processed by pretrained models using
various data augmentation and flipping approaches, as shown in Figure 18.3. The next
stage is to create a model for training and testing purposes utilizing a web-based
API framework.

FIGURE 18.3 Flow chart of ML classifier.
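A minimal sketch of the data pipeline described above is given below: images are
loaded from a directory, split into training and validation subsets, and augmented
with random flips and rotations using TensorFlow/Keras. The directory name
pap_smears/ and the image size are illustrative assumptions, not the chapter's exact
configuration.

```python
# A minimal sketch of the described pipeline: load images from a directory,
# split into training/validation subsets, and augment with random flips.
# The "pap_smears" directory name and 128x128 size are assumptions.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "pap_smears", validation_split=0.2, subset="training",
    seed=42, image_size=(128, 128), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "pap_smears", validation_split=0.2, subset="validation",
    seed=42, image_size=(128, 128), batch_size=32)

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),
])
# Apply augmentation to training images only; labels pass through unchanged.
train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```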

18.7 TRANSFER LEARNING


Training a CNN from scratch typically requires a substantial amount of data, which
can be challenging to collect, especially for classification tasks. Furthermore, cor-
rectly matching the training and testing data in real-world circumstances might
be problematic. Transfer learning has evolved as an advanced machine learn-
ing approach to address these difficulties. It entails learning to tackle a specific
problem and then reusing and applying that knowledge to solve comparable chal-
lenges in other fields. A context network is initially trained using an appropriate
dataset in transfer learning, and then the obtained information is transferred to
a specified target. The target is trained using a dataset that is suited to its unique
needs. The effectiveness of transfer learning depends on several factors, includ-
ing the choice of a suitable pretrained model and consideration of factors such
as dataset size and similarity. The pretrained model used should be appropriate
to the target problem while also taking into account the context problem or job.
Overfitting can occur when the target dataset is smaller or similar in size to the
source dataset. If the dataset is larger, the pretrained model often requires fine-
tuning rather than rebuilding the model from scratch, regardless of the precise
processes required.
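The following is a minimal transfer-learning sketch in Keras: an ImageNet-pretrained
network is frozen as a feature extractor and a new classification head is trained on
the target task. The choice of MobileNetV2 and the five-class output are illustrative
assumptions, not the chapter's exact architecture.

```python
# A minimal transfer-learning sketch: freeze an ImageNet-pretrained backbone
# and fine-tune a new classification head on the target task. MobileNetV2
# and the 5-class softmax head are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3), include_top=False, weights="imagenet")
base.trainable = False                      # reuse the learned features as-is

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(5, activation="softmax"),  # target classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # with data as above
```

On a larger target dataset, base.trainable could be switched back on for a second,
low-learning-rate fine-tuning pass instead of rebuilding the model from scratch.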

18.8 SUPPORT VECTOR MACHINES


SVMs are powerful supervised learning models widely recognized for their versatil-
ity and effectiveness in various machine learning tasks, including classification and
regression. They have found applications in diverse domains, including computer
vision, natural language processing, and bioinformatics. At the heart of SVMs lies
a fundamental concept: the search for an optimal hyperplane that maximizes the
separation between different classes within the data space. This hyperplane serves
as a decision boundary during binary classification, effectively dividing the data
into distinct regions corresponding to each class. The primary goal is to identify
the hyperplane that yields the widest margin, representing the space between the
hyperplane and the nearest data points from each class. This pursuit aims to enhance
generalization, allowing the model to effectively classify unseen data by maximiz-
ing this margin. SVMs employ a mathematical technique known as the kernel trick
to identify the most suitable hyperplane. This technique enables SVMs to implicitly
transform input data into a higher dimensional feature space, making linear separa-
tion possible even when the initial data isn’t linearly separable. The transformation
is facilitated by a kernel function that measures the similarity between data points in
the input space. Commonly used kernels include linear, polynomial, and radial basis
function (RBF) kernels. The training process of an SVM involves determining the
parameters that define the hyperplane’s properties. This process is formulated as an
optimization problem, where the goal is to minimize a cost function that accounts
for classification errors and margin size. Embedded within this cost function is a
regularization component that governs the trade-off between maximizing the mar-
gin and allowing some degree of misclassification. Techniques such as quadratic
programming or convex optimization are employed to solve this optimization chal-
lenge and identify the optimal hyperplane and its associated support vectors—data
points closest to the hyperplane.

18.8.1 Polynomial SVM


Polynomial SVMs represent a variant of the well-established SVM methodology
that offers a versatile and potent approach to addressing classification and regression
challenges. Comparable to conventional SVMs, Polynomial SVMs endeavor to iden-


tify an optimal hyperplane capable of effectively separating data instances belonging
to distinct classes. However, Polynomial SVMs differentiate themselves by integrat-
ing polynomial kernel functions, enabling the capture of intricate non-linear relation-
ships among features. This empowers them to adeptly handle complex data patterns.
Diverging from the linear SVM’s strategy, this augmentation proves especially ben-
eficial when dealing with data that resists linear separation within the original space.
By employing the polynomial kernel function, the SVM gains the ability to com-
prehend additional dimensions and interactions among features, thereby enhancing
its capacity to interpret intricate relationships. Training a Polynomial SVM involves
tackling an optimization problem with the objective of finding the hyperplane that
optimally maximizes the margin while minimizing classification errors. In this pro-
cess, the SVM calculates the coefficients that define the configuration of the decision
boundary. Simultaneously, the polynomial kernel is responsible for computing the
similarity or inner product between pairs of data points. Throughout the optimiza-
tion process, a balance is struck between the pursuit of an expansive margin and the
accommodation of a predefined level of misclassifications. To regulate the equilib-
rium between model complexity and precision, regularization strategies such as L1
or L2 regularization can be enlisted. These mechanisms introduce constraints that
influence the optimization process, ensuring a judicious trade-off between intricate
model representations and accurate predictions.

18.8.2 Quadratic SVM
Quadratic SVMs represent an advanced iteration of the conventional SVM algo-
rithm, incorporating quadratic functions into the decision-making process. This
innovation enhances their performance, particularly in scenarios involving intri-
cate non-linear relationships between features and the target variable. Quadratic
SVMs are extensively employed in classification and regression tasks that grapple
with complex data patterns. Similar to other SVM variants, Quadratic SVMs
aim to determine an optimal hyperplane that effectively segregates data points
into distinct classes. Unlike their linear SVM counterparts, which rely solely on
linear decision boundaries, Quadratic SVMs introduce quadratic terms within
the kernel function. This augmentation empowers them to capture intricate non-
linear associations between features, significantly augmenting their capability to
handle intricate data patterns. Training a Quadratic SVM involves the resolution
of an optimization problem focused on identifying the hyperplane that maximizes
the margin while minimizing classification errors. The quadratic kernel func-
tion serves to gauge the similarity between data points, and the SVM endeav-
ors to ascertain the coefficients that underpin the configuration of the decision
boundary.

18.8.3 Gaussian RBF SVM


Gaussian RBF SVMs stand as prevalent SVM methodologies adept at address-
ing non-linear classification and regression tasks through the utilization of the
Gaussian RBF kernel. This kernel empowers SVMs to effectively handle intricate
feature interactions by transforming data into a higher dimensional realm, thereby
enabling a more robust modeling of non-linear patterns. During the training phase
of Gaussian RBF SVMs, the primary objective is to identify an optimal hyper-
plane that maximizes the margin between distinct classes while simultaneously
minimizing classification errors. The Gaussian RBF kernel computes the similar-
ity between pairs of data points, while the SVM determines the coefficients gov-
erning the formulation of the decision boundary. This optimization process entails
a delicate balance between the size of the margin and the tolerance for allowable
misclassifications. In order to regulate this trade-off, regularization techniques,
such as the utilization of the C regularization parameter, are brought into play.
A significant advantage of Gaussian RBF SVMs lies in their capability to effec-
tively encapsulate highly non-linear relationships among features. The inherent
capacity of the RBF kernel equips the SVM to define intricate decision boundaries
and adeptly handle datasets that don’t exhibit linear separability. By mapping data
into an elevated-dimensional space, Gaussian RBF SVMs are adept at identifying
complex patterns and generating dependable predictions, even in the presence of
intricate non-linearities.
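To see these kernels in practice, the sketch below trains scikit-learn SVCs with
polynomial and RBF kernels on a toy non-linearly separable problem; the polynomial
kernel with degree=2 plays the role of the quadratic SVM. Data and hyperparameters
are illustrative.

```python
# A minimal comparison of the kernels discussed above on a toy dataset that
# is not linearly separable. degree=2 stands in for the quadratic SVM.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel, params in [("poly", {"degree": 3}),
                       ("poly", {"degree": 2}),     # quadratic SVM
                       ("rbf", {"gamma": "scale"})]:
    clf = SVC(kernel=kernel, C=1.0, **params).fit(X_tr, y_tr)
    print(kernel, params, "accuracy:", round(clf.score(X_te, y_te), 3))
```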

18.9 DATASET
The Pap smear database, which is available to the public, comprises 917 cervical cell
images distributed unevenly across seven categories. The resolution of the images is
0.201 μm per pixel. The dataset was produced using tissue samples from the uterine
cervix that were obtained during smear tests. To ensure accuracy, two cytotechnicians
and a doctor diagnosed the cells, including premalignant changes that precede
malignancy; such changes can make the classification procedure more challenging. The
dataset was collected from the Herlev University Hospital and contains manually
annotated ground truth segmentation of cervical cells. The images were captured
using digital cameras and a microscope and were obtained from various sources,
including clinical laboratories and research institutes. Expert pathologists exten-
sively documented and categorized each image based on the presence or absence of
abnormalities. The dataset comprises seven distinct categories, each corresponding
to specific cervical cell conditions as mentioned in Table 18.1. Within this dataset,
a prominent image is showcased, featuring a stained cell sample wherein a distinct
nucleus is encircled by cytoplasm (Figure 18.4). Moreover, for reference, a meticu-
lously manually annotated segmented image representing the ground truth accom-
panies this image (Figure 18.5). To expound further, this database encompasses a
diverse set of classes designed to facilitate the classification and recognition of cervi-
cal lesions. Notably, high-grade squamous intraepithelial lesion (HSIL) refers to cells
displaying notable and pronounced abnormalities. In contrast, low-grade squamous
intraepithelial lesion (LSIL) characterizes cells with milder irregularities. Atypical
squamous cells of undetermined significance (ASCUS) pertain to cells manifesting
ambiguous attributes that necessitate more in-depth evaluation. Lastly, the classifica-
tion category of invasive cervical cancer pertains to the advanced stage of malignant
growth in cervical cells.

TABLE 18.1
Seven Distinct Categories Assigned to Individual Pap Smear Cells
S. No Cell Type Cell Image Category

1. Carcinoma in situ Abnormal cells

2. Severe dysplasia Abnormal cells

3. Mild dysplasia Abnormal cells

4. Moderate dysplasia Abnormal cells

5. Normal columnar Normal cells

6. Intermediate squamous Normal cells

7. Superficial squamous Normal cells

The Pap smear database incorporates 917 images of cervical cells that are grouped
into seven unique classes. Classes 1–3 are considered normal, while classes 4–7 are
considered abnormal. It is observed that most cells with larger nuclei belong to the
abnormal group, yet this is not always true, as normal cells may also have nuclei
equivalent in size and chromatin distribution to those of cancerous or severely
dysplastic cells, making classification difficult. To address this, the seven classes
were reduced to five by merging classes 1–2 and classes 6–7 into single classes. This
simplifies the classification process, as displayed in Table 18.2.

FIGURE 18.4 Normal columnar cell.

FIGURE 18.5 Normal columnar cell segmented.

TABLE 18.2
The Classification Process Employed Five Distinct Categories
Class Cell Type Category
1–2 Carcinoma in situ + Severe dysplasia Abnormal cells
3 Mild dysplasia Abnormal cells
4 Moderate dysplasia Abnormal cells
5 Normal columnar Normal cells
6–7 Intermediate squamous + Superficial Squamous Normal cells

18.10 PRE-PROCESSING OF IMAGES


The basic operations applied to intensity images are referred to as "image
pre-processing." These images are represented by a two-dimensional matrix of
brightness or intensity values derived from sensor images (processed examples are
shown in Table 18.3). The presence of image redundancy is significant in such
methodologies, as it allows the estimation of the original pixel value, or a value
close to it, even in the presence of image distortion. The primary goals of
pre-processing techniques are to improve image quality, remove noise or artifacts,
and standardize the data so that features can be extracted and classified with
greater precision. The typical steps for preparing Pap smear images are listed below;
a consolidated code sketch follows the list.

Image resizing: The resolution and size of Pap smear images might vary.
Resizing the photos to a standardized resolution aids in the production of
consistent data for further analysis. It also lowers computational complexity
and memory demands.

TABLE 18.3
Intermediate Cells and Normal Superficial Cells
Steps Cell 1 Cell 2 Cell 3
Cropped and segmented

Binary cell

Cropped nucleus

Binary nucleus

TABLE 18.4
Columnar Cells
Steps Cell 1 Cell 2

Cropped and segmented

Binary cell

Cropped nucleus

Binary nucleus

Noise reduction: In Pap smear images, noise or artifacts may be present due to
various factors such as image capture, staining, or digitalization. To miti-
gate these issues and preserve essential visual information, noise reduction
techniques like Gaussian smoothing and median filtering can be employed.
These methods help reduce unwanted noise while retaining critical details
in the images mentioned in Tables 18.4 and 18.5.
Enhancement of contrast: Enhancing the contrast in Pap smear images can
significantly improve the visibility of cellular structures and abnormalities.
Various techniques, including histogram equalization and adaptive contrast
enhancement, can be applied to adjust the dynamic range of pixel intensi-
ties and enhance the visual quality of the image mentioned in Tables 18.6
and 18.7.

TABLE 18.5
Light Dysplastic Cells along with Severe Dysplastic and
Carcinoma in Situ Cell
Steps | Light Dysplastic Cells | Severe Dysplastic and Carcinoma in Situ Cell | Moderate Dysplastic Cells

Cropped and
segmented

Binary cell

Cropped nucleus

Binary nucleus

TABLE 18.6
Values for Various Classifications of Cells
Feature | Normal Columnar Cell | Intermediate and Normal Superficial | Carcinoma in Situ and Severe Dysplastic Cells | Moderate Dysplastic Cells | Light Dysplastic Cells
Area of nucleus | 12,000 approx. | 4,000 approx. | 37,000 approx. | 4,300 approx. | 26,000 approx.
Area of cell | 14,500 approx. | 18,000 approx. | 680,000 approx. | 61,000 approx. | 690,000 approx.
Eccentricity of nucleus | 0.53 approx. | 0.72 approx. | 0.48 approx. | 0.38 approx. | 0.41 approx.
Eccentricity of cell | 0.7 approx. | 0.68 approx. | 0.89 approx. | 0.62 approx. | 0.89 approx.
Area of nucleus : Area of cell | 0.08 approx. | 0.02 approx. | 0.055 approx. | 0.071 approx. | 0.038 approx.

TABLE 18.7
Texture Features
Texture Features | Normal Columnar Cell | Intermediate and Normal Superficial | Carcinoma in Situ and Severe Dysplastic Cells | Moderate Dysplastic Cells | Light Dysplastic Cells
Contrast | 0.2391–0.2238 | 0.295–0.344 | 0.0684–0.072 | 0.1984–0.1586 | 0.0901–0.0964
Autocorrelation | 48.814–48.840 | 53.906–53.894 | 54.589–54.602 | 35.889–36.92 | 56.483–56.492
Entropy | 1.715–1.693 | 1.200–1.215 | 1.185–1.182 | 1.272–1.2705 | 0.9958–0.9995
Correlation | 0.9432–0.946 | 0.940–0.927 | 0.9816–0.9806 | 0.9334–0.9486 | 0.9662–0.9638
Sum variance | 153.875–154.235 | 185.089–184.875 | 186.39–186.58 | 114.584–115.10 | 198.57–198.61
Sum average | 13.685–13.688 | 14.379–14.381 | 14.527–14.529 | 11.915–11.918 | 14.859–14.860
Variance | 48.720–48.738 | 53.829–53.841 | 54.39–54.42 | 36.799–36.82 | 56.296–56.308
Sum entropy | 1.618–1.604 | 1.115–1.123 | 1.147–1.143 | 1.484–1.463 | 0.9545–0.9544
Difference variance | 0.239–0.223 | 0.294–0.344 | 0.0684–0.072 | 0.1984–0.1586 | 0.0902–0.0965
Difference entropy | 0.303–0.284 | 0.259–0.278 | 0.1638–0.162 | 0.2286–0.1982 | 0.1646–0.1691

Image registration: Pap smear pictures may have minor misalignments or ori-
entation anomalies in some circumstances. Image registration techniques
may be used to align and fix misalignments in the dataset, assuring consis-
tency and boosting the quality of future analysis mentioned in Table 18.8.
Color normalization: Color differences in Pap smear pictures may occur owing
to differing staining methods or imaging circumstances. Color normalization
methods can be used to standardize the look of colors across photographs,
ensuring that color-based characteristics are not affected by such variances.
Cell segmentation: Cell segmentation is a critical stage in Pap smear image
processing as it enables the separation of individual cells or regions of inter-
est. By applying various segmentation methods, it becomes possible to iso-
late cells from the background, facilitating more precise feature extraction
and categorization. This step plays a pivotal role in the accurate analysis
and interpretation of Pap smear images for medical diagnosis.
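The consolidated sketch referenced above chains a few of these steps with OpenCV:
resizing to a standard resolution, median-filter noise reduction and CLAHE contrast
enhancement. The file name smear.png and the parameter values are illustrative
assumptions; registration, color normalization and segmentation would follow in a
full pipeline.

```python
# A minimal pre-processing sketch: standardize resolution, suppress noise
# and enhance contrast. "smear.png" and all parameters are illustrative.
import cv2

img = cv2.imread("smear.png", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (256, 256))            # standardize resolution
img = cv2.medianBlur(img, 3)                 # suppress salt-and-pepper noise
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img = clahe.apply(img)                       # adaptive contrast enhancement
cv2.imwrite("smear_preprocessed.png", img)
```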

TABLE 18.8
Comparative Analysis of Proposed
Technique with Previous Techniques
References Accuracy (%)
[1] 84.3
[2] 92.19
[3] 93.78
[4] 84.31
[5] 97.18
Proposed technique 95

18.10.1 Image Filters
In digital image processing, there are two primary kinds of filters: smoothing
filters and sharpening filters. Smoothing filters, such as low-pass filters, allow
the lower-frequency components of the image to pass while reducing higher-frequency
content, which can include edge information. The purpose of smoothing filters is to
reduce noise in the image. However, by smoothing edges, there is a risk of losing
boundary information. In contrast, sharpening filters aim to enhance boundary
information and function as high-pass filters, allowing the high-frequency edge
components of the image to pass. In this particular case, a bilateral filter is
utilized.

18.10.2 Bilateral Filters
Similar to Gaussian filtering, bilateral filtering takes into account the intensity val-
ues of adjacent pixels as well as the distances between them. This approach saves
the picture limits while decreasing clamor in the inward areas. The following is the
output of the bilateral filter:

BF[ I ] p =
1
Wp ∑G ( p − q ) G
σs σr (I p − Iq ) Iq (18.1)
q∈ S

The function G (|(|p-q|)|) is applied to reduce the impact of pixels that are distant
from the target pixel. The portion of the 2D Gaussian channel, which is utilized in
picture handling, gives this capability:

 i2 
− 
1 2σ 2 
Gσ (i ) = e (18.2)
2πσ 2

The parameter determines the size of the neighborhood. The blurring increases
as the value increases. In addition, W_p is a normalization function that ensures that
the sum does not exceed 1. This is provided by

Wp = ∑G ( p − q ) G
σs σr (I p − I q ). (18.3)
q∈ S

It is evident that, alongside the term G_{σ_s}(‖p − q‖) that accounts for the distance factor, another component, G_{σ_r}(|I_p − I_q|), is present. This additional term diminishes the impact of pixels whose intensity values differ from that of the target pixel, even when they are situated in close proximity to it. The degree of blurring within the target image is therefore dictated by two distinct parameters: σ_s and σ_r. This choice of parameters and methodology is highlighted in the
work of Paris and colleagues (2009) [10]. Given the significance of boundary information, especially in scenarios involving intensity inhomogeneity and noise, the bilateral filtering technique was chosen from among the various filtering strategies.
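
For illustration, the bilateral filter of Equations 18.1–18.3 is available off the shelf in OpenCV. The following minimal sketch applies it to a grayscale image, with sigmaSpace and sigmaColor playing the roles of σ_s and σ_r; the synthetic input and the parameter values are assumptions, not values taken from the study.

import cv2
import numpy as np

# Stand-in for a grayscale Pap smear image (replace with cv2.imread(...)).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

# d: diameter of the pixel neighborhood S.
# sigmaColor: range parameter (sigma_r), weighting intensity similarity.
# sigmaSpace: spatial parameter (sigma_s), weighting pixel distance.
filtered = cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)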

18.11 IMAGE SEGMENTATION


Segmentation is the process of partitioning an image into segments in order to extract important information and discard the unwanted portions. Accurate segmentation is a vital component in the identification of the region of interest (ROI) within medical imaging, serving as a valuable tool for numerous applications. This process facilitates the extraction of pertinent information and streamlines the image for subsequent processing, enabling further analysis and exploration. Improper segmentation can cause severe feature alterations; for example, it can change the apparent size of the nucleus and cytoplasm. Since no single segmentation technique works for all applications, it is essential to choose an appropriate strategy.
Active contour models, which combine energy components and constraints to dis-
tinguish the ROI from the rest of the image, are one popular segmentation technique.
Various approaches that take into account both internal and external factors are used
to determine the boundary curvature around the desired section. The boundary’s
curvature is associated with an energy function, and as the energy function is mini-
mized, the contour definition will precisely encircle the ROI.
An alternative technique for segmentation entails the utilization of Gaussian fit-
ting energy to drive active contours. This method employs a Gaussian distribution
with adaptable means and variances to characterize the local intensities observed in
the image. By incorporating spatially fluctuating local means and variances, it effec-
tively handles challenges associated with intensity inhomogeneity. These variables
are employed to define a level set function, enabling the iterative minimization of energy. A weighting function, analogous to the spatial term of the bilateral filter, diminishes the impact of pixels located far from the point under consideration. The following expression [11] defines the local Gaussian fitting energy (LGFE):

\[
E_X^{\mathrm{LGF}} = \sum_{i=1}^{N} \int_{\Omega_i \cap \mathcal{O}_X} -\log\left(p_{i,X}(I(y))\right) dy \qquad (18.4)
\]

The aim is to minimize this energy expression using an iterative technique. Here, Ω_i denotes the various image regions, and p_{i,X}(I(y)) denotes the portions of those regions that are contained within the circular boundary. The circular boundary is formed by a neighborhood with a given radius, where X denotes any location in the image domain, y denotes the location of a neighboring pixel, and I(y) denotes the intensity value at that location.

\[
p_{i,X}(I(y)) = \frac{1}{\sqrt{2\pi}\,\sigma_i(x)} \exp\left(-\frac{\left(u_i(x) - I(y)\right)^{2}}{2\sigma_i(x)^{2}}\right) \qquad (18.5)
\]

The local means and variances of intensities are represented by the variables u_i(x) and σ_i(x). A weighting function is employed, in which the weight assigned to y increases as it draws nearer to the center point x.

\[
\omega(d) =
\begin{cases}
\dfrac{1}{a}\, e^{-\frac{d^{2}}{2\sigma^{2}}} & \text{if } d \le \rho \\
0 & \text{if } d > \rho
\end{cases} \qquad (18.6)
\]

The energy equation used in the segmentation process incorporates the weighting term ω(x − y). When the distance between x and y is greater than the predetermined radius ρ, this term has a value of zero, and its weight increases as y gets closer to the center x:

\[
E_X^{\mathrm{LGF}} = \sum_{i=1}^{N} \int_{\Omega_i \cap \mathcal{O}_X} -\omega(x - y) \log\left(p_{i,X}(I(y))\right) dy \qquad (18.7)
\]
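
For readers who want to experiment, scikit-image ships a related region-based active-contour model, the morphological Chan–Vese model. The sketch below is not the LGFE formulation of Equations 18.4–18.7 but illustrates the same evolve-a-level-set workflow; the synthetic input image and the iteration count are assumptions.

import numpy as np
from skimage.segmentation import morphological_chan_vese

# Stand-in for a grayscale cell image: a bright disc on a dark background.
yy, xx = np.mgrid[:128, :128]
image = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 20.0 ** 2))

# Evolve the level set for 100 iterations from the default checkerboard
# initialization; `smoothing` regularizes the contour.
level_set = morphological_chan_vese(image, 100, smoothing=3)

mask = level_set.astype(bool)  # binary foreground/background mask
print(mask.sum(), "foreground pixels")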

18.12 FEATURE EXTRACTION


The classifier’s data cannot be directly utilized by the algorithms [12]. As a result,
the classifier systems can use the quantitative values that are extracted from the
microscopically observable graphical parameters [13]. Various methods for extract-
ing features have been developed, each with its own set of benefits and drawbacks.
Some methods are simpler to implement and computationally simpler than others.
This stage is significant for the outcome of the classification algorithm, and specialists in the field are frequently consulted to identify features (Figures 18.6 and 18.7).
Feature selection is used to remove unwanted features that could degrade the classifier's performance. First- and second-order statistical variables were used to categorize the five classes. Geometric and texture features were extracted from a segmented Pap smear image. The cell area, nucleus area, cell eccentricity, nucleus eccentricity, and nucleus-to-cytoplasm ratio were the five geometric parameters that were calculated for the cell and nucleus. To support complete image processing, texture features were also extracted from the raw images, and computations were performed on various first- and second-order statistical features.
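
A hedged sketch of the second-order (GLCM) texture computation is given below using scikit-image; the synthetic input and the offset/angle choices are assumptions. Note that graycomatrix/graycoprops are the scikit-image 0.19+ names (older releases spell them greycomatrix/greycoprops), and higher-order Haralick descriptors such as sum variance or difference entropy would require additional computation beyond graycoprops.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Stand-in for an 8-bit segmented cell image (replace with a real image).
rng = np.random.default_rng(0)
cell = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Co-occurrence matrix for a 1-pixel offset at four angles.
glcm = graycomatrix(cell, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Average each second-order property over the four angles.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "correlation", "energy", "homogeneity")}
print(features)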

18.13 FEATURE SELECTION


For every individual class, a collection of 22 key features was meticulously gath-
ered and then subjected to a PCA procedure to condense the features down to their
most pertinent elements. This employment of PCA in conjunction with bi-plot
analysis aimed to identify distinctive characteristics. In the results of the PCA, the
most impactful features were identified as cluster shade, cluster prominence, sum
variance, and sum of squares. These key features were then integrated into a multi
SVM framework, leading to the creation of a confusion matrix for evaluation and
FIGURE 18.6 Optimization Process 1.

FIGURE 18.7 Optimization Process 2.



performance assessment. These components efficiently encapsulate the most piv-
otal information and variability within the data. By strategically selecting a subset
of the most pertinent principal components, it becomes possible to effectively trim
the dimensionality of the feature space while preserving crucial information. Upon
determining these principal components through PCA, they are effectively employed
as input features across different types of multi SVMs. The SVM, a well-established
supervised learning methodology extensively utilized in classification tasks, strives
to identify the optimal hyperplane for effectively categorizing data points. To gauge
the efficacy of the classification model forged by Multi SVMs along with the incor-
porated features, a confusion matrix is generated.
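
As a rough illustration of this selection step, the following sketch uses scikit-learn to standardize a 22-feature matrix and retain the leading principal components. The synthetic feature matrix and the 95% variance threshold are assumptions, not values reported by the study.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in for the matrix of 22 features per sample from the previous stage.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 22))

# Standardize first, since geometric and texture features live on very
# different scales.
X_std = StandardScaler().fit_transform(X)

# Keep the principal components explaining 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_std)

print(X_reduced.shape, pca.explained_variance_ratio_[:4])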

18.14 CLASSIFICATION
Numerous advancements in the field of data science have been achieved, particu-
larly in the classification of data collected through feature extraction. Processing is
required due to the volume of created data, as well as some unique algorithms and
analytical tools for obtaining significant insights [15]. This data can be categorized using a number of techniques, including KNN, SVM [16], and decision trees. Every classification technique has its own benefits and drawbacks. In the preceding section, it has been found that properties extracted from segmented cells can be valuable for classification purposes in image processing. An SVM classifier was used in our study. To identify discriminative features that can be utilized for classification, we performed principal component analysis (PCA) before applying the classifier.

18.15 RESULT
A more effective image pre-processing technique involved the use of bilateral fil-
tering, a non-linear blur method known for preserving boundary information.
Subsequently, image segmentation was performed using an active contour model
incorporating LGFE. The primary objective of this approach was to extract relevant
data while disregarding irrelevant information for accurate analysis. For the segmen-
tation process, the nucleus and cell borders were specifically targeted by employing
a Gaussian distribution with fluctuating means and variances. Starting from a seed
point, active contours were marked within the image. Various parameters, such as
kernel size, time step, inner weight, outer weight, and the number of iterations, were
adjusted to optimize the segmentation procedure. On average, it took approximately
240 seconds to complete 800 cycles of this process. We used a multi-class SVM
classifier to divide the data into various classes after converting the black-and-white
images of the cell and its nucleus into segments and examining their geometric fea-
tures, such as area, eccentricity, perimeter, cell area, and nucleus area. There are four
types of classifiers: Polynomial SVM, Linear SVM, Quadratic SVM, and Gaussian
RBF SVM (Figure 18.8). Because segmentation is the classification model’s top pri-
ority, we chose the images with the highest dice coefficient values because they pro-
duced the best segmentation results. The top 200 images with the best segmentation
among the 917 were chosen for feature extraction. After that, a 90:10 ratio was used

FIGURE 18.8 (a) Polynomial SVM confusion matrix; (b) Gaussian RBF SVM confusion
matrix; and (c) Quadratic SVM confusion matrix. (Continued)

FIGURE 18.8 (Continued)

to divide these images into a training set and a testing set. A confusion matrix was
produced as an output using the SVM model.
The accuracy rates produced by various SVM types varied. With a rate of 95%,
the Polynomial SVM was the most accurate, followed by the Quadratic and the
Gaussian RBF SVM.
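
The classification stage described above can be sketched as follows with scikit-learn. The 90:10 split and the four kernels mirror the description in the text, while the synthetic stand-ins for the reduced features and labels, and the polynomial degrees used for the "Quadratic" and "Polynomial" kernels, are assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

# Stand-ins for the PCA-reduced features and the five class labels.
rng = np.random.default_rng(0)
X_reduced = rng.normal(size=(200, 4))
y = rng.integers(0, 5, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X_reduced, y, test_size=0.1, stratify=y, random_state=0)

kernels = {"Linear": SVC(kernel="linear"),
           "Quadratic": SVC(kernel="poly", degree=2),
           "Polynomial": SVC(kernel="poly", degree=3),
           "Gaussian RBF": SVC(kernel="rbf")}

for name, clf in kernels.items():
    clf.fit(X_train, y_train)   # multi-class handled one-vs-one by default
    y_pred = clf.predict(X_test)
    print(name, accuracy_score(y_test, y_pred))
    print(confusion_matrix(y_test, y_pred))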

18.16 CONCLUSION
Cervical cancer is among the most common cancers in women. The Papanicolaou test, or Pap smear test, is the best strategy for the early identification and diagnosis of cervical cancer because of its capacity to detect cell abnormalities and precancerous transformations. We developed an automated algorithm that accurately identifies the class of Pap smear images in order to bypass the laborious and ineffective manual cervical cancer detection procedure. The algorithm includes effective noise removal, segmentation, and feature extraction. To improve image quality and reduce noise, bilateral filtering was utilized. Local Gaussian fitting energy was used to segment cells and nuclei without re-initialization. The accuracy of the segmentation was assessed using the Dice coefficient, and various parameters were used to describe the shape of every cell.
For the categorization of cervical images as normal or abnormal, a neural net-
work classifier is utilized. The experimental results showcase the efficacy of the
proposed framework in accurately identifying non-cancerous and cancerous regions
within cervical images. Future studies can expand upon this work by examining
segmented cancerous areas in cervical images and classifying them based on sever-
ity levels such as “Early” or “Advanced,” enabling timely surgical interventions to
prevent fatalities. Additionally, the potential influence of different infections on cer-
vical cancer can be explored using cervical and Pap smear images.
To distinguish between normal and malignant cells and nuclei, a set of features
was extracted, encompassing texture attributes like the gray-level co-occurrence
matrix (GLCM), as well as geometric properties, including area, eccentricity, and
perimeter. These features were chosen to capture the distinctive textural and geo-
metric characteristics of the cells, aiding in the differentiation process. To address
the challenge of high-dimensional feature space, PCA was employed. PCA was
utilized to compute and condense basic statistical and texture features, effectively
reducing dimensionality while preserving relevant information. For classification,
three distinct varieties of multi-class SVM classifiers were employed. This com-
bined approach not only enables accurate classification but also identifies the most
relevant features for all categories. The performance of the classifiers was evaluated
by plotting confusion matrices for each of the four SVM classifiers used. A dataset
of approximately 200 photos was used to train and evaluate the classifiers. Among
the different SVM classifiers, the polynomial SVM classifier exhibited the highest
precision, surpassing other methodologies.

REFERENCES
1. S. Kumar Singh, P. Agrawal, V. Madaan, and M. Sharma, “Classification of clinical
dataset of cervical cancer using KNN,” Indian Journal of Science and Technology,
vol. 9, no. 28, pp. 1–5, 2016.
2. R. Srivastava, S. Srivastava, and R. Kumar, “Detection and classification of cancer
from microscopic biopsy images using clinically significant and biologically interpre-
table features,” Journal of Medical Engineering, vol. 2015, pp. 101–115, August 2015.
3. N. Theera Umpon, S. Auephanwiriyakul, and T. Chankong, “Automatic cervical cell
segmentation and classification in Pap smears,” Computer Methods and Programs in
Biomedicine, vol. 113, no. 2, p. 2014, February 2014.
4. N. Mittal, G. Singh, and M. Arya, “Cervical cancer detection using segmentation on
Pap smear images,” in the International Conference, Jaipur, 2016.
5. X. Xu, Y. He, J. Song, and J. Su, “Automatic detection of cervical cancer cells by a
two-level cascade classification system,” Analytical Cellular Pathology, vol. 2016,
pp. 8–16, May 2016.
6. J. Jantzen, and G. Dounias, “Analysis of Pap-smear Image Data,” in Proceedings of the
Nature-Inspired Smart Information System, pp. 11–18, 2006.
7. L. Lu, I. Nogues, R. M. Summers, S. Liu, J. Yao, and L. Zhang, “DeepPap: Deep con-
volutional networks for cervical cell classification,” IEEE Journal of Biomedical and
Health Informatics, pp. 21–31, November 2017.
8. J. Norup, “Classification of Pap-smear Data by Transductive Neuro-Fuzzy Methods,”
Lyngby, Denmark, 2005.
9. J. Norup, G. Dounias, B. Bjerregaard, and J. Jantzen, “Pap-smear Benchmark Data
For Pattern Classification,” NiSIS 2005: Nature Inspired Smart Information Systems
(NiSIS), EU Co-Ordination Action, 2005.
10. P. Kornprobst, J. Tumblin, F. Durand, and S. Paris, “Bilateral filtering: Theory and
applications,” Foundations and Trends® in Computer Graphics and Vision, vol. 4,
pp. 1–73, 2009.
11. L. He, A. Mishra, C. Lee, and L. Wang, “Active contours driven by local Gaussian dis-
tribution fitting energy,” Signal Processing, vol. 89, no. 12, pp. 2435–2447, December
2009.
12. E. Martin and J. Jantzen, “Pap-smear Classification,” Master’s Thesis, Automation,
Technical University of Denmark, Oersted-DTU, September 22, 2003.
13. M. Chowdhury, L. B. Mahanta, M. Kumar Kundu, A. Kumar Das, and K. Bora, “Automated
classification of Pap smear images to detect cervical dysplasia,” Computer Methods
and Programs in Biomedicine, vol. 138, pp. 31–47, January 2017.
14. Y. Hao, K. Hwang, L. Wang, L. Wang, and M. Chen, “Disease prediction by machine
learning over big data from healthcare communities,” IEEE Access, vol. 5, pp. 8869–
8879, April 2017.
15. J. Ding, J. Wang, X. Kang, and X.-H. Hu, “Building an SVM Classifier for Automated
Selection of Big Data,” in IEEE International Congress on Big Data (BigData
Congress), pp. 15–22, 2017.
16. M. Sharma, S. Kumar Singh, P. Agrawal, and V. Madaan, “Classification of clinical
dataset of cervical cancer using KNN,” Indian Journal of Science and Technology,
vol. 9, no. 28, pp. 1–5, July 2016.
17. R. Kumar, R. Srivastava, and S. Srivastava, “Detection and classification of cancer
from microscopic biopsy images using clinically significant and biologically interpre-
table features,” Journal of Medical Engineering, vol. 2015, pp. 1–14, April 2015.
18. J. Talukdar, C. Kr Nath, and P. H. Talukdar, “Fuzzy clustering based image segmenta-
tion of Pap smear images of cervical cancer cell using FCM Algorithm,” International
Journal of Engineering and Innovative Technology, vol. 3, no. 1, pp. 460–462, July 2013.
19. W. William, A. Ware, A. H. Basaza-Ejiri, and J. Obungoloch, “Cervical cancer clas-
sification from Pap-smears using an enhanced fuzzy C-means algorithm,” Informatics
in Medicine Unlocked, vol. 14, pp. 23–33, February 2019.
20. P. Liang, G. Sun, and S. Wei, “Application of deep learning algorithm in cervical cancer
MRI image segmentation based on wireless sensor,” Journal of Medical Systems,
vol. 43, no. 156, pp. 1–7, June 2019.
21. F. H. D. Araújo, R. R. V. Silva, D. M. Ushizima, M. T. Rezende, C. M. Carneiro, A.
G. C. Bianchi, and F. N. S. Medeiros, “Deep learning for cell image segmentation and
ranking,” Computerized Medical Imaging and Graphics, vol. 72, pp. 13–21, March
2019.
22. Y. Song, L. Zhang, S. Chen, D. Ni, B. Li, Y. Zhou, B. Lei, and T. Wang, “A deep learn-
ing based framework for accurate segmentation of cervical cytoplasm and nuclei,” in
Proc. EMBC, pp. 2903–2906, Aug. 2014.
23. Y. Liu, P. Zhang, Q. Song, A. Li, P. Zhang, and Z. Gui, “Automatic segmentation of
cervical nuclei based on deep learning and a conditional random field,” IEEE Access,
vol. 6, pp. 53709–53721, 2018.
24. Z. Hu, J. Tang, Z. Wang, K. Zhang, L. Zhang, and Q. Sun, “Deep learning for image-
based cancer detection and diagnosis − A survey,” Pattern Recognition, vol. 83,
pp. 134–149, 2018, ISSN 0031-3203.
19 Blockchain as a
Controller of Security in
Cyber-Physical Systems
A Watchdog for Industry 4.0
Adri Jovin John Joseph, Marikkannan Mariappan,
and Marikkannu P

19.1 INTRODUCTION
The advancement in computers, communication, and control has led to their adoption in even the most conventional mechanisms involving mechanical and electrical energy. These mechanical and electrical devices, controlled by computers, communication channels, and control systems, form the basis of a cyber-physical system (CPS).
CPSs connect computing systems with physical systems. Integrating the physical
system with computing systems provides a lot of advantages like handling physical
systems from the remote and having control over most operations seamlessly. These
advantages have made them suitable to be used in applications relevant to defense,
healthcare, household, and industrial infrastructure [1]. Some of the most common
applications that are a part of CPSs are power grids, smart cities, healthcare, smart
transportation, etc. All these create a lot of data, which greatly contributes to big
data analytics. These advantages, however, also expose such systems to risk. The various com-
ponents of a CPS are tightly coupled with one another, which indirectly infers that
each component depends on another component for its functioning [2]. When access is provided through a public network, systems can be controlled by hackers just as easily as by authorized users. When there is extensive scope for data transfer through communication channels, it naturally attracts the interest of hackers and people with malicious intent. Hence, it is important to secure the data from the access of
malicious users. A CPS should ensure the important information security services,
namely confidentiality, availability, and integrity when the data gets transferred from
one layer to the other. Further, it should ensure authenticity and a non-repudiation
nature. One of the peculiar features of data generated in CPSs is that it is distributed
over the entire network and is processed at different ends. This poses more problems
in managing the data as well as in securing the data. These data reach a longer extent
than what an end-user expects to be. The longer the extent the data reaches, the more
its impact on the business. Any compromise in any part of the data in any geographi-
cal location may have a drastic impact on the data manager as well as the service
provider. The CPS market was estimated to be around 60.5 billion US dollars in 2018 and is anticipated to grow at a compound annual rate of 9.3% over the following ten years, according to Credence Research [3]. If there were a business impact on such a billion-dollar economy, it would cripple the global economic scenario. The
Internet of Things (IoT) is a well-known use case of CPS that is popular and has
extended its application in various domains from simple farm monitoring systems to
complex nuclear reactor control and monitoring systems. The reason for quoting IoT
in this instance is that the technology, though viewed to be a complex one during its
initial phases, has become a very common one in recent days. It has advanced in a
way that even novice users could learn about it in a short period and could develop
their applications and host them over the cloud. Several cloud-hosting services offer
easier interfaces for users to access their services. But the underlying risk is with
security. Most users do not bother about the security issues or are unaware of the
security implications at the time of deployment, paving the way for intruders to easily
gain access to the system.
Blockchain is a promising innovation that has been deployed in many applica-
tions used over the internet in recent days. The blockchain contains data, which
are outcomes of transactions. These transactions are done in a distributed envi-
ronment, which is validated by consensus algorithms and is integrated as chains
of blocks. A block is a collection of transactions, which in turn contributes to the
blockchain which is a collection of blocks [4]. Like most other data storages, block-
chains can also be compromised. The presence of consensus algorithm is one key
component that can be used to prevent the compromise on data. Data cannot be
appended nor removed without consent from the involved parties or through some
proof of work. In case of trying to modify or compromise the data by some means,
the whole blockchain may get invalidated, which makes the efforts of the hacker
futile. Further, altering data in a blockchain is not as easy as many people think. It requires a lot of computational power and effort, because the
consensus algorithm utilized by each blockchain may vary depending on the frame-
work or application deployed. This property of implementing a non-alterable data
structure is termed immutability. Immutability is the core value of a blockchain
system. According to Gartner, blockchain will add 3.1 trillion US dollars in busi-
ness in 2030 [3]. Blockchains have found their application in finance, IoT, cyberse-
curity, smart contracts, identity management, reputation management etc. and suit
any application that requires decentralization and consensus among participation
parties.
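
The immutability property described above follows from the chained-hash data structure itself. The following minimal sketch is illustrative only (consensus, digital signatures, and networking are omitted); it shows how tampering with any block invalidates all subsequent hash links.

import hashlib
import json

def block_hash(block):
    # Deterministic serialization of the block contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions, "prev_hash": prev_hash}

genesis = make_block(["genesis"], "0" * 64)
chain = [genesis]
chain.append(make_block(["A pays B 5"], block_hash(chain[-1])))
chain.append(make_block(["B pays C 2"], block_hash(chain[-1])))

def is_valid(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(is_valid(chain))                       # True
chain[1]["transactions"] = ["A pays B 500"]  # tamper with a block
print(is_valid(chain))                       # False: the chain is invalidated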
How does the blockchain get aligned with CPSs? How do they ensure security in
CPSs? Blockchains are distributed ledgers, whereas CPSs produce distributed data.
This data needs to be validated and integrated, which by default ensures integrity.
The blockchains cannot be altered so easily, which further ensures the integrity of
data. Blockchains are replicated throughout the network, which ensures the availability of data. Further, various cryptographic implementations in blockchains help to ensure
the confidentiality of data. Some of the use cases that are described in this chapter
are first of their kind, and hence it would be really helpful for researchers to deep
dive into those domains.

19.2 BLOCKCHAINS AND SECURITY


Blockchain, by its basic implementation construct, has built-in cryptographic fea-
tures. The inbuilt security features of blockchains, which include encryption and hashing, are measures to implement immutability; they do not by themselves secure the data or prevent intrusions occurring at the time of execution of the consensus algorithm. However, viewed broadly, a blockchain is a data structure that is designed to perform trusted transactions amidst untrusted parties. One of the successful implementations of
blockchains is Bitcoin, which has been successfully executing operations to date
since its introduction in 2008, without any intervention by a government or other regulatory authority. Some other systems that implement blockchains
are Ethereum and Hyperledger. These have been successfully deployed in finan-
cial and business use cases [5]. Though there are several features available in
blockchains in the merit of cybersecurity, blockchain is no exception to intelligent
attacks by malicious users. However, blockchains can be engineered to provide
data security. Several security mechanisms have been devised to supplement the
capability of blockchains. A few systems are designed to address the specific
security issues related to CPSs alone. Some of them address false data injection
attacks [6–8].

19.3 THE BLOCKCHAIN – CYBER-PHYSICAL SYSTEMS RELATION


Nodes, which are the building blocks of blockchains, in general, possess the capa-
bility of self-configuring, connecting, and getting synchronized with one another
within the same peer group. Hence, it requires minimal human intervention to moni-
tor the process happening within a blockchain [1]. Blockchains are suitable for orga-
nizations that cannot afford single-point failure [6].
Some of the aspects that relate the CPSs to blockchain systems are listed
below [9]:

• Decentralized and replicated data storage, which improves the availability


of data.
• Ability to include new nodes whenever required.
• Irrespective of the nature of the participant, the process of consensus is
executed.
• Ability to execute smart contracts in a distributed manner.
• Operations are done in a ‘Majority wins’ mode.
• Public availability or transparency of data.
• Immutability of data.
• Availability of the system throughout the clock.
• In a distributed environment, the attraction of penalty for a losable stake
increases the accountability of participants.
• Load balancing throughout the network leads to graceful degradation of
the system.
• Ownership of identities and private keys by the participants.

Some of the blockchain capabilities that need to be deployed in a CPS are as


follows [10]:

• Enabling unified data channel and platform for machine-to-machine com-


munication within the CPS
• Enabling cybersecurity for the CPS
• Guaranteeing scaling and restructuring of the CPS
• Provisioning of a single data storage
• Creating digital twins using smart contracts
• Performing all operations through smart contracts

19.3.1 Challenges
One of the major challenges of deploying blockchains in CPSs is the volume of data
the blockchain must handle since it involves a huge cost to handle the volume of data.
Secondly, the CPSs designed for each application have a specific data structure, and
hence, data interoperability is another major challenge. Thirdly, the public avail-
ability of blockchains should ensure the privacy of users, and hence, the design of a
privacy framework for blockchains is a major challenge [1].
Apart from those listed above, there are certain domain-specific challenges. For
example, in the case of power grids, there may be fluctuation in the recent state, and
signature verification may contribute to additional overhead and slower execution of
processes due to consensus process and data replication [9].

19.3.2 Limitations
Though blockchain implementation has advantageous aspects in CPS, it possesses
certain limitations which are listed as follows [2]:

• The number of devices connected shall be limited based on the block size
and turnaround time of hash calculations.
• Transaction fee or a reward mechanism is mandatory.
• A set of miners connected to the blockchain may gain complete control over
the system if they possess a malicious intent.
• The entire ledger needs to be stored if a party is an endorser or miner, which
adds data storage overhead.

19.3.3 Classification
Blockchain CPSs can be categorized into further subcategories based on the acces-
sibility option, voting rights, and sustainability properties [9]. The classification is
portrayed in Figure 19.1.
Based on the accessibility option available in the blockchain-aided CPS, the sys-
tems are categorized as follows:

Open blockchain-aided CPSs: This CPS provides open access to the various
layers of the architecture. The participants are free to vote, participate, and

FIGURE 19.1 Classification of blockchain-aided cyber-physical systems.

get excluded from participation, and all processes are based on the consen-
sus mechanism that is deployed in the system.
Private blockchain-aided CPSs: The access is restricted by the owners of the
CPS. The data about the state and consensus is not exposed to the public.

Based on the voting rights in the blockchain-aided CPS, systems are categorized
as follows:

Permission-less blockchain-aided CPSs: In this type of CPS, voting is exer-


cised by everyone, but there may be variation in the weightage of the vote
depending on the relationship of the user with the system.
Permissioned blockchain-aided CPSs: In this category of CPS, all the users
are not permitted to vote. Users who have delegated the authority of voting
alone can vote during the consensus mechanism.

Based on the sustainability properties prevailing in the blockchain, they are clas-
sified as follows:

Fully self-sustaining symbiotic blockchain-aided CPSs: These CPSs do not


rely on external agents. They completely rely on the work done by the inter-
nal agents and natural resources. However, it is possible to invite external
agents to participate as internal agents.
Partially symbiotic blockchain-aided CPSs: These may partially rely on natu-
ral resources, or the resources required may be collected from agents from
an external system.
Non-symbiotic blockchain-aided CPSs: The resources required by these CPSs
are completely provided by agents from external systems.

The category of blockchain-aided CPSs based on sustainability property is appli-


cable only when they are associated with a multi-agent system.

19.4 SECURE BLOCKCHAIN DEPLOYMENT OVER CPS


The National Institute of Standards and Technology (NIST) provides guidelines on
the tolerance of CPSs against cybersecurity attacks. It provides detailed guidelines
on the mode of operation, enablement of fail-safe mode etc. which has a great impact
on the business demands of CPSs [11]. This is quite important since most modern
large-scale industries rely on CPSs for their day-to-day operations.
The transactions of a mined block of a blockchain are summarized as a hash value in the Merkle root, which helps in maintaining the integrity of the data. The records in a blockchain cannot be modified, since the blocks form a timestamped sequence. Despite the involvement of untrusted parties in the transactions, the computation and the results obtained from it are trusted.
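
A minimal sketch of how a Merkle root is computed over a block's transactions is given below. The transaction strings are placeholders, and real implementations differ in details such as serialization and odd-leaf handling; any change to a transaction changes the root.

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    # Leaves are transaction hashes; parents hash concatenated children.
    level = [sha256(tx.encode()) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last hash if odd
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

print(merkle_root(["tx1", "tx2", "tx3", "tx4"]))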

19.5 SECURE BLOCKCHAIN-ENABLED CYBER-PHYSICAL SYSTEMS


19.5.1 Power Grid
A blockchain-enabled smart grid architecture with privacy preservation as a prime
objective was designed using Hyperledger Fabric with three layers [1]:

1. Protocol layer
2. Infrastructure layer, and
3. Blockchain layer

The functionalities of each layer were well-defined in this system, and the entities
listed under each layer are displayed in Table 19.1.
The client automation which provides the platform for the Protocol Layer is
responsible for verification. It also protects the privacy of the customers. The data
provenance generator is responsible for the monitoring as well as the retrieval of data
on the delivery of power. The data thus obtained is uploaded to the Fabric platform.
The data will be attached to the ledger using the consensus scheme that is being
deployed in the blockchain system.

TABLE 19.1
System Components – Blockchain-Enabled Smart Power Grid
Protocol Layer Infrastructure Layer Blockchain Layer
Hyperledger Fabric Client Powerplant Nodes
Token Database Transmission infrastructure Copies of all blocks of transactions
Registration Database Distribution infrastructure
Credential Database Service Provider
Marketing Agent
Operations Division
Home Area Network
Advanced Metering Infrastructure
Customers

In these systems, since the data is stored in a tree data structure, large volumes of data can be handled scalably. Because the data is stored in the form of a JSON file, the interoperability issue is also addressed. A token-based access method is designed to preserve the privacy of the end-users of the system.
Another implementation of blockchain-enabled multi-agent-based power grid
systems takes the liberty of deploying permissionless blockchain in a trustless envi-
ronment [9]. A particular subclass of CPS known as an open symbiotic blockchain-
aided CPS is used in this system. This system provides security for the data stored.
Amidst the two choices of storing data, the one within the blockchain and the other
as a cryptographic hash pointer, this system stores the data in the blockchain. This
provides security for the data. Many wonder how such a system provides security
for the data. It should be noted that blockchain storage comes with a cost for each
transaction. A party wishing to access this data needs to spend enormous computa-
tional power to gain access to the data. This in turn enhances the security of the data.
Some of the secure implementations of power grids deploy blockchain as a security
measure alone [12].

19.5.2 Nuclear Power Plants


Nuclear power plant (NPP) control systems are critical CPSs. Any breach of security
in an NPP may produce devastating results. In most cases, CPSs related to NPP con-
trol are placed in isolated networks for security reasons. It is one security measure
that helps to prevent it from being accessed from a network that is available to the
public. Despite these security measures, incidents of the compromise of security in
NPPs were reported in 2010, when Stuxnet destroyed nearly a thousand centrifuges
in the Iranian uranium enrichment facility. This is a typical example that a knowl-
edgeable intruder who has sufficient knowledge about the instrumentation and con-
trol system of a nuclear facility can initiate a cyber-attack against it. Cyber-attacks
focusing on the programmable logic controllers used in nuclear facilities have also
been identified [6]. The most common one is false code injection. It is stated that the
evidence of an attack may not be available in a conventional system. An NPP that
deploys blockchain can satisfy the following security requirements:

• Identification of illicit data exploitation


• Protection against accidental deletion and manipulation
• Examination and identification of cyber-attacks
• Prevention of tampering with data and access logs

Choi et al. [6] proposed a monitoring scheme for the reactor protection system (RPS), a system designed specifically to protect the control systems of a nuclear reactor, in order to guard against false data injections. In the case of power grids, the
security issue is concerned with a public blockchain, whereas, in the case of NPP,
the security issue is concerned with that of a private blockchain. This system deploys
a novel concept, Proof-of-Monitoring (PoM) as the consensus mechanism to ensure
the integrity of the RPS.

19.5.3 Healthcare
The CPS, nevertheless, has wide applications in healthcare. One of the serious data
attacks that were encountered by the healthcare industry was the outbreak of the
Wannacry ransomware that cost the industry millions of dollars. Despite the spend-
ing and compensation, many healthcare agencies and institutions were not able to
recover the data that they had lost. Medical records are equally critical from an individual's perspective. A public blockchain implementation based on the
Ethereum blockchain is used to preserve medical data [13]. The Ethereum block-
chain is used in this model since the dependency of Ethereum on other third parties
is almost trivial. The sequence of transactions in the system is as follows:

• The healthcare data is sent to the blockchain using a decentralized applica-


tion, commonly known as DAPP.
• The Metamask wallet is used to access the Ethereum-based DAPP.
• TESTRPC generates the smart contract to perform blockchain operations.
• The blockchain transactions will be stored in each medical record and rel-
evant gas points will be deducted through the Metamask wallet.
• For each successful transaction, a unique identity is assigned.

This system is immune to any tampering or encryption of data or file systems by


ransomware.
Another healthcare CPS that relies on wireless sensor networks and cloud com-
puting has been integrated with a blockchain system [14]. The data being in a wire-
less channel is prone to several open attacks, unlike a wired system. Hence, a deep
belief network with a ResNet model is designed for diagnosis, and a convolutional
neural network is used to detect intrusions in the system. Wireless channels are inev-
itable in healthcare systems since they offer comfort for physicians and patients to
monitor and to get monitored, respectively. The risk is directly proportional to the
comfort in this case. The system is trained with the NSL-KDD 2015, CIDDS-001,
and ISIC datasets using deep belief networks. A similar implementation is found in
many other industrial CPSs [15].

19.5.4 Manufacturing Industry
A blockchain-based CPS (BCPS) in the manufacturing industry, built on the 5C-CPS architecture, ensures the security of CPSs. The tri-layer blockchain-based CPS is classified into 5Cs based on functionality [16]. Table 19.2 shows the correla-
tion between the BCPS layers and 5C-CPS architecture. The connection layer is
associated with the physical layer which may be connected to another device or is
operated by a human being. It also contains components like sensors, actuators, and
machines. The conversion layer comprises analytic tools, deep-learning networks,
and computation infrastructure like fog/edge computing infrastructure. The com-
ponent of the cyber layer comprises big data, digital twins, cloud computing infra-
structure, data warehouses, and other inter-cyber interactions. The cognition layer
contains the decision-support system. The configuration layer contains intelligent
decision-making components and management information systems. It also shows
the alignment of the 5Cs with the properties of blockchain.

TABLE 19.2
Alignment of BCPS Layers with 5C-CPS Layers and Blockchain Properties
(Blockchain properties considered: non-intermediary, decentralization, transparency, immutability, authenticity, distribution, and anonymity)
BCPS Layer       5C-CPS Layer    Applicable Blockchain Properties
Connection Net   Connection      All seven properties
Cyber Net        Conversion      Six of the seven properties
Cyber Net        Cyber           All seven properties
Management Net   Cognition       Six of the seven properties
Management Net   Configuration   Six of the seven properties

TABLE 19.3
Blockchain Contribution to BCPS
BCPS Layers Blockchain Contribution
Management Net Advanced Cryptography, Smart Contract, Asset Tokenization, P2P interactions
Cyber Net Decentralized AI, Load distribution among nodes, Advanced Cryptography,
Micro-clouds, Smart Contract, P2P interactions
Connection Net Tracking components, P2P interactions, Smart Contracts, Shared resources,
Advanced Cryptography

The blockchain contributes to the functions described in Table 19.3 in the CPS for
enablement of data security.
The above-mentioned system guarantees a secure and reliable service in the man-
ufacturing sector. Another variant of the above-mentioned model is used for trust
management services in CPSs [17].

19.6 CONCLUSION
The decentralized nature of CPSs makes them suitable to be integrated with decen-
tralized databases like blockchains. The inherent mechanisms in a blockchain namely
consensus algorithms, distributed data, and secure protocols supplement the function-
alities of CPSs. The implementation issues, the security challenges, and other factors
relevant to the implementation of blockchains in CPSs are discussed in this chapter.
As a concluding remark, blockchains should not be deployed wherever possible; rather, they should be deployed where they can genuinely enhance the system.

REFERENCES
1. X. Liang, S. Shetty, D. K. Tosh, J. Zhao, D. Li, and J. Liu, “A reliable data provenance
and privacy preservation architecture for business-driven cyber-physical systems using
blockchain,” Int. J. Inf. Secur. Priv., vol. 12, no. 4, pp. 68–81, 2018.
2. H. Rathore, A. Mohamed, and M. Guizani, “A survey of blockchain enabled cyber-
physical systems,” Sensors (Switzerland), vol. 20, no. 1, pp. 1–28, 2020.
3. A. Braeken, M. Liyanage, S. S. Kanhere, and S. Dixit, “Blockchain and cyberphysical
systems,” Computer (Long. Beach. Calif)., vol. 53, no. 9, pp. 31–35, 2020.
4. D. B. Rawat, V. Chaudhary, and R. Doku, “Blockchain technology: Emerging applica-
tions and use cases for secure and trustworthy smart systems,” J. Cybersecurity Priv.,
vol. 1, no. 1, pp. 4–18, 2020.
5. P. J. Taylor, T. Dargahi, A. Dehghantanha, R. M. Parizi, and K. K. R. Choo, “A system-
atic literature review of blockchain cyber security,” Digit. Commun. Netw., vol. 6, no. 2,
pp. 147–156, 2020.
6. M. K. Choi, C. Y. Yeun, and P. H. Seong, “A novel monitoring system for the data
integrity of reactor protection system using blockchain technology,” IEEE Access, vol. 8,
pp. 118732–118740, 2020.
7. N. Etemadi, P. Van Gelder, and F. Strozzi, “An ism modeling of barriers for block-
chain/distributed ledger technology adoption in supply chains towards cybersecurity,”
Sustain., vol. 13, no. 9, pp. 1–28, 2021.
8. J. Zhang, L. Pan, Q. L. Han, C. Chen, S. Wen, and Y. Xiang, “Deep learning based
attack detection for cyber-physical system cybersecurity: A survey,” IEEE/CAA J.
Autom. Sin., vol. 9, no. 3, pp. 377–391, 2022.
9. R. Skowroński, “The open blockchain-aided multi-agent symbiotic cyber–physical
systems,” Futur. Gener. Comput. Syst., vol. 94, pp. 430–443, 2019.
10. Y. Maleh, S. Lakkineni, L. Tawalbeh, and A. A. AbdEl-Latif, Blockchain for Cyber-
Physical Systems: Challenges and Applications, 2022. https://link.springer.com/chapter/
10.1007/978-3-030-93646-4_2
11. E. R. Griffor, C. Greer, D. A. Wollman, and M. J. Burns, “Framework for cyber-physical
systems: Volume 1, overview,” Nist, vol. 1, no. 1, p. 79, 2017.
12. G. Liang, S. R. Weller, F. Luo, J. Zhao, and Z. Y. Dong, “Distributed blockchain-based
data protection framework for modern power systems against cyber attacks,” IEEE
Trans. Smart Grid, vol. 10, no. 3, pp. 3162–3173, 2019.
13. R. Ch, G. Srivastava, Y. L. V. Nagasree, A. Ponugumati, and S. Ramachandran,
“Robust cyber-physical system enabled smart healthcare unit using blockchain technol-
ogy,” Electronics, vol. 11, no. 19, p. 3070, 2022.
14. G. N. Nguyen, N. H. Le Viet, M. Elhoseny, K. Shankar, B. B. Gupta, and A. A. A.
El-Latif, “Secure blockchain enabled cyber–physical systems in healthcare using deep
belief network with ResNet model,” J. Parallel Distrib. Comput., vol. 153, pp. 150–160,
2021.
15. S. Rathore and J. H. Park, “A blockchain-based deep learning approach for cyber secu-
rity in next generation industrial cyber-physical systems,” IEEE Trans. Ind. Inf., vol. 17,
no. 8, pp. 5522–5532, 2021.
16. J. Lee, M. Azamfar, and J. Singh, “A blockchain enabled cyber-physical system archi-
tecture for industry 4.0 manufacturing systems,” Manuf. Lett., vol. 20, pp. 34–39, 2019.
17. B. K. Mohanta, U. Satapathy, M. R. Dey, S. S. Panda, and D. Jena, “Trust Management
in Cyber Physical System using Blockchain,” 2020 11th Int. Conf. Comput. Commun.
Netw. Technol. ICCCNT 2020, 2020.
20 Energy Management in
Industry 4.0 Using AI
Jeevitha D, Deepa Jose, Sachi Nandan Mohanty,
and A. Latha

20.1 INTRODUCTION
Energy Management using AI is an emerging field that combines cutting-edge tech-
nologies with sustainable practices to optimize energy usage in industrial settings.
By utilizing the capabilities of artificial intelligence (AI), businesses can achieve
significant improvements in energy efficiency, cost savings, and environmental
sustainability.
According to a report by McKinsey & Company, AI-powered energy manage-
ment systems can reduce energy consumption by up to 20% in manufacturing indus-
tries [1]. A significant advantage of implementing AI in energy management is its
ability to enable predictive maintenance. By analyzing historical data and machine-
learning algorithms, AI systems can forecast equipment failures and maintenance
requirements, allowing for proactive interventions. This not only reduces downtime
and maintenance costs but also enhances overall operational efficiency [2].
Furthermore, AI-based energy management systems can facilitate demand man-
agement programs. By participating in such programs, organizations can optimize
their energy usage, contribute to grid stability, and potentially earn financial incen-
tives [3].
A study conducted by the International Journal of Advanced Manufacturing
Technology highlights that AI algorithms, combined with Internet of Things (IoT)
technologies, can significantly improve energy efficiency in industrial processes [4].

20.2 AI TECHNIQUES FOR ENERGY MANAGEMENT


Energy management plays a crucial role in Industry 4.0, where automation and data-
driven technologies are transforming the industrial landscape. As companies strive
for greater efficiency and sustainability, AI techniques are emerging as powerful
tools to reduce costs and maximize energy use.

20.2.1 Real-Time Data Collection with IoT and Smart Sensors


Real-time information collection using the IoT and smart sensors is revolutioniz-
ing the way businesses gather and utilize information. By combining IoT devices
and smart sensors into various systems and processes, organizations can collect
real-time data from physical objects, environments, and assets, enabling them to
make informed decisions and optimize operations.
IoT devices, such as sensors, connected devices, and actuators, are capable of capturing and transmitting data over the internet. These devices can be embedded in various
objects and environments, including manufacturing equipment, vehicles, buildings,
and even wearable devices.
Smart sensors play a crucial role in collecting real-time data. The purpose of
these sensors is to detect and measure specific physical properties or environmental
factors. They can provide accurate and precise measurements, ensuring reliable data
collection. Temperature sensors, occupancy sensors, air quality sensors, and vibra-
tion sensors are examples of smart sensors.
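
As a rough sketch of such real-time collection, the following hypothetical sensor node samples two readings and packages them as JSON. The read_* functions are simulated stand-ins for real sensor drivers, and the print step stands in for network transmission to a gateway (e.g., over HTTP or MQTT).

import json
import random
import time

def read_temperature_c():
    return round(random.uniform(20.0, 28.0), 2)  # simulated sensor value

def read_power_kw():
    return round(random.uniform(3.0, 7.5), 2)    # simulated sensor value

for _ in range(3):  # three sample cycles
    reading = {"sensor_id": "line-3-meter",
               "timestamp": time.time(),
               "temperature_c": read_temperature_c(),
               "power_kw": read_power_kw()}
    print(json.dumps(reading))  # stand-in for transmission to the gateway
    time.sleep(1)               # sampling interval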

20.2.2 Energy Monitoring and Visualization Using AI


A key advantage of AI in energy monitoring is its ability to detect anomalies and abnormalities in energy consumption patterns [5]. AI algorithms can review huge volumes of information and identify deviations from expected energy usage, indicating
potential equipment malfunctions, energy wastage, or operational inefficien-
cies. By detecting these issues promptly, businesses can take corrective actions
and optimize energy usage, resulting in cost savings and improved operational
efficiency.
In Figure 20.1, each block represents a specific component or functionality of the
energy monitoring and visualization system using AI. Data flows from left to right,
starting with data sources and moving through data preprocessing, AI analytics,
predictive analytics, real-time monitoring, energy visualization, anomaly detection,
efficiency recommendations, integration with energy management systems, cloud
connectivity, user interface, and finally, generating energy reports and insights. The
AI analytics engine plays a central role in processing data and enabling various
AI-driven functionalities.

20.2.3 Dynamic Energy Load Management with AI Algorithms


The goal of dynamic energy load management with AI algorithms is to optimize energy usage and manage the load effectively in real time [6]. By utilizing AI
algorithms, businesses can intelligently allocate and distribute energy resources
based on demand, environmental factors, and cost considerations.
AI algorithms play a key role in dynamic energy load management by analyzing large volumes of information on weather conditions, occupancy levels, and energy prices. These algorithms predict future load demands and make informed decisions
to optimize energy utilization.
One key benefit of dynamic energy load management using AI is the ability to
balance energy supply and demand. By continuously monitoring consumption pat-
terns and predicting future load demands, AI algorithms can optimize energy distri-
bution and reduce peak loads. This not only helps prevent overloading and blackouts
but also minimizes energy waste and improves overall grid stability.

FIGURE 20.1 Energy monitoring and visualization.

Another advantage of AI-powered load management is its ability to incorporate


renewable energy sources effectively. AI algorithms can analyze weather forecasts,
solar irradiance data, and wind speed data to determine the optimal times to utilize
renewable energy sources. This ensures that renewable energy is maximized when
available, reducing reliance on non-renewable sources and promoting sustainability.
Organizations can modify their energy consumption based on shifting demands
and price fluctuations. This flexibility not only results in cost savings but also con-
tributes to the stability of the energy grid.
Implementing dynamic energy load management with AI algorithms offers
numerous advantages such as improved energy efficiency, reduced costs, and
enhanced sustainability. By leveraging AI’s analytical capabilities, businesses can
optimize energy distribution, balance supply and demand, and integrate renewable
energy sources more effectively. This intelligent approach to load management has
the potential to transform the energy landscape and bring about a more sustainable
future.
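
A minimal sketch of the forecasting step that such load management relies on is shown below, using a gradient-boosting regressor from scikit-learn. The synthetic data, column names, and feature choices are illustrative assumptions; in practice these columns would come from metering and weather feeds.

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for historical plant-load data.
rng = np.random.default_rng(0)
n = 24 * 60
df = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=n, freq="h"),
    "outdoor_temp_c": rng.normal(25, 5, n),
    "occupancy": rng.integers(0, 200, n),
})
df["hour"] = df["timestamp"].dt.hour
df["weekday"] = df["timestamp"].dt.weekday
# Load depends on occupancy, temperature, and time of day plus noise.
df["load_kw"] = (2.0 * df["occupancy"] + 3.0 * df["outdoor_temp_c"]
                 + 10.0 * df["hour"] + rng.normal(0, 20, n))

features = ["hour", "weekday", "outdoor_temp_c", "occupancy"]
model = GradientBoostingRegressor().fit(df[features], df["load_kw"])

# Forecast the load for a hypothetical upcoming hour.
next_hour = pd.DataFrame([{"hour": 14, "weekday": 2,
                           "outdoor_temp_c": 31.5, "occupancy": 120}])
print(model.predict(next_hour))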

20.2.4 Anomaly Detection and Preventive Maintenance


Anomaly detection involves the identification of patterns or events that deviate sig-
nificantly from the expected behavior. In the context of industrial operations, this
could include unusual sensor readings, abnormal machine performance, or devia-
tions in energy consumption. By continuously monitoring data streams from sensors,
IoT devices, and other sources, machine-learning algorithms can analyze historical
patterns and identify outlier events in real time.
The benefits of anomaly detection are twofold. First, it enables early detection of
potential failures or malfunctions in machinery or systems. By flagging anomalies,
companies can investigate the causes, assess the severity of the issue, and take cor-
rective actions before the situation worsens. This helps to prevent unexpected down-
time, production delays, and equipment damage.
Second, anomaly detection contributes to proactive maintenance strategies. Rather
than relying solely on predetermined maintenance schedules or reactive repairs, busi-
nesses can adopt a predictive maintenance approach. By examining historical and real-
time data, machine-learning algorithms can forecast equipment failure probabilities
and estimate the optimal timing for maintenance activities. This allows for planned
downtime, reduced maintenance costs, and increased overall operational efficiency.
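
One common unsupervised way to implement the anomaly detection described above is an Isolation Forest. The sketch below flags outlier sensor readings; the synthetic data and the contamination rate are assumptions standing in for real historical streams.

import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for historical readings: [power_kw, temperature_c].
rng = np.random.default_rng(0)
history = np.column_stack([rng.normal(5.0, 0.5, 1000),
                           rng.normal(40.0, 2.0, 1000)])

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Score new readings: -1 marks an anomaly, 1 marks normal behavior.
new_readings = np.array([[5.1, 41.0],     # plausible operating point
                         [19.7, 88.0]])   # abnormal spike
print(detector.predict(new_readings))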

20.2.5 Optimization of Energy Systems with AI


The optimization of energy systems using AI techniques has gained a lot of attention
in recent years. AI offers powerful tools to analyze complex energy data to improve
the effectiveness of energy systems across various sectors.
One area where AI optimization can be applied is in electricity generation and
distribution. By dynamically adjusting the mix of generation technologies, such as
fossil fuel-based plants, renewable energy sources, and energy storage systems, AI
can minimize costs, reduce greenhouse gas emissions, and ensure grid stability.
Furthermore, AI significantly enhances the energy storage systems. By analyzing
historical data on energy demand and supply patterns, AI algorithms can determine
the optimal sizing, placement, and operation strategies for energy storage assets.
This helps maximize the utilization of renewable energy sources, smooth out inter-
mittent generation, and enhance grid stability.
In addition to these applications, AI optimization techniques can be applied
to various other aspects of energy systems, such as demand response programs,
microgrid management, and energy trading. AI can find the best solutions that would
be challenging or time-consuming for traditional optimization approaches.
Overall, the integration of AI in the optimization of energy systems holds great
potential for achieving energy efficiency. By leveraging AI algorithms to analyze
data, make predictions, and optimize operations, energy systems can be better man-
aged, leading to a more resilient and sustainable energy future.
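
As an illustration of the generation-mix optimization mentioned above, the following sketch poses a one-hour economic dispatch as a linear program with SciPy: meet a fixed demand at minimum cost, subject to per-source capacity limits. All costs, capacities, and the demand figure are illustrative assumptions.

from scipy.optimize import linprog

# Cost per MWh for [solar, wind, gas] (hypothetical values).
cost = [12.0, 15.0, 60.0]
# Available capacity in MW for each source this hour.
capacity = [40.0, 55.0, 200.0]
demand = 120.0  # MW to be supplied

# Equality constraint: total generation equals demand.
result = linprog(c=cost,
                 A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],
                 bounds=list(zip([0.0] * 3, capacity)))

print(result.x)    # optimal dispatch: cheap renewables are used first
print(result.fun)  # total cost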

20.3 CASE STUDIES AND APPLICATIONS


20.3.1 AI-Driven Energy Management in Manufacturing Plants
AI-powered predictive models excel at anticipating energy demands and optimizing resource allocation. By analyzing historical data and considering various factors
such as production schedules, weather conditions, and equipment performance,
these models can accurately forecast energy requirements. This enables manufac-
turing plants to plan their energy consumption, adjust production schedules, and
optimize energy usage during periods of lower demand.
Real-time monitoring and control systems, integrated with AI algorithms,
enable proactive energy management. By continuously collecting data from sensors
installed throughout the facility, AI-driven systems can monitor energy consumption
in real time. They can identify areas of excessive energy use or potential equipment
malfunctions and trigger alerts for immediate action. This proactive approach helps
prevent energy waste, reduce downtime, and ensure efficient operation.

20.3.2 Enhancing Energy Efficiency in Supply Chain with AI


One way AI can enhance energy efficiency is through predictive analytics. This
enables businesses to make informed decisions regarding energy consumption, such
as optimizing transportation routes, adjusting production schedules, and managing
inventory levels. By accurately predicting energy demands, businesses can reduce
energy waste and optimize resource allocation.
Optimizing lighting and HVAC systems, identifying equipment malfunctions,
and implementing energy-saving measures are crucial steps in reducing energy con-
sumption. By automating energy management processes, AI can continuously moni-
tor and adjust the usage of energy in these facilities, leading to substantial energy
savings.

20.3.3 AI-Assisted Process Design for Energy Optimization


AI-assisted process design is a powerful tool for energy optimization in various
industries. By leveraging AI, businesses can analyze and optimize their operational
processes to minimize energy consumption, reduce costs, and improve sustainability.
Here are some ways AI can assist in process design for energy optimization:

Data analysis: By understanding how different process variables impact


energy usage, businesses can identify areas for improvement and develop
strategies for optimization.
Energy efficiency recommendations: AI algorithms can provide real-time rec-
ommendations for improving energy efficiency within specific processes.
For example, by analyzing operational parameters such as temperature,
pressure, and flow rates, AI can suggest optimal set points or equipment
configurations that minimize energy consumption while maintaining
productivity.
Dynamic load management: AI can optimize load distribution by intelligently


allocating energy resources based on demand patterns, energy prices, and
grid conditions. By continuously monitoring and analyzing real-time infor-
mation, AI algorithms can adjust energy usage in response to changing con-
ditions, reducing peak loads and optimizing overall energy consumption.
Process optimization: AI can optimize complex industrial processes by iden-
tifying the most energy-efficient configurations, sequences, and operating
conditions. Through simulation and optimization algorithms, AI can explore
different process design options and identify optimal solutions that minimize
energy consumption without compromising productivity or product quality.
Continuous improvement: AI can enable continuous improvement by learn-
ing from real-time data and feedback. By continuously monitoring energy
usage and performance metrics, AI algorithms can adapt and refine process
designs over time, leading to ongoing energy optimization.
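As a hedged sketch of the set-point optimization described above, the code below minimizes a hypothetical surrogate energy model subject to a throughput constraint; the model coefficients, bounds, and the throughput target are made-up assumptions, and in practice both functions would be fitted from plant data.

from scipy.optimize import minimize

def energy_kwh(x):
    temp, pressure = x
    # Hypothetical surrogate: energy rises with pressure and with
    # deviation from a thermally efficient temperature of 180 units.
    return 0.02 * (temp - 180.0) ** 2 + 1.5 * pressure

def throughput(x):
    temp, pressure = x
    return 0.4 * temp + 2.0 * pressure  # assumed process model

result = minimize(
    energy_kwh,
    x0=[200.0, 30.0],
    bounds=[(150.0, 250.0), (10.0, 60.0)],
    constraints=[{"type": "ineq", "fun": lambda x: throughput(x) - 120.0}],
)
print("Optimal set points (temp, pressure):", result.x)

The same pattern extends to the process-optimization point above: richer simulators and more decision variables, but the same minimize-energy-subject-to-productivity formulation.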

20.3.4 Sustainable Energy Practices through AI Integration


Integrating AI into sustainable energy practices has revolutionized the generation and consumption of energy. By leveraging AI capabilities, we can optimize energy systems, increase renewable energy adoption, and enhance overall energy efficiency.
Here are some ways in which AI integration can drive sustainable energy practices:

• Renewable energy resource optimization
• Energy storage systems
• Smart energy management and distribution
• Energy consumption data and analysis

Table 20.1 provides a high-level overview, and the specific AI techniques and
tools used for integration can vary depending on the specific application and techno-
logical advancements.

TABLE 20.1
Sustainable Energy Practice and AI Integration Aspects

Sustainable Energy Practice            | AI Integration Aspects
Renewable energy generation            | Predictive analytics for solar/wind energy generation; optimization of energy output using AI algorithms
Energy storage                         | Intelligent battery management and optimization; predictive maintenance of energy storage systems
Smart energy management                | Load forecasting, demand-side management, real-time grid monitoring, and stability analysis; automated control for energy routing and balancing
Energy consumption data and analysis   | Data collection from smart meters and IoT devices; energy usage pattern recognition through machine learning; identification of energy-saving opportunities
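To illustrate the load-forecasting aspect listed in Table 20.1, the following minimal sketch fits a linear regression on lagged consumption values; the synthetic daily-cycle data and the choice of lag features are assumptions for demonstration only.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
hours = np.arange(24 * 30)                    # 30 days of hourly load
load = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Features for hour t: the load 24 hours earlier and 1 hour earlier.
X = np.column_stack([load[:-24], load[23:-1]])
y = load[24:]

model = LinearRegression().fit(X, y)
next_hour = model.predict([[load[-24], load[-1]]])
print(f"Forecast for the next hour: {next_hour[0]:.1f} kWh")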

20.3.5 Cost-Saving Measures and Improved ROI


Artificial-intelligence-driven energy management offers several cost-saving measures and can significantly improve return on investment (ROI) for businesses. By using AI algorithms and real-time data analysis, energy management systems can optimize energy usage, identify inefficiencies, and make proactive adjustments to reduce costs.
By implementing AI-driven energy management solutions, businesses can achieve significant cost savings, improve energy efficiency, and enhance their ROI. These systems enable proactive energy management and identification of opportunities for improvement, resulting in financial benefits.

20.4 CHALLENGES AND OPPORTUNITIES


20.4.1 Information Security and Privacy
Information security and privacy have become increasingly crucial in the modern
digital era. Here are some key considerations:

• Data breaches
• Privacy laws and regulations
• Data minimization and purpose limitation
• User consent and transparency
• Secure storage and data handling
• Third-party data sharing
• Ethical considerations

20.4.2 Integration of Artificial Intelligence with Legacy Systems

The integration of AI with legacy systems presents both challenges and oppor-
tunities for organizations. Legacy systems are typically older technology stacks
that have existed for a long time and may not be designed to work seamlessly
with AI applications. However, with careful planning and implementation, the
integration of AI can unlock new capabilities and enhance the functionality of
legacy systems.
A key challenge is the availability of computing resources. AI applications typically require significant computational power to process large data sets and train complex models. Organizations may need to consider cloud-based solutions or upgrade their hardware to handle the computational demands of AI.
Integration also requires careful consideration of the existing workflows and processes within the organization. Legacy systems often have well-established workflows that may not align with AI-driven processes. Organizations need to assess how AI can enhance or streamline existing workflows and make any necessary adjustments to ensure a smooth integration. This may involve changes to data flows, user interfaces, or backend processes to accommodate AI capabilities.

20.4.3 Skilled Workforce and Training Requirements


In a rapidly evolving business landscape, having a skilled workforce is crucial for
organizations to thrive. As industries continue to adopt new technologies and adapt
to changing market demands, the importance of workforce training and development
cannot be overstated. Here are some key considerations regarding skilled workforce
and training requirements:

• Technology advancements
• Digital literacy
• Soft skills
• Industry-specific expertise
• Lifelong learning
• Collaboration with educational institutions
• Employee engagement and retention

By prioritizing training and development, employers can equip their employees with the necessary skills to thrive in a dynamic work environment, foster innovation, and maintain a competitive edge.

20.4.4 Scalability and Adaptability of AI Solutions


Scalability and adaptability are crucial considerations when implementing AI solutions. Here is a breakdown of their importance in the context of AI.
Scalability: AI solutions need to be scalable to handle growing data volumes, increasing workloads, and expanding user bases. Scalability ensures that the AI system can handle larger datasets, process more complex algorithms, and support a higher number of concurrent users without compromising performance.

1. Horizontal scalability: This refers to the ability to add more computing resources, such as servers or nodes, to distribute the workload and increase processing capacity. Horizontal scalability allows AI systems to accommodate larger datasets and handle increased computational demands.
2. Vertical scalability: This involves increasing the resources, such as mem-
ory or processing power, of individual machines within the AI infrastruc-
ture. Vertical scalability allows AI systems to handle more intensive tasks
and complex algorithms by providing additional resources to a single
machine.
3. Cloud-based scalability: Leveraging cloud computing resources provides
on-demand scalability for AI solutions. Cloud platforms offer elastic
resources that can be dynamically allocated based on workload require-
ments, allowing organizations to scale up or down as needed.

Adaptability: AI solutions must be adaptable to evolving business needs, changing environments, and emerging technologies. Adaptable AI systems can continue delivering value and remain relevant over time.

1. Flexibility: AI systems should be designed with flexibility in mind, allowing for easy integration with existing infrastructure, applications, and data sources. This flexibility enables seamless adaptation to changing business requirements and the ability to leverage new technologies as they emerge.
2. Continuous learning: AI models can be trained and fine-tuned using additional data to improve performance over time. Implementing mechanisms for continuous learning ensures that AI systems can adapt to new patterns, trends, and user preferences, resulting in more accurate predictions and recommendations (a minimal sketch follows this list).
3. Interoperability: AI solutions should be compatible with various platforms,
frameworks, and data formats to facilitate integration and collaboration
across different systems. Interoperability allows organizations to leverage
existing infrastructure and tools while adding AI capabilities.
4. Modularity: Building AI systems with modular components enables flex-
ibility and adaptability. By dividing the system into smaller, self-contained
modules, organizations can modify or replace specific components with-
out affecting the overall system, making it easier to adapt to changing
requirements.
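A minimal sketch of the continuous-learning point above, assuming pre-normalized features and an incremental linear model; the feature meanings, batch size, and the simulated relation are illustrative assumptions.

import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(random_state=0)

# Simulated stream: each batch maps normalized (temperature, load factor)
# readings to observed energy use; the true relation here is made up.
rng = np.random.default_rng(1)
for _ in range(100):
    X = rng.uniform(0.0, 1.0, size=(32, 2))
    y = 20.0 * X[:, 0] + 40.0 * X[:, 1] + rng.normal(0.0, 1.0, 32)
    model.partial_fit(X, y)   # refine the model batch by batch

print(model.predict([[0.5, 0.8]]))  # estimate for current conditions (true value ≈ 42)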

Scalability and adaptability are critical for successful AI implementations. They ensure that AI solutions can handle increasing demands, accommodate growing datasets, evolve with changing business needs, and integrate seamlessly with existing infrastructure. By considering these factors, organizations can create reliable, future-proof AI systems that continue to deliver value over the long term.

20.4.5 Regulatory Compliance and Standards


Regulatory compliance and standards play an essential role in the implementation and application of AI technologies. They ensure the ethical and responsible use of AI systems while safeguarding privacy, security, and fairness, and they encourage organizations to address biases, ensure explainability, and establish mechanisms for accountability in AI decision-making. It is important for organizations developing or implementing AI systems to stay updated on relevant regulatory requirements and standards in their respective jurisdictions.
Algorithmic accountability: Various jurisdictions are considering or imple-
menting regulations related to algorithmic accountability. These regulations
aim to promote transparency and fairness in AI systems by requiring organiza-
tions to explain how their algorithms make decisions that impact individuals or
society.

20.5 FUTURE DIRECTIONS AND CONCLUSIONS


20.5.1 The Evolving Landscape of Artificial Intelligence and Industry 4.0

The evolving landscape of AI and Industry 4.0 is reshaping industries. Here is a glimpse into this rapidly changing landscape.

1. AI-powered automation: AI is revolutionizing automation by enabling machines and systems to perform tasks that traditionally require human intervention. From manufacturing and logistics to healthcare and customer service, AI-driven automation is streamlining processes, increasing efficiency, and reducing the risk of errors, allowing businesses to cut costs and improve productivity.
2. Data-driven decision-making: The rise of AI and Industry 4.0 has led to an explosion of data. With advanced analytics and machine-learning algorithms, organizations can now gain insightful information from large volumes of data. A data-driven decision-making approach helps companies to make wise decisions, predict trends, identify patterns, and optimize their strategies for better outcomes.
3. Enhanced customer experience: AI technologies like natural language pro-
cessing (NLP), chatbots, and virtual assistants are transforming customer
experience. Businesses are leveraging AI-powered solutions to provide per-
sonalized recommendations, deliver prompt customer support, and auto-
mate routine interactions.
4. Intelligent manufacturing and supply chain: Industry 4.0 is characterized by the convergence of physical and digital systems, creating “smart factories” and “smart supply chains.” AI-driven robotics, IoT devices, and predictive analytics are enhancing production processes, lowering downtime, and enabling real-time monitoring and control. This results in improved quality, agility, and cost-effectiveness in manufacturing and supply chain operations.
5. Cybersecurity and privacy challenges: With the increasing reliance on AI
and interconnected systems, cybersecurity and privacy concerns are grow-
ing. As more devices, networks, and data become interlinked, protecting
sensitive information and ensuring system integrity becomes paramount.
Organizations need to invest in robust cybersecurity measures, implement
encryption protocols, and adhere to privacy regulations to mitigate these
risks.
6. Ethical considerations: As AI becomes more pervasive, ethical considerations and responsible AI development are gaining importance. Issues like bias in algorithms, transparency, equity, and responsibility need to be addressed. Ethical guidelines and frameworks are being developed to uphold societal values and respect human rights.
7. Skills and workforce transformation: While automation may eliminate certain jobs, it also creates new roles and demands a different set of skills. Upskilling and reskilling programs are essential to equip individuals with the digital literacy and technical competencies required to thrive in this evolving landscape.

Embracing this transformative landscape offers immense opportunities for innovation, efficiency, and growth. However, it is crucial to navigate the associated challenges, including cybersecurity, ethics, and workforce transformation, to ensure a sustainable and inclusive future.

20.5.2 Potential Applications and Innovations in Energy Management


Energy management is critical in addressing global issues such as sustainability and
climate change. With advancements in technology, there are several potential appli-
cations and innovations that can revolutionize energy management and pave the way
for a more eco-friendly and effective future.

Energy storage systems: Scalable energy storage technologies are essential for the widespread adoption of renewable energy sources. Sensory data can be forecasted through a data prediction model in the cloud [7]. Battery technology advancements such as solid-state batteries, lithium-ion batteries, and flow batteries are enabling reliable energy storage for both grid-scale and residential applications.
Demand response systems: Demand response systems utilize real-time infor-
mation and communication technologies to modify energy consumption in
response to grid conditions and pricing signals. Demand response systems
help to relieve grid strain and promote energy efficiency.
Energy management software: Advanced software solutions are being developed
to optimize energy consumption in buildings and industrial processes. These
platforms analyze information from sensors, meters, and other tools using
machine-learning algorithms to identify energy-saving opportunities, automate
control systems, and provide actionable insights for optimizing energy usage.
IoT in energy management: By combining IoT devices with energy management systems, organizations can monitor energy usage, identify inefficiencies, and implement targeted interventions to reduce energy waste.

Figure 20.2 demonstrates the role of IoT in energy management. Its benefits include energy cost reduction, predictive maintenance, carbon footprint tracking, green energy implementation, energy audit and compliance, energy efficiency optimization, and reduced energy spending.

Energy-efficient building technologies: Innovations in building materials, insulation, lighting, and HVAC systems are improving energy efficiency in residential and commercial buildings. Energy-efficient windows, smart thermostats, LED lighting, and occupancy sensors are just a few examples of technology-driven solutions that can significantly reduce energy consumption in buildings.

FIGURE 20.2 Benefits of IoT in energy management.

Distributed energy resources (DERs): DERs comprise distributed generation systems like rooftop solar panels, small wind turbines, and microgrids. These localized energy sources can help create a more resilient and decentralized energy infrastructure, reducing transmission losses and increasing renewable energy adoption at the community level.

20.5.3 Collaboration and Partnerships


Here are some key considerations regarding collaboration and partnerships for
AI-driven energy management:

Industry collaboration: Collaboration among energy companies, technology providers, and research institutions is essential to advance AI-driven energy management. By sharing knowledge, expertise, and resources, stakeholders can collectively address common problems and develop innovative solutions. Collaborative efforts can include joint research projects, data-sharing initiatives, and industry-wide standards development.
Technology providers and energy companies: Partnerships between technology providers specializing in AI and energy companies are instrumental in developing tailored solutions for efficient energy management. These partnerships enable the integration of AI algorithms, machine-learning models, and data analytics into energy systems, facilitating real-time optimization, predictive maintenance, and demand response. This collaboration can result in cost savings, improved operational efficiency, and reduced environmental impact.
Data integration and sharing: Effective collaboration requires combining and
sharing data from various sources. Energy companies can collaborate with
data providers, such as weather forecasters, grid operators, and IoT device
manufacturers, to access real-time data for accurate forecasting, demand-
side management, and grid optimization. This collaboration enables better
decision-making.
Research institutions and startups: Partnerships between research institu-
tions and startups can drive innovation in energy management. Research
institutions provide valuable insights, expertise, and access to cutting-edge
research findings, while startups bring fresh ideas, entrepreneurial spirit,
and agile development capabilities. Joint initiatives can foster the develop-
ment of novel AI algorithms, optimization techniques, and energy manage-
ment models.
Government support and policy alignment: Collaboration with government
agencies and policymakers is crucial for creating an enabling environment
for AI-driven energy management. Governments can provide financial sup-
port, regulatory frameworks, and incentives to encourage collaboration and
drive technology adoption. Public-private partnerships can be formed to
leverage the expertise and resources of both sectors in implementing large-
scale AI-driven energy management projects.
International collaboration: Given that energy management is a global issue, international collaboration and partnerships are essential. Sharing best practices, lessons learned, and technological advancements across borders can accelerate the adoption of AI-driven energy management globally. Collaboration on research and development, policy frameworks, and standardization efforts can foster innovation and more resilient energy systems.

20.5.4 Shaping a Sustainable Future through AI-Powered Energy Management

AI-powered energy management has the power to shape a sustainable future within the context of Industry 4.0. Here is how AI can contribute to energy efficiency. For instance, new developments in automotive technology aim to lower emissions and reduce stress on the vehicle’s battery [8].

Monitoring and control: AI systems can continuously monitor energy consumption across various production processes, equipment, and facilities. This enables prompt identification of energy-intensive operations or inefficiencies, allowing for immediate corrective action. Real-time monitoring also facilitates proactive maintenance, reducing energy losses due to equipment breakdowns.
Energy optimization and automation: Through machine-learning algorithms, AI systems can optimize energy consumption by adapting equipment and machinery operations [9]. By adjusting parameters like temperature, speed, or power usage, AI systems can minimize energy waste while maintaining optimal production performance.
Demand response and load balancing: AI algorithms can enable demand response programs in which industrial facilities adjust their energy usage to reduce strain on the power grid during high-demand periods. AI systems can also automatically optimize load balancing across different equipment and processes (a minimal sketch follows this list).
Renewable energy integration: AI can aid in integrating renewable energy sources into the industrial energy mix. By analyzing variables like weather conditions, energy storage capacities, and demand patterns, AI systems can utilize renewable energy resources efficiently and manage renewable energy’s intermittent nature.
Carbon footprint reduction: The industry can reduce its carbon footprint with
AI-powered energy management. By identifying energy-intensive pro-
cesses, optimizing resource allocation, and recommending energy-efficient
technologies, AI systems can drive significant reductions in greenhouse gas
emissions.
Continuous improvement: AI’s ability to analyze huge amounts of information
and identify patterns enables continuous improvement in energy manage-
ment practices. Machine-learning algorithms can uncover opportunities for
efficiency gains, highlight areas of improvement, and suggest innovative
energy-saving strategies, contributing to a sustainable energy future.
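The minimal sketch referenced from the demand-response item above greedily shifts a deferrable load into the cheapest time slots; the prices, capacities, and greedy policy are simplifying assumptions, and a production system would use a proper optimizer driven by live grid signals.

# Shift a flexible load away from peak-price slots (illustrative values).
prices = [0.10, 0.10, 0.25, 0.40, 0.40, 0.15]   # $/kWh per time slot
flexible_kwh = 120.0                             # deferrable load to place
capacity_per_slot = 50.0                         # max extra kWh per slot

schedule = [0.0] * len(prices)
for slot in sorted(range(len(prices)), key=lambda s: prices[s]):
    take = min(capacity_per_slot, flexible_kwh)
    schedule[slot] = take
    flexible_kwh -= take
    if flexible_kwh <= 0:
        break

print("kWh per slot:", schedule)   # the peak slots (0.40 $/kWh) stay empty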

The application of AI-powered energy management in Industry 4.0 has the poten-
tial to revolutionize how industries consume and manage energy resources [10]. By
optimizing energy usage, reducing waste, integrating renewable sources, and pro-
moting sustainability, AI is instrumental in shaping a greener and more sustainable
future for industrial operations.

20.6 CONCLUSION
The chapter provides a comprehensive overview of the potential of AI in transforming energy management practices within the Industry 4.0 paradigm. It explores
the key AI techniques, practical implementations, and examples to demonstrate the
positive impact of AI adoption on energy efficiency, cost reduction, and environmen-
tal sustainability. Additionally, it addresses the challenges and considerations that
industries must take into account while implementing AI-driven energy manage-
ment solutions and offers insights into the future of this dynamic field.

REFERENCES
1. Lasi, H., Fettke, P., Kemper, H. G., Feld, T., Hoffmann, M. (2014). Industry 4.0. Business
& Information Systems Engineering, 6, 239–242, ISSN: 2363-7005.
2. Wang, S., Wan, J., Zhang, D., Li, D., Zhang, C. (2016). Towards smart factory for
Industry 4.0: A self-organized multi-agent system with big data based feedback and
coordination. Computer Networks, 101, 158–168, ISSN: 1389-1286.

3. Ashton, K. (2009). That “Internet of Things” thing. RFiD Journal, 22(2009), 97–114.
4. Lycett, M. (2013). ‘Datafication’: Making sense of (big) data in a complex world.
European Journal of Information Systems, 22(4), 381–386, ISSN: 1476-9344.
5. Gantz, J., Reinsel, D. (2012). The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East. United States: IDC.
6. McKinsey & Company. (2018). AI in Energy: Transforming Industry and Optimizing
Resource Use.
7. Vinothkumar, T., Sivaraju, S. S., Thangavelu, A., Srithar, S. (2023). An energy-efficient
and reliable data gathering infrastructure using the internet of things and smart grids,
Automatika, 64(4), 720–732. doi: 10.1080/00051144.2023.2205724
8. Dasi, S., Sandiri, R., Anuradha, T., Santhi Sri, T., Majji, S., Murugan, K. “The State-
of-the-art Energy Management Strategy in Hybrid Electric Vehicles for Real-time
Optimization,” 2023 International Conference on Inventive Computation Technologies
(ICICT), Lalitpur, Nepal, 2023, pp. 1637–1643. doi: 10.1109/ICICT57646.2023.10134496
9. Chong, S. W., Wong, Y. S., Goh, H. H. (2018). Applications of artificial intelligence in
energy management systems: A systematic review. Energies, 11(10), 2743.
10. Karim, R. A., et al. (2020). Internet of things and artificial intelligence techniques
for industrial energy efficiency improvement. International Journal of Advanced
Manufacturing Technology, 107(5–6), 2587–2603. doi: 10.1007/s00170-020-05542-0
21 Deployment of IoT with AI for Automation
Abith Kizhakkemuri Sunil, Preethi Nanjundan,
and Jossy Paul George

21.1 INTRODUCTION
Automation is the use of machines, computers, or other automated approaches to perform tasks that would otherwise be accomplished by people. Automation has been around for centuries; however, it has become increasingly sophisticated in recent years with the development of new technologies such as the Internet of Things (IoT) and artificial intelligence (AI). IoT refers to the network of physical devices that are embedded with sensors, software, and network connectivity, which permits them to gather and exchange data. AI refers to the ability of machines to learn and make decisions without human intervention.
The combination of IoT and AI is creating new possibilities for automation in a wide range of industries, including manufacturing, healthcare, transportation, and logistics. For instance, IoT sensors can be used to collect data on the performance of machines, which can then be analyzed by AI algorithms to identify potential issues and take corrective action before they cause an outage or other disruption.
Moreover, the demands of today’s interconnected world have given rise to several
pressing needs that are effectively addressed by the integration of AI and automation
within the realm of IoT. One such critical need is efficiency enhancement: With the
exponential growth in the number of connected devices, the ability to streamline
operations and optimize resource utilization becomes paramount. AI-driven auto-
mation ensures that tasks are executed precisely and promptly, minimizing waste
and maximizing productivity.
Furthermore, data deluge management is a key concern in the IoT landscape. The
sheer volume of data generated by IoT devices can overwhelm traditional data man-
agement approaches. AI’s capability to swiftly process and interpret vast data sets
enables organizations to extract meaningful insights, facilitating informed decision-
making and strategic planning.
In addition, the advent of IoT has fueled the need for Proactive Insights and
Action, as opposed to reactive measures. By harnessing AI algorithms, businesses
can anticipate trends and potential issues by analyzing data patterns. This foresight
empowers them to take preventative actions, averting disruptions and downtime that
could lead to substantial losses.




Personalized experiences have become an expectation across various sectors. The amalgamation of AI with IoT facilitates the creation of tailored services and products. For instance, in the retail industry, customer preferences collected through IoT devices can be analyzed by AI to offer individualized recommendations, enhancing customer satisfaction.
The concept of remote monitoring and control gains prominence in industries
spanning from manufacturing to healthcare. AI-powered systems enable real-time
monitoring and remote management of devices, promoting operational efficiency
and reducing the need for physical presence.
Lastly, the rapid evolution of technology necessitates systems that can quickly
adapt to new circumstances and devices. AI’s learning capabilities combined with
automation allow IoT ecosystems to seamlessly integrate novel technologies and
adapt strategies, ensuring long-term viability and relevance.

21.1.1 Setting the Stage: IoT and AI in the Automation Landscape


IoT and AI are already being used in a variety of automation applications, including the following:

Predictive maintenance: IoT sensors can be used to gather data on the performance of machines, which can then be analyzed by AI algorithms to identify potential problems before they cause an outage or other disruption [1].
Quality control: IoT sensors can be used to monitor the quality of products or services, and AI algorithms can be used to identify defects and take corrective action [2].
Supply chain management: IoT sensors can be used to track the location of goods and materials in a supply chain, and AI algorithms can be used to optimize the routing and scheduling of deliveries [3].
Safety: IoT sensors can be used to monitor the environment for hazards, and AI algorithms can be used to take corrective action to prevent accidents [4].

These are only some examples of how IoT and AI are being used in automation today. As these technologies evolve, we can expect even more innovative and sophisticated automation applications in the future.

21.1.2 Significance of Integration: Powering Efficiency and Intelligence


The integration of IoT and AI can help to improve the efficiency and intelligence of automation systems in several ways. For example,

IoT sensors can provide AI algorithms with real-time data about the environment, which can be used to make more informed decisions.
AI algorithms can be used to analyze massive amounts of data from IoT sensors, which can help to uncover patterns and trends that would be difficult to see with the naked eye.
AI algorithms can be used to automate tasks that would otherwise be done by people, freeing up human resources for other work.

The integration of IoT and AI has the potential to revolutionize the way we automate tasks. By combining the real-time data collection capabilities of IoT with the analytical power of AI, we can create intelligent automation systems that can make decisions and take actions without human intervention. This can lead to large improvements in efficiency, productivity, and safety in a wide range of industries.

21.2 FOUNDATIONS OF IoT AND AI


The IoT and AI are some of the most transformative technologies of our time. IoT refers to the network of physical objects that are embedded with sensors, software, and network connectivity, which enables them to collect and exchange data. AI refers to the ability of machines to learn and make decisions without human intervention [5].
The combination of IoT and AI is creating new possibilities for automation in a wide variety of industries, including manufacturing, healthcare, transportation, and logistics. For instance, IoT sensors can be used to gather data on the overall performance of machines, which can then be analyzed by AI algorithms to identify potential problems and take corrective action before they cause an outage or other disruption.
In this section, we discuss the foundations of IoT and AI. We first introduce the principles of IoT and AI and discuss their history and evolution. We then explain the key components involved in IoT and AI, including sensors, actuators, networks, and algorithms. Finally, we discuss the benefits of IoT and AI, such as increased performance, productivity, and safety.

21.2.1 Unpacking IoT: Connecting the Physical World


IoT can be used to connect the physical world to the digital world in numerous ways (see Figure 21.1). For example, IoT sensors can be used to collect data on the environment, such as temperature, humidity, and light levels. These data can then be used to monitor and control the environment, such as adjusting the temperature of a building or the lighting in a room.

FIGURE 21.1 Diagram reflecting our approach to IoT architecture [6].

IoT can also be used to track the movement of objects. For example, IoT sensors can be used to track the location of goods in a supply chain or the movement of people in a building. These data can then be used to optimize the routing of goods or the flow of people. The formula for calculating the distance between two points can be used to track the movement of objects or people.
The formula for calculating the distance between two points in two dimensions is as follows [7]:

d = √((x2 − x1)² + (y2 − y1)²)

where d is the distance between the two points, x1 and y1 are the coordinates of the first point, and x2 and y2 are the coordinates of the second point.
The formula for calculating the distance between two points in three dimensions
is as follows:

d = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)

where d is the distance between the two points, x1, y1, and z1 are the coordinates of
the first point, and x2, y2, and z2 are the coordinates of the second point.
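A direct Python implementation of these formulas might look as follows; the single function covers both the 2D and 3D cases.

import math

def distance(p1, p2):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(p1, p2)))

print(distance((0, 0), (3, 4)))         # 5.0
print(distance((1, 2, 3), (4, 6, 15)))  # 13.0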

21.2.2 Embarking on AI: Enhancing Intelligence via Algorithms


AI is a broad term encompassing a multitude of technologies, including machine learning and deep learning. Machine-learning algorithms serve as a crucial subset within AI, enabling the extraction of insights from data for predictive purposes. For example, these algorithms can effectively forecast machinery failures or predict the likelihood of customer churn. This field is characterized by a rich diversity of algorithms, each possessing distinct capabilities and limitations. Among the widely recognized approaches are linear regression and logistic regression, tailored for continuous and binary predictions, respectively [8]. Decision trees construct predictive flowcharts, while support vector machines excel at class-separation optimization. Through the fusion of decision trees, random forests enhance prediction accuracy [9]. Deep learning emerges as another prominent category, harnessing artificial neural networks to yield exceptional outcomes, such as convolutional neural networks facilitating image recognition, recurrent neural networks powering natural language processing, and generative adversarial networks capable of data synthesis [10].
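As a small, hedged illustration of the supervised algorithms named above, the sketch below trains a logistic regression classifier on synthetic sensor features; the feature meanings and the failure label are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                  # e.g., vibration, temp, current
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # assumed failure label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))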
The journey into AI also entails confronting significant challenges. These chal-
lenges encompass the scarcity of data, the potential for algorithmic bias rooted in
biased training data, the complexity of explaining AI’s decision-making process,
and the exposure of vulnerabilities that can be exploited for malicious purposes.
Despite these obstacles, AI’s trajectory is undoubtedly promising, poised to reshape
industries and elevate our daily experiences. However, the responsible and beneficial
integration of AI necessitates a concerted effort to address these challenges compre-
hensively [11].
Delving deeper, deep learning algorithms represent a subset of machine learning
techniques that function hierarchically, learning intricate patterns from data. These
algorithms have demonstrated remarkable achievements across diverse domains,
including remarkable advancements in tasks like image recognition and natural lan-
guage processing [10].

21.3 IoT AND AI FOR AUTOMATION


21.3.1 AIoT Convergence
21.3.1.1 A Symbiotic Relationship: How IoT and AI Complement Each Other
IoT gives AI the data it needs to make informed decisions [12]. AI, in turn, can help IoT make better decisions about how to gather and use data. For example, IoT sensors can be used to collect data on the environment, such as temperature, humidity, and light levels. AI algorithms can then be used to analyze these data and make decisions about how to control the environment. IoT sensors can also be used to gather data about the behavior of people or objects. AI algorithms can then be used to analyze these data and discover patterns that can be used to improve performance, optimize operations, and prevent problems.

21.3.1.2 Shaping Automation: Real-Time Decision-Making and Predictive Insights
IoT and AI can be used to make real-time decisions and to gain predictive insights. This can be used to improve performance, optimize operations, and prevent problems. For instance, IoT sensors can be used to monitor the condition of equipment in a manufacturing plant. AI algorithms can then be used to analyze these data and identify potential issues before they arise. This can help to prevent costly downtime and improve the overall performance of the plant [4].
IoT and AI can also be used to make real-time decisions about how to allocate resources. For example, IoT sensors can be used to track the inventory levels in a warehouse. AI algorithms can then be used to analyze these data and make decisions about when to order new inventory. This can help to improve the efficiency of the warehouse and reduce the risk of stockouts [13].
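A minimal sketch of the warehouse reorder decision just described; all quantities are illustrative assumptions, and the daily demand forecast would in practice come from an AI model trained on IoT inventory data.

def should_reorder(stock, daily_demand_forecast, lead_time_days, safety_stock):
    """Reorder when projected stock at delivery time falls below safety stock."""
    projected = stock - daily_demand_forecast * lead_time_days
    return projected < safety_stock

if should_reorder(stock=400, daily_demand_forecast=60,
                  lead_time_days=5, safety_stock=150):
    print("Place replenishment order now")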

21.3.2 Architecting the Future: Combining IoT and AI


The combination of IoT and AI is creating a new era of automation, with the potential to revolutionize the way we live and work [14]. For example, IoT devices can be used to monitor the condition of equipment in a manufacturing plant, while AI algorithms can be used to analyze these data and identify potential problems before they occur. This can help to prevent costly downtime and improve the overall performance of the plant.
IoT and AI can also be used to personalize experiences, such as by recommending products or services to customers based on their past behavior. The possibilities are limitless, and the future of IoT and AI is very bright. However, there are also some challenges that need to be addressed before IoT and AI can be widely adopted. One challenge is the security of IoT devices. IoT devices are frequently connected to the internet, which makes them vulnerable to cyberattacks. Another challenge is the privacy of data gathered by IoT devices. IoT devices collect a great deal of data about people, and it is critical to ensure that this information is protected [15].

21.3.2.1 Building Blocks: Hardware and Software Components for Integration
The hardware and software components for integrating IoT and AI can be divided into three main categories [16]:

Sensors: Sensors gather data from the physical world. They can be used to measure a range of things, such as temperature, humidity, light levels, and motion.
Networks: Networks transmit data between sensors and the cloud. They can be wired or wireless.
Cloud computing: Cloud computing stores and analyzes data. It also provides the computing power needed to run AI algorithms.

In addition to these three fundamental categories, there are several other components that can be used to integrate IoT and AI, such as gateways, edge computing devices, and data visualization tools.
The choice of hardware and software components will depend on the specific application. For example, a factory that uses IoT and AI to monitor the condition of machinery will need different components than a company that uses IoT and AI to personalize customer experiences.

21.3.2.2 Blueprint for Success: Architectural Design Considerations


The architectural design of an IoT and AI system is critical to its success. There are a number of factors that need to be taken into consideration, including [17]

• The type of data that needs to be collected
• The frequency of data collection
• The level of security required
• The cost of the system
• The scalability of the system

It is also crucial to consider the ethical implications of using IoT and AI. For example, how will the data be used? Will it be shared with third parties? How will privacy be protected?
By carefully considering these factors, organizations can design IoT and AI systems that are both successful and ethical.

21.4 INDUSTRY APPLICATIONS OF AI-DRIVEN IoT AUTOMATION


21.4.1 Manufacturing Excellence: Revolutionizing Production Lines
Real-time monitoring and predictive maintenance: IoT sensors can be used to monitor the condition of equipment in real time, which can help to identify potential issues before they cause downtime. AI algorithms can then be used to predict when maintenance is needed, which can help to prevent costly repairs. For example, a manufacturing plant that uses IoT sensors to monitor the condition of its machines can identify problems early on and take corrective action before the machines break down. This can save the plant a great deal of money in repairs and downtime [18].
Automated quality control: IoT sensors can be used to collect data on the quality of products being manufactured, which can be used to identify defects and ensure that products meet quality standards [19]. AI algorithms can then be used to automate the quality control process, freeing up human workers to focus on other duties. For example, a food manufacturing plant that uses IoT sensors to collect data on the weight and temperature of its products can automate the quality control procedure, freeing up human workers to focus on other responsibilities, such as packaging and shipping.
Optimized production scheduling: IoT sensors can be used to collect data on the availability of materials and the status of production lines, which can be used to optimize production scheduling. AI algorithms can then be used to generate production schedules that are efficient and meet customer demand. For example, an automobile manufacturer that uses IoT sensors to track the supply of parts and the status of its production lines can optimize its production scheduling. This can help the manufacturer to satisfy customer demand and reduce costs [20].

21.4.2 Healthcare Transformed: Enhancing Patient Care and Diagnostics

Remote patient monitoring: IoT devices can be used to remotely monitor patients’ vital signs and other health data. This can help to improve patient care by allowing doctors to intervene early if there are any issues [21]. For example, a patient who has been diagnosed with heart disease can use an IoT device to monitor their heart rate and blood pressure at home. These data can be sent to their doctor, who can then monitor the patient’s condition and make adjustments to their treatment plan as needed.
Personalized medicine: IoT devices can be used to collect data on patients’ individual health profiles. This information can then be used by AI algorithms to develop personalized treatment plans that are tailored to each patient’s specific needs. For instance, a patient with cancer can use an IoT device to track their symptoms and treatment response [22]. These records can be used by their doctor to develop a customized treatment plan that is most likely to be effective.
AI-powered surgery: AI algorithms can be used to assist surgeons in performing surgery [23]. This can help to improve the accuracy and precision of surgery, as well as reduce the risk of complications. For example, an AI algorithm can be used to guide a surgeon’s hand during a delicate operation. This can help to ensure that the operation is performed effectively and safely.

21.4.3 Agriculture in the AI Age: Precision Farming and Beyond


Precision agriculture: IoT sensors can be used to collect data on soil conditions, crop health, and weather. This information can then be used by AI systems to develop precision farming techniques that improve crop yields and reduce the use of pesticides and fertilizers. For example, a farmer can use IoT sensors to monitor the moisture in their soil. AI algorithms can then use this data to determine when crops should be irrigated. This can help increase crop yields and reduce water consumption [24].
Animal tracking: IoT sensors can be used to track the movement of animals. This information can be used to improve livestock practices and ensure animal welfare. For example, a farmer can use IoT sensors to track the movement of their cows [25]. This information can then be used to ensure that cattle have access to food and water and are not disturbed.
Food traceability: IoT sensors can be used to track the flow of food from farm to fork. This information can be used to improve food safety and traceability [26]. For example, a food manufacturer could use IoT sensors to monitor how its products move through the supply chain. This information can then be used to identify potential problems with the food supply chain and ensure food is safe to eat.

21.5 SCALABILITY AND INFRASTRUCTURE


21.5.1 Scaling Heights: Planning for Growth in IoT and AI Systems
The number of connected devices is expected to increase significantly in the coming years. These developments will put a strain on IoT and AI systems, which need scalability to handle increasing volumes of data. There are several factors to consider when planning for scalability, such as data volume, data update frequency, and latency requirements.
Potential scaling strategies for IoT and AI systems include the following:

• Using cloud computing to provide scalable storage and processing resources [21].
• Using edge computing to bring data processing closer to devices [27].
• Implementing a hybrid approach that integrates cloud and edge computing [28].

The specific scaling strategy that is best for a particular system will depend on the specific needs of the application.

21.5.2 Cloud and Edge: Infrastructure Considerations for Data Handling

Cloud and edge are two types of computing infrastructure that can be used to manage data in IoT and AI systems. The cloud is a centralized computing infrastructure that provides storage and scalability [29]. The edge is a distributed computing infrastructure that sits close to the devices that collect the data [30]. Each model has advantages and disadvantages.
The cloud is scalable and can handle large amounts of data, but it can be expensive and introduce high latency. The edge is not as scalable, but it can offer much lower latency and keep sensitive data closer to its source. The best implementation will depend on the specific needs of the application.

21.5.2.1 Deployment Strategies and Best Practices


There are a number of different deployment strategies that can be used for IoT and
AI systems.
The choice of deployment strategy will depend on the specific needs of the appli-
cation, such as the size of the system, the security requirements, and the budget.
Some of the common deployment strategies include the following:

• On-premises deployment: This involves deploying the system on the organization’s own infrastructure [31].
• Cloud deployment: This involves deploying the system on a cloud comput-
ing platform [32].
• Hybrid deployment: This involves deploying the system on a combination
of on-premises and cloud infrastructure.

There are a number of best practices that should be followed when deploying IoT
and AI systems.
These best practices include the following:

• Designing the system for scalability and security
• Using a secure communication protocol
• Ensuring that the system is properly tested and validated
• Monitoring the system for performance and security issues

21.5.2.1.1 Taking Flight: Steps for Successful IoT and AI Integration


The integration of IoT and AI can be a complex and challenging process.
However, there are a number of steps that can be taken to increase the chances
of success.
Some of the key steps include the following:

• Clearly defining the goals of the integration [33].


• What are the specific business problems that you are trying to solve
with IoT and AI? What are the desired outcomes? Once you know your
goals, you can start to identify the data that you need to collect and
analyze [33].
• Identifying the data that needs to be collected and analyzed: What data will
be used to train the AI models? How will the data be collected and stored?
This is an important step, as the quality and quantity of data will have a big
impact on the accuracy of the AI models. The data that you need to collect
will depend on the specific business problems that you are trying to solve.
For example, if you are trying to improve efficiency in a manufacturing
plant, you might need to collect data on the performance of machines, the
movement of materials, and the quality of products [27].
• Selecting the right IoT and AI technologies: There are many different IoT
and AI technologies available, so it is important to choose the right ones for
your application. The right technologies will depend on the data that you
are collecting, the type of AI models that you are using, and your budget.
For example, if you are collecting large amounts of data, you might need to
use a cloud-based IoT platform. If you are using deep learning models, you
might need to use specialized hardware, such as GPUs [34].
• Developing a plan for integrating the systems: How will the IoT and AI
systems be integrated? Will they be integrated on-premises or in the cloud?
This is an important step, as it will determine how the data will be col-
lected, stored, and analyzed. The integration plan should also consider the
security and privacy of the data [35].
• Testing and validating the integration: Once the systems are integrated, it
is important to test and validate them to ensure that they are working cor-
rectly. This will help to ensure that the integration is successful and that the
AI models are accurate. The testing and validation process should include
testing the data collection, storage, and analysis pipelines [36].
• Monitoring the integration for performance and effectiveness.

21.5.2.1.2 Smooth Sailing: Testing, Validation, and Optimization Guidelines


Once an IoT and AI system has been deployed, it is important to test [37], vali-
date, and optimize it to ensure that it is performing as expected. Testing should be
conducted to ensure that the system is able to collect and analyze data correctly.
Validation should be conducted to ensure that the system is meeting the expectations
of the users [33]. Optimization should be conducted to improve the performance of
the system. These activities should be conducted on an ongoing basis to ensure that
the system continues to meet the needs of the users.

21.6 DATA MANAGEMENT AND INSIGHTS


21.6.1 Flow of Data: Collection, Storage, and Preprocessing in IoT
The flow of data in IoT systems can be divided into three main stages: collection,
storage, and preprocessing.
Collection: The first stage is the collection of data from sensors and other devices.
This data can be collected in real time or at regular intervals.
Storage: The second stage is the storage of data in a central repository. This data
can be stored in a variety of ways, such as in a database, a cloud storage system, or
a local file system.
Preprocessing: The third stage is the preprocessing of data. This involves clean-
ing and formatting the data so that it can be analyzed by AI algorithms.
Figure 21.2 shows that data is collected from sensors and other devices. This data
is then stored in a central repository. The data is then preprocessed to clean and for-
mat it for analysis by AI algorithms. The processed data can then be used to make
predictions, identify anomalies, or cluster data.
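FIGURE 21.2 Diagram representing the data flow in an IoT system [38].

To make the preprocessing stage concrete, the following sketch cleans and resamples a batch of raw readings with pandas; the column names and the one-minute grid are assumptions.

import pandas as pd

raw = pd.DataFrame({
    "timestamp": ["2024-01-01 00:00", "2024-01-01 00:01", "2024-01-01 00:03"],
    "temp_c": [21.5, None, 22.1],      # a dropped reading to fill in
})
raw["timestamp"] = pd.to_datetime(raw["timestamp"])

clean = (
    raw.set_index("timestamp")
       .resample("1min").mean()        # align to a regular 1-minute grid
       .interpolate()                  # fill gaps left by missing readings
)
print(clean)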

21.6.2 Analyzing for Action: AI’s Role in Extracting Value from Data


AI can be used to extract value from data in a variety of ways. One way is to use AI
to identify patterns and trends in data. This can be used to make predictions about
future events or to identify anomalies in data. Another way is to use AI to classify
data. This can be used to categorize data into different groups or to identify the most
important features in data.
AI can also be used to cluster data. This can be used to group data points that are
similar to each other.
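As a hedged example of the clustering use just mentioned, the sketch below groups synthetic device-usage profiles with k-means; the two-cluster structure is an artifact of the toy data.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Two synthetic usage profiles: low-power and high-power devices.
usage = np.vstack([rng.normal(10, 1, (50, 2)), rng.normal(40, 2, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=7).fit(usage)
print("Cluster centers:", kmeans.cluster_centers_)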


21.6.3 Security, Privacy, and Ethics


The integration of IoT and AI raises a number of security, privacy, and ethical
concerns.
Security: One of the biggest concerns is the security of IoT devices. These devices
are often connected to the internet, which makes them vulnerable to cyberattacks.
Privacy: Another concern is the privacy of data collected by IoT devices. This
data can be used to track people’s movements, habits, and preferences.
Ethics: There are also ethical concerns about the use of AI in IoT systems. For
example, AI could be used to discriminate against certain groups of people or to
invade people’s privacy.

21.6.3.1 Safeguarding Systems: Ensuring Security in IoT and AI Integration


There are a number of measures that can be taken to safeguard IoT and AI systems.
These measures include the following:

• Using strong passwords and encryption: IoT devices are often connected to the internet, which makes them vulnerable to attack. One of the best ways to protect IoT devices is to use strong passwords and encryption. Strong passwords should be at least 12 characters long and should include a mix of upper- and lowercase letters, numbers, and symbols. Encryption can be used to protect data that is transmitted between IoT devices and between IoT devices and servers [39] (a sketch of payload encryption follows this list).
• Keeping software up to date: IoT devices often have software that needs
to be updated regularly. Software updates often include security patches
that can help to protect IoT devices from attack. It is important to keep IoT
devices up to date with the latest software releases [40].
• Implementing security best practices: There are a number of security best
practices that can be implemented to safeguard IoT and AI systems. These
best practices include the following:
1. Segmenting networks: This involves dividing the network into smaller
networks, which can help to isolate devices and data [41].
2. Using firewalls: Firewalls can be used to control traffic between net-
works and to block unauthorized access [42].
3. Using intrusion detection systems (IDS): IDS can be used to detect
malicious activity on the network [43].
4. Using intrusion prevention systems (IPS): IPS can be used to block
malicious activity on the network [44].
• Educating users about security risks: It is important to educate users about
the security risks associated with IoT and AI systems. Users should be
aware of the importance of using strong passwords and encryption, and
they should be aware of the risks of clicking on malicious links or opening
attachments from unknown senders [40].
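The payload-encryption sketch referenced above, assuming the third-party cryptography package’s Fernet recipe; any authenticated encryption scheme could play the same role, and key provisioning is out of scope here.

from cryptography.fernet import Fernet

key = Fernet.generate_key()       # would be provisioned securely per device
cipher = Fernet(key)

payload = b'{"device": "sensor-42", "temp_c": 21.7}'
token = cipher.encrypt(payload)   # safe to send over an untrusted network

assert cipher.decrypt(token) == payload
print("Encrypted payload:", token[:24], "...")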

21.6.3.2 Balancing Act: Privacy Concerns in a Connected World


There are a number of ways to address privacy concerns in a connected world.

These measures include the following:

• Giving users control over their data


• Using anonymized data
• Implementing privacy-preserving technologies
• Building trust with users

21.6.3.3 Ethical Imperatives: Navigating Automation's Social Implications


There are a number of ethical issues that need to be considered when using AI in
IoT systems.
These issues include the following:

• The potential for bias in AI algorithms


• The impact of AI on employment
• The use of AI for surveillance
• The responsibility for the actions of AI systems

21.7 CASE STUDIES IN ACTION

CASE STUDY 1: REVOLUTIONIZING SUPPLY CHAIN MANAGEMENT
The global supply chain is a complex network of people, organizations, activi-
ties, information, and resources involved in moving products from suppliers to
customers. It is estimated that the global supply chain costs trillions of dollars
each year.
AI can be used to revolutionize supply chain management by improving
visibility, optimizing routing, and predicting demand. This can help to reduce
costs, improve efficiency, and deliver products to customers faster.
Visibility: AI can be used to track the movement of products through the
supply chain in real time. This can help to identify bottlenecks and delays and
to make better decisions about routing and scheduling. For example, Walmart
is using AI to track the movement of products through its supply chain. This
has helped the company to reduce its transportation costs by 10%.
Routing: AI can be used to optimize the routing of products through the
supply chain. This can help to reduce the distance that products travel and to
avoid congestion. For example, DHL is using AI to optimize the routing of its
delivery trucks. This has helped the company to reduce its fuel consumption
by 5%.
Demand prediction: AI can be used to predict demand for products. This
can help to avoid stockouts and overstocks. For example, Amazon is using AI
to predict demand for products. This has helped the company to improve its
inventory accuracy by 99%.
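As a hedged sketch of what such demand prediction can look like in code (illustrative only, not any company's actual system), the example below fits a gradient-boosted regressor to synthetic weekly demand with trend, seasonality, and a lag feature, then forecasts the held-out weeks.

```python
# Hedged sketch: AI-based demand prediction on synthetic weekly sales history.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
weeks = np.arange(104)  # two years of weekly data
demand = 500 + 3 * weeks + 50 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 20, 104)

# Features: week index, seasonal position, and last week's demand (lag-1)
X = np.column_stack([weeks[1:], np.sin(2 * np.pi * weeks[1:] / 52), demand[:-1]])
y = demand[1:]

model = GradientBoostingRegressor().fit(X[:90], y[:90])  # train on first 90 weeks
print("Predicted demand, next 13 weeks:", model.predict(X[90:]).round(1))
```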

CASE STUDY 2: AI-POWERED HEALTHCARE DIAGNOSTICS


AI can be used to address a number of challenges in healthcare, including
rising costs, increasing demand, and a shortage of doctors. AI can be used to
improve the accuracy of diagnoses, reduce the cost of care, and provide per-
sonalized treatment plans.
Accuracy of diagnoses: AI can be used to analyze medical images, such as
X-rays and MRI scans, to identify diseases earlier and more accurately than
humans can. For example, Google Health is using AI to develop a new app
that can help people to track their health data and identify potential health
problems.
Cost of care: AI can be used to automate tasks, such as scheduling
appointments and managing patient records. This can help to reduce the
administrative costs of healthcare. For example, IBM Watson Health is
using AI to automate the scheduling of appointments for patients with can-
cer. This has helped the company to reduce the administrative costs of can-
cer care by 20%.
Personalized treatment plans: AI can be used to develop personalized treat-
ment plans for patients based on their individual medical history and genetics.
This can help to improve the effectiveness of treatment and reduce the risk of
side effects. For example, Novartis is using AI to develop personalized treat-
ment plans for patients with rheumatoid arthritis. This has helped the company
to improve the effectiveness of treatment by 15%.

CASE STUDY 3: SMART AGRICULTURE FOR SUSTAINABLE FOOD PRODUCTION
The global food production system is facing a number of challenges, including
climate change, water scarcity, and soil degradation. AI can be used to address
these challenges by improving crop yields, reducing the use of pesticides and
fertilizers, and increasing the efficiency of water use.
Crop yields: AI can be used to develop new crop varieties that are more
resistant to pests and diseases. This can help to increase crop yields and reduce
the need for pesticides. For example, Monsanto is using AI to develop new
crop varieties that are resistant to drought and heat.
Pesticides and fertilizers: AI can be used to optimize irrigation schedules
and to develop precision farming techniques that can reduce the use of pes-
ticides and fertilizers. This can help to protect the environment and improve
the sustainability of food production. For example, John Deere is using AI to
develop precision farming techniques that can reduce the use of pesticides and
fertilizers by 10%.
Water use: AI can be used to develop water conservation techniques that
can help to increase the efficiency of water use in agriculture. For example, the
IBM Food Trust is using AI to develop water conservation techniques that can
help to reduce water use in irrigation by 20%.
These are just a few examples of how AI is being used to solve real-world
problems in the supply chain, healthcare, and agriculture. As AI technology
continues to develop, we can expect to see even more innovative and impactful
applications in the future.

21.8 CONCLUSION
The rapid evolution of IoT and AI holds great potential to revolutionize automation.
Their combined force creates an effective synergy capable of automating tasks,
enhancing efficiency, and enabling superior decision-making. Although still in its
nascent stages, the integration of IoT and AI already finds applications across
numerous sectors such as manufacturing, healthcare, and agriculture. The future of
automation looks bright, with IoT and AI assuming pivotal roles in shaping it.
Despite its intricacy, harmonizing IoT and AI remains vital for forward-looking
organizations. Challenges such as security, privacy, and ethics must be tackled, yet
the potential benefits of IoT- and AI-driven automation are sizable. Organizations
that invest in this integration position themselves advantageously for the times ahead.

The horizon of IoT and AI automation holds boundless possibilities, encompassing
several noteworthy trends. These include the rise of 5G networks, poised to usher in
faster data transfer speeds and heightened connection reliability. Concurrently, the
evolution of contemporary AI algorithms promises amplified power and performance.
Furthering this landscape is the growth of edge computing, an advancement that
positions AI processing closer to the data it requires. An additional trend of
importance is the surging adoption of blockchain technology, which effectively
strengthens the security of data and transactions. Together, these interconnected
trends shape a future in which IoT and AI continue to evolve, intertwining
technological advancements for transformative outcomes.

Beyond IoT and AI, a host of other emerging technologies are playing a pivotal role
in shaping the future of automation. These include robotics, which is becoming
increasingly crucial to various industries; virtual reality and augmented reality,
which are transforming how we interact with digital environments; quantum computing,
which holds immense promise for solving complex problems; and natural language
processing, which enables computer systems to understand and communicate in human
language. As these technologies continue to advance, they collectively contribute
to a transformative landscape in which automation is driven by a diverse array of
innovations.

The fusion of IoT and AI automation raises a number of ethical concerns, including
the danger of bias within AI algorithms, the implications of automation for
employment opportunities, the moral implications of using automation for
surveillance, and the responsibility for actions taken by AI systems. As we navigate
toward an increasingly automated future, it becomes essential to confront and engage
with these concerns proactively.

REFERENCES
1. D. U. Mishra and S. Sharma, Revolutionizing Industrial Automation Through the
Convergence of Artificial Intelligence and the Internet of Things. IGI Global, 2022.
Available: https://play.google.com/store/books/details?id=rw-bEAAAQBAJ
2. R. Anandan, S. Gopalakrishnan, S. Pal, and N. Zaman, Industrial Internet of Things
(IIoT): Intelligent Analytics for Predictive Maintenance. John Wiley & Sons, 2022.
Available: https://books.google.com/books/about/Industrial_Internet_of_Things_IIoT.
html?hl=&id=E8p6EAAAQBAJ
3. L. Zhang, L. Yao, Z. Lu, and H. Yu, “Current Status of Quality Control in Screening
Esophagogastroduodenoscopy and the Emerging Role of Artificial Intelligence,”
Dig. Endosc., Jul. 2023, doi: 10.1111/den.14649. Available: http://dx.doi.org/10.1111/
den.14649
4. P. Chawla, A. Kumar, A. Nayyar, and M. Naved, Blockchain, IoT, and AI Technologies
for Supply Chain Management. CRC Press, 2023. Available: https://play.google.com/
store/books/details?id=fDaxEAAAQBAJ
5. A. Pasumpon Pandian, T. Senjyu, S. M. S. Islam, and H. Wang, Proceeding of the
International Conference on Computer Networks, Big Data and IoT (ICCBI - 2018).
Springer, 2019. Available: https://books.google.com/books/about/Proceeding_of_the_
International_Conferen.html?hl=&id=ZYvrxQEACAAJ
6. A. Grizhnevich, “IoT Architecture: Building Blocks and How They Work,” ScienceSoft,
Apr. 01, 2018. Available: https://www.scnsoft.com/blog/iot-architecture-in-a-nutshell-
and-how-it-works. [Accessed: Aug. 25, 2023]
7. G. Strang, Linear Algebra and Learning from Data. Wellesley-Cambridge Press, 2019.
Available: https://books.google.com/books/about/Linear_Algebra_and_Learning_from_
Data.html?hl=&id=L0Y_wQEACAAJ
8. A. K. Dubey, A. Kumar, S. Rakesh Kumar, N. Gayathri, and P. Das, AI and IoT-
Based Intelligent Automation in Robotics. John Wiley & Sons, 2021. Available:
https://books.google.com/books/about/AI_and_IoT_Based_Intelligent_Automation.
html?hl=&id=NSAlEAAAQBAJ
9. F. Khan, R. Alturki, M. A. Rehman, S. Mastorakis, I. Razzak, and S. T. Shah,
“Trustworthy and Reliable Deep Learning-based Cyberattack Detection in Industrial
IoT,” IEEE Trans. Industr. Inform., vol. 19, no. 1, pp. 1030–1038, Jan. 2023, doi:
10.1109/tii.2022.3190352. Available: http://dx.doi.org/10.1109/tii.2022.3190352
10. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016. Available:
https://play.google.com/store/books/details?id=omivDQAAQBAJ
11. T. Ahamed, IoT and AI in Agriculture: Self-Sufficiency in Food Production to Achieve
Society 5.0 and SDG’s Globally. Springer Nature, 2023. Available: https://play.google.
com/store/books/details?id=AY-4EAAAQBAJ
12. P. Friess, Digitising the Industry – Internet of Things Connecting the Physical, Digital
and Virtual Worlds. River Publishers, 2016. Available: https://books.google.com/books/
about/Digitising_the_Industry_Internet_of_Thin.html?hl=&id=nYktDwAAQBAJ
13. P. Fraga-Lamas, S. I. Lopes, and T. M. Fernández-Caramés, “Green IoT and Edge AI
as Key Technological Enablers for a Sustainable Digital Transition towards a Smart
Circular Economy: An Industry 5.0 Use Case,” Sensors, vol. 21, no. 17, Aug. 2021, doi:
10.3390/s21175745. Available: http://dx.doi.org/10.3390/s21175745
14. B. Hajji, A. Mellit, G. M. Tina, A. Rabhi, J. Launay, and S. E. Naimi, Proceedings of
the 2nd International Conference on Electronic Engineering and Renewable Energy
Systems: ICEERE 2020, 13-15 April 2020, Saidia, Morocco. Springer Nature, 2020.
Available: https://play.google.com/store/books/details?id=Aa33DwAAQBAJ
15. E. H. Houssein, M. A. Elaziz, D. Oliva, and L. Abualigah, Integrating Meta-Heuristics
and Machine Learning for Real-World Optimization Problems. Springer Nature, 2022.
Available: https://play.google.com/store/books/details?id=ol1zEAAAQBAJ
16. S. T. Rassia and P. M. Pardalos, Smart City Networks: Through the Internet of
Things. Springer, 2017. Available: https://play.google.com/store/books/details?id=
AJU4DwAAQBAJ
17. S.-A. Elvy, A Commercial Law of Privacy and Security for the Internet of Things.
Cambridge University Press, 2021. Available: https://books.google.com/books/about/A_
Commercial_Law_of_Privacy_and_Security.html?hl=&id=jW8yEAAAQBAJ
18. A. M. Musleh Al-Sartawi, A. Razzaque, and M. M. Kamal, From the Internet of
Things to the Internet of Ideas: The Role of Artificial Intelligence: Proceedings of
EAMMIS 2022. Springer Nature, 2022. Available: https://play.google.com/store/books/
details?id=YvGcEAAAQBAJ
19. B. A. Mozzaquatro, C. Agostinho, D. Goncalves, J. Martins, and R. Jardim-Goncalves,
“An Ontology-Based Cybersecurity Framework for the Internet of Things,” Sensors,
vol. 18, no. 9, Sep. 2018, doi: 10.3390/s18093053. Available: http://dx.doi.org/10.3390/
s18093053
20. A. I. Awad and J. Abawajy, Security and Privacy in the Internet of Things:
Architectures, Techniques, and Applications. John Wiley & Sons, 2021. Available:
https://books.google.com/books/about/Security_and_Privacy_in_the_Internet_
of.html?hl=&id=-htREAAAQBAJ
21. A. Rayes and S. Salam, Internet of Things from Hype to Reality: The Road to
Digitization. Springer, 2016. Available: https://play.google.com/store/books/details?id=
bDlPDQAAQBAJ
22. PGP-UK Consortium, “Personal Genome Project UK (PGP-UK): A Research and
Citizen Science Hybrid Project in Support of Personalized Medicine,” BMC Med.
Genomics, vol. 11, no. 1, p. 108, Nov. 2018, doi: 10.1186/s12920-018-0423-1. Available:
http://dx.doi.org/10.1186/s12920-018-0423-1
23. J. C. Hu and J. Shoag, Robotic Urology: The Next Frontier, An Issue of Urologic
Clinics. Elsevier Health Sciences, 2020. Available: https://play.google.com/store/
books/details?id=VD0LEAAAQBAJ
24. Q. Zhang, Precision Agriculture Technology for Crop Farming. CRC Press, 2015.
Available: https://books.google.com/books/about/Precision_Agriculture_Technology_
for_Cro.html?hl=&id=aV74DwAAQBAJ
25. A. Castrignano, G. Buttafuoco, R. Khosla, A. Mouazen, D. Moshou, and O.
Naud, Agricultural Internet of Things and Decision Support for Precision Smart
Farming. Academic Press, 2020. Available: https://play.google.com/store/books/
details?id=kyzJDwAAQBAJ
26. Food and Agriculture Organization of the United Nations, Thinking about the Future
of Food Safety: A Foresight Report. Food & Agriculture Org., 2022. Available:
https://books.google.com/books/about/Thinking_about_the_future_of_food_safety.
html?hl=&id=xr1qEAAAQBAJ
27. H. Xu, W. Yu, D. Griffith, and N. Golmie, “A Survey on Industrial Internet of Things:
A Cyber-Physical Systems Perspective,” IEEE Access, vol. 6, 2018, doi: 10.1109/
access.2018.2884906. Available: http://dx.doi.org/10.1109/access.2018.2884906
28. S. Goundar, S. B. Bhushan, and P. K. Rayani, Architecture and Security Issues in Fog
Computing Applications. IGI Global, 2019. Available: https://play.google.com/store/
books/details?id=mmqtDwAAQBAJ
29. M. Daneshvar, B. Mohammadi-Ivatloo, and K. Zare, “An Innovative Transactive
Energy Architecture for Community Microgrids in Modern Multi-Carrier Energy
Networks: A Chicago Case Study,” Sci. Rep., vol. 13, no. 1, p. 1529, Jan. 2023, doi:
10.1038/s41598-023-28563-7. Available: http://dx.doi.org/10.1038/s41598-023-28563-7
30. Management Association and Information Resources, Research Anthology on Edge
Computing Protocols, Applications, and Integration. IGI Global, 2022. Available:
https://play.google.com/store/books/details?id=hWV2EAAAQBAJ
31. W. Zhang, Software Engineering and Knowledge Engineering: Theory and Practice:
Selected papers from 2012 International Conference on Software Engineering,
Knowledge Engineering and Information Engineering (SEKEIE 2012). Springer
Science & Business Media, 2012. Available: https://play.google.com/store/books/
details?id=J4LSbzVPGZMC
32. K. Kaushik, S. Dahiya, A. Bhardwaj, and Y. Maleh, Internet of Things and Cyber
Physical Systems: Security and Forensics. CRC Press, 2022. Available: https://play.
google.com/store/books/details?id=BvugEAAAQBAJ
33. V. Chowdary, A. Sharma, N. Kumar, and V. Kaundal, Internet of Things in Modern
Computing: Theory and Applications. CRC Press, 2023. Available: https://play.google.
com/store/books/details?id=MhfBEAAAQBAJ
34. IEEE Staff, 2018 Third International Conference on Fog and Mobile Edge Computing
(FMEC). 2018. Available: https://books.google.com/books/about/2018_Third_International_
Conference_on_F.html?hl=&id=kAXVtwEACAAJ
35. Y. Wang and T. Li, Knowledge Engineering and Management: Proceedings of the
Sixth International Conference on Intelligent Systems and Knowledge Engineering,
Shanghai, China, Dec 2011 (ISKE 2011). Springer Science & Business Media, 2011.
Available: https://play.google.com/store/books/details?id=2AMhFCpHTLQC
36. R. Rao and N. Raghavendra, Enterprise Management Strategies in the Era of Cloud
Computing. IGI Global, 2015. Available: https://play.google.com/store/books/details?id=
CF00CgAAQBAJ
37. E. Udoh, C.-H. Hsu, and M. Khan, International Journal of Grid and High Performance
Computing (IJGHPC). 2015. Available: https://books.google.com/books/about/
International_Journal_of_Grid_and_High_P.html?hl=&id=XjiAAQAACAAJ
38. “Website.” Available: https://www.researchgate.net/figure/Data-flow-in-an-IoT-system_fig1_
326226121
39. S. Cheruvu, A. Kumar, N. Smith, and D. M. Wheeler, Demystifying Internet of Things
Security: Successful IoT Device/Edge and Platform Security Deployment. Apress,
2019. Available: https://books.google.com/books/about/Demystifying_Internet_of_
Things_Security.html?hl=&id=YSKpDwAAQBAJ
40. S. Fleming, Accelerated DevOps with AI, ML & RPA: Non-Programmer’s Guide to
AIOPS & MLOPS. Stephen Fleming, 2020. Available: https://play.google.com/store/
books/details?id=1ebwDwAAQBAJ
41. S. K. Sharma, B. Bhushan, and N. C. Debnath, Security and Privacy Issues in IoT
Devices and Sensor Networks. Academic Press, 2020. Available: https://play.google.
com/store/books/details?id=CWrnDwAAQBAJ
42. A. Sabella, R. Irons-Mclean, and M. Yannuzzi, Orchestrating and Automating Security
for the Internet of Things: Delivering Advanced Security Capabilities from Edge to
Cloud for IoT. Cisco Press, 2018. Available: https://play.google.com/store/books/
details?id=SjFbDwAAQBAJ
43. P. Kumar, V. Ponnusamy, and V. Jain, Industrial Internet of Things and Cyber-Physical
Systems: Transforming the Conventional to Digital: Transforming the Conventional
to Digital. IGI Global, 2020. Available: https://play.google.com/store/books/details?id=
WrgIEAAAQBAJ
44. W. Mutwiri, Home Security Systems. Intrusion Detection with GSM. GRIN Verlag,
2019. Available: https://play.google.com/store/books/details?id=i1LHDwAAQBAJ
22 A Comparison of the Performance of Different Machine Learning Algorithms for Detecting Face Masks

Andra Likhith Sai, Gona Jordan Ryan, Namoju Prudhvi Sai, and Neeraj Kumar Misra

22.1 INTRODUCTION
The field of artificial intelligence (AI) known as machine learning (ML) focuses on
developing algorithms, constructing models, and implementing decisions based on
datasets fed into the system [1]. Supervised learning and unsupervised learning are
the two main categories of ML algorithms. In supervised learning, algorithms learn
from a labeled training dataset in which the desired outputs are known and guide the
learning process. In unsupervised learning, the input data is uncategorized: no
prior knowledge about the data is required, and the number of classes is not
specified in advance [2]. Many different ML techniques exist, such as decision tree
(DT), support vector machine (SVM), k-nearest neighbor (KNN), Gaussian Naive
Bayes (GNB) classifier, and convolutional neural network (CNN) [3].
COVID-19 is currently the most significant threat the world faces, surpassing
even the most catastrophic natural calamities [4]. More than a year on, a resolution
is nowhere in sight, and only a few strategies exist to contain the spread in
accordance with WHO (World Health Organization) directives [5, 6].
The research aims to determine whether or not people are wearing face masks
(FMs) at a public gathering or event. To achieve this goal, ML algorithms such as DT,
SVM, KNN, GNB classifier, and CNN were used. As the input dataset, we use images
that contain both masked and non-masked individuals. The steps undertaken to
complete this research work are pre-processing, data augmentation, training,
testing, and image segmentation [7]. After these procedures, a segmented image of
the input dataset consisting of masked and non-masked individuals is obtained using
the CNN algorithm [8, 9]. The comparative analysis is then
performed using ML parameters, reporting in percentages the results for those who
cover their faces with a mask and those who do not. The goal of this research is to
develop a model capable of identifying individuals who do not wear masks in public.
This research project addresses the crisis caused by the COVID-19 virus and follows
the recommendations published by the WHO to prevent further transmission [10, 11].
Many existing works deal with face recognition, but very few articles address the
performance parameters of ML [12–18]. The objective of this work is to study a
variety of ML models for FM detection. Based on our model comparisons, we conclude
that the CNN model performs best on parameters such as accuracy, recall, precision,
and F1 score. We have used the pickle, cv2, NumPy, PIL (Python Imaging Library), and
sklearn libraries to analyze the performance parameters and compare the various
ML models.
The most important contributions that this research paper has made are outlined
in the following:

• We reviewed a large body of existing work to determine where current FM
detection approaches fall short, and we drew conclusions by comparing different
models across several parameters.
• We train the various models on 8,016 training images and evaluate them on
2,004 test images.
• To measure the ML parameters, we utilize the pickle Python library along with
a few additional libraries.
• We implement a variety of models and analyze how well each performs FM
identification.
• We analyze the performance of the DT, SVM, KNN, GNB classifier, and CNN
models in terms of accuracy, precision, recall, and F1 score, testing the models
on a variety of datasets.
• We analyze the ML parameters and identify the model that most accurately
predicts FM detection on the available dataset.

The rest of this chapter is organized as follows: the motivation for the work and
the existing literature are described first; the learning models, including DT,
SVM, KNN, GNB classifier, and CNN, are then discussed; the proposed system model is
presented next; and finally the model layout process and the comparison of
parameters across the different learning algorithms are covered.

22.2 MOTIVATION TO WORK ON FACE MASK DETECTION USING ML ALGORITHMS
Daily life has been heavily disrupted since the deadly COVID-19 virus was
discovered a year ago, and the consequences have been felt across the globe [4].
Wearing a face mask has become standard practice when going out [5]. However, a
great number of people remain unaware of the issue and refuse or neglect to wear a
FM. This research work uses AI and ML algorithms to assist in identifying
individuals who are not wearing FMs. The study applies well-known ML methods to
determine whether a person is wearing a FM, with the code executed on Google Colab.

22.3 RELATED WORKS


A CNN is a network design used in deep learning (DL) algorithms to address tasks
such as mask image identification and other tasks requiring the processing of large
datasets, and CNNs can handle a variety of ML tasks [4]. In the field of DL, various
kinds of neural networks are used to detect and recognize different kinds of
objects [5]. Enforcing mask-wearing therefore requires face detection systems that
are both automated and effective. The model described here categorizes images as
either "with mask" or "without mask," depending on whether or not they contain a
mask [6].

The system combines object categorization, image processing, and object tracking
to recognize faces with and without masks in an image dataset. It achieves a 98.66%
F1 score on FM recognition test data, with the training and testing method working
over 10,020 images. The proposed model was developed using popular DL techniques.
A substantial amount of research on the detection of FM use exists in the
literature [12–14]. Gagandeep Kaur et al. studied FM detection considering only the
CNN model, with Keras as a supporting library [13]. P. Gupta et al. demonstrated FM
detection using various ML technologies; their deep neural network (DNN)-based
measurements reached an accuracy of 97.05% [14]. K. J. Bhojane et al. performed FM
detection using MATLAB and the Raspberry Pi platform [15]. M. Rahman et al.
employed the CNN model to recognize FMs with a medium level of accuracy [16].
M. R. Bhuiyan et al. predicted FM recognition with a medium level of accuracy using
computer vision and DL models [17]. H. Goyal et al. demonstrated FM identification
using the CNN model with good accuracy [18]. Previous research has concentrated on
a few technologies for measuring accuracy, precision, and F1 score; in this study,
we apply all of the models considered, DT, SVM, KNN, GNB classifier, and CNN, and
measure the ML parameters for each.

22.4 MATERIAL AND METHODS


In this research, we used the DT, SVM, KNN, GNB classifier, and CNN for FM
identification.

22.4.1 Support Vector Machine


In a supervised ML approach, the SVM takes a predetermined, labeled dataset as
input [8]. It builds a model from this known information and uses the model to
predict the outcome for new data. Training and testing are the two primary phases
of the supervised learning process: during the training phase, the algorithm
creates the model, and during the testing phase, the same model is used to make
predictions about the output.
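A minimal scikit-learn sketch of these two phases follows; the digits dataset stands in for the FM images, and the RBF kernel and C value are illustrative defaults rather than tuned settings.

```python
# Hedged sketch: SVM training and testing phases with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # stand-in for FM image features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)                        # training phase: build the model
print("SVM test accuracy:", clf.score(X_test, y_test))  # testing phase: predict new data
```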

22.4.2 k-Nearest Neighbor


The KNN technique is one of the basic techniques of ML [12]. In pattern recognition,
KNN is utilized for both classification and regression. In both cases, the input
consists of the k training instances located closest to the query point in the
sample space, while the output depends on whether classification or regression is
being performed. KNN has been utilized in the fields of statistical estimation and
pattern recognition.
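A corresponding KNN sketch is shown below, again on stand-in data; k = 5 is an illustrative choice that should be tuned per dataset.

```python
# Hedged sketch: KNN classification, where the predicted class is a vote
# among the k nearest training samples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)  # fit() indexes the training samples; no iterative training
print("KNN test accuracy:", knn.score(X_test, y_test))
```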

22.4.3 Gaussian Naive Bayes Classifier


Bayes' theorem, used in Bayesian statistics, is the foundation of the naive Bayes
classifier, a straightforward probabilistic classifier that makes strong (naive)
independence assumptions [14]. The classifier assumes that the presence (or absence)
of one feature of a class is unrelated to the presence (or absence) of any other
feature.
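This assumption translates into a very short scikit-learn sketch; predict_proba exposes the per-class posteriors that Bayes' theorem produces.

```python
# Hedged sketch: Gaussian Naive Bayes, modelling each feature as an
# independent Gaussian per class and combining them via Bayes' theorem.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

gnb = GaussianNB().fit(X_train, y_train)
print("GNB test accuracy:", gnb.score(X_test, y_test))
print("Class posteriors for one sample:", gnb.predict_proba(X_test[:1]).round(3))
```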

22.4.4 CNN Model
There are three layers in a CNN: the input layer, the hidden layers, and the output
layer [6]. The convolutions are performed by layers within the hidden layers. A
dataset of photos is used to train the model, with the photos divided into two
classes, those with masks and those without masks, based on the information in the
dataset. The model is trained on this dataset and then deployed. The stages of the
CNN are shown in Figure 22.1; each stage has a different label: input, convolution,
pooling, flattening, and softmax.
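A hedged Keras sketch of these stages follows; the 100×100×3 input size, filter counts, and two-class softmax output are illustrative choices consistent with the mask/no-mask task, not the exact configuration used in the experiments.

```python
# Hedged sketch of the CNN stages: input, convolution+ReLU, pooling,
# flattening, and a softmax output for "with mask" vs "without mask".
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(100, 100, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # convolution + ReLU
    layers.MaxPooling2D((2, 2)),                   # pooling
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                              # 2D feature maps -> 1D vector
    layers.Dense(2, activation="softmax"),         # two classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```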

22.5 PROPOSED SYSTEM MODEL LAYOUT


As computers have become increasingly capable of complex computations, DL has grown
steadily in popularity over the last decade. Python libraries such as TensorFlow,
Keras, pickle, cv2, PIL, NumPy, and sklearn are used in this research work to train
and test all the ML techniques: DT, SVM, KNN, GNB classifier, and CNN.

FIGURE 22.1 Stages in CNN model.



FIGURE 22.2 The proposed face mask detection model layout.

The FM research is done with a dataset collected from the popular Kaggle website.
The dataset contains approximately 10,020 pictures, of which 8,016 were used for
training and 2,004 for testing. Supervised ML proceeds through thorough training
and development stages. Training: after pre-processing the dataset pictures, we
train a model by feeding them into a neural network. Once the model has been
trained, we load the FM detector, perform facial detection, and classify faces into
the categories of masked and unmasked. The proposed FM detection model layout, with
its detailed training and development stages, is shown in Figure 22.2.
For mask detection, we use the CNN on the 10,020-image Kaggle dataset. The steps
used to achieve FM detection are described in Algorithm 22.1, and Python
programming is required at each stage of the process. The main steps are importing
libraries, importing the image dataset, labeling the images, resizing the images,
plotting the results, invoking the model and applying the various CNN layers,
fitting the model, predicting with the model, testing accuracy, and evaluating
performance parameters such as accuracy, precision, F1 score, and recall; a sketch
of the first few steps follows the algorithm. The CNN model is applied in Google
Colab to the full dataset to make predictions for both individuals wearing masks
and those not wearing masks, along with the prediction accuracy on a percentage
scale.

Algorithm 22.1: CNN-Based Face Mask Detection

1. Import Libraries.
2. Import dataset of Images.
3. Labeling.
4. Image resize.
5. Plotting.
6. Call the model and proceed through the different stages: convolution, pooling,
convolution + ReLU, pooling, etc.
7. Fit the model.
8. Predict from the model.
9. Accuracy test.
10. Apply CNN model.
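The sketch below illustrates steps 1-5 of Algorithm 22.1. It assumes the Kaggle images are unpacked into two local folders named with_mask and without_mask; the folder names and 100×100 resize target are assumptions made for illustration.

```python
# Hedged sketch of Algorithm 22.1, steps 1-5: import, label, resize, plot.
import os

import cv2
import matplotlib.pyplot as plt
import numpy as np

data, labels = [], []
for label, folder in enumerate(["without_mask", "with_mask"]):  # labeling: 0 / 1
    for fname in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, fname))
        if img is None:
            continue                       # skip unreadable files
        img = cv2.resize(img, (100, 100))  # image resize to a common shape
        data.append(img)
        labels.append(label)

data = np.array(data) / 255.0  # normalize pixel values to [0, 1]
labels = np.array(labels)

# Plotting one sample as a sanity check (cv2 loads BGR; matplotlib expects RGB)
plt.imshow(cv2.cvtColor((data[0] * 255).astype(np.uint8), cv2.COLOR_BGR2RGB))
plt.title(f"label = {labels[0]}")
plt.show()
```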

22.5.1 Design Requirement
A Python environment such as Google Colab must first be able to read the models and
the dataset. The collected dataset images, labeled "with mask" or "without mask,"
must be converted to 2D arrays so that they can be used on any computer system (see
Figure 22.3). All models, DT, SVM, KNN, GNB classifier, and CNN, are expected to
report the ML parameters, and the model is expected to predict "with mask" and
"without mask" with high accuracy. To incorporate the dataset into Google Colab, we
use an application programming interface (API) to access it. JupyterLab or Google
Colab is used to run all the ML algorithms.

22.5.2 Model Layout Process


In this section, we discuss the model layout process used to train and test the CNN
model. First, input images are taken; then convolution + ReLU (activation function)
and pooling are applied, and this pair of operations is repeated until the required
feature maps for the model are obtained. Flattening then converts the 2D arrays to
a 1D array, and finally a softmax layer produces the class probabilities used to
minimize the loss. The complete block diagram of the CNN, with its many stages, is
displayed in Figure 22.4.
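Fitting ties the earlier sketches together; the snippet below reuses data and labels from the Algorithm 22.1 sketch and model from the Section 22.4.4 sketch, with an illustrative epoch count and batch size.

```python
# Hedged sketch: split and fit, reusing `data`/`labels` (Algorithm 22.1 sketch)
# and `model` (Section 22.4.4 sketch).
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    data, labels, test_size=0.2, random_state=42)  # roughly 8,016 train / 2,004 test

history = model.fit(X_train, y_train, epochs=20,
                    validation_split=0.2, batch_size=32)
print("Held-out accuracy:", model.evaluate(X_test, y_test)[1])
```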

FIGURE 22.3 Image plot using the Matplotlib library.

FIGURE 22.4 Block diagram of CNN-based face mask detection.



22.6 PERFORMANCE PARAMETERS OF THE MACHINE LEARNING MODEL

Classical ML depends on hand-crafted feature engineering, which requires expertise;
the DL-based algorithms used here for FM identification learn their features from
the data. The first stage of the proposed task is feature detection (FD), and
accurate FD is necessary for identifying FMs. FD is a computer vision step that
extracts detectable frontal faces from images of people. After the face discovery
step, it is used for training and testing the CNN-based FM recognition technique.
The standard formulas used to evaluate the ML models are given in Table 22.1, where
TP, TN, FP, and FN denote true positives, true negatives, false positives, and
false negatives.
TABLE 22.1
Standard Equation for Parameter Calculation

Parameter    Standard Equation
Precision    TP / (TP + FP)
Recall       TP / (TP + FN)
F1 score     2 × (Precision × Recall) / (Precision + Recall)
Accuracy     (TP + TN) / (TP + FP + FN + TN)
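These formulas map directly onto scikit-learn's metric functions, as the short sketch below shows for a handful of illustrative labels (1 = with mask; the label values are made up for demonstration).

```python
# Hedged sketch: computing the four parameters of Table 22.1 with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # illustrative ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # illustrative model predictions

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
```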

TABLE 22.2
Performance Parameter Comparison

Performance Parameter                    DT      SVM     KNN     GNB     CNN
Accuracy (%)                             81.63   95.11   75.99   78.24   98.103
Precision (%)                            81.4    96      95.41   77.67   97.25
Recall (%)                               81.72   96      95.41   76.205  99.001
F1 score (%)                             81.72   96      76      78.20   98.12
% improvement of accuracy w.r.t. CNN     16.14   36.86   23.24   21.13   -
% improvement of precision w.r.t. CNN    16.56   3.030   23.43   21.31   -
% improvement of recall w.r.t. CNN       15.87   2.69    23.17   21.04   -
% improvement of F1 score w.r.t. CNN     16.27   2.69    23.17   21.04   -

FIGURE 22.5 SVM-based face mask detection.

As shown in Table 22.2, different metrics measure the performance of the various ML
models. According to our comparison, the CNN model performs better than the DT,
SVM, KNN, and GNB classifiers. SVM-based FM recognition is shown in Figure 22.5.

22.7 RESULTS AND DISCUSSION


A CNN is an artificial neural network (ANN) designed to process pixel input for
image recognition and processing, and it is one of the most fundamental components
used in the computer vision task of image segmentation. The CNN is implemented here
using Python libraries such as pickle, sklearn, cv2, and pandas. The pre-processing
function is then invoked to obtain the results for individuals wearing masks and
those not wearing them, together with the accuracy, precision, recall, and F1
score, in percentages, for both groups. This study relates a number of performance
parameters across a variety of ML models, including DT, SVM, KNN, GNB, and CNN. The
comparison indicates that the CNN model offers the best parameter values in
percentage terms. These performance characteristics are listed in Table 22.2, and a
graphical bar diagram for the models is displayed in Figure 22.6.
The FM implementation uses a CNN-based architecture and Python modules to train and
test a supervised model, assess the performance parameters for this study, and
produce FM detection results with and without a mask. Figure 22.7 illustrates how
the epoch value and loss value interact, making it evident that as the epoch value
rises, the loss value falls. At the core of this study, the CNN is implemented on
the collected dataset using several libraries. After execution, we analyze how
accuracy behaves across epochs: Figure 22.8 shows that the accuracy value remains
roughly constant over time even as the epoch value increases.
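Plots such as Figures 22.7-22.10 can be produced from the History object that Keras returns from model.fit; a minimal sketch, assuming training was run with a validation split as in the earlier fitting snippet, is:

```python
# Hedged sketch: plotting training curves from a Keras History object, e.g.
#   history = model.fit(X_train, y_train, epochs=20, validation_split=0.2)
import matplotlib.pyplot as plt

def plot_history(history):
    """Plot whichever of the four standard curves the run recorded."""
    for key in ["loss", "accuracy", "val_loss", "val_accuracy"]:
        if key in history.history:
            plt.plot(history.history[key], label=key)
    plt.xlabel("epoch")
    plt.legend()
    plt.show()
```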

FIGURE 22.6 Performance comparison with models.

Recognizing faces with and without masks requires supervised ML models and a large
amount of training data. All input photos are converted to 2D arrays, and both the
original and transformed data are returned. The input images are fed to the
supervised ML models via the pickle, cv2, and sklearn image data utilities. Figure
22.9 shows that the val_loss value is very large when the epoch count is small.

FIGURE 22.7 Visualization of the relationship between epoch value and loss value using
the CNN model.

FIGURE 22.8 Visualization of the relationship between epoch value and accuracy value
using the CNN model.

FIGURE 22.9 Visualization of the relationship between epoch value and val_loss value
using the CNN model.

The CNN architecture displays the many operations, including convolution, pooling,
and softmax, used in the research project, and the thorough design demonstrates the
model's full functionality for FM recognition, both with and without a mask. The
relation between the epoch value and the val_acc value is depicted in Figure 22.10:
val_acc is high at high epoch counts.
Python code is used to develop the FM detection ML models, run in Jupyter Notebook
or Google Colab. Executing the KNN model on the dataset (5,020 images without masks
and 5,000 with masks) yields an accuracy of 75.99%. Figure 22.11 shows FM
recognition using the KNN model.

FIGURE 22.10 Visualization of the relationship between epoch value and val_acc value
using the CNN model.

FIGURE 22.11 Successful recognization of face mask using the KNN model.

FIGURE 22.12 CNN model trained for various epochs.

The CNN model, with its convolution, pooling, convolution + ReLU, pooling, and
output prediction stages, is implemented in Python code executed in Google Colab.
The CNN diagram illustrates the various processes carried out in the research, and
the detailed design shows the full workings of each layer, giving a better
understanding of the research work and helping to carry it out effectively. The
detailed CNN model trained for various epochs is shown in Figure 22.12, and the CNN
model accuracy report is shown in Figure 22.13.
The application was developed with Python code in a Jupyter Notebook. Several ML
models are used in this research, namely DT, SVM, KNN, GNB classifier, and CNN, to
detect faces with and without masks. Code has been written to compute F1 score,
accuracy, precision, and recall and to visualize the confusion matrix, as sketched
below.
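A minimal sketch of that visualization follows; the digits dataset and an SVC classifier stand in for the FM data and models, since ConfusionMatrixDisplay works identically for any fitted scikit-learn classifier.

```python
# Hedged sketch: visualizing a confusion matrix as in Figure 22.14.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = SVC().fit(X_train, y_train)

ConfusionMatrixDisplay.from_predictions(y_test, clf.predict(X_test))
plt.title("SVM confusion matrix (stand-in data)")
plt.show()
```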
The confusion matrices for the CNN, KNN, GNB, DT, and SVM models are visualized in
Figure 22.14. The CNN is the most fundamental element of the computer-vision-based
image segmentation performed here. The dataset, obtained via Kaggle, is divided
into two distinct classes: "with mask" and "without mask." Regarding overall
performance, CNN performs best among all the models; this is the primary conclusion
drawn from the research study across the many algorithms and implementations
presented. Figure 22.15 shows the successful recognition of the face mask and the
CNN model summary report.

FIGURE 22.13 CNN model accuracy report.



FIGURE 22.14 Visualization of confusion matrix: (a) CNN, (b) KNN, (c) GaussianNB,
(d) DT model, and (e) SVM.

FIGURE 22.15 Recognition of face mask and CNN model summary report.

22.8 CONCLUSIONS
In the last few years, COVID-19 has spread globally, and controlling it is
essential if we are to return to everyday life. Properly applied, the results of
this research can help identify those who are not wearing masks. For the model to
operate effectively, roughly 80% of the data was used for training and 20% for
testing. Applied across a wide range of public gatherings, such as congested areas,
this work makes it much easier to identify those violating mask requirements. In
tests performed on the 10,020-image dataset, the proposed CNN model achieved an
accuracy rate of 98.57%. This chapter compares the performance parameters of
different ML models, DT, SVM, KNN, GNB classifier, and CNN, with the main
contribution of determining the best model. A further contribution is the use of
libraries including cv2, pickle, NumPy, and sklearn to illustrate all confusion
matrices and to measure the relationships between the epoch value and the accuracy,
val_loss, and val_acc values. In the future, this work will be extended to cloudy
and foggy environments, which may make it more difficult for the proposed approach
to detect faces.

REFERENCES
1. Zhou, Z.H., 2021. Machine learning. Springer Nature.
2. Kumar, B.A. and Misra, N.K., 2024. Masked face age and gender identification using
CAFFE-modified MobileNetV2 on photo and real-time video images by transfer learn-
ing and deep learning techniques. Expert Systems with Applications, 246, p. 123179.
3. Loey, M., Manogaran, G., Taha, M.H.N. and Khalifa, N.E.M., 2021. A hybrid deep
transfer learning model with machine learning methods for face mask detection in the
era of the COVID-19 pandemic. Measurement, 167, p. 108288.
4. Vijitkunsawat, W. and Chantngarm, P., 2020, October. Study of the performance
of machine learning algorithms for face mask detection. In 2020-5th International
Conference on Information Technology (InCIT) (pp. 39–43). IEEE.
5. Sethi, S., Kathuria, M. and Kaushik, T., 2021. Face mask detection using deep learning:
An approach to reduce risk of Coronavirus spread. Journal of Biomedical Informatics,
120, p. 103848.
6. Pooja, S. and Preeti, S., 2021. Face mask detection using AI. Predictive and Preventive
Measures for Covid-19 Pandemic, pp. 293–305.
7. Kodali, R.K. and Dhanekula, R., 2021, January. Face mask detection using deep learn-
ing. In 2021 International Conference on Computer Communication and Informatics
(ICCCI) (pp. 1–5). IEEE.
8. Asif, S., Wenhui, Y., Tao, Y., Jinhai, S. and Amjad, K., 2021, May. Real time face mask
detection system using transfer learning with machine learning method in the era of
COVID-19 pandemic. In 2021 4th International Conference on Artificial Intelligence
and Big Data (ICAIBD) (pp. 70–75). IEEE.
9. Boulila, W., Alzahem, A., Almoudi, A., Afifi, M., Alturki, I. and Driss, M., 2021,
December. A deep learning-based approach for real-time facemask detection. In 2021
20th IEEE International Conference on Machine Learning and Applications (ICMLA)
(pp. 1478–1481). IEEE.

10. Sen, S. and Sawant, K., 2021, February. Face mask detection for covid_19 pandemic
using PyTorch in deep learning. In IOP Conference Series: Materials Science and
Engineering (Vol. 1070, No. 1, p. 012061). IOP Publishing.
11. Dondo, D.G., Redolfi, J.A., Araguás, R.G. and Garcia, D., 2021. Application of deep-
learning methods to real time face mask detection. IEEE Latin America Transactions,
19(6), pp. 994–1001.
12. Mbunge, E., Simelane, S., Fashoto, S.G., Akinnuwesi, B. and Metfula, A.S., 2021.
Application of deep learning and machine learning models to detect COVID-19 face
masks-A review. Sustainable Operations and Computers, 2, pp. 235–245.
13. Kaur, G., Sinha, R., Tiwari, P.K., Yadav, S.K., Pandey, P., Raj, R., Vashisth, A. and
Rakhra, M., 2022. Face mask recognition system using CNN model. Neuroscience
Informatics, 2(3), p. 100035.
14. Gupta, P., Saxena, N., Sharma, M. and Tripathi, J., 2018. Deep neural network for
human face recognition. International Journal of Engineering and Manufacturing
(IJEM), 8(1), pp. 63–71.
15. Bhojane, K.J. and Thorat, S.S., 2018. Face recognition based car ignition and secu-
rity system. International Research Journal of Engineering and Technology (IRJET),
5(05), pp. 3565–3668.
16. Rahman, M.M., Manik, M.M.H., Islam, M.M., Mahmud, S. and Kim, J.H., 2020,
September. An automated system to limit COVID-19 using facial mask detection in
smart city network. In 2020 IEEE International IOT, Electronics and Mechatronics
Conference (IEMTRONICS) (pp. 1–5). IEEE.
17. Bhuiyan, M.R., Khushbu, S.A. and Islam, M.S., 2020, July. A deep learning based
assistive system to classify COVID-19 face mask for human safety with YOLOv3. In
2020 11th International Conference on Computing, Communication and Networking
Technologies (ICCCNT) (pp. 1–5). IEEE.
18. Goyal, H., Sidana, K., Singh, C., Jain, A. and Jindal, S., 2022. A real time face
mask detection system using convolutional neural network. Multimedia Tools and
Applications, 81(11), pp. 14999–15015.
Index
A
Accuracy, 24
Adaptive manufacturing, 126
Adaptive supply chain management, 264
Advanced technologies, 94
AI and ML, 71
AI integration, 46
AI optimization, 352
AI-based project, 24
AI-driven energy management, 353
AI-driven IoT systems for Industry 4.0, 26
Algorithmic bias, 199
Algorithms, 39
Anaconda, 28
Android software, 39
Anomaly detection, 128
Application cooperation, 247
Artificial Intelligence (AI), 42
Artificial Neural Networks (ANNs), 206
Asset optimization, 267
Augmented reality (AR), 50
Autonomic nervous system, 289
Autonomous robots, 189
Autonomous systems, 71
Availability, 62

B
Bias in AI algorithms, 24, 376
Big data, 54
Big data analysis, 97
Big data analytics, 70
Blockchain, 80
Blockchain deployment, 344
Blockchain technology, 176
Bosch, 128

C
Cartography, 38
Cervical cell images, 323
Change management, 76
Chronic stress, 291
Cloud and edge computing, 72
Cloud computing, 42
Cloud manufacturing (CMfg), 275
Cognitive automation, 43
Cognitive systems, 260
Collaboration, 52
Collaboration and communication, 89
Comparative analysis, 27
Complex systems, 51
Computer vision, 24
Convolutional Neural Network(s) (CNNs), 51, 382
Cost savings, 83
Covid-19, 99
CPU, 25
Customer demands, 256
Cyber-physical system(s) (CPS), 53, 82
Cybersecurity, 75
Cybersecurity and data privacy challenges, 135

D
Data analysis, 54
Data analytics, 70
Data classification, 303
Data collection, 56
Data deluge management, 364
Data distribution, 226
Data flow, 149
Data fusion, 179
Data governance, 137
Data integration, 85
Data interoperability, 342
Data management, 70
Data mining, 175
Data preprocessing, 77
Data processing, 72
Data protection, 91
Data quality, 92
Data security, 77
Data storage, 77
Data synchronization, 244
Data transformation, 194
Data-driven insights, 81
Decentralized, 174
Decentralized decision-making, 42
Decision boundary, 299
Decision support systems, 185
Decision tree (DT), 382
Decision-making, 51
Deep learning, 56
Deep learning algorithms, 63
Deep learning models, 206
Deep learning techniques, 59
Demand forecasting, 78
Demand management, 261
Demand response, 352
Demand response programs, 362
Detection and prevention, 103
Digital elevation model (DEM), 207
Digital platforms, 266
Digital technologies, 117
Digital transformation, 85
Digital twin and simulation, 90
Digital twins, 79
Dimensionality reduction, 194
Distributed data, 202
Distributed ledger, 176
Documentation, 86

E
Edge computing, 72
Efficiency and accuracy, 85
Efficiency enhancement, 159
Employee well-being, 101
Enhanced customer experience, 85
Ensemble learning, 110
Environmental impact, 76
Environmental sustainability, 349
Ethics, 55
Event processing, 246

F
Facial detection, 386
Feature detection (FD), 388
Feature extraction, 106
Feature selection, 194
Fourth Industrial Revolution, 70
Fraud detection, 118
Future research, 137
Future scope, 68

G
Galvanic skin response (GSR), 289
Gaussian Naive Bayes (GNB) classifier, 382
Gaussian RBF SVM, 322
Gender roles, 291
Gesture recognition systems, 27
Gesture-based human-computer interaction, 23
Google assistant, 27

H
Haar cascade, 58
Hand detection, 23
Hand gesture recognition, 23
Health concerns, 291
Healthcare, 53
Healthcare monitoring, 83
Holistic skill development, 151
Human dignity, 53
Human labor, 144
Human-machine collaboration, 81
Human-machine interaction, 72
Hyperledger, 341

I
Image classification, 194
Image processing, 56
Image registration, 329
Image resizing, 209
Image segmentation, 208
Industrial automation, 126
Industrial Internet of Things (IIoT), 181
Industrial robotics, 254
Industrial sensors, 170
Industry 4.0, 52
Industry 5.0, 183
Integration, 52
Integration challenges, 91
Interconnected devices, 159
Internet of Medical Things (IoMT), 292
Internet of Things (IoT), 70
Interoperability, 75
IoT, 53
IoT Devices, 74
IoT in Industry 4.0, 75
IoT integration, 75
IoT platform setup, 280
IoT technology, 99

J
Jupyter Notebook, 391

K
K-Nearest Neighbor, 316
Key generation, 2
KNN algorithm, 317
KUKA robot, 284

L
LabelImg, 24
Lack of skilled workforce, 91
LightGBM, 300
Linear discriminant analysis (LDA), 299

M
Machine control, 131
Machine interaction, 72
Machine learning, 317
Machine learning algorithms, 367
Machine learning techniques, 368
Machinery failures, 367
Machine-to-machine, 342
Malicious QR code, 103
MATLAB, 384
Medical diagnosis, 329
Medical treatment, 57
Mental health, 55
Mental stress, 289
Model, 320
Model training, 210
Multidisciplinary approach, 182
Multi-layer perceptron neural networks (MLPNN), 299

N
Natural language processing (NLP), 186
Noise reduction, 216
Non-invasive, 302

O
Object detection, 65
OpenCV, 61
Operational efficiency, 74
Optimization, 57
Outsourcing, 96

P
Personalized experiences, 159
Predictive analysis, 68
Predictive analytics, 79
Predictive insights, 93
Predictive maintenance, 70
Predictive models, 353
Prescriptive analytics, 186
Principal component analysis (PCA), 209
Process design, 353
Process optimization, 70
Production lines, 74
Production management, 240
Proximity sensors, 171
Python programming, 386

Q
QRLjacking, 105
Quality control, 71
Quality management, 251
Quishing attack, 105

R
Random forest algorithm, 109
Random forest classifier, 110
Real-time data, 79
Real-time data collection, 94
Real-time monitoring, 70
Real-time performance, 180
Real-time tracking, 191
Regression model, 61
Regularization, 321
Regulatory Compliance, 76
Remote monitoring and control, 81
Remote sensing, 214
Renewable energy, 351
Renewable energy integration, 362
Resource allocation, 123
Resource optimization, 145
Risk management, 77
Rule-based engine, 249

S
Scalability, 85
Secure protocols, 347
Security, 77
Security and privacy, 91
Security protocols, 43
Sensor data, 78
Sensor networks, 102
Sensor technologies, 239
Sensor types, 169
Sensors, 77
Sentiment analysis, 192
Setup, 123
Sign language, 23
Single-shot multibox (SSD), 26
Skill gap, 136
Skin temperature (ST), 289
Smart factory, 161
Smart factory architecture, 277
Smart manufacturing, 83
Smart sensors, 145
Supervised learning, 194
Supply chain, 78
Supply chain disruptions, 261
Supply chain integration, 255
Supply chain management, 264
Supply chain optimization, 74
Sustainability, 255
Sustainable future, 352
Sympathetic nervous system, 289
Synthetic data, 62

T
TCP/IP, 282
Tensorflow libraries, 24
Training dataset, 382
Transfer learning, 317
Transparency, 341
True dark factories, 307

U
Uncertainty, 60
Unsupervised learning, 382

V
Virtual reality (VR), 50
Visual inspections, 48

W
Wireless communication, 319
Wireless sensor networks, 346
Workflow, 355
Workforce displacement, 313
