
Trustworthy XAI and Application

MD Abdullah Al Nasim1*, Parag Biswas2, Abdur Rashid3, Angona Biswas4, Kishor Datta Gupta5

1,4 Research and Development Department, Pioneer Alpha, Dhaka, Bangladesh.
2,3 MSEM Department, Westcliff University, California, United States.
5 Department of Computer and Information Science, Clark Atlanta University, Georgia, USA.

arXiv:2410.17139v1 [cs.AI] 22 Oct 2024

*Corresponding author(s). E-mail(s): [email protected];
Contributing authors: [email protected]; [email protected]; [email protected]; [email protected];

Abstract
One of today's most significant and transformative technologies is the rapidly
developing field of artificial intelligence (AI). Defined as a computer system that
simulates human cognitive processes, AI is present in many aspects of our daily
lives, from the self-driving cars on the road to the virtual assistants in our smartphones.
The term "black box" is often used to describe AI because some AI systems are so
complex and opaque. With millions of parameters and layers, these systems, deep
neural networks in particular, make it difficult for humans to comprehend how they
reach their decisions. Even when AI produces correct findings, the opaqueness of its
decision-making process raises questions of accountability, bias, and fairness. AI thus
holds great potential, but it also brings difficulties and moral dilemmas. In the context of
explainable artificial intelligence (XAI), trust is crucial, as it ensures that AI systems
behave consistently, fairly, and ethically. In the present article, we explore XAI,
trustworthy XAI, and several practical uses of trustworthy XAI. We also discuss the
three main components of XAI, transparency, explainability, and trustworthiness,
that we identified as pertinent in this context. We present an overview of recent
scientific studies that employ trustworthy XAI in relation to these fundamental
components, as well as an analysis of how trustworthy XAI is applied in various
application fields. Ultimately, trustworthiness is crucial for establishing and maintaining
trust between humans and AI systems, facilitating the integration of AI systems into
various applications and domains for the benefit of society.

Keywords: Artificial Intelligence (AI), Trustworthy XAI, Explainable Artificial Intelligence (XAI), Healthcare, Autonomous Vehicles

1 Introduction
Philosophers who sought to characterize human thought as the mechanical manip-
ulation of symbols laid the foundations of modern-day artificial intelligence. These
efforts resulted in the invention of the programmable digital computer [1] in the 1940s.
Though it is now lost, Alan Turing may have written the first article on the subject
of artificial intelligence in 1941, indicating that he was at least thinking about
the concept at that time. The public was first introduced to Turing's notion of the
Turing test in his seminal 1950 paper "Computing Machinery and Intelligence" [2],
in which Turing examined the question of whether machines can think. The term
artificial intelligence (AI) was first used in 1950 [3], but its general adoption and use
in healthcare have been hindered by many shortcomings of the original models. The
introduction of deep learning in the early 2000s eliminated many of these limitations.
We are entering a new era of technology where AI can be used in clinical practice
through risk assessment models that improve workflow efficiency and diagnostic accu-
racy. AI systems can now analyze complex algorithms and learn on their own. The
performance of AI systems has improved significantly in recent years. These new mod-
els expand on their capabilities to include text-image synthesis based on nearly any
prompt, whereas previous systems focused primarily on generating facial images.
The applicability and potential of artificial intelligence (AI) to transform business
is already evident in the wide range of areas in which it is applied: In the field of
natural language processing (NLP) [5], artificial intelligence (AI) makes it easier for
computers to understand and generate human language, enabling tasks such as senti-
ment analysis, machine translation, and spam filtering. Furthermore, computer vision
[6] enables computers to comprehend visual information, promoting developments
in fields like object identification, facial recognition, and self-driving automobiles.
Computers can now learn from data thanks to machine learning (ML), which has
applications in fraud detection, recommendation systems, predictive analytics, and
other fields. Robotics [7] is a branch of artificial intelligence that deals with the design,
development, and use of machines. These machines are used in a variety of indus-
tries, including space exploration, manufacturing, healthcare [8], [9], and many more.
Additionally, the incorporation of AI into business intelligence (BI) [10] signals better
data gathering, processing, and display, which promotes data-driven decision-making
and increases efficiency. AI offers advances in patient outcomes and medical develop-
ments in the healthcare industry [11],[12], [13] by assisting in illness diagnosis, therapy
development, and tailored care. AI’s promise in education is evident in its ability to
personalize instruction, engage students, and automate administrative tasks—all of

Fig. 1 Domains throughout which artificial intelligence finds applications. [4]

which open the door to more customized learning opportunities. Through data-driven
tactics, artificial intelligence (AI) in agriculture maximizes crop productivity, lowers
expenses, and assures environmental sustainability. Similar to this, AI in manufactur-
ing uses task automation and process optimization to increase productivity, efficiency,
and quality. AI’s influence is felt not just in these domains but also in banking, retail,
energy, transportation [14], handwriting detection [15], and government, where it is
transforming operations, improving services, and reshaping global industry landscapes.
Figure 1 illustrates the wide range of industries in which artificial intelligence finds
widespread use. These industries include retail, security, healthcare, e-commerce,
manufacturing, finance, transportation and logistics, and home furnishings. These
applications depend on mature AI technologies, including ML, NLP, and computer
vision.

1.1 Third Wave of Artificial Intelligence (3AI)


The majority of commercial AI technology today is classified as ”narrow AI,” meaning
that it consists of extremely specialized systems that excel at a limited number of
clearly defined jobs and nothing else. Even the most amazing autonomous cars need
a combination of limited artificial intelligence algorithms. The reliance of modern AI
on enormous training data sets is another drawback. For example, a three-year-old

Fig. 2 Past, Present, and Future of AI waves. [16]

child can recognize cats from only a few instances, while a standard machine learning
algorithm would need to be given tens of thousands of cat photographs before it can
recognize them with any degree of precision. The idea of ”Third Wave AI” emerges
because artificial intelligence has to become more humanlike in a number of ways in
order to overcome these constraints and realize its full potential.
According to the Defense Advanced Research Projects Agency (DARPA) [17],
third-wave AI systems will be able to understand context, apply that contextual aware-
ness with common sense, and adapt to changing conditions. This will make it possible
for AI systems and human users to connect in a more organic and intuitive way [17].
One of the few DARPA projects underway, XAI is anticipated to pave the way for
”third-wave AI systems,” in which computers comprehend the context and surround-
ings in which they function and gradually develop the underlying explanatory models
necessary to describe occurrences in the actual world.
• First Wave AI was centered on rules, logic, and constructed knowledge.
• Big data, statistical learning, and probabilistic techniques were introduced by
Second Wave AI.
• The goal of third-wave AI is to develop common sense and contextual adaption
skills.
According to Tractica [16], the worldwide market for artificial intelligence software
is predicted to generate revenues of 118.6 billion US dollars by 2025, up from around
9.5 billion US dollars in 2018. Alongside this growth, increasing attention is being paid
to developing AI systems that not only perform tasks accurately but also explain their
decisions and actions in ways that humans can understand.
The third wave of AI refers to the evolution of AI technologies beyond traditional
machine learning methods towards more advanced approaches that integrate reason-
ing, context awareness, and human-like understanding. This wave aims to develop AI
systems capable of understanding and interacting with the world in more nuanced and
sophisticated ways, often drawing inspiration from cognitive science and neuroscience.
Explainable AI (XAI), on the other hand, focuses on improving the transparency
and interpretability of AI systems, especially machine learning models, so that humans

can understand and trust the decisions made by these systems. XAI techniques aim to
provide explanations of AI predictions and actions, enabling users, including develop-
ers, regulators, and end users, to understand the underlying considerations and factors
that influence AI outcomes.
The relationship between the third wave of AI and XAI lies in their shared goal
of creating AI systems that are not only capable of making accurate predictions or
decisions but also capable of providing meaningful explanations for those predictions
or decisions. By incorporating XAI techniques into the development of third-wave
AI systems, developers can ensure that these systems are not only advanced in their
capabilities but also transparent and interpretable, fostering trust and acceptance
among users.

1.2 Concept of Explainable AI


Artificial Intelligence is often criticized for being hard to explain. Many opponents
argue that it is hard to trust the results of an AI if one does not know how it can
arrive at a certain result or conclusion. This issue becomes particularly problematic
when AI-based programs and systems are unable to accomplish their goals. Developing
explainability is necessary to boost public confidence in the computational execution.
Figure 3 shows the AI in use today and the AI expected to be used tomorrow. If we
are to hold the system accountable, we must take steps to address and minimize input
inefficiencies [18].
Artificial intelligence is known to suffer from the ”black box” syndrome due to
a lack of understanding of how the system works. This has ramifications for human
confidentiality, arbitrary discrimination, obfuscation, and legitimacy. There are often
underlying prejudices and tendencies present along with this lack of openness. By
enhancing users’ comprehension of how AI-powered systems reason, XAI seeks to
improve their performance. The goal of Transparent AI is to make artificial intelligence
(AI) safer and more accessible than ever before [18]. Therefore, each of transparent
AI's (TAI's) diverse features has to be considered separately, and their many facets
need to be discussed.
AI’s use of machine learning (ML) techniques falls into two categories: white box
and black box [19]. The results of a white box model are understandable to a subject
matter expert. In contrast, a black box model is very difficult to explain and can be con-
fusing even to domain experts [20]. The three criteria of interpretability, explainability,
and transparency are adhered to by the XAI algorithm [21]. A model is transparent
when the underlying principles of the machine learning model and decision-making
can be understood ”when the process of extracting model parameters from training
data and generating labels from test data is described and justified by the designers
of the approach. ” Because communicating in a way that people can understand is
called interpretability [22]. Currently, there is no universally accepted definition of the
concept of explainability, but its importance is recognized. An alternative is ”a col-
lection of interpretable features of a domain that can help generate a decision (such
as classification or regression) about a given example.” If an algorithm follows these
guidelines, it provides a basis for recording and validating the decision, improving the
algorithm, and discovering new information.
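To make the white-box idea concrete, the short sketch below (an illustrative example on a public dataset, not code from the cited works) fits a linear model whose learned coefficients a domain expert can read directly; a deep network with millions of parameters offers no such direct reading:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative white-box example (assumed dataset and model, not from the cited works).
data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names

# A linear model: each input feature receives a single coefficient that can be inspected.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_[0]
# Sort by absolute weight: sign and magnitude show how each feature pushes the decision.
for name, w in sorted(zip(names, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name:25s} weight = {w:+.3f}")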

Fig. 3 Explainable Artificial Intelligence (XAI): A look at AI now and tomorrow. [18]

Fig. 4 Performance of artificial intelligence vs explainability. [23]

From an explainability perspective, intelligent systems are an active and important
research topic. Formal compliance may occasionally depend on being able to understand
the system. Many black-box algorithms, as shown in Figure 4, present a trade-off
between high learning performance (accuracy) and explainability.

1.3 Classification Tree of XAI


XAI techniques are divided into two categories: transparent and post-hoc methods.
A transparent approach is one that represents the model's capabilities and decision-
making process in an easy-to-understand way [24]. Transparent models include
Bayesian approaches, decision trees, linear regression, and fuzzy inference systems.
Transparent approaches are most useful when the internal feature relationships are
relatively simple or linear. A comprehensive classification of different XAI methods and
approaches related to different types of data is shown in Figure 5 [24].
Fig. 5 XAI categorization according to data type [24]

Post-hoc approaches are useful for interpreting the complexity of a model, especially
when there are nonlinear relationships or high data complexity. When a model
does not follow a direct relationship between data and features, post-hoc techniques
can be an effective tool to explain what the model has learned [24]. In contrast,
transparent methods such as Bayesian classifiers, support vector machines, logistic
regression, and K-nearest neighbors provide inference using local feature weights. This
model category meets three properties: simulability, decomposability, and algorithmic
transparency [24].
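As a minimal illustration of a post-hoc technique (a sketch of permutation feature importance, not an example taken from [24]), the snippet below probes a trained black-box model by shuffling one feature at a time on held-out data and measuring how much the accuracy drops:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative post-hoc example: explain a black-box model without opening it up.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Shuffle each feature on held-out data and record how much the accuracy drops.
result = permutation_importance(black_box, X_te, y_te, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
print("Most influential feature indices:", top)
print("Mean accuracy drop when shuffled:", result.importances_mean[top].round(4))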

1.4 Definition of Transparency in Artificial Intelligence


In explainable artificial intelligence (XAI), transparency refers to the ability of an AI
system to provide understandable justification for its decisions and actions [25]. In
fact, transparency is one of the key components of explainable artificial intelligence
(XAI). In many real-world applications, particularly those with large social implica-
tions, deciphering the reasoning behind an AI system’s choice is just as crucial as the
decision itself. Assuring transparency in XAI helps prevent AI systems from being
viewed as ”black boxes” and instead as instruments that enable users to get insightful
and informative responses [26].
The general transparency of AI systems is further enhanced by the insights offered
by XAI approaches. Users have the capacity to scrutinize the decision-making proce-
dures, detect any partialities, and evaluate the dependability and equity of the model’s
results [27]. Transparent systems are essential for guaranteeing accountability and
ethical concerns in fields where the implications of AI choices might have large reper-
cussions, such as healthcare, finance, and autonomous cars. XAI approaches enable
users to find patterns, comprehend relationships, and discover any flaws or biases by
providing insightful information about the inner workings of AI models [27]. Stake-
holders are better equipped to make judgments, confirm that the model’s predictions
are accurate, and take necessary action as a result of the improved openness.
The examination of the ethical criteria revealed a correlation between explainability
and transparency and a number of other quality needs. The nine quality standards
pertaining to explainability and openness are shown in Figure 6.

Fig. 6 Quality standards of explainability and openness (+ supports; – conflicts with). [28]

The growth of AI systems’ explainability and transparency is facilitated by their


understandability. When discussing the significance of understandability, the trans-
parency guidelines addressed three points: 1) ensuring that people comprehend the AI
system’s behavior and the methods for using it (O5, O12); 2) communicating in an
intelligible manner the locations, purposes, and methods of AI use (O15); and 3) mak-
ing sure people comprehend the distinction between real AI decisions and those that
AI merely assists in making (O2) [28]. Thus, by guaranteeing that people are informed
about the use of AI in a straightforward and comprehensive manner, understandabil-
ity promotes explainability and transparency. The necessity of tracking the decisions
made by AI systems is highlighted by traceability in transparency requirements (O2,
O12) [28]. In order to ensure openness, Organization O12 also noted how crucial it is
to track the data utilized in AI decision-making.

1.5 Transparency Vs Explainability in AI


Transparency and explainability can be compared [29]. When an AI is transparent,
the ”basic elements of data and decisions must be available for inspection during and
after AI use,” according to McLarney et al. [30]. Transparency exists when a user can
observe how decisions are made or have access to their data. Explainability, on the
other hand, is about understanding why AI succeeds or fails and revealing how it draws
on the knowledge and decision-making processes of the people it will affect. It provides
a rational explanation for the AI’s actions. Users need to be able to understand what
data is being collected, how the AI program processes it, and how it produces trust-
worthy outcomes for each individual affected. This straightforward explanation ignores
the challenges we confront in simplifying ”black box” algorithms, the context that is
lost, and the accuracy needed when giving consumers clear explanations. The question
thus becomes, is minimal explainability preferable to nothing? [30]. Other important
factors to consider include the belief that explanations can adequately account for the
dynamic nature of the rich information ecosystem and the appropriateness of dealing
with anomalies.
Interestingly, while certain AI algorithms analyze data automatically, an increas-
ing number of AI systems are designed to explain how their algorithms work and

Fig. 7 Output from the Bing search engine's conversation feature explaining a failure. A partial
screenshot taken using an Android smartphone on March 2, 2023. [17]

the reasoning behind certain decisions [17]. For example, the Bing search engine’s
conversation mode offers brief explanations of how it works (Fig. 7). End users may
occasionally find these explanations enough, but occasionally they may get confused
about how an AI arrived at a specific conclusion or behaved in a specific way. It is
impractical to expect people to become more computer literate when they are more
perplexed by the explanation that is provided [17]. Rather, we need to enhance either
the AI system or the explanation itself.

1.6 Definition of Trustworthiness in Artificial Intelligence


Artificial intelligence (AI) systems that incorporate trustworthiness must take a multi-
faceted approach that takes organizational, ethical, and technical factors into account.
Establishing criteria for assessing trustworthiness is the first step in this process. These
measures should include accountability, security, privacy, openness, fairness, and eth-
ical compliance. Foundational elements include transparent algorithms that provide
intelligible justifications for AI-driven judgments and high-quality, impartial data.
Strong security measures and privacy-preserving strategies protect sensitive data and
fight off online attacks. Responsible AI usage is encouraged through the establishment
of accountability systems and adherence to moral principles and governance structures.
User-centric design, ongoing observation, and training guarantee that AI systems sat-
isfy users’ demands while developing over time to retain their credibility. Organizations
may create trustworthy, transparent, equitable, and ethical systems that inspire confi-
dence in both users and stakeholders by incorporating these principles into all phases
of the AI lifecycle.
The three elements depicted in Figure 8 - algorithmic ethics, data ethics, and prac-
tice ethics - come together to form trustworthy AI. These elements offer an abstraction

Fig. 8 The three primary elements of a reliable AI. [31]

level for ethical issues that is data-centric [31]. Numerous unresolved problems arise
when trying to address ethical concerns with AI systems. In the research paper [31],
the authors describe the following requirements for trustworthy AI:
1) Human agency and oversight: AI systems must uphold the values of human free-
dom. AI systems must promote user agency, uphold fundamental rights, and
enable human oversight to realize a democratic and equal society.
2) Security and technical robustness: Security and technical robustness lead to the
prevention of harm. For an AI system to function reliably while minimizing
harm, risks must be taken into account during development. This must cover any
changes in the working environment as well as possible attacks on the system by
adversaries.
3) Data protection and data governance: As a fundamental right that is particularly
affected by the extensive data collection required by AI systems, data protection
is also closely related to preventing harm. Preventing privacy harm also requires
data governance that addresses the quality and integrity of the data used, its
relevance, access protocols, and the ability to process it in a privacy-preserving
way.
4) Transparency: Explainability and transparency are closely related requirements.
The aim is to make all relevant aspects of an AI system transparent, includ-
ing data, technology, and business models. In the era of ubiquitous computing,
transparency is essential to support large-scale data collection and its benefits to
consumers.
5) Diversity, non-discrimination, and fairness: Achieving trustworthy AI requires
enabling inclusivity and diversity throughout AI systems. This is important not
only to consider and involve all affected parties but also to ensure fair access and
treatment. Fairness and this need go hand in hand.

6) Social and environmental welfare: In the spirit of justice and harm prevention,
the environment and the broader community should be considered as stakehold-
ers. Research into AI solutions to address global challenges should be promoted,
making AI systems more environmentally friendly and sustainable. AI systems
need to benefit everyone, including those who will come after us.
7) Accountability: The idea of fairness and the need for accountability go hand in
hand: we need mechanisms to ensure responsibility and accountability for AI
systems and the results they produce, both during and after their development,
application, and deployment.

1.7 An Overview of Necessities for Reliable AI


The conditions for reliable AI are still unclear and are addressed inconsistently by
many institutions and groups, despite contentious social debates over the topic. The
principles of Fairness, Accountability, and Transparency in Machine Learning (FAT-ML)
include accountability, explainability, verifiability, and fairness at
a global level [32]. Among the numerous requirements under review, explainability, fairness,
privacy, and robustness will all be covered in this study (Table 1).

Table 1 Conditions necessary for trustworthy artificial intelligence (AI)

Concept | Description
Explainability | The process by which the AI model generates its output can be presented so that users can understand it.
Fairness | The AI model's output can be shown to be independent of specific protected variables.
Privacy | Problems with personal data that might arise while the AI is being developed can be prevented.
Robustness | The AI model can defend against outside attacks while continuing to operate correctly.

2 Trustworthy XAI Vs AI
A paradigm change in the field of artificial intelligence (AI) has been brought about
with the introduction of Trustworthy Explainable AI (XAI). Oftentimes, conventional
AI systems operate as opaque black boxes, making it challenging for users to com-
prehend the decision-making process. On the other hand, Trustworthy XAI tackles
significant concerns surrounding the adoption of AI by focusing on accountability,
interpretability, and transparency. Trustworthy XAI seeks to build consumers' confidence
by providing clear justifications for its choices. This transparency allows users to
evaluate the fairness and dependability of AI-driven results. While typical AI systems
are capable of producing precise forecasts or suggestions, they do not have the openness
required to establish credibility.
The way that reliable XAI and traditional AI make decisions is what sets them
apart from one another. Although AI systems have the potential to produce precise

forecasts or suggestions, trustworthy XAI highlights the necessity of providing justifi-
cations that clarify the processes that lead to these results. Users may evaluate the
fairness and dependability of AI-driven outcomes thanks to the clear justifications
that trustworthy XAI systems give for their decisions.
Furthermore, trustworthy XAI takes into account more ethical factors than just
explanations. Incorporating the Fairness, Accountability, and Transparency (FAT)
principles into AI development procedures, guarantees that AI systems abide by moral
and legal requirements. Trustworthy XAI seeks to reduce biases, discrimination, and
other possible negative effects of AI technology by placing a high priority on ethical
norms. An approach to AI development known as trustworthy AI places a high value
on user safety and openness. Since no model is flawless, trustworthy AI developers
take care to explain to clients and the wider public how the technology was developed,
its intended applications, and its limitations.

Table 2 Seven Requirements to Meet in Order to Develop Reliable AI

Principles | Explanation | Rights | GDPR Ref
Human Authority and Supervision | Artificial intelligence technology ought to uphold human agency and basic rights, instead of limiting or impeding human autonomy. | The right to get human assistance | Recital 71, Art 22
Robustness and Safety | Systems must be dependable, safe, robust enough to tolerate mistakes or inconsistencies, and capable of deviating from a totally automated decision. | (not specified) | Art 22
Data Governance and Privacy | Individuals should be in total control of the information that is about them, and information about them should not be used against them. | Notification and information access rights regarding the logic used in automated processes | Art 13, 14, and 15
Transparency | Systems using artificial intelligence ought to be transparent and traceable. | The right to get clarification | Recital 71
Diversity and Fairness | AI systems have to provide accessibility and take into account the whole spectrum of human capacities, requirements, and standards. | Right to not have decisions made only by machines | Art 22
Environmental and Social Well-Being | AI should be utilized to promote social change, accountability, and environmental sustainability. | Accurate knowledge regarding the importance and possible consequences of making decisions exclusively through automation | Art 13, 14, and 15
Accountability | Establishing procedures to guarantee that AI systems and their outcomes are held accountable is essential. | Right to be informed when decisions are made only by machines | Art 13, 14

3 Applications of Trustworthy XAI
Trustworthy Explainable Artificial Intelligence (XAI) has numerous uses in sectors where
accountability, interpretability, and transparency are essential. XAI can provide an
explanation for a diagnosis or therapy recommendation in medical diagnosis and rec-
ommendation systems. Financial institutions can employ XAI for risk assessment,
fraud detection, and credit scoring. XAI can help attorneys with contract analysis,
lawsuit prediction, and legal research. In autonomous vehicles, XAI plays a significant
role in providing context for the decisions made by the AI systems, particularly in
high-stakes scenarios such as accidents or unanticipated roadside incidents. XAI can be
applied to process optimization, predictive maintenance, and quality control in manu-
facturing settings. By offering justifications for automated responses or suggestions in
chatbots and virtual assistants, XAI can improve customer service. By providing an
explanation for the recommendations and assessments made by adaptive learning sys-
tems, XAI can help with individualized learning. By providing an explanation for the
recommendations and assessments made by adaptive learning systems, XAI can help
with individualized learning. We shall concentrate on a few particular applications in
this section and go into detail about them.

3.1 Application of Trustworthy XAI in Medical Science


The field of artificial intelligence (AI) is rapidly growing on a global scale. The poten-
tial uses of artificial intelligence in healthcare are a hot topic for research [33]. There
are many opportunities to use AI technology in the healthcare sector, where people’s
lives and well-being are in danger, because of its essential relevance and the enormous
quantity of digital medical data that have been gathered [34]. Artificial intelligence
(AI) has made it possible to accomplish tasks quickly that were previously unfeasible
for traditional technologies. Trustworthy AI has become a major concern. Since inci-
dents involving AI-powered chatbots such as Tay and Iruda [35], there has been an
increase in interest in the topic of whether an AI’s judgment and decision-making
system is reliable. The credibility of AI in the medical and healthcare areas requires
more investigation. Clinical decision support systems (CDSS) in the medical field use
AI technology to aid with important medical duties such as diagnosis and therapy
planning [36]. Misuse can have serious consequences in areas where lives are at stake,
even if the scope of use is limited to assisting healthcare practitioners. False alarms,
for example, which occur often in scenarios involving urgent patients, may exhaust
medical personnel.
The study [37] adds significantly to the field of medical skin lesion diagnostics in a
number of ways. First, it adapts an existing explainable artificial intelligence (XAI)
technique to increase user confidence and trust in AI decision-making systems. This
adaptation involves explaining an AI model that is skilled at differentiating between
different kinds of skin lesions. Synthetic exemplar and counter-exemplar
images are used to create explanations that illustrate the important characteristics
that influence classification choices. This research [37] is based on training a deep learn-
ing classifier with the ISIC 2019 dataset using the ResNet architecture. This enables

practitioners to use the explanations offered to reason effectively. All things consid-
ered, the study’s original contributions are found in the way it refined and assessed
the XAI technique in an actual medical setting, examined the latent space and car-
ried out an extensive user study to gauge the efficacy of the explanations, especially
among subject matter experts.
This research paper [38] addresses the challenge of recognizing brain tumors in
MRI images by merging two powerful algorithms: fuzzy C-means (the FCM method)
and Artificial Neural Network (ANN). The authors want to increase the segmentation
process’s interpretability as well as the accuracy of tumor identification by merging
these techniques. Their main objective is to improve medical decision-support system
tools so that physicians can diagnose brain tumors more accurately. This approach
has two main benefits: first, it improves the ability to identify brain tumors in medical
imaging with greater accuracy, which is important for early diagnosis and treatment.
Second, the researchers make the decisions made by their models more transparent and
intelligible to patients and medical experts by integrating explainable AI principles
into the segmentation process. In the end, this improved interpretability raises the
general level of reliability and acceptance of AI-driven systems for medical image
segmentation in clinical settings.
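For readers unfamiliar with the fuzzy C-means step mentioned above, the following is a rough, self-contained sketch of the algorithm on synthetic one-dimensional intensities (an illustration only, not the authors' implementation or data from [38]):

import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Cluster 1-D intensities x into c fuzzy clusters; returns centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships of each point sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)    # fuzzy-weighted cluster centers
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-12
        new_u = 1.0 / dist ** (2.0 / (m - 1.0))
        new_u /= new_u.sum(axis=0)           # standard FCM membership update
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers, u

# Toy usage on synthetic "intensities": three clusters standing in for tissue classes.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.1, 0.02, 500),
                         rng.normal(0.5, 0.05, 500),
                         rng.normal(0.9, 0.03, 100)])
centers, memberships = fuzzy_c_means(pixels, c=3)
labels = memberships.argmax(axis=0)          # hard labels derived from soft memberships
print("Estimated cluster centers:", np.sort(centers).round(3))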
In another research [39], the field of computational pathology—which uses machine
learning and artificial intelligence (AI) to diagnose whole slide images (WSIs)—is
discussed. Because artificial intelligence is opaque, there are doubts regarding its reli-
ability despite the technology’s potential to improve efficiency and accuracy. The
paper suggests employing explainable AI (xAI) techniques, which can shed light on
the choices made by AI algorithms, to allay these worries. Computational pathol-
ogy systems become more clear and dependable with the addition of xAI, especially
when it comes to crucial activities like pathology diagnosis. Additionally, it presents
HistoMapr-Breast, a software program with xAI capabilities intended for breast core
biopsies.
A recent study [40] discusses how crucial it is to guarantee the correctness
and resilience of AI-based systems in the healthcare industry, especially with regard
to their interpretability and defense against adversarial attacks. As AI systems are used in
medical contexts more frequently, it is important to confirm that the predictions they
generate are based on accurate features. Numerous model interpretability and
explainability techniques have been proposed in an effort to address this. This work
shows that even with robust training, adversarial attacks can affect the explainability
of a model. The authors also introduce two attack classifiers: one to determine the
type of attack and another to differentiate between benign and adversarial inputs.
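To make the notion of an adversarial perturbation concrete, the sketch below shows the classic fast gradient sign method (FGSM) in PyTorch; this is a generic illustration with a toy stand-in model, not the attacks, defenses, or classifiers studied in [40]:

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.03):
    """Return an adversarially perturbed copy of x that increases the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        # One step in the sign of the gradient, then clamp back to a valid image range.
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

# Toy usage with a stand-in classifier on random "images" (illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
images = torch.rand(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))
adversarial = fgsm_attack(model, images, labels)
print("Max perturbation applied:", (adversarial - images).abs().max().item())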
This research paper [41] studies the rising area of explainable machine learning in
cardiology. It addresses challenges with the interpretability of complicated prediction
models and their ramifications for key healthcare choices. This study delves into the
underlying principles and tactics of explainable machine learning, providing cardiol-
ogists with a greater grasp of the approach’s strengths and limitations. The research
intends to help decision-making processes in clinical settings by providing a rule of
thumb for deciding between interpretable and black-box models. This will ultimately
improve patient outcomes while maintaining accountability and transparency in model

Fig. 9 A random forest-based model for heart disease prediction is described, using both local and
global decision trees. Because decision trees are evaluated from top to bottom, the global diagram
shows that the model begins by deciding if patients’ thallium stress test results are normal. The
model investigates the patient’s ST depression if the thallium stress test discloses an issue, and so on.
The local graphic depicts the path a single patient took down the tree, showing the reasoning behind an
individual prediction. The patient was less than 54.5 years old, had a maximum heart rate above the
cutoff shown in the tree, and had a normal thallium stress test result, indicating a very low chance of cardiac disease [41].

predictions. Figure 9 was created by training a single new decision tree based on the
random forest model predictions. The global tree diagram depicts the overall opera-
tion of the random forest. Individual predictions can then be investigated by tracking
the patient’s progress through the global tree. This type of explanation has the advan-
tage of making it easy to understand both the overall operation of the model and the
reasoning behind each particular forecast. Decision trees are ideal for disciplines such
as cardiology because they provide rule-based reasoning similar to clinical decision
criteria.
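A minimal sketch of this kind of global surrogate explanation is shown below; the synthetic dataset and hyperparameters are stand-ins for illustration, not the cardiology model from [41]. The key detail is that the shallow tree is fitted to the forest's predictions rather than to the original labels, so it approximates what the black box does.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative global-surrogate sketch: approximate a random forest with one shallow tree.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Fit the surrogate on the forest's predictions, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, forest.predict(X))

fidelity = accuracy_score(forest.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to the forest: {fidelity:.2%}")
print(export_text(surrogate))  # readable if-then rules, analogous to the global tree in Figure 9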
Figure 10 depicts the LIME explanations for the heart failure model's two local
predictions. The authors demonstrate how these predictions are integrated as a clinical
decision support tool in Epic, an electronic health record created with doctors in mind
(Epic Systems Corporation, Verona, Wisconsin, USA). This type of explainability
approach has the advantage of providing detailed explanations for the clinical factors
that impact each prediction. It is noteworthy that this kind of explanation can be
incorporated into an EHR, as this could enhance a black-box model’s actionability
by allowing forecasts and clear explanations to be effortlessly integrated into clinical
workflow.
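For orientation, a typical LIME tabular explanation can be produced along the following lines (assuming the open-source lime package; the model, data, and feature names below are hypothetical stand-ins, not the Epic-integrated heart failure model from [41]):

from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative LIME sketch for one tabular prediction (hypothetical data and names).
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
feature_names = [f"clinical_factor_{i}" for i in range(6)]   # hypothetical feature names
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["low risk", "high risk"], mode="classification")

# Explain a single "patient": which factors pushed the predicted risk up or down.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature:30s} {weight:+.3f}")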
Reliable medical AI implementation and widespread usage need substantial
research and social consensus on prerequisites such as explainability, fairness, privacy
protection, and robustness [32]. Optimized requirements and standards must be sat-
isfied in any therapeutic environment where AI is used, and these requirements and
standards need to be updated often. Furthermore, laws must be put in place specifying

Fig. 10 Heart disease prediction explanation produced using Local Interpretable Model-Agnostic
Explanations (LIME). This illustration shows how clinical decision assistance can be integrated into
an Epic electronic health record by means of a local explanation utilizing the LIME algorithm. To
help clinicians identify patients who are likely to be at a high risk of heart disease, probabilities are
color-coded. To improve the predictability and actionability of the results for doctors, the clinical
factors that are most significant to the prediction are shown on the right [41].

who is in charge in the event of a medical AI-related mishap or accident—designers,


researchers, medical personnel, or patients [42].

3.2 Explainability and Interpretability of Autonomous Systems


Explainability and interpretability in the context of autonomous systems refer to the
ability to understand and make sense of the systems’ decisions and behaviors. Explain-
ability refers to an autonomous system’s ability to provide clear arguments for its
decisions and behaviors [43]. It is critical for increasing acceptance and confidence in
AI systems, particularly in areas such as banking, healthcare, and autonomous vehicles.
While explainability and interpretability are closely connected, interpretability places
more emphasis on the capacity to comprehend the internal workings and procedures
of the autonomous system [44]. An interpretable system provides users with insight
into the elements and criteria considered while making decisions, allowing them to
comprehend how the system came to its findings.
The research article [18] focuses on trust and dependability
in autonomous systems. Autonomous systems have the potential for system opera-
tion, rapid information dissemination, massive data processing, working in hazardous
environments, operating with greater resilience and tenacity than humans, and even
astronomical examination [45], [46]. Following years of research and development,
today’s automated technologies represent the peak of progress in computer recognition,
responsive systems, user-friendly interface design, and sensing automation. According

Fig. 11 An automated vehicle that provides a valid and understandable rationale for the decision it
made at that particular instant, acting as the archetypal example of XAI in automated driving [18].

to [43], the global market for automotive intelligent hardware, operations, and innova-
tion is expected to rise from $1.25 billion in 2017 to $28.5 billion by 2025. According
to Intel’s research of the predicted advantages of autonomous cars, employing these
advances on public roads would decrease commute time by 250 million hours annu-
ally and save over 500,000 lives in the United States alone between 2035 and 2045
[43]. Modern automobiles employ artificial intelligence (AI) for several tasks, including
intelligent cruise control, automatic driving and parking, and blind-spot identification
(Figure 11).
The authors of [18] also describe the challenges of autonomous systems: people sometimes
tend to be overly excited about the potential of new ideas and ignore, or at least appear
to be unaware of, the potential drawbacks of cutting-edge developments. Even in the
early stages of robotics and autonomous system implementation, humanity preferred
to put up with faulty goods and services, but they have gradually come to understand
the importance of trustworthy and dependable autonomous systems [? ]. Numerous
examples have demonstrated how operators’ use of automation is greatly impacted by
trustworthiness.
As artificial intelligence (AI) has become prevalent in autonomous vehicle (AV)
operations, user trust has been identified as a major issue that is essential to the
success of these operations. Explainable artificial intelligence (XAI), which calls for
the AI system to give the user explanations for every decision it makes, is a viable
approach to fostering user trust for such integrated AI-based driving systems [47]. This
work develops explainable Deep Learning (DL) models to improve trustworthiness in
autonomous driving systems, driven by the need to improve user trust and the poten-
tial of innovative XAI technology in addressing such requirements. The main concept
of this [47] research is to frame the decision-making process of autonomous vehicles
(AVs) as an image-captioning task, generating textual descriptions of driving scenarios
to serve as understandable explanations for humans. The proposed multi-modal deep
learning architecture (shown in Figure 12), based on Transformers, effectively models
the correlation between images and language, generating meaningful descriptions and

Fig. 12 The proposed Transformer-based multi-modal deep learning architecture [47]

driving actions. Its contributions lie in formulating the traditional AV decision process
for explainability, developing a fully Transformer-based model for generating descrip-
tions and actions, and demonstrating superior performance over baseline models. The
outcome is a model that enhances user trust, provides insights for AV developers, and
offers superior interpretability through its attention mechanisms and end-to-end goal
induction.
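As a loose illustration of the idea of turning a driving scene into a natural-language description, a generic, publicly available image-captioning pipeline can serve as a stand-in (this assumes the Hugging Face transformers library and the nlpconnect/vit-gpt2-image-captioning checkpoint, and is not the architecture or training setup of [47]):

from transformers import pipeline

# Generic stand-in for "explanation as scene description" (assumed library and
# public checkpoint; not the model, data, or training pipeline from [47]).
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# In practice a dashboard-camera frame would be passed; any local image path works here.
caption = captioner("dashcam_frame.jpg")[0]["generated_text"]
print("Scene description:", caption)
# A trustworthy AV stack would pair such a description with the chosen action,
# e.g. "pedestrian crossing ahead -> slowing down", as Figure 11 illustrates.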
This research [48] aims to investigate the integration of explainable artificial intel-
ligence (XAI) into autonomous vehicular systems to improve transparency and human
trust. It delves into the functioning of multiple inner vehicle modules, emphasizing the
importance of understanding the vehicle’s decision-making processes for user credibil-
ity and reliability. The main contribution lies in introducing XAI to the domain of
autonomous vehicles, showcasing its role in fostering trust, and highlighting advance-
ments through comparative analysis. The output comprises the creation of visual
explanatory techniques and an intrusion detection classifier, which show consider-
able advances over previous work in terms of transparency and safety in autonomous
transportation systems.

3.3 Applications of XAI for Operations in the Industry


The process industry is a subset of businesses that manufacture items from raw mate-
rials (not components) using formulae or recipes. Given the magnitude and dynamic
nature of operations in the process sector, it becomes evident that the next great
step ahead will be the capacity for people and AI systems to collaborate to ensure
production stability and dependability [49]. AI systems must successfully inform the
individuals who share the ecosystem about their objectives, intentions, and findings as
the first step toward collaboration. In the future, people will work ”with” automation
rather than ”around” it, thanks in part to the systematic approach to XAI.
This research [50] focuses on Explainable Artificial Intelligence (XAI) applications
in the process industry. The research argues that current AI models are not transparent
enough for process industry applications, and highlights the need for XAI models
that can be understood by human experts. The main contribution is outlining the challenges and research needs for XAI in the process industry. The outcome is to develop XAI models that are safe, reliable, and meet the needs of human users in the process industry.

Table 3 Examples of AI applications in process industry operations, including pertinent data, users,
and procedures. (RNN = Recurrent Neural Network; KNN = K-Nearest Neighbor; ANN = Artificial
Neural Network; SVM = Support Vector Machine; SVR = Support Vector Regression; RF = Random
Forest; IF = Isolation Forest) [50]

Reference | Relevant Data | End Users | Application | AI Methods
[51], [52], [53] | Process signals | Operator, Process engineer, Automation engineer | Process monitoring | RNN, KNN
[54], [55], [56] | Process signals, Alarms, Vibration | Process engineer, Automation engineer, Operator, Maintenance engineer | Fault diagnosis | ANN, SVM, Bayes classifier
[57], [58], [59] | Process signals, Acoustic signals | Operator | Event prediction | ANN
[60], [61], [62] | Process signals | Operator | Soft sensors | SVR, ANN, RF
[63], [64], [65] | Vibration, Process signals | Operator, Maintenance engineer, Scheduler | Predictive maintenance | RNN, IF

Table 3 shows examples of AI applied to operational activities in the process indus-
try. This table should give an idea of the breadth of use cases, users, relevant data
sources, and applicable AI methodologies; however, it is not intended to be a full or
systematic examination.

4 Future of Trustworthy XAI


The precise position of each XAI domain and how they relate to the human user
are shown in Figure 13. The majority of AI system explanations that are given are
usually static and only contain one message [66]. Understanding cannot be attained
by explanations alone [67]. Because most existing XAI libraries lack user involvement
and customization of explanations, users should be able to explore the system using
interactive explanations to gain a better understanding of it. This is a promising
research direction for extending the XAI field [67] and [66]. To improve human-machine
cooperation and move beyond static explanations, a number of efforts have also been
proposed.
Explainable Artificial Intelligence (XAI) has great promise for redefining the rela-
tionship between humans and AI systems as it stands at the nexus of technological
innovation and societal integration. As AI technologies advance, it is more important
than ever to ensure accountability and transparency. Within this framework, XAI
becomes a crucial facilitator, entrusted with shedding light on the murky inner work-
ings of AI models and cultivating user confidence. A wide range of breakthroughs are
anticipated in XAI, from heightened model transparency and human-centric design
principles to regulatory compliance requirements and the rise of hybrid AI systems.

Fig. 13 Assessing the user’s interaction with XAI [27].

XAI approaches will place a high value on user-centric design, providing explana-
tions that are both actionable and understandable. This will foster acceptance and
confidence in AI systems.
Moreover, it is anticipated that regulatory frameworks would require the incorpo-
ration of XAI in essential applications, guaranteeing compliance with accountability
and transparency norms. Future XAI systems will be distinguished by their contex-
tual sensitivity and interactive explanations, which will enable users to interact with
AI decisions in real time and adjust to a variety of situations. To guarantee that AI
systems follow ethical standards and social values, as well as to democratize access to
XAI technologies, efforts must be made to enhance digital literacy and address ethical
challenges. Fundamentally, XAI’s success is predicated on its capacity to bridge the
communication gap between AI systems and human users, fostering mutual respect,
trust, and collaboration in an increasingly AI-dependent world.
This study [68] offers a thorough analysis of Explainable Artificial Intelligence
(XAI), focusing on two primary areas of inquiry: general XAI difficulties and research
directions, as well as ML life cycle phases-based challenges and research directions.
In order to shed light on the significance of formalism, customization of explanations,
encouraging reliable AI, interdisciplinary partnerships, interpretability-performance
trade-offs, and other topics, the study synthesizes important points from the body of
existing literature. The primary contribution is the methodical synthesis and analysis
of the body of literature to identify important problems and future directions for XAI
research [68]. The research offers a thorough review of the current state of XAI research
and provides insightful information for future studies and breakthroughs in the area
by structuring the debate around general issues and ML life cycle phases. The primary
finding of the study is the identification and clarification of 39 important points that
cover a range of issues and potential avenues for future XAI research.

Fig. 14 Issues and Future Research Paths for XAI throughout its Deployment Stage [68].

The importance of conveying data quality, utilizing human expertise in model development, applying


rule extraction for interpretability, addressing security concerns, investigating XAI
for reinforcement learning and safety, and taking into account the implications of
privacy rights in explanation are just a few of the many topics covered by these points.
Furthermore, the paper indicates directions for further research and application by
highlighting the potential contributions that XAI may make to a number of fields,
including digital forensics, IoT, and 5G.
The deployment phase begins when machine learning solutions are put into use and continues
until we stop using them, and possibly even beyond that. Figure 14 illustrates the XAI
research directions and challenges that were explored for this phase.

5 Conclusions
Explainable Artificial Intelligence (XAI) is gaining popularity in a range of fields
due to its central role in addressing critical issues connected to AI adoption. As AI
systems become more integrated into society, transparency and interpretability become
increasingly important. By offering tools to clarify how AI models make decisions, XAI
helps users develop a sense of confidence and comprehension. XAI’s primary objective
is to make AI models clear and intelligible. With the help of XAI, the general public
will be able to peer inside the black box and comprehend the aspects that affect the
AI’s decision-making process. The paper discusses the essential details of XAI and
offers a comprehensive overview for a solid understanding. Furthermore, this article
discusses in detail the three main application fields of XAI. Lastly, the authors attempt
to outline the difficulties in applying XAI and suggest potential future paths.
Acknowledgements. The authors would like to express their sincere gratitude to
everyone who encourages and appreciates their scientific work.

Declarations
Not applicable

References
[1] Stephens, E.: The mechanical turk: A short history of ‘artificial artificial
intelligence’. Cultural Studies 37(1), 65–87 (2023)

[2] Kaul, V., Enslin, S., Gross, S.A.: History of artificial intelligence in medicine.
Gastrointest Endosc 92(4), 807–812 (2020) https://doi.org/10.1016/j.gie.2020.06.040. Epub 2020 Jun 18

[3] Roser, M.: The brief history of artificial intelligence: The world has changed fast–
what might be next? Our World in Data (2023)

[4] Wang, L., Liu, Z., Liu, A., Tao, F.: Artificial intelligence in product lifecycle
management. The International Journal of Advanced Manufacturing Technology
114, 771–796 (2021)

[5] Shamshiri, A., Ryu, K.R., Park, J.Y.: Text mining and natural language
processing in construction. Automation in Construction 158, 105200 (2024)

[6] Khang, A., Abdullayev, V., Litvinova, E., Chumachenko, S., Alyar, A.V., Anh,
P.: Application of computer vision (cv) in the healthcare ecosystem. In: Computer
Vision and AI-Integrated IoT Technologies in the Medical Ecosystem, pp. 1–16.
CRC Press, ??? (2024)

[7] Vallès-Peris, N., Domènech, M.: Caring in the in-between: a proposal to intro-
duce responsible ai and robotics to healthcare. AI & SOCIETY 38(4), 1685–1695
(2023)

[8] Biswas, A., Islam, M.S.: Mri brain tumor classification technique using fuzzy
c-means clustering and artificial neural network. In: International Conference
on Artificial Intelligence for Smart Community: AISC 2020, 17–18 December,
Universiti Teknologi Petronas, Malaysia, pp. 1005–1012 (2022). Springer

[9] Biswas, A., Abdullah Al, N.M., Ali, M.S., Hossain, I., Ullah, M.A., Talukder,
S.: Active learning on medical image. In: Data Driven Approaches on Medical
Imaging, pp. 51–67. Springer, ??? (2023)

[10] Zohuri, B., Moghaddam, M.: From business intelligence to artificial intelligence.
Journal of Material Sciences & Manufacturing Research. SRC/JMSMR/102 Page
3 (2020)

[11] Biswas, A., Islam, M.S.: A hybrid deep cnn-svm approach for brain tumor clas-
sification. Journal of Information Systems Engineering & Business Intelligence

9(1) (2023)

[12] Biswas, A., Islam, M.: Ann-based brain tumor classification: Performance analysis
using k-means and fcm clustering with various training functions. In: Explainable
Artificial Intelligence for Smart Cities, pp. 83–102. CRC Press, ??? (2021)

[13] Biswas, A., Md Abdullah Al, N., Imran, A., Sejuty, A.T., Fairooz, F., Puppala,
S., Talukder, S.: Generative adversarial networks for data augmentation. In: Data
Driven Approaches on Medical Imaging, pp. 159–177. Springer, ??? (2023)

[14] Gong, T., Zhu, L., Yu, F.R., Tang, T.: Edge intelligence in intelligent transporta-
tion systems: A survey. IEEE Transactions on Intelligent Transportation Systems
(2023)

[15] Biswas, A., Islam, M.S.: An efficient cnn model for automated digital handwrit-
ten digit classification. Journal of Information Systems Engineering and Business
Intelligence 7(1), 42–55 (2021)

[16] Malik, A.: Explainable Intelligence Part 1 - XAI, the Third Wave Of AI. https://www.linkedin.com/pulse/explainable-intelligence-part-1-xai-third-wave-ai-ajay-malik/

[17] Schoenherr, J.R., Abbas, R., Michael, K., Rivas, P., Anderson, T.D.: Design-
ing ai using a human-centered approach: Explainability and accuracy toward
trustworthiness. IEEE Transactions on Technology and Society 4(1), 9–23 (2023)

[18] Chamola, V., Hassija, V., Sulthana, A.R., Ghosh, D., Dhingra, D., Sikdar, B.: A
review of trustworthy and explainable artificial intelligence (xai). IEEE Access
(2023)

[19] Guleria, P., Sood, M.: Explainable ai and machine learning: performance evalu-
ation and explainability of classifiers on educational data mining inspired career
counseling. Education and Information Technologies 28(1), 1081–1116 (2023)

[20] Mirzaei, S., Mao, H., Al-Nima, R.R.O., Woo, W.L.: Explainable ai evaluation:
A top-down approach for selecting optimal explanations for black box models.
Information 15(1), 4 (2023)

[21] Vyas, B.: Explainable ai: Assessing methods to make ai systems more transparent
and interpretable. International Journal of New Media Studies: International Peer
Reviewed Scholarly Indexed Journal 10(1), 236–242 (2023)

[22] Wang, A.Q., Karaman, B.K., Kim, H., Rosenthal, J., Saluja, R., Young, S.I.,
Sabuncu, M.R.: A framework for interpretability in machine learning for medical
imaging. IEEE Access (2024)

[23] Ghnemat, R., Alodibat, S., Abu Al-Haija, Q.: Explainable artificial intelligence (xai) for deep learning based medical imaging classification. Journal of Imaging 9(9), 177 (2023)

[24] Gohel, P., Singh, P., Mohanty, M.: Explainable ai: current status and future
directions. arXiv preprint arXiv:2107.07045 (2021)

[25] Wang, P., Ding, H.: The rationality of explanation or human capacity? under-
standing the impact of explainable artificial intelligence on human-ai trust and
decision performance. Information Processing & Management 61(4), 103732
(2024)

[26] Herm, L.-V.: Algorithmic decision-making facilities: Perception and design of explainable ai-based decision support systems. PhD thesis, Universität Würzburg (2023)

[27] Thalpage, N.: Unlocking the black box: Explainable artificial intelligence (xai) for
trust and transparency in ai systems. Journal of Digital Art & Humanities 4(1),
31–36 (2023)

[28] Balasubramaniam, N., Kauppinen, M., Rannisto, A., Hiekkanen, K., Kujala,
S.: Transparency and explainability of ai systems: From ethical guidelines to
requirements. Information and Software Technology 159, 107197 (2023)

[29] Arrieta, A.B., et al.: Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020)

[30] McLarney, E., et al.: NASA framework for the ethical use of artificial intelligence (AI) (2021)

[31] Kumar, A., Braud, T., Tarkoma, S., Hui, P.: Trustworthy ai in the age of pervasive
computing and big data. In: 2020 IEEE International Conference on Perva-
sive Computing and Communications Workshops (PerCom Workshops), pp. 1–6
(2020). https://doi.org/10.1109/PerComWorkshops48775.2020.9156127

[32] Kim, M., Sohn, H., Choi, S., Kim, S.: Requirements for trustworthy artificial
intelligence and its application in healthcare. Healthcare Informatics Research
29(4), 315 (2023)

[33] Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen,
H., Wang, Y.: Artificial intelligence in healthcare: past, present and future. Stroke
and vascular neurology 2(4) (2017)

[34] Davenport, T., Kalakota, R.: The potential for artificial intelligence in healthcare.
Future healthcare journal 6(2), 94 (2019)

[35] Tidjon, L.N., Khomh, F.: Never trust, always verify: a roadmap for trustworthy
ai? arXiv preprint arXiv:2206.11981 (2022)

[36] Jaspers, M.W., Smeulers, M., Vermeulen, H., Peute, L.W.: Effects of clinical
decision-support systems on practitioner performance and patient outcomes: a
synthesis of high-quality systematic review findings. Journal of the American
Medical Informatics Association 18(3), 327–334 (2011)

[37] Metta, C., Beretta, A., Guidotti, R., Yin, Y., Gallinari, P., Rinzivillo, S., Gian-
notti, F.: Improving trust and confidence in medical skin lesion diagnosis through
explainable deep learning. International Journal of Data Science and Analytics,
1–13 (2023)

[38] Akpan, A.G., Nkubli, F.B., Ezeano, V.N., Okwor, A.C., Ugwuja, M.C., Offiong,
U.: Xai for medical image segmentation in medical decision support systems.
Explainable Artificial Intelligence in Medical Decision Support Systems 50, 137
(2022)

[39] Tosun, A.B., Pullara, F., Becich, M.J., Taylor, D.L., Fine, J.L., Chennub-
hotla, S.C.: Explainable ai (xai) for anatomic pathology. Advances in Anatomic
Pathology 27(4), 241–250 (2020)

[40] Agrawal, N., Pendharkar, I., Shroff, J., Raghuvanshi, J., Neogi, A., Patil, S.,
Walambe, R., Kotecha, K.: A-xai: adversarial machine learning for trustable
explainability. AI and Ethics, 1–32 (2024)

[41] Petch, J., Di, S., Nelson, W.: Opening the black box: the promise and limitations
of explainable machine learning in cardiology. Canadian Journal of Cardiology
38(2), 204–213 (2022)

[42] Rajpurkar, P., Chen, E., Banerjee, O., Topol, E.J.: Ai in health and medicine.
Nature medicine 28(1), 31–38 (2022)

[43] Atakishiyev, S., Salameh, M., Yao, H., Goebel, R.: Explainable artificial intel-
ligence for autonomous driving: A comprehensive overview and field guide for
future research directions. arXiv preprint arXiv:2112.11561 (2021)

[44] Alexandrov, N.: Explainable ai decisions for human-autonomy interactions. In: 17th AIAA Aviation Technology, Integration, and Operations Conference, p. 3991 (2017)

[45] Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., Zhu, J.: Explainable ai: A
brief survey on history, research areas, approaches and challenges. In: Natural
Language Processing and Chinese Computing: 8th CCF International Conference,
NLPCC 2019, Dunhuang, China, October 9–14, 2019, Proceedings, Part II 8, pp.
563–574 (2019). Springer

[46] Yazdanpanah, V., Gerding, E., Stein, S., Dastani, M., Jonker, C.M., Norman, T.:
Responsibility research for trustworthy autonomous systems (2021)

[47] Dong, J., Chen, S., Miralinaghi, M., Chen, T., Li, P., Labi, S.: Why did the ai make
that decision? towards an explainable artificial intelligence (xai) for autonomous
driving systems. Transportation research part C: emerging technologies 156,
104358 (2023)

[48] Madhav, A.S., Tyagi, A.K.: Explainable artificial intelligence (xai): connecting
artificial decision-making and human trust in autonomous vehicles. In: Proceed-
ings of Third International Conference on Computing, Communications, and
Cyber-Security: IC4S 2021, pp. 123–136 (2022). Springer

[49] Hoffmann, M.W., Drath, R., Ganz, C.: Proposal for requirements on industrial ai
solutions. In: Machine Learning for Cyber Physical Systems: Selected Papers from
the International Conference ML4CPS 2020, pp. 63–72 (2021). Springer Berlin
Heidelberg

[50] Kotriwala, A., Klöpper, B., Dix, M., Gopalakrishnan, G., Ziobro, D., Potschka,
A.: Xai for operations in the process industry-applications, theses, and research
directions. In: AAAI Spring Symposium: Combining Machine Learning with
Knowledge Engineering, pp. 1–12 (2021)

[51] Mamandipoor, B., Majd, M., Sheikhalishahi, S., Modena, C., Osmani, V.: Mon-
itoring and detecting faults in wastewater treatment plants using deep learning.
Environmental Monitoring and Assessment 192(3), 148 (2020)

[52] Cecílio, I., Ottewill, J., Pretlove, J., Thornhill, N.: Nearest neighbors method for detecting transient disturbances in process and electromechanical systems. Journal of Process Control 24, 1382–1393 (2014)

[53] Banjanovic-Mehmedovic, L., Hajdarevic, A., Kantardzic, M., Mehmedovic, F., Dzananovic, I.: Neural network-based data-driven modelling of anomaly detection in thermal power plant. Automatika: časopis za automatiku, mjerenje, elektroniku, računarstvo i komunikacije 58, 69–79 (2017)

[54] Ruiz, D., Canton, J., Nougués, J., Espuna, A., Puigjaner, L.: On-line fault diagno-
sis system support for reactive scheduling in multipurpose batch chemical plants.
Computers & Chemical Engineering 25, 829–837 (2001)

[55] Yélamos, I., Graells, M., Puigjaner, L., Escudero, G.: Simultaneous fault diagnosis
in chemical plants using a multilabel approach. AIChE Journal 53, 2871–2884
(2007)

[56] Lucke, M., Stief, A., Chioua, M., Ottewill, J., Thornhill, N.: Fault detection and
identification combining process measurements and statistical alarms. Control
Engineering Practice 94, 104195 (2020)

[57] Dorgo, G., Pigler, P., Haragovics, M., Abonyi, J.: Learning operation strategies from alarm management systems by temporal pattern mining and deep learning. Computer Aided Chemical Engineering 43, 1003–1008 (2018)

[58] Giuliani, M., Camarda, G., Montini, M., Cadei, L., Bianco, A., Shokry, A.,
Baraldi, P., Zio, E., et al.: Flaring events prediction and prevention through
advanced big data analytics and machine learning algorithms. In: Offshore
Mediterranean Conference and Exhibition (2019). Offshore Mediterranean Con-
ference

[59] Carter, A., Briens, L.: An application of deep learning to detect process upset dur-
ing pharmaceutical manufacturing using passive acoustic emissions. International
journal of pharmaceutics 552, 235–240 (2018)

[60] Desai, K., Badhe, Y., Tambe, S., Kulkarni, B.: Soft-sensor development for
fed-batch bioreactors using support vector regression. Biochemical Engineering
Journal 27, 225–239 (2006)

[61] Shang, C., Yang, F., Huang, D., Lyu, W.: Data-driven soft sensor development
based on deep learning technique. Journal of Process Control 24, 223–233 (2014)

[62] Napier, L., Aldrich, C.: An isamill™ soft sensor based on random forests and
principal component analysis. IFAC-PapersOnLine 50, 1175–1180 (2017)

[63] Amihai, I., Gitzel, R., Kotriwala, A., Pareschi, D., Subbiah, S., Sosale, G.: An
industrial case study using vibration data and machine learning to predict asset
health. In: 2018 IEEE 20th Conference on Business Informatics (CBI), vol. 1, pp.
178–185 (2018). IEEE

[64] Amihai, I., Chioua, M., Gitzel, R., Kotriwala, A., Pareschi, D., Sosale, G.,
Subbiah, S.: Modeling machine health using gated recurrent units with entity
embeddings and k-means clustering. In: 2018 IEEE 16th International Conference
on Industrial Informatics (INDIN), pp. 212–217 (2018). IEEE

[65] Kolokas, N., Vafeiadis, T., Ioannidis, D., Tzovaras, D.: Fault prognostics in
industrial domains using unsupervised machine learning classifiers. Simulation
Modelling Practice and Theory, 102109 (2020)

[66] Abdul, A., Vermeulen, J., Wang, D., Lim, B.-Y., Kankanhalli, M.: Trends and
trajectories for explainable, accountable and intelligible systems: An hci research
agenda. In: Proceedings of the 2018 CHI Conference on Human Factors in Com-
puting Systems, New York, NY, USA, pp. 1–18 (2018). Association for Computing
Machinery

[67] Adadi, A., Berrada, M.: Peeking inside the black-box: A survey on explainable artificial intelligence (xai). IEEE Access 6, 52138–52160 (2018). https://doi.org/10.1109/ACCESS.2018.2870052

[68] Saeed, W., Omlin, C.: Explainable ai (xai): A systematic meta-survey of cur-
rent challenges and future opportunities. Knowledge-Based Systems 263, 110273
(2023)
