
Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey

Rokas Gipiškis¹, Chun-Wei Tsai², Olga Kurasova¹

¹ Vilnius University, Institute of Data Science and Digital Technologies
² National Sun Yat-sen University, Department of Computer Science and Engineering

arXiv:2405.01636v1 [cs.CV] 2 May 2024

Abstract
Explainable Artificial Intelligence (XAI) has found numerous applications in computer vision. While image classification-based explainability techniques have garnered significant attention, their counterparts in semantic segmentation have been relatively neglected. Given the prevalent use of image segmentation, ranging from medical to industrial deployments, these techniques warrant a systematic look. In this paper, we present the first comprehensive survey on XAI in semantic image segmentation. This work focuses on techniques that were either specifically introduced for dense prediction tasks or were extended for them by modifying existing methods in classification. We analyze and categorize the literature based on application categories and domains, as well as the evaluation metrics and datasets used. We also propose a taxonomy for interpretable semantic segmentation, and discuss potential challenges and future research directions.
Key words: XAI, interpretable AI, interpretability, image segmentation, semantic segmentation.

1. Introduction
In the past decade, Artificial Intelligence (AI) systems have achieved impressive results, most
notably in natural language processing and computer vision. The performance of such systems
is typically measured by evaluation metrics that vary depending on the task but aim to assess
the system’s outputs. Today’s leading AI systems largely rely on deep learning (DL) models,
multi-layered neural networks that tend to exhibit increasingly complicated structures in terms
of model parameters. The growing complexity of such systems resulted in them being labeled as
“black boxes.” This highlights that the evaluation metric does not show the full picture: even if
its measurement is correct, it does not give insights into the inner workings of the model.
The field of explainable AI (XAI) encompasses different branches of methods that attempt
to give insights into a model’s inner workings, explain outputs, or make the entire system more
interpretable to end users, such as human decision-makers. There is ongoing debate regarding
XAI terminology. Concepts like interpretability, explainability, understanding, reasoning, and
trustworthiness are challenging to formalize. While some authors use “interpretable” and “ex-
plainable” interchangeably [1], others distinguish between the two [2], [3]. When the distinction

is made, it is usually to demarcate post-hoc explanations, a type of XAI techniques applied to
the already-trained model, and inherently interpretable models [2]. This way interpretability be-
comes associated with the transparency of the model itself and depends on the ease with which
one can interpret the model. For instance, a simple decision tree-based model might be consid-
ered more interpretable than a DL model composed of millions of parameters, provided that the
former is not too deep. Explainability, in contrast, is often limited to understanding the model’s
results rather than the model as a whole. While we acknowledge such distinction, throughout
this survey we will use “interpretable” and “explainable” synonymously, reserving more specific
“architecture-based” and “inherently interpretable” terms when discussing model-specific XAI
modifications. This is because few of the surveyed papers use the term interpretability in the second, narrower sense; since most work on explainable segmentation does not make this distinction, treating the terms synonymously avoids unnecessary confusion when discussing their contents. It should also be noted that
interpretability and ease of understanding vary according to the specific audience, whether it be
the general public or a more specialized group with specific training, such as radiologists.
XAI is not a new development, particularly in rule-based expert systems [4], [5] and machine
learning (ML) [6], but it has experienced unprecedented growth ever since the revived interest
[7] in neural networks. This growth correlates with the increasing interest in DL and is further
driven by 1) the need for trustworthy models due to widely expanding industrial deployments;
2) bureaucratic and top-down political emphasis on AI regulation; and 3) concerns within the
ML safety community [8] about the general trajectory of AI development in the short and long
runs. AI deployment is increasing across different sectors, and is significant both in terms of its
size and impact. According to the AI Index Report 2023 [9], the proportion of companies adopting AI more than doubled from 2017 to 2022. In 2022, the medical and healthcare sectors attracted the most investment, with a total of 6.1 billion dollars [9]. The IBM Global AI Adoption Index 2023 [10], conducted by Morning Consult on behalf of IBM, indicates that about 42% of the surveyed enterprise-scale companies (>1,000 employees) reported actively deploying AI, and an additional 40% were exploring and experimenting with AI, of which 59% reported an acceleration in their rollout or investments. Even with rapid deployment, critical high-impact sectors
have to move at a slower pace. One could expect even more healthcare-related applications and
clinical deployments if AI methods were more interpretable. To a large extent, this applies to
other industries as well. According to the same IBM report, most of the surveyed IT profession-
als (83% among companies already exploring or deploying AI) stated that it is important to their
business to explain how their AI reached the decision. Another accelerating trend is that of AI
regulation (Fig. 1). The recent survey [11] indicates that 81% of respondents (N > 6,000) expect
some form of external AI regulation, with 57-66% of respondents reporting that they would be
more willing to use AI systems if trustworthiness-assuring mechanisms were in place. AI trust-
worthiness and transparency are further emphasized in regulatory discussions, ranging from the
EU’s AI Act [12] to AI executive order [13] in the United States.
XAI in image segmentation is a relatively new field, with the first articles on the subject
appearing in the late 2010s [14], [15], [16]. Since then, the topic has gained more attention.
Semantic image segmentation is an essential task in computer vision, with applications ranging
from autonomous driving [17] to medical image analysis [18]. Its study is further motivated
by the rapidly growing remote sensing and video data. Increasing deployments in medical AI
are also contributing to the need for explainable segmentation. Both radiologists and surgeons
need to know accurate boundaries for the anatomical structures of interest. Precise and reliable
segmentation is required when working with most pathologies in different imaging modalities,
ranging from magnetic resonance imaging (MRI) to computed tomography (CT).
Figure 1: Publications with “explainable AI,” “interpretable AI,” and “AI regulation” as keywords. Publication data gathered from app.dimensions.ai

Segmentation is commonly viewed as a dense prediction task where classification is performed on a pixel level. However, most XAI literature so far has focused on image classification
tasks. Nonetheless, a growing number of works address the issue of interpreting semantic seg-
mentation results by either extending classification-based methods or by proposing their own
modifications. Two Ph.D. dissertations [19], [20] on XAI in image segmentation have been writ-
ten in the past year. Image segmentation methods have been reviewed in the medical domain [21]; however, that review focused only on post-hoc techniques. In this work, we provide the
first comprehensive survey in the area of explainable semantic segmentation, encompassing dif-
ferent application domains and all XAI method groups. Throughout the paper, when referring
to the date of the publication, we indicate the date of its first online appearance, including the
preprint version where applicable.
This survey offers the following contributions:

• A comprehensive review of up-to-date publications, covering both their theoretical contributions and practical applications.

• A taxonomy to distinguish various interpretability techniques based on five subgroups.

• Analysis of the literature based on evaluation metrics, datasets used, and application domains.

• A detailed discussion of open issues and identification of future research directions.

The paper is structured as follows. Section 2 begins with the general scope of the problem
and then provides the background for the fields of XAI and semantic image segmentation. Sec-
tion 3 reviews the most important taxonomical dichotomies in classification and introduces a
method-centered taxonomy for XAI in image segmentation. Section 4 presents illustrative ex-
amples of each method group, includes formalizations for gradient-based and perturbation-based
methods, and outlines XAI evaluation metrics. Short summaries of each method with their main
contributions, grouped by application area, are provided in Section 5. Lastly, Section 6 points
out future research directions, while Section 7 draws the main conclusions from this study.

2. Background

2.1. Development of the Field of XAI in Computer Vision


There is a great variety of XAI methods in classification, with new techniques being pro-
posed weekly. Typically, these methods employ some form of feature attribution, indicating the
model’s sensitivity or insensitivity to various features, such as certain pixel configurations in
the input space. The most popular explainable classification methods, still influential in today’s
DL models, fall into gradient-based or perturbation-based categories. Here, we only highlight
key developments, particularly focusing on the methods that have influenced interpretable image
segmentation. For an accessible introduction to and treatment of explainable classification, we
refer the reader to [22]. For a more detailed survey on these topics, [23] is also recommended.
The first gradient-based explainability techniques for classification in convolutional neural
networks (CNNs) are proposed in [24]. The initial method generates artificial images that max-
imize the score for the selected class of interest. The second method, also referred to as vanilla
gradient, produces a saliency map that highlights important regions in the input space. This is
based on the gradient for the class of interest with respect to the input. The authors also observe
that this method can be used for weakly supervised segmentation. This highlights the possibility of using XAI tools instrumentally, not just for the sake of explainability. In [25], the influential
Grad-CAM technique is introduced. Its calculation is based on the gradient flow into the last
convolutional layer. Since Grad-CAM is calculated for intermediate model activations, the re-
sulting explanation needs to be upsampled. This upsampling process might negatively impact the
quality of pixel-level explanations [26]. Similar to [24], the Grad-CAM technique also demon-
strates the potential for instrumental use in weakly supervised localization.
Another area of explainable classification methods encompasses occlusion or perturbation-
based techniques, such as occlusion sensitivity measurements [27], LIME [28], SHAP [29], and
RISE [30]. In [27], occlusion sensitivity is introduced. It proposes systematically occluding the
input image with a smaller grey filter and measuring the effect on the model’s output. The likeli-
hood of the model classifying the image as belonging to the actual class should decrease when the
object of that class is occluded in the input space. Other noteworthy methods in explainable clas-
sification have focused on optimization. Activation maximization, previously proposed in [31], initially focused on Restricted Boltzmann Machines, a type of unsupervised model. In [24], it was specifically implemented for supervised classification models. In [32], this technique
was further popularized by demonstrating the results across different network layers. Unlike
the previously discussed XAI techniques, this type of explanation method can be described as
global because the generated image does not depend on a particular input image but rather on the
model’s internal weights.

2.2. Specifics of Semantic Segmentation


Most of the literature on interpretable computer vision focuses on classification. However,
DL-based semantic segmentation techniques have achieved significant results. Classical encoder-
decoder models such as U-Net [33] or SegNet [34] as well as their modifications, have been
deployed in various fields. Vision transformer-based segmentation architectures have also been
proposed [35]. There have even been attempts to combine these two approaches [36]. During
4
semantic segmentation, class labels are assigned to each pixel, and the output is typically the
same resolution as the input image. Modern segmentation models can be composed of millions
of parameters, making their interpretation difficult and often resulting in their description as
“black boxes.”
Interpretability in semantic image segmentation is a challenging area of study. On the one hand, it can be viewed as an extension of relatively intuitive interpretable classification, although it requires combining the relative influence of each classified pixel of interest. On the other hand, interpreting the resulting explanations is not so straightforward or intuitive. One problem with in-
terpretability methods, not limited to semantic segmentation, is that we generally lack ground
truths for the explanations. Furthermore, it is uncertain what the ideal explanation should look
like or whether one interpretable instance can be limited to a single explanation. However, in the
case of classification, we can at least have some candidates for good explanations. Based on this,
qualitative human-based studies [37] can be conducted. Conducting a similar study for semantic
segmentation is more complex, as it is less clear what constitutes good explanation candidates:
should the interpretability saliency map focus on the entire area of the class of interest or just its
boundaries? Can there be instances where the most salient features are outside the class area?
What if the segmentation area is correct, but the attributed class is not? Moreover, semantic im-
age segmentation is notorious for inter-observer variability, especially in manual delineations in
medical images. One way to demonstrate the usefulness of explainable segmentation is to detect
instances where the segmentation of one semantic class appears heavily dependent on the pres-
ence of different class pixels, whether nearby or otherwise. In [16], such a case is demonstrated
when the U-Net detects the sky primarily due to the nearby trees, which belong to the “Nature”
class. Interpretable semantic segmentation techniques prove most useful when the segmentation
is incorrect.

Figure 2: Explanation for single pixels: the selected pixels (top leftmost and centermost) are shown on the left, with their corresponding gradient-based explanations on the right.

Since we can frame the segmentation task in terms of classification, it is relatively easy to
apply explainable classification methods to it, focusing on a single pixel as seen in Fig. 2. For
instance, a gradient for the selected output pixel of a chosen class can be calculated with respect
to the entire input image. However, an explanation map for the classification of a single pixel is
not particularly useful. It is less accessible to the human interpreter, as evaluating thousands of
different explanations for just a single class in a single image would be required. Therefore, we
need to consider the effects of a larger number of pixels. Most popular explainable segmentation
techniques operate under the underlying assumption of pixel importance. This assumption is
particularly relevant to perturbation-based methods, where introducing noise to important pixels
would degrade a model’s performance more significantly than adding it to less critical pixels. To
explain the whole image (i.e., all pixels) instead of just a single pixel, most explainable segmenta-
tion techniques must visualize the relative contributions of all pixels simultaneously. Otherwise,
the analysis of separate single-pixel-based explanation maps would be too tedious. The most
popular way to do it involves using logit values, unnormalized probabilities before the Softmax
layer, typically used in classification. This could be achieved by summing up the logits of the
class of interest for the pixels of interest, for instance. This new scalar value can then be used
when generating a single explanation for the entire image, just like in the case of a single pixel.
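As a concrete illustration (our sketch, not code from any surveyed work), the following PyTorch snippet computes such an explanation by summing the class-c logits over a set of pixels of interest and backpropagating the resulting scalar to the input; the segmentation model `model` is assumed to return pre-Softmax logits of shape (1, C, H, W).

```python
import torch

def gradient_explanation(model, image, class_idx, pixel_mask=None):
    """Gradient-based explanation for a set of output pixels of one class.

    image:      tensor of shape (1, 3, H, W).
    class_idx:  index c of the class of interest.
    pixel_mask: boolean tensor of shape (H, W) selecting the pixels of
                interest; if None, all pixels are used.
    Returns a saliency map of shape (H, W).
    """
    image = image.clone().requires_grad_(True)
    logits = model(image)                    # (1, C, H, W), pre-Softmax scores
    class_map = logits[0, class_idx]         # (H, W) logits for class c
    if pixel_mask is not None:
        class_map = class_map[pixel_mask]
    score = class_map.sum()                  # scalar target used for the explanation
    score.backward()
    # collapse the RGB gradient channels into a single per-pixel saliency value
    saliency = image.grad[0].abs().max(dim=0).values
    return saliency.detach()
```

Setting `pixel_mask` to the predicted mask of the chosen class reproduces the logit summation described above, while a single-pixel mask recovers the per-pixel explanation of Fig. 2.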

2.3. Limitations
Feature attribution and saliency-based XAI methods in particular have faced criticism [38],
[39], [40]. Although these criticisms have solely focused on explainable classification, they de-
serve a thorough examination as they could also extend to segmentation. Some of the XAI meth-
ods act as regular edge detectors, independently of the underlying model and training dataset.
This independence is troubling because a local post-hoc XAI method should explain a specific
model’s prediction for a particular data point. In [41], limitations of feature attribution meth-
ods such as SHAP and integrated gradients are emphasized both theoretically and empirically,
showing that they cannot reliably infer counterfactual model behavior. The authors observe that
the analyzed attribution methods resemble random guessing in tasks like algorithmic recourse
and spurious feature identification. Similar experimental results are observed with gradients,
SmoothGrad [42], and LIME.
Attribution methods have also been criticized for confirmation bias [43]. An appealing but
incorrect explanation might be judged more favorably than a more realistic one. A better under-
standing of the goals of an idealized attribution method is needed to develop improved quanti-
tative tools for XAI evaluation [43]. In [44], the limitations of post-hoc explanations are inves-
tigated. The authors question their effectiveness in detecting unknown (to the user at test time)
spurious correlations. These inefficiencies are detected in three types of post-hoc explanations:
feature attribution, concept activation, and training point ranking. However, the authors acknowl-
edge that these three classes do not fully cover all post-hoc explanation methods. Other methods
have been criticized for their weak or untrustworthy causal relationships. In [45], saliency maps
are criticized for their frequent unfalsifiability and high subjectivity. The study also highlights
their causal unreliability in reflecting semantic concepts and agent behavior in reinforcement
learning environments. In [46], it is argued that feature attribution techniques are not more ef-
fective than showing the nearest training-set data point when tested on humans. The limitations
of attribution methods in cases of non-visible artifacts [47] have also been investigated.
Despite the critical studies on explainable classification and their potential extensions to seg-
mentation, the widespread prevalence of image segmentation requires investigating different ex-
plainability tools and their working mechanisms. Although some studies point out the limitations
of these techniques, better alternatives have yet to be developed. As observed in [48], the devel-
opment of interpretability methods is dialectical: a new method is introduced, its failure modes
are identified, and as a result, a new method is proposed, with the ongoing aim of making them
more reliable. Current methods have much room for improvement, especially considering that
the entire field is in the early stages of development. The above criticisms can serve as sanity
checks for XAI methods. Despite the limitations, some techniques, such as gradients and Grad-
CAM in the case of [38], do pass certain sanity checks. Even some critical literature [48] agrees
that certain explainability techniques can be useful for exploratory use cases. To our knowledge,
the specifics of XAI limitations in image segmentation have not yet been explored.
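One such sanity check can be sketched as follows (a hedged, simplified PyTorch illustration: the saliency method `explain_fn` is a placeholder, and all layers are re-initialized at once rather than cascadingly as in [38]): if an explanation barely changes after the model's parameters are randomized, it cannot be telling us much about what the model has learned.

```python
import copy
import torch

def randomization_sanity_check(model, explain_fn, image, class_idx):
    """Compare a saliency map of the trained model with that of a
    weight-randomized copy; a rank correlation close to 1 indicates the
    explanation is insensitive to the model's learned parameters.
    explain_fn(model, image, class_idx) returns an (H, W) saliency map."""
    baseline = explain_fn(model, image, class_idx).flatten()

    randomized = copy.deepcopy(model)
    for module in randomized.modules():
        if hasattr(module, "reset_parameters"):
            module.reset_parameters()      # re-initialize layer weights
    perturbed = explain_fn(randomized, image, class_idx).flatten()

    def rank(v):                           # Spearman-style rank transform
        return v.argsort().argsort().float()

    corr = torch.corrcoef(torch.stack([rank(baseline), rank(perturbed)]))[0, 1]
    return corr.item()
```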

3. Taxonomy
Different XAI taxonomies have been introduced in classification, both with respect to spe-
cific subgroups of interpretability methods [49], [50], and with respect to more abstract concep-
tual terms [51]. Even meta-reviews of various existing taxonomies have been proposed [52],
[53]. Since image segmentation can be seen as an extension of classification, many taxonomy-
related aspects can be validly transferred from research in explainable classification. In most
taxonomies, a particularly important role is played by three dichotomies: post-hoc vs ad-hoc
(sometimes also referred to as inherent interpretability), model-specific vs model-agnostic, and
local vs global explanations.

3.1. Scope: Local vs Global


The first prevalent dichotomy distinguishes between local and global explanations. Here,
locality refers to the use of a single input image with respect to which the explanation is given. A
global explanation, on the other hand, would aim to explain the model’s behavior across a range
of different images, not limiting itself to just one. According to meta-surveys [52], [53], the
local-global dichotomy is prevalent in numerous XAI taxonomies. This distinction is essential
in explainable segmentation as well, with most methods falling under local explanations.

3.2. Method and its timing: Post-hoc vs Ad-hoc


The distinction between post-hoc and ad-hoc explanations highlights that one can either apply
XAI techniques to an already-trained model without any interference or apply them during and
as part of the training process. Sometimes, these explanations are also described as passive and
active approaches [23]. Under this definition, active approaches require modifications to the
network or the training process. Such changes influence both the model’s performance in terms
of evaluative metrics and its interpretability. Therefore, an accuracy-interpretability trade-off
cannot be avoided in ad-hoc XAI methods, but it is avoided in the case of post-hoc applications.
This widely accepted dichotomy can nonetheless be slightly misleading, as both terms can
be meant to emphasize different distinctive criteria. Post-hoc can be understood as referring to
the fact that XAI techniques are applied after the training, hence “post”. Naturally, it would seem
that ad-hoc should be understood as referring to XAI techniques that are applied during training.
However, sometimes, as a direct opposition to “post-hoc”, terms like “inherent interpretability”
[2] or “self-explainability” [54] are used, pointing to an entirely different aspect: the architecture
or the type of XAI method. In some cases, such interpretation could allow for XAI methods that
are both inherently interpretable and post-hoc [22], which might cause confusion.

3.3. Range: Model-specific vs Model-agnostic


The third distinction evaluates the flexibility of a given XAI technique in its application to
different model architectures. Model-specific XAI methods heavily depend on the underlying
model architecture, whereas model-agnostic methods are more universal in their compatibility
with various models, and can be applied to different architectures without further modifications.
The interpretation of inherently interpretable models is always model-specific [22].
Figure 3: Method-centered taxonomy of XAI for image segmentation:
• Prototype-based: representative samples or their parts from the dataset are analyzed and compared with the image of interest.
• Gradient-based: calculating the gradient of the selected layer’s output or the class of interest with respect to selected inputs or feature maps.
• Perturbation-based (input space): iterative occlusions of the input image.
• Perturbation-based (activation space): partial or full deactivations of the model’s feature maps from the selected layer.
• Counterfactual: minimum input changes needed for the output to change are investigated.
• Architecture-based: additional architectural changes before or during the training to make the model more interpretable.

3.4. XAI taxonomy for image segmentation


Multiple compatible taxonomies are possible depending on the level of abstraction in which
we are interested. In [49], XAI methods in ML are divided into transparent models and post-hoc
explainability, which is then further divided into model-specific and model-agnostic categories.
In [50], interpretation methods are divided into post-hoc interpretability analysis and ad-hoc in-
terpretable modeling. In [55], a higher-level taxonomy distinguishes between structural analysis,
behavioral analysis, and explainability by design. In [56], a preliminary taxonomy of human
subject evaluation in XAI is introduced, which might be particularly useful when using qualita-
tive evaluations of XAI. Based on the analysis of XAI taxonomies in classification, we observe
that they could also be applied to image segmentation. However, no specific framework has been
introduced to address the ever-growing field of interpretable segmentation. We hope that a more
detailed demarcation will be useful in navigating across different types of techniques. In our
survey, we propose a taxonomy (Fig. 3) that is based on the reviewed literature in explainable
image segmentation.
The proposed method-centered taxonomy includes five method families: prototype-based,
gradient-based, perturbation-based, counterfactual methods, and architecture-based techniques.
Prototype-based methods employ representative samples or their parts from the dataset to ana-
lyze and compare with the input image. Gradient-based methods involve calculating the gradient
of the output of a selected layer or the class of interest with respect to selected inputs or feature
maps. Perturbation-based methods can be divided into two groups based on the perturbed space.
Input space perturbations are iterative occlusions of the input image. Typically, they are based
on a sliding filter, but different types of noise can also be introduced. Explanations are based on
their effect on the model’s outcome. Activation space perturbations involve partial or full deacti-
vations of the model’s feature maps from the selected layer. Once again, explanations are based
on their effect on the model’s outcome. Counterfactual methods employ the minimum input
changes needed for the output to change. Finally, architecture-based techniques involve mak-
ing additional architectural changes either before or during training to enhance interpretability.
Section 4 presents a more detailed analysis of each method group.

4. XAI for Image Segmentation

In this section, we review the main methods representative of each subgroup in the taxonomy,
as well as the metrics for explainable image segmentation.

4.1. Methods
4.1.1. Prototype-based methods
Prototype-based models [57] utilize typical representatives from the dataset, usually selected
from the training set. These methods emphasize the intuitiveness of the provided explanations,
presenting them in an easily understandable form of naturally occurring objects. Such features
can be easily distinguished and discriminated by end users. Meanwhile, prototypical parts refer
to specific regions within representative prototypes, also known as exemplars. In contrast to a
prototype, a criticism is a data instance that is not well represented by the prototypes [22]. In
terms of architecture, typical prototype-based methods require the insertion of a prototype layer
into the segmentation model. Therefore, depending on the taxonomy, prototype-based methods
could also be viewed as self-explainable and part of the architecture-based methods. However,
due to their frequent mentions in the related classification literature under the same subgroup
label, we opt to treat them as a separate group.
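To make the idea concrete, the following PyTorch sketch shows what a minimal prototype layer can look like; it is our simplification (with a ProtoPNet-style log-similarity), not the exact layer used in the works discussed below.

```python
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    """Minimal prototype layer: compares per-pixel embeddings from the
    segmentation backbone with learnable prototype vectors and outputs a
    similarity map per prototype."""

    def __init__(self, num_prototypes, embedding_dim):
        super().__init__()
        # one learnable prototype vector per row
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, embedding_dim))

    def forward(self, features):
        # features: (B, D, H, W) embeddings from the backbone
        b, d, h, w = features.shape
        flat = features.permute(0, 2, 3, 1).reshape(-1, d)        # (B*H*W, D)
        dists = torch.cdist(flat, self.prototypes)                # (B*H*W, P)
        # ProtoPNet-style similarity: large when the distance is small
        sims = torch.log((dists ** 2 + 1) / (dists ** 2 + 1e-4))
        return sims.reshape(b, h, w, -1).permute(0, 3, 1, 2)      # (B, P, H, W)
```

The resulting per-pixel similarity scores can then be combined (for instance, by a 1×1 convolution) into class logits, so that each class prediction can be traced back to its most similar prototypes.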
Figure 4: A framework for prototype-based methods.

Although prototypical methods are prevalent in classification [58], [59], [60], their extensions
for segmentation are few. Typically, a prototype layer (Fig. 4) is a key component in prototype-
based methods for both classification [58], [59], [60] and segmentation [61], [54]. Within a
prototypical layer, different classes are represented by predefined or learned prototypes. In [54],
a ProtoSeg model is proposed. The authors introduce a diversity loss function based on Jeffrey’s
divergence [62] to increase the prototype variability for each class. Better results are observed
when the diversity loss component is introduced. The authors attribute this to the higher infor-
mativeness of a more diverse set of prototypes that leads to a better generalization. We think that
this could be related to the diversity hypothesis [63], first introduced in the context of reinforce-
ment learning, and could be explored further. The experiments are performed using Pascal VOC
2012 [64], Cityscapes [65], and EM Segmentation Challenge [66] datasets. The DeepLab [67] model
is used as the backbone. In [61], a prototype-based method is used in combination with if-then
rules for the interpretable segmentation of Earth observation data. The proposed approach is an extension of xDNN [68] and uses mini-batch K-means [69] clustering. For the feature extraction
part, the U-Net architecture is used. The experiments are performed using the Worldfloods [70]
dataset.

4.1.2. Counterfactual explanations


Counterfactual or contrastive explanations investigate the minimum input changes needed for
the output to change. Unconditional counterfactual explanations were first introduced in [71].
This explainability subfield is related to adversarial attacks. Counterfactual images are similar
to the original, yet are able to change the model’s output. Counterfactual explanations can also
be viewed as closely linked to perturbation-based explanations, which will be discussed in the
next subsection. Counterfactual XAI techniques frequently fall into local post-hoc category [72].
After the initial segmentation model, counterfactual-based interpretability method typically em-
ploys additional networks for counterfactual generation. In our pipeline (Fig. 5), this is depicted
by additional encoder and decoder networks.
Figure 5: A framework for counterfactual methods.
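The sketch below illustrates one possible instantiation of such a pipeline under strong assumptions: a pretrained, differentiable `encoder`/`decoder` pair and segmentation model are assumed, and the two loss terms are illustrative rather than taken from any specific surveyed method.

```python
import torch

def latent_counterfactual(seg_model, encoder, decoder, image, class_idx,
                          steps=200, lr=0.05, dist_weight=1.0):
    """Optimize a latent code so that the decoded image suppresses class
    `class_idx` in the segmentation output while staying close to the
    original latent; returns the generated counterfactual image."""
    with torch.no_grad():
        z0 = encoder(image)
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_cf = decoder(z)
        logits = seg_model(x_cf)                          # (1, C, H, W)
        flip_loss = logits[:, class_idx].mean()           # push target class down
        dist_loss = torch.nn.functional.mse_loss(z, z0)   # stay near the original
        (flip_loss + dist_weight * dist_loss).backward()
        opt.step()
    return decoder(z).detach()
```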

Generator-based counterfactual explanations are investigated in [73]. OCTET, a generative


approach, produces object-aware counterfactual explanations for complex scenes. Counterfac-
tual changes to the image focus on road markings, such as changing the solid line into a dashed
one, or the positions of cars by cropping and extending the relevant regions of the input im-
age. The models are trained on the BDD100k [74] and BDD-OIA [75] datasets. Additional
information can be found in the supplementary material [76]. In [77], segmentation results are
qualitatively compared using counterfactual images. The experiments are performed on Kvasir-
SEG [78] and Kvasir-Instrument [79] datasets. Counterfactual explanations are generated using
the segmented area of interest, which is then replaced with the average pixel value of the rest of
the image. In [80], counterfactual explanations are generated for complex scenes while preserv-
ing the semantic structure. The proposed method uses semantic-to-real image synthesis. Here,
a noticeable contrast can be drawn between this approach and perturbation-based methods. In
the latter, perturbations applied to the input space fail to produce semantically meaningful image
regions.

4.1.3. Perturbation-based methods


Perturbation-based methods typically employ occlusions in the input (Fig. 6) or activation
(Fig. 7) space, and then measure their influence on the model’s output for a selected class. Here
occlusions or perturbations could be understood as uninformative regions, transforming the input
or its internal feature maps. Pixels in the occlusion filter can be set to 0, as seen in Fig. 6, or any
other arbitrary value. Such a sliding filter would occlude different regions of the input space. Its
size and stride parameters are specified beforehand. Gaussian or any other random noise can also
be used for these purposes. Multiple perturbative iterations are required for the generation of an
10
explanation map. During each inference, the score is calculated, measuring the perturbation’s
effect on the model’s performance. This can be done by taking the difference between the score for
the original image and that of its perturbed version. Such a score can be based on an evaluative
metric or pre-Softmax prediction values. Since the same input image has to undergo multiple
transformations, each requiring a separate forward pass through the model, perturbation-based
XAI methods are considered computationally expensive.
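A minimal PyTorch sketch of such an input-space analysis is given below; the patch size, stride, and fill value are arbitrary illustrative choices, and the perturbation effect is measured as the drop in the summed pre-Softmax logit of the target class.

```python
import torch

def occlusion_sensitivity_map(model, image, class_idx, patch=16, stride=8, fill=0.0):
    """Slide an occluding patch over the input and record how much the summed
    class-c logit drops; larger accumulated drops indicate more important regions.
    image: (1, 3, H, W). Returns an (H, W) importance map."""
    model.eval()
    _, _, h, w = image.shape
    heat = torch.zeros(h, w)
    counts = torch.zeros(h, w)
    with torch.no_grad():
        base = model(image)[0, class_idx].sum()
        for top in range(0, h - patch + 1, stride):
            for left in range(0, w - patch + 1, stride):
                occluded = image.clone()
                occluded[:, :, top:top + patch, left:left + patch] = fill
                score = model(occluded)[0, class_idx].sum()
                heat[top:top + patch, left:left + patch] += (base - score).item()
                counts[top:top + patch, left:left + patch] += 1
    return heat / counts.clamp(min=1)
```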

Figure 6: A framework for perturbation-based methods for the input space.

Figure 7: A framework for perturbation-based methods for the activation space.

Following [81], given an RGB image x of dimensions N × M × 3, the pre-Softmax prediction score for pixel x_{ij} for class c is l_c(x_{ij}). When x_{ij} is classified as c, the sum of these scores is:

L_c(x) = \sum_{i,j} [\hat{c}_{ij} = c] \, l_c(x_{ij}),    (1)

where \hat{c}_{ij} is the predicted class for x_{ij}.

Following [82], L_c(x) is used to compute the importance weight w_k^c of each activation map k:

w_k^c = \frac{L_c(x) - L_k^c(x)}{L_c(x)},    (2)

where L_k^c(x) is the sum of pre-Softmax scores for c after deactivating k. These calculated importance scores are then used in a linear combination of feature maps to generate a perturbation-based explanation.
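A hedged PyTorch sketch of this activation-space ablation (Eq. (2)) is given below; `layer` is any convolutional layer of the segmentation model, and L_c is simplified to the sum of class-c logits over all pixels.

```python
import torch

def ablation_weights(model, layer, image, class_idx):
    """Zero out each feature map k of `layer` in turn and compute the relative
    drop of the summed class score, w_k = (L_c - L_c^k) / L_c."""
    model.eval()
    state = {"k": None, "num_maps": None}

    def hook(_module, _inp, out):
        state["num_maps"] = out.shape[1]
        if state["k"] is not None:           # ablate the selected channel
            out = out.clone()
            out[:, state["k"]] = 0.0
        return out

    handle = layer.register_forward_hook(hook)
    try:
        with torch.no_grad():
            base = model(image)[0, class_idx].sum()          # L_c(x)
            weights = []
            for k in range(state["num_maps"]):
                state["k"] = k
                ablated = model(image)[0, class_idx].sum()   # L_c^k(x)
                weights.append((base - ablated) / (base + 1e-8))
    finally:
        handle.remove()
    return torch.stack(weights)
```

The resulting weights can then be combined linearly with the captured feature maps and upsampled to obtain the explanation map.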
The authors of [15] propose the first XAI solution for extending saliency techniques beyond
classification. Their perturbation-based method is introduced for the detection of contextual bi-
ases. The experiments are performed using a synthetic toy dataset based on MNIST [83] as well
as the Cityscapes [65] dataset. In [84], a hybrid SegNBDT approach is introduced, combining
both decision trees and neural networks. This method falls under both the perturbation-based and
self-explainable model categories. For the experimental part, Pascal Context [85], Cityscapes
[65], and Look Into Person [86] datasets are used. In [87], SHAP and RISE techniques are
applied to image segmentation. SHAP is a popular post-hoc interpretability method, and the
proposed approach is based on Kernel SHAP [29]. The experiments are performed on Synthetic
Aperture Radar images from an unspecified dataset for oil slick detection at the sea surface
and the Cityscapes [65] dataset. In [26], a perturbation-based occlusion sensitivity approach is
used to measure the performance of the proposed interpretable semantic segmentation approach.
Compared to occlusion sensitivity and Grad-CAM, their method achieves orders of magnitude
lower inference time. However, it requires training an additional interpretability model. In [88],
following [27], different types of input occlusions are investigated for applications in semantic
segmentation. The paper discusses how occlusion filter sizes and colors can affect the generated
explanations. It is observed that, compared to image classification, input occlusions in segmen-
tation models do not generate as much variance in the evaluation metric scores. For the experi-
mental investigation, the COCO [89] dataset is used. The proposed method is evaluated qualitatively,
with select images also compared quantitatively using deletion curves.
Perturbations are not limited to the input space. For instance, Ablation-CAM [82] is a
gradient-free method that systematically deactivates feature maps in a selected layer. In [81],
Ablation-CAM is extended to semantic segmentation. It is a gradient-free interpretability tech-
nique based on ablating or perturbing activation maps. The experiments are performed on a
private industrial dataset for fruit-cutting machines as well as on the COCO [89] dataset.

4.1.4. Gradient-based methods


Gradient-based methods (Fig. 8) typically use gradients of the outputs from later layers with
respect to the input features. These techniques are less computationally expensive compared
to perturbation-based techniques because only a single backward pass is required. Perturbation
techniques, on the other hand, require a separate forward pass for each perturbed image, increas-
ing computational costs with each inference.

Figure 8: A framework for gradient-based methods.

Following [90], given an RGB image x of dimensions N × M × 3, and a set of class labels {1, 2, ..., C}, where C denotes the total number of classes, we define:

g(x) = (g_1(x_{ij}), ..., g_C(x_{ij})) \in \mathbb{R}^{N \times M \times C},    (3)

where g_c(x_{ij}) denotes the pre-Softmax prediction score for class c at pixel x_{ij}.
The sum of these pre-Softmax scores for class c is then defined as:

g_{c,A}(x) = \sum_{i,j \in A} g_c(x_{ij}),    (4)

where A is a set of pixel indices of interest.
From this, the gradient-based explanation with respect to c is derived as:

G_A(x, c) = \frac{\partial g_{c,A}(x)}{\partial x}.    (5)
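For illustration, the following PyTorch sketch combines Eqs. (4)-(5) with Grad-CAM-style channel weighting, in the spirit of the Seg-Grad-CAM family of methods discussed next; it is our simplified illustration, not a reference implementation, and `layer` denotes the convolutional layer whose feature maps are visualized.

```python
import torch
import torch.nn.functional as F

def seg_grad_cam_sketch(model, layer, image, class_idx, pixel_mask=None):
    """Backpropagate the summed class-c logits over the pixels of interest to a
    chosen layer, pool the gradients into channel weights, and build a CAM."""
    feats, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    try:
        logits = model(image)                        # (1, C, H, W)
        class_map = logits[0, class_idx]
        if pixel_mask is not None:
            class_map = class_map[pixel_mask]
        model.zero_grad()
        class_map.sum().backward()                   # g_{c,A}(x) as the target
        a, g = feats["a"][0], grads["a"][0]          # (K, h, w) each
        alphas = g.mean(dim=(1, 2))                  # global-average-pooled gradients
        cam = F.relu((alphas[:, None, None] * a).sum(dim=0))
        cam = F.interpolate(cam[None, None], size=image.shape[-2:],
                            mode="bilinear", align_corners=False)[0, 0]
    finally:
        h1.remove()
        h2.remove()
    return cam.detach()
```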
In [16], Seg-Grad-CAM is proposed as an extension of Grad-CAM [25]. It is one of the best
known explainability techniques in image segmentation. Just like in the case of regular Grad-
CAM, the generated saliency is based on the weighted sum of the selected feature maps. Its
application is demonstrated on a U-Net model, trained on the Cityscapes [65] dataset. In [91],
the same method is applied for automatic rock joint trace mapping. The original Grad-CAM
technique for classification, together with simple gradients, passes the previously discussed san-
ity checks, which evaluate the reliability of an XAI technique. In [92], Seg-XRes-CAM is introduced.
The authors criticize Seg-Grad-CAM [16] for not utilizing spatial information when generating
saliency maps for a region of the segmentation map. The proposed approach draws inspiration
from HiResCAM [93], a modification of the original Grad-CAM [25]. Subsequently, [94] adapts
five CAM-based XAI methods from classification to the segmentation of high-resolution satellite
images. Among the proposed extensions are Seg-Grad-CAM++, Seg-XGrad-CAM, Seg-Score-
CAM, and Seg-Eigen-CAM. Just like in [81], Ablation-CAM, a gradient-free method, is also
extended for segmentation. Besides using the drop in the segmentation score to measure their
methods’ performance, the authors also propose an entropy-based XAI evaluation metric. The implemented methods are tested on the WHU [95] building dataset. In [96], an interpretability and
visualization toolbox is proposed for classification and segmentation networks. It includes sev-
eral XAI extensions specifically for image segmentation. Among them are Guided Grad-CAM
and segmented score mapping, extended from [97].

4.1.5. Architecture-based methods


This subgroup of methods introduces additional architectural changes (Fig. 9) that aim to
make the models more interpretable. Instead of relying on post-hoc techniques that are added on
top of the already trained models, these methods are typically employed as part of the training
process. This class of XAI methods is sometimes described as interpretable by design, inherently
interpretable, or interpretability as part of the architecture.
Figure 9: A framework for architecture-based methods.

One such example is the chimeric U-Net with an invertible decoder [98]. This approach intro-
duces architectural constraints for the sake of explainability. The authors claim that it can achieve
both local and global explainability. In [99], both supervised and unsupervised techniques of Se-
mantic Bottlenecks (SB) are introduced for better inspectability of intermediate layers. This
approach is proposed as an addition to the pre-trained networks. Unsupervised SBs are identified
as offering greater inspectability compared to their supervised counterparts. The experiments are
primarily performed on street scene segmentation images from the Cityscapes dataset. The re-
sults are also compared using two other datasets: Broden [100] and Cityscapes-Parts, a derivative
of Cityscapes. In [101], a framework for symbolic semantic segmentation is proposed. This work
is at the intersection of image segmentation and emergent language models. The authors apply
their research to medical images, specifically brain tumor scans. Emergent Language model
with a Sender and a Receiver is utilized for interpretable segmentation. The Sender is an agent
responsible for generating a symbolic sentence based on the information from the high model
layer, while the Receiver cogenerates the segmentation mask after receiving symbolic sentences.
Symbolic U-Net is trained on the Cancer Imaging Archive (TCGA) dataset (https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=5309188) and used for providing inputs to the Sender network.

4.2. Metrics
XAI techniques are used in addition to standard evaluation metrics due to the limitations of the latter.
However, to evaluate the performance of these techniques, they also need to be measured. We
can distinguish between qualitative and quantitative assessment methods. Qualitative evaluation
commonly refers to user-based evaluation and, based on the surveyed papers (Table 1 and Table
2), is the more prevalent of the two. To quantify subjective user results, various questionnaires
have been proposed [102], such as the explanation goodness checklist, explanation satisfaction
scale, trust scales, and the ease of understanding when comparing different explainability tech-
niques [103]. These methods still require polling multiple subjects, although, when surveying
experts, in practice their number is limited to 2-5 [103]. This way, quantification still takes place,
but it is based on subject-dependent evaluation. Since questionnaire studies require additional re-
sources, most of the papers using qualitative evaluation only provide visual comparisons between
different XAI techniques, leaving the qualitative judgment to the reader’s eye.
Quantitative evaluation does not involve human subjects and can be more easily applied
when comparing different interpretability methods. Infidelity and sensitivity [104] are the only
two metrics that, as of 2024, are implemented in the Captum [105] interpretability library for
PyTorch. Deletion and insertion metrics [30] are another type of quantitative evaluation, based
on measuring the Area under the Curve (AUC), generated after gradually deleting or inserting the
most important pixels in the input space. However, for some XAI methods, such as counterfactual
explanations, it might be difficult to evaluate the usefulness of the explanation quantitatively.
In the case of counterfactual explanations, we can measure whether the generated images are
realistic and how closely they resemble the query images, but for a more thorough evaluation of
the explanation itself, user studies [73] might be required.
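As an illustration of the deletion metric, the sketch below removes pixels in decreasing order of attributed importance and tracks the normalized class score; using the summed class logit as the score and zero as the fill value are our simplifying assumptions.

```python
import torch

def deletion_auc(model, image, saliency, class_idx, steps=50, fill=0.0):
    """Delete the most salient pixels first and measure the area under the
    resulting score curve; a steep drop (small AUC) suggests a faithful map.
    image: (1, 3, H, W), saliency: (H, W)."""
    model.eval()
    _, _, h, w = image.shape
    order = saliency.flatten().argsort(descending=True)
    scores = []
    with torch.no_grad():
        base = model(image)[0, class_idx].sum().item()
        for i in range(steps + 1):
            n = int(round(i / steps * h * w))
            x = image.clone().reshape(1, 3, -1)
            x[:, :, order[:n]] = fill                      # remove the top-n pixels
            score = model(x.reshape(1, 3, h, w))[0, class_idx].sum().item()
            scores.append(score / (abs(base) + 1e-8))
    # trapezoidal area under the deletion curve
    return sum((scores[i] + scores[i + 1]) / 2 for i in range(steps)) / steps
```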
In [106], a psychophysics study (N = 1,150) is conducted to evaluate the performance of six
explainable attribution methods on different neural network architectures. Shortcomings in the
methods are detected when using them to explain failure cases. Comparative quantitative rank-
ings of different saliency techniques can also be inaccurate. In [107], inspired by [38], sanity
checks for saliency maps are investigated. The authors perform checks for inter-rater reliability,

inter-method reliability, and internal consistency, and determine that the current saliency met-
rics are unreliable. It is observed that these metrics exhibit high variance and are sensitive to
implementation details.

5. Applications

This section presents concrete XAI applications in medical and industrial domains. We also
discuss other use cases, with a primary focus on industry-related monitoring domains, such as
remote sensing, environmental observation, and biometrics. Additionally, the potential uses of
XAI for self-supervised image segmentation are reviewed.

5.1. Medical Applications


Most applications in explainable image segmentation have been investigated in the medical
domain, using datasets from various medical fields (Table 1), ranging from cardiology to on-
cology. Proposed XAI solutions and applications are employed for diagnosing, monitoring, and
other clinical tasks. In some cases, there might be unavoidable overlaps between medical fields.
For instance, overlaps occur at the intersection of oncology and histopathology when discussing
microscopic tumor images, or between oncology and dermatology when considering melanoma
[143]. Such overlaps can also arise from using multiple datasets, each associated with a different
medical field. In these instances, we specify the relevant details in the method description.

5.1.1. Dermatology
Dermatology-centered XAI applications [143], [142] focus on skin lesions. Specifically,
[143] discusses applications for interpreting melanoma diagnosis results. The proposed pipeline
utilizes both classification and segmentation networks. Grad-CAM is employed to generate ex-
plainable heatmaps for the classifier, which are then used as inputs in the U-Net network. These
heatmaps assist in generating indicator biomarker localization maps. The proposed approach can
be used in self-supervised learning. Experiments are performed on the ISIC 2018 [114] and ISIC
2019 [122], [169], [170] datasets. In [142], a CAM-based explainability metric is proposed and
incorporated into the loss function. This metric quantifies the difference between the CAM out-
put and the segmentation ground truth for the targeted class. Both segmentation and explanation
losses are considered during the model’s training phase. The use of CAM with learnable weights
enables a balance between segmentation performance and explainability. The proposed method
belongs to the self-explainable XAI category. Similar to [143], the U-Net network is used. The
experiments are conducted on the ISIC2018 [114] dataset. In [115], a comprehensive attention-
based convolutional neural network is proposed for better interpretability in dermoscopic and
fetal MRI images. This approach uses multiple attentions, combining the information about spa-
tial regions, feature channels, and scales. The experiments are performed on ISIC 2018 [114]
and a private fetal MRI dataset.

5.1.2. Forensic Medicine


The applications of explainable segmentation in forensic medicine are limited to iris segmen-
tation. This can be more narrowly referred to as forensic ophthalmology. In [121], the investiga-
tion focuses on forensic postmortem iris segmentation. The authors apply a classical technique
of Class Activation Mapping (CAM) [171]. The experiments are performed on a private test
dataset, and publicly available post-mortem iris datasets, collected by [120].
Table 1: Explainable Image Segmentation in Medicine

Field | Imaging modality | Objects of interest | Datasets | Metric | Year | Ref.
G | IMG* | Colorectal polyps | EndoScene [108] | ▷ | 2018 | [14]
O | CT | Liver tumors | LiTS [109] | ▶ | 2019 | [110]
C | CMRI | Ventricular volumes | SUN09 [111], AC17 [112] | ▷ | 2020 | [113]
O | MRI | Brain tumors | TCGA | ▷ | 2020 | [101]
D | MRI/IMG | Skin lesions, multi-organ (incl. the fetal brain and the placenta) | ISIC2018 [114] and a private fetal MRI dataset | ▷ | 2020 | [115]
P | CT | Pancreatic region | Medical segmentation decathlon | ▶ | 2021 | [26]
C | CMRI | Ventricles, myocardium | Cardiac MRI dataset [112] | ▷ | 2021 | [116]
G | IMG | Polyps, med. instruments | Kvasir-SEG [78], Kvasir-Instrument [79] | ▷ | 2021 | [117]
O | MRI | Brain tumors | BraTS2018 [118] | ▶ | 2021 | [119]
FM | NIR | Iris | Private test dataset, post-mortem iris datasets collected by [120] | ▷ | 2022 | [121]
V | CT/MRI/IMG | Skin lesions, abdomen multi-organ, brain tumors | HAM10000 [122], CHAOS 2019 [123], BraTS 2020 [118] | ▷ | 2022 | [124]
V | MRI | Brain tumors, human knees | BraTS 2017 [118], OAI ZIB [125] | ▷ | 2022 | [98]
O | MRI | Brain tumors | BraTS 2019 [118], BraTS 2021 [118] | ▷ | 2022 | [126]
N | MRA | Brain vessels | Private | ▶ | 2022 | [127]
H | IMG | Liver | Simulated dataset (Test Set 4) [128] | ▷ | 2022 | [129]
O | US/MG | Breast tumors | Private LE/DES datasets, and BUSI [130] | ▷ | 2023 | [131]
G | CT/IMG | Colorectal polyps, lung cancer | EndoScene [108], LIDC-IDRI [132] | ▶ | 2023 | [133]
O | CT/MRI | Prostate cancer | 3D pelvis dataset [134] | ▷ | 2023 | [135]
G | CT | Abdominal organs | Synapse multi-organ CT dataset [136] | ▷ | 2023 | [92]
O | BUS | Breast tumors | BUSI [130], BUSIS [137], HMSS [138] | ▷ | 2023 | [139]
O | CT/PET | Non-small cell lung cancer, whole-body | NSCLC, AutoPET [140] | ▶ | 2023 | [141]
D | IMG | Skin lesions | ISIC2018 [114] | ▷ | 2023 | [142]
D | IMG | Melanoma | ISIC 2018 [114], ISIC 2019 [122] | ▷ | 2023 | [143]
O | MRI/IMG | Prostate tumors, optic disc and cup | Prostate** and fundus*** datasets | ▷ | 2023 | [144]
O | X-ray | Breast tumors | INbreast [145] | ▶ | 2023 | [146]
O | WSI | Head and neck tumors | Private | ▷ | 2023 | [147]
A | IR | Feet | ThermalFeet | ▷ | 2023 | [148]
V | CT/MRI/IMG | Brain tumors | BraTS 2018 [118], BraTS 2019 [118], BraTS 2020 [118], ISIC 2017 | ▷ | 2023 | [149]
P | CT | Pancreas | Pancreas segmentation dataset [150] | ▷ | 2023 | [151]
Op | OCT | Retinal layers, glaucoma, diabetic macular edema | NR206, glaucoma dataset [152], DME dataset [153] | ▷ | 2023 | [154]
V | CT/MRI | Prostate, left ventricle, right ventricle, myocardium | NCI-ISBI 2013 [155], I2CVB [156], PROMISE12 [157]; MSCMR [158], EMIDEC [159], ACDC [112], MMWHS [160], CASDC 2013 [161] | ▷ | 2023 | [162]
Op | OCT | Retinal layers, glaucoma, diabetic macular edema | Vis-105H, glaucoma dataset [152], DME dataset [153] | ▷ | 2024 | [163]
V | CMRI/CT | Left atrium, thoracic organs | Atrium dataset [150], SegTHOR [164] | ▷ | 2024 | [165]

A: Anesthesiology, C: Cardiology, D: Dermatology, FM: Forensic Medicine, G: Gastroenterology, H: Hepatology, N: Neurology, O: Oncology, Op: Ophthalmology, P: Pancreatology, and V: Various
* IMG: general-purpose digital image formats, such as JPEG
** Prostate datasets: RUNMC [155], BMC [155], HCRUDB [156], UCL [157], BIDMC [157], and HK [157]
*** Fundus datasets: DRISHTI-GS [166], RIM-ONE-r3 [167], and REFUGE [168]
▷: Qualitative XAI evaluation
▶: Quantitative XAI evaluation

5.1.3. Gastroenterology
XAI applications for endoscopic image segmentation primarily focus on polyps. In [14],
the Guided Backpropagation [172] technique is extended to the semantic segmentation of col-
orectal polyps. Uncertainty in input feature importance is estimated, with higher uncertainty ob-
served in inaccurate predictions. Uncertainty maps are generated using the Monte Carlo Dropout
method. The proposed solution is evaluated on the EndoScene [108] dataset. In [117], Layer-
wise Relevance Propagation (LRP), a propagation-based explainability method, is applied to
the endoscopic image segmentation of gastrointestinal polyps and medical instruments. LRP is
specifically applied to the generator component within a Generative Adversarial Network. The
generated relevance maps are then qualitatively evaluated. The segmentation models are trained
on the Kvasir-SEG [78] and Kvasir-Instrument [79] datasets.

5.1.4. Hepatology
In [129], two gradient-based post-hoc explanations, Grad-CAM and Grad-CAM++, are in-
vestigated for cross explanation of two DL models, U-Net and the Siamese/Stereo matching net-
work, based on [128]. The experiments are performed on laparoscopic simulated stereo images
[128], with a focus on liver segmentation.

5.1.5. Oncology
Most of the explainable medical AI applications in image segmentation are in oncology.
Liver: A DeepDream-inspired method is proposed in [110] for the segmentation of liver tu-
mors in CT scans, specifically focusing on binary segmentation. The study seeks to understand
how human-understandable features influence the segmentation output and defines the network’s
sensitivity and robustness to these high-level features. High sensitivity indicates the importance
of such features, while high network robustness shows its indifference to them. Radiomic fea-
tures are also analyzed. The experiments are performed on the LiTS [109] challenge dataset (https://competitions.codalab.org/competitions/17094).
Semantic segmentation in liver CT images is further investigated in [173], where the segmen-
tation output is corrected based on XAI. This approach is categorized as a global surrogate and
is model-agnostic. However, its primary purpose is not interpretability but rather the improve-
ment in the initial segmentation by using additional boundary validation and patch segmentation
models. The authors of [174] investigate the segmentation of malignant melanoma lesions in 18-
fluorodeoxyglucose (18 F-FDG) PET/CT modalities, focusing on metastasized tumors. The claim
to interpretability is based on the visualization of the model’s intermediate outcomes. The over-
all pipeline involves both segmentation and detection. Volumes of interest (VOI) are visualized
for the liver as well as PET-positive regions classified as physiological uptake. This additional
information is provided together with the final segmentation masks.
Brain: An interpretable SUNet [101] architecture is proposed for the segmentation of brain
tumors using The Cancer Imaging Archive (TCGA) dataset. Experimental results and statistical
analysis indicate that symbolic sentences can be associated with clinically relevant information,
including tissue type, object localization, morphology, tumor histology and genomics data. In
[119], 3D visual explanations are investigated for brain tumor segmentation models, using the
quantitative deletion curve metric to compare the results with Grad-CAM and Guided Back-
propagation [172] techniques. In [124], a region-guided attention mechanism is used for the
explainability of dermoscopic, multi-organ abdomen CT, and brain tumor MRI images. The

experiments are performed on HAM10000 [122], CHAOS 2019 [123], and BraTS 2020 [118]
datasets. Another architecture-based solution is proposed in [98], where the U-Net architecture
is modified and applied to two MRI datasets: BraTS 2017 [118] and OAI ZIB [125], respec-
tively focusing on brain tumors and human knees. In [175], Grad-CAM results are compared
to brain tumor segmentation results. The overall pipeline includes both classification and seg-
mentation networks, where DenseNet is used for classification and Grad-CAM-based heatmaps
are generated for different layers. However, Grad-CAM is not specifically tailored for segmen-
tation but rather used as an explainable classification tool to evaluate segmentation results. In
[126], a NeuroXAI framework is introduced, combining seven backpropagation-based explain-
ability techniques, each suitable for both explainable classification and segmentation. Gliomas
and their subregions are investigated using 2D and 3D explainable sensitivity maps. A ProtoSeg
method is proposed in [149] for interpreting the features of U-Net, presenting a segmentation
ability score based on the Dice coefficient between the feature segmentation map and the ground
truth. Experiments are performed on five medical datasets, including BraTS for brain tumors,
each focusing on different medical fields or affected organs.
Pelvis: In [135], a Generative Adversarial Segmentation Evolution (GASE) model is pro-
posed for a multiclass 3D pelvis dataset [134]. The approach is based on adversarial training.
Style-CAM is used to learn an explorable manifold. The interpretability part allows visualizing
the manifold of learned features, which could be used to explain the training process (i.e. what
features are seen by the discriminator during training).
Breast cancer: Oncological XAI applications for the segmentation of breast tumors are in-
vestigated in [131], [139], and [146]. In [131], a multitask network is proposed for both breast
cancer classification and segmentation. Its interpretations are based on contribution score maps,
which are generated by the information bottleneck. Three datasets are used, each focusing on a
different imaging modality. In [139], the SHAP explainability method is applied to the task of breast cancer detection and segmentation. The experiments are performed on the BUSI [130], BUSIS [137], and HMSS [138] datasets. In [146], explainability for mammogram tumor segmentation
is investigated with the application of Grad-CAM and occlusion sensitivity, in both cases using
Matlab implementations, and activation visualization. Their quantitative evaluation is based on
image entropy, which gives additional information about the XAI method’s complexity. Pixel-
flipping techniques, which are directly related to deletion curves, are also employed. The exper-
iments are performed on the INbreast [145] dataset of X-ray images.
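To illustrate how deletion-curve style evaluation transfers to dense prediction, a hedged sketch is given below: pixels are removed in order of decreasing saliency and the drop in the target-class Dice score is recorded. The model interface, baseline fill value, and step size are assumptions rather than the exact protocols of [119] or [146].

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

def deletion_curve(model, image, saliency, gt_mask, steps=10, baseline=0.0):
    """Progressively zero out the most salient pixels and track the Dice score.

    model: callable mapping an image (H, W, C) to a binary mask (H, W).
    saliency: importance map (H, W) produced by any XAI method."""
    order = np.argsort(saliency.ravel())[::-1]           # most salient first
    scores = [dice(model(image), gt_mask)]
    perturbed = image.copy()
    per_step = order.size // steps
    for s in range(steps):
        idx = order[s * per_step:(s + 1) * per_step]
        ys, xs = np.unravel_index(idx, saliency.shape)
        perturbed[ys, xs, :] = baseline                  # "delete" these pixels
        scores.append(dice(model(perturbed), gt_mask))
    return np.array(scores)                              # a faster drop indicates a better explanation

# Toy usage with a thresholding "model" and its own mean intensity as saliency.
rng = np.random.default_rng(1)
image = rng.random((32, 32, 3))
gt_mask = image.mean(axis=-1) > 0.5
toy_model = lambda x: x.mean(axis=-1) > 0.5
saliency = image.mean(axis=-1)
print(deletion_curve(toy_model, image, saliency, gt_mask).round(3))
```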
Other: In [144], Importance Activation Mapping (IAM) is employed as an explainable vi-
sualization technique in continual learning. The generated heatmap shows which regions in the
input space are activated by model parameters with high-importance weights, associated with
the model’s memory. This approach is evaluated for the segmentation of prostate cancer. It also
has applications in ophthalmology, specifically for segmenting the optic cup and disc. In [147],
two CAM-based XAI techniques, Seg-Grad-CAM and High-Resolution CAM (HR-CAM), are
applied to histopathological images of head and neck cancer. The explanations generated by both
techniques appear to rely on the same features identified by professional pathologists. In [176],
a solution based on Cartesian Genetic Programming is used to generate transparent and inter-
pretable image processing pipelines. This method is applied to biomedical image processing,
ranging from tissue histopathology to high-resolution microscopy images, and can be charac-
terized as a few-shot learning approach. In [177], a classification-based version of Grad-CAM
is used to enhance a U-Net-based segmentation network. The experiments are performed on
the 3D-IRCADb-01 [178] dataset, composed of 3D CT scans acquired in the venous phase. An
Xception network generates 2D saliency maps for classification, which are then passed to the
U-Net network together with the corresponding input images. This prior information enables
more accurate segmentation. In [179], a framework for explainable classification and segmen-
tation is presented. For segmentation, it relies on a feature hierarchy. The experiments are
performed on a skin cancer dataset. The Factorizer architecture, introduced in [180], is based
on nonnegative matrix factorization (NMF) components, which are argued to be more semanti-
cally meaningful compared to CNNs and Transformers. The proposed approach is categorized
under architecture-based interpretability methods. The models are implemented for brain tu-
mor and ischemic stroke lesion segmentation datasets. In [127], a framework for explainable
semantic segmentation is presented, extending several classification techniques to segmentation.
These methods are also applied to 3D models. Infidelity and sensitivity metrics are used, and
the experiments are performed on vessel segmentation in human brain images using Time-of-
Flight Magnetic Resonance Angiography. The experimental data [181] is not publicly available.
In [141], a new interpretation method is proposed for multi-modal segmentation of tumors in
PET and CT scans. It introduces a novel loss function to facilitate the feature fusion process.
The experiments are performed on two datasets: a private non-small cell lung cancer (NSCLC)
dataset and AutoPET [140], a whole-body PET/CT dataset from the MICCAI 2022 challenge.

5.1.6. Ophthalmology
XAI is also employed in the segmentation of ophthalmological images. Optic disc and cup
segmentation is explored in the setting of continual learning [144], where it is investigated in
multi-site fundus datasets. Importance Activation Mapping is used to visualize the memorized
content, facilitating an explanation of the model’s memory. The focus is on reducing the model’s
forgetting. In [154], Seg-Grad-CAM is applied to ophthalmology for segmenting retinal layer
boundaries. The study provides an entropy-based uncertainty visualization of segmentation prob-
abilities. This offers more information about which retinal layers and regions exhibit higher un-
certainty and allows for focusing on problematic areas. It is observed that higher uncertainty
is associated with segmentation errors once it reaches a certain threshold. The experiments are
performed on the NR2063 dataset.
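A minimal sketch of such an entropy-based uncertainty map, computed from the per-pixel softmax probabilities of an arbitrary segmentation model, is given below; the normalization and any thresholding used in [154] are not reproduced.

```python
import numpy as np

def pixelwise_entropy(probs, eps=1e-12):
    """Shannon entropy of the per-pixel class distribution.

    probs: softmax output of shape (num_classes, H, W), summing to 1 over axis 0.
    Returns an (H, W) map in [0, 1]; higher values indicate more uncertain pixels."""
    entropy = -np.sum(probs * np.log(probs + eps), axis=0)
    return entropy / np.log(probs.shape[0])   # normalize by the maximum possible entropy

# Toy usage: 4-class softmax probabilities for a 64x64 image.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 64, 64))
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
uncertainty = pixelwise_entropy(probs)
print(uncertainty.shape, float(uncertainty.max()))
```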

5.1.7. Pancreatology
In [26], an interpretable image segmentation approach is proposed for pancreas segmentation
in CT scans. The method is also compared to Grad-CAM and occlusion sensitivity, demonstrat-
ing its superior inference time. This method identifies regions in the input images where noise
can be applied without significantly affecting model performance. It relies on noisy image occlu-
sion and can be classified as a perturbation-based technique. To directly parameterize the noise
mask for each pixel without harming the model’s performance, an additional small interpretabil-
ity model is trained. Both interpretability and utility models are based on U-Net. Pixels that can
be significantly perturbed without changing the model’s performance are considered less impor-
tant. Essentially, the proposed method involves training noise distributions. This approach allows
training dynamic noise maps for individual images, differing from the typical static systematic
occlusion. Experiments are performed on a pancreas dataset [182]. In [151], a smoothing loss is
introduced to guide interpretability learning. The authors observe that the explanations produced
by U-Noise are less continuous. Assuming that important pixels are likely to be spatially close,
the proposed smoothing objective considers the correlation between pixels during optimization.

3 https://github.com/Medical-Image-Analysis/Retinal-layer-segmentation

The resulting explanations are compared to those generated by Grad-CAM and U-Noise. Exper-
iments are performed on a pancreas segmentation dataset [150] from the Medical Segmentation Decathlon.
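The core idea behind this family of learnable-noise explanations can be sketched as follows, assuming a frozen utility segmentation model and a small interpretability network that predicts a per-pixel noise scale; the exact architectures and losses of U-Noise [26] and the smoothing objective of [151] differ in detail.

```python
import torch
import torch.nn as nn

class NoiseMaskNet(nn.Module):
    """Small interpretability model predicting a per-pixel noise scale in [0, 1]."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def noise_mask_step(utility_model, mask_net, optimizer, image, target, lam=0.1):
    """One optimization step: inject noise where the mask allows it, keep the
    frozen utility model's segmentation loss low, and reward widespread noise.
    Pixels that tolerate strong noise are deemed less important."""
    mask = mask_net(image)                               # (B, 1, H, W) noise scale
    noisy = image + mask * torch.randn_like(image)
    seg_loss = nn.functional.cross_entropy(utility_model(noisy), target)
    loss = seg_loss - lam * mask.mean()                  # performance vs. amount of noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), mask.detach()

# Toy usage: a 1x1 convolution stands in for a trained (and frozen) utility U-Net.
utility = nn.Conv2d(1, 2, 1)
for p in utility.parameters():
    p.requires_grad_(False)
mask_net = NoiseMaskNet()
opt = torch.optim.Adam(mask_net.parameters(), lr=1e-3)
image = torch.randn(2, 1, 32, 32)
target = torch.randint(0, 2, (2, 32, 32))
loss, importance = noise_mask_step(utility, mask_net, opt, image, target)
print(loss, importance.shape)                            # low mask values ~ important pixels
```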

5.1.8. Urology
In [162], a Bayesian approach is proposed to address the problem of interpreting domain-invariant features. The experiments cover prostate and cardiac segmentation tasks: T2 prostate MRI images from NCI-ISBI 2013 [155], I2CVB [156], and PROMISE12 [157] are used for the former, while the MSCMR [158], EMIDEC [159], ACDC [112], MMWHS [160], and CASDC 2013 [161] datasets are used for the latter.

5.1.9. Anesthesiology
In [148], an interpretable approach is investigated for regional neuraxial analgesia monitor-
ing. The experiments focus on thermal foot images for patients who have received epidural
anesthesia. The proposed method is based on Convolutional Random Fourier Features (CRFF)
and layer-wise weighted CAM. The experiments are performed on the ThermalFeet4 dataset of
infrared images.

5.2. Industry-related Applications


Various industrial and industry-related activities require precise segmentation. These activi-
ties might range from precise manufacturing and processing [90] to structural health monitoring
in infrastructure, particularly in evaluating damage [183]. Here we discuss both industrial pro-
cesses and indirectly related tasks, such as environmental monitoring and remote sensing, which
hold potential for industrial applications in a narrower sense. In this section, we divide
industry-related explainable segmentation solutions into four categories: remote sensing, moni-
toring, scene understanding, and other more general applications.

5.2.1. Remote Sensing


One of the first applications of interpretable image segmentation is in remote sensing. In
[184], the U-Net model is applied for building detection. The proposed method works at the
intersection of interpretability, representation learning, and interactive visualization, and is de-
signed to explain U-Net’s functionality. It employs Principal Component Analysis (PCA) on
the activations in the bottleneck layer. PCA is the preferred method because it preserves the
largest variance in the data. In the case of 3D visualizations, the first three components could be
used. Following PCA, the new representations are clustered using the k-means and DBSCAN algorithms. This approach allows for the visualization of learned latent representations for all samples
through an Intersection over Union (IoU)-based heatmap, allowing users to identify qualitatively
different regions. The experiments are performed on Inria Aerial Image Labeling (IAIL) [185]
dataset. This technique can be applied to detect and evaluate damages in industrial disasters or
humanitarian crises, extending beyond mere infrastructure and product monitoring in industry.
Another remote sensing application [186], specifically focusing on high-resolution satellite im-
ages, employs a gradient-free Sobol method [187] and a U-Noise model [26]. The proposed
method is also compared to Seg-Grad-CAM++, a segmentation-oriented extension of the Grad-CAM++ classification technique.
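A hedged sketch of this kind of latent-space inspection is given below, assuming that flattened bottleneck activations have already been collected over a dataset; the preprocessing, number of components, and clustering parameters used in [184] may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def inspect_bottleneck(activations, n_components=3, n_clusters=5):
    """Project per-sample bottleneck activations with PCA and cluster the
    resulting embedding.

    activations: array of shape (n_samples, n_features), e.g. flattened
    bottleneck feature maps collected over a dataset. The returned embedding
    and cluster labels can then be visualized, for instance colour-coded by
    each sample's IoU, to expose qualitatively different groups of inputs."""
    embedding = PCA(n_components=n_components).fit_transform(activations)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(embedding)
    return embedding, labels

# Toy usage: 200 samples with 512-dimensional bottleneck activations.
rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 512))
embedding, labels = inspect_bottleneck(activations)
print(embedding.shape, np.bincount(labels))
```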

4 https://gcpds-image-segmentation.readthedocs.io/en/latest/notebooks/02-datasets.html

Table 2: Explainable Image Segmentation in Industry

Category | Domain | Datasets | Metric | Year | Ref.
Remote sensing | Building detection | IAIL [185] | ▷ | 2019 | [184]
Scene understanding | Autonomous driving | SYNTHIA [188], A2D2 [189] | ▷ | 2021 | [190]
Scene understanding | Pedestrian environments | PASCAL VOC 2012 [64], ADE20K [191], Cityscapes [65] | NA* | 2021 | [192]
Scene understanding | Autonomous driving | KITTI [193] | ▷ | 2022 | [194]
Environmental monitoring | Flood detection | Worldfloods [70] | ▷ | 2022 | [61]
Scene understanding/Biometrics | Driving scenes/Face recognition | BDD100k [74], CelebAMask-HQ [195], CelebA [196] | ▷ | 2022 | [80]
Monitoring/Scene understanding | Drones/Food processing | ICG drone dataset, private dataset | ▷ | 2023 | [90]
Monitoring/General applications | Food processing | COCO [89], private dataset | ▷ | 2023 | [81]
Biometrics | Facial emotions | Face recognition dataset [197] | ▷ | 2023 | [198]
Monitoring | Cracks in infrastructure | CrackInfra [199] | ▷ | 2023 | [199]
General applications | Common objects | COCO [89] | ▶ | 2023 | [88]
Scene understanding/General applications | Street scenes/Common objects | Pascal VOC 2012 [64], Cityscapes [65] | ▷ | 2023 | [54]
Scene understanding | Driving scenes | BDD100k [74], BDD-OIA [75] | ▷ | 2023 | [73]
Scene understanding/General applications | Street scenes/Common objects | Cityscapes [65], Pascal VOC [64], COCO [89] | ▶ | 2023 | [200]
General applications | Common objects | COCO [89] | ▷ | 2023 | [92]
General applications | Common objects | Pascal VOC [64] | ▶ | 2023 | [133]
Scene understanding/Remote sensing | Street scenes/Building detection | Cityscapes [65], WHU [95] | ▶ | 2023 | [186]

*The application focuses on introducing explainability to segmentation evaluation, rather than evaluating explainability techniques.
▷: Qualitative XAI evaluation
▶: Quantitative XAI evaluation

5.2.2. Monitoring
Here we review relevant papers that offer explainable segmentation-based monitoring in
proximate environments. In [90], simple gradient [24] saliency maps and SmoothGrad-based
[42] saliencies are implemented for semantic segmentation models to investigate the adversar-
ial attack setting. The experiments are performed on two industry-related cyber-physical system
datasets. A private dataset from CTI FoodTech, a manufacturer of fruit-pitting machines, is used.
In [81], the same private dataset is used for experiments with gradient-free XAI technique, based
on the perturbations of intermediate activation maps.
In [199], the focus is on crack segmentation in critical infrastructures, such as tunnels and
pavements. The U-Net model is used together with Grad-CAM, which is applied at the bottle-
neck, as in [16]. They investigate both simple and complex crack patterns as well as different
backgrounds. Two other papers [201], [183] also investigate the segmentation of different crack
types. However, the proposed XAI techniques are implemented in classification models, and
used for weakly supervised segmentation. These techniques are discussed in the subsequent sec-
tion. In [198], an interpretable Bayesian network is used for facial micro-expression recognition.
The authors prefer these networks over DL models for segmentation, primarily because of their superior causal interpretability when dealing with uncertain information, which makes them strong candidates when uncertain causal inference is involved. The experiments
are performed on the database [197] of face images.
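Several of the works above apply Grad-CAM at a chosen layer of a segmentation network, such as the bottleneck in [16] and [199]. A simplified PyTorch sketch of this idea is given below: the class scores are summed over a target region of the output before backpropagating to the selected layer. This is an illustrative reading of Seg-Grad-CAM rather than the authors' reference implementation; the toy model and region of interest are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def seg_grad_cam(model, target_layer, image, class_idx, region_mask):
    """Grad-CAM for segmentation: backpropagate the class score summed over a
    region of interest, then weight the target layer's activations by their
    spatially averaged gradients."""
    acts, grads = {}, {}
    h_fwd = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h_bwd = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    logits = model(image)                                # (1, C, H, W)
    score = (logits[0, class_idx] * region_mask).sum()   # restrict to the region
    model.zero_grad()
    score.backward()
    h_fwd.remove()
    h_bwd.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # global average pooling of gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.max() + 1e-8)

# Toy usage: a small fully convolutional network whose middle layer stands in
# for the bottleneck of a segmentation model.
bottleneck = nn.Conv2d(8, 8, 3, padding=1)
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      bottleneck, nn.ReLU(),
                      nn.Conv2d(8, 2, 1))
image = torch.randn(1, 3, 64, 64)
roi = torch.zeros(64, 64)
roi[16:48, 16:48] = 1.0                                  # explain this output region
heatmap = seg_grad_cam(model, bottleneck, image, class_idx=1, region_mask=roi)
print(heatmap.shape)                                     # torch.Size([1, 1, 64, 64])
```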

5.2.3. Scene Understanding
Scene understanding is an important area in applications for autonomous vehicles, monitor-
ing of pedestrians and ambient objects, and surveillance. Precise real-time segmentation of road
signs and obstructions is of particular importance. Explainable segmentation can be seen as part
of explainable autonomous driving systems [190], which investigate events, environments, and
engine operations. An explainable variational autoencoder (VAE) model is proposed in [190],
focusing on neuron activations with the use of attention mapping. For the experiments, the
SYNTHIA [188] and A2D2 [189] datasets are used. The results are analyzed both qualitatively
and quantitatively, using the average area under the receiver operating characteristic curve (AUC-
ROC) index. In [194], XAI techniques are employed to investigate pixel-wise road detection for
autonomous vehicles. The experiments are performed on different segmentation models, using
the KITTI [193] road dataset. The problem is formulated as a binary segmentation task, where
the classes are limited to the road and its surroundings. Grad-CAM and saliency maps are used to
generate explanations. Unmanned aerial vehicles can also fall under the category of autonomous
driving systems. In [90], gradient-based XAI techniques are applied to the semantic drone dataset5
from Graz University of Technology.
Automated semantic understanding of pedestrian environments is investigated in [192]. Here
the focus is not on a particular XAI technique, but on introducing some level of explainability to
segmentation evaluation. The paper argues that popular pixel-wise segmentation metrics, such
as IoU or Dice coefficient, do not sufficiently take into account region-based over- and under-
segmentation. Here, over-segmentation refers to cases where the prediction splits a single ground-truth region into a larger number of disjoint regions; for instance, the ground truth contains a single bus, but the model segments it into three disjoint regions. Under-segmentation is the opposite. Pixel-wise metrics do not accurately reflect these differences in region structure as long as a sufficiently large number of matching pixels is segmented in both the ground-truth image and the corresponding prediction.
The use of region-wise measures is proposed as a better way to explain the source of error in
segmentation. The experiments are performed on PASCAL VOC 2012 [64], ADE20K [191],
and Cityscapes [65]. In [202], the focus is on automatic semantic segmentation for sediment
core analysis. To interpret the results, higher segmentation error regions and model prediction
confidence are visualized. Here, the model confidence is defined as prediction probability, and
the model error calculation is based on the normalized categorical cross-entropy.
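A hedged sketch of such a region-wise check, illustrating the over- and under-segmentation distinction discussed above, is given below; it uses connected components to compare how many disjoint regions a class occupies in the ground truth and in the prediction, whereas the region-wise measures proposed in [192] are richer than this count-based illustration.

```python
import numpy as np
from scipy import ndimage

def region_count_report(gt_mask, pred_mask):
    """Compare the number of connected regions of one class in the ground
    truth and in the prediction to flag over- or under-segmentation."""
    _, n_gt = ndimage.label(gt_mask)
    _, n_pred = ndimage.label(pred_mask)
    if n_pred > n_gt:
        verdict = "over-segmentation"    # one object split into several predicted regions
    elif n_pred < n_gt:
        verdict = "under-segmentation"   # several objects merged into one predicted region
    else:
        verdict = "region counts match"
    return f"ground truth: {n_gt} region(s), prediction: {n_pred} region(s) -> {verdict}"

# Toy usage: one bus in the ground truth, predicted as three disjoint segments.
gt = np.zeros((10, 30), dtype=int)
gt[2:8, 2:28] = 1
pred = np.zeros_like(gt)
pred[2:8, 2:10] = 1
pred[2:8, 12:20] = 1
pred[2:8, 22:28] = 1
print(region_count_report(gt, pred))
```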
The authors of [200] propose the Concept Relevance Propagation-based approach L-CRP as
an extension of CRP [203]. By utilizing concept-based explanations, the study seeks to gain
insights into both global and local aspects of explainability. The proposed approach seeks to un-
derstand the contribution of latent concepts to particular detections by identifying them, finding
them in the input space, and evaluating their effect on relevance. Context scores are computed
for different concepts. The experiments are performed on Cityscapes [65], Pascal VOC [64], and
COCO [89] datasets.

5.2.4. General Applications


Some of the XAI-related experiments focus on more general datasets, typically used in evalu-
ating the performance of segmentation models. The COCO [89] dataset has been used as a benchmark in [88] and [81]. In these experiments, it is used with 21 classes of everyday objects, including several

5 http://dronedataset.icg.tugraz.at/

types of vehicles. Both [88] and [81] apply perturbation-based gradient-free methods. Input per-
turbations are used in [88], while feature map perturbations in pre-selected intermediate layers
are used in [81].
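A minimal sketch of input-level occlusion for a segmentation target is given below, assuming a sliding square patch and the drop of the summed class score inside a region of interest as the importance measure; patch size, stride, and fill value are assumptions rather than the settings used in [88].

```python
import numpy as np

def occlusion_saliency(model, image, class_idx, region_mask,
                       patch=8, stride=8, fill=0.0):
    """Slide an occluding patch over the input and record how much the target
    class score inside a region of interest drops.

    model: callable returning per-class score maps of shape (C, H, W)."""
    base = (model(image)[class_idx] * region_mask).sum()
    height, width = image.shape[:2]
    saliency = np.zeros((height, width))
    for y in range(0, height - patch + 1, stride):
        for x in range(0, width - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, ...] = fill
            score = (model(occluded)[class_idx] * region_mask).sum()
            saliency[y:y + patch, x:x + patch] = base - score  # larger drop = more important
    return saliency

# Toy usage with a dummy "model" that derives 2-class score maps from channel means.
def toy_model(img):
    foreground = img.mean(axis=-1)
    return np.stack([1.0 - foreground, foreground])

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))
roi = np.zeros((32, 32))
roi[8:24, 8:24] = 1.0
print(occlusion_saliency(toy_model, image, class_idx=1, region_mask=roi).shape)
```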
Tendency-and-Assignment Explainer (TAX) framework is introduced in [133]. It seeks to
explain: 1) what contributes to the segmentation output; and 2) what contributes to us thinking
so (i.e. the why question). For this, a multi-annotator scenario is considered. The learned
annotator-dependent prototype bank indicates the segmentation tendency, with a particular focus
on uncertain regions. The experimental results on the Pascal VOC [64] dataset demonstrate that
TAX predicts oversegmentation consistent with the annotator tendencies.

5.3. XAI Applications in Self-Supervised and Weakly Supervised Segmentation


Manual image labeling is an expensive operation, especially when pixel-wise labeling is in-
volved. It requires significant time and financial resources, and depending on the dataset being
annotated, may also require particularly narrow expertise. With this in mind, it has been sug-
gested that XAI techniques could be employed for automated labeling, which could also help
reduce some forms of annotation bias.
In [204], a new explainable transformer architecture is proposed for model-inherent inter-
pretability. The proposed model, a Siamese network, is investigated for weakly supervised
segmentation. For enhanced interpretability, model representations are regularized using an
attribute-guided loss function. Higher-layer attention maps are fused and used alongside attribute
features. Qualitative segmentation results are compared with the SIPE technique. However, the
model’s limitation is its inability to incorporate attribute-level ground truth labels. Another appli-
cation for weakly supervised segmentation employs LRP-based classification explanations [201].
These explanations are used to generate pixel-wise binary segmentations, which are then thresh-
olded. The experiments are conducted on two datasets: one for cracks in sewer pipes and another
for cracks in magnetic tiles [205].
In [183], surface crack detection and growth monitoring are investigated as part of structural
health evaluation in infrastructure. Although no specific technique for explainable segmentation
is proposed, explainable classification is used for weakly supervised segmentation, allowing for
the quantification of crack severity. Six post-hoc techniques are implemented: InputXGradient,
LRP, Integrated Gradients, DeepLift, DeepLiftShap, and GradientShap. Additionally, B-cos net-
works and Neural Network Explainer are employed. In [206], Grad-CAM is used for semantic
segmentation. An additional classification model is used with masked inputs, based on the given
class. The classifier is trained on all the masked images across all classes. Explainable classi-
fication can also be used to enhance the data efficiency of segmentation models. For instance,
in [207], Grad-CAM is employed to extract data-efficient features from the classification model,
which are then used for segmentation. The results indicate that this approach generalizes across
different segmentation methods.
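A common thread in these weakly supervised applications is turning a classification explanation into a weak segmentation label. A minimal sketch is given below, assuming a precomputed class activation map and a simple fixed threshold; the thresholding and post-processing in [201], [206], and [207] differ in detail.

```python
import numpy as np

def cam_to_pseudo_mask(cam, threshold=0.4):
    """Turn a class activation map into a binary pseudo-label for weakly
    supervised segmentation by normalizing and thresholding it."""
    cam = np.maximum(cam, 0.0)
    cam = cam / (cam.max() + 1e-8)
    return (cam >= threshold).astype(np.uint8)

# Toy usage: a blob-shaped activation map becomes a pseudo ground-truth mask.
yy, xx = np.mgrid[0:64, 0:64]
cam = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 10.0 ** 2))
pseudo_mask = cam_to_pseudo_mask(cam)
print(pseudo_mask.sum(), "foreground pixels")
```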

6. Discussion

6.1. Open Issues


Plenty of unresolved challenges remain in explainable semantic segmentation, most of which
are also applicable to image classification tasks. Below is a non-exhaustive list of these chal-
lenges:

• Evaluation metrics for XAI. Most of the literature on XAI in image classification focuses
on introducing new explainability techniques and their modifications, rather than propos-
ing new evaluative frameworks or benchmark datasets. This tendency is even more visible
in explainable semantic segmentation. Currently, there are no papers dedicated solely to
evaluating XAI results in image segmentation. The investigation of XAI metrics remains
limited to the experimental results sections, and only in those few cases where quantitative
evaluation is used. There is no consensus on which evaluation metrics are most crucial
for capturing the key aspects of explainability, largely due to the difficulty in formalizing
explainability-related concepts. A better theoretical understanding of the problem should
inform the creation of evaluative XAI metrics and benchmarks. Such foundations would
likely result in more efficient explainable segmentation methods that are better adapted to
the problem at hand.
• Safety and robustness of XAI methods. With the rapid deployment of DL models in
medical, military, and industrial settings, XAI techniques are set to play an even more im-
portant role. Their primary use is driven by the need to determine if the model is reliable
and trustworthy. However, similar questions can also be raised about the XAI techniques
themselves. It is important to investigate their vulnerabilities and loopholes. Both deploy-
ers and end-users need to know whether they are secure against intentional attacks directed
at XAI techniques or the model. Even if there is no direct threat, the robustness of each
specific XAI method needs to be investigated on a case-by-case basis.
Just like classification models, semantic segmentation models are susceptible to adver-
sarial attacks. Different attack methods have been proposed [208], [209], [210]. When
discussing adversarial attacks, it is common to focus on the model’s output as the primary
target. However, it is also possible to attack the output’s explanation saliency while leaving
both the input and the output perceptibly unchanged. Such attacks have been introduced
and investigated in the context of image classification [211]. It has also been demonstrated
that these second-level attacks can be extended to image segmentation [90]. More research
is needed to find the best ways to combat them, especially since new adversarial attacks
are constantly being developed, and comprehensive safety guarantees are challenging to
ensure. Systematic investigations need to be undertaken for both white-box attacks, where
the attacked model is known to the attacker, and black-box attacks, where it is unknown.
Similar investigations into the robustness of interpretable segmentation could contribute to
the overall security of AI systems.
Adversarial examples are typically not part of the training and testing datasets. This omis-
sion can lead to vulnerabilities in deployed models. Another critical issue is the presence
of biases. When the most salient regions of the explanation map fall outside the bound-
aries of the object of interest, this might signal not just a misguided prediction but also the
potential presence of adversarial influences [15]. Natural adversarial examples [212] and
their influence on XAI in segmentation could be investigated as well.

• XAI for video segmentation. As semantic scene segmentation is not limited to 2D im-
ages, new interpretability techniques could be investigated for video data, where temporal
semantic segmentation is carried out. Video object segmentation requires significantly
more computational resources. To our knowledge, there are currently no studies investi-
gating explainable image segmentation in a dynamic setting. The nature of dynamic scenes
could introduce novel challenges not previously encountered in 2D segmentation contexts.
For instance, one would need to add an additional temporal explanation axis to account
for differences in interpretability maps across video frames. This task could be further
extended to real-time semantic segmentation by focusing on ways to reduce the latency of
the generated explanations.

6.2. Future Directions


Given that most literature primarily focuses on qualitative metrics, we aim to highlight the
need for a well-defined benchmark and evaluation strategy for XAI methods in image segmen-
tation. To our knowledge, there are currently no studies on evaluation metrics or benchmarks
specifically for XAI methods in this area. Moreover, research focusing on the formal aspects of
quantitative metrics in XAI is limited. We see mechanistic interpretability [213] as a promising
research area. This approach seeks to reverse-engineer how models function. As far as we are
aware, there have been no significant contributions in formal explainability [214] or argumenta-
tive XAI [215] within the context of image segmentation. Additionally, there has not been much
research [216] into the interpretability of transformers for segmentation, especially compared to
convolutional networks.

• Failure Modes. This area is related to evaluation metrics. However, it covers problematic
areas that could not be identified by the commonly used metrics. Specifically, XAI could
be used to identify and mitigate bias in segmentation models. A systematic analysis of
failure cases and potential failure modes could better determine the scope of applicability
for XAI methods. Several studies [38] have critically evaluated different groups of ex-
plainability techniques in classification. However, a similar investigation has not yet been
conducted in image segmentation.
• Neural architecture search. Neural architecture search (NAS) explores automating neu-
ral architecture designs. XAI techniques can be applied in NAS in at least two distinct
ways. First, existing XAI methods can be incorporated into NAS algorithms to improve
their performance. For example, in [217], an explainable CAM technique is integrated
with the NAS algorithm to avoid fully training submodels. Second, NAS algorithms can
include interpretability aspects as one of the metrics to be optimized in multi-objective op-
timization. In [218], a surrogate interpretability metric has been used for multi-objective
optimization in image classification. However, currently, no similar approaches exist for
semantic segmentation tasks.
• Continual Learning. Continual learning (CL) refers to the research area that investigates
techniques allowing models to learn new tasks without forgetting the previously learned
ones. This strong tendency for DL models to forget previously learned information upon
acquiring new knowledge is commonly described as catastrophic forgetting. More effi-
cient solutions to CL problems would allow the models to be used more resourcefully,
without retraining them from scratch when new data arrives. The intersection of XAI and
CL presents an interesting area for investigation. XAI methods can be employed in CL
to: 1) improve the model’s performance; 2) better understand and explain the model’s pre-
dictions; and 3) investigate the phenomenon of catastrophic forgetting. The exploration of
XAI and CL could also lead to improved model understanding when either a shift in data
distribution or concept drift occurs.

7. Conclusion

This survey presents a comprehensive view of the field of XAI in image segmentation. Our
goal has been twofold: first, to provide an up-to-date literature review of various types of in-
terpretability methods applied in semantic segmentation; and second, to clarify conceptual mis-
understandings by proposing a method-centered taxonomy for image segmentation and general
frameworks for different types of interpretability techniques. To these ends, we have catego-
rized the methods into five major subgroups: prototype-based, gradient-based, perturbation-
based, counterfactual methods, and architecture-based techniques. Based on the surveyed lit-
erature on explainable image segmentation, it is evident that most of the methods focus on local
explanations and rely on qualitative evaluation. We hope this work can benefit computer vi-
sion researchers by presenting the landscape of XAI in image segmentation, delineating clearer
boundaries between existing methods, and informing the development of new interpretability
techniques.

References
[1] T. Miller, “Explanation in artificial intelligence: Insights from the social sciences,” Artificial Intelligence, vol.
267, pp. 1–38, 2019.
[2] C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable
models instead,” Nature Machine Intelligence, vol. 1, no. 5, pp. 206–215, 2019.
[3] R. Roscher, B. Bohn, M. F. Duarte, and J. Garcke, “Explainable machine learning for scientific insights and
discoveries,” IEEE Access, vol. 8, pp. 42 200–42 216, 2020.
[4] E. H. Shortliffe and B. G. Buchanan, “A model of inexact reasoning in medicine,” Mathematical Biosciences,
vol. 23, no. 3-4, pp. 351–379, 1975.
[5] W. Swartout, C. Paris, and J. Moore, “Explanations in knowledge systems: Design for explainable expert systems,”
IEEE Expert, vol. 6, no. 3, pp. 58–64, 1991.
[6] J. H. Friedman, “Greedy function approximation: A gradient boosting machine,” Annals of Statistics, pp. 1189–
1232, 2001.
[7] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,”
in Proceedings of the Advances in Neural Information Processing Systems, vol. 25, 2012, pp. 1–9.
[8] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané, “Concrete problems in AI safety,”
arXiv preprint arXiv:1606.06565, pp. 1–29, 2016.
[9] N. Maslej, L. Fattorini, E. Brynjolfsson, J. Etchemendy, K. Ligett, T. Lyons, J. Manyika, H. Ngo, J. C. Niebles,
V. Parli et al., “Artificial Intelligence Index Report 2023,” arXiv preprint arXiv:2310.03715, pp. 1–386, 2023.
[10] Morning Consult, “IBM global AI adoption index - enterprise report,” 2023, Available Online
at: https://filecache.mediaroom.com/mr5mr ibmspgi/179414/download/IBM%20Global%20AI%20Adoption%
20Index%20Report%20Dec.%202023.pdf/, Last Accessed on April 24, 2024.
[11] C. Curtis, N. Gillespie, and S. Lockey, “AI-deploying organizations are key to addressing ‘perfect storm’ of AI
risks,” AI and Ethics, vol. 3, no. 1, pp. 145–153, 2023.
[12] European Commission, “Regulation of the European Parliament and of the Council laying down harmonised rules on artifi-
cial intelligence (artificial intelligence act) and amending certain union legislative acts,” 2021, Available Online at:
https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=celex%3A52021PC0206, Last Accessed on April 22, 2024.
[13] The White House, “Executive order on the safe, secure, and trustworthy development and use of artificial
intelligence,” 2023, Available Online at: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/
10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/, Last
Accessed on April 22, 2024.
[14] K. Wickstrøm, M. Kampffmeyer, and R. Jenssen, “Uncertainty and interpretability in convolutional neural net-
works for semantic segmentation of colorectal polyps,” Medical Image Analysis, vol. 60, p. 101619, 2020.
[15] L. Hoyer, M. Munoz, P. Katiyar, A. Khoreva, and V. Fischer, “Grid saliency for context explanations of semantic
segmentation,” in Proceedings of the Advances in Neural Information Processing Systems, vol. 32, 2019, pp.
1–12.
[16] K. Vinogradova, A. Dibrov, and G. Myers, “Towards interpretable semantic segmentation via gradient-weighted
class activation mapping (student abstract),” in Proceedings of the AAAI Conference on Artificial Intelligence,
vol. 34, no. 10, 2020, pp. 13 943–13 944.
[17] D. Feng, C. Haase-Schütz, L. Rosenbaum, H. Hertlein, C. Glaeser, F. Timm, W. Wiesbeck, and K. Dietmayer,
“Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and
challenges,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 3, pp. 1341–1360, 2020.
[18] S. Asgari Taghanaki, K. Abhishek, J. P. Cohen, J. Cohen-Adad, and G. Hamarneh, “Deep semantic segmentation
of natural and medical images: A review,” Artificial Intelligence Review, vol. 54, pp. 137–178, 2021.
[19] K. Vinogradova, “Explainable artificial intelligence for image segmentation and for estimation of optical aberra-
tions,” Ph.D. dissertation, Dresden University of Technology, Germany, 2023.
[20] S. Mullan, “Deep learning and explainable ai in medical image segmentation,” Ph.D. dissertation, The University
of Iowa, 2023.
[21] S. N. Hasany, F. Mériaudeau, and C. Petitjean, “Post-hoc XAI in medical image segmentation: The journey thus
far,” in Proceedings of the Medical Imaging with Deep Learning, 2024, pp. 1–17.
[22] C. Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Independently
published, 2022.
[23] Y. Zhang, P. Tiňo, A. Leonardis, and K. Tang, “A survey on neural network interpretability,” IEEE Transactions
on Emerging Topics in Computational Intelligence, vol. 5, no. 5, pp. 726–742, 2021.
[24] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep inside convolutional networks: Visualising image classifica-
tion models and saliency maps,” arXiv preprint arXiv:1312.6034, pp. 1–8, 2013.
[25] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-cam: Visual explanations
from deep networks via gradient-based localization,” in Proceedings of the IEEE International Conference on
Computer Vision, 2017, pp. 618–626.
[26] T. Koker, F. Mireshghallah, T. Titcombe, and G. Kaissis, “U-noise: Learnable noise masks for interpretable image
segmentation,” in 2021 IEEE International Conference on Image Processing, 2021, pp. 394–398.
[27] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Proceedings of the Euro-
pean Conference on Computer Vision, 2014, pp. 818–833.
[28] M. T. Ribeiro, S. Singh, and C. Guestrin, “”why should I trust you?” explaining the predictions of any classifier,”
in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016,
pp. 1135–1144.
[29] S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” in Proceedings of the
Advances in Neural Information Processing Systems, vol. 30, 2017, pp. 4768—-4777.
[30] V. Petsiuk, A. Das, and K. Saenko, “RISE: Randomized input sampling for explanation of black-box models,”
arXiv preprint arXiv:1806.07421, pp. 1–17, 2018.
[31] D. Erhan, Y. Bengio, A. Courville, and P. Vincent, “Visualizing higher-layer features of a deep network,” Univer-
sity of Montreal, Tech. Rep. 1341, 2009.
[32] C. Olah, A. Mordvintsev, and L. Schubert, “Feature visualization,” Distill, vol. 2, no. 11, p. e7, 2017.
[33] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,”
in Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Inter-
vention, 2015, pp. 234–241.
[34] V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A deep convolutional encoder-decoder architecture
for image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp.
2481–2495, 2017.
[35] R. Strudel, R. Garcia, I. Laptev, and C. Schmid, “Segmenter: Transformer for semantic segmentation,” in Pro-
ceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7262–7272.
[36] J. Chen, Y. Lu, Q. Yu, X. Luo, E. Adeli, Y. Wang, L. Lu, A. L. Yuille, and Y. Zhou, “TransUNet: Transformers
make strong encoders for medical image segmentation,” arXiv preprint arXiv:2102.04306, pp. 1–13, 2021.
[37] S. S. Kim, N. Meister, V. V. Ramaswamy, R. Fong, and O. Russakovsky, “Hive: evaluating the human inter-
pretability of visual explanations,” in Proceedings of the European Conference on Computer Vision, 2022, pp.
280–298.
[38] J. Adebayo, J. Gilmer, M. Muelly, I. Goodfellow, M. Hardt, and B. Kim, “Sanity checks for saliency maps,” in
Proceedings of the Advances in Advances in Neural Information Processing Systems, vol. 31, 2018, pp. 1–11.
[39] P.-J. Kindermans, S. Hooker, J. Adebayo, M. Alber, K. T. Schütt, S. Dähne, D. Erhan, and B. Kim, “The (un)
reliability of saliency methods,” Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, pp.
267–280, 2019.
[40] A. Saporta, X. Gui, A. Agrawal, A. Pareek, S. Q. Truong, C. D. Nguyen, V.-D. Ngo, J. Seekins, F. G. Blankenberg,
A. Y. Ng et al., “Benchmarking saliency methods for chest x-ray interpretation,” Nature Machine Intelligence,
vol. 4, no. 10, pp. 867–878, 2022.
[41] B. Bilodeau, N. Jaques, P. W. Koh, and B. Kim, “Impossibility theorems for feature attribution,” arXiv preprint
arXiv:2212.11870, pp. 1–38, 2022.
[42] D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg, “Smoothgrad: removing noise by adding noise,”
arXiv preprint arXiv:1706.03825, pp. 1–10, 2017.
[43] M. Ancona, E. Ceolini, C. Öztireli, and M. Gross, “Towards better understanding of gradient-based attribution
methods for deep neural networks,” arXiv preprint arXiv:1711.06104, pp. 1–16, 2017.
[44] J. Adebayo, M. Muelly, H. Abelson, and B. Kim, “Post hoc explanations may be ineffective for detecting unknown
spurious correlation,” in Proceedings of the International Conference on Learning Representations, 2021, pp. 1–
13.
[45] A. Atrey, K. Clary, and D. Jensen, “Exploratory not explanatory: Counterfactual analysis of saliency maps for
deep reinforcement learning,” arXiv preprint arXiv:1912.05743, pp. 1–23, 2019.
[46] G. Nguyen, D. Kim, and A. Nguyen, “The effectiveness of feature attribution methods and its correlation with
automatic evaluation scores,” in Proceedings of the Advances in Neural Information Processing Systems, vol. 34,
2021, pp. 26 422–26 436.
[47] Y. Zhou, S. Booth, M. T. Ribeiro, and J. Shah, “Do feature attribution methods correctly attribute features?” in
Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 9, 2022, pp. 9623–9633.
[48] R. Geirhos, R. S. Zimmermann, B. Bilodeau, W. Brendel, and B. Kim, “Don’t trust your eyes: on the (un)
reliability of feature visualizations,” arXiv preprint arXiv:2306.04719, pp. 1–32, 2023.
[49] A. B. Arrieta, N. Dı́az-Rodrı́guez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcı́a, S. Gil-López,
D. Molina, R. Benjamins, R. Chatila, and F. Herrera, “Explainable artificial intelligence (XAI): Concepts, tax-
onomies, opportunities and challenges toward responsible AI,” Information Fusion, vol. 58, pp. 82–115, 2020.
[50] F.-L. Fan, J. Xiong, M. Li, and G. Wang, “On interpretability of artificial neural networks: A survey,” IEEE
Transactions on Radiation and Plasma Medical Sciences, vol. 5, no. 6, pp. 741–760, 2021.
[51] M. Graziani, L. Dutkiewicz, D. Calvaresi, J. P. Amorim, K. Yordanova, M. Vered, R. Nair, P. H. Abreu, T. Blanke,
V. Pulignano, J. O. Prior, L. Lauwaert, W. Reijers, A. Depeursinge, V. Andrearczyk, and H. Müller, “A global tax-
onomy of interpretable AI: Unifying the terminology for the technical and social sciences,” Artificial Intelligence
Review, vol. 56, no. 4, pp. 3473–3504, 2023.
[52] T. Speith, “A review of taxonomies of explainable artificial intelligence (XAI) methods,” in Proceedings of the
2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 2239–2250.
[53] G. Schwalbe and B. Finzel, “A comprehensive taxonomy for explainable artificial intelligence: A systematic
survey of surveys on methods and concepts,” Data Mining and Knowledge Discovery, pp. 1–59, 2023.
[54] M. Sacha, D. Rymarczyk, Ł. Struski, J. Tabor, and B. Zieliński, “ProtoSeg: Interpretable semantic segmentation
with prototypical parts,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision,
2023, pp. 1481–1492.
[55] A. Shahroudnejad, “A survey on understanding, visualizations, and explanation of deep neural networks,” arXiv
preprint arXiv:2102.01792, 2021.
[56] M. Chromik and M. Schuessler, “A taxonomy for human subject evaluation of black-box explanations in XAI,”
ExSS-ATEC@IUI, vol. 1, pp. 1–7, 2020.
[57] M. Biehl, B. Hammer, and T. Villmann, “Prototype-based models in machine learning,” Wiley Interdisciplinary
Reviews: Cognitive Science, vol. 7, no. 2, pp. 92–111, 2016.
[58] C. Chen, O. Li, D. Tao, A. Barnett, C. Rudin, and J. K. Su, “This looks like that: Deep learning for interpretable
image recognition,” in Proceedings of the Advances in Neural Information Processing Systems, vol. 32, 2019, pp.
8930–8941.
[59] J. Donnelly, A. J. Barnett, and C. Chen, “Deformable ProtoPNet: An interpretable image classifier using de-
formable prototypes,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,
2022, pp. 10 265–10 275.
[60] D. Rymarczyk, Ł. Struski, M. Górszczak, K. Lewandowska, J. Tabor, and B. Zieliński, “Interpretable image
classification with differentiable prototypes assignment,” in Proceedings of the European Conference on Computer
Vision, 2022, pp. 351–368.
[61] Z. Zhang, P. Angelov, E. Soares, N. Longepe, and P. P. Mathieu, “An interpretable deep semantic segmentation
method for Earth observation,” in Proceedings of the IEEE International Conference on Intelligent Systems, 2022,
pp. 1–8.
[62] H. Jeffreys, The Theory of Probability. OUP Oxford, 1998.
[63] J. Hilton, N. Cammarata, S. Carter, G. Goh, and C. Olah, “Understanding RL vision,” Distill, vol. 5, no. 11, p.
e29, 2020.
[64] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The PASCAL visual object classes
(VOC) challenge,” International Journal of Computer Vision, vol. 88, pp. 303–338, 2010.
[65] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele,
“The Cityscapes dataset for semantic urban scene understanding,” in Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, 2016, pp. 3213–3223.
[66] “Segmentation of neuronal structures in EM stacks challenge - ISBI 2012 — imagej.net,” https://imagej.net/
events/isbi-2012-segmentation-challenge, [Accessed 07-04-2024].
[67] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic image segmentation
with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2017.
[68] P. Angelov and E. Soares, “Towards explainable deep neural networks (xDNN),” Neural Networks, vol. 130, pp.
185–194, 2020.
[69] D. Sculley, “Web-scale k-means clustering,” in Proceedings of the 19th International Conference on World Wide
Web, 2010, pp. 1177–1178.
[70] G. Mateo-Garcia, J. Veitch-Michaelis, L. Smith, S. V. Oprea, G. Schumann, Y. Gal, A. G. Baydin, and D. Backes,
“Towards global flood mapping onboard low cost satellites with machine learning,” Scientific Reports, vol. 11,
no. 1, p. 7249, 2021.
[71] S. Wachter, B. Mittelstadt, and C. Russell, “Counterfactual explanations without opening the black box: Auto-
mated decisions and the GDPR,” Harvard Journal of Law & Technology, vol. 31, p. 841, 2017.
[72] R. Guidotti, “Counterfactual explanations and how to find them: Literature review and benchmarking,” Data
Mining and Knowledge Discovery, pp. 1–55, 2022.
[73] M. Zemni, M. Chen, É. Zablocki, H. Ben-Younes, P. Pérez, and M. Cord, “OCTET: Object-aware counterfactual
explanations,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023,
pp. 15 062–15 071.
[74] F. Yu, H. Chen, X. Wang, W. Xian, Y. Chen, F. Liu, V. Madhavan, and T. Darrell, “BDD100K: A diverse driving
dataset for heterogeneous multitask learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, 2020, pp. 2636–2645.
[75] Y. Xu, X. Yang, L. Gong, H.-C. Lin, T.-Y. Wu, Y. Li, and N. Vasconcelos, “Explainable object-induced action
decision for autonomous vehicles,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
Recognition, 2020, pp. 9523–9532.
[76] M. Zemni, M. Chen, E. Zablocki, H. Ben-Younes, P. Pérez, and M. Cord, “OCTET: Object-aware counterfactual
explanations,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023,
pp. 15 062–15 071.
[77] D. Singh, A. Somani, A. Horsch, and D. K. Prasad, “Counterfactual explainable gastrointestinal and colonoscopy
image segmentation,” in Proceedings of the IEEE 19th International Symposium on Biomedical Imaging, 2022,
pp. 1–5.
[78] D. Jha, P. H. Smedsrud, M. A. Riegler, P. Halvorsen, T. de Lange, D. Johansen, and H. D. Johansen, “Kvasir-
seg: A segmented polyp dataset,” in Proceedings of the MultiMedia Modeling Conference, Daejeon, South Korea,
January 5–8, Part II 26, 2020, pp. 451–462.
[79] D. Jha, S. Ali, K. Emanuelsen, S. A. Hicks, V. Thambawita, E. Garcia-Ceja, M. A. Riegler, T. de Lange, P. T.
Schmidt, H. D. Johansen et al., “Kvasir-instrument: Diagnostic and therapeutic tool segmentation dataset in
gastrointestinal endoscopy,” in Proceedings of the MultiMedia Modeling Conference, Prague, Czech Republic,
June 22–24, Part II 27, 2021, pp. 218–229.
[80] P. Jacob, É. Zablocki, H. Ben-Younes, M. Chen, P. Pérez, and M. Cord, “STEEX: Steering counterfactual expla-
nations with semantics,” in Proceedings of the European Conference on Computer Vision, 2022, pp. 387–403.
[81] R. Gipiškis, D. Chiaro, D. Annunziata, and F. Piccialli, “Ablation studies in activation maps for explainable
semantic segmentation in Industry 4.0,” in Proceedings of the IEEE EUROCON, 2023, pp. 36–41.
[82] S. Desai and H. G. Ramaswamy, “Ablation-CAM: Visual explanations for deep convolutional network via
gradient-free localization,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer
Vision, 2020, pp. 983–991.
[83] Y. LeCun, “The MNIST database of handwritten digits,” http://yann.lecun.com/exdb/mnist/, 1998.
[84] A. Wan, D. Ho, Y. Song, H. Tillman, S. A. Bargal, and J. E. Gonzalez, “SegNBDT: Visual decision rules for
segmentation,” arXiv preprint arXiv:2006.06868, pp. 1–15, 2020.
[85] R. Mottaghi, X. Chen, X. Liu, N.-G. Cho, S.-W. Lee, S. Fidler, R. Urtasun, and A. Yuille, “The role of context
for object detection and semantic segmentation in the wild,” in Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, 2014, pp. 891–898.
[86] K. Gong, X. Liang, D. Zhang, X. Shen, and L. Lin, “Look into person: Self-supervised structure-sensitive learning
and a new benchmark for human parsing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2017, pp. 932–940.
[87] P. Dardouillet, A. Benoit, E. Amri, P. Bolon, D. Dubucq, and A. Crédoz, “Explainability of image semantic
segmentation through SHAP values,” in Proceedings of the ICPR Workshops of the International Conference on
Pattern Recognition Workshops, 2022, pp. 188–202.
[88] R. Gipiškis and O. Kurasova, “Occlusion-based approach for interpretable semantic segmentation,” in Proceed-
ings of the Iberian Conference on Information Systems and Technologies (CISTI), 2023, pp. 1–6.
[89] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO:
Common objects in context,” in Proceedings of the ECCV 2014, Zurich, Switzerland, September 6-12, Part V 13,
2014, pp. 740–755.
[90] R. Gipiškis, D. Chiaro, M. Preziosi, E. Prezioso, and F. Piccialli, “The impact of adversarial attacks on inter-
pretable semantic segmentation in cyber–physical systems,” IEEE Systems Journal, pp. 5327–5334, 2023.
[91] J. Chiu, C. C. Li, and O. J. Mengshoel, “Potential applications of deep learning in automatic rock joint trace
mapping in a rock mass,” in Proceedings of the IOP Conference, vol. 1124, no. 1, 2023, pp. 1–8.
[92] S. N. Hasany, C. Petitjean, and F. Mériaudeau, “Seg-XRes-CAM: Explaining spatially local regions in image
segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023,
pp. 3732–3737.
[93] R. L. Draelos and L. Carin, “Use HiResCAM instead of Grad-CAM for faithful explanations of convolutional
neural networks,” arXiv preprint arXiv:2011.08891, pp. 1–20, 2020.
[94] A. K. Gizzini, M. Shukor, and A. J. Ghandour, “Extending CAM-based XAI methods for remote sensing imagery
segmentation,” arXiv preprint arXiv:2310.01837, pp. 1–7, 2023.
[95] S. Ji, S. Wei, and M. Lu, “Fully convolutional networks for multisource building extraction from an open aerial
and satellite imagery data set,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 1, pp. 574–586,
2018.
[96] C. Schorr, P. Goodarzi, F. Chen, and T. Dahmen, “Neuroscope: An explainable ai toolbox for semantic segmenta-
tion and image classification of convolutional neural nets,” Applied Sciences, vol. 11, no. 5, p. 2199, 2021.
[97] A. Kapishnikov, T. Bolukbasi, F. Viégas, and M. Terry, “XRAI: Better attributions through regions,” in Proceed-
ings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 4948–4957.
[98] K. Schulze, F. Peppert, C. Schütte, and V. Sunkara, “Chimeric U-net–modifying the standard U-net towards
explainability,” bioRxiv, pp. 1–12, 2022.
[99] M. Losch, M. Fritz, and B. Schiele, “Semantic bottlenecks: Quantifying and improving inspectability of deep
representations,” International Journal of Computer Vision, vol. 129, pp. 3136–3153, 2021.
[100] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba, “Network dissection: Quantifying interpretability of deep
visual representations,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
2017, pp. 6541–6549.
[101] A. Santamaria-Pang, J. Kubricht, A. Chowdhury, C. Bhushan, and P. Tu, “Towards emergent language symbolic
semantic segmentation and model interpretability,” in Proceedings of the MICCAI Conference, Lima, Peru, Octo-
ber 4–8, Part I 23, 2020, pp. 326–334.
[102] R. R. Hoffman, S. T. Mueller, G. Klein, and J. Litman, “Metrics for explainable AI: Challenges and prospects,”
arXiv preprint arXiv:1812.04608, pp. 1–50, 2018.
[103] M. Graziani, I. Palatnik de Sousa, M. M. Vellasco, E. Costa da Silva, H. Müller, and V. Andrearczyk, “Sharpening
local interpretable model-agnostic explanations for histopathology: Improved understandability and reliability,”
in Proceedings of the MICCAI Conference, Strasbourg, France, Part III 24, 2021, pp. 540–549.
[104] C.-K. Yeh, C.-Y. Hsieh, A. Suggala, D. I. Inouye, and P. K. Ravikumar, “On the (in)fidelity and sensitivity of
explanations,” in Proceedings of the Advances in Neural Information Processing Systems, vol. 32, 2019, pp. 1–12.
[105] N. Kokhlikyan, V. Miglani, M. Martin, E. Wang, B. Alsallakh, J. Reynolds, A. Melnikov, N. Kliushkina, C. Araya,
S. Yan et al., “Captum: A unified and generic model interpretability library for PyTorch,” 2020, pp. 1–11.
[106] J. Colin, T. Fel, R. Cadène, and T. Serre, “What I cannot predict, I do not understand: A human-centered evaluation
framework for explainability methods,” in Proceedings of the Advances in Neural Information Processing Systems,
vol. 35, 2022, pp. 2832–2845.
[107] R. Tomsett, D. Harborne, S. Chakraborty, P. Gurram, and A. Preece, “Sanity checks for saliency metrics,” in
Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 04, 2020, pp. 6021–6029.
[108] D. Vázquez, J. Bernal, F. J. Sánchez, G. Fernández-Esparrach, A. M. López, A. Romero, M. Drozdzal,
A. Courville et al., “A benchmark for endoluminal scene segmentation of colonoscopy images,” Journal of Health-
care Engineering, pp. 1–9, 2017.
[109] P. Bilic, P. Christ, H. B. Li, E. Vorontsov, A. Ben-Cohen, G. Kaissis, A. Szeskin, C. Jacobs, G. E. H. Mamani,
G. Chartrand et al., “The liver tumor segmentation benchmark (LiTS),” Medical Image Analysis, vol. 84, p.
102680, 2023.
[110] V. Couteaux, O. Nempont, G. Pizaine, and I. Bloch, “Towards interpretability of segmentation networks by ana-
lyzing DeepDreams,” in Proceedings of the Second International Workshop, iMIMIC 2019, and 9th International
Workshop, ML-CDS 2019, Held in Conjunction with MICCAI Conference, Shenzhen, China, 2019, pp. 56–63.
[111] P. Radau, Y. Lu, K. Connelly, G. Paul, A. J. Dick, and G. A. Wright, “Evaluation framework for algorithms
segmenting short axis cardiac MRI,” The MIDAS Journal, 2009.
[112] O. Bernard, A. Lalande, C. Zotti, F. Cervenansky, X. Yang, P.-A. Heng, I. Cetin, K. Lekadir, O. Camara, M. A. G.
Ballester et al., “Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis:
Is the problem solved?” IEEE Transactions on Medical Imaging, vol. 37, no. 11, pp. 2514–2525, 2018.
[113] J. Sun, F. Darbehani, M. Zaidi, and B. Wang, “SAUNet: Shape attentive U-net for interpretable medical image
segmentation,” in Proceedings of the MICCAI Conference, Lima, Peru, Part IV 23, 2020, pp. 797–806.
[114] N. Codella, V. Rotemberg, P. Tschandl, M. E. Celebi, S. Dusza, D. Gutman, B. Helba, A. Kalloo, K. Liopyris,
M. Marchetti et al., “Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the interna-
tional skin imaging collaboration (ISIC),” arXiv preprint arXiv:1902.03368, pp. 1–12, 2019.
[115] R. Gu, G. Wang, T. Song, R. Huang, M. Aertsen, J. Deprest, S. Ourselin, T. Vercauteren, and S. Zhang, “CA-
Net: Comprehensive attention convolutional neural networks for explainable medical image segmentation,” IEEE
Transactions on Medical Imaging, vol. 40, no. 2, pp. 699–711, 2020.
[116] A. Janik, J. Dodd, G. Ifrim, K. Sankaran, and K. Curran, “Interpretability of a deep learning model in the appli-
cation of cardiac MRI segmentation with an ACDC challenge dataset,” in Proceedings of SPIE Medical Imaging
Conference, vol. 11596, 2021, pp. 861–872.
[117] A. Ahmed and L. A. Ali, “Explainable medical image segmentation via generative adversarial networks and layer-
wise relevance propagation,” arXiv preprint arXiv:2111.01665, pp. 1–3, 2021.
[118] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, Y. Burren, N. Porz, J. Slotboom,
R. Wiest et al., “The multimodal brain tumor image segmentation benchmark (BRATS),” IEEE Transactions on
Medical Imaging, vol. 34, no. 10, pp. 1993–2024, 2014.
[119] H. Saleem, A. R. Shahid, and B. Raza, “Visual interpretability in 3d brain tumor segmentation network,” Com-
puters in Biology and Medicine, vol. 133, p. 104410, 2021.
[120] M. Trokielewicz, A. Czajka, and P. Maciejewicz, “Post-mortem iris recognition resistant to biological eye decay
processes,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020, pp.
2307–2315.
[121] A. Kuehlkamp, A. Boyd, A. Czajka, K. Bowyer, P. Flynn, D. Chute, and E. Benjamin, “Interpretable deep
learning-based forensic iris segmentation and recognition,” in Proceedings of the IEEE/CVF Winter Conference
on Applications of Computer Vision, 2022, pp. 359–368.
[122] P. Tschandl, C. Rosendahl, and H. Kittler, “The HAM10000 dataset, a large collection of multi-source dermato-
scopic images of common pigmented skin lesions,” Scientific Data, vol. 5, no. 1, pp. 1–9, 2018.
[123] A. E. Kavur, N. S. Gezer, M. Barış, S. Aslan, P.-H. Conze, V. Groza, D. D. Pham, S. Chatterjee, P. Ernst, S. Özkan
et al., “CHAOS challenge-combined (CT-MR) healthy abdominal organ segmentation,” Medical Image Analysis,
vol. 69, p. 101950, 2021.
[124] M. Karri, C. S. R. Annavarapu, and U. R. Acharya, “Explainable multi-module semantic guided attention based
network for medical image segmentation,” Computers in Biology and Medicine, vol. 151, p. 106231, 2022.
[125] F. Ambellan, A. Tack, M. Ehlke, and S. Zachow, “Automated segmentation of knee bone and cartilage combining
statistical shape knowledge and convolutional neural networks: Data from the Osteoarthritis Initiative,” Medical
Image Analysis, vol. 52, pp. 109–118, 2019.
[126] R. A. Zeineldin, M. E. Karar, Z. Elshaer, J. Coburger, C. R. Wirtz, O. Burgert, and F. Mathis-Ullrich, “Explain-
ability of deep neural networks for MRI analysis of brain tumors,” International Journal of Computer Assisted
Radiology and Surgery, vol. 17, no. 9, pp. 1673–1683, 2022.
[127] S. Chatterjee, A. Das, C. Mandal, B. Mukhopadhyay, M. Vipinraj, A. Shukla, R. Nagaraja Rao, C. Sarasaen,
O. Speck, and A. Nürnberger, “TorchEsegeta: Framework for interpretability and explainability of image-based
deep learning models,” Applied Sciences, vol. 12, no. 4, p. 1834, 2022.
[128] F. Bardozzo, T. Collins, A. Forgione, A. Hostettler, and R. Tagliaferri, “StaSiS-Net: A stacked and siamese
disparity estimation network for depth reconstruction in modern 3D laparoscopy,” Medical Image Analysis, vol. 77,
p. 102380, 2022.
[129] F. Bardozzo, M. D. Priscoli, T. Collins, A. Forgione, A. Hostettler, and R. Tagliaferri, “Cross X-AI: Explain-
able semantic segmentation of laparoscopic images in relation to depth estimation,” in Proceedings of the IEEE
International Joint Conference on Neural Networks, 2022, pp. 1–8.
[130] W. Al-Dhabyani, M. Gomaa, H. Khaled, and A. Fahmy, “Dataset of breast ultrasound images,” Data in Brief,
vol. 28, p. 104863, 2020.
[131] J. Wang, Y. Zheng, J. Ma, X. Li, C. Wang, J. Gee, H. Wang, and W. Huang, “Information bottleneck-based inter-
pretable multitask network for breast cancer classification and segmentation,” Medical Image Analysis, vol. 83, p.
102687, 2023.
[132] S. G. Armato III, G. McLennan, L. Bidaut, M. F. McNitt-Gray, C. R. Meyer, A. P. Reeves, B. Zhao, D. R. Aberle,
C. I. Henschke, E. A. Hoffman et al., “The lung image database consortium (LIDC) and image database resource
initiative (IDRI): A completed reference database of lung nodules on CT scans,” Medical Physics, vol. 38, no. 2,
pp. 915–931, 2011.
[133] Y.-C. Cheng, Z.-Y. Shiau, F.-E. Yang, and Y.-C. F. Wang, “TAX: Tendency-and-assignment explainer for semantic
segmentation with multi-annotators,” arXiv preprint arXiv:2302.09561, pp. 1–10, 2023.
[134] J. A. Dowling, J. Sun, P. Pichler, D. Rivest-Hénault, S. Ghose, H. Richardson, C. Wratten, J. Martin, J. Arm,
L. Best et al., “Automatic substitute computed tomography generation and contouring for magnetic resonance
imaging (MRI)-alone external beam radiation therapy from standard MRI sequences,” International Journal of
Radiation Oncology, Biology, Physics, vol. 93, no. 5, pp. 1144–1153, 2015.
[135] W. Dai, S. Liu, C. B. Engstrom, and S. S. Chandra, “Explainable semantic medical image segmentation with
style,” arXiv preprint arXiv:2303.05696, pp. 1–12, 2023.
[136] B. Landman, Z. Xu, J. Iglesias, M. Styner, T. Langerak, and A. Klein, “MICCAI multi-atlas labeling beyond
the cranial vault–workshop and challenge,” in Proceedings of the MICCAI Multi-Atlas Labeling Beyond Cranial
Vault—Workshop Challenge, vol. 5, 2015, p. 12.
[137] M. Xian, Y. Zhang, H.-D. Cheng, F. Xu, K. Huang, B. Zhang, J. Ding, C. Ning, and Y. Wang, A benchmark for
breast ultrasound image segmentation (BUSIS). Infinite Study, 2018.
[138] T. Geertsma, “Ultrasoundcases.info,” 2014.
[139] M. Karimzadeh, A. Vakanski, M. Xian, and B. Zhang, “Post-hoc explainability of BI-RADS descriptors in a
multi-task framework for breast cancer detection and segmentation,” arXiv preprint arXiv:2308.14213, pp. 1–11,
2023.
[140] S. Gatidis, T. Hepp, M. Früh, C. La Fougère, K. Nikolaou, C. Pfannenberg, B. Schölkopf, T. Küstner, C. Cyran,
and D. Rubin, “A whole-body FDG-PET/CT dataset with manually annotated tumor lesions,” Scientific Data,
vol. 9, no. 1, p. 601, 2022.
[141] S. Kang, Z. Chen, L. Li, W. Lu, X. S. Qi, and S. Tan, “Learning feature fusion via an interpretation method for
tumor segmentation on PET/CT,” Applied Soft Computing, vol. 148, p. 110825, 2023.
[142] K. Wang, S. Yin, Y. Wang, and S. Li, “Explainable deep learning for medical image segmentation with learn-
able class activation mapping,” in Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and
Machine Learning, 2023, pp. 210–215.
[143] R. Sun and M. Rostami, “Explainable artificial intelligence architecture for melanoma diagnosis using indicator
localization and self-supervised learning,” arXiv preprint arXiv:2303.14615, pp. 1–16, 2023.
[144] J. Zhang, R. Gu, P. Xue, M. Liu, H. Zheng, Y. Zheng, L. Ma, G. Wang, and L. Gu, “S3R: Shape and semantics-
based selective regularization for explainable continual segmentation across multiple sites,” IEEE Transactions
on Medical Imaging, pp. 1–13, 2023.
[145] I. C. Moreira, I. Amaral, I. Domingues, A. Cardoso, M. J. Cardoso, and J. S. Cardoso, “INbreast: Toward a
full-field digital mammographic database,” Academic Radiology, vol. 19, no. 2, pp. 236–248, 2012.
[146] A. Farrag, G. Gad, Z. M. Fadlullah, M. M. Fouda, and M. Alsabaan, “An explainable AI system for medical image
segmentation with preserved local resolution: Mammogram tumor segmentation,” IEEE Access, pp. 125 543–
125 561, 2023.
[147] M. Dörrich, M. Hecht, R. Fietkau, A. Hartmann, H. Iro, A.-O. Gostian, M. Eckstein, and A. M. Kist, “Explainable
convolutional neural networks for assessing head and neck cancer histopathology,” Diagnostic Pathology, vol. 18,
no. 1, p. 121, 2023.
[148] J. C. Aguirre-Arango, A. M. Álvarez-Meza, and G. Castellanos-Dominguez, “Feet segmentation for regional anal-
gesia monitoring using convolutional RFF and layer-wise weighted CAM interpretability,” Computation, vol. 11,
no. 6, p. 113, 2023.
[149] S. He, Y. Feng, P. E. Grant, and Y. Ou, “Segmentation ability map: Interpret deep features for medical image
segmentation,” Medical Image Analysis, vol. 84, p. 102726, 2023.
[150] M. Antonelli, A. Reinke, S. Bakas, K. Farahani, A. Kopp-Schneider, B. A. Landman, G. Litjens, B. Menze,
O. Ronneberger, R. M. Summers et al., “The medical segmentation decathlon,” Nature Communications, vol. 13,
no. 1, p. 4128, 2022.
[151] T. Okamoto, C. Gu, J. Yu, and C. Zhang, “Generating smooth interpretability map for explainable image segmen-
tation,” in Proceedings of the IEEE Global Conference on Consumer Electronics, 2023, pp. 1023–1025.
[152] J. Li, P. Jin, J. Zhu, H. Zou, X. Xu, M. Tang, M. Zhou, Y. Gan, J. He, Y. Ling et al., “Multi-scale GCN-assisted
two-stage network for joint segmentation of retinal layers and discs in peripapillary OCT images,” Biomedical
Optics Express, vol. 12, no. 4, pp. 2204–2220, 2021.
[153] S. J. Chiu, M. J. Allingham, P. S. Mettu, S. W. Cousins, J. A. Izatt, and S. Farsiu, “Kernel regression based
segmentation of optical coherence tomography images with diabetic macular edema,” Biomedical Optics Express,
vol. 6, no. 4, pp. 1172–1194, 2015.
[154] X. He, Y. Wang, F. Poiesi, W. Song, Q. Xu, Z. Feng, and Y. Wan, “Exploiting multi-granularity visual features
for retinal layer segmentation in human eyes,” Frontiers in Bioengineering and Biotechnology, vol. 11, pp. 1–14,
2023.
[155] N. Bloch, A. Madabhushi, H. Huisman, J. Freymann, J. Kirby, M. Grauer, A. Enquobahrie, C. Jaffe, L. Clarke, and
K. Farahani, “NCI-ISBI 2013 challenge: Automated segmentation of prostate structures,” The Cancer Imaging
Archive, vol. 370, no. 6, p. 5, 2015.
[156] G. Lemaître, R. Martí, J. Freixenet, J. C. Vilanova, P. M. Walker, and F. Meriaudeau, “Computer-aided detection
and diagnosis for prostate cancer based on mono and multi-parametric MRI: A review,” Computers in Biology
and Medicine, vol. 60, pp. 8–31, 2015.
[157] G. Litjens, R. Toth, W. Van De Ven, C. Hoeks, S. Kerkstra, B. Van Ginneken, G. Vincent, G. Guillard, N. Birbeck,
J. Zhang et al., “Evaluation of prostate segmentation algorithms for MRI: The PROMISE12 challenge,” Medical
Image Analysis, vol. 18, no. 2, pp. 359–373, 2014.
[158] X. Zhuang, “Multivariate mixture model for myocardial segmentation combining multi-source images,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 12, pp. 2933–2946, 2018.
[159] A. Lalande, Z. Chen, T. Decourselle, A. Qayyum, T. Pommier, L. Lorgis, E. de La Rosa, A. Cochet, Y. Cottin,
D. Ginhac et al., “Emidec: A database usable for the automatic evaluation of myocardial infarction from delayed-
enhancement cardiac MRI,” Data, vol. 5, no. 4, p. 89, 2020.
[160] X. Zhuang and J. Shen, “Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI,”
Medical Image Analysis, vol. 31, pp. 77–87, 2016.
[161] H. Kirişli, M. Schaap, C. Metz, A. Dharampal, W. B. Meijboom, S.-L. Papadopoulou, A. Dedic, K. Nieman, M. A.
de Graaf, M. Meijs et al., “Standardized evaluation framework for evaluating coronary artery stenosis detection,
stenosis quantification and lumen segmentation algorithms in computed tomography angiography,” Medical Image
Analysis, vol. 17, no. 8, pp. 859–876, 2013.
[162] S. Gao, H. Zhou, Y. Gao, and X. Zhuang, “BayeSeg: Bayesian modeling for medical image segmentation with
interpretable generalizability,” arXiv preprint arXiv:2303.01710, pp. 1–15, 2023.
[163] X. He, W. Song, Y. Wang, F. Poiesi, J. Yi, M. Desai, Q. Xu, K. Yang, and Y. Wan, “Light-weight retinal layer
segmentation with global reasoning,” arXiv preprint arXiv:2404.16346, 2024.
[164] Z. Lambert, C. Petitjean, B. Dubray, and S. Ruan, “SegTHOR: Segmentation of thoracic organs at risk in CT
images,” in Proceedings of the International Conference on Image Processing Theory, Tools and Applications,
2020, pp. 1–6.
[165] Z. Lambert and C. Le Guyader, “About the incorporation of topological prescriptions in CNNs for medical image
semantic segmentation,” Journal of Mathematical Imaging and Vision, pp. 1–28, 2024.
[166] J. Sivaswamy, S. Krishnadas, A. Chakravarty, G. Joshi, A. S. Tabish et al., “A comprehensive retinal image dataset
for the assessment of glaucoma from the optic nerve head analysis,” JSM Biomedical Imaging Data Papers, vol. 2,
no. 1, p. 1004, 2015.
[167] F. Fumero, S. Alayón, J. L. Sanchez, J. Sigut, and M. Gonzalez-Hernandez, “RIM-ONE: An open retinal image
database for optic nerve evaluation,” in Proceedings of the International Symposium on Computer-based Medical
Systems, 2011, pp. 1–6.
[168] J. I. Orlando, H. Fu, J. B. Breda, K. Van Keer, D. R. Bathula, A. Diaz-Pinto, R. Fang, P.-A. Heng, J. Kim, J. Lee
et al., “REFUGE challenge: A unified framework for evaluating automated methods for glaucoma assessment from
fundus photographs,” Medical Image Analysis, vol. 59, p. 101570, 2020.
[169] N. C. Codella, D. Gutman, M. E. Celebi, B. Helba, M. A. Marchetti, S. W. Dusza, A. Kalloo, K. Liopyris,
N. Mishra, H. Kittler et al., “Skin lesion analysis toward melanoma detection: A challenge at the 2017 interna-
tional symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC),”
in Proceedings of the IEEE International Symposium on Biomedical Imaging, 2018, pp. 168–172.
[170] M. Combalia, N. C. Codella, V. Rotemberg, B. Helba, V. Vilaplana, O. Reiter, C. Carrera, A. Barreiro, A. C.
Halpern, S. Puig et al., “BCN20000: Dermoscopic lesions in the wild,” arXiv preprint arXiv:1908.02288, pp.
1–3, 2019.
[171] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localiza-
tion,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2921–2929.
[172] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional
net,” arXiv preprint arXiv:1412.6806, pp. 1–14, 2014.
[173] S. Mohagheghi and A. H. Foruzan, “Developing an explainable deep learning boundary correction method by
incorporating cascaded x-Dim models to improve segmentation defects in liver CT images,” Computers in Biology
and Medicine, vol. 140, p. 105106, 2022.
[174] I. Dirks, M. Keyaerts, B. Neyns, and J. Vandemeulebroucke, “Computer-aided detection and segmentation of
malignant melanoma lesions on whole-body 18F-FDG PET/CT using an interpretable deep learning approach,”
Computer Methods and Programs in Biomedicine, vol. 221, p. 106902, 2022.
[175] S. Dasanayaka, S. Silva, V. Shantha, D. Meedeniya, and T. Ambegoda, “Interpretable machine learning for brain
tumor analysis using MRI,” in Proceedings of the IEEE International Conference on Advanced Research in Com-
puting, 2022, pp. 212–217.
[176] K. Cortacero, B. McKenzie, S. Müller, R. Khazen, F. Lafouresse, G. Corsaut, N. Van Acker, F.-X. Frenois,
L. Lamant, N. Meyer et al., “Evolutionary design of explainable algorithms for biomedical image segmentation,”
Nature Communications, vol. 14, no. 1, p. 7112, 2023.
[177] A. Kaur, G. Dong, and A. Basu, “GradXcepUNet: Explainable AI based medical image segmentation,” in Pro-
ceedings of the International Conference on Smart Multimedia, 2022, pp. 174–188.
[178] P. F. Christ, F. Ettlinger, F. Grün, M. E. A. Elshaera, J. Lipkova, S. Schlecht, F. Ahmaddy, S. Tatavarty, M. Bickel,
P. Bilic et al., “Automatic liver and tumor segmentation of CT and MRI volumes using cascaded fully convolu-
tional neural networks,” arXiv preprint arXiv:1702.05970, pp. 1–20, 2017.
[179] E. Pintelas and I. E. Livieris, “XSC—An explainable image segmentation and classification framework: A case
study on skin cancer,” Electronics, vol. 12, no. 17, p. 3551, 2023.
[180] P. Ashtari, D. M. Sima, L. De Lathauwer, D. Sappey-Marinier, F. Maes, and S. Van Huffel, “Factorizer: A scalable
interpretable approach to context modeling for medical image segmentation,” Medical Image Analysis, vol. 84, p.
102706, 2023.
[181] H. Mattern, A. Sciarra, F. Godenschweger, D. Stucht, F. Lüsebrink, G. Rose, and O. Speck, “Prospective motion
correction enables highest resolution time-of-flight angiography at 7T,” Magnetic Resonance in Medicine, vol. 80,
no. 1, pp. 248–258, 2018.
[182] A. L. Simpson, M. Antonelli, S. Bakas, M. Bilello, K. Farahani, B. Van Ginneken, A. Kopp-Schneider, B. A.
Landman, G. Litjens, B. Menze et al., “A large annotated medical image dataset for the development and evalua-
tion of segmentation algorithms,” arXiv preprint arXiv:1902.09063, pp. 1–15, 2019.
[183] F. Forest, H. Porta, D. Tuia, and O. Fink, “From classification to segmentation with explainable AI: A study on
crack detection and growth monitoring,” arXiv preprint arXiv:2309.11267, pp. 1–43, 2023.
[184] A. Janik, K. Sankaran, and A. Ortiz, “Interpreting black-box semantic segmentation models in remote sensing
applications,” Machine Learning Methods in Visualisation for Big Data, pp. 7–11, 2019.
[185] E. Maggiori, Y. Tarabalka, G. Charpiat, and P. Alliez, “Can semantic labeling methods generalize to any city?
The Inria aerial image labeling benchmark,” in Proceedings of the IEEE International Geoscience and Remote
Sensing Symposium, 2017, pp. 3226–3229.
[186] H. Shreim, A. K. Gizzini, and A. J. Ghandour, “Trainable noise model as an XAI evaluation method: application
on Sobol for remote sensing image segmentation,” arXiv preprint arXiv:2310.01828, pp. 1–7, 2023.
[187] T. Fel, R. Cadène, M. Chalvidal, M. Cord, D. Vigouroux, and T. Serre, “Look at the variance! Efficient black-
box explanations with Sobol-based sensitivity analysis,” in Proceedings of the Advances in Neural Information
Processing Systems, vol. 34, 2021, pp. 26 005–26 014.
[188] J. Zolfaghari Bengar, A. Gonzalez-Garcia, G. Villalonga, B. Raducanu, H. Habibi Aghdam, M. Mozerov, A. M.
Lopez, and J. Van de Weijer, “Temporal coherence for active learning in videos,” in Proceedings of the IEEE/CVF
International Conference on Computer Vision Workshops, 2019, pp. 1–10.
[189] J. Geyer, Y. Kassahun, M. Mahmudi, X. Ricou, R. Durgesh, A. S. Chung, L. Hauswald, V. H. Pham, M. Mühlegg,
S. Dorn et al., “A2D2: Audi autonomous driving dataset,” arXiv preprint arXiv:2004.06320, pp. 1–10, 2020.
[190] M. Abukmeil, A. Genovese, V. Piuri, F. Rundo, and F. Scotti, “Towards explainable semantic segmentation for
autonomous driving systems by multi-scale variational attention,” in Proceedings of the IEEE International Con-
ference on Autonomous Systems, 2021, pp. 1–5.
[191] B. Zhou, H. Zhao, X. Puig, T. Xiao, S. Fidler, A. Barriuso, and A. Torralba, “Semantic understanding of scenes
through the ADE20K dataset,” International Journal of Computer Vision, vol. 127, pp. 302–321, 2019.
[192] Y. Zhang, S. Mehta, and A. Caspi, “Rethinking semantic segmentation evaluation for explainability and model
selection,” arXiv preprint arXiv:2101.08418, pp. 1–14, 2021.
[193] J. Fritsch, T. Kuehnl, and A. Geiger, “A new performance measure and evaluation benchmark for road detection
algorithms,” in Proceedings of the IEEE Conference on Intelligent Transportation Systems, 2013, pp. 1693–1700.
[194] H. Mankodiya, D. Jadav, R. Gupta, S. Tanwar, W.-C. Hong, and R. Sharma, “OD-XAI: Explainable AI-based
semantic object detection for autonomous vehicles,” Applied Sciences, vol. 12, no. 11, p. 5310, 2022.
[195] C.-H. Lee, Z. Liu, L. Wu, and P. Luo, “MaskGAN: Towards diverse and interactive facial image manipulation,”
in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 5549–5558.
[196] Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in Proceedings of the IEEE
International Conference on Computer Vision, 2015, pp. 3730–3738.
[197] M. Turk and A. Pentland, “Eigenfaces for recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp.
71–86, 1991.
[198] C. Wang, X. Gao, and X. Li, “An interpretable deep Bayesian model for facial micro-expression recognition,” in
Proceedings of the IEEE International Conference on Control and Robotics Engineering, 2023, pp. 91–94.
[199] F. Liu, W. Ding, Y. Qiao, and L. Wang, “Transfer learning-based encoder-decoder model with visual explanations
for infrastructure crack segmentation: New open database and comprehensive evaluation,” Underground Space,
pp. 60–81, 2023.
[200] M. Dreyer, R. Achtibat, T. Wiegand, W. Samek, and S. Lapuschkin, “Revealing hidden context bias in segmenta-
tion and object detection through concept-specific explanations,” in Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, 2023, pp. 3828–3838.
[201] C. Seibold, J. Künzel, A. Hilsmann, and P. Eisert, “From explanations to segmentation: Using explainable AI for
image segmentation,” arXiv preprint arXiv:2202.00315, pp. 1–10, 2022.
[202] A. Di Martino, G. Carlini, G. Castellani, D. Remondini, and A. Amorosi, “Sediment core analysis using artificial
intelligence,” Scientific Reports, vol. 13, no. 1, p. 20409, 2023.
[203] R. Achtibat, M. Dreyer, I. Eisenbraun, S. Bosse, T. Wiegand, W. Samek, and S. Lapuschkin, “From ‘where’ to
‘what’: Towards human-understandable explanations through concept relevance propagation,” arXiv preprint
arXiv:2206.03208, pp. 1–87, 2022.
[204] L. Yu, W. Xiang, J. Fang, Y.-P. P. Chen, and L. Chi, “eX-ViT: A novel explainable vision transformer for weakly
supervised semantic segmentation,” Pattern Recognition, vol. 142, p. 109666, 2023.
[205] Y. Huang, C. Qiu, and K. Yuan, “Surface defect saliency of magnetic tile,” The Visual Computer, vol. 36, no. 1,
pp. 85–96, 2020.
[206] M. S. Bedmutha and S. Raman, “Using class activations to investigate semantic segmentation,” in Proceedings
of the 5th Computer Vision and Image Processing Conference, Prayagraj, India, Revised Selected Papers, Part III,
2021, pp. 151–161.
[207] X. Wu, Z. Li, C. Tao, X. Han, Y.-W. Chen, J. Yao, J. Zhang, Q. Sun, W. Li, Y. Liu et al., “DEA: Data-efficient
augmentation for interpretable medical image segmentation,” Biomedical Signal Processing and Control, vol. 89,
p. 105748, 2024.
[208] V. Fischer, M. C. Kumar, J. H. Metzen, and T. Brox, “Adversarial examples for semantic image segmentation,”
arXiv preprint arXiv:1703.01101, pp. 1–4, 2017.
[209] M. Cisse, Y. Adi, N. Neverova, and J. Keshet, “Houdini: Fooling deep structured prediction models,” arXiv
preprint arXiv:1707.05373, pp. 1–12, 2017.
[210] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille, “Adversarial examples for semantic segmentation and
object detection,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 1369–1378.
[211] A.-K. Dombrowski, M. Alber, C. Anders, M. Ackermann, K.-R. Müller, and P. Kessel, “Explanations can be ma-
nipulated and geometry is to blame,” in Proceedings of the Advances in Neural Information Processing Systems,
vol. 32, 2019, pp. 1–12.
[212] D. Hendrycks, K. Zhao, S. Basart, J. Steinhardt, and D. Song, “Natural adversarial examples,” in Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 15 262–15 271.
[213] N. Cammarata, G. Goh, S. Carter, C. Voss, L. Schubert, and C. Olah, “Curve circuits,” Distill, 2021,
https://distill.pub/2020/circuits/curve-circuits.
[214] J. Marques-Silva, “Logic-based explainability in machine learning,” in Reasoning Web. Causality, Explanations
and Declarative Knowledge: 18th International Summer School 2022, Berlin, Germany, September 27–30, 2022,
Tutorial Lectures, 2023, pp. 24–104.
[215] K. Čyras, A. Rago, E. Albini, P. Baroni, and F. Toni, “Argumentative XAI: A survey,” arXiv preprint
arXiv:2105.11266, pp. 1–8, 2021.
[216] R. Karim and R. P. Wildes, “Understanding video transformers for segmentation: A survey of application and
interpretability,” arXiv preprint arXiv:2310.12296, pp. 1–113, 2023.
[217] Z. Zhang, Z. Wang, and I. Joe, “CAM-NAS: An efficient and interpretable neural architecture search model based
on class activation mapping,” Applied Sciences, vol. 13, no. 17, p. 9686, 2023.
[218] Z. Carmichael, T. Moon, and S. A. Jacobs, “Learning interpretable models through multi-objective neural archi-
tecture search,” arXiv preprint arXiv:2112.08645, pp. 1–25, 2021.