
TAI SOLARIN UNIVERSITY OF

EDUCATION

P.M.B 2118, IJAGUN, IJEBU-ODE,


OGUN-STATE.

DEPARTMENT: COMPUTER SCIENCE


COURSE CODE: CSC315
LEVEL: 300
COURSE TITLE: SEMINAR
TOPIC: EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI)

NAME                          MATRIC NO
Kareem Shinaayo Oyinlola      20220294113
Rokeeb Abioja Adedayo         20220294339
SEMINAR REPORT ON EXPLAINABLE ARTIFICIAL INTELLIGENCE
(XAI)

TABLE OF CONTENTS

1. Introduction

2. Overview of Artificial Intelligence (AI)
2.1 Definition and Evolution of AI
2.2 Types of AI
2.3 Common AI Technologies

3. Understanding Explainable Artificial Intelligence (XAI)
3.1 Definition of XAI
3.2 Difference Between XAI and Black-Box AI

4. Importance of XAI
4.1 Trust and Transparency
4.2 Bias Detection
4.3 Regulatory Compliance
4.4 Debugging AI Systems

5. Techniques and Approaches in XAI
5.1 Model-Specific XAI Techniques
5.2 Model-Agnostic XAI Techniques

6. Applications of XAI in Various Industries
6.1 Healthcare
6.2 Finance
6.3 Autonomous Vehicles
6.4 Law and Criminal Justice
6.5 Cybersecurity

7. Challenges and Future of XAI
7.1 Challenges of XAI
7.2 Future of XAI

8. Conclusion

9. References
1. INTRODUCTION

Artificial Intelligence (AI) has become a crucial technology in today’s world, automating
tasks that previously required human intelligence. However, as AI models grow more
complex, their decision-making processes become harder to interpret.

The lack of transparency in AI models, especially deep learning algorithms, has raised
concerns about trust, fairness, and accountability. Explainable Artificial Intelligence (XAI)
aims to bridge this gap by making AI models more interpretable and understandable.

This seminar provides a comprehensive understanding of XAI: its importance,
techniques, applications, challenges, and future directions.

2. OVERVIEW OF ARTIFICIAL INTELLIGENCE (AI)

2.1 Definition and Evolution of AI

Artificial Intelligence (AI) refers to computer systems that mimic human cognitive functions such as
learning, problem-solving, and decision-making. The evolution of AI can be traced through
several phases:

1950s–1970s: Rule-based systems and symbolic AI.

1980s–1990s: Machine learning and expert systems.

2000s–Present: Deep learning, neural networks, and reinforcement learning.


2.2 Types of AI

AI is categorized into three major types:

1. Narrow AI: AI designed for specific tasks (e.g., Siri, Google Assistant).

2. General AI: AI with human-like reasoning abilities (still theoretical).

3. Super AI: AI surpassing human intelligence (speculative).

2.3 Common AI Technologies

Machine Learning (ML): AI models that learn from data (e.g., recommendation systems).

Deep Learning (DL): Uses neural networks for complex pattern recognition (e.g., facial
recognition).

Natural Language Processing (NLP): AI for language understanding (e.g., Chatbots).

Computer Vision: AI for analyzing images and videos (e.g., self-driving cars).

3. UNDERSTANDING EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI)

3.1 Definition of XAI

Explainable AI (XAI) refers to techniques that make AI models transparent, interpretable,
and understandable. Unlike traditional black-box models, XAI enables users to see
and understand how an AI system reaches a particular decision.

3.2 Difference Between XAI and Black-Box AI

Black-box AI models (for example, deep neural networks) produce outputs without
revealing the reasoning behind them; their internal logic is opaque even to their
developers. XAI models, by contrast, either are interpretable by design or are paired
with explanation techniques that expose how inputs lead to a given output.

4. IMPORTANCE OF XAI

4.1 Trust and Transparency

Users need to trust AI systems before they can adopt them in critical areas like
healthcare and finance.

4.2 Bias Detection

AI models trained on biased data can produce unfair results. XAI helps detect and correct
these biases.

4.3 Regulatory Compliance

Legal frameworks like the EU GDPR require AI decisions to be explainable.

4.4 Debugging AI Systems

AI developers use XAI tools to identify and fix errors in models.


5. TECHNIQUES AND APPROACHES IN XAI

5.1 Model-Specific XAI Techniques

1. Decision Trees: A flowchart-like model explaining decisions step-by-step.

2. Linear Regression: A mathematical model showing variable relationships.

3. Rule-Based Systems: Uses “if-then” logic for decision-making.
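In a rule-based system, the fired "if-then" rule doubles as the explanation for the decision. A minimal sketch of this idea (the features, thresholds, and rules below are invented purely for illustration):

```python
# Toy rule-based classifier: each decision is returned together with the
# rule that produced it, so the explanation comes for free.

def loan_decision(credit_score, income, existing_debt):
    """Return a decision together with the rule that fired."""
    if credit_score < 580:
        return "deny", "rule: credit_score below 580"
    if existing_debt > 0.5 * income:
        return "deny", "rule: debt exceeds 50% of income"
    return "approve", "rule: all checks passed"

decision, reason = loan_decision(credit_score=650, income=50_000,
                                 existing_debt=30_000)
print(decision, "-", reason)  # deny - rule: debt exceeds 50% of income
```

Because the model's structure is itself the explanation, no separate explanation technique is needed; this is what makes such models "interpretable by design."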

5.2 Model-Agnostic XAI Techniques

1. LIME (Local Interpretable Model-agnostic Explanations): Approximates a black-box
model around a single prediction with a simple, interpretable surrogate model.

2. SHAP (Shapley Additive Explanations): Assigns importance scores to AI inputs.

3. Counterfactual Explanations: Explains decisions by showing alternative outcomes.
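These perturbation-based explainers share a common idea: probe the black box with modified inputs and observe how the prediction changes. The following is a simplified sketch of that idea only, not LIME or SHAP themselves, and the scoring function is a made-up stand-in for an opaque model:

```python
def black_box_score(income, debt, years_employed):
    """Hypothetical opaque credit-scoring model (a stand-in for a black box)."""
    return 0.5 * income - 0.8 * debt + 0.3 * years_employed

def local_sensitivity(model, instance, delta=1.0):
    """Perturb each feature by `delta` and record how the prediction moves."""
    base = model(**instance)
    influence = {}
    for name, value in instance.items():
        perturbed = dict(instance, **{name: value + delta})
        influence[name] = model(**perturbed) - base
    return influence

applicant = {"income": 40.0, "debt": 10.0, "years_employed": 5.0}
print(local_sensitivity(black_box_score, applicant))
# A negative value means increasing that feature lowers the score.
```

Real LIME and SHAP are far more principled (LIME fits a weighted local surrogate; SHAP computes Shapley values), but both ultimately rely on querying the model with perturbed inputs as shown here.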


6. APPLICATIONS OF XAI IN VARIOUS INDUSTRIES

6.1 Healthcare

AI diagnoses diseases, but XAI ensures doctors understand its recommendations.

6.2 Finance

Banks use AI for credit scoring. XAI explains why a loan is approved or denied.
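One way such an explanation can be produced is a counterfactual search: find the smallest change to the applicant's situation that would flip the decision ("the loan would have been approved if income were N higher"). A minimal sketch, using a hypothetical approval rule as the black box:

```python
def approve(income, debt):
    """Hypothetical black-box approval rule."""
    return income - 2 * debt >= 0

def income_counterfactual(income, debt, step=1_000, max_steps=100):
    """Smallest income increase (in multiples of `step`) that flips a denial."""
    for k in range(max_steps + 1):
        if approve(income + k * step, debt):
            return k * step
    return None  # no counterfactual found within the search range

print(income_counterfactual(income=30_000, debt=20_000))  # 10000
```

The answer ("an extra 10,000 in income would change the outcome") is directly actionable for the applicant, which is why counterfactuals are popular in lending contexts.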

6.3 Autonomous Vehicles

Self-driving cars must explain why they take specific driving actions.

6.4 Law and Criminal Justice

AI used for predicting crime must be transparent to avoid discrimination.

6.5 Cybersecurity

XAI helps security analysts understand AI-driven threat detection systems.


7. CHALLENGES AND FUTURE OF XAI

7.1 Challenges of XAI

1. Trade-off Between Accuracy and Explainability: Simpler, more transparent models
often achieve lower predictive accuracy than complex black-box models.

2. Complexity: Some AI models are too intricate to fully explain.

3. Lack of Standardization: No universal approach exists for AI explainability.

7.2 Future of XAI

1. Ongoing research will improve AI transparency.

2. AI regulations will become stricter.

3. More industries will adopt XAI-driven systems.


8. CONCLUSION

Explainable AI (XAI) is essential for making AI systems transparent, fair, and trustworthy.
It ensures AI decisions can be understood and justified, which is crucial for industries like
healthcare, finance, and cybersecurity. Although XAI faces challenges, ongoing research
and regulations will continue to enhance its effectiveness and adoption.

9. REFERENCES

1. Adadi, A., & Berrada, M. (2018). “Peeking inside the black-box: A survey on
Explainable Artificial Intelligence (XAI).” IEEE Access, 6, 52138–52160.

2. Doshi-Velez, F., & Kim, B. (2017). “Towards a rigorous science of interpretable
machine learning.” arXiv preprint arXiv:1702.08608.

3. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “‘Why should I trust you?’:
Explaining the predictions of any classifier.” Proceedings of the 22nd ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining.
