Unit-1 Introduction to AI

The document provides an introduction to Artificial Intelligence (AI), covering its definitions, historical development, and foundational disciplines. It outlines various approaches to AI, including acting and thinking humanly and rationally, and discusses the significance of the Turing Test and notable AI milestones. Additionally, it touches on the ethical considerations and applications of AI in various fields.


Introduction to AI

By,
Nabaraj Bahadur Negi
Contents
• Intelligence, Artificial Intelligence (AI)
• AI Perspectives: acting and thinking humanly, acting and thinking rationally
• History of AI
• Foundations of AI: Philosophy, Economics, Psychology, Sociology, Linguistics,
Neuroscience, Mathematics, Computer Science, Control Theory
• AI Ethics and Responsible AI
• Applications of AI.
What is Artificial Intelligence (AI) ?

• Intelligence is
• the ability to reason
• the ability to understand
• the ability to create
• the ability to learn from experience
• the ability to plan and execute complex tasks
• Artificial is
• made as a copy of something natural.

• So, What is Artificial Intelligence?


• Artificial Intelligence is the branch of computer science concerned with making
computers behave like humans.
• Artificial intelligence is the ability of a computer or computer-controlled robot
to perform tasks commonly associated with intelligent beings.
• John McCarthy, who coined the term in 1956, defines it as "the science and engineering
of making intelligent machines, especially intelligent computer programs."

• Major AI textbooks define artificial intelligence as "the study and design of intelligent
agents," where an intelligent agent is a system that perceives its environment and takes
actions which maximize its chances of success.
• The definitions of AI given in some textbooks fall into four approaches, summarized
in the table below:
Systems that think like humans:
• “The exciting new effort to make computers think… machines with minds, in the full and literal sense.” (Haugeland, 1985)
• “The automation of activities that we associate with human thinking, activities such as decision making, problem solving, learning…” (Bellman, 1978)

Systems that think rationally:
• “The study of mental faculties through the use of computational models.” (Charniak and McDermott, 1985)
• “The study of the computations that make it possible to perceive, reason and act.” (Winston, 1992)

Systems that act like humans:
• “The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)
• “The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)

Systems that act rationally:
• “Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)
• “AI is concerned with intelligent behavior in artifacts.” (Nilsson, 1998)
Contd….

• The top dimension of the table is concerned with thought processes and reasoning, whereas the
bottom dimension addresses behavior.

• The definitions on the left measure success in terms of fidelity to human performance, whereas
the definitions on the right measure success against an ideal concept of intelligence, called
rationality.

• A human-centered approach must be an empirical science, involving hypotheses and
experimental confirmation.

• A rationalist approach involves a combination of mathematics and engineering.

• The four approaches in more detail are as follows:


1. Acting Humanly: The Turing Test Approach
• In 1950, Alan Turing introduced a test to check whether a machine can think
like a human or not; this test is known as the Turing Test.

• The test was introduced in Turing's 1950 paper, "Computing Machinery and
Intelligence," which considered the question, "Can machines think?"
• The test involves an interrogator who interacts with one human and one machine. Within a
given time, the interrogator has to find out which of the two is the human and which is
the machine.
• The computer passes the test if the human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a computer.
• The conversation between all players is via keyboard and screen, so the result does not
depend on the machine's ability to render words as speech.

• The questions and answers might run like this:

• Interrogator: Are you a computer? Player A (computer): No.

• Interrogator: Multiply two large numbers, such as 256896489 × 456725896. Player A:
pauses for a long time and gives a wrong answer.
Contd….

• In 1991, the New York businessman Hugh Loebner announced the Loebner Prize competition,
offering a $100,000 prize for the first computer to pass the Turing Test. However, no
AI program to date has come close to passing an undiluted Turing Test.
Chatbots to attempt the Turing test:

• ELIZA: ELIZA was a natural language processing computer program created by Joseph Weizenbaum. It
was created to demonstrate the possibility of communication between machines and humans, and it was
one of the first chatterbots to attempt the Turing Test.
• Parry: Parry was a chatterbot created by Kenneth Colby in 1972, designed to simulate a
person with paranoid schizophrenia. Parry was described as "ELIZA with attitude" and was tested
using a variation of the Turing Test in the early 1970s.
• Eugene Goostman: Eugene Goostman was a chatbot developed in Saint Petersburg in 2001. It
competed in a number of Turing Test contests. In June 2012, Goostman won a competition
promoted as the largest-ever Turing Test contest, in which it convinced 29% of the judges that it
was human. Goostman was portrayed as a 13-year-old boy.
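ELIZA's core technique was simple pattern matching over the user's words with canned response templates. A minimal sketch of the idea in Python (the patterns and responses here are illustrative, not Weizenbaum's originals):

```python
import re

# Illustrative (pattern -> response template) rules; {0} echoes the captured phrase.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),  # fallback keeps the conversation moving
]

def respond(utterance: str) -> str:
    """Return the template of the first rule whose pattern matches."""
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am feeling tired"))  # -> How long have you been feeling tired?
print(respond("Hello there"))         # -> Please go on.
```

Despite having no model of meaning, rules like these were enough for some users to believe they were chatting with a person.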
Features required for a machine to pass the Turing Test
• Natural language processing: to communicate successfully in a human language;
• Knowledge representation: to store what it knows or hears;
• Automated reasoning: to answer questions and to draw new conclusions;
• Machine learning: to adapt to new circumstances and to detect and extrapolate patterns.
• To pass the total Turing Test, a robot also needs
• Computer vision and speech recognition to perceive the world;
• Robotics to manipulate objects and move about.
2. Thinking Humanly: Cognitive Modeling Approach
• To say that a program thinks like a human, we must know how humans think. We can
learn about human thought in three ways:
• introspection—trying to catch our own thoughts as they go by;
• psychological experiments—observing a person in action;
• brain imaging—observing the brain in action.

• Once we have a sufficiently precise theory of mind, it becomes possible to express the
theory as a computer program.

• Unfortunately, there is still no precise theory of how the human brain thinks, so it is
not yet possible to build a machine that thinks like a human brain.
Contd….

3. Thinking Rationally: The Laws of Thought Approach
• The Greek philosopher Aristotle was one of the first to attempt to codify “right
thinking”—that is, irrefutable reasoning processes. His syllogisms provided patterns for
argument structures that always yielded correct conclusions when given correct
premises.
• Syllogism is a kind of logical argument that applies deductive reasoning to arrive at a
conclusion based on two or more propositions that are assumed to be true.
• For example: Ram is a man; all men are mortal; therefore, Ram is mortal.
• These laws of thought were supposed to govern the operation of the mind.
• This study initiated the field of logic.
• The logic tradition in AI hopes to create intelligent systems using logic programming.

• There are two main obstacles to this approach. First, it is not easy to take informal
knowledge and state it in the formal terms required by logical notation, particularly when
the knowledge is less than 100% certain.

• Second, solving a problem in principle is different from solving it in practice. Even
problems with just a few dozen facts can exhaust the computational resources of any
computer unless it has some guidance as to which reasoning steps to try first.
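The Ram syllogism above can be mechanized directly. A minimal sketch of rule-based deduction over a toy knowledge base (the fact/rule encoding is our illustrative choice, not a full logic-programming engine):

```python
# Facts and rules for the classic syllogism:
#   Ram is a man; all men are mortal; therefore Ram is mortal.
facts = {("man", "Ram")}
rules = [(("man",), "mortal")]  # if X is a man, then X is mortal

def forward_chain(facts, rules):
    """Repeatedly apply the rules until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate in premises and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))  # the result contains ("mortal", "Ram")
```

Even this tiny engine hints at the second obstacle: with many rules and facts, the search for which inference to apply next quickly becomes the hard part.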
4. Acting Rationally: The Rational Agent Approach
• An agent is just something that acts (agent comes from the Latin agere, to do).
• Computer agents are expected to do more: operate autonomously, perceive their
environment, persist over a prolonged time period, adapt to change, and create and pursue
goals.
• A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.

• In the laws of thought approach to AI, the emphasis was given to correct inferences.
• Making correct inferences is sometimes part of being a rational agent, because one
way to act rationally is to reason logically to the conclusion and act on that
conclusion.
• On the other hand, there are also some ways of acting rationally that cannot be said to involve
inference.

• For example, recoiling from a hot stove is a reflex action that is usually more successful than a
slower action taken after careful deliberation.
History of Artificial Intelligence

The inception of artificial intelligence (1943–1956):


• Year 1943: The first work now recognized as AI was done by Warren McCulloch and
Walter Pitts in 1943. They proposed a model of artificial neurons.
• Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength
between neurons. His rule is now called Hebbian learning.
• Year 1950: Alan Turing, an English mathematician and a pioneer of machine learning,
published "Computing Machinery and Intelligence" in 1950, in which he proposed a test of a
machine's ability to exhibit intelligent behavior equivalent to human intelligence, now
called the Turing Test.
• Year 1951: Marvin Minsky and Dean Edmonds created the initial artificial neural network
(ANN) named SNARC. They utilized 3,000 vacuum tubes to mimic a network of 40 neurons.
• Year 1952: Arthur Samuel pioneered the creation of the Samuel Checkers-Playing Program, which
marked the world's first self-learning program for playing games.
• Year 1955: Allen Newell and Herbert A. Simon created the first artificial intelligence program,
named the "Logic Theorist". The program proved 38 of 52 mathematical theorems
and found new, more elegant proofs for some of them.
• Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John
McCarthy at the Dartmouth Conference, where AI was established for the first time as an academic field.
The golden years-Early enthusiasm (1956-1974):
• Year 1958: During this period, Frank Rosenblatt introduced the perceptron, one of the early
artificial neural networks with the ability to learn from data.
• Year 1959: Arthur Samuel is credited with introducing the phrase "machine learning" in a pivotal
paper in which he proposed that computers could be programmed to surpass their creators in
performance.
• Year 1964: During his time as a doctoral candidate at MIT, Daniel Bobrow created STUDENT, one
of the early programs for natural language processing (NLP), with the specific purpose of solving
algebra word problems.
• Year 1965: The initial expert system, Dendral, was devised by Edward Feigenbaum, Bruce G.
Buchanan, Joshua Lederberg, and Carl Djerassi.
• Year 1966: The researchers emphasized developing algorithms that can solve mathematical
problems. Joseph Weizenbaum created the first chatbot in 1966, which was named ELIZA.
• Year 1968: Terry Winograd developed SHRDLU, which was the pioneering multimodal AI
capable of following user instructions to manipulate and reason within a world of blocks.
• Year 1969: Arthur Bryson and Yu-Chi Ho outlined a learning algorithm known as
backpropagation, which enabled the development of multilayer artificial neural networks.
• Year 1972: The first intelligent humanoid robot was built in Japan, which was named WABOT-1.
• Year 1973: James Lighthill published the report titled "Artificial Intelligence: A General Survey,"
resulting in a substantial reduction in the British government's backing for AI research.
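Rosenblatt's perceptron, mentioned above for 1958, learns a linear decision boundary by nudging its weights on every mistake. A minimal sketch on toy data (the AND problem, learning rate, and epoch count are illustrative choices):

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron rule: w <- w + lr * (y - y_hat) * x."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = w[0] * x[0] + w[1] * x[1] + b
            y_hat = 1 if activation >= 0 else 0
            err = y - y_hat            # 0 when correct, +/-1 on a mistake
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Toy data: logical AND, which is linearly separable.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
preds = [1 if w[0] * a + w[1] * c + b >= 0 else 0 for a, c in X]
print(preds)  # -> [0, 0, 0, 1]
```

The perceptron converges on any linearly separable data set; its inability to learn non-separable functions such as XOR was one trigger of the criticism that followed this period.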
The first AI winter (1974-1980):
• The duration between years 1974 to 1980 was the first AI winter duration. AI winter refers to the
time period where computer scientist dealt with a severe shortage of funding from government
for AI researches.
• During AI winters, an interest of publicity on artificial intelligence was decreased.
A boom of AI (1980-1987):
• Year 1980: After AI's winter duration, AI came back with an "Expert System". Expert systems were
programmed to emulate the decision-making ability of a human expert. Additionally, Symbolics Lisp
machines were brought into commercial use, marking the onset of an AI resurgence.
• Year 1981: Danny Hillis created parallel computers tailored for AI and various computational
functions, featuring an architecture akin to contemporary GPUs.
• Year 1984: Marvin Minsky and Roger Schank introduced the phrase "AI winter" during a gathering of
the Association for the Advancement of Artificial Intelligence. They cautioned the business world that
exaggerated expectations about AI would result in disillusionment and the eventual downfall of the
industry, which indeed occurred three years later.
• Year 1985: Judea Pearl introduced Bayesian network causal analysis, presenting statistical methods for
encoding uncertainty in computer systems.
The second AI winter (1987-1993):
• The duration between the years 1987 to 1993 was the second AI Winter duration.
• Again Investors and government stopped in funding for AI research as due to high cost but not
efficient result. The expert system such as XCON was very cost effective.
The emergence of intelligent agents (1993-2011):
• Year 1997: In 1997, IBM's Deep Blue achieved a historic milestone by defeating world chess
champion Garry Kasparov, marking the first time a computer triumphed over a reigning world chess
champion.
• Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic vacuum cleaner.
• Year 2006: By 2006, AI had entered the business world. Companies like Facebook, Twitter,
and Netflix started using AI.
• Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng released the paper "Large-scale
Deep Unsupervised Learning using Graphics Processors."
• Year 2011: Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier, and Jonathan Masci created the
first CNN to attain "superhuman" performance by winning the German Traffic
Sign Recognition competition. Furthermore, Apple launched Siri, a voice-activated personal assistant
capable of generating responses and executing actions in response to voice commands.
Deep learning, big data and artificial general intelligence (2011-present)
• Year 2011: In 2011, IBM's Watson won Jeopardy, a quiz show where it had to solve complex
questions as well as riddles.
• Year 2012: Google launched an Android app feature, "Google Now", which was able to provide
information to the user as a prediction.
• Year 2013: China's Tianhe-2 system achieved a remarkable feat by doubling the speed of the world's
leading supercomputers to reach 33.86 petaflops. It retained its status as the world's fastest system
for the third consecutive time.
• Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition in the famous "Turing
Test."
• Year 2016: DeepMind's AlphaGo secured victory over the esteemed Go player Lee Sedol in Seoul,
South Korea, prompting reminiscence of the Kasparov chess match against Deep Blue nearly two
decades earlier.
• Year 2018: The "Project Debater" from IBM debated on complex topics with two master debaters and
also performed extremely well.
• Year 2021: OpenAI unveiled the Dall-E multimodal AI system, capable of producing images based on
textual prompts.
• Year 2022: In November, OpenAI launched ChatGPT, offering a chat-oriented interface to its
GPT-3.5 model.
Foundations of AI
Economics:
• How should we make decisions in accordance with our preferences?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
• Economics studies incentives, decision-making, game theory, and resource allocation. In AI,
economics is applied to areas like algorithmic trading, market analysis, auction design, and
modeling economic systems.
• The science of economics got its start in 1776, when Scottish philosopher Adam Smith (1723-1790)
published An Inquiry into the Nature and Causes of the Wealth of Nations. While the ancient Greeks
and others had made contributions to economic thought, Smith was the first to treat it as a science,
using the idea that economies can be thought of as consisting of individual agents maximizing their
own economic well-being.
• This is suitable for “large” economies where each agent need pay no attention to the actions of
other agents as individuals.
• For “small” economies, the situation is much more like a game: the actions of one player can
significantly affect the utility of another (either positively or negatively).
• Von Neumann and Morgenstern’s development of game theory included the surprising result that,
for some games, a rational agent should act in a random fashion, or at least in a way that appears
random to the adversaries.
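The random-play result can be seen in the matching-pennies game: if either player favors heads or tails, the opponent can exploit the bias, so the only rational strategy is an even random mix. A small illustrative expected-payoff check (the function and numbers are ours, chosen only to make the point):

```python
def expected_payoff(p, q):
    """Row player's expected payoff in matching pennies.

    p = probability row plays Heads, q = probability column plays Heads.
    Row wins +1 when the coins match, loses -1 when they differ.
    """
    match = p * q + (1 - p) * (1 - q)   # probability the coins match
    return match * 1 + (1 - match) * (-1)

# If row naively plays Heads 80% of the time, column exploits the bias by
# always playing Tails; a 50/50 mix leaves nothing to exploit.
print(expected_payoff(0.8, 0.0))  # about -0.6: the biased player is exploited
print(expected_payoff(0.5, 0.0))  # 0.0: the random mix is unexploitable
```

Against the 50/50 mix, every column strategy yields the same expected payoff, which is exactly why appearing random is rational here.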

Psychology:
How do humans and animals think and act?
• Psychology investigates human cognition, learning, perception, and behavior. AI benefits
from psychological research by incorporating models of human-like reasoning, decision-
making, and problem-solving into intelligent systems.
• The origins of scientific psychology are usually traced to the work of the German physicist
Hermann von Helmholtz (1821-1894) and his student Wilhelm Wundt (1832-1920).
• Helmholtz applied the scientific method to the study of human vision, and his Handbook of
Physiological Optics is even now described as “the single most important treatise on the physics
and physiology of human vision” (Nalwa, 1993, p.15).
• In 1879, Wundt opened the first laboratory of experimental psychology at the University of
Leipzig. Wundt insisted on carefully controlled experiments in which his workers would perform
a perceptual or associative task while introspecting on their thought processes.
• Behaviorism discovered a lot about rats and pigeons, but had less success at understanding
humans.
Philosophy:
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
• Philosophy addresses questions about intelligence, consciousness, ethics, and knowledge representation.
It influences AI through discussions on machine ethics, explainable AI, and the societal
implications of AI technologies.

• Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing the rational part
of the mind. He developed an informal system of syllogisms for proper reasoning, which in
principle allowed one to generate conclusions mechanically, given initial premises.
• Much later, Ramon Lull (d. 1315) had the idea that useful reasoning could actually be carried out
by a mechanical artifact – "concept wheels".
• Thomas Hobbes (1588-1679) proposed that reasoning was like numerical computation that “we add
and subtract in our silent thoughts.” The automation of computation itself was already well under
way.
• Around 1500, Leonardo da Vinci (1452-1519) designed but did not build a mechanical calculator;
recent reconstructions have shown the design to be functional.
• The first known calculating machine was constructed around 1623 by the German scientist
Wilhelm Schickard (1592-1635).
• The Pascaline, built in 1642 by Blaise Pascal (1623-1662), is more famous. Pascal wrote that “the
arithmetical machine produces effects which appear nearer to thought than all the actions of
animals.”
• Gottfried Wilhelm Leibniz (1646-1716) built a mechanical device intended to carry out
operations on concepts rather than numbers, but its scope was rather limited.
• Now that we have the idea of a set of rules that can describe the formal, rational part of the mind,
the next step is to consider the mind as a physical system.
• Rene Descartes (1596-1650) gave the first clear discussion of the distinction between mind and
matter and of the problems that arise.
• One problem with a purely physical conception of the mind is that it seems to leave little room
for free will: if the mind is governed entirely by physical laws, then it has no more free will than
a rock “deciding” to fall toward the center of the earth.
Mathematics:
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
• Mathematics provides the foundation for AI algorithms, including calculus, linear algebra,
probability theory, statistics, and optimization methods. AI applications heavily rely on
mathematical concepts for modeling, analysis, and decision-making.
• Philosophers staked out most of the important ideas of AI, but the leap to a formal science
required a level of mathematical formalization in three fundamental areas: logic, computation,
and probability.
• The idea of formal logic can be traced back to the philosophers of ancient Greece, but its
mathematical development really began with the work of George Boole (1815-1864), who
worked out the details of propositional or Boolean logic.
• In 1879, Gottlob Frege (1848-1925) extended Boole’s logic to include objects and relations,
creating the first-order logic that is used today as the most basic knowledge representation
system.
• Alfred Tarski (1902-1983) introduced a theory of reference that shows how to relate the
objects in a logic to objects in the real world.
• The first nontrivial algorithm is thought to be Euclid's algorithm for computing greatest
common divisors.
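Euclid's algorithm is short enough to state in full: it repeatedly replaces the pair (a, b) with (b, a mod b), which preserves the greatest common divisor, until the remainder reaches zero.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: gcd(a, b) == gcd(b, a % b), and gcd(a, 0) == a."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))    # -> 12
print(gcd(270, 192))  # -> 6
```

For 270 and 192 the pairs run (270, 192) → (192, 78) → (78, 36) → (36, 6) → (6, 0), giving 6.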
Linguistics:
How does language relate to thought?
• Linguistics studies language structure, semantics, pragmatics, and language processing. In AI,
linguistics informs natural language processing (NLP) tasks such as machine translation,
sentiment analysis, and dialogue systems.
• In 1957, B. F. Skinner published Verbal Behavior. This was a comprehensive, detailed account of the
behaviorist approach to language learning, written by the foremost expert in the field.
• A review of the book became as well known as the book itself, and served to almost kill off interest in
behaviorism. The author of the review was Noam Chomsky, who had just published a book on his
own theory, Syntactic Structures.
• Chomsky showed how the behaviorist theory did not address the notion of creativity in language. It
did not explain how a child could understand and make up sentences that he or she had never heard
before.
• Chomsky's theory, based on syntactic models going back to the Indian linguist Panini
(c. 350 BCE), could explain this, and unlike previous theories, it was formal enough that it could in
principle be programmed.
• Modern linguistics and AI, then, were "born" at about the same time and grew up together,
intersecting in a hybrid field called computational linguistics or natural language processing.
• Understanding language requires an understanding of the subject matter and context, not
just an understanding of the structure of sentences.
Neuroscience:
How do brains process information?
• Neuroscience studies the brain's structure, function, and neural processes. AI draws
inspiration from neuroscience for developing neural networks, deep learning algorithms, and
cognitive models of intelligence.
• Paul Broca's (1824-1880) study of aphasia (speech deficit) in brain-damaged patients in 1861
reinvigorated the field and persuaded the medical establishment of the existence of localized areas
of the brain responsible for specific cognitive functions.
• Despite these advances, we are still a long way from understanding how any of these cognitive
processes actually work.
• The truly amazing conclusion is that a collection of simple cells can lead to thought, action, and
consciousness or, in other words, that brains cause minds (Searle, 1992).
• Brains and computers perform quite different tasks and have different properties.
• Moore’s Law predicts that the CPU’s gate count will equal the brain’s neuron count around 2020.
Moore’s Law says that the number of transistors per square inch doubles every 1 to 1.5 years.
Human brain capacity doubles roughly every 2 to 4 million years.
• Even though a computer is a million times faster in raw switching speed, the brain ends up being
100,000 times faster at what it does.
Computer engineering:
How can we build an efficient computer?
• Computer science encompasses algorithms, data structures, programming languages,
software engineering, and computational theory. AI research in computer science focuses on
developing AI algorithms, systems, and applications.
• The first operational computer was the electromechanical Heath Robinson, built in 1943 by Alan
Turing's team for a single purpose: deciphering German messages. In 1943, the same group
developed the Colossus, a powerful general-purpose machine based on vacuum tubes. The first
operational programmable computer was the Z-3, the invention of Konrad Zuse in Germany in
1941.
Control theory :
How can artifacts operate under their own control?
• Control theory deals with regulating and optimizing systems based on feedback and control
signals. In AI, control theory is applied to robotics, autonomous systems, industrial
automation, and adaptive control algorithms.
• Ktesibios of Alexandria (c. 250 BCE) built the first self-controlling machine: a water clock with a
regulator that maintained a constant flow rate.
• Other examples of self-regulating feedback control systems include the steam engine governor,
created by James Watt (1736–1819), and the thermostat, invented by Cornelis Drebbel (1572–1633),
who also invented the submarine. James Clerk Maxwell (1868) initiated the mathematical theory of
control systems.
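The feedback idea behind Drebbel's thermostat and Watt's governor can be sketched as a simple proportional controller: measure the error between the setpoint and the current reading, and apply a correction proportional to that error. The gain and the trivial "plant" below are illustrative assumptions, not a real thermal model:

```python
def simulate_thermostat(setpoint, temp, gain=0.5, steps=20):
    """Proportional feedback: correction = gain * (setpoint - measured)."""
    history = []
    for _ in range(steps):
        error = setpoint - temp   # the feedback signal
        temp += gain * error      # the actuator nudges the temperature
        history.append(temp)
    return history

history = simulate_thermostat(setpoint=20.0, temp=10.0)
print(round(history[-1], 3))  # -> 20.0 (converges toward the setpoint)
```

Each step halves the remaining error, so the system settles at the setpoint; this error-driven loop is the same principle, in miniature, that control theory generalizes to robots and autonomous systems.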
AI Ethics and Responsible AI
Strategies for Promoting Responsible AI Development
To address ethical challenges and promote responsible AI development, several steps
can be taken.
Bias detection and mitigation:
• It is important to work to identify potential sources of bias in AI systems and develop strategies to
mitigate them. This can involve careful data selection, reweighting, or using techniques such as
adversarial training.
Fairness testing:
• Rigorous testing can be used to ensure that AI systems are fair and unbiased. This can involve using
techniques such as statistical parity, equal opportunity, or equalized odds.
• For example, when developing an AI-powered credit scoring system, it is important to ensure that
the system is not biased against any particular demographic group and that it treats everyone fairly
and equally.
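Statistical parity, mentioned above, compares positive-outcome rates across demographic groups. A minimal sketch on made-up decisions (the data and the 0.1 tolerance are illustrative assumptions, not a standard threshold):

```python
def positive_rate(decisions):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def statistical_parity_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = credit approved, 0 = denied, for two hypothetical demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 62.5% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved

gap = statistical_parity_gap(group_a, group_b)
print(f"parity gap: {gap:.3f}")      # -> parity gap: 0.250
if gap > 0.1:                        # illustrative tolerance
    print("warning: possible disparate impact")
```

Equal opportunity and equalized odds refine this idea by conditioning the comparison on the true outcome, rather than comparing raw approval rates.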
Transparency and explainability :
• AI systems should be designed to be transparent and explainable, so that stakeholders can
understand how decisions are being made. This can help build trust in the system and prevent
unintended consequences.
Ethical design principles:
• Ethical design principles can be used to guide the development of AI systems, ensuring that they
are designed to be ethical and responsible from the outset.
• For example, it is important to ensure that the data used to train an AI system is diverse and
representative, and that the system does not perpetuate any existing inequalities or biases.
Ethical AI rests on three pillars of equal importance:
• Accountability: It refers to the need to explain and justify one’s decisions and actions to its
partners, users and others with whom the system interacts. To ensure accountability, decisions
must be derivable from, and explained by, the decision-making algorithms used.
• Responsibility: refers to the role of people themselves and to the capability of AI systems to
answer for their decisions and identify errors or unexpected results. As the chain of responsibility
grows, means are needed to link the AI system's decisions to the fair use of data and to the
actions of the stakeholders involved in the system's decisions.
• Transparency: refers to the need to describe, inspect and reproduce the mechanisms through
which AI systems make decisions and learn to adapt to their environment, and to the governance of
the data used and created. Current AI algorithms are basically black boxes.
Applications of AI
Healthcare: AI is used for medical imaging analysis, disease diagnosis, drug discovery, personalized
treatment plans, and patient data analysis for better healthcare outcomes.
Finance: AI is employed in fraud detection, algorithmic trading, credit scoring, risk assessment, and
personalized financial services based on customer behavior analysis.
Retail: AI powers recommendation systems, demand forecasting, supply chain optimization,
inventory management, and customer service chatbots.
Manufacturing: AI is used for predictive maintenance, quality control, process optimization, supply
chain management, and autonomous robots for assembly and logistics.
Transportation: AI enables autonomous vehicles, route optimization, traffic management, predictive
maintenance for vehicles and infrastructure, and intelligent transportation systems.
Marketing: AI is used for customer segmentation, personalized marketing campaigns, sentiment
analysis, social media monitoring, and content generation.
Education: AI applications include personalized learning platforms, automated grading and
feedback systems, educational content generation, and adaptive learning environments.
Agriculture: AI is used for precision agriculture, crop monitoring, yield prediction, pest
detection, and autonomous farming equipment.
Cybersecurity: AI is employed for threat detection, anomaly detection, user behavior analysis,
security automation, and response orchestration.
Smart Cities: AI is used for traffic management, energy optimization, waste management,
public safety monitoring, and infrastructure maintenance.
Thank you
