ARTIFICIAL INTELLIGENCE

What is Artificial Intelligence?

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think, reason, and learn like humans. It involves developing computer systems and algorithms that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, problem-solving, and language translation. AI aims to create machines that can mimic and replicate human cognitive abilities.

There are two main types of AI: Narrow AI and General AI.

1. Narrow AI (also known as weak AI): Narrow AI is designed to perform a specific task or set of tasks. It focuses on solving specific problems and lacks the ability to generalize knowledge beyond its specific domain. Examples of narrow AI include voice assistants (such as Siri or Alexa), recommendation systems, image recognition systems, and chatbots.

2. General AI (also known as strong AI): General AI refers to machines that possess human-like intelligence and are capable of performing any intellectual task that a human being can do. These machines would have a broad understanding of various domains and possess the ability to learn and apply knowledge across different contexts. General AI is currently more of a theoretical concept and does not exist in reality.

Artificial intelligence encompasses several subfields, including machine learning, natural language processing, computer vision, robotics, expert systems, and neural networks. These subfields use different approaches and techniques to enable machines to acquire and process information, make decisions, and improve their performance over time.

Machine learning is a core component of AI and involves training algorithms to learn from data and make predictions or take actions based on that learning. It involves providing large amounts of data to algorithms and allowing them to automatically discover patterns and relationships within the data, enabling them to make accurate predictions or decisions.

Natural language processing (NLP) focuses on enabling machines to understand, interpret, and generate human language. It involves tasks such as speech recognition, language translation, sentiment analysis, and text summarization.

Computer vision deals with enabling machines to understand and interpret visual information. It involves tasks such as object recognition, image classification, and image generation.

AI has numerous applications across various industries, including healthcare, finance, transportation, entertainment, and manufacturing. It has the potential to revolutionize these industries by automating tasks, improving efficiency, enhancing decision-making, and enabling new capabilities.

However, AI also raises ethical and societal concerns, such as job displacement, privacy, bias in algorithms, and the potential for autonomous machines to make decisions that may have significant consequences. As AI continues to evolve, it is important to ensure its responsible development and deployment to maximize its benefits while minimizing potential risks.
FUNDAMENTAL PRINCIPLES OF ARTIFICIAL INTELLIGENCE

Here's a more detailed explanation of the fundamental principles underlying artificial intelligence:

1. Problem-solving: Problem-solving in AI involves developing algorithms and techniques to solve complex tasks or challenges efficiently and effectively. It encompasses various methods such as planning, optimization, pattern recognition, and decision-making. AI systems are designed to analyze problem spaces, explore possible solutions, and select the best course of action to achieve desired goals or outcomes.

2. Knowledge representation: Knowledge representation in AI focuses on organizing and structuring information in a way that intelligent systems can utilize effectively. It involves creating models or frameworks to represent knowledge about the world, including facts, rules, concepts, relationships, and dependencies. These representations enable AI systems to reason, learn, and make informed decisions based on the available knowledge. Different techniques, such as semantic networks, ontologies, or knowledge graphs, are used to capture and represent information in a format that can be processed by AI algorithms.

3. Decision-making: Decision-making in AI involves selecting actions or choices based on specific criteria or objectives. It encompasses a range of techniques, including logical reasoning, probabilistic reasoning, and optimization. AI systems analyze available data, evaluate potential outcomes, and assess the likelihood of different scenarios to make informed decisions. They can rely on predefined rules, algorithms, or learned patterns to determine the most suitable course of action.

These three principles—problem-solving, knowledge representation, and decision-making—are fundamental to the field of AI. Problem-solving provides the framework for addressing complex tasks, while knowledge representation enables the organization and utilization of information. Decision-making allows AI systems to make intelligent choices based on analysis and evaluation of available data. By combining these principles, AI researchers and developers aim to create systems that exhibit human-like intelligence, reasoning, and decision-making abilities.

Formal Definition of Artificial Intelligence

Artificial Intelligence, or AI for short, is a really clever technology that makes computers and machines act and think like humans. It helps them understand things, make decisions, and even do tasks that usually only humans can do. AI is used in many things we use every day, like voice assistants, video games, and even self-driving cars!

Now, let's define the four aspects of AI in simple terms:

1. Thinking Humanly: This means making computers and machines think and behave like humans do. It's like giving them a brain that helps them understand and respond to things just like we do. For example, if you ask a question, the computer should be able to understand it and give you a good answer, almost as if it were a person.

2. Thinking Rationally: This means making computers and machines think logically and make smart decisions. It's like teaching them to solve problems using rules and logic. For instance, if you give the computer some information, it should use that information to figure out the best solution to a problem, kind of like solving a puzzle.

3. Acting Humanly: This means making computers and machines behave like humans. It's like giving them the ability to do things that people can do, such as talking, listening, and even understanding emotions. So, if you tell the computer to do something, it should understand your instructions and follow them, just like a person would.

4. Acting Rationally: This means making computers and machines behave in a smart and logical way. It's like training them to make decisions that make sense and lead to good outcomes. For example, if the computer is playing a game, it should make moves that are smart and help it win the game, kind of like a really good player.
WHAT IS AN AGENT

An agent refers to something that perceives its environment through sensors and takes actions in that environment through effectors. In the context of Artificial Intelligence, an agent is typically a computer program or system that is designed to interact with its environment in order to achieve certain goals or objectives.

Here are the key points to understand about an agent:

1. Perception: An agent perceives its environment using sensors. In the case of humans, our sensors include our eyes, ears, nose, skin, and other organs that enable us to gather information about the world around us. Similarly, in the case of a robot agent, it may have sensors like cameras, sound recorders, infrared range finders, or other specialized sensors that allow it to sense and gather data about its environment.

2. Action: An agent takes actions in its environment using effectors. As humans, our effectors include our hands, legs, mouth, and other organs that enable us to interact with the world. For a robot agent, effectors can be various types of motors, actuators, or other mechanisms that allow it to physically interact with objects or perform tasks in its environment.

3. Environment: The agent is situated within an environment. This environment can be real, physical surroundings, or it can be a virtual or simulated environment. The agent perceives the environment through its sensors and takes actions in the environment using its effectors.

4. Goals and Objectives: An agent typically has specific goals or objectives it aims to achieve. These goals could be defined by the designer of the agent or can be learned or adapted by the agent through machine learning or other techniques. The agent's actions and decisions are guided by its goals or objectives.
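To make the perceive-decide-act cycle concrete, here is a minimal Python sketch of an agent loop. The thermostat scenario, the dictionary-based environment, and every name in it are hypothetical choices made for this note, not a standard API.

# A minimal sketch of the percept -> decide -> act agent loop described above.
# The environment, sensor, and effector names are invented for illustration.

class ThermostatAgent:
    """A tiny software agent: perceives a temperature, acts on a heater."""

    def __init__(self, target_temp):
        self.target_temp = target_temp          # the agent's goal/objective

    def perceive(self, environment):
        return environment["temperature"]       # sensor reading

    def decide(self, percept):
        # Map the percept to an action in pursuit of the goal.
        return "heater_on" if percept < self.target_temp else "heater_off"

    def act(self, action, environment):
        # Effector: the chosen action changes the environment.
        delta = 1.0 if action == "heater_on" else -0.5
        environment["temperature"] += delta

agent = ThermostatAgent(target_temp=21.0)
env = {"temperature": 18.0}
for _ in range(5):
    action = agent.decide(agent.perceive(env))
    agent.act(action, env)
    print(action, round(env["temperature"], 1))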
RATIONALITY VS OMNISCIENCE

Rational Agent: A rational agent is one that acts in a way that helps achieve its goals, based on its beliefs or understanding of the world. It does what it believes is the right thing to do given the information it has. However, a rational agent can make mistakes or encounter unforeseen events that affect its decisions.

Omniscient Agent: An omniscient agent knows everything, including the actual outcome of its actions. In reality, achieving omniscience (knowing everything) is impossible. So, while an omniscient agent could always make perfect decisions with 100% certainty, such agents do not exist in real life.

Expected Success: Rational agents make decisions based on what they expect to be successful, given their beliefs and understanding of the situation. They consider the likelihood of success based on available information. However, unexpected factors can sometimes lead to mistakes or failures.

Unpredictable Factors: Rational agents can make mistakes because there are often unpredictable factors or events that they couldn't foresee or perceive. In the example, crossing the street was a rational decision because, most of the time, it would have been successful. However, the falling banner was an unforeseen event that couldn't have been predicted.

To summarize, being rational means making decisions based on what is expected to be successful given the available information. Rational agents can still make mistakes due to unforeseen factors or events. An omniscient agent, which knows everything and acts perfectly, is not possible in reality. So, we can't blame an agent for not considering or acting upon something it couldn't have known or perceived.
AGENT TYPES

• Simple reflex agents: These agents make decisions based solely on the current percept (input) they receive. They don't have memory or the ability to keep track of past actions or states. Simple reflex agents use condition-action rules to determine their actions. They follow a set of predefined rules that map specific percepts to specific actions without considering the overall context.

• Model-based reflex agents: These agents, in addition to the current percept, also maintain an internal model of the world. They use this model to keep track of the world state and update it based on percepts received. Model-based reflex agents can make more informed decisions by considering the current state and using their internal model to predict the effects of different actions.

• Goal-based agents: Goal-based agents have a goal or objective they are trying to achieve. They have access to information about the current state of the world and use it to determine the actions needed to reach their goal. These agents employ a planning process to generate a sequence of actions that lead to the desired outcome. They consider the current state, the desired goal state, and the available actions to choose the most appropriate action at each step.

• Utility-based agents: Utility-based agents make decisions based on the concept of utility or desirability. They assign a value or utility to different outcomes and select actions that maximize the expected utility. These agents consider not only the goal but also the relative desirability of different outcomes. Utility-based agents can handle situations where multiple goals may conflict with each other by weighing trade-offs between different options.

• Learning agents: Learning agents have the ability to improve their performance over time through experience. They learn from the feedback they receive from the environment and adapt their behavior accordingly. Learning agents can acquire knowledge and develop strategies to make better decisions in the future. They use techniques like machine learning and reinforcement learning to learn from their interactions with the environment and optimize their actions.

Each type of agent builds upon the previous one, increasing in generality and complexity. Simple reflex agents are the most basic, while learning agents are the most sophisticated and adaptable.
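As a concrete illustration of condition-action rules, here is a minimal sketch of a simple reflex agent in Python, in the spirit of the classic two-square vacuum-cleaner example. The percept format and the rules are assumptions made for this note.

# A minimal sketch of a simple reflex agent. It has no memory: the action
# depends only on the current percept, via fixed condition-action rules.

def simple_reflex_vacuum(percept):
    """percept is a (location, status) pair; rules map it to an action."""
    location, status = percept
    if status == "dirty":
        return "suck"               # rule: current square dirty -> clean it
    if location == "A":
        return "move_right"         # clean at A -> go inspect B
    return "move_left"              # clean at B -> go inspect A

print(simple_reflex_vacuum(("A", "dirty")))   # suck
print(simple_reflex_vacuum(("A", "clean")))   # move_right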
DIFFERENT CLASSES OF ENVIRONMENTS

In designing successful agents, it is important to understand the type of environment with which the agent interacts. Environments are where agents operate and receive information. They can vary in terms of their characteristics, and these properties impact how agents perceive and act within them. Understanding these properties helps in designing effective strategies for different types of environments.

Fully Observable vs. Partially Observable: Fully observable environments provide agents with complete and accurate information about the current state. The agent can directly perceive all aspects of the environment that are relevant to its decision-making process. In contrast, partially observable environments do not provide complete information. The agent may have limited or incomplete knowledge about the environment, requiring it to infer or estimate the current state based on available observations.

Example: Imagine a puzzle-solving game where the player can see the entire puzzle board at all times. The player has complete information about the positions of all the puzzle pieces, and they can plan their moves accordingly. The environment in this case is fully observable because the player has access to all relevant information about the puzzle.

Example: Consider a self-driving car navigating a busy city street. The car's sensors, such as cameras and lidar, provide it with information about its surroundings. However, due to factors like blind spots, occlusions, or limited sensor range, the car may not have complete visibility of the entire environment. The car must use the available sensor data to estimate the positions of other vehicles and pedestrians, even if it cannot directly perceive them. The environment in this case is partially observable because the car has limited or incomplete knowledge about the current state of the road.

Deterministic vs. Stochastic:

Deterministic: In a deterministic environment, the next state of the environment is completely determined by the current state and the action taken by the agent. This means that for a given current state and action, there is a unique mapping or predictable outcome that leads to the next state. The environment behaves consistently, and the agent can accurately anticipate the consequences of its actions.

Example: Let's say you have a ball and you throw it into the air. In a deterministic environment, the ball's motion would follow predictable laws of physics. If you know the initial position, velocity, and the force applied when throwing, you can precisely calculate the ball's trajectory and predict where it will be at any given time. The next state (position of the ball) is completely determined by the current state (initial position, velocity) and the action (throwing force).

Stochastic: In a stochastic environment, there is an element of uncertainty or randomness involved in the outcome. The next state may vary, even when the same action is taken from the same current state. The environment introduces some degree of unpredictability, and the agent needs to consider probabilities or uncertainties when making decisions.

Example: Let's consider a dice-rolling game. When you roll a fair six-sided die, the outcome (the number rolled) is uncertain and random. Even if you roll the same die from the same starting position multiple times, you may get different results each time. The next state (rolled number) is not completely determined by the current state (starting position) and the action (rolling the die). The randomness of the die introduces variability and makes the environment stochastic.

Chess: Chess is considered a deterministic game. The rules of chess are well-defined, and the outcome of each move is determined solely by the current game state and the actions taken by the players. There is no randomness or element of chance involved in the game. Given the same initial game state and a specific sequence of moves, the game will always progress in the same manner. The deterministic nature of chess allows for the possibility of advanced analysis and strategy based on logical reasoning.

Self-driving Cars: Self-driving cars operate in a stochastic environment. While self-driving cars rely on advanced algorithms and sensors to make decisions, the driving environment itself is inherently unpredictable and subject to various uncertainties. Factors such as the behavior of other road users, changing weather conditions, or unexpected obstacles introduce elements of randomness and variability into the driving process. The same set of actions taken by a self-driving car in a particular situation may lead to slightly different outcomes due to these stochastic elements. As a result, self-driving cars need to adapt to the dynamic and uncertain nature of the driving environment.

Episodic vs. Sequential:

Episodic environments divide the agent's interaction with the environment into separate episodes or individual tasks. Each episode is independent, and the agent's actions and outcomes do not affect future episodes. The agent starts anew with each episode. Non-episodic (sequential) environments have a continuous interaction, and the agent's actions can have long-term consequences that influence future states and decisions. The agent's actions in the past can impact the future.

Chess (Episodic): Chess is typically played as a series of individual games or matches, with each game representing an episode. Each game begins with a specific starting position and ends when a player wins, loses, or the game results in a draw. The outcome of one game does not directly influence the subsequent games. Players start afresh with each game, and the previous game's results do not carry over or affect the future games. Therefore, chess fits the definition of an episodic environment.

Self-driving Cars (Non-episodic): In contrast, self-driving cars operate in a continuous and non-episodic environment. The driving experience is a continuous interaction rather than a sequence of separate episodes. A self-driving car continuously navigates and interacts with the road environment, responding to real-time changes in traffic, road conditions, and other dynamic factors. The actions taken by the car have long-term consequences, as the car's behavior can influence subsequent states and decisions. For example, the car's past actions, such as previous lane changes or braking maneuvers, may impact the current driving situation or influence future actions. Hence, self-driving cars are better categorized as operating in a non-episodic environment.

Static vs. Dynamic:

Static environments remain unchanged while the agent is making decisions. The state of the environment does not change unless the agent acts upon it. In contrast, dynamic environments undergo changes even without the agent's actions. The environment may evolve or be influenced by external factors, and the agent needs to adapt to the changing conditions.

Chess (Static): Chess can be considered a static environment. Once the game begins, the chessboard and the pieces remain unchanged unless a player makes a move. The positions of the chess pieces do not change autonomously during the game. The state of the chessboard remains constant until a player makes a move. The environment does not evolve or undergo any changes unless acted upon by the players. Therefore, chess can be characterized as a static environment.

Self-Driving Cars (Dynamic): Self-driving cars operate in a dynamic environment. The road environment is subject to constant changes and fluctuations. Factors such as the movement of other vehicles, pedestrians, and objects, as well as changing traffic conditions and road infrastructure, make the driving environment dynamic. Self-driving cars need to continuously perceive and adapt to these dynamic changes to navigate safely and make appropriate driving decisions. The state of the environment evolves over time, influenced by external factors and the actions of other entities. Therefore, self-driving cars operate in a dynamic environment.

Discrete vs. Continuous:

Discrete environments have a limited number of distinct states or actions. The agent's actions and the environment's state can be clearly defined and categorized. In contrast, continuous environments have infinite or large sets of possible states or actions. The agent and the environment operate in a continuous space, requiring more complex and precise computations or techniques.

Chess (Discrete): Chess can be considered a discrete environment. The game state in chess is characterized by a finite set of distinct positions and configurations on the chessboard. Each chess piece has a defined set of possible moves, and the actions of the players are discrete and well-defined within the rules of the game. The state and actions can be clearly categorized into distinct categories. The discrete nature of chess allows for precise analysis and enumeration of possible moves and positions.

Self-Driving Cars (Continuous): Self-driving cars operate in a continuous environment. The driving environment is not limited to a finite set of distinct states or actions. Instead, it involves a continuous range of possibilities and variables. The car's perception of the environment, such as the positions of other vehicles, pedestrians, or objects, as well as the car's own motion, is represented in continuous spaces. The car's actions, such as steering, acceleration, and braking, can be smoothly varied within continuous ranges. The continuous nature of the driving environment requires more complex computations and algorithms to process and make decisions based on the continuous data.
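The five properties above can be summarized compactly in code. The following small Python sketch records them for the two running examples; the attribute names are our own labels for this note, not a standard taxonomy or API.

# A small sketch tabulating the environment properties discussed above.

chess = {
    "observable":  "fully",          # the whole board is visible
    "dynamics":    "deterministic",  # each move has a predictable outcome
    "episodes":    "episodic",       # each game is an independent episode
    "change":      "static",         # nothing changes until a player moves
    "state_space": "discrete",       # finite positions and moves
}

self_driving_car = {
    "observable":  "partially",      # blind spots, occlusions, sensor limits
    "dynamics":    "stochastic",     # other drivers, weather, surprises
    "episodes":    "sequential",     # past actions shape future situations
    "change":      "dynamic",        # the road changes on its own
    "state_space": "continuous",     # positions, speeds, steering angles
}

for name, props in [("chess", chess), ("self-driving car", self_driving_car)]:
    print(name, "->", props)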
TECHNIQUES OF ARTIFICIAL INTELLIGENCE

Let's delve deeper into the main techniques of AI, namely Machine Learning, Deep Learning, and Reinforcement Learning:

1. Machine Learning (ML)

Machine Learning is a subfield of AI that provides systems with the ability to learn and improve from experience without being explicitly programmed. It centers on the development of algorithms that can modify themselves to improve over time. There are three primary types of machine learning:

• Supervised Learning: In this method, the machine is taught using labeled data. For instance, if we're trying to create an algorithm that can identify cats in images, we'd train the algorithm using a large number of images that are labeled either "cat" (positive examples) or "not cat" (negative examples). The algorithm would then learn the characteristics that differentiate a cat from other entities.

• Unsupervised Learning: This is a type of machine learning where the algorithm is not provided with labeled data. The goal is to find hidden patterns or intrinsic structures from unlabeled data. Common unsupervised learning methods include clustering (grouping similar instances together) and dimensionality reduction (simplifying input data without losing too much information).

• Semi-supervised Learning: As the name suggests, this is a hybrid approach that uses both labeled and unlabeled data for training. This is typically used when obtaining a fully labeled dataset is time-consuming or costly.
2. Deep Learning (DL)

Deep Learning is a subset of machine learning that is based on artificial neural networks with representation learning. Representation learning is the ability of a machine to automatically find the features needed for data classification. DL is especially good at identifying patterns in unstructured data, such as images, sound, and text.

The "deep" in deep learning denotes the depth of layers in a neural network. A typical deep learning model consists of many layers of artificial neurons, also known as nodes. These layers are interconnected in a way that mimics the structure of a human brain. Each node in a layer uses input from nodes in the previous layer, performs a computation, and passes the result to nodes in the next layer.

One crucial aspect of deep learning is its ability to automatically learn feature representations from raw data, eliminating the need for manual feature extraction. Applications of deep learning include image recognition, natural language processing, speech recognition, and more.
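The layer-to-layer flow described above can be sketched in a few lines of NumPy (assuming it is installed). The weights here are random and untrained; the point is only to show each layer consuming the previous layer's output.

# A minimal sketch of stacked feedforward layers: linear transform + ReLU,
# with each layer's output feeding the next ("deep" = many such layers).

import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    W = rng.normal(size=(x.shape[0], n_out))   # random, untrained weights
    return np.maximum(0.0, W.T @ x)            # weighted sum + ReLU activation

x = rng.normal(size=4)      # raw input features
h1 = layer(x, 8)            # hidden layer 1
h2 = layer(h1, 8)           # hidden layer 2
out = layer(h2, 2)          # output layer
print(out)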
3. Reinforcement Learning (RL)

Reinforcement Learning is another subfield of machine learning where an agent learns to behave in an environment by performing certain actions and observing the results/rewards of those actions. It is about taking suitable actions to maximize reward in a particular situation. It is employed by various software and machines to find the best possible behavior or path to take in a specific context.

Reinforcement learning differs from supervised learning in that it doesn't need labelled input/output pairs to be explicitly provided, and it doesn't need sub-optimal actions to be explicitly corrected. Instead, the focus is on exploring the environment, taking actions, and learning from the feedback.

A good example of reinforcement learning is a chess engine. Here, the agent decides upon a series of moves depending on the state of the board (the environment), and learns optimal strategies by playing numerous games and adjusting its policies based on the game results (win, lose, or draw).
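A chess engine is far too large to sketch here, but the act-observe-adjust loop itself can be shown with tabular Q-learning on a tiny made-up "corridor" world (states 0 to 4, reward at the right end). The world, the reward, and all parameter values are illustrative assumptions.

# A minimal tabular Q-learning sketch: act, observe the reward, adjust the
# value estimate, repeat over many episodes.

import random

n_states, actions = 5, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Explore sometimes, otherwise exploit the best-known action.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Adjust the estimate from the observed reward (the feedback).
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

print(max(actions, key=lambda act: Q[(0, act)]))  # learned best move from state 0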
In conclusion, all these techniques constitute the bedrock of AI. Machine Learning provides the foundational principles and methods that allow systems to learn from data. Deep Learning takes these concepts further by enabling the construction of neural networks that can process complex, unstructured data. Finally, Reinforcement Learning focuses on decision-making and the optimization of sequences of actions for an agent operating in an environment. These fields continue to evolve, driven by advancements in computational power and the ever-increasing availability of data, and they are finding applications across a wide range of domains, including healthcare, finance, autonomous vehicles, and more.

Three different types of reasoning in AI

Here's a simpler explanation of the three types of reasoning in AI:

1. Deductive Reasoning: Deductive reasoning is when an AI system reaches a certain conclusion based on established facts or premises. It's like following a set of logical rules. For example, if we know that all cats have tails and Fluffy is a cat, we can deduce that Fluffy has a tail. Deductive reasoning guarantees that if the premises are true, the conclusion will also be true.

Deductive reasoning is like solving a puzzle using rules. Imagine you have a rule that says all birds have wings, and you know that a penguin is a bird. By using deductive reasoning, you can figure out that the penguin must have wings too because it fits the rule. So, deductive reasoning helps us make sure that if the rule is true and the facts are true, then our conclusion will also be true.

2. Inductive Reasoning: Inductive reasoning involves drawing conclusions based on patterns or trends observed in specific examples. It's like making educated guesses. For example, if we observe that every cat we've seen so far has been black, we might generalize and guess that all cats are black. Inductive reasoning suggests that the conclusion is likely, but not guaranteed to be true.

Inductive reasoning is like making guesses based on patterns you've noticed. For example, if you see your friend playing with three green toys, and then you see another friend playing with three green toys too, you might guess that all your friends like to play with three green toys. It's not always guaranteed to be true, but it gives you a good guess based on what you've seen.

3. Abductive Reasoning: Abductive reasoning focuses on finding the best explanation for a set of observations. It's like solving a mystery by piecing together clues. For example, if we observe dark clouds, hear thunder, and see people carrying umbrellas, we might deduce that it's likely to rain. Abductive reasoning aims to find the most plausible explanation for the given evidence.

Abductive reasoning is like being a detective and finding clues to solve a mystery. Let's say you come home and find a broken glass on the floor, some water spilled, and your dog hiding under the table. You might think that the dog knocked over the glass and spilled the water because those clues fit together. Abductive reasoning helps us find the best explanation for what we see and understand what might have happened.
In simpler terms, deductive reasoning is about reaching guaranteed conclusions based on known facts, inductive reasoning involves making educated guesses based on patterns, and abductive reasoning is about finding the best explanation for a given set of observations. These types of reasoning help AI systems make decisions and draw conclusions based on available information.
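The three styles can be contrasted in a few lines of Python. The rule and fact encodings below are a toy representation invented for this note, chosen only to mirror the examples above.

# A minimal sketch contrasting the three reasoning styles.

# Deductive: rule + facts guarantee the conclusion.
rules = {"cat": "has_tail"}              # all cats have tails
facts = {"Fluffy": "cat"}
print("deduce:", "Fluffy", rules[facts["Fluffy"]])   # Fluffy has a tail

# Inductive: generalize from observed examples (likely, not guaranteed).
observed_cats = ["black", "black", "black"]
if all(c == "black" for c in observed_cats):
    print("induce: all cats are black (a guess, not a guarantee)")

# Abductive: pick the explanation that best fits the observations.
observations = {"dark_clouds", "thunder", "umbrellas"}
explanations = {"rain": {"dark_clouds", "thunder", "umbrellas"},
                "fireworks": {"thunder"}}
best = max(explanations, key=lambda e: len(explanations[e] & observations))
print("abduce:", best)                   # rain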
KNOWLEDGE REPRESENTATION IN AI

Knowledge representation in AI is about how to store and organize information in an AI system. There are several ways to represent knowledge in AI:

• Semantic Networks: These are graph structures used for representing knowledge in patterns of interconnected nodes and arcs. Nodes represent concepts and arcs represent relationships between those concepts. They're often used in natural language processing and in building expert systems.

• Imagine you have a map where different places are connected by lines. In a semantic network, instead of places, we have concepts or ideas, and the lines show how they are related. For example, you might have a concept for "dog" and another for "bark," and the line between them shows that dogs can bark. It helps the AI system understand how different ideas are connected.

• Frames: Frames are data structures for representing a stereotyped situation. They are similar to object-oriented classes where you have a collection of properties and values describing a situation or object. For example, a "car" frame would include properties like color, model, manufacturer, etc.

• Frames are like containers that hold information about something. Think of it like a template with different slots to fill. For example, if we have a frame for a "car," it would have slots for color, model, and manufacturer. We can fill in those slots with specific information like "red," "sedan," and "Toyota" to describe a particular car.

• Logic: Logic (like propositional logic or first-order logic) is a common way of representing knowledge in AI. Logical statements are used to represent facts about the world, and logical inference rules are used to reason about those facts.

• Logic is like a set of rules or statements that help the AI system understand facts and make deductions. It's like using puzzle pieces to figure out the answers. For example, if we know that "all birds have wings" and "penguins are birds," we can logically deduce that penguins must have wings too.

• Probabilistic Models: These models are used when the world is uncertain. Bayesian networks and Markov models are examples of probabilistic models where knowledge is represented in terms of probabilities.

• Probabilistic models are used when things are not certain or definite. It's like making educated guesses based on probabilities. For example, if you see dark clouds, you might guess that it's going to rain, but you're not 100% sure. Probabilistic models help AI systems make decisions based on how likely something is to happen.
Learning techniques in AI

Learning in AI refers to the ability of an AI system to improve its performance based on experience. Here are the major learning techniques in AI:

• Supervised Learning: In this technique, the AI system is trained on a labeled dataset. It's similar to learning with a teacher. The goal of the AI system is to learn a mapping from inputs to outputs.

• Unsupervised Learning: In this technique, the AI system is given unlabeled data and needs to find patterns or structure in that data. It's like learning without a teacher.

• Reinforcement Learning: This technique involves an agent interacting with its environment by taking actions, receiving rewards or penalties, and learning to make better decisions over time.

• Deep Learning: This is a technique where artificial neural networks with many layers ("deep" structures) learn from a large amount of data. Deep learning algorithms are used for complex tasks like image recognition, speech recognition, and natural language processing.

In summary, reasoning, knowledge representation, and learning techniques are core components of AI systems. They determine how an AI system thinks, how it stores and organizes knowledge, and how it learns from experience. Understanding these concepts is fundamental to understanding AI.
CHOOSING THE RIGHT KNOWLEDGE REPRESENTATION

Choosing the correct knowledge representation or learning technique refers to selecting the most suitable way to represent information in an AI system, or choosing the most appropriate learning method for a particular problem or task. The choice depends on the problem domain, the nature of the input data, and the desired output. Here are a few examples:

1. Knowledge Representation

Let's assume we're building a recommendation system for an e-commerce platform. We could represent knowledge using a graph-based model. Products can be nodes in the graph, and relationships (like frequently bought together) can be edges connecting these nodes. This type of representation is conducive to making recommendations based on the connections in the graph.

In contrast, if we're developing a medical diagnosis system, we might choose a rule-based representation. Here, we could represent medical knowledge as a set of IF-THEN rules. For example, "IF the patient has a fever, cough, and loss of smell, THEN they may have COVID-19." This allows for logical reasoning to arrive at a diagnosis based on the symptoms presented.

2. Learning Techniques

Consider a scenario where you have a large set of labelled images, and you want to build an AI model to classify these images. In this case, supervised learning would be the most appropriate learning technique, and within that, deep learning with Convolutional Neural Networks (CNNs) would be a good choice given their excellent performance on image data.

On the other hand, if you're working with a large dataset of customer transactions, and you want to identify segments of similar customers, but you don't have any pre-existing labels, then unsupervised learning is a better choice. Techniques like clustering (e.g., K-means, Hierarchical Clustering) could be used to group customers based on their transaction behavior, as the sketch below illustrates.
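Here is a minimal K-means sketch using scikit-learn, assuming it is available. The two-feature "transactions" are fabricated solely to illustrate grouping unlabeled customers.

# A minimal clustering sketch: group customers without any labels.

from sklearn.cluster import KMeans

# Each row: [average purchase amount, purchases per month] for one customer.
X = [[5, 1], [6, 2], [7, 1], [95, 30], [90, 28], [99, 33]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)    # e.g. [0 0 0 1 1 1]: two customer segments emerge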
In a different context, suppose you are designing a system to control a self-driving car. The system needs to make a sequence of decisions (steer left, steer right, accelerate, brake, etc.), and the optimal decision depends on the current state of the environment (position of other cars, pedestrians, traffic lights, etc.). In this case, reinforcement learning would be an appropriate technique, as it is designed for learning optimal sequences of actions based on reward feedback.

Thus, the choice of knowledge representation and learning technique depends on the specific requirements of your problem and the nature of your data. Making the right choice is a critical step in developing effective AI solutions.
The Role of AI in Gaining Insight into Intelligence and Perception

1. How AI Models Mimic or Enhance Human Intelligence and Perception

Artificial Intelligence (AI) models have increasingly been developed to imitate and enhance human cognitive abilities, which are the mental skills and processes that allow us to acquire and apply knowledge. These cognitive abilities include learning from experiences, understanding complex concepts, reasoning, problem-solving, decision-making, and adjusting to new situations.

AI systems can mimic human intelligence in numerous ways. One prominent method is Machine Learning (ML), a subset of AI that provides systems with the ability to learn and improve from experience without being explicitly programmed. Much like how a child learns to differentiate between various animals, ML algorithms can be trained on large datasets to identify patterns and make decisions. For instance, a machine learning model can learn to differentiate between spam and non-spam emails based on a training dataset.

AI not only mimics human intelligence but can also enhance it. For instance, AI can process vast amounts of data much quicker than a human, identifying patterns and relationships that might be missed by human analysts. This ability can be used in numerous fields, from healthcare to finance, to improve decision-making and predictions.

2. The Role of AI in Cognitive Science and Neuroscience

Cognitive science is an interdisciplinary field that studies how information is represented and transformed in the brain. It involves research from psychology, linguistics, philosophy, and computer science. Neuroscience, on the other hand, is a discipline that studies the structure and function of the nervous system, focusing on the brain and its impact on behavior and cognitive functions.

AI plays a crucial role in both these fields. In cognitive science, AI models can be used to simulate cognitive processes, helping to test hypotheses about how the mind works. For instance, AI models can simulate how humans process language, providing insights into how we understand and generate speech.

In neuroscience, AI can help in understanding how the brain works at a deeper level. For instance, neural networks, a type of AI model, are inspired by the structure and function of the brain. These models have layers of interconnected nodes, or "neurons," that process and pass on information, similar to how neurons work in the brain. By studying these models, neuroscientists can gain insights into how complex brain networks function.

3. Analysis of AI Use Cases in Areas such as Natural Language Processing, Image Recognition, etc.

AI has numerous practical applications across various fields, including natural language processing (NLP) and image recognition.

NLP is a subfield of AI that focuses on the interaction between computers and humans through natural language. It enables computers to understand, interpret, and generate human language in a valuable way. Use cases of NLP include voice assistants like Siri and Alexa, machine translation tools like Google Translate, and sentiment analysis in social media monitoring.

Image recognition, another application of AI, involves teaching computers to recognize images. This technology is commonly used in a variety of fields, including healthcare, where it can help diagnose diseases; in autonomous vehicles, where it enables the vehicle to recognize obstacles; and in social media, for tagging and categorizing photos. AI models, particularly Convolutional Neural Networks (CNNs), are at the core of these image recognition tasks.

AI's ability to mimic and enhance human intelligence and perception, its role in cognitive science and neuroscience, and its wide range of applications are significant aspects of the field. These topics provide a deeper understanding of how AI functions, its potential, and its impact across various sectors.
SEARCHING ALGORITHMS

In the field of Artificial Intelligence (AI), search strategies are algorithms or methods used to explore and traverse a problem space in order to find a solution. These strategies are employed when there is a need to navigate through a set of possible states, actions, or paths to reach a desired goal or solution.

Now, let's define informed and uninformed search strategies in detail:

Uninformed Search: Uninformed search, also known as blind search, refers to search strategies that do not have any additional information about the problem other than the available actions and the current state. These strategies explore the problem space in a systematic manner without utilizing any heuristics or prior knowledge specific to the problem.

Uninformed search algorithms typically involve visiting and expanding nodes in a search tree or graph based solely on the available actions and the order in which they are encountered. Examples of uninformed search strategies include:

➢ Breadth-First Search (BFS)
➢ Depth-First Search (DFS)
➢ Uniform Cost Search (UCS)
➢ Depth-Limited Search (DLS)
➢ Iterative Deepening Search (IDS)

Uninformed search is generally more time-consuming and less efficient compared to informed search, especially for complex problems.

Informed Search: Informed search, also known as heuristic search, involves utilizing additional knowledge or heuristics to guide the search process towards more promising paths or solutions. Heuristics are rules or estimates based on domain-specific knowledge that can guide the search algorithm to prioritize certain actions or paths over others.

Informed search algorithms make use of heuristic functions that estimate the desirability or quality of states or actions based on the available information. These functions provide a measure of how close a state or action is to the goal. Examples of informed search strategies include Best-First Search, Greedy Search, and A* (A-Star) Search.

Informed search strategies leverage heuristics to make more informed decisions and focus the search on paths that are likely to lead to the goal state more efficiently. However, the effectiveness of informed search heavily depends on the quality and accuracy of the heuristic function employed.
BREADTH FIRST SEARCH

Breadth-First Search (BFS) is an uninformed search algorithm used to traverse or explore a graph or tree in a breadth-wise manner. It systematically explores all the nodes at the same depth level before moving on to nodes at the next depth level.

BFS explores the graph in a level-by-level manner, gradually moving away from the starting node, and guarantees that all nodes at a given depth are visited before any deeper node.

Breadth-First Search is often used to find the shortest path or the minimum number of steps required to reach a goal state in an unweighted graph. It can also be used to explore or traverse a tree or graph systematically in a breadth-first fashion.

Note: Breadth-First Search is an uninformed search algorithm, which means it does not consider any additional information or heuristics specific to the problem. It simply explores the graph in a systematic manner based on the available connections and does not prioritize any particular path or node.
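Here is a minimal Python sketch of BFS on an adjacency-list graph. The graph itself is a made-up example; collections.deque provides the FIFO queue that gives BFS its level-by-level order.

# A minimal Breadth-First Search sketch.

from collections import deque

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": ["F"], "F": []}

def bfs(graph, start, goal):
    frontier = deque([[start]])          # queue of paths, shallowest first
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO: expand oldest (shallowest) path
        node = path[-1]
        if node == goal:
            return path                  # first hit = fewest edges (unweighted)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(bfs(graph, "A", "F"))              # ['A', 'C', 'F']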

UNIFORM COST SEARCH (UCS)

Uniform Cost Search is an uninformed search algorithm that explores a graph or tree by considering the cost associated with each path. The goal of Uniform Cost Search is to find the path with the lowest total cost from the start node to the goal node.

Imagine you're in a maze where each path has a different cost. Uniform Cost Search helps you find the path that requires the least amount of effort (or cost) to reach your destination.

Here's a simpler explanation of how Uniform Cost Search works:

1. You start at a specific location (the start node) in the maze.

2. You explore all the paths connected to the start node and determine their costs.

3. You choose the path with the lowest cost and move to the connected node.

4. At the new node, you again explore all the paths connected to it and compare their costs.

5. You continue this process, always selecting the path with the lowest cost, until you reach your destination (the goal node).

Uniform Cost Search ensures that you consider the cheapest paths at each step. It takes into account the cumulative cost from the start node to the current node and chooses the path with the smallest total cost.

This algorithm is particularly useful when you want to find the optimal path in terms of cost. It can be applied to various problems, such as finding the shortest route based on distance or finding the least expensive solution.

Remember, Uniform Cost Search focuses on cost rather than other factors like time or distance. It helps you make efficient decisions by prioritizing paths with lower cumulative costs at each step.

WHY DO WE CALL IT A "UNIFORM" COST SEARCH?

We call it "Uniform" Cost Search because the algorithm assigns a uniform or equal priority to all paths based on their costs. It treats each path equally and explores them in a systematic manner without any additional biases or preferences. Unlike other search algorithms that may prioritize certain paths based on heuristics or estimations, Uniform Cost Search focuses solely on the cost of the paths.

Since Uniform Cost Search considers the cost, can't we say it is a heuristic approach?

While Uniform Cost Search considers the cost associated with each path, it is still considered an uninformed search algorithm rather than a heuristic approach. In the context of search algorithms, heuristics typically refer to techniques that incorporate additional domain-specific knowledge or estimates to guide the search process. In Uniform Cost Search, the cost is considered as a known factor, but it does not involve any domain-specific knowledge or heuristics specific to the problem.

Uniform Cost Search does not rely on any prior knowledge or assumptions about the problem other than the given costs associated with each path. It explores the search space systematically by considering the accumulated costs, without any preconceived biases or estimations. The algorithm aims to find the path with the lowest total cost, but it does not utilize any heuristic functions or domain-specific insights.
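Here is a minimal Python sketch of Uniform Cost Search using heapq as the priority queue. The weighted graph is a made-up maze; the cheapest cumulative path is always expanded first, exactly as in steps 1-5 above.

# A minimal Uniform Cost Search sketch.

import heapq

graph = {"start": [("A", 2), ("B", 5)],
         "A": [("goal", 9)],
         "B": [("goal", 4)],
         "goal": []}

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]             # (cumulative cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest path so far
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph[node]:
            heapq.heappush(frontier,
                           (cost + step_cost, neighbour, path + [neighbour]))
    return None

print(uniform_cost_search(graph, "start", "goal"))  # (9, ['start', 'B', 'goal'])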

DEPTH FIRST SEARCH

Depth-First Search (DFS) is an uninformed search algorithm used to traverse or explore a graph or tree. It follows a depth-wise exploration approach, meaning it explores as far as possible along each branch before backtracking.

Here's a simplified explanation of how Depth-First Search works:

1. Start at a specific node in the graph or tree.

2. Explore as deeply as possible along each branch before backtracking. This means visiting a neighboring node and continuing the exploration from there until reaching a dead end.

3. Upon reaching a dead end, backtrack to the previous node and continue exploring other unvisited branches. This backtracking process allows for the exploration of alternative paths.

4. Repeat steps 2 and 3 until all nodes have been visited or the desired goal node is found.

Depth-First Search uses a stack (a LIFO, Last-In-First-Out data structure) to keep track of nodes to visit. When traversing a branch, new nodes are pushed onto the stack, and when backtracking, nodes are popped off the stack.
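Here is a minimal Python sketch of DFS with an explicit stack, matching the push/pop description above. The graph is the same made-up example used for BFS.

# A minimal Depth-First Search sketch using a LIFO stack.

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": ["F"], "F": []}

def dfs(graph, start, goal):
    stack = [[start]]                    # stack of paths; top = deepest so far
    visited = set()
    while stack:
        path = stack.pop()               # LIFO: keep going deeper first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in reversed(graph[node]):   # push so first child is on top
            if neighbour not in visited:
                stack.append(path + [neighbour])
    return None

print(dfs(graph, "A", "F"))   # ['A', 'B', 'E', 'F']: found deep, not shortest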
Breadth-First Search (BFS) Limitations:

1. Memory Requirements: BFS typically requires more memory compared to DFS. This is because BFS needs to store all the nodes at the current level in a queue, which can become memory-intensive if the branching factor is high or the search space is large.

2. Time Complexity: In the worst-case scenario, BFS can be slower than DFS. This is because BFS explores all nodes at each depth level before moving on to deeper levels, which may involve redundant exploration of nodes.

Depth-First Search (DFS) Limitations:

1. Completeness: DFS does not guarantee finding a solution if the search space contains cycles or infinite paths. If the solution exists deep within the search tree or graph and DFS explores an infinite path, it can get stuck and fail to find the solution.

2. Lack of Optimal Solution: DFS does not guarantee finding the optimal solution in terms of the shortest path. DFS may find a solution quickly, but it does not ensure that it is the best or shortest one.

• DFS is a search algorithm that can find a solution quickly, but it doesn't guarantee that the solution it finds is the best or shortest one.

• Imagine you're looking for a way to get from your home to a park. DFS might find a path that gets you to the park quickly, but it may not be the shortest or most efficient path. It could overlook a shorter route or take you on a longer detour.

• DFS focuses on exploring one path deeply before considering other paths, which can be helpful in certain scenarios. However, this depth-first approach doesn't always lead to the most optimal solution in terms of the shortest path or the most efficient way to reach your goal.
DEPTH LIMITED SEARCH

Depth-Limited Search is an algorithm that combines the advantages of breadth-first search (completeness) and depth-first search (space complexity) while addressing some of their limitations. It restricts the depth of exploration, ensuring that the search does not go beyond a certain depth level in the search tree or graph.

Here's a more detailed explanation:

1. Advantage of Breadth-First Search (BFS): BFS guarantees completeness, meaning it will find a solution if one exists. It explores all nodes at each depth level before moving on to deeper levels. However, BFS can have high space complexity because it needs to store all nodes at each level in memory.

2. Advantage of Depth-First Search (DFS): DFS is memory-efficient since it only needs to store information about the current path. It can quickly find a solution by exploring deep paths first. However, DFS lacks completeness, as it can get stuck in infinite paths or cycles and may not find a solution even if one exists.

3. Depth-Limited Search: Depth-Limited Search addresses the limitations of both BFS and DFS. It sets a predefined depth limit, beyond which the search does not explore further. By restricting the depth, it prevents DFS from getting stuck in infinite paths while still being memory-efficient like DFS.

4. Iterative Deepening Search: An improved version of Depth-Limited Search is Iterative Deepening Search. It performs multiple depth-limited searches, gradually increasing the depth limit with each iteration. It starts with a shallow depth limit and gradually increases it until a solution is found. This approach combines the completeness of BFS (with increasing depth limits) and the space efficiency of DFS.

By using Depth-Limited Search or Iterative Deepening Search, we can explore the search space effectively, ensuring completeness while maintaining memory efficiency. These strategies are especially useful when the depth of the search tree or graph is unknown or large.

In short, Depth-Limited Search is a variation of Depth-First Search (DFS) that limits the depth of exploration during a search process, which prevents infinite path exploration and helps overcome some of the limitations of DFS.

Here's a more detailed explanation of Depth-Limited Search:

1. Depth-First Search (DFS) Recap: DFS explores a path as deeply as possible before backtracking. It traverses through the graph or tree, moving from one node to another until it reaches a leaf node or a specified condition is met. DFS can get stuck in infinite paths or deep branches, which hinders its completeness.

2. Depth-Limited Search Approach: Depth-Limited Search introduces a predefined depth limit to restrict the depth of exploration. It allows DFS to proceed until a certain depth level and then forces it to backtrack and explore other paths. By limiting the depth, the algorithm avoids getting trapped in infinite paths and ensures termination.

3. Exploration Process: The Depth-Limited Search algorithm operates as follows:

a. Start at the initial node and set the depth limit (e.g., a maximum depth level or a specific depth threshold).

b. Perform DFS exploration until the depth limit is reached or a goal state is found.

c. If the depth limit is reached and a goal state is not found, backtrack to the previous node and explore other unvisited paths.

d. Repeat steps b and c until a solution is found or all paths within the depth limit have been explored.

4. Completeness: Depth-Limited Search is not complete in itself, as it can miss solutions if they lie beyond the depth limit. However, it can be combined with Iterative Deepening Search (IDS) to achieve completeness. IDS performs multiple Depth-Limited Searches with increasing depth limits, ensuring that the search explores all paths up to a certain depth.

Depth-Limited Search strikes a balance between the memory efficiency of DFS and the avoidance of infinite path exploration. It allows for efficient exploration within a limited depth range while still maintaining termination. The effectiveness of Depth-Limited Search depends on selecting an appropriate depth limit based on the problem's characteristics and search space complexity.

Note that Depth-Limited Search sacrifices completeness to gain efficiency and termination guarantees within a limited depth range. It is most suitable when the solution is expected to exist within a specific depth level or when the search space is vast and infinite path avoidance is necessary.
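Here is a minimal recursive Python sketch of Depth-Limited Search: ordinary DFS that refuses to go deeper than a preset limit, forcing backtracking at the cutoff. The graph is a made-up example.

# A minimal Depth-Limited Search sketch.

graph = {"A": ["B", "C"], "B": ["D"], "C": ["F"], "D": ["E"],
         "E": [], "F": []}

def depth_limited_search(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None                      # cutoff reached: force backtracking
    for neighbour in graph[node]:
        result = depth_limited_search(graph, neighbour, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

print(depth_limited_search(graph, "A", "E", limit=1))   # None: E lies deeper
print(depth_limited_search(graph, "A", "E", limit=3))   # ['A', 'B', 'D', 'E']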
ITERATIVE DEEPENING SEARCH
ensures termination.
Iterative Deepening Search (IDS) is an algorithm
3. Exploration Process: The Depth-Limited that combines the benefits of depth-first search
Search algorithm operates as follows: (DFS) and breadth-first search (BFS) by
performing multiple depth-limited searches with
a. Start at the initial node and set the
increasing depth limits. It aims to achieve
depth limit (e.g., a maximum depth level
completeness while maintaining the efficiency of
or a specific depth threshold).
DFS.
b. Perform DFS exploration until the
Here's a more detailed explanation of Iterative
depth limit is reached or a goal state is
Deepening Search:
found.
1. Depth-Limited Search: IDS begins by
c. If the depth limit is reached and a goal
performing a series of depth-limited
state is not found, backtrack to the
searches. In each iteration, the algorithm
previous node and explore other unvisited
applies depth-limited search with a
paths.
specific depth limit, starting from 1 and
d. Repeat steps b and c until a solution is incrementing the limit with each iteration.
found or all paths within the depth limit
2. Exploration Process:
have been explored.
a. Starting with a depth limit of 1,
4. Completeness: Depth-Limited Search is not
perform a depth-limited search from the
complete in itself, as it can miss solutions
initial node.
if they lie beyond the depth limit.
However, it can be combined with b. If the goal state is not found within
Iterative Deepening Search (IDS) to the depth limit, increase the depth limit
achieve completeness. IDS performs by 1 and repeat the depth-limited search.
multiple Depth-Limited Searches with
c. Repeat steps a and b until the goal
increasing depth limits, ensuring that the
state is found or the entire search space is
search explores all paths up to a certain
exhausted.
depth.
3. Benefits of Iterative Deepening: IDS
Depth-Limited Search strikes a balance between
overcomes the limitations of DFS and BFS
the memory efficiency of DFS and the avoidance
in different ways:
of infinite path exploration. It allows for efficient
exploration within a limited depth range while a. Completeness: By incrementally
still maintaining termination. The effectiveness of increasing the depth limit in each
Depth-Limited Search depends on selecting an iteration, IDS ensures that all nodes up to
Iterative Deepening Search (IDS) is a search strategy that combines the benefits of depth-first search (DFS) and breadth-first search (BFS) by performing multiple depth-limited searches with increasing depth limits. It aims to achieve completeness while maintaining the efficiency of DFS.

Here's a more detailed explanation of Iterative Deepening Search:

1. Depth-Limited Search: IDS begins by performing a series of depth-limited searches. In each iteration, the algorithm applies depth-limited search with a specific depth limit, starting from 1 and incrementing the limit with each iteration.

2. Exploration Process:

a. Starting with a depth limit of 1, perform a depth-limited search from the initial node.

b. If the goal state is not found within the depth limit, increase the depth limit by 1 and repeat the depth-limited search.

c. Repeat steps a and b until the goal state is found or the entire search space is exhausted.

3. Benefits of Iterative Deepening: IDS overcomes the limitations of DFS and BFS in different ways:

a. Completeness: By incrementally increasing the depth limit in each iteration, IDS ensures that all nodes up to a certain depth level are explored. It guarantees completeness, meaning that if a solution exists, IDS will eventually find it.

b. Memory Efficiency: IDS retains the memory efficiency of DFS since it only needs to store information about the current path being explored. It does not require the extensive memory usage of BFS, which stores all nodes at each depth level.

Iterative Deepening Search is a powerful search algorithm that combines the completeness of BFS and the memory efficiency of DFS. It is particularly useful when the search space is large and the depth of the optimal solution is unknown. IDS is guaranteed to find the shallowest solution (which is optimal in terms of path cost when all step costs are equal) if one exists, while avoiding excessive memory usage.
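Building on the depth_limited_search sketch above, here is a correspondingly minimal sketch of Iterative Deepening Search. The max_depth cap is an extra assumption added here so the illustration terminates even when no solution exists:-

# Sketch of Iterative Deepening Search, reusing depth_limited_search
# and the example graph from the previous sketch.

def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(1, max_depth + 1):  # depth limits 1, 2, 3, ...
        path = depth_limited_search(graph, start, goal, limit)
        if path is not None:
            return path  # goal found within the current limit
    return None  # search space exhausted up to max_depth

print(iterative_deepening_search(graph, 'A', 'E'))  # ['A', 'C', 'E']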
INFORMED SEARCH

Informed Search, also known as heuristic search, is a search strategy that utilizes additional information or heuristics specific to the problem domain to guide the search process. It improves search efficiency by prioritizing the most promising choices for exploration.

Here's a breakdown of Informed Search:

1. The Need for Informed Search: In many search problems, having additional information about the problem domain can greatly enhance the search process. If we can order or prioritize the choices based on their expected likelihood of leading to the goal state, the search becomes more efficient.

2. Heuristics and Focused Search: Informed search relies on domain knowledge or heuristics, which are rules, estimates, or insights that provide information about the problem domain. These heuristics help in determining which choices or paths are more likely to lead to the goal state, allowing for a focused search.

3. Heuristic Information: The additional information used in informed search is often referred to as heuristic information or heuristics. Heuristics provide an estimate of the cost or desirability associated with each state, guiding the search process. The heuristic information may not be entirely accurate but helps in making better decisions.

4. Best-First Search: Informed search is often implemented through a class of algorithms called Best-First Search. Best-First Search algorithms use heuristics to evaluate and prioritize the choices for exploration. They select the most promising choice or path based on the estimated desirability or cost and explore it further. Examples of Best-First Search algorithms include Greedy Search and A* (A-Star) Search.

Informed search allows the search algorithm to make informed decisions by leveraging heuristic information. By incorporating domain-specific knowledge, it directs the search towards more promising paths, potentially reducing the search effort and finding solutions more efficiently.

It's important to note that the effectiveness of informed search heavily depends on the quality and accuracy of the heuristic information used. Well-designed and accurate heuristics can significantly improve the search process, while poor or inaccurate heuristics may lead to suboptimal results.
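As a tiny illustration of heuristic information, consider a route-finding problem where each state carries an estimated distance to the goal; an informed search simply prefers the choice with the best estimate. The city names and numbers below are hypothetical:-

# Hypothetical heuristic table: estimated straight-line distance
# from each city to the goal city.
h = {'A': 10, 'B': 6, 'C': 3, 'Goal': 0}

def most_promising(frontier):
    # Order the available choices by their heuristic estimate;
    # the lowest estimated cost is explored first.
    return min(frontier, key=lambda city: h[city])

print(most_promising(['B', 'C']))  # 'C'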
HEURISTIC FUNCTION

Informed search is a strategy that helps an AI system search for a solution more efficiently by using information about the problem. It's like having a map or clues to guide you towards the right direction when you're searching for something. The AI system uses this information to make better decisions and explore the most promising options first.

Heuristic Information: Heuristic information is the type of information that the AI system uses to estimate the cost or value of different choices. It's like having hints or educated guesses about the best path to take. This information may not always be completely accurate, but it provides useful guidance to the AI system when making decisions.

Best-First Search: Best-first search is a specific type of informed search algorithm. It's like following a trail of breadcrumbs that leads to the most promising places. In best-first search, the AI system uses heuristic information to evaluate the available choices and selects the one that appears to be the most promising. It prioritizes exploring the options that are likely to lead to the goal state more quickly and efficiently.

To summarize, informed search is a strategy that helps the AI system make smarter decisions by using heuristic information. This information provides hints or estimates about the best choices to make. Best-first search is a specific type of informed search where the AI system selects the most promising option based on the heuristic information to reach the goal state more efficiently.

In best-first search algorithms, an important component is the heuristic function, denoted as h(n). The heuristic function provides an estimated cost of the cheapest path from the current state at node n to a goal state. It utilizes domain-specific knowledge to guide the search algorithm.

Heuristic functions play a crucial role in informed search by imparting additional knowledge about the problem. They estimate the goodness of a node based on the current state description and the domain-specific information available. By using the heuristic function, the search algorithm can make informed decisions and prioritize nodes that appear most promising in terms of reaching the goal state efficiently.

The evaluation function f(n) is defined as the sum of g(n) and h(n), where g(n) represents the cost of reaching a particular node from the starting state, and h(n) represents the estimated cost or distance from that node to the goal state. The evaluation function f(n) is used to compare and evaluate different nodes in the search tree.

Let's break down the components and explain the equation f(n) = g(n) + h(n) in simpler terms:

• g(n): In the context of a route-finding problem, g(n) represents the cost of reaching a particular node (or state) from the starting node. It is like keeping track of the total distance or cost traveled so far in the search path.

• h(n): In the context of a route-finding problem, h(n) represents the estimated cost or distance from a particular node to the goal state. It is like having an estimation of how much more distance or cost is needed to reach the goal.

• f(n): The evaluation function f(n) is the sum of g(n) and h(n). It combines the cost incurred so far (g(n)) with the estimated remaining cost to reach the goal (h(n)). The evaluation function helps in comparing and selecting nodes in the search process.

To put it simply, in a route-finding problem, g(n) represents the cost traveled so far, h(n) estimates the remaining cost to reach the goal, and f(n) is the sum of these costs. By evaluating f(n), the algorithm can prioritize nodes that have a lower f(n) value, indicating they are closer to the goal or have a better overall evaluation.

This equation is used in informed search algorithms like A* search, where the algorithm aims to minimize the total path cost by considering both the cost incurred so far and the estimated cost remaining. The algorithm evaluates and selects nodes based on their f(n) values to efficiently find the shortest path to the goal state.
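As a quick worked example with made-up numbers: suppose reaching a node n from the start has cost g(n) = 5, and the heuristic estimates h(n) = 7 remaining to the goal, so f(n) = 5 + 7 = 12. A competing node with g = 6 and h = 3 has f = 9, so it would be selected first, even though more cost has already been spent to reach it.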
Explored and Expanded Nodes

Here's a simplified explanation of the terms "explored" and "expanded" nodes:

1. Explored Node: An explored node refers to a node that has been visited during the search process. When the search algorithm encounters a node, it examines or evaluates it to determine its characteristics, such as its state, cost, or heuristic value. Once a node has been examined, it is marked as explored, indicating that it has been visited and its information has been taken into account.

2. Expanded Node: An expanded node refers to a node that has been selected for further exploration or expansion during the search process. When the search algorithm expands a node, it generates or creates its neighboring nodes based on the problem's rules or constraints. These neighboring nodes represent the possible next steps in the search. The expanded node becomes a parent node to its generated neighboring nodes.

In simpler terms, an explored node is a node that has been visited and examined during the search process. It means that the search algorithm has looked at its properties or characteristics. On the other hand, an expanded node is a node that has been selected for further exploration, and its neighboring nodes have been generated. The expanded node becomes the parent of its generated neighboring nodes.

Both explored and expanded nodes are essential in search algorithms as they help keep track of the progress of the search and guide the exploration of the search space.
BEST FIRST SEARCH (GREEDY SEARCH)

In the steps of the Best-First Search algorithm, several key concepts are involved. Let's go through them one by one:

1. Initial State: This is the starting point of the search. It represents the state from which the search process begins.

2. Goal State: This is the desired state that the search aims to reach. It defines the condition that signifies the solution to the problem.

3. Heuristic Function: A heuristic function is used to estimate how close a given node is to the goal state. It provides a measure of desirability or potential for each node. The heuristic function guides the search by prioritizing nodes based on their heuristic values.

4. Priority Queue: The priority queue is a data structure that stores the nodes during the search process. Nodes are inserted into the priority queue based on their heuristic values. The node with the highest priority (according to the heuristic function) is selected for expansion.

5. Expansion: When a node is selected from the priority queue, it is expanded. Expansion involves generating the neighboring nodes or states from the current node. These neighboring nodes represent the possible next steps in the search.

6. Evaluation: Each generated neighboring node is evaluated using the heuristic function. The heuristic value is computed to estimate the desirability of the node in reaching the goal state.

7. Insertion: The evaluated neighboring nodes are inserted into the priority queue, based on their heuristic values. The priority queue maintains the order of nodes, ensuring that the most promising nodes are explored first.

8. Termination: The search process continues until one of the following conditions is met:

• The goal state is found: If the selected node from the priority queue matches the goal state, the search terminates, and a solution is found.

• Open list becomes empty: If the priority queue (open list) is empty and no solution has been found, it indicates that there is no path to the goal state, and the search terminates without a solution.

Throughout the search process, the Best-First Search algorithm uses the heuristic function to guide the exploration, expanding the node with the most promising heuristic value first (for a cost-estimating heuristic, the lowest h(n)). The algorithm dynamically adjusts the priority queue based on the evaluation of the neighboring nodes, continually prioritizing the most promising options. This enables the algorithm to efficiently explore the search space and potentially find a solution close to the goal state.
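The steps above translate almost line for line into code. Here is a minimal Python sketch of Greedy Best-First Search; the adjacency-dict graph, the heuristic table h, and the example problem are assumptions made for illustration:-

import heapq

# Sketch of Greedy Best-First Search: the priority queue (open list)
# is ordered purely by the heuristic value h(n).

def greedy_best_first_search(graph, h, start, goal):
    open_list = [(h[start], start, [start])]  # (heuristic, node, path)
    explored = set()
    while open_list:  # termination: open list becomes empty
        _, node, path = heapq.heappop(open_list)  # most promising node
        if node == goal:  # termination: goal state found
            return path
        if node in explored:
            continue
        explored.add(node)
        for neighbour in graph.get(node, []):  # expansion
            if neighbour not in explored:
                # evaluation and insertion, keyed by h(neighbour)
                heapq.heappush(open_list,
                               (h[neighbour], neighbour, path + [neighbour]))
    return None  # no path to the goal state

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 2, 'B': 4, 'G': 0}
print(greedy_best_first_search(graph, h, 'S', 'G'))  # ['S', 'A', 'G']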
Properties of Best First Search

Here's a simplified explanation of the characteristics of the Best-First Search algorithm:

1. Completeness: Best-First Search is considered complete if repetition (looping) is controlled. Without proper control, it can get stuck in infinite loops and not find a solution. By managing repetition, it can explore the entire search space and find a solution if one exists.

2. Time Complexity: The time complexity of Best-First Search is represented as O(b^m), where 'b' is the branching factor (average number of successors per node) and 'm' is the maximum depth of the search space. In simpler terms, the time it takes to execute the algorithm grows exponentially with the branching factor and depth of the problem. However, if a good heuristic is used, it can dramatically improve the algorithm's performance by guiding it towards promising paths.

3. Space Complexity: The space complexity of Best-First Search is also represented as O(b^m). It means that the algorithm needs to keep all generated nodes in memory during the search process. The space required increases exponentially with the branching factor and depth of the search space.

4. Optimality: Best-First Search does not guarantee finding the optimal solution. Due to its reliance on heuristics, it can overlook certain paths or make suboptimal choices. While it can find a solution quickly, it may not be the best or optimal solution for the problem.

In simpler terms, Best-First Search explores the search space by prioritizing nodes based on their desirability according to a heuristic function. It can get stuck in loops without proper control, and the time and space required grow exponentially with the branching factor and depth of the problem. While it can find solutions quickly, they may not be the best ones in terms of optimality.
A* ALGORITHM

A* (pronounced "A-star") is a popular informed search algorithm that combines elements of both uniform cost search and best-first search. It is widely used in pathfinding and optimization problems. The A* algorithm guarantees finding the optimal solution, provided that certain conditions are met.

Here's a brief overview of how the A* algorithm works:

1. Initialize the open list with the initial state and set its cost to zero.

2. Initialize the closed list as an empty set.

3. While the open list is not empty:

a. Select the node with the lowest cost from the open list. This is determined by the sum of the cost to reach that node (known as g(n)) and the estimated cost from that node to the goal state (known as h(n)).

b. If the selected node is the goal state, terminate the search and return the solution.

c. Move the selected node from the open list to the closed list to mark it as visited.

d. Generate the neighboring nodes from the selected node.

e. For each neighboring node:

• Calculate its cost to reach (g(n)) by adding the cost from the initial state to the selected node and the cost from the selected node to the neighboring node.

• If the neighboring node is already in the closed list and the new cost is higher than the previous cost, skip this node.

• If the neighboring node is already in the open list and the new cost is lower than the previous cost, update the node's cost and set its parent as the selected node.

• If the neighboring node is not in the open list, calculate its estimated cost to the goal state (h(n)) and add it to the open list.

4. If the open list becomes empty without finding the goal state, then there is no solution.

The A* algorithm intelligently balances the cost of reaching a node (g(n)) and the estimated cost from that node to the goal state (h(n)). The heuristic function used in A* should be admissible, meaning it never overestimates the actual cost to reach the goal. If the heuristic is admissible, A* is guaranteed to find the optimal solution.

By considering both the actual cost and the estimated cost, A* can explore the search space efficiently and converge towards the optimal path. It intelligently prioritizes nodes with lower costs, making it more efficient than uninformed search algorithms like Breadth-First Search or Depth-First Search.
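Here is a matching Python sketch of A* that follows the outline above. For illustration it assumes the graph maps each node to (neighbour, step cost) pairs and that h is an admissible heuristic table; for simplicity it re-pushes duplicate queue entries instead of updating costs in place, and the closed set filters out the stale ones:-

import heapq

# Sketch of A*: the open list is ordered by f(n) = g(n) + h(n).

def a_star_search(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]  # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # lowest f(n) first
        if node == goal:
            return path, g  # solution and its total cost
        if node in closed:
            continue  # a cheaper route to this node was already expanded
        closed.add(node)  # mark as visited
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in closed:
                new_g = g + step_cost  # cost to reach the neighbour
                heapq.heappush(open_list,
                               (new_g + h[neighbour], new_g, neighbour,
                                path + [neighbour]))
    return None, float('inf')  # open list empty: no solution

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)],
         'B': [('G', 3)], 'G': []}
h = {'S': 6, 'A': 5, 'B': 3, 'G': 0}  # admissible: never overestimates
print(a_star_search(graph, h, 'S', 'G'))  # (['S', 'A', 'B', 'G'], 6)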
Properties of A* algorithm

Here's a simplified explanation of the characteristics of the A* algorithm:

1. Completeness: A* search is complete, which means it is guaranteed to find a solution if one exists. It will eventually reach the goal state and terminate the search process.

2. Time and Space Complexity: The time and space complexity of A* search can be exponential. It keeps all generated nodes in memory, and the amount of memory required grows exponentially with the size of the problem. The actual time and space complexity depend on the specific problem and the quality of the heuristic function used.

3. Optimality: A* search is optimal if the heuristic function used is admissible. An admissible heuristic never overestimates the actual cost to reach the goal state. For example, the straight-line distance between two cities never overestimates the actual road distance, so it is an admissible heuristic for route finding. When an admissible heuristic is used, A* is guaranteed to find the optimal solution, i.e., the shortest path from the initial state to the goal state.

In simpler terms, A* search is a complete algorithm that will find a solution if there is one. It keeps track of all generated nodes, which can require a significant amount of memory. The time and space complexity can be high, especially in worst-case scenarios. However, the actual performance depends on the problem size and the quality of the heuristic function. If an admissible heuristic is used, A* search is guaranteed to find the optimal solution, providing the shortest path to the goal state.
GAME AS A SEARCH PROBLEM

In artificial intelligence, games can be represented as search problems. Search algorithms are used to navigate through the game's decision tree, exploring possible moves and finding the optimal or best moves to achieve a desired outcome.

1. State: In a game, the state represents the current configuration or situation of the game at a specific point in time. It includes information such as the positions of game pieces, scores, available moves, and any other relevant game-specific data.

2. Initial State: The initial state is the starting point of the game. It represents the initial configuration of the game before any moves have been made.

3. Actions: Actions are the possible moves or decisions that a player can make at any given state of the game. These actions depend on the rules and mechanics of the specific game being played.

4. Successor Function: The successor function defines the result of applying an action to a state. It generates new states by applying legal moves to the current state of the game.

5. Goal State: The goal state represents the desired outcome or winning condition of the game. It specifies the condition that indicates a winning position, such as capturing all opponent's pieces, reaching a certain score, or fulfilling a specific objective.

6. Search Space: The search space refers to the entire set of possible states that can be reached by applying different actions from the initial state. It represents all the potential paths and configurations the game can take.

7. Search Algorithm: A search algorithm is used to navigate through the search space and find the best or optimal moves. Various search algorithms like Minimax, Alpha-Beta Pruning, or Monte Carlo Tree Search (MCTS) can be used depending on the characteristics of the game.

8. Evaluation Function: An evaluation function is often used to assign a value to a game state, indicating its desirability or quality. This function is typically used in heuristic-based search algorithms like Minimax with Alpha-Beta Pruning. The evaluation function estimates the strength or advantage of a particular game state for a player.

By representing a game as a search problem, artificial intelligence can utilize search algorithms to explore the potential moves and make informed decisions. The search algorithm analyzes the game states, considers different actions, and evaluates their outcomes to determine the best moves to achieve the desired outcome, such as winning the game or maximizing the player's advantage. A sketch of this representation for a concrete game follows below.
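To show how the components above look in practice, here is a small Python sketch that models tic-tac-toe (discussed further below) as a search problem; the tuple-of-nine-cells board representation and all names are assumptions made for this illustration:-

# State: a 3x3 board stored as a tuple of 9 cells.
initial_state = (' ',) * 9  # the initial state: an empty board

def actions(state):
    """Actions: the indices of the empty squares."""
    return [i for i, cell in enumerate(state) if cell == ' ']

def result(state, move, player):
    """Successor function: the state reached when 'player' marks 'move'."""
    board = list(state)
    board[move] = player
    return tuple(board)

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def is_goal(state, player):
    """Goal state test: has 'player' completed a line of three?"""
    return any(all(state[i] == player for i in line) for line in LINES)

s = result(initial_state, 4, 'X')  # X takes the centre square
print(actions(s))  # [0, 1, 2, 3, 5, 6, 7, 8]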
ADVERSARIAL SEARCH

Adversarial search is a type of search problem that arises when multiple agents or players are involved in a game or problem-solving scenario. In this case, each agent aims to find the best solution or make the best decisions while considering the actions and strategies of the other agents who are competing against them.

In traditional search problems, we typically focus on finding a solution that involves a single agent making a sequence of actions. However, in adversarial search, there are multiple agents with conflicting goals who are exploring the same search space.

This type of search is commonly encountered in game playing, where players compete against each other to achieve their individual objectives. Each player needs to think strategically, considering not only their own actions but also the potential actions and moves of their opponents. The decisions made by one player can have an impact on the performance and outcomes of the other players.

Adversarial search involves analyzing and evaluating the game states, considering the possible actions and strategies of both oneself and the opponents. The goal is to find the best moves or decisions that maximize one's own chances of winning or achieving their objectives while taking into account the actions and strategies of the other players.

In summary, adversarial search refers to the problem of finding the best moves or decisions in a game or multi-agent environment, where multiple players with conflicting goals are exploring the same search space and trying to outperform each other. It involves considering the actions and strategies of both oneself and the opponents to make informed and strategic decisions.

A Two Person Zero Sum Game

Here's a simplified explanation of the concepts related to 2-person zero-sum games and perfect information:

1. 2-Person Game: A 2-person game refers to a game in which two players take turns making moves or decisions. These players could be individuals, teams, or AI agents.

2. Zero-Sum Game: A zero-sum game is a type of game where the gains and losses of one player are perfectly balanced with the gains and losses of the other player. In other words, whatever one player gains, the other player loses, and vice versa. The total outcome of the game is zero-sum.

3. Evaluation Function (f(n)): An evaluation function is a mathematical function used to assess the quality or goodness of a particular position or state in the game. In a zero-sum game, a single evaluation function is used to describe the desirability of a board with respect to both players. Positive values indicate a position favorable to one player, negative values indicate a position favorable to the other player, and values near zero represent neutral positions.

4. +Infinity and -Infinity: In the context of the evaluation function, +infinity represents a winning position for one player (Player A), and -infinity represents a winning position for the other player (Player B). These values indicate an overwhelmingly advantageous position for the corresponding player.

5. Perfect Information: Perfect information refers to a type of game where both players have access to complete and accurate information about the state of the game. There are no hidden or unknown elements. Both players are fully aware of the current state and past moves made by themselves and the opponent.

6. Sequential Decision-Making: In perfect-information games, players take turns making decisions. When it's a player's turn to act, they have the opportunity to observe the current state of the game before making their move. This allows them to make informed decisions based on the available information.

In simpler terms, a 2-person zero-sum game is a game where two players take turns making moves, and whatever one player gains, the other player loses. An evaluation function is used to assess the quality of positions, with positive values indicating advantage for one player and negative values indicating advantage for the other player. Perfect information means that both players have complete knowledge of the game state. In such games, players make decisions sequentially, observing the current state before choosing their moves.

Example of an adversarial game: Tic-tac-toe

Tic-tac-toe is a classic adversarial game played on a grid of squares, typically a 3x3 grid. The objective of the game is to be the first player to create a straight line of three of their own symbols in a row, column, or diagonal.

The game is played by two players, usually referred to as player X and player O. Player X always makes the first move by placing an X symbol in any of the open squares on the board. Then, player O takes their turn and places an O symbol in any remaining open square. The players continue taking turns, each marking their respective symbol in an empty square.

The game ends in one of three ways :-

1. If one player successfully creates a line of three of their symbols (X or O) in a row, column, or diagonal, they win the game.

2. If all the squares on the board are filled with symbols, and no player has achieved a winning line, the game is considered a draw or tie.

3. If a winning line is not achieved, but there are still empty squares on the board, the game continues until a winning line is formed or the board is filled, resulting in a draw.
In summary, Tic-tac-toe is a simple game played on a grid where two players take turns marking X and O symbols on empty squares. The goal is to create a line of three symbols in a row, column, or diagonal. The first player to achieve this wins, and if no player accomplishes this and all squares are filled, the game ends in a draw.
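Tying this back to the zero-sum evaluation function described earlier, here is a sketch of f(n) for tic-tac-toe that reuses is_goal and LINES from the sketch above: +infinity marks a win for Player A (X), -infinity a win for Player B (O), and values near zero neutral positions. The open-lines count used for non-terminal boards is an assumption chosen just for illustration:-

import math

def evaluate(state):
    """Zero-sum evaluation: positive favours X, negative favours O."""
    if is_goal(state, 'X'):
        return math.inf   # winning position for Player A
    if is_goal(state, 'O'):
        return -math.inf  # winning position for Player B
    # Heuristic for non-terminal boards: lines still open to each player.
    open_x = sum(1 for line in LINES if all(state[i] != 'O' for i in line))
    open_o = sum(1 for line in LINES if all(state[i] != 'X' for i in line))
    return open_x - open_o

print(evaluate(initial_state))  # 0: the empty board is neutral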

GAME SEARCH TECHNIQUES

MIN-MAX ALGORITHM

ALPHA-BETA PRUNING
