
UNIT 1

1.1 INTRODUCTION TO AI
Intelligence:
The ability to learn and solve problems. This definition is taken from Webster's Dictionary.
To the question of what AI aims at, the most common answer is "to make computers intelligent so
that they can act intelligently!" But how intelligent, and how can one judge intelligence? The
usual benchmark is: as intelligent as humans. If computers can, somehow, solve real-world
problems by improving on their own from past experience, they would be called "intelligent".
Thus, AI systems are more generic (rather than specific), can "think", and are more flexible.
Intelligence, as we know, is the ability to acquire and apply knowledge. Knowledge is the
information acquired through experience. Experience is the knowledge gained through exposure
(training). Summing these terms up, we get artificial intelligence as a "copy of something
natural (i.e., human beings) that is capable of acquiring and applying the information it has
gained through exposure."

Definition:
"It is a branch of computer science by which we can create intelligent machines which can
behave like a human, think like humans, and able to make decisions."

Intelligence is composed of:

 Reasoning
 Learning
 Problem-Solving
 Perception
 Linguistic Intelligence

Need for Artificial Intelligence:

1. To create expert systems that exhibit intelligent behavior, with the capability to learn,
demonstrate, explain, and advise their users.
2. To help machines find solutions to complex problems the way humans do, and to apply this
ability as algorithms in a computer-friendly manner.
3. Improved efficiency: Artificial intelligence can automate tasks and processes that are
time-consuming and require a lot of human effort. This can help improve efficiency and
productivity, allowing humans to focus on more creative and high-level tasks.
4. Better decision-making: Artificial intelligence can analyze large amounts of data and
provide insights that can aid in decision-making. This can be especially useful in domains
like finance, healthcare, and logistics, where decisions can have significant impacts on
outcomes.
5. Enhanced accuracy: Artificial intelligence algorithms can process data quickly and
accurately, reducing the risk of errors that can occur in manual processes. This can
improve the reliability and quality of results.
6. Personalization: Artificial intelligence can be used to personalize experiences for users,
tailoring recommendations, and interactions based on individual preferences and
behaviors. This can improve customer satisfaction and loyalty.
7. Exploration of new frontiers: Artificial intelligence can be used to explore new frontiers
and discover new knowledge that is difficult or impossible for humans to access. This
can lead to new breakthroughs in fields like astronomy, genetics, and drug discovery.

Disciplines Contributing to AI:

To achieve the above capabilities in a machine or software, Artificial Intelligence draws on
the following disciplines:

o Mathematics
o Biology
o Psychology
o Sociology
o Computer Science
o Neuroscience
o Statistics
Goals of AI:

1. Logical Reasoning

AI programs enable computers to perform sophisticated reasoning tasks. On February 10, 1996,
IBM's Deep Blue computer won a game of chess against the reigning world champion, Garry
Kasparov.

2. Knowledge Representation

Smalltalk is an object-oriented, dynamically typed, reflective programming language that was
created to underpin the "new world" of computing exemplified by "human-computer symbiosis."

3. Planning and Navigation

The process of enabling a computer to get from point A to point B. A prime example of this
is Google’s self-driving Toyota Prius.

4. Natural Language Processing

Set up computers that can understand and process language.

5. Perception

Use computers to interact with the world through sight, hearing, touch, and smell.

6. Emergent Intelligence

Intelligence that is not explicitly programmed, but emerges from the rest of the specific AI
features. The vision for this goal is to have machines exhibit emotional intelligence and
moral reasoning.
Some of the tasks performed by AI-enabled devices include:

 Speech recognition

 Object detection

 Solving problems and learning from the given data

 Planning an approach for future tests to be done

Technologies Based on Artificial Intelligence:


1. Machine Learning: A subfield of AI that uses algorithms to enable systems to learn from
data and make predictions or decisions without being explicitly programmed.
2. Natural Language Processing (NLP): A branch of AI that focuses on enabling
computers to understand, interpret, and generate human language.
3. Computer Vision: A field of AI that deals with the processing and analysis of visual
information using computer algorithms.
4. Robotics: AI-powered robots and automation systems that can perform tasks in
manufacturing, healthcare, retail, and other industries.
5. Neural Networks: A type of machine learning algorithm modeled after the structure and
function of the human brain.
6. Expert Systems: AI systems that mimic the decision-making ability of a human expert
in a specific field.
7. Chatbots: AI-powered virtual assistants that can interact with users through text-based
or voice-based interfaces.

Advantages of AI:

 It reduces human error

 It never sleeps, so it’s available 24x7

 It never gets bored, so it easily handles repetitive tasks

 It’s fast

 Good at detail-oriented jobs. AI has proven to be just as good as, if not better than,
doctors at diagnosing certain cancers, including breast cancer and melanoma.

 Reduced time for data-heavy tasks. AI is widely used in data-heavy industries, including
banking and securities, pharma and insurance, to reduce the time it takes to analyze big data
sets. Financial services, for example, routinely use AI to process loan applications and
detect fraud.

 Saves labor and increases productivity. An example here is the use of warehouse
automation, which grew during the pandemic and is expected to increase with the
integration of AI and machine learning.

 Delivers consistent results. The best AI translation tools deliver high levels of
consistency, offering even small businesses the ability to reach customers in their native
language.
 Can improve customer satisfaction through personalization. AI can personalize
content, messaging, ads, recommendations and websites to individual customers.

 AI-powered virtual agents are always available. AI programs do not need to sleep
or take breaks, providing 24/7 service.

Artificial Intelligence Examples

Artificial Intelligence (AI) has become an integral part of our daily lives, revolutionizing
various industries and enhancing user experiences. Here are some notable examples of AI
applications:

ChatGPT

ChatGPT is an advanced language model developed by OpenAI, capable of generating human-like
responses and engaging in natural language conversations. It uses deep learning techniques to
understand and generate coherent text, making it useful for customer support, chatbots, and
virtual assistants.

Google Maps

Google Maps utilizes AI algorithms to provide real-time navigation, traffic updates, and
personalized recommendations. It analyzes vast amounts of data, including historical traffic
patterns and user input, to suggest the fastest routes, estimate arrival times, and even predict
traffic congestion.

Smart Assistants

Smart assistants like Amazon's Alexa, Apple's Siri, and Google Assistant employ AI
technologies to interpret voice commands, answer questions, and perform tasks. These
assistants use natural language processing and machine learning algorithms to understand user
intent, retrieve relevant information, and carry out requested actions.

Snapchat Filters

Snapchat's augmented reality filters, or "Lenses," incorporate AI to recognize facial features,
track movements, and overlay interactive effects on users' faces in real time. AI algorithms
enable Snapchat to apply various filters, masks, and animations that align with the user's
facial expressions and movements.
Self-Driving Cars

Self-driving cars rely heavily on AI for perception, decision-making, and control. Using a
combination of sensors, cameras, and machine learning algorithms, these vehicles can detect
objects, interpret traffic signs, and navigate complex road conditions autonomously, enhancing
safety and efficiency on the roads.

Wearables

Wearable devices, such as fitness trackers and smartwatches, utilize AI to monitor and analyze
users' health data. They track activities, heart rate, sleep patterns, and more, providing
personalized insights and recommendations to improve overall well-being.

MuZero

MuZero is an AI algorithm developed by DeepMind that combines reinforcement learning and deep
neural networks. It has achieved remarkable success in playing complex board games like chess,
Go, and shogi at a superhuman level. MuZero learns and improves its strategies through
self-play and planning.

These examples demonstrate the wide-ranging applications of AI, showcasing its potential to
enhance our lives, improve efficiency, and drive innovation across various industries.
1.2 APPLICATIONS OF AI

 Natural Language Processing (NLP)

AI is used in NLP to analyze and understand human language. It powers applications such as
speech recognition, machine translation, sentiment analysis, and virtual assistants like Siri and
Alexa.

 Image and Video Analysis

AI techniques, including computer vision, enable the analysis and interpretation of images and
videos. This finds application in facial recognition, object detection and tracking, content
moderation, medical imaging, and autonomous vehicles.

 Robotics and Automation

AI plays a crucial role in robotics and automation systems. Robots equipped with AI algorithms
can perform complex tasks in manufacturing, healthcare, logistics, and exploration. They can
adapt to changing environments, learn from experience, and collaborate with humans.

 Recommendation Systems

AI-powered recommendation systems are used in e-commerce, streaming platforms, and social
media to personalize user experiences. They analyze user preferences, behavior, and historical
data to suggest relevant products, movies, music, or content.

 Financial Services

AI is extensively used in the finance industry for fraud detection, algorithmic trading, credit
scoring, and risk assessment. Machine learning models can analyze vast amounts of financial
data to identify patterns and make predictions.

 Healthcare

AI applications in healthcare include disease diagnosis, medical imaging analysis, drug
discovery, personalized medicine, and patient monitoring. AI can assist in identifying patterns
in medical data and provide insights for better diagnosis and treatment.
 Virtual Assistants and Chatbots

AI-powered virtual assistants and chatbots interact with users, understand their queries, and
provide relevant information or perform tasks. They are used in customer support, information
retrieval, and personalized assistance.

 Gaming

AI algorithms are employed in gaming for creating realistic virtual characters, opponent
behavior, and intelligent decision-making. AI is also used to optimize game graphics, physics
simulations, and game testing.

 Smart Homes and IoT

AI enables the development of smart home systems that can automate tasks, control devices,
and learn from user preferences. AI can enhance the functionality and efficiency of Internet of
Things (IoT) devices and networks.

 Cybersecurity

AI helps in detecting and preventing cyber threats by analyzing network traffic, identifying
anomalies, and predicting potential attacks. It can enhance the security of systems and data
through advanced threat detection and response mechanisms.

These are just a few examples of how AI is applied in various fields. The potential of AI is
vast, and its applications continue to expand as technology advances.

AI in healthcare

The biggest bets are on improving patient outcomes and reducing costs. Companies
are applying machine learning to make better and faster medical diagnoses than
humans. One of the best-known healthcare technologies is IBM Watson. It
understands natural language and can respond to questions asked of it. The system
mines patient data and other available data sources to form a hypothesis, which it
then presents with a confidence scoring schema. Other AI applications include using
online virtual health assistants and chatbots to help patients and healthcare
customers find medical information, schedule appointments, understand the billing
process and complete other administrative processes. An array of AI technologies is
also being used to predict, fight and understand pandemics such as COVID-19.
AI in business

Machine learning algorithms are being integrated into analytics and customer
relationship management (CRM) platforms to uncover information on how to better
serve customers. Chatbots have been incorporated into websites to provide
immediate service to customers. The rapid advancement of generative AI technology
such as ChatGPT is expected to have far-reaching consequences: eliminating jobs,
revolutionizing product design and disrupting business models.

AI in education

AI can automate grading, giving educators more time for other tasks. It can assess
students and adapt to their needs, helping them work at their own pace. AI tutors can
provide additional support to students, ensuring they stay on track. The technology
could also change where and how students learn, perhaps even replacing some
teachers. As demonstrated by ChatGPT, Google Bard and other large language
models, generative AI can help educators craft course work and other teaching
materials and engage students in new ways. The advent of these tools also forces
educators to rethink student homework and testing and revise policies on plagiarism.

AI in finance

AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting financial
institutions. Applications such as these collect personal data and provide financial advice.
Other programs, such as IBM Watson, have been applied to the process of buying a home. Today,
artificial intelligence software performs much of the trading on Wall Street.

AI in law

The discovery process -- sifting through documents -- in law is often overwhelming for humans.
Using AI to help automate the legal industry's labor-intensive processes is saving time and
improving client service. Law firms use machine learning to describe data and predict outcomes,
computer vision to classify and extract information from documents, and NLP to interpret
requests for information.
AI in entertainment and media

The entertainment business uses AI techniques for targeted advertising, recommending content,
distribution, detecting fraud, creating scripts and making movies. Automated journalism helps
newsrooms streamline media workflows, reducing time, costs and complexity. Newsrooms use AI to
automate routine tasks, such as data entry and proofreading, and to research topics and assist
with headlines.
How journalism can reliably use ChatGPT and other generative AI to generate
content is open to question.

AI in software coding and IT processes

New generative AI tools can be used to produce application code based on natural
language prompts, but it is early days for these tools and unlikely they will replace
software engineers soon. AI is also being used to automate many IT processes,
including data entry, fraud detection, customer service, and predictive maintenance
and security.

Security

AI and machine learning are at the top of the buzzword list security vendors use to
market their products, so buyers should approach with caution. Still, AI techniques
are being successfully applied to multiple aspects of cybersecurity, including
anomaly detection, solving the false-positive problem and conducting behavioral
threat analytics. Organizations use machine learning in security information and
event management (SIEM) software and related areas to detect anomalies and
identify suspicious activities that indicate threats. By analyzing data and using logic
to identify similarities to known malicious code, AI can provide alerts to new and
emerging attacks much sooner than human employees and previous technology
iterations.

AI in manufacturing

Manufacturing has been at the forefront of incorporating robots into the workflow.
For example, industrial robots that were at one time programmed to perform single tasks and
separated from human workers increasingly function as cobots: smaller, multitasking robots
that collaborate with humans and take on responsibility for more parts of the job in
warehouses, on factory floors and in other workspaces.

AI in banking

Banks are successfully employing chatbots to make their customers aware of services and
offerings and to handle transactions that don't require human intervention. AI virtual
assistants are used to improve and cut the costs of compliance with banking regulations.
Banking organizations use AI to improve their decision-making for loans, set credit limits
and identify investment opportunities.

AI in transportation

In addition to AI's fundamental role in operating autonomous vehicles, AI technologies are used
in transportation to manage traffic, predict flight delays, and make ocean shipping safer and
more efficient. In supply chains, AI is replacing
make ocean shipping safer and more efficient. In supply chains, AI is replacing
traditional methods of forecasting demand and predicting disruptions, a trend
accelerated by COVID-19 when many companies were caught off guard by the
effects of a global pandemic on the supply and demand of goods.
1.3 PROBLEM SOLVING AGENTS

Problem-Solving Approach in Artificial Intelligence Problems:

The reflex agents are known as the simplest agents because they directly map states into actions.
Unfortunately, these agents fail to operate in an environment where the mapping is too large to
store and learn. A goal-based agent, on the other hand, considers future actions and the
desired outcomes.

One type of goal-based agent is the problem-solving agent, which uses an atomic representation,
that is, states with no internal structure visible to the problem-solving algorithms.

Problem-solving agent:

The problem-solving agent acts by precisely defining problems and their several possible
solutions.

 According to psychology, "problem-solving refers to a state where we wish to reach a
definite goal from a present state or condition."
 According to computer science, problem-solving is a part of artificial intelligence which
encompasses a number of techniques, such as algorithms and heuristics, to solve a problem.
Therefore, a problem-solving agent is a goal-driven agent and focuses on satisfying the goal.

Problem Definition:
To build a system to solve a particular problem, we need to do four things:
(i) Define the problem precisely. This definition must include a specification of the initial
situations and also of the final situations which constitute an acceptable solution to the
problem.
(ii) Analyze the problem, i.e., identify the important features which have an immense impact
on the appropriateness of various techniques for solving the problem.
(iii) Isolate and represent the knowledge needed to solve the problem.
(iv) Choose the best problem-solving technique and apply it to the particular problem.
Steps performed by Problem-solving agent:

 Goal Formulation: It is the first and simplest step in problem-solving. It organizes the
steps/sequence required to formulate one goal out of multiple goals, as well as the actions
needed to achieve that goal. Goal formulation is based on the current situation and the
agent's performance measure (discussed below).
 Problem Formulation: It is the most important step of problem-solving, which decides what
actions should be taken to achieve the formulated goal. The following five components are
involved in problem formulation (see the code sketch after this list):
 Initial State: It is the starting state or initial step of the agent towards its goal.
 Actions: It is the description of the possible actions available to the agent.
 Transition Model: It describes what each action does.
 Goal Test: It determines whether the given state is a goal state.
 Path Cost: It assigns a numeric cost to each path that leads to the goal. The
problem-solving agent selects a cost function which reflects its performance measure.
Remember, an optimal solution has the lowest path cost among all the solutions.
Note: The initial state, actions, and transition model together define the state-space of the
problem implicitly. The state-space of a problem is the set of all states which can be reached
from the initial state by any sequence of actions. The state-space forms a directed map or
graph in which the nodes are states, the links between the nodes are actions, and a path is a
sequence of states connected by a sequence of actions.

 Search: It identifies the best possible sequence of actions to reach the goal state from
the current state. It takes a problem as input and returns a solution as its output.
 Solution: It finds the best algorithm out of various algorithms, which may be proven as
the best optimal solution.
 Execution: It executes the best optimal solution given by the searching algorithm to reach
the goal state from the current state.
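To make these five components concrete, here is a minimal Python sketch of a problem
definition. The class name Problem and its method names are our own illustration, not a
specific library's API, and the default step cost of 1 is just an assumption.

class Problem:
    """Minimal problem definition with the five components described above."""

    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state   # Initial State
        self.goal_state = goal_state

    def actions(self, state):
        """Actions: the moves available to the agent in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition Model: the state produced by applying `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Goal Test: is `state` a goal state?"""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """Path Cost is the sum of step costs; each step costs 1 by default."""
        return 1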
Example Problems

Basically, there are two types of problems:

 Toy Problem: It is a concise and exact description of a problem which is used by
researchers to compare the performance of algorithms.
 Real-world Problem: These are problems based on the real world which require solutions.
Unlike a toy problem, a real-world problem does not depend on a single agreed description,
but we can have a general formulation of the problem.

Some Toy Problems

8 Puzzle Problem:

 Here, we have a 3×3 matrix with movable tiles numbered from 1 to 8 and one blank space.
The tile adjacent to the blank space can slide into that space. The objective is to reach
the specified goal state, as shown in the figure below.
 In the figure, our task is to convert the current (start) state into the goal state by
sliding digits into the blank space.
The problem formulation is as follows (a code sketch follows this list):

 States: A state describes the location of each numbered tile and the blank tile.
 Initial State: We can start from any state as the initial state.

 Actions: Here, the actions of the blank space are defined, i.e., either Left, Right, Up
or Down.
 Transition Model: It returns the resulting state for a given state and action.
 Goal Test: It identifies whether we have reached the goal state.
 Path Cost: The path cost is the number of steps in the path, where the cost of each step
is 1.
Note: The 8-puzzle problem is a type of sliding-block problem which is used for testing new
search algorithms in artificial intelligence.
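As an illustration of this formulation, the sketch below encodes 8-puzzle states as 3×3 tuples
with 0 standing for the blank; this particular encoding and the function names are our own
choices, not a fixed convention.

# Hypothetical 8-puzzle move generator: states are 3x3 tuples, 0 marks the blank.
GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))

def find_blank(state):
    """Return the (row, column) position of the blank tile."""
    return next((i, j) for i in range(3) for j in range(3) if state[i][j] == 0)

def actions(state):
    """Moves of the blank space that stay inside the 3x3 board."""
    r, c = find_blank(state)
    moves = []
    if r > 0: moves.append("Up")
    if r < 2: moves.append("Down")
    if c > 0: moves.append("Left")
    if c < 2: moves.append("Right")
    return moves

def result(state, action):
    """Transition model: slide the adjacent tile into the blank space."""
    r, c = find_blank(state)
    dr, dc = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}[action]
    grid = [list(row) for row in state]
    grid[r][c], grid[r + dr][c + dc] = grid[r + dr][c + dc], grid[r][c]
    return tuple(tuple(row) for row in grid)

def goal_test(state):
    """Goal test: have we reached the goal configuration?"""
    return state == GOAL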

8-queens problem:

The aim of this problem is to place eight queens on a chessboard in such a way that no queen
can attack another. A queen can attack other queens either diagonally or in the same row or
column.
From the following figure, we can understand the problem as well as one correct solution.

It can be seen from the figure that each queen is placed on the chessboard in a position where
no other queen lies on the same diagonal, row or column. Therefore, it is one correct approach
to the 8-queens problem.
For this problem, there are two main kinds of formulation:

1. Incremental formulation: It starts from an empty state, and the operator adds a queen at
each step.
The following steps are involved in this formulation (a backtracking sketch is given after the
two formulations):

 States: Any arrangement of 0 to 8 queens on the chessboard.

 Initial State: An empty chessboard.

 Actions: Add a queen to any empty square.
 Transition Model: Returns the chessboard with the queen added in a square.
 Goal Test: Checks whether 8 queens are placed on the chessboard with no queen attacking
another.
 Path Cost: There is no need for a path cost because only final states are counted.

In this formulation, there are approximately 1.8 × 10^14 possible sequences to investigate.

2. Complete-state formulation: It starts with all 8 queens on the chessboard and moves them
around, keeping them safe from attacks.

The following steps are involved in this formulation:

 States: Arrangements of all 8 queens, one per column, with no queen attacking another
queen.
 Actions: Move a queen to a location where it is safe from attacks.
This formulation is better than the incremental formulation, as it reduces the state space
from 1.8 × 10^14 to 2,057, and it is easier to find the solutions.
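A minimal backtracking sketch of the 8-queens idea is given below. It assumes, for simplicity,
that one queen is placed per column and that only non-attacking placements are extended; it is
an illustration of the search, not an exact rendering of either formulation above.

def attacks(q1, q2):
    """Two queens attack each other if they share a row or a diagonal
    (columns are distinct by construction)."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def place_queens(n=8, placed=()):
    """Start from an empty board and add one queen per column,
    backtracking whenever a placement is attacked."""
    col = len(placed)                 # next empty column
    if col == n:                      # goal test: all n queens placed without attacks
        return placed
    for row in range(n):
        queen = (row, col)
        if all(not attacks(queen, q) for q in placed):
            solution = place_queens(n, placed + (queen,))
            if solution:
                return solution
    return None                       # dead end; backtrack

print(place_queens())                 # one valid arrangement of 8 non-attacking queens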

Some Real-world problems

 Traveling salesperson problem (TSP): It is a touring problem where the salesman can visit
each city only once. The objective is to find the shortest tour and sell the stuff in each
city.
 VLSI layout problem: In this problem, millions of components and connections are
positioned on a chip so as to minimize the area, circuit delays and stray capacitances, and
to maximize the manufacturing yield.
The layout problem is split into two parts:

 Cell layout: Here, the primitive components of the circuit are grouped into cells, each
performing its specific function. Each cell has a fixed shape and size. The task is to place
the cells on the chip without overlapping each other.
 Channel routing: It finds a specific route for each wire through the gaps between the
cells.

 Protein design: The objective is to find a sequence of amino acids which will fold into a
3D protein having a property to cure some disease.


Searching for solutions:

We have seen many problems. Now, there is a need to search for solutions to solve them.
In this section, we will understand how searching can be used by the agent to solve a problem.

For solving different kinds of problems, an agent makes use of different strategies to reach
the goal by searching for the best possible sequence of actions. This process of searching is
known as a search strategy.
1.4 SEARCH ALGORITHMS

Search algorithms are one of the most important areas of Artificial Intelligence. This topic will
explain all about the search algorithms in AI.

Problem-solving agents:

In Artificial Intelligence, search techniques are universal problem-solving methods. Rational
agents or problem-solving agents in AI mostly use these search strategies or algorithms to
solve a specific problem and provide the best result. Problem-solving agents are goal-based
agents and use an atomic representation. In this topic, we will learn various problem-solving
search algorithms.

Search Algorithm Terminologies:

Search: Searching is a step-by-step procedure to solve a search problem in a given search
space.
A search problem can have three main factors:
Search Space: The search space represents the set of possible solutions which a system may
have.
Start State: It is the state from which the agent begins the search.
Goal Test: It is a function which observes the current state and returns whether the goal
state has been achieved or not.
Search Tree: A tree representation of a search problem is called a search tree. The root of
the search tree is the root node, which corresponds to the initial state.
Actions: It gives the description of all the actions available to the agent.
Transition Model: A description of what each action does; it can be represented as a
transition model.
Path Cost: It is a function which assigns a numeric cost to each path.
Solution: It is an action sequence which leads from the start node to the goal node.
Optimal Solution: A solution is optimal if it has the lowest cost among all solutions.

Properties of Search Algorithms:


The following are the four essential properties of search algorithms, used to compare their
efficiency:
Completeness: A search algorithm is said to be complete if it is guaranteed to return a
solution whenever at least one solution exists for any random input.
Optimality: If the solution found by an algorithm is guaranteed to be the best solution
(lowest path cost) among all other solutions, then such a solution is said to be an optimal
solution.
Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its
task.
Space Complexity: It is the maximum storage space required at any point during the search,
expressed in terms of the complexity of the problem.

Types of search algorithms:


Based on the search problem, we can classify search algorithms into uninformed (blind) search
and informed (heuristic) search algorithms.

Uninformed/Blind Search:
Uninformed search does not use any domain knowledge, such as closeness to or the location of
the goal. It operates in a brute-force way, as it only includes information about how to
traverse the tree and how to identify leaf and goal nodes. Uninformed search explores the
search tree without any information about the search space, such as the initial state,
operators and tests for the goal, so it is also called blind search. It examines each node of
the tree until it reaches the goal node.
It can be divided into five main types:
 Breadth-first search
 Uniform cost search
 Depth-first search
 Iterative deepening depth-first search
 Bidirectional Search
Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem information is
available which can guide the search. Informed search strategies can find a solution more
efficiently than an uninformed search strategy. Informed search is also called heuristic
search.
A heuristic is a technique which is not always guaranteed to find the best solution but is
guaranteed to find a good solution in a reasonable time.
Informed search can solve much more complex problems which could not be solved in any other
way. An example of a problem tackled with informed search is the traveling salesman problem.
 Greedy Search
 A* Search
1.5 UNINFORMED SEARCH STRATEGIES
Uninformed search is a class of general-purpose search algorithms which operates in brute
force-way. Uninformed search algorithms do not have additional information about state or
search space other than how to traverse the tree, so it is also called blind search.
Following are the various types of uninformed search algorithms:
 Breadth-first Search
 Depth-first Search
 Depth-limited Search
 Iterative deepening depth-first search
 Uniform cost search
 Bidirectional Search
1. Breadth-first Search:
 Breadth-first search is the most common search strategy for traversing a tree or graph.
This algorithm searches breadthwise in a tree or graph, so it is called breadth-first
search.
 The BFS algorithm starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to nodes of the next level.
 The breadth-first search algorithm is an example of a general-graph search algorithm.
 Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
BFS will provide a solution if any solution exists.
If there is more than one solution for a given problem, then BFS will provide the minimal
solution, i.e., the one which requires the least number of steps.
Disadvantages:
It requires a lot of memory, since each level of the tree must be saved in memory in order to
expand the next level.
BFS needs a lot of time if the solution is far away from the root node.
Example:
In the below tree structure, we have shown the traversal of the tree using the BFS algorithm
from the root node S to the goal node K. The BFS algorithm traverses in layers, so it will
follow the path shown by the dotted arrow, and the traversed path will be:

S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of the BFS algorithm is given by the number of nodes
traversed by BFS until the shallowest goal node, where d = depth of the shallowest solution
and b = branching factor (number of successors of each node):
T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the
frontier, which is O(b^d).
Completeness: BFS is complete, which means that if the shallowest goal node is at some finite
depth, then BFS will find a solution.
Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the
node.
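A minimal Python sketch of BFS over a graph given as an adjacency dictionary is shown below;
the example graph is hypothetical and is not the tree from the figure.

from collections import deque

def breadth_first_search(graph, start, goal):
    """Expand nodes level by level using a FIFO queue; return a path to the goal."""
    frontier = deque([[start]])            # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:                   # the shallowest goal is reached first
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in explored:
                explored.add(neighbour)
                frontier.append(path + [neighbour])
    return None

# Hypothetical example graph
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"], "D": ["G"], "E": ["G"]}
print(breadth_first_search(graph, "S", "G"))   # ['S', 'A', 'D', 'G']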
2. Depth-first Search
 Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
 It is called depth-first search because it starts from the root node and follows each path
to its greatest depth node before moving to the next path.
 DFS uses a stack data structure for its implementation.
 The process of the DFS algorithm is similar to the BFS algorithm.
Note: Backtracking is an algorithmic technique for finding all possible solutions using
recursion.
Advantages:
 DFS requires much less memory, as it only needs to store the stack of nodes on the path
from the root node to the current node.
 It takes less time to reach the goal node than the BFS algorithm (if it traverses the right
path).
Disadvantages:
 There is the possibility that many states keep re-occurring, and there is no guarantee of
finding a solution.
 The DFS algorithm goes for deep-down searching, and sometimes it may go into an infinite
loop.
Example:
In the below search tree, we have shown the flow of depth-first search, and it will follow the
order:
Root node ---> Left node ----> Right node.
It will start searching from root node S and traverse A, then B, then D and E. After traversing
E, it will backtrack the tree, as E has no other successor and the goal node has still not been
found. After backtracking, it will traverse node C and then G, and there it will terminate, as
it has found the goal node.

Completeness: The DFS algorithm is complete within a finite state space, as it will expand
every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by
the algorithm. It is given by:
T(b) = 1 + b + b^2 + ... + b^m = O(b^m)
where m = maximum depth of any node, which can be much larger than d (the depth of the
shallowest solution), and b = branching factor.
Space Complexity: The DFS algorithm needs to store only a single path from the root node,
hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
Optimality: The DFS algorithm is non-optimal, as it may generate a large number of steps or a
high cost to reach the goal node.
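A minimal Python sketch of DFS using an explicit LIFO stack of paths follows; again, the
example graph is hypothetical.

def depth_first_search(graph, start, goal):
    """Follow each path to its greatest depth before backtracking (LIFO stack)."""
    stack = [[start]]                      # stack of paths, most recent on top
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in reversed(graph.get(node, [])):
            if neighbour not in path:      # avoid cycles along the current path
                stack.append(path + [neighbour])
    return None

# Hypothetical example graph
graph = {"S": ["A", "C"], "A": ["B", "D"], "D": ["E"], "C": ["G"]}
print(depth_first_search(graph, "S", "G"))     # ['S', 'C', 'G']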
3. Depth-Limited Search Algorithm:
A depth-limited search algorithm is similar to depth-first search with a predetermined depth
limit. Depth-limited search can solve the drawback of the infinite path in depth-first search.
In this algorithm, the node at the depth limit is treated as if it has no further successor
nodes.
Depth-limited search can terminate with two conditions of failure:
 Standard failure value: It indicates that the problem does not have any solution.
 Cutoff failure value: It indicates that there is no solution for the problem within the
given depth limit.
Advantage:
 Depth-limited search is memory efficient.
Disadvantages:
 Depth-limited search also has the disadvantage of incompleteness.
 It may not be optimal if the problem has more than one solution.
Example:

Completeness: The DLS algorithm is complete if the solution is above the depth limit.
Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).
Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).
Optimality: Depth-limited search can be viewed as a special case of DFS, and it is also not
optimal, even if ℓ > d.
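A minimal recursive sketch of depth-limited search that distinguishes the two failure values
described above is given below; the adjacency-dictionary graph format is the same assumption
as in the earlier sketches.

def depth_limited_search(graph, node, goal, limit):
    """Return a path to the goal, 'cutoff' (limit reached), or None (standard failure)."""
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                    # cutoff failure: the depth limit was reached
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None   # None = standard failure (no solution)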
4. Uniform-cost Search Algorithm:

Uniform Cost Search (UCS) is a type of uninformed search that performs the search based on the
lowest path cost. UCS helps us find the path from the starting node to the goal node with the
minimum path cost. Considering the scenario that we need to move from point A to point B,
which path would you choose, A->C->B or A->B?

The path cost of going directly from A to B is 5, and the path cost of going from A to C to B
is 4 (2+2). As UCS considers the least path cost, that is 4, the path A to C to B would be
selected by uniform cost search.
Explanation
Concept:
 The frontier list is based on a priority queue. Every new node is added to the list, and
the list gives priority to the least-cost path.
 The node at the top of the frontier list is added to the expand list, which shows that
this node is going to be explored in the next step. No node is repeated: if a node has
already been explored, it is discarded.
 The explored list holds the nodes which have been completely explored.
Algorithm:
1. Add the starting node S to the frontier list with path cost g(n) = 0 (the starting point is
at path cost 0).
2. Move the node at the top of the frontier list to the expand list. (Initially there is only
a single node in the frontier list; when there are multiple nodes, the one at the top of
the frontier is taken.)
3. Now, explore this node by visiting all of its child nodes. After that, add this node to the
explored list, as it is now fully explored.
4. Check whether the added node is the goal node. Stop if the goal node is found, or else move
on to the next step.
5. Since new nodes are added to the frontier list, compare and re-order the priority queue
again, depending upon the priority, that is, the minimum path cost g(n).
6. Now move back to step 2 and repeat the steps until the goal node is added to the explored
list.

Solutions:
 Actual path: This is obtained by the frontier list.
 Traversed path: This is obtained by the explored list.

Example

Consider the following graph. Let the starting node be A and the goal node be G.

Finding path A to G using uniform cost search

Implementing UCS:

Frontier List Expand List Explored List


1. {(A,0)} A NULL
2. {(A-D, 3), (A-B, 5)} D {A}
3. {(A-B, 5), (A-D-E, 5), (A-D-F, 5)} B {A, D}
4. {(A-D-E, 5), (A-D-F, 5), (A-B-C, 6)} E {A, D, B}
5. {(A-D-F, 5), (A-B-C, 6), (A-D-E-B, 9)} F {A, D, B, E}
*here B is already explored
6. {(A-B-C, 6), (A-D-F-G,8)} C {A, D, B, E, F}
7. {(A-D-F-G,8), (A-B-C-E,12), (A-B-C-G, 14)} G {A, D, B, E, F, C}
*here E is already explored
8. {(A-D-F-G,8)} NULL {A, D, B, E, F, C, G}
# GOAL Found!

Hence we get:

1. Actual path => A -- D -- F -- G, with path cost = 8.
2. Traversed path => A -- D -- B -- E -- F -- C -- G

Evaluation

The uniform cost search algorithm can be evaluated by the following four factors:

 Completeness: UCS is complete if the branching factor b is finite.
 Time complexity: The time complexity of UCS is exponential, O(b^(1 + C*/ε)), where C* is
the cost of the optimal solution and ε is the smallest step cost.
 Space complexity: The space complexity is also exponential, O(b^(1 + C*/ε)), because all
the generated nodes are kept in the frontier for comparison of priorities.
 Optimality: UCS gives the optimal solution or path to the goal node.
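A minimal UCS sketch using a priority queue keyed on the path cost g(n) is shown below; the
weighted graph encodes the A->C->B versus A->B example from the start of this subsection.

import heapq

def uniform_cost_search(graph, start, goal):
    """Always expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]       # priority queue of (g, node, path)
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                 # optimal path and its cost
        if node in explored:
            continue                       # discard nodes already explored
        explored.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier, (g + step_cost, neighbour, path + [neighbour]))
    return None, float("inf")

# A-B costs 5 directly; A-C costs 2 and C-B costs 2, so A-C-B costs 4
graph = {"A": [("B", 5), ("C", 2)], "C": [("B", 2)]}
print(uniform_cost_search(graph, "A", "B"))    # (['A', 'C', 'B'], 4)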

5. Iterative deepening depth-first Search:


The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search
algorithm finds the best depth limit by gradually increasing the limit until a goal is found.
The algorithm performs depth-first search up to a certain "depth limit" and keeps increasing
the depth limit after each iteration until the goal node is found.
This search algorithm combines the benefits of breadth-first search's fast (shallowest-goal)
search and depth-first search's memory efficiency.
The iterative deepening search algorithm is a useful uninformed search when the search space
is large and the depth of the goal node is unknown.
Advantages:
 It combines the benefits of BFS and DFS search algorithm in terms of fast search and
memory efficiency.
Disadvantages:
 The main drawback of IDDFS is that it repeats all the work of the previous phase.
Example:
The following tree structure shows the iterative deepening depth-first search. The IDDFS
algorithm performs several iterations until it finds the goal node. The iterations performed
by the algorithm are given as:
1'st Iteration-----> A
2'nd Iteration----> A, B, C
3'rd Iteration------>A, B, D, E, C, F, G
4'th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity:
Let us suppose b is the branching factor and d is the depth; then the worst-case time
complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS is O(bd).
Optimality:
The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of
the node.
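A minimal IDDFS sketch is shown below; it simply reuses the depth_limited_search function from
the depth-limited search sketch above and raises the limit until the goal is found. The
max_depth bound is our own safety assumption.

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)   # DLS sketch above
        if result is None:
            return None                    # whole space searched; no solution exists
        if result != "cutoff":
            return result                  # path found at the current depth limit
    return None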
6. Bidirectional Search Algorithm:
The bidirectional search algorithm runs two simultaneous searches, one from the initial state,
called the forward search, and the other from the goal node, called the backward search, to
find the goal node. Bidirectional search replaces one single search graph with two small
subgraphs, in which one starts the search from the initial vertex and the other starts from
the goal vertex. The search stops when these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
 Bidirectional search is fast.
 Bidirectional search requires less memory
Disadvantages:
 Implementation of the bidirectional search tree is difficult.
 In bidirectional search, one should know the goal state in advance.
Example:
In the below search tree, bidirectional search algorithm is applied. This algorithm divides one
graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and
starts from goal node 16 in the backward direction.
The algorithm terminates at node 9 where two searches meet.

Completeness: Bidirectional search is complete if we use BFS in both searches.
Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).
Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).
Optimality: Bidirectional search is optimal.
1.6 HEURISTIC SEARCH STRATEGIES
Heuristics

A heuristic is a technique that is used to solve a problem faster than classic methods. These
techniques are used to find an approximate solution to a problem when classical methods fail
to do so. Heuristics are said to be problem-solving techniques that result in practical and
quick solutions.

Heuristics are strategies that are derived from past experience with similar problems.
Heuristics use practical methods and shortcuts to produce solutions that may or may not be
optimal, but that are sufficient within a given limited timeframe.

History

Psychologists Daniel Kahneman and Amos Tversky developed the study of heuristics in human
decision-making in the 1970s and 1980s. However, the concept was first introduced by the Nobel
Laureate Herbert A. Simon, whose primary object of research was problem-solving.

Why do we need Heuristics?

Heuristics are used in situations in which there is the requirement of a short-term solution.
When facing complex situations with limited resources and time, heuristics can help companies
make quick decisions through shortcuts and approximate calculations. Most heuristic methods
involve mental shortcuts based on past experience.

The heuristic method might not always provide the finest solution, but it is assured of
helping us find a good solution in a reasonable time.

Based on context, there can be different heuristic methods that correlate with the problem's
scope. The most common heuristic methods are trial and error, guesswork, the process of
elimination, and historical data analysis. These methods use readily available information
that is not particular to the problem but is broadly applicable. They include the
representative, affect, and availability heuristics.
Heuristic search techniques in AI (Artificial Intelligence)

Heuristic techniques can be grouped into two categories:

Direct Heuristic Search techniques in AI

These include blind search, uninformed search, and blind control strategies. These search
techniques are not always feasible, as they require much memory and time. They search the
complete space for a solution and use an arbitrary ordering of operations.

Examples of direct heuristic search techniques include Breadth-First Search (BFS) and
Depth-First Search (DFS).

Weak Heuristic Search techniques in AI

These include informed search, heuristic search, and heuristic control strategies. These
techniques are helpful when they are applied properly to the right types of tasks, and they
usually require domain-specific information.

Examples of weak heuristic search techniques include Best-First Search and A*.

Before describing certain heuristic techniques, let's see some of the techniques
listed below:
o Bidirectional Search
o A* search
o Simulated Annealing
o Hill Climbing
o Best First search
o Beam search

Examples of heuristics in everyday life

Here are some real-life examples of heuristics that people use as a way to solve a problem:

o Common sense: It is a heuristic that is used to solve a problem based on the observation
of an individual.
o Rule of thumb: In heuristics, we also use the term rule of thumb. This heuristic allows
an individual to make an approximation without doing an exhaustive search.
o Working backward: It lets an individual solve a problem by assuming that the problem has
already been solved and working backward in their mind to see how such a solution could
have been reached.
o Availability heuristic: It allows a person to judge a situation based on the examples of
similar situations that come to mind.
o Familiarity heuristic: It allows a person to approach a problem on the basis that they
are familiar with the situation, so they should act the same way they acted in a similar
situation before.
o Educated guess: It allows a person to reach a conclusion without doing an exhaustive
search. Using it, a person considers what they have observed in the past and applies that
history to a situation for which no definite answer has yet been decided.

Types of heuristics

There are various types of heuristics, including the availability heuristic, affect heuristic
and representative heuristic. Each heuristic type plays a role in decision-making. Let's
discuss the availability, affect, and representative heuristics.

Availability heuristic

The availability heuristic is the judgment that people make regarding the likelihood of an
event based on information that quickly comes to mind. When making decisions, people typically
rely on past knowledge or experience of an event. It allows a person to judge a situation
based on the examples of similar situations that come to mind.

Representative heuristic

It occurs when we evaluate an event's probability on the basis of its similarity with another
event.

Example: We can understand the representative heuristic through the example of product
packaging, as consumers tend to associate a product's quality with its external packaging. If
a company packages its products in a way that reminds you of a high-quality, well-known
product, then consumers will regard that product as having the same quality as the branded
product.

So, instead of evaluating the product based on its quality, customers judge the product's
quality based on the similarity in packaging.
Affect heuristic

It is based on the negative and positive feelings that are linked with a certain stimulus. It
includes quick feelings that are based on past beliefs. Its theory is that one's emotional
response to a stimulus can affect the decisions taken by an individual.

When people take little time to evaluate a situation carefully, they might base their
decisions on their emotional response.

Example: The affect heuristic can be understood through the example of advertisements.
Advertisements can influence the emotions of consumers, so they affect the purchasing
decisions of a consumer. The most common examples of such advertisements are ads for fast
food. When fast-food companies run an advertisement, they hope to obtain a positive emotional
response that pushes you to view their products positively.

If someone carefully analyzes the benefits and risks of consuming fast food, they might decide
that fast food is unhealthy. But people rarely take the time to evaluate everything they see
and generally make decisions based on their automatic emotional response. So, fast-food
companies present advertisements that rely on this affect heuristic to generate a positive
emotional response, which results in sales.

Limitation of heuristics

Along with the benefits, heuristics also have some limitations.

o Although heuristics speed up our decision-making process and help us solve problems, they
can also introduce errors: just because something has worked accurately in the past does
not mean that it will work again.
o It will be hard to find alternative solutions or ideas if we always rely on existing
solutions or heuristics.

Conclusion

Heuristic search is an important aspect of Artificial Intelligence that has been used to solve
complex problems. It involves the use of heuristics: strategies or methods which are applied
in a problem-solving context and can be seen as rules of thumb. A heuristic algorithm is one
example, wherein different parameters such as cost, risk and efficiency are used to guide the
selection process in solving a given problem. The heuristic approach in AI provides an
efficient way of tackling issues where limited resources are available, while heuristic
techniques focus on simplifying complex sets of data by looking at the patterns and trends
within them. Furthermore, there are three main types of heuristics: algorithmic heuristics,
informational heuristics and adaptive heuristics, each with its own unique characteristics.
Overall, heuristic search offers a powerful tool for Artificial Intelligence
practitioners when faced with difficult tasks involving large datasets or multiple
variables. Its ability to simplify complex information into understandable pieces
makes it invaluable for decision makers who need quick yet reliable solutions.
Additionally, its flexibility allows users to adapt their strategies depending on the
situation they face - resulting in more accurate results than traditional approaches
would have achieved alone. In conclusion, heuristic search remains a valuable
asset for those seeking rapid yet dependable answers from challenging problems.
1.7 A* ALGORITHM

A* search is the most commonly known form of best-first search. It uses a heuristic function
h(n) and the cost to reach node n from the start state, g(n). It combines features of UCS and
greedy best-first search, by which it solves the problem efficiently. The A* search algorithm
finds the shortest path through the search space using the heuristic function. This search
algorithm expands a smaller search tree and provides an optimal result faster. The A*
algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach the node.
Hence we can combine both costs as follows, and this sum is called the fitness number:

f(n) = g(n) + h(n)

At each point in the search space, only the node with the lowest value of f(n) is expanded,
and the algorithm terminates when the goal is found.

Algorithm of A* search:

Step 1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty; if the list is empty, then return failure and
stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation
function (g + h). If node n is the goal node, then return success and stop; otherwise, go to
Step 4.

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For
each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then
compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, then attach it to the back pointer
which reflects the lowest g(n') value.

Step 6: Return to Step 2.

Advantages:

o The A* search algorithm performs better than other search algorithms.

o The A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.
Disadvantages:

o It does not always produce the shortest path, as it is mostly based on heuristics and
approximation.
o The A* search algorithm has some complexity issues.
o The main drawback of A* is its memory requirement, as it keeps all generated nodes in
memory, so it is not practical for various large-scale problems.

Example:

In this example, we will traverse the given graph using the A* algorithm. The heuristic value
of each state is given in the table below, so we will calculate f(n) for each state using the
formula f(n) = g(n) + h(n), where g(n) is the cost to reach that node from the start state.
Here we will use the OPEN and CLOSED lists.

Solution:

Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}


Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}

Iteration 4 will give the final result: S--->A--->C--->G, which provides the optimal path with
cost 6.
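The iterations above can be reproduced with the minimal A* sketch below, which keeps an OPEN
priority queue ordered by f(n) = g(n) + h(n) and a CLOSED set. The edge costs and heuristic
values are a reconstruction consistent with the iterations shown (the original figure and
heuristic table are not reproduced in these notes), so treat them as illustrative.

import heapq

def a_star_search(graph, h, start, goal):
    """Expand the OPEN-list node with the smallest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]    # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in closed:
                g2 = g + step_cost
                heapq.heappush(open_list, (g2 + h[neighbour], g2, neighbour, path + [neighbour]))
    return None, float("inf")

# Reconstructed edge costs and heuristic values (illustrative)
graph = {"S": [("A", 1), ("G", 10)], "A": [("B", 2), ("C", 1)], "C": [("D", 3), ("G", 4)]}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
print(a_star_search(graph, h, "S", "G"))       # (['S', 'A', 'C', 'G'], 6)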

Points to remember:

o The A* algorithm returns the path which is found first, and it does not search all the
remaining paths.
o The efficiency of the A* algorithm depends on the quality of the heuristic.
o The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the
cost of the optimal solution.

Complete:

The A* algorithm is complete as long as:

o the branching factor is finite, and
o every action has a cost greater than some small positive constant ε.

Optimal:

The A* search algorithm is optimal if it satisfies the two conditions below:

o Admissibility: the first condition required for optimality is that h(n) should be an
admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature,
i.e., it never overestimates the cost to reach the goal.
o Consistency: the second condition, consistency, is required only for A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least-cost
path.

Time Complexity:

The time complexity of the A* search algorithm depends on the heuristic function, and the
number of nodes expanded is exponential in the depth of the solution d. So the time complexity
is O(b^d), where b is the branching factor.

Space Complexity:

The space complexity of the A* search algorithm is O(b^d), since it keeps all generated nodes
in memory.


1.8 GAME PLAYING
Game playing is an important domain of artificial intelligence. Games don't require much
knowledge; the only knowledge we need to provide is the rules, the legal moves and the
conditions of winning or losing the game. Both players try to win the game, so both of them
try to make the best move possible at each turn. Searching techniques like BFS (Breadth-First
Search) are not practical for this, as the branching factor is very high, so searching will
take a lot of time.

So, we need other search procedures that improve:

 the generate procedure, so that only good moves are generated;
 the test procedure, so that the best move can be explored first.

Game playing is a popular application of artificial intelligence that involves the development
of computer programs to play games, such as chess, checkers, or Go. The goal of game
playing in artificial intelligence is to develop algorithms that can learn how to play games
and make decisions that will lead to winning outcomes.

One of the earliest examples of successful game playing AI is the chess program Deep
Blue, developed by IBM, which defeated the world champion Garry Kasparov in 1997.
Since then, AI has been applied to a wide range of games, including two-player games,
multiplayer games, and video games.
There are two main approaches to game playing in AI, rule-based systems and machine
learning-based systems.

1. Rule-based systems use a set of fixed rules to play the game.


2. Machine learning-based systems use algorithms to learn from experience and make
decisions based on that experience.
In recent years, machine learning-based systems have become increasingly popular, as they
are able to learn from experience and improve over time, making them well-suited for
complex games such as Go. For example, AlphaGo, developed by DeepMind, was the first
machine learning-based system to defeat a world champion in the game of Go.

Game playing in AI is an active area of research and has many practical applications,
including game development, education, and military training. By simulating game playing
scenarios, AI algorithms can be used to develop more effective decision-making systems for
real-world applications.
The most common search technique in game playing is the Minimax search procedure. It is a
depth-first, depth-limited search procedure used for games like chess and tic-tac-toe.
The Minimax algorithm uses two functions –
MOVEGEN: It generates all the possible moves that can be made from the current
position.
STATICEVALUATION: It returns a value representing the goodness of a position from the
viewpoint of the two players.
The algorithm is for a two-player game, so we call the first player PLAYER1 and the second
player PLAYER2. The value of each node is backed up from its children. For PLAYER1
the backed-up value is the maximum value of its children, and for PLAYER2 the backed-up
value is the minimum value of its children. It provides the most promising move to PLAYER1,
assuming that PLAYER2 also makes its best move. It is a recursive algorithm, as the same
procedure occurs at each level.

Figure 1: Before backing-up of values


Figure 2: After backing-up of values

We assume that PLAYER1 will start the game. Four levels are generated. The values of nodes
H, I, J, K, L, M, N and O are provided by the STATICEVALUATION function. Level 3 is a
maximizing level, so every node of level 3 takes the maximum value of its children. Level 2 is
a minimizing level, so every node of level 2 takes the minimum value of its children. This
process continues up the tree. The backed-up value of A is 23, which means PLAYER1 should
choose the move to C in order to win.
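
A minimal Python sketch of this backing-up procedure is shown below. It assumes the game tree is supplied as nested lists, where a plain number is a leaf value produced by STATICEVALUATION and an inner list holds the positions produced by MOVEGEN; the tree used here is invented for illustration.

    def minimax_value(node, maximizing):
        # A leaf holds the STATICEVALUATION value; back it up unchanged.
        if not isinstance(node, list):
            return node
        # Back up the maximum of the children for PLAYER1, the minimum for PLAYER2.
        values = [minimax_value(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # Hypothetical three-level tree with PLAYER1 (the maximizer) to move at the root.
    tree = [[[2, 7], [1, 8]], [[3, 9], [4, 5]]]
    print(minimax_value(tree, True))   # prints 7, the value backed up to the root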

Advantages of Game Playing in Artificial Intelligence:

1. Advancement of AI: Game playing has been a driving force behind the development of
artificial intelligence and has led to the creation of new algorithms and techniques that
can be applied to other areas of AI.
2. Education and training: Game playing can be used to teach AI techniques and
algorithms to students and professionals, as well as to provide training for military and
emergency response personnel.
3. Research: Game playing is an active area of research in AI and provides an opportunity
to study and develop new techniques for decision-making and problem-solving.
4. Real-world applications: The techniques and algorithms developed for game playing
can be applied to real-world applications, such as robotics, autonomous systems, and
decision support systems.
Disadvantages of Game Playing in Artificial Intelligence:

1. Limited scope: The techniques and algorithms developed for game playing may not be
well-suited for other types of applications and may need to be adapted or modified for
different domains.
2. Computational cost: Game playing can be computationally expensive, especially for
complex games such as chess or Go, and may require powerful computers to achieve real-
time performance.
1.9 ALPHA BETA PRUNING

o Alpha-beta pruning is a modified version of the minimax algorithm. It is an


optimization technique for the minimax algorithm.
o As we have seen in the minimax search algorithm, the number of game states it has
to examine is exponential in the depth of the tree. We cannot eliminate the exponent,
but we can effectively cut it in half. There is a technique by which we can compute the
correct minimax decision without checking every node of the game tree, and this
technique is called pruning. It involves two threshold parameters, alpha and beta, for
future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta
Algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not
only the tree leaves but also entire sub-trees.
o The two-parameter can be defined as:
a. Alpha: The best (highest-value) choice we have found so far at any point along
the path of Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along
the path of Minimizer. The initial value of beta is +∞.
o Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the
standard algorithm does, but it removes all the nodes that do not really affect the
final decision and only slow the algorithm down. Hence, by pruning these nodes, it makes the
algorithm fast.

Condition for Alpha-beta pruning:

The main condition required for alpha-beta pruning (i.e., for a branch to be pruned) is:

1. α >= β

o The Max player will only update the value of alpha.


o The Min player will only update the value of beta.
o While backtracking the tree, the node values will be passed to upper nodes instead of
values of alpha and beta.
o We will only pass the alpha, beta values to the child nodes.
Pseudo-code for Alpha-beta Pruning:

    def minimax(node, depth, alpha, beta, maximizing_player):
        # Stop at the depth limit or at a terminal node and return its static evaluation.
        # is_terminal(), children() and static_evaluation() are game-specific hooks:
        # children() plays the role of MOVEGEN, static_evaluation() of STATICEVALUATION.
        if depth == 0 or node.is_terminal():
            return node.static_evaluation()

        if maximizing_player:                       # for Maximizer player
            max_eva = float('-inf')
            for child in node.children():
                eva = minimax(child, depth - 1, alpha, beta, False)
                max_eva = max(max_eva, eva)
                alpha = max(alpha, max_eva)
                if beta <= alpha:                   # beta cut-off: prune remaining children
                    break
            return max_eva
        else:                                       # for Minimizer player
            min_eva = float('+inf')
            for child in node.children():
                eva = minimax(child, depth - 1, alpha, beta, True)
                min_eva = min(min_eva, eva)
                beta = min(beta, min_eva)
                if beta <= alpha:                   # alpha cut-off: prune remaining children
                    break
            return min_eva
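
As a quick usage sketch, the hypothetical Node class below provides the is_terminal, children and static_evaluation hooks assumed in the code above (it is an illustrative helper, not part of the algorithm itself), and the call at the end runs the pruning on a small hand-built game tree:

    class Node:
        def __init__(self, value=None, children=None):
            self.value = value                     # leaf value, i.e. STATICEVALUATION result
            self._children = children or []
        def is_terminal(self):
            return not self._children
        def children(self):
            return self._children
        def static_evaluation(self):
            return self.value

    # Two-ply tree: Max moves at the root, Min at the next level, leaves hold values.
    root = Node(children=[Node(children=[Node(3), Node(5)]),
                          Node(children=[Node(6), Node(9)]),
                          Node(children=[Node(1), Node(2)])])
    print(minimax(root, 2, float('-inf'), float('+inf'), True))   # prints 6; the last leaf (2) is pruned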

Working of Alpha-Beta Pruning:

Let's take an example of two-player search tree to understand the working of Alpha-beta
pruning

Step 1: In the first step, the Max player will make the first move from node A, where α = -∞ and
β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and
β = +∞, and node B passes the same values to its child D.
Step 2: At node D, the value of α is calculated because it is Max's turn. The value of α is
compared first with 2 and then with 3, so max(2, 3) = 3 becomes the value of α at node D, and
the node value will also be 3.

Step 3: The algorithm then traverses the next successor of node B, which is node E, and the
values α = -∞ and β = 3 are passed down to it.
Step 4: At node E, Max will take its turn, and the value of alpha will change. The current value
of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since
α >= β, the right successor of E will be pruned, the algorithm will not traverse it, and the
value at node E will be 5.

Step 5: Next, the algorithm backtracks the tree from node B to node A. At node A, the value of
alpha is changed; the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two
values are now passed to the right successor of A, which is node C.

At node C, α=3 and β= +∞, and the same values will be passed on to node F.

Step 6: At node F, the value of α is again compared, first with the left child, which is 0
(max(3, 0) = 3), and then with the right child, which is 1 (max(3, 1) = 3), so α remains 3;
however, the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta
is changed, as it is compared with 1, so min(+∞, 1) = 1. Now at C, α = 3 and β = 1, which again
satisfies the condition α >= β, so the next child of C, which is G, will be pruned, and the
algorithm will not compute the entire sub-tree of G.

Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final
game tree shows the nodes that were computed and the nodes that were never computed. Hence
the optimal value for the maximizer is 3 in this example.
Move Ordering in Alpha-Beta pruning:

The effectiveness of alpha-beta pruning is highly dependent on the order in which each node
is examined. Move order is an important aspect of alpha-beta pruning.

It can be of two types:

o Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of
the leaves of the tree and works exactly like the minimax algorithm. It also consumes
extra time because of the bookkeeping of the alpha and beta values; such an ordering is called
worst ordering. In this case, the best move occurs on the right side of the tree. The time
complexity for such an ordering is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a great deal of pruning
happens in the tree and the best moves occur on the left side of the tree. We apply DFS,
so it searches the left of the tree first and can go twice as deep as the minimax algorithm in the
same amount of time. The complexity for ideal ordering is O(b^(m/2)).

Rules to find good ordering:

Following are some rules to find good ordering in alpha-beta pruning:

o Try the best move from the shallowest node first.


o Order the nodes in the tree such that the best nodes are checked first.
o Use domain knowledge while finding the best move. Ex: for chess, try this order: captures
first, then threats, then forward moves, then backward moves.
o We can bookkeep the states, as there is a possibility that states may repeat.
1.10 CONSTRAINT SATISFACTION PROBLEMS (CSP)
Constraint Satisfaction Problems (CSPs) are a class of computational problems where the goal
is to find a solution that satisfies a set of constraints. These constraints impose restrictions on
the values or assignments of variables in such a way that the variables must be assigned values
from their respective domains while meeting all specified conditions.
Significance of Constraint Satisfaction Problem in AI
CSPs are highly significant in artificial intelligence for several reasons:
• They model a wide range of real-world problems where decision-making is subject to certain
conditions and limitations.
• CSPs offer a structured and general framework for representing and solving problems,
making them versatile in problem-solving applications.
• Many AI applications, such as scheduling, planning, and configuration, can be mapped to
CSPs, allowing AI systems to find optimal solutions efficiently.
The representation of Constraint Satisfaction Problems (CSPs) is crucial for effectively solving
these problems. Let's explore how to represent CSPs using variables, domains, and constraints:
Basic Components of CSP

1. Variables as Placeholders:
Variables in CSPs act as placeholders for problem components that need to be assigned
values. They represent the entities or attributes of the problem under consideration. For
example:
• In a Sudoku puzzle, variables represent the empty cells that need numbers.
• In job scheduling, variables might represent tasks to be scheduled.
• In map coloring, variables correspond to regions or countries that need to be
colored.
The choice of variables depends on the specific problem being modeled.
2. Domains:
Each variable in a CSP is associated with a domain, which defines the set of values that
the variable can take. Domains are a critical part of the CSP representation, as they
restrict the possible assignments of values to variables. For instance:
• In Sudoku, the domain for each empty cell is the numbers from 1 to 9.
• In scheduling, the domain for a task might be the available time slots.
• In map coloring, the domain could be a list of available colors.
Domains ensure that variable assignments remain within the specified range of values.
3. Constraints:
Constraints in CSPs specify the relationships or conditions that must be satisfied by the
variables. Constraints restrict the combinations of values that variables can take.
Constraints can be unary (involving a single variable), binary (involving two variables),
or n-ary (involving more than two variables). Constraints are typically represented in
the form of logical expressions, equations, or functions. For example:
• In Sudoku, constraints ensure that no two numbers are repeated in the same row,
column, or subgrid.
• In scheduling, constraints might involve ensuring that two tasks are not scheduled at
the same time.
• In map coloring, constraints require that adjacent regions have different colors.
Constraint specification is a crucial part of problem modeling, as it defines the rules
that the variables must follow.
Overall Representation:
To represent a CSP, you need to define:
• The set of variables: What entities or attributes need values?
• The domains: What are the possible values that each variable can take?
• The constraints: What conditions or limitations must be satisfied by the variables?
By defining these elements, you create a structured representation of the problem,
which is essential for CSP solvers to find valid solutions efficiently.
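
As a small illustration of this three-part representation, a map-coloring instance might be written in Python as plain dictionaries and a constraint-checking function; the regions, colors, and adjacencies below are invented for the example.

    # Variables: the regions that need a color.
    variables = ['WA', 'NT', 'SA', 'Q']

    # Domains: each region may take any of three colors.
    domains = {v: ['red', 'green', 'blue'] for v in variables}

    # Constraints: adjacent regions must receive different colors.
    adjacent = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA'), ('NT', 'Q'), ('SA', 'Q')]

    def satisfies(assignment):
        # A (partial) assignment is consistent if no adjacent pair shares a color.
        return all(assignment[a] != assignment[b]
                   for a, b in adjacent
                   if a in assignment and b in assignment)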

Solving Constraint Satisfaction Problems in Artificial Intelligence:


Introduction to CSP Solving Techniques:
Constraint Satisfaction Problems (CSPs) can be challenging to solve due to their
combinatorial nature. However, several techniques, such as backtracking and constraint
propagation, can be employed to find valid solutions efficiently.
1. Backtracking Search for CSP in Artificial Intelligence:
Backtracking is a widely used technique for solving CSPs. It is a systematic search
algorithm that explores possible assignments for variables, backtracking when it
encounters constraints that cannot be satisfied. The algorithm follows these steps:
• Choose an unassigned variable.
• Select a value from its domain.
• Check if the assignment violates any constraints.
• If a constraint is violated, backtrack to the previous variable and try another
value.
• Continue this process until all variables are assigned values and a valid solution
is found, or until every possibility has been tried (a minimal code sketch of this
procedure appears after this list).
2. Forward Checking:
The backtracking technique has been improved using forward checking. It tracks
the remaining accurate values of the unassigned variables after each assignment and
reduces the domains of variables whose values don’t match the assigned ones. As a
result, the search space is smaller, and constraint propagation is more effectively
accomplished.
3. Constraint Propagation:
Constraint propagation is a powerful technique that enforces constraints throughout
the CSP solving process. It narrows down the domains of variables by iteratively
applying constraints. It's often used in conjunction with backtracking to improve
efficiency. The concept of constraint propagation can be illustrated as follows:
• Step 1:
Start with an initial CSP, with its variables, domains, and constraints.
• Step 2:
Apply constraints that have been specified in the problem to narrow down the
domains of variables. For instance, if two variables have a binary constraint that
one must be double the other, this constraint will eliminate many inconsistent
assignments.
• Step 3:
After constraint propagation, some variables may have their domains reduced to
only a few possibilities, making it easier to find valid assignments.
• Step 4:
If a variable's domain becomes empty during propagation, it indicates that the
current assignment is inconsistent, and backtracking is needed.
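
Below is a minimal backtracking sketch that follows the steps listed for backtracking search above; it assumes the variables, domains, and satisfies objects from the map-coloring snippet in the previous section.

    def backtracking_search(assignment, variables, domains, satisfies):
        # Every variable has a value and no constraint is violated: a solution is found.
        if len(assignment) == len(variables):
            return assignment
        # Choose an unassigned variable (a simple static ordering is used here).
        var = next(v for v in variables if v not in assignment)
        # Select each value from its domain in turn.
        for value in domains[var]:
            assignment[var] = value
            # Check if the assignment violates any constraint.
            if satisfies(assignment):
                result = backtracking_search(assignment, variables, domains, satisfies)
                if result is not None:
                    return result
            # Constraint violated or dead end below: backtrack and try another value.
            del assignment[var]
        return None                                 # no consistent value exists here

    print(backtracking_search({}, variables, domains, satisfies))
    # e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red'}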

Illustration with a Simple CSP Example:


Let's consider a simplified Sudoku puzzle to illustrate the problem-solving process
step by step:
• Variables: 9x9 grid cells
• Domains: Numbers from 1 to 9
• Constraints: No number can repeat in the same row, column, or 3x3 subgrid.

Step 1: Start with an empty Sudoku grid.


Step 2: Apply the initial constraints for the given numbers, reducing the domains
of variables based on the puzzle's clues.
Step 3: Use constraint propagation to narrow down the domains further. For
example, if exactly two cells in a row both have the domain {2, 5}, the all-different
constraint means those two values must go into those two cells, so 2 and 5 can be
eliminated from the domains of every other cell in that row (a small code sketch of
this pruning appears after this illustration).
Step 4: Continue applying constraints and propagating until the domains of
variables are either empty or filled with single values. If they are all filled, you have
a valid solution. If any variable's domain is empty, you backtrack to the previous
step and try an alternative assignment.
This simple example demonstrates how backtracking and constraint propagation
work together to efficiently find a solution to a CSP. The combination of systematic
search and constraint enforcement allows for solving complex problems in various
domains.
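
As a tiny sketch of the kind of domain pruning described in Step 3 above, the snippet below removes the pair of values {2, 5} from every other cell in a row once two cells are known to hold exactly that pair; the row's domains are invented for the illustration.

    # Candidate domains of the nine cells in one Sudoku row (invented values).
    row = [{2, 5}, {2, 5}, {1, 2, 5, 7}, {4}, {2, 3, 5}, {6}, {8}, {9}, {1, 2, 3, 5}]

    pair = {2, 5}
    pair_cells = [i for i, d in enumerate(row) if d == pair]
    if len(pair_cells) == 2:
        # The two values must occupy those two cells, so prune them everywhere else.
        for i, d in enumerate(row):
            if i not in pair_cells:
                row[i] = d - pair
    print(row)   # no other cell in the row still contains 2 or 5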
1. The Backtracking Algorithm: The backtracking algorithm is a popular method
for solving CSPs. It explores the search space by picking a variable, setting a
value for it, and then recursively processing the remaining variables. In the event
of a conflict, it goes back and tries a different value for the preceding variable. The
backtracking algorithm's essential elements are:
o Variable Ordering: The order in which variables are chosen is known as
variable ordering.
o Value Ordering: The sequence in which values are assigned to variables is
known as value ordering.
o Constraint Propagation: Reducing the domain of variables based on
constraint compliance is known as constraint propagation.
2. Forward Checking: As described above, forward checking improves plain
backtracking by pruning, after each assignment, the remaining values of the
unassigned variables that conflict with it, so the search space is smaller and
constraint propagation is more effectively accomplished.
3. Constraint Propagation: Constraint propagation techniques reduce the search
space by removing values inconsistent with current assignments through local
consistency checks. To do this, techniques like generalized arc consistency and path
consistency are applied.

Types of Constraints in CSP

In constraint Satisfaction Problems (CSPs), constraints are used to specify


relationships between variables and limit the possible combinations of values that
can be assigned to those variables.
There are several types of constraints commonly used in CSPs:
• Unary Constraints: Unary constraints limit the possible values of a single
variable without considering the values of other variables. It is the easiest constraint
to find, as it has only one parameter. Example: The expression X1 ≠ 7 says that the
variable X1 cannot have the value 7.
• Binary Constraints: Binary constraints describe the relationship between exactly two
variables. Example: X1 < X2 requires that the value assigned to X1 be less than the
value assigned to X2.
• Global Constraints: In contrast to unary or binary constraints, global
constraints involve multiple variables and impose a more complex relationship or
restriction between them. Global constraints are often used in CSP problems to
capture higher-level patterns, structures, or rules. These restrictions can apply to
any number of variables at once and are not limited to pairwise interactions.
Two commonly used global constraints are:
• Alldifferent Constraint: The Alldifferent constraint (AllDiff) requires that
each variable in a set of variables has a unique value. You commonly apply
alldifferent constraints, when you want to be sure that no two variables in a set can
take the same value. Example: The expression alldifferent(X1, X2, X3) ensures that
the values of X1, X2, and X3 must be unique.
• Sum Constraint: The Sum Constraint requires that the sum of the values
assigned to a group of variables meet a particular requirement. It is useful for
expressing restrictions like “the sum of these variables should equal a certain
value.” Example: The expression Sum(X1, X2, X3) = 15 demands that the sum of
the values for X1, X2, and X3 be 15.
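
As a brief sketch, these constraint types can be expressed as simple Python predicates; the variable names and the target sum of 15 are taken from the examples above.

    def unary(x1):                  # Unary constraint: X1 must not be 7
        return x1 != 7

    def binary(x1, x2):             # Binary constraint: X1 must be less than X2
        return x1 < x2

    def alldifferent(*xs):          # Global constraint: all values must be distinct
        return len(set(xs)) == len(xs)

    def sum_constraint(*xs):        # Global constraint: the values must sum to 15
        return sum(xs) == 15

    print(unary(3), binary(2, 5), alldifferent(1, 4, 9), sum_constraint(4, 5, 6))
    # True True True True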

Extensions and Variations of CSPs:


While basic Constraint Satisfaction Problems (CSPs) are a fundamental concept,
several extensions and variations make CSPs even more versatile. Let's
explore some of these concepts:
1. Soft Constraints:
In traditional CSPs, constraints are considered hard, meaning they must be
strictly satisfied for a solution to be valid. However, in some real-world
problems, it may be beneficial to allow for "soft" constraints that can be violated
to a certain degree. Soft constraints assign penalties or costs based on the degree
of violation. Solving CSPs with soft constraints often involves optimizing the
objective function to minimize the total cost.

Example:
In project scheduling, meeting deadlines can be considered a hard constraint,
but minimizing project costs can be a soft constraint where slight delays may be
acceptable if they reduce costs.

2. Global Constraints:
Global constraints are higher-level constraints that involve a larger number of
variables and often have a more complex relationship. They can express
relationships that would be cumbersome to specify using only binary
constraints. Global constraints help simplify the problem by encapsulating
multiple constraints into a single entity.

Example: The "all-different" global constraint enforces that all variables in a


set must take distinct values, which is useful in Sudoku puzzles and map
coloring problems.

3. Optimization Problems:
In standard CSPs, the goal is to find any valid solution. However, in
optimization problems, the aim is to find the best solution among multiple
possibilities, based on an objective function. Optimization problems include
finding the minimum or maximum value of this function while satisfying
constraints.
Example: In job scheduling, finding the schedule that minimizes costs or
maximizes efficiency is an optimization problem.

Real-World Examples of CSP in AI:


• Resource Allocation: In resource allocation problems, variables represent
tasks or jobs, domains represent resource assignments, and constraints ensure
that resource limits are not exceeded. Soft constraints may be used to optimize
resource usage while considering costs.
• Job Scheduling: Job scheduling problems involve assigning tasks to
available time slots. Constraints include task dependencies and resource
constraints. Optimization can aim to minimize makespan or maximize resource
utilization.
• Game Playing: In game playing, CSPs can represent game states, and
constraints define the rules of the game. Global constraints ensure that game
moves are legal, and optimization may aim to find the best move based on a
scoring function.
These examples illustrate how CSPs, with extensions and variations, can model
a wide range of problems in domains as diverse as project management,
manufacturing, and recreational activities. By introducing soft constraints,
global constraints, and optimization objectives, CSPs become powerful tools
for handling complex, real-world scenarios.
