
Table of Contents

Artificial Intelligence
Unit -1
1.1 Introduction
1.1.1 What is Artificial Intelligence?
1.1.2 Why Artificial Intelligence?
1.1.3 Goals of Artificial Intelligence
1.1.4 What Comprises Artificial Intelligence?
1.1.5 Advantages of Artificial Intelligence
1.1.6 Disadvantages of Artificial Intelligence
1.1.7 Application of AI
1.2 Types of Artificial Intelligence
1.2.1 AI type-1: Based on Capabilities
1.2.2 Artificial Intelligence type-2: Based on functionality
1.3 Types of AI Agents
1.3.1 Simple Reflex agent
1.3.2 Model-based reflex agent
1.3.3 Goal-based agents
1.3.4 Utility-based agents
1.3.5 Learning Agents
1.4 Problem-solving agents
1.4.1 Search Algorithm Terminologies
1.4.2 Properties of Search Algorithms
1.4.3 Types of search algorithms
1.4.3.1 Uninformed/Blind Search
1.4.3.2 Informed Search
1.4.3.1 Uninformed Search Algorithms
1. Breadth-first Search
2. Depth-first Search
3. Depth-Limited Search Algorithm
4. Uniform-cost Search Algorithm
5. Iterative deepening depth-first Search
1.4.3.2 Informed Search Algorithms
1. Best-first Search Algorithm (Greedy Search)
2. A* Search Algorithm
3. Hill Climbing Algorithm in Artificial Intelligence
AI Basic Questions for Practice
Answers

Artificial Intelligence

Unit -1
1.1 Introduction

1.1.1 What is Artificial Intelligence?

In today's world, technology is growing very fast, and we come into contact with new technologies day by day.

One of the booming technologies of computer science is Artificial Intelligence, which is ready to create a new revolution in the world by making intelligent machines. Artificial Intelligence is now all around us. It currently spans a variety of subfields, ranging from the general to the specific, such as self-driving cars, playing chess, proving theorems, playing music, painting, etc.

AI is one of the most fascinating and universal fields of computer science, and it has great scope in the future. AI aims to make a machine work the way a human does.

Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "man-made thinking power."

So, we can define AI as:

"It is a branch of computer science by which we can create intelligent machines which can behave like a human, think

like humans, and able to make decisions."

Artificial Intelligence exists when a machine has human-like skills such as learning, reasoning, and solving problems.

With Artificial Intelligence you do not need to preprogram a machine for every task; instead, you can create a machine with programmed algorithms which can work with its own intelligence, and that is the power of AI.

It is believed that AI is not a new idea; some people say that, as per Greek myth, there were mechanical men in early days which could work and behave like humans.

1.1.2 Why Artificial Intelligence?

Before learning about Artificial Intelligence, we should know why AI is important and why we should learn it. Following are some main reasons to learn about AI:

o With the help of AI, you can create software or devices which can solve real-world problems easily and accurately, in areas such as healthcare, marketing, traffic management, etc.

o With the help of AI, you can create your personal virtual Assistant, such as Cortana, Google Assistant, Siri,
etc.

o With the help of AI, you can build robots which can work in environments where human survival would be at risk.

o AI opens a path for other new technologies, new devices, and new Opportunities.

1.1.3 Goals of Artificial Intelligence

Following are the main goals of Artificial Intelligence:

1. Replicate human intelligence

2. Solve Knowledge-intensive tasks

3. An intelligent connection of perception and action

4. Building a machine which can perform tasks that require human intelligence, such as:

o Proving a theorem

o Playing chess

o Plan some surgical operation

o Driving a car in traffic

5. Creating systems which can exhibit intelligent behavior, learn new things by themselves, demonstrate, explain, and advise their users.

1.1.4 What Comprises Artificial Intelligence?

Artificial Intelligence is not just a part of computer science; it is vast and draws on many other fields that contribute to it. To create AI we should first understand how intelligence is composed: intelligence is an intangible property of our brain which is a combination of reasoning, learning, problem solving, perception, language understanding, etc.

To achieve these capabilities in a machine or software, Artificial Intelligence requires the following disciplines:
o Mathematics

o Biology

o Psychology

o Sociology

o Computer Science

o Neuroscience

o Statistics

1.1.5 Advantages of Artificial Intelligence

Following are some main advantages of Artificial Intelligence:

o High accuracy with fewer errors: AI machines or systems are less error-prone and highly accurate, as they take decisions based on prior experience or information.

o High speed: AI systems can be very fast at decision making; this is why an AI system can beat a chess champion at chess.

o High reliability: AI machines are highly reliable and can perform the same action multiple times with high
accuracy.

o Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring the ocean floor, where employing a human would be risky.

o Digital assistance: AI is very useful for providing digital assistance to users; for example, AI technology is currently used by various e-commerce websites to show products according to customer requirements.

o Useful as a public utility: AI can be very useful for public utilities, such as self-driving cars which can make our journeys safer and hassle-free, facial recognition for security purposes, and natural language processing for communicating with humans in natural language.

1.1.6 Disadvantages of Artificial Intelligence

Every technology has some disadvantages, and the same goes for Artificial Intelligence. Despite being such an advantageous technology, it still has some disadvantages which we need to keep in mind while creating an AI system.
Following are the disadvantages of AI:

o High cost: The hardware and software requirements of AI are very costly, and AI systems require a lot of maintenance to meet current-world requirements.

o Can't think out of the box: Even though we are making smarter machines with AI, they still cannot work outside the tasks for which they are trained or programmed.

o No feelings and emotions: AI machines can be outstanding performers, but they do not have feelings, so they cannot form any kind of emotional attachment with humans and may sometimes be harmful to users if proper care is not taken.

o Increased dependency on machines: With the advance of technology, people are becoming more dependent on devices and may rely less on their own mental abilities.

o No original creativity: Humans are creative and can imagine new ideas, but AI machines cannot match this power of human intelligence; they are not creative and imaginative in the same way.

1.1.7 Application of AI

Artificial Intelligence has various applications in today's society. It is becoming essential in our time because it can solve complex problems efficiently in multiple industries, such as healthcare, entertainment, finance, and education. AI is making our daily life more comfortable and faster.

Following are some sectors which have the application of Artificial Intelligence:

Figure 1.1: Application of AI

1. AI in Astronomy
o Artificial Intelligence can be very useful for solving complex problems about the universe. AI technology can help us understand the universe: how it works, its origin, etc.

2. AI in Healthcare

o In the last five to ten years, AI has become more advantageous for the healthcare industry and is going to have a significant impact on it.

o Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can help doctors with diagnoses and can warn when patients are deteriorating, so that medical help can reach the patient before hospitalization.

3. AI in Gaming

o AI can be used for gaming. AI machines can play strategic games like chess, where the machine needs to consider a large number of possible positions.

4. AI in Finance

o AI and the finance industry are a good match for each other. The finance industry is implementing automation, chatbots, adaptive intelligence, algorithmic trading, and machine learning in its financial processes.

5. AI in Data Security

o The security of data is crucial for every company, and cyber-attacks are growing very rapidly in the digital world. AI can be used to make data more safe and secure. Tools such as the AEG bot and the AI2 platform are used to detect software bugs and cyber-attacks more effectively.

6. AI in Social Media

o Social media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which need to be stored and managed very efficiently. AI can organize and manage these massive amounts of data, and it can analyze the data to identify the latest trends, hashtags, and the requirements of different users.

7. AI in Travel & Transport

o AI is in high demand in the travel industry. AI is capable of doing various travel-related tasks, from making travel arrangements to suggesting hotels, flights, and the best routes to customers. Travel companies use AI-powered chatbots which can interact with customers in a human-like way for better and faster responses.

8. AI in Automotive Industry
o Some automotive companies use AI to provide a virtual assistant to their users for better performance; for example, Tesla has introduced TeslaBot, an intelligent virtual assistant.

o Various companies are currently working on developing self-driving cars which can make journeys safer and more secure.

9. AI in Robotics:

o Artificial Intelligence has a remarkable role in robotics. Usually, general robots are programmed to perform some repetitive task, but with the help of AI we can create intelligent robots which can perform tasks based on their own experience, without being pre-programmed for each one.

o Humanoid robots are among the best examples of AI in robotics; the intelligent humanoid robots Erica and Sophia can talk and behave like humans.

10. AI in Entertainment

o We already use some AI-based applications in our daily life through entertainment services such as Netflix or Amazon. With the help of ML/AI algorithms, these services show recommendations for programs or shows.

11. AI in Agriculture

o Agriculture is an area which requires various resources, labor, money, and time for the best result. Nowadays agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI through agricultural robotics, soil and crop monitoring, and predictive analysis. AI in agriculture can be very helpful for farmers.

12. AI in E-commerce

o AI is providing a competitive edge to the e-commerce industry, and it is in growing demand in e-commerce businesses. AI helps shoppers discover associated products in their recommended size, color, or even brand.

13. AI in education:

o AI can automate grading so that tutors have more time to teach. An AI chatbot can communicate with students as a teaching assistant.

o In the future, AI could work as a personal virtual tutor for students, easily accessible at any time and any place.

1.2 Types of Artificial Intelligence


Artificial Intelligence can be divided into various types. There are two main categorizations: one based on capabilities and one based on functionality. The following diagram illustrates the types of AI.

Figure 1.2: Types of AI

1.2.1 AI type-1: Based on Capabilities

1. Weak AI or Narrow AI:


o Narrow AI is a type of AI which is able to perform a dedicated task with intelligence. Narrow AI is the most common and currently available kind of AI in the world of Artificial Intelligence.

o Narrow AI cannot perform beyond its field or limitations, as it is only trained for one specific task; hence it is also termed weak AI. Narrow AI can fail in unpredictable ways if pushed beyond its limits.

o Apple's Siri is a good example of Narrow AI; it operates within a limited, pre-defined range of functions.

o IBM's Watson supercomputer also comes under Narrow AI, as it uses an Expert system approach
combined with Machine learning and natural language processing.

o Some Examples of Narrow AI are playing chess, purchasing suggestions on e-commerce site, self-
driving cars, speech recognition, and image recognition.

2. General AI:
o General AI is a type of intelligence which could perform any intellectual task as efficiently as a human.
o The idea behind general AI is to build a system which could be smart and think like a human on its own.

o Currently, no system exists which comes under general AI and can perform any task as well as a human.

o Researchers worldwide are now focused on developing machines with general AI.

o Systems with general AI are still under research, and it will take a lot of effort and time to develop them.

3. Super AI:
o Super AI is a level of system intelligence at which machines could surpass human intelligence and perform any task better than a human, with cognitive properties of their own. It is a possible outcome of general AI.

o Some key characteristics of super AI include the ability to think, reason, solve puzzles, make judgments, plan, learn, and communicate on its own.

o Super AI is still a hypothetical concept of Artificial Intelligence. Developing such systems in reality is still a world-changing task.

Figure 1.3:Types of AI

1.2.2 Artificial Intelligence type-2: Based on functionality

1. Reactive Machines
o Purely reactive machines are the most basic types of Artificial Intelligence.

o Such AI systems do not store memories or past experiences for future actions.

o These machines only focus on the current scenario and react to it with the best possible action.
o IBM's Deep Blue system is an example of reactive machines.

o Google's AlphaGo is also an example of reactive machines.

2. Limited Memory
o Limited memory machines can store past experiences or some data for a short period of time.

o These machines can use stored data for a limited time period only.

o Self-driving cars are among the best examples of limited-memory systems. These cars store the recent speed of nearby cars, the distance to other cars, speed limits, and other information needed to navigate the road.

3. Theory of Mind
o Theory of Mind AI should understand the human emotions, people, beliefs, and be able to interact
socially like humans.

o This type of AI machines are still not developed, but researchers are making lots of efforts and
improvement for developing such AI machines.

4. Self-Awareness
o Self-aware AI is the future of Artificial Intelligence. These machines will be super-intelligent and will have their own consciousness, sentiments, and self-awareness.

o These machines will be smarter than the human mind.

o Self-aware AI does not yet exist in reality; it is a hypothetical concept.

1.3 Types of AI Agents


Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All of these agents can improve their performance and generate better actions over time. They are given below:

o Simple Reflex Agent

o Model-based reflex agent

o Goal-based agents

o Utility-based agent

o Learning agent

1.3.1 Simple Reflex agent


o The Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current
percepts and ignore the rest of the percept history.

o These agents only succeed in a fully observable environment.

o The simple reflex agent does not consider any part of the percept history during its decision and action process.

o The simple reflex agent works on the condition-action rule, which maps the current state directly to an action; for example, a room-cleaner agent acts only if there is dirt in the room (a minimal sketch is given after Figure 1.4).

o Problems with the simple reflex agent design approach:

o They have very limited intelligence.

o They have no knowledge of non-perceptual parts of the current state.

o The condition-action rule tables are mostly too big to generate and to store.

o They are not adaptive to changes in the environment.

Figure 1.4:Simple Reflex Model
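
The condition-action idea above can be made concrete with a minimal Python sketch of the classic two-location vacuum world. The function name, the location labels A and B, and the action strings are illustrative assumptions, not part of any standard library:

# A minimal sketch of a simple reflex agent for a two-location vacuum world.
# The percept is the pair (location, status); the agent ignores all percept history.
def simple_reflex_vacuum_agent(percept):
    """Map the current percept directly to an action via condition-action rules."""
    location, status = percept
    if status == "Dirty":       # Rule 1: if the current square is dirty, clean it.
        return "Suck"
    elif location == "A":       # Rule 2: if square A is clean, move right.
        return "MoveRight"
    else:                       # Rule 3: if square B is clean, move left.
        return "MoveLeft"

# Example percepts and the actions the rules produce.
print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))   # MoveRight
print(simple_reflex_vacuum_agent(("B", "Clean")))   # MoveLeft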

1.3.2. Model-based reflex agent

o The Model-based agent can work in a partially observable environment, and track the situation.

o A model-based agent has two important factors:

o Model: It is knowledge about "how things happen in the world," so it is called a Model-based
agent.

o Internal State: It is a representation of the current state based on percept history.

o These agents have the model, "which is knowledge of the world" and based on the model they perform
actions.

o Updating the agent state requires information about:

a. How the world evolves


b. How the agent's action affects the world.

Figure 1.5:Model based reflex agent.

1.3.3. Goal-based agents

o Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.

o The agent needs to know its goal which describes desirable situations.

o Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.

o They choose an action, so that they can achieve the goal.

o These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, and it makes an agent proactive.
Figure 1.6:Goal based agent

1.3.4. Utility-based agents

o These agents are similar to goal-based agents but add an extra component of utility measurement, which provides a measure of success in a given state.

o Utility-based agents act based not only on goals but also on the best way to achieve the goal.

o The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action.

o The utility function maps each state to a real number, which indicates how well the agent's goals are achieved in that state.
Figure 1.7:Utility based agent

1.3.5. Learning Agents

o A learning agent in AI is an agent which can learn from its past experiences; that is, it has learning capabilities.

o It starts with basic knowledge and is then able to act and adapt automatically through learning.

o A learning agent has mainly four conceptual components, which are:

a. Learning element: It is responsible for making improvements by learning from the environment.

b. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.

c. Performance element: It is responsible for selecting external actions.

d. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.

o Hence, learning agents are able to learn, analyze performance, and look for new ways to improve the
performance.
Figure 1.8:Learning Agent

1.4 Problem-solving agents


In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents, or problem-solving agents, in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use an atomic representation of states. In this topic, we will study various problem-solving search algorithms.

1.4.1 Search Algorithm Terminologies

o Search: Searching is a step-by-step procedure for solving a search problem in a given search space. A search problem can have three main factors:

a. Search space: The set of possible solutions a system may have.

b. Start state: The state from which the agent begins the search.

c. Goal test: A function which observes the current state and returns whether the goal state has been reached.

o Search tree: A tree representation of a search problem is called a search tree. The root of the search tree corresponds to the initial state.

o Actions: A description of all the actions available to the agent.

o Transition model: A description of what each action does.

o Path cost: A function which assigns a numeric cost to each path.

o Solution: An action sequence which leads from the start node to the goal node.

o Optimal solution: A solution with the lowest cost among all solutions.
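
The terminology above can be captured in code. The following is a minimal, hypothetical Python sketch of how a search problem might be represented; the class name SearchProblem and the tiny route-finding graph are illustrative assumptions, not taken from any particular library:

# A search problem bundles the start state, goal test, actions, transition model,
# and path cost, matching the terminology listed above.
class SearchProblem:
    def __init__(self, initial, goal, graph, costs=None):
        self.initial = initial       # start state
        self.goal = goal             # used by the goal test
        self.graph = graph           # adjacency list: state -> list of successor states
        self.costs = costs or {}     # optional edge costs for the path-cost function

    def actions(self, state):
        """All actions available in a state (here: which successor to move to)."""
        return self.graph.get(state, [])

    def result(self, state, action):
        """Transition model: the state produced by applying an action in a state."""
        return action

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        """Path cost is the sum of step costs along a solution; default step cost is 1."""
        return self.costs.get((state, action), 1)

# A tiny illustrative search space.
problem = SearchProblem(initial="S", goal="G",
                        graph={"S": ["A", "B"], "A": ["G"], "B": ["G"]})
print(problem.actions("S"), problem.goal_test("G"))   # ['A', 'B'] True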

1.4.2 Properties of Search Algorithms

Following are four essential properties of search algorithms, used to compare their efficiency:

Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for the given input.

Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, then it is said to be an optimal solution.

Time Complexity: Time complexity is a measure of how long an algorithm takes to complete its task.

Space Complexity: It is the maximum storage space required at any point during the search, expressed in terms of the complexity of the problem.

1.4.3 Types of search algorithms

Based on the search problem, we can classify search algorithms into uninformed (blind) search and informed (heuristic) search algorithms.
Figure 1.9:Search Algorithms

1.4.3.1 Uninformed/Blind Search

Uninformed search uses no domain knowledge, such as the closeness or location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes. Uninformed search explores the search tree without any information about the search space, such as the initial state, operators, or a test for the goal, so it is also called blind search. It examines each node of the tree until it reaches the goal node.

It can be divided into five main types:

o Breadth-first search

o Uniform cost search

o Depth-first search

o Iterative deepening depth-first search

o Bidirectional Search
1.4.3.2 Informed Search

Informed search algorithms use domain knowledge. In an informed search, problem information is available which can guide the search. Informed search strategies can find a solution more efficiently than an uninformed search strategy. Informed search is also called heuristic search.

A heuristic is a technique which is not always guaranteed to find the best solution, but is designed to find a good solution in reasonable time.

Informed search can solve much more complex problems than could be solved otherwise. A classic problem tackled with informed search is the traveling salesman problem.

1. Greedy Search

2. A* Search

An uninformed search is a searching technique that has no additional information about the distance from the current state to the goal.

Informed search is a technique that has additional information about the estimated distance from the current state to the goal.

The two classes can be compared as follows:

o Basic knowledge: Informed search uses knowledge to find the steps to the solution; uninformed search uses no such knowledge.

o Efficiency: Informed search is highly efficient, as it consumes less time and cost; the efficiency of uninformed search is mediocre.

o Cost: Informed search has a low search cost; uninformed search is comparatively costly.

o Performance: Informed search finds the solution more quickly; uninformed search is slower than informed search.

o Algorithms: Informed search algorithms include heuristic depth-first search, heuristic breadth-first search, and A* search; uninformed search algorithms include depth-first search, breadth-first search, and lowest-cost-first search.

1.4.3.1 Uninformed Search Algorithms


Uninformed search is a class of general-purpose search algorithms which operate in a brute-force way. Uninformed search algorithms have no additional information about states or the search space other than how to traverse the tree, so they are also called blind search.

Following are the various types of uninformed search algorithms:

1. Breadth-first Search

2. Depth-first Search

3. Depth-limited Search

4. Iterative deepening depth-first search

5. Uniform cost search

6. Bidirectional Search

1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.

o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.

o The breadth-first search algorithm is an example of a general graph-search algorithm.

o Breadth-first search is implemented using a FIFO queue data structure.

Advantages:

o BFS will provide a solution if any solution exists.

o If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e. the one that requires the fewest steps.

Disadvantages:

o It requires lots of memory since each level of the tree must be saved into memory to expand the next level.

o BFS needs lots of time if the solution is far away from the root node.

Example:

In the tree structure below, we show the traversal of the tree using the BFS algorithm from root node S to goal node K. BFS traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be:

1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Figure 1:10:Breadth First Search.

Time Complexity: The time complexity of BFS can be obtained from the number of nodes traversed by BFS down to the shallowest goal node, where d is the depth of the shallowest solution and b is the branching factor (the number of successors at every state):

T(b) = 1 + b + b² + b³ + ... + b^d = O(b^d)

Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then BFS will find
a solution.

Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
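
A minimal Python sketch of BFS using a FIFO queue, as described above. The graph, node names, and function name are illustrative assumptions and do not reproduce the exact tree of Figure 1.10:

from collections import deque

def breadth_first_search(graph, start, goal):
    """Expand nodes level by level using a FIFO queue; return the first path found."""
    frontier = deque([[start]])          # queue of paths; shallowest paths come out first
    explored = set()
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                  # shallowest goal, hence fewest steps
        if node in explored:
            continue
        explored.add(node)
        for successor in graph.get(node, []):
            frontier.append(path + [successor])
    return None                          # no solution exists

# Illustrative adjacency list.
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"], "E": ["K"]}
print(breadth_first_search(graph, "S", "K"))   # ['S', 'B', 'E', 'K']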

2. Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.

o It is called the depth-first search because it starts from the root node and follows each path to its greatest
depth node before moving to the next path.

o DFS uses a stack data structure for its implementation.

o The process of the DFS algorithm is similar to the BFS algorithm.

Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.
Advantage:

o DFS requires much less memory, as it only needs to store the stack of nodes on the path from the root node to the current node.

o It can take less time than BFS to reach the goal node (if it happens to traverse the right path).

Disadvantage:

o There is the possibility that many states keep reoccurring, and there is no guarantee of finding a solution.

o The DFS algorithm searches deep down and may sometimes enter an infinite loop.

Example:

In the search tree below, we show the flow of depth-first search; it follows the order:

Root node ---> left node ---> right node.

It starts searching from root node S and traverses A, then B, then D and E. After traversing E it backtracks, as E has no other successor and the goal node has not yet been found. After backtracking it traverses node C and then G, where it terminates because it has found the goal node.

Figure:1:11:Depth first search.


Completeness: The DFS algorithm is complete within a finite state space, as it will expand every node within a bounded search tree.

Time Complexity: The time complexity of DFS is proportional to the number of nodes traversed by the algorithm. It is given by:

T(n) = 1 + n² + n³ + ... + n^m = O(n^m)

where m is the maximum depth of any node; this can be much larger than d (the depth of the shallowest solution).

Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm) (branching factor times maximum depth).

Optimal: The DFS algorithm is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.
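
A minimal Python sketch of DFS with an explicit stack, mirroring the S, A, B, D, E, backtrack, C, G behaviour described in the example above. The graph and names are illustrative assumptions:

def depth_first_search(graph, start, goal):
    """Follow each path to its greatest depth before backtracking, using a LIFO stack."""
    frontier = [[start]]                 # stack of paths
    explored = set()
    while frontier:
        path = frontier.pop()            # take the most recently added (deepest) path
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        # Push successors in reverse so the left-most child is explored first.
        for successor in reversed(graph.get(node, [])):
            if successor not in explored:
                frontier.append(path + [successor])
    return None

# Illustrative graph: S -> A, C; A -> B; B -> D, E; C -> G (goal).
graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"], "C": ["G"]}
print(depth_first_search(graph, "S", "G"))    # ['S', 'C', 'G']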

3. Depth-Limited Search Algorithm:

A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search can overcome the drawback of infinite paths in depth-first search. In this algorithm, a node at the depth limit is treated as if it has no further successors.

Depth-limited search can be terminated with two Conditions of failure:

o Standard failure value: It indicates that the problem does not have any solution.

o Cutoff failure value: It indicates that there is no solution for the problem within the given depth limit.

Advantages:

Depth-limited search is Memory efficient.

Disadvantages:

o Depth-limited search also has a disadvantage of incompleteness.

o It may not be optimal if the problem has more than one solution.

Example:
Figure 1:12:Depth Limited Search

Completeness: The DLS algorithm is complete if the solution lies within the depth limit.

Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).

Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).

Optimal: Depth-limited search can be viewed as a special case of DFS, and it is not optimal, even if ℓ > d.
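
A minimal recursive Python sketch of depth-limited search, distinguishing the standard failure value (None) from the cutoff failure value ("cutoff"). The graph and names are illustrative assumptions:

def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS that treats nodes at the depth limit as if they had no successors.
    Returns a path, or 'cutoff' if the limit was hit, or None (standard failure)."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return "cutoff"                   # cutoff failure: no solution within this limit
    cutoff_occurred = False
    for successor in graph.get(node, []):
        result = depth_limited_search(graph, successor, goal, limit - 1, path)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D"], "C": ["G"]}
print(depth_limited_search(graph, "S", "G", limit=2))   # ['S', 'C', 'G']
print(depth_limited_search(graph, "S", "G", limit=1))   # cutoff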

4. Uniform-cost Search Algorithm:

Uniform-cost search is a searching algorithm for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost. Uniform-cost search expands nodes according to their path costs from the root node. It can be used on any graph or tree where an optimal-cost path is required. A uniform-cost search algorithm is implemented with a priority queue, which gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.

Advantages:

o Uniform cost search is optimal because at every state the path with the least cost is chosen.

Disadvantages:
o It does not care about the number of steps involved in the search, only about the path cost, so the algorithm may get stuck in an infinite loop.

Example:

Figure 1:13:Uniform cost Search

Completeness:

Uniform-cost search is complete: if there is a solution, UCS will find it.

Time Complexity:

Let C* be the cost of the optimal solution and ε the cost of each step toward the goal node. Then the number of steps is C*/ε + 1; we add 1 because we start from state 0 and end at C*/ε.

Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + [C*/ε])).

Space Complexity:

By the same logic, the worst-case space complexity of uniform-cost search is O(b^(1 + [C*/ε])).

Optimal:

Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
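
A minimal Python sketch of uniform-cost search built on a priority queue (heapq), expanding the lowest-cumulative-cost node first. The weighted graph and names are illustrative assumptions:

import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the node with the lowest cumulative path cost first."""
    frontier = [(0, start, [start])]        # (path cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path               # lowest-cost path to the goal
        if node in explored:
            continue
        explored.add(node)
        for successor, step_cost in graph.get(node, []):
            if successor not in explored:
                heapq.heappush(frontier, (cost + step_cost, successor, path + [successor]))
    return None

# Illustrative weighted graph: lists of (neighbor, edge cost) pairs.
graph = {"S": [("A", 1), ("G", 12)], "A": [("B", 3), ("C", 1)], "C": [("G", 2)]}
print(uniform_cost_search(graph, "S", "G"))   # (4, ['S', 'A', 'C', 'G'])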
5. Iterative deepening depth-first Search:

The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.

It performs depth-first search up to a certain depth limit, and it keeps increasing the depth limit after each iteration until the goal node is found.

This search algorithm combines breadth-first search's ability to find shallow goals quickly with depth-first search's memory efficiency.

Iterative deepening is a useful uninformed search when the search space is large and the depth of the goal node is unknown.

Advantages:

o It combines the benefits of BFS and DFS search algorithm in terms of fast search and memory efficiency.

Disadvantages:

o The main drawback of IDDFS is that it repeats all the work of the previous phase.

Example:

The following tree structure shows iterative deepening depth-first search. The IDDFS algorithm performs several iterations until it finds the goal node. The iterations performed by the algorithm are:
Figure 1:14: Iterative Deepening depth first search

1'st Iteration-----> A
2'nd Iteration----> A, B, C
3'rd Iteration------>A, B, D, E, C, F, G
4'th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.

Completeness:

This algorithm is complete if the branching factor is finite.

Time Complexity:

Let b be the branching factor and d the depth of the shallowest goal; then the worst-case time complexity is O(b^d).

Space Complexity:

The space complexity of IDDFS is O(b×d).

Optimal:

The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.
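
A minimal Python sketch of iterative deepening: depth-limited DFS run repeatedly with limits 0, 1, 2, ... The graph below is an illustrative assumption laid out so that goal K is found at depth 3, as in the iterations listed above:

def iterative_deepening_search(graph, start, goal, max_depth=20):
    """Run depth-limited DFS with gradually increasing limits until the goal is found."""

    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None
        for successor in graph.get(node, []):
            result = dls(successor, limit - 1, path + [successor])
            if result is not None:
                return result
        return None

    for depth in range(max_depth + 1):       # 0, 1, 2, ... : the increasing depth limit
        result = dls(start, depth, [start])
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"], "D": ["H", "I"], "F": ["K"]}
print(iterative_deepening_search(graph, "A", "K"))   # ['A', 'C', 'F', 'K']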

1.4.3.2 Informed Search Algorithms


So far we have discussed uninformed search algorithms, which look through the search space for all possible solutions without any additional knowledge about the search space. An informed search algorithm, by contrast, uses knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents explore less of the search space and find the goal node more efficiently.

Informed search algorithms are more useful for large search spaces. Because they use the idea of a heuristic, they are also called heuristic search.

Heuristic function: A heuristic is a function used in informed search to identify the most promising path. It takes the current state of the agent as input and produces an estimate of how close the agent is to the goal. The heuristic might not always give the best solution, but it is designed to find a good solution in reasonable time. The heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path from the given state to the goal state. The value of the heuristic function is always non-negative.

Admissibility of the heuristic function is given as:

h(n) <= h*(n)

Here h(n) is the heuristic (estimated) cost and h*(n) is the actual cost of an optimal path from n to the goal. Hence the heuristic cost should be less than or equal to the actual cost.

Pure Heuristic Search

Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, an OPEN list and a CLOSED list. In the CLOSED list it places nodes which have already been expanded, and in the OPEN list it places nodes which have not yet been expanded.

On each iteration, the node n with the lowest heuristic value is expanded; all its successors are generated, and n is placed in the CLOSED list. The algorithm continues until a goal state is found.

In the informed search we will discuss two main algorithms which are given below:

Best First Search Algorithm(Greedy search)

A* Search Algorithm

1. Best-first Search Algorithm (Greedy Search)

Greedy best-first search algorithm always selects the path which appears best at that moment. It is the combination of
depth-first search and breadth-first search algorithms. It uses the heuristic function and search. Best-first search allows
us to take the advantages of both algorithms. With the help of best-first search, at each step, we can choose the most
promising node. In the best first search algorithm, we expand the node which is closest to the goal node and the closest
cost is estimated by heuristic function, i.e.

1. f(n)= g(n).

Were, h(n)= estimated cost from node n to the goal.

The greedy best first algorithm is implemented by the priority queue.


Best-first search algorithm:

o Step 1: Place the starting node in the OPEN list.

o Step 2: If the OPEN list is empty, stop and return failure.

o Step 3: Remove from the OPEN list the node n which has the lowest value of h(n), and place it in the CLOSED list.

o Step 4: Expand node n and generate its successors.

o Step 5: Check each successor of node n and find whether any of them is a goal node. If any successor is a goal node, return success and terminate the search; otherwise proceed to Step 6.

o Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.

o Step 7: Return to Step 2.

Advantages:

o Best-first search can switch between BFS and DFS behavior, gaining the advantages of both algorithms.

o This algorithm can be more efficient than the BFS and DFS algorithms.

Disadvantages:

o It can behave like an unguided depth-first search in the worst case.

o It can get stuck in a loop, like DFS.

o This algorithm is not optimal.

Example:

Consider the search problem below; we will traverse it using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the table below.
Figure 1.15:Best Search Algorithm

In this search example, we are using two lists which are OPEN and CLOSED Lists. Following are the iteration
for traversing the above example.
Figure 1:15(b):Best Search Algorithm

Expand node S, putting its successors in the OPEN list and S in the CLOSED list:

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]


: Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]


: Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S----> B----->F----> G

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

Complete: Greedy best-first search is incomplete in general, even if the given state space is finite.

Optimal: The greedy best-first search algorithm is not optimal.
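
A minimal Python sketch of greedy best-first search ordered purely by h(n). The heuristic values below are illustrative assumptions chosen so that the search reproduces the S ---> B ---> F ---> G path of the example above; they are not read from the figure:

import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Always expand the node whose heuristic value h(n) looks closest to the goal."""
    open_list = [(h[start], [start])]       # priority queue ordered by h(n) only
    closed = set()
    while open_list:
        _, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for successor in graph.get(node, []):
            if successor not in closed:
                heapq.heappush(open_list, (h[successor], path + [successor]))
    return None

# Illustrative graph and heuristic table.
graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["I", "G"]}
h = {"S": 13, "A": 12, "B": 4, "E": 8, "F": 2, "I": 9, "G": 0}
print(greedy_best_first_search(graph, h, "S", "G"))   # ['S', 'B', 'F', 'G']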


2. A* Search Algorithm:

A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n). It combines features of UCS and greedy best-first search, which lets it solve problems efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm expands a smaller search tree and provides an optimal result faster. A* is similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we combine both costs as f(n) = g(n) + h(n); this sum is called the fitness number.

Figure 1:16: A* Algorithm.

At each point in the search space, only the node with the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.

Algorithm of A* search:

Step 1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty; if it is empty, return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, return success and stop; otherwise:

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute its evaluation function and place it into the OPEN list.

Step 5: Else, if node n' is already in the OPEN or CLOSED list, attach it to the back-pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.

Advantages:

o The A* search algorithm performs better than many other search algorithms.

o A* search algorithm is optimal and complete.

o This algorithm can solve very complex problems.


Disadvantages:

o It does not always produce the shortest path, as it is partly based on heuristics and approximation.

o A* search algorithm has some complexity issues.

o The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.

Example
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is given in the table below, so we will calculate f(n) for each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach the node from the start state.
Here we will use the OPEN and CLOSED lists.

Figure 1:17:A* Example

Solution:
Figure 1:17(b):A* Example.

Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}

Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}

Iteration 4 gives the final result: S--->A--->C--->G, the optimal path, with cost 6.

Points to remember:

o The A* algorithm returns the path which is found first; it does not continue to search all remaining paths.

o The efficiency of the A* algorithm depends on the quality of the heuristic.

o The A* algorithm expands all nodes which satisfy the condition f(n) <= C*, where C* is the cost of the optimal solution.

Complete: A* algorithm is complete as long as:

o Branching factor is finite.

o Cost at every action is fixed.


Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:

o Admissibility: The first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.

o Consistency: The second condition, consistency, is required only for A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least-cost path.

Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function; the number of nodes expanded can be exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.

Space Complexity: The space complexity of the A* search algorithm is O(b^d).
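
A minimal Python sketch of A*, ordering the OPEN list by f(n) = g(n) + h(n). The edge costs and heuristic values are assumptions reconstructed from the iterations listed above (h(S)=5, h(A)=3, h(B)=4, h(C)=2, h(D)=6, h(G)=0), so the search returns the same optimal path S ---> A ---> C ---> G with cost 6:

import heapq

def a_star_search(graph, h, start, goal):
    """Expand the node with the lowest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, [start])]        # (f, g, path)
    closed = set()
    while open_list:
        f, g, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return g, path                      # g is the cost of the path found
        if node in closed:
            continue
        closed.add(node)
        for successor, step_cost in graph.get(node, []):
            if successor not in closed:
                g2 = g + step_cost
                heapq.heappush(open_list, (g2 + h[successor], g2, path + [successor]))
    return None

graph = {"S": [("A", 1), ("G", 10)], "A": [("B", 2), ("C", 1)], "C": [("D", 3), ("G", 4)]}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
print(a_star_search(graph, h, "S", "G"))   # (6, ['S', 'A', 'C', 'G'])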

3. Hill Climbing Algorithm in Artificial Intelligence

o Hill climbing algorithm is a local search algorithm which continuously moves in the direction of
increasing elevation/value to find the peak of the mountain or best solution to the problem. It
terminates when it reaches a peak value where no neighbor has a higher value.

o Hill climbing algorithm is a technique which is used for optimizing the mathematical problems. One
of the widely discussed examples of Hill climbing algorithm is Traveling-salesman Problem in which
we need to minimize the distance traveled by the salesman.

o It is also called greedy local search, as it only looks at its immediate good neighbor states and not beyond them.

o A node of hill climbing algorithm has two components which are state and value.

o Hill Climbing is mostly used when a good heuristic is available.

o In this algorithm, we don't need to maintain and handle the search tree or graph as it only keeps a
single current state.

Features of Hill Climbing:

Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill climbing is a variant of the generate-and-test method. The generate-and-test method produces feedback which helps decide which direction to move in the search space.

o Greedy approach: Hill-climbing algorithm search moves in the direction which optimizes the cost.

o No backtracking: It does not backtrack the search space, as it does not remember the previous states.

State-space Diagram for Hill Climbing:


The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function or cost.

On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum (or a local minimum). If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum (or a local maximum).

Figure 1:18:Different regions in the state space landscape

Local Maximum: Local maximum is a state which is better than its neighbor states, but there is also another state
which is higher than it.

Global Maximum: Global maximum is the best possible state of state space landscape. It has the highest value of
objective function.

Current state: It is a state in a landscape diagram where an agent is currently present.

Flat local maximum: It is a flat space in the landscape where all the neighbor states of current states have the same
value.

Shoulder: It is a plateau region which has an uphill edge.

Types of Hill Climbing Algorithm:

1. Simple hill climbing

2. Steepest-ascent hill climbing

1. Simple Hill Climbing:

Simple hill climbing is the simplest way to implement a hill-climbing algorithm. It evaluates one neighbor node state at a time, selects the first one which improves the current cost, and sets it as the current state. It only checks one successor state at a time; if that successor is better than the current state, it moves, otherwise it stays in the same state. This algorithm has the following features:

Less time consuming

Less optimal solution and the solution is not guaranteed

Algorithm for Simple Hill Climbing:

Step 1: Evaluate the initial state, if it is goal state then return success and Stop.

Step 2: Loop Until a solution is found or there is no new operator left to apply.

Step 3: Select and apply an operator to the current state.

Step 4: Check new state:

If it is goal state, then return success and quit.

Else if it is better than the current state then assign new state as a current state.

Else, if it is not better than the current state, then return to Step 2.

Step 5: Exit.
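
A minimal Python sketch of simple hill climbing on a toy one-dimensional objective. The objective f(x) = -(x - 3)^2 and the neighbor function are illustrative assumptions; the agent moves to the first neighbor that improves the value and stops at a peak:

def simple_hill_climbing(initial_state, neighbors, value, max_steps=1000):
    """Move to the first better neighbor; stop when no examined neighbor improves."""
    current = initial_state
    for _ in range(max_steps):
        improved = False
        for candidate in neighbors(current):
            if value(candidate) > value(current):   # accept the first improvement found
                current = candidate
                improved = True
                break
        if not improved:
            return current                          # a peak (local or global maximum)
    return current

# Toy objective: maximize f(x) = -(x - 3)^2 over integer states.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, neighbors, value))    # 3 (the maximum of this objective)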

2. Steepest-Ascent hill climbing:

The steepest-ascent algorithm is a variation of the simple hill-climbing algorithm. It examines all the neighboring nodes of the current state and selects the neighbor which is closest to the goal state. This algorithm consumes more time, as it searches through multiple neighbors.

Algorithm for Steepest-Ascent hill climbing:

Step 1: Evaluate the initial state, if it is goal state then return success and stop, else make current state as initial state.

Step 2: Loop until a solution is found or the current state does not change.

Let SUCC be a state initialized such that any successor of the current state will be better than it (SUCC will hold the best successor found so far).

For each operator that applies to the current state:

Apply the new operator and generate a new state.

Evaluate the new state.

If it is goal state, then return it and quit, else compare it to the SUCC.

If it is better than SUCC, then set new state as SUCC.

If the SUCC is better than the current state, then set current state to SUCC.

Step 3: Exit.
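
For comparison, a minimal Python sketch of steepest-ascent hill climbing on the same toy objective: all neighbors are evaluated and the best one (SUCC) is chosen, so each step costs more but moves as far uphill as possible:

def steepest_ascent_hill_climbing(initial_state, neighbors, value):
    """Examine all neighbors and move to the best one; stop when none is better."""
    current = initial_state
    while True:
        best = max(neighbors(current), key=value)   # best successor, i.e. SUCC
        if value(best) <= value(current):
            return current                          # no neighbor improves: a peak
        current = best

value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 2, x - 1, x + 1, x + 2]
print(steepest_ascent_hill_climbing(-4, neighbors, value))   # 3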

Problems in Hill Climbing Algorithm:

1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but there is another state present which is higher than the local maximum.

Solution: The backtracking technique can be a solution to the local-maximum problem in the state-space landscape. Create a list of promising paths so that the algorithm can backtrack in the search space and explore other paths as well.
Figure 1:19:Local maximum
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state have the same value; because of this, the algorithm cannot find a best direction to move. A hill-climbing search may get lost in the plateau area.

Solution: The solution for the plateau is to take bigger or smaller steps while searching. For example, randomly select a state which is far away from the current state, so that the algorithm may land in a non-plateau region.

Figure 1:20:The Flat Maximum

3. Ridges: A ridge is a special form of local maximum. It is an area which is higher than its surrounding areas but itself has a slope, and it cannot be reached in a single move.

Solution: We can improve on this problem with bidirectional search, or by moving in different directions.
Figure 1:21:The Ridge

AO* Algorithm (And-Or Algorithms)

Real-life problems can rarely be decomposed into a pure AND tree or a pure OR tree; they are usually a combination of both, so we need the AO* algorithm for searching such AND-OR graphs. The AO* algorithm represents the part of the search graph that has been explicitly generated so far.
AO* algorithm is given as follows:

Step-1: Create an initial graph with a single node (the start node).

Step-2: Traverse the graph following the current path, accumulating nodes that have not yet been expanded or solved.
Step-3: Select one of these nodes and expand it. If it has no successors, assign it the value FUTILITY; otherwise calculate f'(n) for each of its successors.
Step-4: If f'(n) = 0, mark the node as SOLVED.
Step-5: Change the value of f'(n) for the newly created node to reflect the values of its successors by back-propagation.
Step-6: Whenever possible use the most promising routes; if a node is marked as SOLVED, then mark its parent node as SOLVED.
Step-7: If the start node is SOLVED or its value is greater than FUTILITY, stop; otherwise repeat from Step-2.

Means-Ends Analysis in Artificial Intelligence

We have studied strategies which reason either forward or backward, but a mixture of the two directions is often appropriate for solving a complex, large problem. Such a mixed strategy makes it possible to first solve the major parts of a problem and then go back and solve the smaller problems that arise while combining the big parts. This technique is called Means-Ends Analysis.

Means-Ends Analysis is a problem-solving technique used in Artificial Intelligence for limiting search in AI programs.

It is a mixture of backward and forward search techniques.

The MEA technique was first introduced in 1961 by Allen Newell and Herbert A. Simon in their problem-solving computer program, which was named the General Problem Solver (GPS).
The MEA process is centered on evaluating the difference between the current state and the goal state.

How means-ends analysis Works:

The means-ends analysis process can be applied recursively to a problem. It is a strategy for controlling search in problem solving. The following are the main steps which describe how the MEA technique solves a problem:

First, evaluate the difference between the initial state and the final state.

Select the various operators which can be applied to each difference.

Apply an operator to each difference, which reduces the difference between the current state and the goal state.

In the MEA process, we detect the differences between the current state and the goal state. Once a difference is found, we can apply an operator to reduce it. But sometimes an operator cannot be applied to the current state, so we create a sub-problem of the current state in which the operator can be applied. This type of backward chaining, in which operators are selected and sub-goals are then set up to establish the preconditions of the operator, is called operator subgoaling.

Algorithm for Means-Ends Analysis:

Let the current state be CURRENT and the goal state be GOAL; then the following are the steps of the MEA algorithm.

Step 1: Compare CURRENT to GOAL; if there are no differences between them, return success and exit.

Step 2: Else, select the most significant difference and reduce it by doing the following steps until success or failure occurs.

Select a new operator O which is applicable to the current difference; if there is no such operator, signal failure.

Attempt to apply operator O to CURRENT, and make a description of two states:

i) O-START, a state in which O's preconditions are satisfied.
ii) O-RESULT, the state that would result if O were applied in O-START.

If
(FIRST-PART <-- MEA(CURRENT, O-START))
and
(LAST-PART <-- MEA(O-RESULT, GOAL))
are successful, then signal success and return the result of combining FIRST-PART, O, and LAST-PART.

The algorithm discussed above is suitable for simple problems and is not adequate for solving complex problems.
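
The MEA loop (compare, pick an operator relevant to the difference, apply it) can be illustrated with a toy Python sketch of the dot/square example that follows. States are sets of facts, each operator has preconditions, an add list, and a remove list, and the difference is simply the set of facts on which the two states disagree. The fact and operator names are illustrative assumptions, and this greedy version omits the recursive operator subgoaling of full MEA because every precondition here happens to be satisfied in order:

OPERATORS = {
    "Delete-dot": {"pre": set(),             "add": set(),             "remove": {"has-dot"}},
    "Move-in":    {"pre": set(),             "add": {"square-inside"}, "remove": {"square-outside"}},
    "Expand":     {"pre": {"square-inside"}, "add": {"square-large"},  "remove": {"square-small"}},
}

def difference(current, goal):
    """Facts on which CURRENT and GOAL disagree (full state descriptions assumed)."""
    return current ^ goal

def means_ends_analysis(current, goal, max_steps=10):
    plan = []
    for _ in range(max_steps):
        diff = difference(current, goal)
        if not diff:
            return plan                      # success: no remaining difference
        for name, op in OPERATORS.items():
            relevant = (op["add"] & diff) or (op["remove"] & diff)
            if relevant and op["pre"] <= current:
                current = (current - op["remove"]) | op["add"]
                plan.append(name)            # record the operator applied
                break
        else:
            return None                      # no applicable operator: failure
    return None

initial = {"has-dot", "square-outside", "square-small"}
goal = {"square-inside", "square-large"}
print(means_ends_analysis(initial, goal))    # ['Delete-dot', 'Move-in', 'Expand']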

Example of Means-Ends Analysis:

Let's take an example where we know the initial state and goal state, as given below. In this problem, we need to reach the goal state by finding the differences between the initial state and the goal state and applying operators.
Solution:

To solve the above problem, we will first find the differences between initial states and goal states, and for each
difference, we will generate a new state and will apply the operators. The operators we have for this problem are:

Move

Delete

Expand

1. Evaluating the initial state: In the first step, we will evaluate the initial state and will compare the initial and Goal
state to find the differences between both states.

Figure 1.21(a): Means-Ends Analysis


2. Applying the Delete operator: The first difference is that the goal state has no dot symbol, which is present in the initial state; so first we apply the Delete operator to remove this dot.

Figure 1.21(b): Means-Ends Analysis


3. Applying the Move operator: After applying the Delete operator, a new state is produced, which we again compare with the goal state. After comparing these states, another difference is that the square is outside the circle, so we apply the Move operator.
Figure 1.21(c): Means-Ends Analysis

4. Applying the Expand operator: In the third step a new state is generated, and we compare this state with the goal state. There is still one difference, the size of the square, so we apply the Expand operator, which finally generates the goal state.

Figure 1.21(d): Means-Ends Analysis

AI Basic Questions for Practice.


1.What is Artificial intelligence?

(a) Putting your intelligence into Computer

(b) Programming with your own intelligence

(c) Making a Machine intelligent

(d) Playing a Game

(e) Putting more memory into Computer

2.Which is not the commonly used programming language for AI?

(a) PROLOG (b) Java (c) LISP (d) Perl (e) Java script.

3.What is state space?

(a) The whole problem

(b) Your Definition to a problem

(c) Problem you design

(d) Representing your problem with variable and parameter


(e) A space where You know the solution.

4.A production rule consists of

(a) A set of Rule (b) A sequence of steps

(c) Both (a) and (b) (d) Arbitrary representation to problem

(e) Directly getting solution.

5.Which search method takes less memory?

(a) Depth-First Search (b) Breadth-First search

(c) Both (a) and (b) (d) Linear Search.

(e) Optimal search.

6.A heuristic is a way of trying

(a) To discover something or an idea embedded in a program

(b) To search and measure how far a node in a search tree seems to be from a goal

(c) To compare two nodes in a search tree to see if one is better than the other

(d) Only (a) and (b)

(e) Only (a), (b) and (c).

7.A* algorithm is based on

(a) Breadth-First-Search (b) Depth-First –Search

(c) Best-First-Search (d) Hill climbing.

(e) Bulkworld Problem.

8.Which is the best way to go for Game playing problem?

(a) Linear approach (b) Heuristic approach

(c) Random approach (d) Optimal approach

(e) Stratified approach.

9. How do you represent "All dogs have tails"?

(a) ∀x: dog(x) → hastail(x) (b) ∀x: dog(x) → hastail(y)

(c) ∀x: dog(y) → hastail(x) (d) ∀x: dog(x) → has tail(x)

(e) ∀x: dog(x) → has tail(y)

10.Which is not a property of representation of knowledge?

(a) Representational Verification (b) Representational Adequacy

(c) Inferential Adequacy (d) Inferential Efficiency

(e) Acquisitional Efficiency.


Answers

1. Answer : (c)

Reason : AI is about making things work automatically through a machine without human effort; the machine produces the result from only the input given by a human. That means the system or machine acts intelligently, as required.

2. Answer : (d)

Reason : Perl is used as a scripting language and is not of much use in AI practice. All the others are commonly used to build AI programs.

3. Answer : (d)

Reason : State space is concerned with the problem itself: when you try to solve a problem, you have to design a mathematical structure for it, which can only be done through variables and parameters. Example: you are given a 4-gallon jug and a 3-gallon jug; neither has measuring markers on it. You have to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug? Here the state space can be defined as the set of ordered pairs of integers (x, y) such that x = 0, 1, 2, 3 or 4 and y = 0, 1, 2 or 3, where x represents the number of gallons in the 4-gallon jug and y represents the quantity of water in the 3-gallon jug.

4. Answer : (c)

Reason : When you are trying to solve a problem, you should design how to get a step-by-step solution subject to the constraint conditions of your problem, e.g. the chessboard problem.

5. Answer : (a)

Reason : Depth-first search takes less memory, since only the nodes on the current path are stored, whereas in breadth-first search all of the tree generated so far must be stored.

6. Answer : (e)

Reason : In a heuristic approach we discover certain idea and use heuristic functions to search for a goal and predicates
to compare nodes.

7. Answer : (c)

Reason : Best-first search gives the idea of optimization and quick choice of path, and these characteristics are present in the A* algorithm.

8. Answer : (b)

Reason : We use the heuristic approach because it avoids the brute-force computation of looking at hundreds of thousands of positions, e.g. a chess competition between a human and an AI-based computer.

9. Answer : (a)

Reason : We represent the statement in mathematical logic by taking x as a dog which has a tail. We cannot use two variables x and y for the same object (the dog which has the tail). The symbol ∀ represents "for all".

10. Answer : (a)

Reason : There is no such property as representational verification; verification comes under representational adequacy.
