Module-01 AIML 21CS54
Module-01
Introduction | Problem‐solving
Chapter: - Introduction: What is AI? Foundations and History of AI
1. Define Artificial Intelligence and list the task domains of Artificial Intelligence.
Ans: -
"It is a branch of computer science by which we can create intelligent machines which can
behave like a human, think like humans, and able to make decisions."
Task domains of Artificial Intelligence:
• Perception
  o Machine vision
  o Speech understanding
  o Touch (tactile or haptic) sensation
• Robotics
• Natural Language Processing
  o Natural Language Understanding
  o Speech Understanding
  o Language Generation
  o Machine Translation
• Planning
• Expert Systems
• Machine Learning
• Theorem Proving
• Symbolic Mathematics
• Game Playing
2. State and explain algorithm for Best First Search Algorithm with an
example.
Ans: -
Best First Search (Informed Search)
In breadth-first search (BFS) and depth-first search (DFS), when we are at a node we may visit any adjacent node next; both explore paths blindly, without considering any cost function.
The idea of Best First Search is to use an evaluation function to decide which adjacent node is most promising, and then explore it.
Best First Search falls under the category of Heuristic Search or Informed Search.
We start from source "S" and search for goal "I" using the given costs and Best First Search. A priority queue pq initially contains S.
We remove S from pq and add the unvisited neighbors of S to pq.
pq now contains {A, C, B} (C is placed before B because C has the lesser cost). We remove A from pq and add the unvisited neighbors of A to pq.
pq now contains {C, B, E, D}. We remove C from pq and add the unvisited neighbors of C to pq.
pq now contains {B, H, E, D}. We remove B from pq and add the unvisited neighbors of B to pq.
pq now contains {H, E, D, F, G}. We remove H from pq.
Since our goal "I" is a neighbor of H, we return.
Analysis: The worst-case time complexity of Best First Search is O(n log n), where n is the number of nodes.
3. A Water Jug Problem: You are given two jugs, a 4-gallon one and a 3-gallon one, a pump
which has unlimited water which you can use to fill the jug, and the ground on which
water may be poured. Neither jug has any measuring markings on it. How can you get
exactly 2 gallons of water in the 4-gallon jug.
a. Write down the production rules for the above problem
b. Write any one solution to the above problem
Ans: -
State: (x, y),
where x represents the quantity of water in the 4-gallon jug (x = 0, 1, 2, 3, or 4) and y represents the quantity of water in the 3-gallon jug (y = 0, 1, 2, or 3).
Start state: (0, 0).
One solution:
(0, 0) – Start state.
(0, 3) – Rule 2, Fill the 3-gallon jug.
(3, 0) – Rule 9, Pour all the water from the 3-gallon jug into the 4-gallon jug.
(3, 3) – Rule 2, Fill the 3-gallon jug.
(4, 2) – Rule 7, Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full.
(0, 2) – Rule 5, Empty the 4-gallon jug on the ground.
(2, 0) – Rule 9, Pour all the water from the 3-gallon jug into the 4-gallon jug.
Production systems provide appropriate structures for performing and describing search
processes.
A production system has four basic components: A set of rules each consisting of a left
side that determines the applicability of the rule and a right side that describes the operation
to be performed if the rule is applied.
A database of current facts established during the process of inference.
A control strategy that specifies the order in which the rules will be compared with facts in
the database and also specifies how to resolve conflicts in selection of several rules or
selection of more facts.
A rule firing module.
The production rules operate on the knowledge database.
Each rule has a precondition—that is, either satisfied or not by the knowledge database. If
the precondition is satisfied, the rule can be applied.
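The production-system search described above can be sketched as a breadth-first search over the (x, y) state space. This is an illustration, not part of the original answer; the six successor moves informally encode the fill, empty, and pour rules.

```python
from collections import deque

def water_jug(goal=2, cap_x=4, cap_y=3):
    """Breadth-first search over (x, y) states, where x is the amount in
    the 4-gallon jug and y the amount in the 3-gallon jug; returns the
    shortest sequence of states putting `goal` gallons in the larger jug."""
    def successors(x, y):
        return {
            (cap_x, y),                   # fill the 4-gallon jug
            (x, cap_y),                   # fill the 3-gallon jug
            (0, y),                       # empty the 4-gallon jug
            (x, 0),                       # empty the 3-gallon jug
            # pour 3 -> 4 until the 4-gallon jug is full or the 3 is empty
            (min(x + y, cap_x), y - (min(x + y, cap_x) - x)),
            # pour 4 -> 3 until the 3-gallon jug is full or the 4 is empty
            (x - (min(x + y, cap_y) - y), min(x + y, cap_y)),
        }
    frontier = deque([((0, 0), [(0, 0)])])
    seen = {(0, 0)}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == goal:                     # exactly `goal` gallons in the 4-gallon jug
            return path
        for s in successors(x, y):
            if s not in seen:
                seen.add(s)
                frontier.append((s, path + [s]))
    return None
```

Because breadth-first search finds a shortest path, the returned solution uses six pours, matching the hand-derived sequence above.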
The figure above shows part of the game tree for the tic-tac-toe game. The following are some key
points of the game:
Explanation:
From the initial state, MAX has 9 possible moves, as he plays first. MAX places x and MIN
places o, and the players move alternately until we reach a leaf node where one player has
three in a row or all squares are filled.
At each node, both players compute the minimax value, which is the best achievable utility
against an optimal adversary.
Suppose both players know tic-tac-toe well and play optimally: each player does his best to
prevent the other from winning, with MIN acting against MAX.
So, in the game tree, we have a layer of MAX moves and a layer of MIN moves, and each layer
is called a ply. MAX places x, then MIN places o to prevent MAX from winning, and the game
continues until a terminal node is reached.
At a terminal node either MIN wins, MAX wins, or it is a draw. This game tree is the whole
search space of possibilities when MIN and MAX play tic-tac-toe, taking turns alternately.
7. Explain the Hill Climbing issues which terminate the algorithm without finding a goal state,
or leave it in a state from which no better state can be generated.
Ans: -
Hill Climbing is a heuristic search used for mathematical optimization problems in the field
of Artificial Intelligence.
1. Local maximum
It is a state which is better than its neighboring states; however, there exists a state
which is better than it (the global maximum).
This state is better because here the value of the objective function is higher than at its
neighbors.
2. Global maximum
It is the best possible state in the state space diagram. This is because at this state the
objective function has its highest value.
3. Plateau/flat local maximum
It is a flat region of state space where neighboring states have the same value.
4. Ridge
It is a region which is higher than its neighbors but itself has a slope. It is a special
kind of local maximum.
5. Current state
The region of state space diagram where we are currently present during the search.
6. Shoulder
It is a plateau that has an uphill edge.
Simple hill climbing examines the neighboring nodes one by one and selects the first
neighboring node which improves the current cost as the next node.
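The behaviour just described, taking the first improving neighbour and stopping when none exists, can be sketched as follows; the integer objective f(x) = -(x - 3)^2 and the +/-1 neighbour function are illustrative choices, not from the text.

```python
def hill_climb(start, neighbors, value):
    """Simple hill climbing: move to the first neighbor that improves the
    objective; stop when no neighbor is better (a maximum, possibly local)."""
    current = start
    while True:
        improved = False
        for n in neighbors(current):
            if value(n) > value(current):
                current = n               # take the first improving move
                improved = True
                break
        if not improved:
            return current                # no better neighbor: stop here

# Illustrative objective: f(x) = -(x - 3)^2, maximized at x = 3.
result = hill_climb(0,
                    neighbors=lambda x: [x - 1, x + 1],
                    value=lambda x: -(x - 3) ** 2)
print(result)                             # climbs 0 -> 1 -> 2 -> 3
```

On a function with several peaks, the same routine would stop at whichever local maximum is reachable from the start state, which is exactly the local-maximum issue described above.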
8. Apply the AO* algorithm to the following graph and find the final path.
Ans: -
Algorithm:
Step 3: Select a node n that is both on OPEN and a member of T0. Remove it
from OPEN and place it in CLOSED.
Step 4: If n is a terminal goal node, then label n as solved and label all the
ancestors of n as solved. If the starting node is marked as solved, then success and
exit.
Step 6: Expand n. Find all its successors and find their h (n) value, push them
into OPEN.
Step 8: Exit.
Implementation:
Step 1:
In the above graph, the solvable nodes are A, B, C, D, E, F and the unsolvable
nodes are G, H. Take A as the starting node. So, place A into OPEN.
Branches of AI:
1. Logical AI
In general, the facts of the specific situation in which it must act, and its goals are
all represented by sentences of some mathematical logical language.
The program decides what to do by inferring that certain actions are appropriate for
achieving its goals.
2. Search
Artificial Intelligence programs often examine large numbers of possibilities; for
example, moves in a chess game, or inferences by a theorem-proving program.
Discoveries are frequently made about how to do this more efficiently in various
domains.
3. Pattern Recognition
When a program makes observations of some kind, it is often programmed to compare
what it sees with a pattern.
For example, a vision program may try to match a pattern of eyes and a nose in a
scene in order to find a face.
More complex patterns appear in natural language text, in a chess position, or in the
history of some event.
4. Representation
Usually, languages of mathematical logic are used to represent the facts about the
world.
5. Inference
From some facts, others can be inferred.
Mathematical logical deduction is sufficient for some purposes, but new methods
of non-monotonic inference have been added to the logic since the 1970s.
The simplest kind of non-monotonic reasoning is default reasoning in which a
conclusion is to be inferred by default.
Ans: -
The Tic-Tac-Toe game consists of a nine-element vector called BOARD; it represents the
numbers 1 to 9 in three rows.
An element contains the value 0 for blank, 1 for X, and 2 for O. A MOVETABLE vector
consists of 19,683 elements (3^9), where each element is a nine-element vector.
The contents of the vector are chosen to help the algorithm.
The algorithm makes moves by pursuing the following:
1. View the vector as a ternary number and convert it to a decimal number.
2. Use the decimal number as an index into MOVETABLE and access the vector.
3. Set BOARD to this vector, indicating how the board looks after the move.
This approach is efficient in time, but it has several disadvantages: it takes a lot of space,
and computing the move-table entries requires great effort. The method is specific to this
game and cannot be extended to other games.
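Step 1, viewing BOARD as a ternary number, can be sketched as follows; the helper and its name are illustrative, not part of the original program.

```python
def board_index(board):
    """Interpret BOARD (nine entries: 0 blank, 1 X, 2 O) as a ternary
    number and return the decimal index used to look up MOVETABLE."""
    idx = 0
    for v in board:          # most significant ternary digit first
        idx = idx * 3 + v
    return idx

print(board_index([2] * 9))  # the largest index, 3**9 - 1 = 19682
```

The all-O board maps to index 19,682, which is why MOVETABLE needs 19,683 entries.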
In an improved program, each board square holds 2 for blank, 3 for X, and 5 for O.
POSSWIN(p) returns 0 if player p cannot win on the next move, and otherwise returns
the number of the square that gives a winning move.
It checks each line using products: 3*3*2 = 18 means a win for X, 5*5*2 = 50 means
a win for O, and the winning move is the blank square of that line.
GO(n) makes a move to square n, setting BOARD[n] to 3 or 5.
This algorithm is more involved and takes longer, but it is more efficient in storage,
which compensates for the longer time.
It depends on the programmer's skill.
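A sketch of POSSWIN under the 2/3/5 square encoding implied by the products quoted above; it uses 0-based indices internally but returns the 1-based square numbers used in the text, with 0 meaning no immediate win.

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
         (0, 4, 8), (2, 4, 6)]               # diagonals

def posswin(board, p):
    """Return the number (1-9) of the square that wins for player p on
    the next move, or 0 if p cannot win immediately.  Squares hold
    2 (blank), 3 (X), or 5 (O); two marks plus a blank give a line
    product of 3*3*2 = 18 for X, or 5*5*2 = 50 for O."""
    target = 18 if p == 'X' else 50
    for i, j, k in LINES:
        if board[i] * board[j] * board[k] == target:
            for idx in (i, j, k):
                if board[idx] == 2:          # the blank completes the win
                    return idx + 1           # 1-based square number
    return 0
```

Because 2, 3, and 5 are distinct primes, each line product identifies its contents uniquely, which is the point of this encoding.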
11. Write the algorithms for breadth first search and depth-first search. Enlist the
advantages of each?
Ans: -
Advantages of BFS:
• If a solution exists, BFS is guaranteed to find it, and the first solution found uses the
fewest steps, since it searches level by level; there are no useless paths.
Disadvantages of BFS:
• All of the generated vertices must be stored in memory, so it consumes more memory.
Advantages of DFS:
• It consumes less memory, since only the nodes on the current path need to be stored, and
it finds distant elements (far from the initial state) in less time.
Disadvantages of DFS:
• It may go deep down a long (possibly infinite) path and is not guaranteed to find a
shortest solution.
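The two strategies differ only in the frontier data structure, which the following sketch makes explicit: a FIFO queue for BFS versus a LIFO stack for DFS. Storing whole paths on the frontier is an illustrative formulation.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: FIFO frontier, explores level by level, so
    the first path returned has the fewest edges."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()          # take the OLDEST path first
        if path[-1] == goal:
            return path
        for nbr in graph.get(path[-1], []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

def dfs(graph, start, goal):
    """Depth-first search: LIFO frontier, follows one branch as deep as
    possible before backtracking; uses less memory but the path found
    need not be shortest."""
    frontier = [[start]]
    visited = {start}
    while frontier:
        path = frontier.pop()              # take the NEWEST path first
        if path[-1] == goal:
            return path
        for nbr in graph.get(path[-1], []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None
```

On the diamond graph A-B-D / A-C-D, BFS returns the path through B (generated first at its level) while DFS returns the path through C (pushed last, popped first), illustrating how the frontier discipline alone changes the result.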
Foundations of AI:
• Philosophy
• Mathematics
History of AI
The first work that is now generally recognized as AI was done by Warren McCulloch and
Walter Pitts (1943).
They drew on three sources: knowledge of the basic physiology and function of neurons in
the brain; a formal analysis of propositional logic due to Russell and Whitehead; and
Turing's theory of computation.
Agents can be grouped into five classes based on their degree of perceived intelligence
and capability.
All these agents can improve their performance and generate better action over the
time. These are given below:
1) Simple reflex agent
• Simple reflex agents are the simplest agents. They take decisions on the basis of
the current percept and ignore the rest of the percept history.
• The simple reflex agent does not consider any part of the percept history during its
decision and action process.
2) Model-based reflex agent
o The model-based agent can work in a partially observable environment and track the
situation.
o A model-based agent has two important factors: the model (knowledge of how things
happen in the world) and the internal state (a representation of the current situation
based on percept history).
3) Goal-based agents
The knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
The agent needs to know its goal, which describes desirable situations.
Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
They choose an action, so that they can achieve the goal.
These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not. Such consideration of different scenarios is called
searching and planning, which makes an agent proactive.
4) Utility-based agent
These agents are similar to the goal-based agent but provide an extra component of
utility measurement which makes them different by providing a measure of success at
a given state.
Utility-based agents act based not only on goals but also on the best way to achieve the goal.
The Utility-based agent is useful when there are multiple possible alternatives, and an
agent has to choose in order to perform the best action.
The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
5) Learning agent
A learning agent in AI is the type of agent that can learn from its past experiences; it
has learning capabilities.
It starts to act with basic knowledge and is then able to act and adapt automatically
through learning.
o Critic: The learning element takes feedback from the critic, which describes how well
the agent is doing with respect to a fixed performance standard.
The initial state that the agent starts in. For example, the initial state for our agent in Romania might be
described as In(Arad).
The state space forms a directed network or graph in which the nodes are states and the
links between nodes are actions. (The map of Romania shown in the figure can be interpreted
as a state-space graph if we view each road as standing for two driving actions, one in each
direction.)
Formulating problems
we proposed a formulation of the problem of getting to Bucharest in terms of the initial
state, actions, transition model, goal test, and path cost. This formulation seems reasonable,
but it is still a model—an abstract mathematical description—and not the real thing.
All these considerations are left out of our state descriptions because they are irrelevant to
the problem of finding a route to Bucharest.
The process of removing detail from a representation is called abstraction.
Can we be more precise about defining the appropriate level of abstraction? Think of the
abstract states and actions we have chosen as corresponding to large sets of detailed world
states and detailed action sequences.
EXAMPLE PROBLEMS
The problem-solving approach has been applied to a vast array of task environments.
We list some of the best known here, distinguishing between toy and real-world
problems.
A toy problem is intended to illustrate or exercise various problem-solving methods.
A real-world problem is one whose solutions people actually care about. Such problems
tend not to have a single agreed-upon description, but we can give the general flavor of
their formulations.
Toy problems
This can be formulated as a problem as follows:
States: The state is determined by both the agent location and the dirt locations. The agent
is in one of two locations, each of which might or might not contain dirt. Thus, there are
2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states.
8-puzzle
States: A state description specifies the location of each of the eight tiles and
the blank in one of the nine squares.
Initial state: Any state can be designated as the initial state. Note that any
given goal can be reached from exactly half of the possible initial states
(Exercise 3.4).
Actions: The simplest formulation defines the actions as movements of the
blank space Left, Right, Up, or Down. Different subsets of these are possible
depending on where the blank is.
Transition model: Given a state and action, this returns the resulting state;
for example, if we apply Left to the start state in Figure 3.4, the resulting state
has the 5 and the blank switched.
Goal test: This checks whether the state matches the goal configuration shown
in Figure (Other goal configurations are possible.)
Path cost: Each step costs 1, so the path cost is the number of steps in the
path. What abstractions have we included here? The actions are abstracted to
their beginning and final states, ignoring the intermediate locations where the
block is sliding.
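The transition model above can be sketched as a successor function. The tuple encoding of the board (nine entries read row by row, 0 for the blank) is an assumption of this sketch.

```python
def successors(state):
    """Successor states of an 8-puzzle configuration.  `state` is a tuple
    of 9 entries read row by row, with 0 for the blank; an action slides
    the blank Left, Right, Up, or Down when that stays on the board."""
    i = state.index(0)                     # position of the blank
    row, col = divmod(i, 3)
    result = []
    for action, dr, dc in [('Left', 0, -1), ('Right', 0, 1),
                           ('Up', -1, 0), ('Down', 1, 0)]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:      # action applicable here
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]        # swap the blank with the tile
            result.append((action, tuple(s)))
    return result
```

A state with the blank in the center has all four actions available; a corner state has only two, which is the "different subsets of these are possible" point above.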
The goal of the 8-queens problem is to place eight queens on a chessboard such that no queen
attacks any other. (A queen attacks any piece in the same row, column or diagonal.) Figure
Our final toy problem was devised by Donald Knuth (1964) and illustrates how infinite
state spaces can arise.
Knuth conjectured that, starting with the number 4, a sequence of factorial, square root, and
floor operations can reach any desired positive integer.
For example, we can reach 5 from 4 as follows:
floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!)))))) = 5.
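The claim can be checked numerically. This sketch assumes the standard worked example: (4!)! = 24!, followed by five square roots and a floor.

```python
import math

def knuth_reach_5():
    """Check Knuth's example: start from 4, take the factorial twice
    ((4!)! = 24!), apply five square roots, then floor the result."""
    x = float(math.factorial(math.factorial(4)))   # 24!, well within double range
    for _ in range(5):
        x = math.sqrt(x)
    return math.floor(x)

print(knuth_reach_5())                             # → 5
```

The intermediate values shrink from roughly 6.2e23 down to about 5.5, so the final floor lands exactly on 5; an infinite family of such expressions is what makes this state space infinite.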
Real-world problems
Route-finding algorithms are used in a variety of applications. Some, such as Web sites
and in-car systems that provide driving directions, are relatively straightforward extensions
of the Romania example.
Consider the airline travel problems that must be solved by a travel-planning Web site:
States: Each state obviously includes a location (e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight segment) may depend on previous
segments, their fare bases, and their status as domestic or international, the state must
record extra information about these "historical" aspects.
Initial state: This is specified by the user's query.
Actions: Take any flight from the current location, in any seat class, leaving after the
current time, leaving enough time for within-airport transfer if needed.
Transition model: The state resulting from taking a flight will have the flight's destination
as the current location and the flight's arrival time as the current time.
Goal test: Are we at the final destination specified by the user?
Path cost: This depends on monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage
awards, and so on.
Touring problems are closely related to route-finding problems, but with an important
difference. Consider, for example, the problem "Visit every city in the figure at least once,
starting and ending in Bucharest."
So the initial state would be In(Bucharest), Visited({Bucharest}); a typical intermediate
state would be In(Vaslui), Visited({Bucharest, Urziceni, Vaslui}); and the goal test would
check whether the agent is in Bucharest and all 20 cities have been visited.
The traveling salesperson problem (TSP) is a touring problem in which each city must
be visited exactly once.
The aim is to find the shortest tour.
The problem is known to be NP-hard, but an enormous amount of effort has been expended
to improve the capabilities of TSP algorithms.
In addition to planning trips for traveling salespersons, these algorithms have been used for
tasks such as planning movements of automatic circuit-board drills and of stocking
machines on shop floors.
Channel routing finds a specific route for each wire through the gaps between the cells.
These search problems are extremely complex, but definitely worth solving.
Another real-world problem is protein design, in which the goal is to find a sequence of
amino acids that will fold into a three-dimensional protein with the right properties to
cure some disease.
Figure: - Partial search trees for finding a route from Arad to Bucharest. Nodes that have been
expanded are shaded; nodes that have been generated but not yet expanded are outlined in bold;
nodes that have not yet been generated are shown in faint dashed lines.
n. STATE: the state in the state space to which the node corresponds;
n. PARENT: the node in the search tree that generated this node;
n. ACTION: the action that was applied to the parent to generate the node;
n. PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to
the node, as indicated by the parent pointers.
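These four components can be sketched as a small Python class; the attribute names follow the text, while the `solution` method and its name are illustrative choices.

```python
class Node:
    """Search-tree node bundling the four fields described above:
    n.STATE, n.PARENT, n.ACTION, and n.PATH-COST (g(n))."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost         # g(n), cost from the initial state

    def solution(self):
        """Follow parent pointers back to the root to recover the
        sequence of actions that reached this node."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

# Hypothetical usage with Romania-style states and action names:
root = Node('Arad')
child = Node('Sibiu', parent=root, action='GoSibiu', path_cost=140)
grand = Node('Fagaras', parent=child, action='GoFagaras', path_cost=140 + 99)
print(grand.solution(), grand.path_cost)
```

Once the goal node is reached, the parent pointers alone are enough to reconstruct both the action sequence and its total cost.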
uninformed search (also called blind search). The term means that the strategies have no
additional information about states beyond that provided in the problem definition.
All they can do is generate successors and distinguish a goal state from a non-goal state.
All search strategies are distinguished by the order in which nodes are expanded.
Breadth-first search
Breadth-first search is a simple strategy in which the root node is expanded first, then
all the successors of the root node are expanded next, then their successors, and so on.
In general, all the nodes are expanded at a given depth in the search tree before any
nodes at the next level are expanded.
Uniform-cost search
When all step costs are equal, breadth-first search is optimal because it always expands the
shallowest unexpanded node.
By a simple extension, we can find an algorithm that is optimal with any step-cost
function: uniform-cost search.
Instead of expanding the shallowest node, uniform-cost search expands the node n with the
lowest path cost g(n).
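A minimal sketch of uniform-cost search follows; the cost-annotated map fragment (Sibiu to Bucharest via Fagaras or via Rimnicu Vilcea and Pitesti) is an assumed example with the familiar Romania step costs.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with the
    lowest path cost g(n).  `graph` maps a node to a list of
    (neighbor, step_cost) pairs."""
    frontier = [(0, start, [start])]       # (g, node, path)
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                   # goal test on expansion, not generation
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (g + cost, nbr, path + [nbr]))
    return None

# Assumed fragment of the Romania map with standard step costs.
romania = {'Sibiu': [('Fagaras', 99), ('RimnicuVilcea', 80)],
           'Fagaras': [('Bucharest', 211)],
           'RimnicuVilcea': [('Pitesti', 97)],
           'Pitesti': [('Bucharest', 101)]}
print(uniform_cost_search(romania, 'Sibiu', 'Bucharest'))
```

The route through Rimnicu Vilcea and Pitesti costs 80 + 97 + 101 = 278, beating the shallower route through Fagaras at 99 + 211 = 310, which is exactly why uniform-cost search must not stop at the first goal it generates.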
Depth-first search
Depth-limited search
The embarrassing failure of depth-first search in infinite state spaces can be alleviated
by supplying depth-first search with a predetermined depth limit.
that is, nodes at the depth limit are treated as if they have no successors. This approach
is called depth-limited search.
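A recursive sketch of depth-limited search, assuming an acyclic successor graph; it returns a path to the goal, the marker 'cutoff' when the depth limit was hit, or None on plain failure. The return-value convention is an illustrative choice.

```python
def depth_limited_search(graph, node, goal, limit):
    """Depth-first search with a predetermined depth limit: nodes at the
    limit are treated as if they had no successors."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'                    # limit reached before the goal
    cutoff = False
    for nbr in graph.get(node, []):
        result = depth_limited_search(graph, nbr, goal, limit - 1)
        if result == 'cutoff':
            cutoff = True
        elif result is not None:
            return [node] + result         # goal found below this node
    return 'cutoff' if cutoff else None
```

Distinguishing 'cutoff' from None matters: a cutoff means a deeper limit might still succeed, while None means the goal is simply unreachable, and iterative deepening relies on exactly this distinction.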