AI Important Questions
AND-OR Graph
The figure above is an example of a simple AND-OR graph. In it, the main goal of getting a car is broken down into smaller subproblems or tasks that must all be accomplished together to achieve the goal. Alternatively, a single task accomplishes the main goal on its own: either steal a car, or use your own money to purchase one. The AND symbol marks the AND part of the graph and indicates that all subproblems joined by it must be resolved before the preceding (parent) node or problem can be finished.
AO* is an informed, knowledge-based search algorithm in which the start state and the target state are already known, and the best path is identified using heuristics. Because the search is informed, the algorithm's time complexity is considerably reduced. The AO* algorithm is far more effective at searching AND-OR trees than the A* algorithm.
In the following example, the value given below each node is its heuristic value, i.e. h(n). Each edge length is taken as 1.
Step 1: (step diagram omitted)

Step 3:
f(C⇢H+I) is selected as the lowest-cost path, and its heuristic is left unchanged because it matches the actual cost. Nodes H and I are marked as solved because their heuristic values are 0, but the path A⇢D still has to be evaluated because it contains an AND arc. Once f(A⇢C+D) is solved, the tree becomes a solved tree.
In simple words, the main flow of this algorithm is to compute the heuristic values at level 1 first, then at level 2, and then propagate the updated values upward toward the root node. In the tree diagram above, all the values have been updated accordingly.
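To make the bottom-up update concrete, here is a minimal Python sketch of the cost-revision step of AO*, assuming a small hand-coded AND-OR graph; the node names, graph structure, and heuristic table are illustrative, not the values from the figure above.

# Each entry maps a node to its child groups: a group with two
# children is an AND arc, a group with one child is an OR arc.
graph = {
    'A': [['B'], ['C', 'D']],   # A -> B (OR), or A -> C AND D
    'B': [['E'], ['F']],
    'C': [['G'], ['H', 'I']],
    'D': [['J']],
}
h = {'E': 7, 'F': 9, 'G': 3, 'H': 0, 'I': 0, 'J': 0}  # leaf heuristics
EDGE_COST = 1   # every edge has length 1, as in the example

def revise(node):
    """Return the revised cost of `node`, updating h bottom-up."""
    if node not in graph:        # leaf: cost is its heuristic value
        return h[node]
    # An AND group costs the sum over its children; the node's revised
    # cost is the cheapest group, which propagates toward the root.
    best = min(sum(EDGE_COST + revise(child) for child in group)
               for group in graph[node])
    h[node] = best
    return best

print(revise('A'), h)   # root cost after all updates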
Algorithm:

function KB_AGENT(percept) returns an action
    persistent: KB, a knowledge base
                t, a counter, initially 0, indicating time

    TELL(KB, MAKE_PERCEPT_SENTENCE(percept, t))
    action = ASK(KB, MAKE_ACTION_QUERY(t))
    TELL(KB, MAKE_ACTION_SENTENCE(action, t))
    t = t + 1
    return action
Given a percept, the agent adds it to the KB, then asks the KB for the best action, and then tells the KB that it has in fact taken that action.
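As a rough illustration, this loop can be sketched in Python with a toy knowledge base; the class names are hypothetical and TELL/ASK are stubbed, since a real agent would run logical inference inside ask().

class KnowledgeBase:
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        # TELL: store a sentence in the knowledge base.
        self.sentences.append(sentence)

    def ask(self, query):
        # ASK: a real KB would infer the best action; this stub
        # returns a fixed action purely for illustration.
        return "Forward"

class KBAgent:
    def __init__(self):
        self.kb = KnowledgeBase()
        self.t = 0  # time counter, initially 0

    def act(self, percept):
        self.kb.tell(("percept", percept, self.t))     # MAKE_PERCEPT_SENTENCE
        action = self.kb.ask(("best-action", self.t))  # MAKE_ACTION_QUERY
        self.kb.tell(("action", action, self.t))       # MAKE_ACTION_SENTENCE
        self.t += 1
        return action

agent = KBAgent()
print(agent.act(["Stench", None, None, None, None]))   # -> Forward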
The Wumpus World in AI is a classic problem demonstrating ideas such as search algorithms, planning, and decision-making. It is a straightforward environment in which an agent (a computer program or a robot) must traverse a grid world filled with obstacles, hazards, and a dangerous creature called the wumpus, a fictional monster that kills the player in the game. The agent must search the world for a safe route to the treasure without falling into pits or being killed by the wumpus.
Introduction
The Wumpus World in AI is a classic problem based on reasoning with knowledge. The scenario entails a world comprising a grid of chambers, some of which contain pits, obstacles, or the wumpus. The agent's mission is to locate the gold and escape the world without being killed by the wumpus or falling into a pit. The wumpus is a fierce creature that can detect the agent and kill it if they are in the same chamber. The agent can only perform a few actions, such as moving forward, turning, shooting an arrow, and grabbing the gold.
The Wumpus World in AI is an important research problem because it offers a simple yet
challenging setting for testing and developing intelligent agents. The problem has
uncertainty, partial observability, and numerous objectives, making it a good test for different
AI techniques like search algorithms, reinforcement learning, and planning. Real-world applications of the Wumpus World problem include designing intelligent agents for autonomous vehicles, robotics, and game development.
In the following parts, we will examine the game rules and the various AI methods used to
solve the Wumpus World problem. We will also discuss how the problem is pertinent in real-
world applications and the difficulties in designing intelligent agents to deal with the
Wumpus World.
The Wumpus World in AI is a basic yet difficult AI environment that demonstrates search
algorithms, planning, and decision-making concepts. It is a simulated world comprising a
grid of rooms where an agent must negotiate obstacles, hazards, and a dangerous creature
known as the wumpus. The agent's main goal is to find a safe way to the treasure and escape
the world without falling into pits or being killed by the wumpus.
To build an intelligent agent for the Wumpus World, we must first define the problem's PEAS description: Performance measure, Environment, Actuators, and Sensors.
1. Performance:
o +1000 reward points if the agent comes out of the cave with the gold.
o Being eaten by the wumpus or falling into a pit results in a -1000 point penalty.
o Each move costs -1 point, and using the arrow costs -10 points.
o The game is over if either the agent dies or it comes out of the cave.
2. Environment:
o A four-by-four grid of chambers.
o The agent begins in square [1, 1], facing right.
o Wumpus and gold locations are selected randomly except for the first
square [1,1].
o Except for the first square, each square in the tunnel has a 0.2 chance of being
a pit.
3. Actuators: They are the actions that the agent can take to interact with the world. The agent in the Wumpus World in AI can carry out the following actions:
o Left turn
o Right turn
o Move forward
o Grab
o Release
o Shoot
4. Sensors: They are how the agent perceives its surroundings. The agent's sensors in the Wumpus World provide the following information:
o If the agent is in a chamber directly (not diagonally) adjacent to the wumpus, it will perceive a stench.
o If the agent is in a room directly adjacent to a pit, it will perceive a breeze.
o The agent will perceive a glitter in the chamber containing the gold.
o The agent will perceive a bump if it walks into a wall.
o When the wumpus is shot, it lets out a horrifying scream that can be heard throughout the cave.
o These percepts can be represented as a five-element list with a distinct indicator for each sensor.
o For example, if the agent detects a stench and a breeze but no glitter, bump, or scream, the percept is [Stench, Breeze, None, None, None].
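A small sketch of how this five-element percept list might drive a decision, assuming the order [Stench, Breeze, Glitter, Bump, Scream]; the rule set is a simplification for illustration only.

percept = ["Stench", "Breeze", None, None, None]
stench, breeze, glitter, bump, scream = percept

if glitter:
    action = "Grab"      # the gold is in this square
elif breeze:
    action = "Back"      # a pit may be adjacent; retreat to a safe square
else:
    action = "Forward"
print(action)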
The Wumpus world in AI is a cave consisting of a 4 x 4 grid of chambers linked by passageways, so there are a total of 16 chambers connected to one another. We have a knowledge-based agent who will advance through this world. The cave has a chamber containing a beast named the wumpus, who eats anyone who enters it. The agent can shoot the wumpus, but the agent has only one arrow. Some rooms in the Wumpus world are bottomless pits, and if the agent falls into one of them, he will be stuck there eternally. The exciting aspect of this cave is that one of its rooms holds a heap of gold. The agent's objective is therefore to locate the gold and climb out of the cave without being eaten by the wumpus or falling into a pit. The agent is rewarded if he returns with the gold, but punished if he is eaten by the wumpus or falls into a pit.
Some elements can assist the agent in navigating the cave. These elements are listed below:
• The rooms adjacent to the wumpus chamber are stinky, so the agent will perceive a stench there.
• The rooms adjacent to a pit have a breeze, so if the agent gets close to a pit, he will notice the breeze.
• Glitter will be present in a chamber if the room contains gold.
• The wumpus can be killed when the agent faces it and shoots the arrow; the dying wumpus screams horribly, and the scream can be heard throughout the cave.
Exploring the Wumpus World
We will now explore the Wumpus world in AI and use logical reasoning to determine how
the agent will reach its objective.
Agent's first step: Initially, the agent is in the first room, square [1,1], which we already know is safe, so we add the symbol OK to diagram (a) below to indicate that the room is safe. The agent is represented by the symbol A, a breeze by B, the glitter or gold by G, a visited chamber by V, a pit by P, and the wumpus by W.
The agent does not detect any breeze or stench in room [1,1], implying that the neighboring squares are also safe.
Agent's second step: The agent now has to proceed forward, so it will go to either [1,2] or [2,1]. Assume the agent moves to room [2,1]. The agent detects a breeze in this chamber, indicating that a pit is nearby. The pit can be in [3,1] or [2,2], so we put the symbol P? in those squares to indicate that they may be pit rooms.
Now the agent will pause and reflect before making any risky move, and it returns to room [1,1]. Since the agent has visited rooms [1,1] and [2,1], we mark those squares with V.
Agent's third step: At the third step, the agent proceeds to room [1,2], which is safe. The agent detects a stench in [1,2], indicating that a wumpus is nearby. By the game's rules, the wumpus cannot be in [1,1], nor can it be in [2,2] (the agent detected no stench at [2,1]). The agent therefore deduces that the wumpus is in room [1,3]. There is no breeze in the present square, implying there is no pit in [2,2]; with no pit and no wumpus, [2,2] is safe, so we label it OK and the agent moves on to [2,2].
Agent's fourth step: Because there is no stench or breeze in room [2,2], suppose the agent chooses to move to [2,3]. The agent detects glitter in room [2,3], so it grabs the gold and climbs out of the cave.
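The core inference in this walkthrough, marking a square's neighbors safe when it yields neither stench nor breeze, can be sketched as follows; the grid size, percept records, and helper names are illustrative.

percepts = {                # percepts observed at visited squares
    (1, 1): set(),          # no stench, no breeze
    (2, 1): {"Breeze"},
    (1, 2): {"Stench"},
}

def neighbors(sq):
    x, y = sq
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 1 <= x + dx <= 4 and 1 <= y + dy <= 4]

safe = set(percepts)        # every visited square is known safe
for square, sensed in percepts.items():
    if not sensed:          # no stench and no breeze: neighbors are OK
        safe.update(neighbors(square))

print(sorted(safe))         # [(1, 1), (1, 2), (2, 1)]

Ruling out the wumpus in [2,2] from the missing stench at [2,1], as the agent does in the third step, would need a further elimination rule over the candidate squares.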
o Decision Tree is a Supervised learning technique that can be used for both
classification and Regression problems, but mostly it is preferred for solving
Classification problems. It is a tree-structured classifier, where internal nodes
represent the features of a dataset, branches represent the decision rules and each
leaf node represents the outcome.
o In a Decision tree, there are two nodes, which are the Decision Node and Leaf
Node. Decision nodes are used to make any decision and have multiple branches,
whereas Leaf nodes are the output of those decisions and do not contain any further
branches.
o The decisions or tests are performed on the basis of the features of the given dataset.
o It is a graphical representation for getting all the possible solutions to a
problem/decision based on given conditions.
o It is called a decision tree because, similar to a tree, it starts with the root node, which
expands on further branches and constructs a tree-like structure.
o In order to build a tree, we use the CART algorithm, which stands for Classification
and Regression Tree algorithm.
o A decision tree simply asks a question and, based on the answer (Yes/No), further splits the tree into subtrees.
o The diagram below explains the general structure of a decision tree:
Note: A decision tree can contain categorical data (YES/NO) as well as numeric data.
There are various algorithms in Machine learning, so choosing the best algorithm for the given
dataset and problem is the main point to remember while creating a machine learning model.
Below are the two reasons for using the Decision tree:
o Decision Trees usually mimic human thinking ability while making a decision, so it is
easy to understand.
o The logic behind the decision tree can be easily understood because it shows a tree-like
structure.
Root Node: Root node is from where the decision tree starts. It represents the entire
dataset, which further gets divided into two or more homogeneous sets.
Leaf Node: Leaf nodes are the final output node, and the tree cannot be segregated further
after getting a leaf node.
Splitting: Splitting is the process of dividing the decision node/root node into sub-nodes
according to the given conditions.
Branch/Sub Tree: A tree formed by splitting the tree.
Pruning: Pruning is the process of removing the unwanted branches from the tree.
Parent/Child node: The root node of the tree is called the parent node, and other nodes
are called the child nodes.
In a decision tree, to predict the class of a given record, the algorithm starts from the root node of the tree. It compares the value of the root attribute with the corresponding record (real dataset) attribute and, based on the comparison, follows a branch and jumps to the next node. At the next node, the algorithm again compares the attribute value with the sub-nodes and moves further. It continues this process until it reaches a leaf node of the tree. The complete process can be better understood using the below algorithm:
o Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
o Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).
o Step-3: Divide S into subsets that contain the possible values of the best attribute.
o Step-4: Generate the decision tree node that contains the best attribute.
o Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified further; these final nodes are the leaf nodes.
Example: Suppose a candidate has a job offer and wants to decide whether he should accept it or not. To solve this problem, the decision tree starts with the root node (the Salary attribute, chosen by ASM). The root node splits further into the next decision node (distance from the office) and one leaf node based on the corresponding labels. The next decision node further splits into one decision node (Cab facility) and one leaf node. Finally, the decision node splits into two leaf nodes (Accepted offer and Declined offer). Consider the below diagram:
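A toy sketch of this job-offer tree as nested comparisons; the thresholds and attribute names here are assumptions for illustration only.

def decide(candidate):
    if candidate["salary"] < 50000:       # root node: Salary (chosen by ASM)
        return "Declined offer"           # leaf node
    if candidate["distance_km"] > 30:     # decision node: distance from office
        if candidate["cab_facility"]:     # decision node: Cab facility
            return "Accepted offer"       # leaf node
        return "Declined offer"           # leaf node
    return "Accepted offer"               # leaf node

print(decide({"salary": 60000, "distance_km": 40, "cab_facility": True}))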
While implementing a decision tree, the main issue that arises is how to select the best attribute for the root node and for the sub-nodes. To solve such problems there is a technique called the Attribute Selection Measure, or ASM. With this measurement, we can easily select the best attribute for the nodes of the tree. The two popular ASM techniques are:
o Information Gain
o Gini Index
1. Information Gain:
o Information gain can be calculated using the formula below:

Information Gain = Entropy(S) − [(Weighted Avg) × Entropy(each feature)]

Entropy(S) = −P(yes) log2 P(yes) − P(no) log2 P(no)

Where:
o S = total number of samples
o P(yes) = probability of yes
o P(no) = probability of no
2. Gini Index:
o Gini index is a measure of impurity or purity used while creating a decision tree in the
CART (Classification and Regression Tree) algorithm.
o An attribute with a low Gini index should be preferred over one with a high Gini index.
o It only creates binary splits, and the CART algorithm uses the Gini index to create
binary splits.
o Gini index can be calculated using the below formula:

Gini Index = 1 − Σj (Pj)^2
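To make the two measures concrete, here is a small Python sketch computing entropy and Gini impurity for a binary node:

from math import log2

def entropy(p_yes):
    p_no = 1 - p_yes
    return -sum(p * log2(p) for p in (p_yes, p_no) if p > 0)

def gini(p_yes):
    p_no = 1 - p_yes
    return 1 - (p_yes ** 2 + p_no ** 2)

# A 50/50 node is maximally impure; skewed nodes score lower on both.
for p in (0.1, 0.5, 0.9):
    print(f"P(yes)={p}: entropy={entropy(p):.3f}, gini={gini(p):.3f}")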
Pruning is the process of deleting unnecessary nodes from a tree in order to obtain the optimal decision tree.
A too-large tree increases the risk of overfitting, while a small tree may not capture all the important features of the dataset. A technique that decreases the size of the learning tree without reducing accuracy is therefore known as pruning. There are mainly two types of tree pruning technology used: Cost Complexity Pruning and Reduced Error Pruning.
Advantages of the Decision Tree:
o It is simple to understand, as it follows the same process a human follows while making a decision in real life.
o It can be very useful for solving decision-related problems.
o It helps to think about all the possible outcomes of a problem.
o There is less requirement for data cleaning compared to other algorithms.
Now we will implement the decision tree using Python. For this, we will use the dataset "user_data.csv", which we have used in previous classification models. By using the same dataset, we can compare the decision tree classifier with other classification models such as KNN, SVM, logistic regression, etc.
The steps also remain the same, as sketched below:
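A minimal sketch of those steps, assuming "user_data.csv" has Age and EstimatedSalary as the features and Purchased as the label (the column names are assumptions carried over from the earlier classification models):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# 1. Load the dataset and pick the features and label.
data = pd.read_csv("user_data.csv")
X = data[["Age", "EstimatedSalary"]].values
y = data["Purchased"].values

# 2. Split into training and test sets, then scale the features.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# 3. Fit a CART-style tree using entropy (information gain) as criterion.
classifier = DecisionTreeClassifier(criterion="entropy", random_state=0)
classifier.fit(X_train, y_train)

# 4. Predict on the test set and evaluate.
y_pred = classifier.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))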
K-Nearest Neighbors (KNN) algorithm is a supervised machine learning method employed to tackle classification and regression problems. Evelyn Fix and Joseph Hodges developed this algorithm in 1951, which was subsequently expanded by Thomas Cover. The article explores the fundamentals, workings, and implementation of the KNN algorithm. KNN is one of the most basic yet essential classification algorithms in machine learning. It
belongs to the supervised learning domain and finds intense application in pattern
recognition, data mining, and intrusion detection.
It is widely used in real-life scenarios since it is non-parametric, meaning it does not make any underlying assumptions about the distribution of the data (as opposed to other algorithms such as GMM, which assume a Gaussian distribution of the given data). We are given some prior data (also called training data), which classifies coordinates into groups identified by an attribute.
As an example, consider the following table of data points containing two features:
(Figure: KNN algorithm working visualization)
Now, given another set of data points (also called testing data), allocate these points to a
group by analysing the training set. Note that the unclassified points are marked as ‘White.’
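To ground the idea, here is a minimal sketch of KNN classification on toy 2-D points; the coordinates, labels, and k value are illustrative, not the actual points from the figure.

from collections import Counter
import math

train = [((1.0, 1.5), "Blue"), ((2.0, 1.0), "Blue"),
         ((5.0, 5.0), "Red"), ((6.0, 4.5), "Red")]

def knn_classify(point, train, k=3):
    # Sort training points by Euclidean distance to the query point.
    nearest = sorted(train, key=lambda item: math.dist(point, item[0]))
    # Majority vote among the k nearest neighbors.
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

print(knn_classify((5.5, 5.0), train))   # -> "Red"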