
5th Semester Regular Examination: 2023-24

Subject: Artificial Intelligence and Machine Learning


Course: B.Tech
Q1) Answer the following questions.

a) Which agent is the most powerful agent in artificial intelligence?

Ans. In AI, learning agents are considered the most powerful type of agent because they can improve
their performance over time through experience. Unlike simple reflex agents or model-based agents,
learning agents use historical data to update their knowledge and decision-making strategies, making
them adaptable and more robust in complex environments.

b) Define a rational agent in the context of Artificial Intelligence.

Ans. A rational agent in AI is an agent that acts to achieve the best possible outcome or, when there
is uncertainty, the best expected outcome. It takes actions based on its knowledge, environment, and
capabilities to maximize its performance measure. Rationality is context-dependent, meaning an
agent's rational behavior depends on its goals, knowledge, and the information it receives from its
environment.

c) Differentiate between informed and uninformed search with an example.

Ans. Informed Search (Heuristic Search): Uses additional information (heuristics) to find
a solution more efficiently. A popular example is the A* algorithm, which uses a heuristic
function to estimate the cost of reaching the goal from a given node, allowing it to explore
promising paths first.

Example: In a map navigation problem, the distance between two cities could be used as a
heuristic to guide the search.

Uninformed Search (Blind Search): Does not use any domain-specific knowledge to
search for a solution. It explores nodes without guidance. A common example is Breadth-
First Search (BFS), which systematically explores all nodes level by level.

Example: In solving a maze, BFS explores all paths evenly until it finds the exit.
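A minimal Python sketch of the uninformed case, BFS on a toy maze (the grid layout here is an illustrative assumption; BFS returns the first, and therefore shortest, path found):

from collections import deque

def bfs_maze(grid, start, goal):
    """Uninformed breadth-first search on a grid maze.
    grid: list of strings, '#' = wall; start/goal: (row, col) tuples."""
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path                      # first path found is the shortest
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None                              # no exit reachable

maze = ["S..#",
        ".#.#",
        "...G"]
print(bfs_maze(maze, (0, 0), (2, 3)))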

d) State two differences between propositional logic and first order logic.

Ans. Expressiveness:

• Propositional Logic: Deals with simple, atomic propositions that are either true or
false. It cannot express relationships between objects.
• First-Order Logic (FOL): More expressive, allowing quantification over objects and
the relationships between them (e.g., "All humans are mortal").

Variables:

• Propositional Logic: Does not include variables, only propositions (e.g., P, Q).
• First-Order Logic: Includes variables and quantifiers like "for all" (∀) and "there
exists" (∃), which enable it to represent more complex statements (e.g., ∀x (Human(x)
→ Mortal(x))).

e) What do you mean by uncertainty in reasoning?

Ans. Uncertainty in reasoning refers to situations where the outcome of actions or the truth of
statements is not known with certainty due to incomplete, noisy, or ambiguous information. In AI,
probabilistic reasoning methods like Bayesian networks or fuzzy logic are often used to handle
uncertainty, allowing the system to make predictions or decisions with a certain level of confidence,
rather than relying on binary true/false logic.

f) Differentiate between worst ordering and ideal ordering in alpha-beta pruning.

Ans. Worst Ordering: Occurs when alpha-beta pruning is applied, but the nodes are
evaluated in an order that provides minimal pruning. This results in exploring more nodes,
leading to a performance close to that of a basic minimax algorithm without pruning.

Example: In a chess game tree, if the least favorable moves are evaluated first, the algorithm
has to explore a larger portion of the tree.

Ideal Ordering: Occurs when nodes are evaluated in the most efficient order, allowing
maximum pruning of branches. This drastically reduces the number of nodes to be evaluated,
significantly improving the algorithm’s efficiency.

Example: If the most favorable moves are evaluated first, large portions of the tree are
pruned, speeding up the decision-making process.

g) Explain with example about unification.

Ans. Unification is the process of making two logical expressions identical by finding a
substitution for variables that allows this. It is used in first-order logic and Prolog for pattern
matching in logical reasoning.

Example: Consider two logical statements:

 P(x, y) and P(Alice, z)

Unification can occur by substituting x = Alice and y = z, making both expressions equivalent
to P(Alice, z). This allows the inference mechanism to treat them as the same logical
statement.
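A small Python sketch of unification for this example (the convention that lowercase single-letter strings are variables, and the omission of the occurs check, are simplifying assumptions):

def is_var(t):
    # Convention for this sketch: lowercase single-letter strings are variables.
    return isinstance(t, str) and len(t) == 1 and t.islower()

def unify(x, y, subst=None):
    """Return a substitution (dict) making x and y identical, or None."""
    if subst is None:
        subst = {}
    if x == y:
        return subst
    if is_var(x):
        return unify_var(x, y, subst)
    if is_var(y):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):          # unify argument lists pairwise
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None                            # mismatched constants/predicates

def unify_var(v, t, subst):
    if v in subst:
        return unify(subst[v], t, subst)
    subst = dict(subst)
    subst[v] = t                           # no occurs check in this sketch
    return subst

# P(x, y) unified with P(Alice, z)  ->  {'x': 'Alice', 'y': 'z'}
print(unify(('P', 'x', 'y'), ('P', 'Alice', 'z')))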
h) Is a Bayesian network supervised or unsupervised? Justify your answer.

Ans. A Bayesian network can be used in both supervised and unsupervised learning contexts,
depending on how it is applied.

• Supervised learning: When the Bayesian network is used to model the conditional
probabilities between features (input variables) and a known target (output variable),
it functions in a supervised manner. For instance, when training a Bayesian network
to predict the likelihood of a disease given symptoms, the network is trained on
labeled data (symptoms and known outcomes).
• Unsupervised learning: A Bayesian network can also be used to model the joint
probability distribution of a set of variables without any predefined labels or outputs.
In this case, it functions as an unsupervised learning tool, learning the relationships
between variables.

Thus, Bayesian networks are flexible and can be applied to both supervised and
unsupervised learning tasks, depending on the problem setup and whether labeled data is
available.

i) What are the concepts of statistical learning?

Ans. Statistical learning is a framework for understanding and modeling data using statistics
and probability. Key concepts include:

1. Training Data: A dataset used to train the model by fitting the relationships between
input variables (features) and the target variable (label).
2. Model: A mathematical or computational construct that represents the relationship
between input variables and output. Examples include linear regression models,
decision trees, and neural networks.
3. Loss Function: A measure of how well the model’s predictions match the actual
outcomes. Common loss functions include Mean Squared Error (MSE) for regression
tasks and Cross-Entropy Loss for classification.
4. Regularization: Techniques to prevent overfitting by penalizing overly complex
models (e.g., L1, L2 regularization).
5. Bias-Variance Tradeoff: Describes the trade-off between the model’s complexity
(variance) and the error due to simplifying assumptions (bias). Finding the right
balance minimizes prediction error.
6. Generalization: The ability of a model to perform well on unseen data (test data), not
just on the training data. A well-generalized model avoids overfitting.
7. Probability Distributions: Statistical learning often involves understanding
probability distributions (e.g., Gaussian, Bernoulli) to describe data and uncertainty.
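A short numpy sketch illustrating several of these concepts together, namely training data, a linear model, an MSE loss, and L2 regularization (the data is synthetic and the closed-form ridge solution is one standard choice, used here as an illustrative assumption):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                      # training features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=50)   # noisy labels

lam = 0.1                                         # regularization strength
# Closed-form ridge solution: w = (X^T X + lam*I)^-1 X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

mse = np.mean((X @ w - y) ** 2)                   # squared-error loss on training data
print("estimated weights:", w.round(2), " training MSE:", round(mse, 4))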

j) Differentiate between knowledge representation and search.

Ans. 1. Knowledge Representation:

o Definition: The process of encoding information about the world in a form
that an AI system can use to reason and make decisions. It involves designing
the structure of data, facts, and relationships so that the system can interpret
and manipulate it.
o Goal: To represent the real world in a format that allows inference and
decision-making.
o Example: Propositional logic, semantic networks, and ontologies are
examples of knowledge representation techniques used to represent
relationships, categories, and facts about the world.

2. Search:

o Definition: The process of exploring possible solutions to a problem by
systematically checking and evaluating possible states or paths. Search
algorithms are used when the solution is not directly available and needs to be
found by exploring a space of possible options.
o Goal: To find the best path or solution to a problem by navigating through a
space of possible states or actions.
o Example: Algorithms like Depth-First Search (DFS), Breadth-First Search
(BFS), and the A* algorithm are examples of search techniques.

Key Differences:

• Knowledge Representation focuses on how knowledge is structured and stored,
while Search deals with the exploration and retrieval of knowledge or solutions.
• Knowledge Representation involves static structures for facts and relationships,
whereas Search is a dynamic process of traversing through possibilities to solve a
problem.

PART-II

Q2) Only Focussed-Short Answer Type Questions

a) How are Artificial Intelligence and Machine Learning related?

Ans. Artificial Intelligence (AI) and Machine Learning (ML) are closely related fields, but
they are not the same.

• AI is the broader concept of creating intelligent systems that can perform tasks
typically requiring human intelligence. This includes reasoning, learning, decision-
making, and understanding natural language. AI covers a wide range of techniques,
from rule-based systems to robotics.
• ML is a subset of AI that focuses on developing algorithms that allow machines to
learn from data. Rather than being explicitly programmed for specific tasks, ML
models learn from patterns in data and improve their performance over time. ML is a
key technique that powers many AI systems.

Relationship: AI is the overall goal (building intelligent systems), and ML is one of the key
tools used to achieve that goal by enabling systems to learn and adapt.
b) Explain with an example what Means-Ends Analysis is.

Ans. Means-Ends Analysis (MEA) is a problem-solving technique used to reduce the
difference between the current state and the desired goal state by selecting actions (means)
that bring the agent closer to the goal (ends). It breaks down the problem into sub-goals and
then identifies steps to reduce the gap between the current state and the goal state.

Example: Suppose you want to travel from City A to City B, but you don’t have a direct
flight.

• Current State: You are in City A.
• Goal State: You need to be in City B.
• Means: The actions or means might include booking a flight from City A to a nearby
City C, and then another flight from City C to City B.

The MEA process involves finding the intermediate steps to reduce the difference between
where you are now (City A) and your final goal (City B).
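A toy Python sketch of this flight example (city names and routes are illustrative assumptions; because location is the only difference here, MEA degenerates to a simple operator search that repeatedly picks a flight reducing the gap to the goal):

flights = {"A": ["C"], "C": ["B"], "B": []}   # assumed available routes

def mea(current, goal, plan=None, visited=None):
    """Pick flights (means) that reduce the difference to the goal (end)."""
    plan = plan or []
    visited = visited or {current}
    if current == goal:                       # no difference left: done
        return plan
    for nxt in flights[current]:              # try each applicable operator
        if nxt not in visited:
            result = mea(nxt, goal, plan + [f"fly {current}->{nxt}"],
                         visited | {nxt})
            if result is not None:
                return result
    return None

print(mea("A", "B"))   # ['fly A->C', 'fly C->B']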

c) Describe the concept of a multi-agent system and elucidate the advantages and
challenges associated with coordination and interaction among multiple agents within
an environment.

Ans. A Multi-Agent System (MAS) consists of multiple agents that interact with each other
within a shared environment. Each agent in the system is autonomous, meaning it can make
its own decisions based on its perception of the environment. The agents in an MAS may
work collaboratively, competitively, or independently, depending on the system’s design and
the problem it aims to solve.

Advantages:

1. Scalability: MAS systems can handle complex tasks by distributing the workload among
multiple agents, which can improve efficiency and performance.
2. Robustness: Since the system consists of multiple agents, the failure of one agent may not
critically affect the overall system.
3. Parallelism: Agents can work in parallel, which can significantly reduce the time required to
solve certain problems.

Challenges:

1. Coordination: Ensuring that agents coordinate their actions effectively to achieve a common
goal can be complex, especially in collaborative settings.
2. Communication Overhead: If agents need to share information frequently, it can lead to
high communication overhead, affecting performance.
3. Conflicting Objectives: In competitive environments, agents may have conflicting goals,
making it difficult to achieve global optimization.
4. Resource Management: Allocating and sharing resources between agents can be a challenge
in resource-limited environments.
d) What is best-first search? Explain its advantages over BFS and DFS with a suitable example.

Ans. Best-first search is a search algorithm that selects the most promising node to
explore based on a given evaluation function (often a heuristic). It combines elements of both
depth-first and breadth-first search by using a priority queue to prioritize nodes that are
likely to lead to the goal.

Advantages over BFS and DFS:

• BFS explores all nodes level by level, which can be inefficient when the search space
is large.
• DFS explores nodes deep into the search tree but might get stuck in deep branches
without finding a solution.

Best-first search addresses these issues by prioritizing nodes that seem closer to the goal,
reducing the number of explored nodes and speeding up the search process.

Example: In pathfinding, if you're looking for the shortest path from a start point to a goal,
BFS would explore all paths evenly, while DFS might get stuck exploring one path too
deeply. Best-first search, however, uses a heuristic like the estimated distance to the goal
(such as Euclidean distance in a grid) to prioritize nodes closer to the goal, which can find a
solution faster.
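A minimal greedy best-first search sketch in Python (the graph and heuristic values are illustrative assumptions; the frontier is ordered purely by the heuristic h):

import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the smallest
    heuristic estimate h(n). graph maps node -> list of neighbors."""
    frontier = [(h[start], start, [start])]   # priority queue ordered by h
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

# Illustrative graph and straight-line-distance heuristic (assumed values).
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))   # ['S', 'A', 'G']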

e) Is the A* algorithm able to find a suitable solution from the state space graph of a problem? Justify your answer with a suitable explanation.

Ans. Yes, the A* algorithm is designed to find the optimal solution in a state space graph if
the heuristic used is admissible (it never overestimates the cost to reach the goal) and
consistent (for every node n and neighbor n′, h(n) ≤ cost(n, n′) + h(n′), i.e., the estimate
never drops by more than the step cost).

Justification:

• A* combines heuristics (to estimate the cost to the goal) with the actual cost so far.
The cost function in A* is f(n) = g(n) + h(n), where:
o g(n) is the cost from the start node to node n.
o h(n) is the heuristic estimate of the cost from n to the goal.

Because A* expands the most promising nodes first (those with the lowest f(n)), it efficiently
finds the shortest path. If the heuristic is admissible, A* guarantees finding the optimal
solution.
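A compact A* sketch in Python (the graph, step costs, and heuristic values are illustrative assumptions; the heuristic shown happens to equal the true cost-to-goal for this graph, so it is admissible):

import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the smallest f(n) = g(n) + h(n).
    graph maps node -> list of (neighbor, step_cost) pairs."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + h[nbr], ng, nbr, path + [nbr]))
    return None

# Illustrative graph with step costs and an admissible heuristic (assumed values).
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))   # (['S', 'A', 'B', 'G'], 4)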

f) Differentiate between forward chaining and backward chaining with an example.
Ans. Forward Chaining:

• Process: Starts with known facts and applies inference rules to extract more data until
the goal is reached.
• Use Case: Used in data-driven systems where you accumulate knowledge until a
conclusion can be reached.
• Example: In a medical diagnosis system, we start with observed symptoms (facts)
and apply rules to determine the possible diseases (conclusion).

Backward Chaining:

• Process: Starts with the goal and works backward by applying inference rules to see if
the facts can support the goal.
• Use Case: Used in goal-driven systems where the system tries to prove a hypothesis.
• Example: To diagnose a disease, the system starts with a possible disease (goal) and
checks if the symptoms (facts) support the diagnosis.
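A minimal forward-chaining sketch in Python for the medical example (the rules and symptoms are illustrative assumptions, not real diagnostic knowledge; backward chaining would instead start from "flu" and check whether its premises hold):

rules = [
    ({"fever", "cough"}, "flu"),
    ({"flu", "fatigue"}, "rest_needed"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)         # derive a new fact
                changed = True
    return facts

print(forward_chain({"fever", "cough", "fatigue"}, rules))
# -> {'fever', 'cough', 'fatigue', 'flu', 'rest_needed'}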

g) Explain the working of alpha-beta pruning with an example. How is it different from the minimax algorithm?

Ans. Alpha-beta pruning is an optimization technique for the minimax algorithm that
reduces the number of nodes evaluated in the search tree. It works by "pruning" branches that
cannot influence the final decision, effectively ignoring parts of the tree that do not need to be
explored.

• Alpha represents the best value that the maximizer can guarantee.
• Beta represents the best value that the minimizer can guarantee.
• As the algorithm traverses the tree, if it finds that a certain move will lead to a worse
outcome than a previously evaluated move, it stops exploring that branch.

Example: In a game tree, suppose we are evaluating moves in a chess game. If one move
clearly leads to a better outcome for the opponent, alpha-beta pruning will stop evaluating
further moves along that branch because the opponent would never allow us to reach that
position.

Difference from Minimax:

• Minimax Algorithm: Explores all nodes of the game tree, which can be
computationally expensive.
• Alpha-Beta Pruning: Skips unnecessary nodes and thus reduces the search space,
making the process faster without affecting the outcome.

Both algorithms aim to find the optimal move, but alpha-beta pruning does so more
efficiently by eliminating unpromising branches.
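A short Python sketch of minimax with alpha-beta pruning on an explicit two-ply tree (the leaf values are illustrative; a node is either a leaf score or a list of children):

import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning. A node is either a leaf score (int)
    or a list of child nodes."""
    if isinstance(node, int):
        return node
    if maximizing:
        best = -math.inf
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:         # opponent won't allow this branch:
                break                 # prune remaining children
        return best
    else:
        best = math.inf
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Max chooses between two min nodes; after [3, 5] yields 3, the second branch
# is pruned as soon as the leaf 2 is seen (its min is already <= 3).
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, -math.inf, math.inf, True))   # 3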

i) Compare and contrast propositional logic and first-order logic in terms of expressive
power and representational capabilities. Provide examples to highlight scenarios where
each logic type is more suitable for knowledge representation.
Ans. Propositional Logic (PL):

• Expressive Power: Propositional logic deals with simple, atomic propositions that
can either be true or false. It does not allow for the expression of relations between
objects or the use of variables. For example, in propositional logic, a fact like "The
sky is blue" would be represented as a single atomic proposition, such as P, where P
stands for "The sky is blue".
• Representational Capabilities: PL can only handle specific facts and relationships
between propositions using logical connectives (AND, OR, NOT). It lacks the ability
to represent objects, properties, and relationships between objects.
• Example: If we want to represent the facts "It is raining" and "The ground is wet",
we might use two propositions:
o R: It is raining.
o W: The ground is wet.
o We can then form a logical expression such as: R → W (If it rains, the ground
will be wet).

First-Order Logic (FOL):

• Expressive Power: FOL, also known as predicate logic, extends propositional logic
by allowing the use of quantifiers (e.g., ∀ for "for all" and ∃ for "there exists") and
predicates that can express relations between objects. It allows for more detailed
representations involving objects, properties, and relations between objects.
• Representational Capabilities: FOL can express general rules and relationships
involving objects. For example, it can represent "All humans are mortal" using
variables and quantifiers: ∀x (Human(x) → Mortal(x)).
• Example: To represent the facts "All humans are mortal" and "Socrates is a
human", we can use:
o ∀x (Human(x) → Mortal(x)) (All humans are mortal).
o Human(Socrates) (Socrates is a human).
o From this, we can deduce Mortal(Socrates) (Socrates is mortal).

• Propositional Logic is suitable for simple reasoning tasks involving specific facts
without the need for representing relations between different objects or using
variables. It is appropriate when the knowledge base consists of concrete, unchanging
propositions.
• First-Order Logic is more powerful and suitable when dealing with complex systems
that involve relationships between multiple objects, or when general rules need to be
expressed. It is useful in domains like artificial intelligence, where reasoning about
objects and their properties is essential (e.g., "If a person is a parent, they have a
child").

j) What is the difference between neural net learning and genetic learning? Explain with
suitable examples.

Ans. Neural Network Learning:

• Overview: Neural network learning is a process of training an artificial neural
network using algorithms like backpropagation to adjust the weights between neurons
based on input-output pairs (supervised learning). It is inspired by the human brain,
where neurons are connected by synapses.
• Learning Mechanism: Neural networks learn by minimizing the error between the
predicted output and the actual output through an iterative process of weight updates.
The error is propagated back through the network during training.
• Example: A neural network can be trained to recognize handwritten digits. Given an
image of a digit, the network adjusts its weights to minimize the difference between
the predicted label (e.g., "3") and the actual label.
• Strengths: Neural networks are particularly good at learning complex patterns in
data, making them highly suitable for image recognition, speech recognition, and
language processing tasks.

Genetic Learning:

• Overview: Genetic learning, inspired by the process of natural selection, is a type of
evolutionary algorithm. In this approach, a population of potential solutions to a
problem evolves over time, with "fitter" solutions being more likely to reproduce and
generate new solutions (offspring).
• Learning Mechanism: Genetic algorithms apply operators like mutation, crossover,
and selection to evolve a population of solutions. Over successive generations, the
population ideally converges on an optimal or near-optimal solution.
• Example: Genetic algorithms can be used to solve optimization problems, such as
finding the shortest path in a traveling salesperson problem (TSP). Each possible
route is treated as an individual, and the population of routes evolves to minimize the
total distance traveled. (A minimal sketch is given after the key differences below.)
• Strengths: Genetic algorithms are well-suited for problems where the search space is
large and complex, especially when no clear gradient or objective function can be
defined.

Key Differences:

• Learning Process: Neural networks learn by adjusting the weights of connections
between neurons using gradient descent, while genetic algorithms evolve a population
of solutions using principles of selection, crossover, and mutation.
• Application: Neural networks are typically used for pattern recognition and
classification tasks, whereas genetic algorithms are often used for optimization and
search problems where an explicit solution model is not available.
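A minimal genetic-algorithm sketch in Python (the "one-max" fitness function, population size, mutation rate, and generation count are all illustrative assumptions standing in for a real optimization objective):

import random

random.seed(1)
N, BITS = 20, 10
fitness = lambda ind: sum(ind)        # one-max: count the 1s in the bit string

def crossover(a, b):
    cut = random.randrange(1, BITS)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.05):
    return [1 - g if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(N)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: N // 2]           # selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(N - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print("best individual:", best, "fitness:", fitness(best))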

k) What are the characteristics of Rote Learning? Is it good or bad? Justify your
answer.

Ans. Rote learning is a memorization technique based on repetition. Key characteristics
include:

1. Memorization without Understanding: Information is learned by repeated rehearsal without
necessarily understanding the underlying concepts or relationships.
2. Lack of Application: Learners often cannot apply the memorized information to new or
novel situations, as they are focused on recall rather than comprehension.
3. Surface-Level Learning: The learning focuses on surface-level details rather than deeper
conceptual understanding. Learners may remember facts or formulas but may struggle to
explain their significance.
4. Short-Term Retention: Rote learning often leads to short-term memory retention, making it
easy to forget the material after a certain period, especially if it's not regularly revisited.
5. No Transferability: The knowledge gained through rote learning is typically not transferable
to different contexts. For example, memorizing the steps to solve a math problem doesn’t
necessarily mean the learner can solve similar problems in new scenarios.
6. Repetition-Based: Rote learning relies heavily on constant repetition, which may take time
and effort without necessarily building deeper understanding.

Rote learning can be both good and bad depending on the context:

• Good:
o Quick Memorization: Rote learning is useful for tasks that require memorization of
facts, formulas, or procedures that don’t necessarily require deep understanding. For
example, memorizing multiplication tables, vocabulary, or periodic table elements
can be helpful for quick recall.
o Efficiency in Recalling Information: In some cases, being able to quickly recall
information is necessary (e.g., memorizing phone numbers, dates, or historical facts
for a quiz).
o Foundation for Further Learning: In some cases, rote learning can provide a
foundation for deeper learning. For example, memorizing basic math operations can
serve as a foundation for understanding more complex mathematical concepts.
• Bad:
o Lack of Understanding: Rote learning discourages critical thinking and
understanding. It often leads to a shallow grasp of information, making it difficult to
apply knowledge to new contexts.
o Poor Long-Term Retention: Information learned by rote may be easily forgotten
over time, especially if not reinforced by deeper learning methods.
o Not Suitable for Complex Concepts: For complex subjects (e.g., problem-solving,
reasoning, or creative thinking), rote learning is generally ineffective because it does
not promote conceptual understanding or flexible thinking.

Justification: Rote learning has its place in certain situations where memorization is the
primary goal, but for more complex tasks that require problem-solving, understanding, or
application of knowledge, rote learning is not ideal. It is best used in combination with more
meaningful learning strategies, like active learning or conceptual understanding.

l) Explain the Maximum-likelihood parameter learning model with an example.

Ans. Maximum Likelihood Estimation (MLE) is a statistical method used to estimate the
parameters of a probabilistic model. It finds the values of the parameters that maximize the
likelihood function, i.e., it finds the set of parameters that makes the observed data most
probable.
Steps in Maximum Likelihood Estimation:

1. Define the Likelihood Function: The likelihood function represents the probability of
observing the given data as a function of the model parameters.
2. Maximize the Likelihood: The goal is to find the parameter values that maximize this
likelihood function. Often, instead of maximizing the likelihood directly, we maximize the
log-likelihood (because it simplifies the math).
3. Estimate Parameters: The parameters that maximize the likelihood are considered the most
likely values given the data.

Example:

Suppose we are trying to estimate the probability p of a coin landing heads in a biased coin
flip experiment. We perform n trials, and observe k heads. The outcome of each trial can be
modeled as a Bernoulli random variable, and the likelihood function is based on the binomial
distribution.

Let:

• p = the probability of heads (the parameter we want to estimate).
• k = the number of heads observed in n trials.

The likelihood function for the probability p, given the observed data, is:

L(p) = P(Data | p) = C(n, k) · p^k · (1 − p)^(n−k)

The binomial coefficient C(n, k) does not depend on p, so we focus on maximizing:

L(p) = p^k · (1 − p)^(n−k)

To simplify, we take the natural logarithm of the likelihood function, giving us the log-
likelihood:

log L(p) = k·log(p) + (n − k)·log(1 − p)

To find the value of p that maximizes this log-likelihood, we take the derivative with
respect to p and set it to zero:

d/dp [k·log(p) + (n − k)·log(1 − p)] = 0

This simplifies to:

k/p − (n − k)/(1 − p) = 0

Solving for p:

p = k/n

Thus, the maximum likelihood estimate for p is simply the proportion of heads observed,
k/n.

Interpretation:

In this example, MLE gives the most likely estimate for the probability of flipping heads
based on the observed data. If, for instance, you flipped a coin 100 times and got 60 heads,
MLE would estimate that the probability p of heads is 0.60.
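A quick numerical check of this example in Python (a grid search over p; the grid resolution is an arbitrary choice):

import numpy as np

# With 60 heads in 100 flips, the log-likelihood k*log(p) + (n-k)*log(1-p)
# should peak at p = k/n = 0.60.
n, k = 100, 60
p_grid = np.linspace(0.01, 0.99, 9801)
log_lik = k * np.log(p_grid) + (n - k) * np.log(1 - p_grid)
print("MLE on grid:", round(p_grid[np.argmax(log_lik)], 2))   # 0.6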

Application:

MLE is widely used in machine learning and statistics for parameter estimation in models
such as:

• Logistic regression
• Gaussian mixture models
• Hidden Markov models

MLE's strength lies in its ability to provide a consistent and asymptotically unbiased estimate
of parameters, given sufficient data.

PART-III

Only Long Answer Type Questions

Q3) a) Explain the structure of a typical intelligent agent, breaking down its
components such as agent program, percept sequence and actuators. Explain how these
components interact to achieve intelligent behaviour in an agent.

Ans. An intelligent agent is an autonomous entity that perceives its environment through
sensors, takes actions using actuators, and aims to achieve certain goals. The structure of a
typical intelligent agent can be broken down into several key components:

1. Agent Program:
o The agent program is the core logic that controls the agent's behavior. It is a
function that maps from percept sequences (inputs received over time) to actions.
The agent program processes the percepts, makes decisions, and chooses actions to
perform.
o The program can be simple (like a reflex agent) or more complex, involving
reasoning, learning, and planning.
2. Percept Sequence:
o This refers to the complete history of everything the agent has perceived since it
was activated. The percept sequence can vary in complexity depending on the
agent's design.
o It serves as the input to the agent program and helps in decision-making.
o Examples of percepts include sensor data, camera input, sound, temperature, etc.
3. Sensors:
o Sensors are the physical components or mechanisms that gather information from
the environment.
o They can range from cameras, microphones, or any sensory device that helps the
agent perceive its surroundings.
4. Actuators:
o Actuators are mechanisms through which the agent interacts with the environment.
o They allow the agent to perform actions like movement (e.g., motors), manipulation
(e.g., robotic arms), or communication (e.g., speakers or displays).
5. Environment:
o The environment is where the agent operates. It provides the input (percepts) and is
affected by the agent’s actions.

Interaction to Achieve Intelligent Behavior:

• The sensors gather percepts from the environment.
• The percept sequence (all gathered percepts) is provided to the agent program.
• The agent program processes the percept sequence, applies its internal rules (or learning
model), and decides on an action.
• The actuators then execute the chosen action, interacting with and altering the
environment.
• This cycle repeats, enabling the agent to act autonomously, adaptively, and intelligently.
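A minimal sense-think-act skeleton of this cycle in Python (the thermostat-style percepts, the reflex rule, and the actuator effect are all illustrative assumptions, not a specific architecture from the text):

percept_sequence = []

def sense(environment):
    return environment["temperature"]          # stub sensor reading

def agent_program(percepts):
    # Reflex rule on the latest percept: cool down when it gets hot.
    return "turn_on_fan" if percepts[-1] > 30 else "do_nothing"

def act(environment, action):
    if action == "turn_on_fan":
        environment["temperature"] -= 5        # actuator alters the environment

env = {"temperature": 37}
for _ in range(3):                             # the perceive-decide-act cycle
    percept_sequence.append(sense(env))
    action = agent_program(percept_sequence)
    act(env, action)
    print(percept_sequence[-1], "->", action)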

b) Solve the following Constraint Satisfaction Problem (CSP):

CROSS

+ROADS

DANGER

Ans. CSPs involve variables, domains for these variables, and constraints that must be
satisfied. The goal here is to solve the problem involving the words:

• CROSS
• ROADS
• DANGER

This can be modeled as a cryptarithmetic puzzle. In such puzzles, each letter represents a
unique digit (0-9), and the task is to find the digits that satisfy the arithmetic sum. The
equation is:

CROSS+ROADS=DANGER

Step-by-Step Approach:

1. Each letter represents a digit.
o C, R, O, S, A, D, N, G, E are variables.
o The possible domain for each variable is {0, 1, 2, …, 9}.
2. Constraints:
o Each letter corresponds to a unique digit.
o CROSS + ROADS = DANGER must hold.

Assign values to each letter so that the equation holds. Typically, these types of problems are
solved through backtracking or constraint propagation methods.
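A brute-force Python sketch of this search (a real CSP solver would use backtracking with constraint propagation, but plain enumeration works at this scale; it recovers the well-known assignment 96233 + 62513 = 158746):

from itertools import permutations

# Enumerate digit assignments for the 9 distinct letters (10P9 assignments).
letters = "CROSADNGE"
for digits in permutations(range(10), len(letters)):
    a = dict(zip(letters, digits))
    if a["C"] == 0 or a["R"] == 0 or a["D"] == 0:   # no leading zeros
        continue
    def num(word):
        return int("".join(str(a[ch]) for ch in word))
    if num("CROSS") + num("ROADS") == num("DANGER"):
        print(num("CROSS"), "+", num("ROADS"), "=", num("DANGER"))
        break
# -> 96233 + 62513 = 158746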

Q4) Write the following sentences in FOPL.

1. Sentence: Every athlete is not only strong but also intelligent.

FOPL Translation:

∀x(Athlete(x)→(Strong(x)∧Intelligent(x)))

2. Anyone who plays a game or sport is an athlete.
∀x((PlaysGame(x)∨PlaysSport(x))→Athlete(x))
3. Swimming, running and jumping are sports whereas cricket and football are games.
Sport(Swimming)∧Sport(Running)∧Sport(Jumping)∧Game(Cricket)∧Game(Football)
4. Everyone who is both strong and intelligent definitely succeeds in his career.
∀x((Strong(x)∧Intelligent(x))→Succeeds(x))
5. Sachin is a very good runner.
GoodRunner(Sachin)
6. All sportsmen encourage each other in sports.
∀x∀y(Athlete(x)∧Athlete(y)∧x≠y→Encourages(x,y))
7. A player who fails to accept the defeat never wins a game.
∀x(¬AcceptsDefeat(x)→¬Wins(x))
8. All my friends are sportsmen and they all like each other.
∀x(Friend(x)→Athlete(x))∧∀x∀y(Friend(x)∧Friend(y)∧x≠y→Likes(x,y))

Use resolution to prove that Sachin succeeds in his career.

Ans. To prove that Sachin succeeds in his career using resolution, we need to follow a step-
by-step process, which involves:

1. Writing the information in First-Order Predicate Logic (FOPL).
2. Converting the FOPL into Conjunctive Normal Form (CNF).
3. Using the resolution process to deduce that Sachin succeeds in his career.

We are tasked with proving Sachin succeeds in his career, i.e., Succeeds(Sachin).

Step 1: Negate the goal

To use resolution, we first negate the goal that we want to prove. The goal is
Succeeds(Sachin), so we negate it:
¬Succeeds(Sachin)

We will add this negated goal to our set of clauses.

Step 2: Convert to Conjunctive Normal Form (CNF)

Next, we convert each of the FOPL statements into CNF.

1. ∀x(Athlete(x)→(Strong(x)∧Intelligent(x)))

First, rewrite the implication:

∀x(¬Athlete(x)∨(Strong(x)∧Intelligent(x)))

Break the conjunction:

∀x((¬Athlete(x)∨Strong(x))∧(¬Athlete(x)∨Intelligent(x)))

The CNF clauses are:

¬Athlete(x)∨Strong(x)

¬Athlete(x)∨Intelligent(x)

2. ∀x((Strong(x)∧Intelligent(x))→Succeeds(x))

Rewrite the implication:

∀x(¬(Strong(x)∧Intelligent(x))∨Succeeds(x))

Apply De Morgan’s law:

∀x((¬Strong(x)∨¬Intelligent(x))∨Succeeds(x))

The CNF clause is:

¬Strong(x)∨¬Intelligent(x)∨Succeeds(x)

3. GoodRunner(Sachin)

This just remains as is, and it's assumed from the problem that GoodRunner(Sachin)
implies that Sachin is an athlete:

Athlete(Sachin)

4. The negated goal: ¬Succeeds(Sachin)

Step 3: Apply Resolution

Now, we will apply the resolution process to derive a contradiction.

We have the following CNF clauses:

1. ¬Athlete(Sachin)∨Strong(Sachin)
2. ¬Athlete(Sachin)∨Intelligent(Sachin)
3. ¬Strong(Sachin)∨¬Intelligent(Sachin)∨Succeeds(Sachin)
4. Athlete(Sachin) (from GoodRunner(Sachin))
5. ¬Succeeds(Sachin) (negated goal)

Now, perform resolution:

• From clause (4) Athlete(Sachin) and clause (1) ¬Athlete(Sachin)∨Strong(Sachin), we
can resolve to get:

Strong(Sachin)

• From clause (4) Athlete(Sachin) and clause (2) ¬Athlete(Sachin)∨Intelligent(Sachin),
we can resolve to get:

Intelligent(Sachin)

• Now, we have Strong(Sachin) and Intelligent(Sachin).
• Using these, we resolve with clause (3)
¬Strong(Sachin)∨¬Intelligent(Sachin)∨Succeeds(Sachin) to get:

Succeeds(Sachin)

• Finally, Succeeds(Sachin) contradicts the negated goal ¬Succeeds(Sachin).

Conclusion

Since we derived a contradiction, we conclude that Sachin does indeed succeed in his career.

Q5) a) What are the causes of uncertainty in the real world? Explain the need for probabilistic reasoning in AI with justification.

Ans. Causes of Uncertainty in the Real World:

1. Incomplete Information: In many real-world situations, we don’t have complete
knowledge of the environment. For example, in medical diagnosis, a doctor may not
have access to all the information about a patient’s condition.
2. Ambiguity: Multiple interpretations may exist for a given situation or observation.
For example, a symptom like a headache can be caused by a variety of illnesses,
making it ambiguous.
3. Noise and Inaccurate Data: Sensors, human inputs, or environmental factors can
introduce errors. For instance, data collected from IoT sensors may be noisy or
malfunctioning.
4. Complex and Dynamic Environments: The real world is constantly changing,
making it difficult to predict future states accurately. For example, weather patterns or
stock market trends are dynamic and difficult to model precisely.
5. Lack of Knowledge: Some areas of a problem space may be unknown, leading to
uncertainty. For example, in robotics, uncertainty about the environment can arise
when the robot is navigating through an unfamiliar area.

Need for Probabilistic Reasoning in AI:

• Handling Uncertainty: Probabilistic reasoning allows AI systems to deal with
uncertain information by assigning probabilities to events or hypotheses. It provides a
way to reason and make decisions in the face of incomplete or ambiguous data.
• Improved Decision-Making: Instead of making binary decisions (true/false),
probabilistic reasoning provides a graded response (e.g., 70% chance of rain), leading
to more informed decision-making.
• Learning from Data: Probabilistic models allow AI systems to learn patterns from
large datasets and update their knowledge as new data becomes available. This is
particularly useful in fields like machine learning, speech recognition, and computer
vision.

Justification:
In AI systems, especially when working with dynamic environments or uncertain inputs (like
predicting stock market prices or diagnosing diseases), probabilistic reasoning helps make
more robust decisions. It allows AI to quantify the level of certainty in its predictions and can
adjust beliefs when new information is available, leading to better and more adaptive
performance.

b) State Bayes’ theorem in artificial intelligence. Explain briefly how Bayes’ theorem updates the prediction of an event when a new clause (piece of evidence) is added.

Ans. Bayes’ Theorem:

P(H∣E) = P(E∣H) · P(H) / P(E)

Where:

• P(H∣E) is the posterior probability, the probability of hypothesis H given the
evidence E.
• P(E∣H) is the likelihood, the probability of evidence E given that the hypothesis H is
true.
• P(H) is the prior probability of the hypothesis before seeing the evidence.
• P(E) is the marginal likelihood, the probability of the evidence under all possible
hypotheses.

Explanation of Bayes’ Theorem: Bayes’ theorem allows updating the probability of a
hypothesis as new evidence is observed. It helps in refining our beliefs based on additional
information. In AI, this is crucial for systems that need to adapt and improve their predictions
with new data, such as medical diagnosis systems, spam filters, and weather prediction
models.
Bayesian Inference: When a new piece of evidence (or clause) is added, Bayes' theorem is
used to update the prior probability of a hypothesis. The posterior probability becomes the
new updated belief about the hypothesis, and this can be used in further reasoning or
predictions.
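A small numeric sketch of such an update for a spam filter (the prior and likelihood values are illustrative assumptions):

p_spam = 0.30                      # P(H): prior probability a message is spam
p_word_given_spam = 0.80           # P(E|H): the word "free" appears given spam
p_word_given_ham = 0.10            # P(E|not H): "free" appears given not spam

# Marginal likelihood P(E), summed over both hypotheses.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior P(H|E) = P(E|H) * P(H) / P(E): the new, updated belief.
posterior = p_word_given_spam * p_spam / p_word
print(round(posterior, 3))         # 0.774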

Q6) a) What are the two main classes of statistical learning? Explain with examples.
Write the applications of statistical learning.

Ans.

1. Supervised Learning:
o Definition: Supervised learning involves learning a function that maps input
data (features) to a target output (labels) based on a labeled dataset.
o Example: Classifying emails as spam or not spam based on a labeled training
dataset.
o Common Algorithms: Decision trees, support vector machines, neural
networks.
2. Unsupervised Learning:
o Definition: In unsupervised learning, the algorithm learns patterns or
structures from data without explicit labels. It focuses on finding hidden
relationships or clusters in the data.
o Example: Clustering customers into different segments based on purchasing
behavior.
o Common Algorithms: k-means clustering, hierarchical clustering, principal
component analysis (PCA).

Applications of Statistical Learning:

• Fraud Detection: Identifying fraudulent transactions in banking.
• Image Recognition: Classifying objects in images (supervised learning).
• Customer Segmentation: Grouping customers based on their behavior or preferences
(unsupervised learning).
• Predictive Maintenance: Predicting machine failure in industrial applications.
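A short numpy sketch contrasting the two classes on the same toy data (the synthetic clusters, the nearest-centroid classifier, and the hand-rolled k-means loop are illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
# Two toy 2-D clusters (synthetic data, an illustrative assumption).
a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(20, 2))
b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(20, 2))
X = np.vstack([a, b])

# Supervised: labels are known; fit a nearest-centroid classifier to class means.
y = np.array([0] * 20 + [1] * 20)
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print("supervised accuracy:", (pred == y).mean())

# Unsupervised: no labels; a few k-means iterations recover the structure.
centers = np.stack([X[0], X[-1]])            # init from far-apart points
for _ in range(10):
    assign = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[assign == c].mean(axis=0) for c in (0, 1)])
print("k-means centers:\n", centers.round(2))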

b) Explain the architecture of a rule-based expert system with a neat sketch. Describe the functions of each block.

Ans. A rule-based expert system is an AI system that applies rules to input data to draw
conclusions or make decisions. The architecture typically consists of the following
components:

1. Knowledge Base

• Function: Contains the domain-specific knowledge in the form of rules, facts, and heuristics.
Each rule is often in an "IF-THEN" format (e.g., "IF condition THEN conclusion").
• Example: "IF the temperature is above 38°C AND the patient has a cough THEN the diagnosis
is flu."

2. Inference Engine

• Function: The core component that applies logical reasoning to the knowledge base to infer
new facts or solutions. It evaluates which rules apply to the given data and fires the
appropriate rules to reach a conclusion.
• Example: In a medical diagnosis system, the inference engine matches the patient's
symptoms with rules to suggest possible diseases.

3. Working Memory (Fact Base)

• Function: Stores the current state of information, including the input data and any
intermediate conclusions drawn during the reasoning process. It keeps track of facts that are
dynamically updated during the reasoning.
• Example: Patient's symptoms, medical history, and newly inferred data during diagnosis.

4. User Interface

• Function: Facilitates communication between the user and the expert system. Users provide
input (e.g., symptoms), and the system gives output (e.g., diagnosis) based on the reasoning
of the inference engine.
• Example: A doctor entering patient details into the system, and the system outputting
possible diagnoses.

5. Explanation Facility

• Function: Provides explanations for the conclusions drawn by the system. It helps users
understand why a particular decision or conclusion was reached.
• Example: "The system diagnosed flu because the patient had a high fever and cough."

6. Knowledge Acquisition Module

• Function: Allows the system to be updated with new knowledge. Domain experts can add,
modify, or delete rules to keep the system up-to-date with the latest information.
• Example: Adding new rules related to a recently discovered disease.
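A compact Python sketch mapping these blocks onto code, with a knowledge base of IF-THEN rules, a working memory of facts, an inference engine loop, and an explanation trace (the rules are illustrative, not real medical knowledge):

knowledge_base = [
    ({"high_fever", "cough"}, "flu"),
    ({"flu"}, "prescribe_rest"),
]
working_memory = {"high_fever", "cough"}      # facts from the user interface
explanations = []                             # explanation facility

fired = True                                  # inference engine loop
while fired:
    fired = False
    for conditions, conclusion in knowledge_base:
        if conditions <= working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)    # update working memory
            explanations.append(f"{conclusion} because {sorted(conditions)}")
            fired = True

print(working_memory)
for line in explanations:
    print("  ", line)
# -> flu because ['cough', 'high_fever']
# -> prescribe_rest because ['flu']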
