
Unit – 3

SHORT ANSWER QUESTIONS

1. Define the term “Uncertainty” with an example.


The agent can never be completely certain about the state of the external world, since
there is ambiguity and uncertainty. For example, an automated taxi can never be certain
it will reach the airport on time: a sensor may fail, a tyre may burst, or another driver
may behave unpredictably.
Reasons:
- Sensors have limited precision
- Sensors have limited accuracy
- There are hidden variables that sensors can’t see
- The future is unknown, uncertain, i.e. cannot foresee all possible future events
which may happen
Uncertainty in the world model:
- True uncertainty: the rules are probabilistic in nature.
- Laziness: it is too hard to determine exceptionless rules; it takes too much work to
determine all of the relevant factors, and it is too hard to use the enormous rules
that result.
- Theoretical ignorance: we do not know all the rules; the problem domain has no
complete theory.
- Practical ignorance: we do know all the rules, but have not collected all relevant
information for a particular case.
2. What is meant by Acting under Uncertainty in the World Model?
Probabilistic reasoning is a way of knowledge representation where we apply the
concept of probability to indicate the uncertainty in knowledge. In probabilistic
reasoning, we combine probability theory with logic to handle the uncertainty.
We use probability in probabilistic reasoning because it provides a way to handle the
uncertainty that is the result of someone’s laziness and ignorance.
In the real world, there are lots of scenarios, where the certainty of something is not
confirmed.
Need of probabilistic reasoning in AI:
- When there are unpredictable outcomes.
- When the specifications or possibilities of predicates become too large to handle.
- When an unknown error occurs during an experiment.
In probabilistic reasoning, there are two ways to solve problems with uncertain
knowledge:
- Bayes’ rule
- Bayesian Statistics
3. Explain the semantics of Bayesian Network.
There are two ways in which we can understand the semantics of a Bayesian network:
- See the network as a representation of the joint probability distribution.
This is useful in understanding how to construct networks.
- See the network as an encoding of a collection of conditional independence
statements. This is useful in designing inference procedures. The two views
are equivalent.
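The first view can be made concrete with a small sketch: a hypothetical two-node network A → B (numbers assumed for illustration) encodes the joint distribution as the product of each node's conditional given its parents.

```python
# Hypothetical two-node network A -> B: the network encodes
# P(A, B) = P(A) * P(B | A), the product of each node's conditional.
P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}

def joint(a, b):
    # Chain rule over the network structure
    return P_A[a] * P_B_given_A[a][b]

# The four joint entries sum to 1, so this is a valid distribution.
total = sum(joint(a, b) for a in (True, False) for b in (True, False))
print(round(total, 10))  # 1.0
```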

LONG ANSWERS

2. Elaborate the concept of basic probability notion in detail.
Probability can be defined as the chance that an uncertain event will occur. It is the
numerical measure of the likelihood that an event will occur. The value of probability
always remains between 0 and 1.
0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
P(A) = 0 indicates that event A is impossible.
P(A) = 1 indicates that event A is certain.
We can find the probability of an uncertain event by using the formula:

P(A) = Number of favourable outcomes / Total number of outcomes

P(~A) = probability of event A not happening.
P(~A) + P(A) = 1.
Event: each possible outcome of a variable is called an event.
Sample space: the collection of all possible events is called the sample space.
Random variable: random variables are used to represent the events and objects in the
real world.
Prior probability: the prior probability of an event is the probability computed before
observing new information.
Posterior probability: the probability calculated after all evidence or information has
been taken into account. It is a combination of the prior probability and new
information.
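The relationship between prior and posterior can be sketched numerically. The following uses a hypothetical disease-test scenario (all numbers assumed) and combines the prior with new evidence via Bayes' rule.

```python
# Hypothetical disease-test example: the posterior combines the prior
# P(disease) with the new evidence (a positive test) via Bayes' rule.
prior = 0.01            # assumed P(disease)
sensitivity = 0.95      # assumed P(positive | disease)
false_positive = 0.05   # assumed P(positive | no disease)

# Total probability of a positive test
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' rule: P(disease | positive)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # 0.161
```

Even with a fairly accurate test, the posterior stays small because the prior is small, which is exactly the "combination of prior probability and new information" described above.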
3. Write a note on Probability theory, Utility theory and Decision theory.
Probability Theory:
Probability is the numerical measure of the likelihood that an uncertain event will
occur. Its value always remains between 0 and 1: 0 ≤ P(A) ≤ 1, where P(A) = 0
indicates that event A is impossible and P(A) = 1 indicates that event A is certain.
For the complement, P(~A) + P(A) = 1, where P(~A) is the probability of A not
happening.
Event: each possible outcome of a variable is called an event.
Sample space: the collection of all possible events is called the sample space.
Random variable: random variables are used to represent the events and objects in the
real world.
Prior probability: the probability of an event computed before observing new
information.
Posterior probability: the probability calculated after all evidence has been taken
into account; a combination of the prior probability and new information.
Utility Theory:
- Defines axioms on preferences that involve uncertainty, and ways to
manipulate them.
- Uncertainty is modeled through lotteries.
Lottery:
[p : A; (1 – p) : C]
Outcome A occurs with probability p; outcome C occurs with probability (1 – p).
- The following six constraints are known as the axioms of utility theory. The
axioms are the most obvious semantic constraints on preferences with lotteries.
Axioms of the utility theory:
Orderability: given any two states, a rational agent either prefers one of them or
rates the two as equally preferable.
Transitivity: given any three states, if an agent prefers A to B and prefers B to C,
the agent must prefer A to C.
Continuity: if some state B is between A and C in preference, then there is a
probability p for which the rational agent will be indifferent between state B and the
lottery in which A comes with probability p and C with probability (1 – p).
Substitutability: if an agent is indifferent between two lotteries A and B, then the
agent is indifferent between two complex lotteries that are identical except that B is
substituted for A in one of them.
Monotonicity: if an agent prefers A to B, then the agent must prefer the lottery in
which A occurs with a higher probability.
Decomposability: compound lotteries can be reduced to simpler lotteries using the laws
of probability.
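The monotonicity axiom can be checked numerically once outcomes are assigned utilities. The sketch below uses assumed utility values for outcomes A and C and computes the expected utility of the lottery [p : A; (1 – p) : C].

```python
# Sketch with assumed utilities: the agent prefers A to C, so a lottery
# giving A a higher probability should have higher expected utility
# (the monotonicity axiom).
U = {"A": 100, "C": 20}  # hypothetical utility values

def expected_utility(p):
    # Expected utility of the lottery [p : A; (1 - p) : C]
    return p * U["A"] + (1 - p) * U["C"]

print(expected_utility(0.8) > expected_utility(0.5))  # True
```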
Decision Theory:
Decision theory, combined with probability theory, allows us to make optimal decisions in
situations involving uncertainty such as those encountered in pattern recognition.
Classification problems can be broken down into two separate stages, inference stage
and decision stage. The inference stage involves using the training data to learn the
model for the joint distribution p(x, Ck) or equivalently p(x, t), which gives us the most
complete probabilistic description of the situation. In the end, we must decide on an
optimal choice based on our situation. This decision stage is generally very simple,
even trivial, once we have solved the inference problem.
4. List the various Axioms of Probability.
Probability of an event:
- For any event E, 0 ≤ P(E) ≤ 1.
Probability of the sample space:
- For the sample space S, P(S) = 1.
Mutually exclusive events:
- P(A U B) = P(A) + P(B) for mutually exclusive events.
- Mutually exclusive events cannot occur together.
Mutually exhaustive events:
- Mutually exhaustive events together make up everything that can possibly happen
in a random experiment.
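These axioms can be verified on a concrete example. The sketch below checks them for an assumed fair six-sided die, using exact fractions to avoid rounding.

```python
# Checking the listed axioms on a fair six-sided die (assumed example).
from fractions import Fraction

P = {face: Fraction(1, 6) for face in range(1, 7)}

# Axiom 1: 0 <= P(E) <= 1 for every elementary event
assert all(0 <= p <= 1 for p in P.values())

# Axiom 2: P(S) = 1 for the whole sample space
assert sum(P.values()) == 1

# Axiom 3: mutually exclusive events, P(A U B) = P(A) + P(B)
odd, even = {1, 3, 5}, {2, 4, 6}
p_union = sum(P[f] for f in odd | even)
assert p_union == sum(P[f] for f in odd) + sum(P[f] for f in even)

# odd and even are also mutually exhaustive: together they cover S
print(p_union)  # 1
```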
5. Elaborate the features of Exact Inference and Approximate Inference in
Bayesian Network.
Exact inference by enumeration:
A slightly intelligent way to sum out variables from the joint distribution without
actually constructing its explicit representation.
Recursive depth-first enumeration: O(n) space, O(d^n) time.
Algorithm:
function ENUMERATION-ASK(X, e, bn) returns a distribution over X
  inputs: X, the query variable
          e, observed values for variables E
          bn, a Bayesian network with variables {X} U E U Y
  Q(X) <- a distribution over X, initially empty
  for each value xi of X do
    extend e with value xi for X
    Q(xi) <- ENUMERATE-ALL(VARS[bn], e)
  return NORMALIZE(Q(X))

function ENUMERATE-ALL(vars, e) returns a real number
  if EMPTY?(vars) then return 1.0
  Y <- FIRST(vars)
  if Y has value y in e
    then return P(y | Pa(Y)) x ENUMERATE-ALL(REST(vars), e)
    else return ∑y P(y | Pa(Y)) x ENUMERATE-ALL(REST(vars), e_y)
      where e_y is e extended with Y = y
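A minimal sketch of enumeration-style inference follows, on a hypothetical two-node network A → B with assumed numbers. With only two nodes and B observed, there are no hidden variables to sum out, so the query reduces to multiplying the conditionals along each value of A and normalizing, which is exactly what ENUMERATION-ASK does in the general case.

```python
# Enumeration-style inference for P(A | B = b) on an assumed two-node
# network A -> B: compute Q(a) = P(a) * P(b | a), then normalize.
P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}

def query_A_given_B(b_observed):
    # Unnormalized scores for each value of the query variable A
    q = {a: P_A[a] * P_B_given_A[a][b_observed] for a in (True, False)}
    z = sum(q.values())              # normalization constant
    return {a: q[a] / z for a in q}

posterior = query_A_given_B(True)
print(round(posterior[True], 4))  # 0.6585
```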
Exact inference by variable elimination:
- Carry out summations right-to-left, storing intermediate results to avoid
re-computation.
Algorithm:
function ELIMINATION-ASK(X, e, bn) returns a distribution over X
  inputs: X, the query variable
          e, evidence specified as an event
          bn, a belief network specifying joint distribution P(X1, …, Xn)
  factors <- []; vars <- REVERSE(VARS[bn])
  for each var in vars do
    factors <- [MAKE-FACTOR(var, e) | factors]
    if var is a hidden variable then factors <- SUM-OUT(var, factors)
  return NORMALIZE(POINTWISE-PRODUCT(factors))

Approximate inference by stochastic simulation:
Basic idea:
- Draw N samples from a sampling distribution S
- Compute an approximate posterior probability P’
- Show this converges to the true probability P
Outline:
- Sampling from an empty network
- Rejection sampling: reject samples disagreeing with evidence
- Likelihood weighting: use evidence to weight samples
- Markov chain Monte Carlo (MCMC): sample from a stochastic process whose
stationary distribution is the true posterior.
Sampling from an empty network:
function PRIOR-SAMPLE(bn) returns an event sampled from bn
  inputs: bn, a belief network specifying joint distribution P(X1, …, Xn)
  x <- an event with n elements
  for i = 1 to n do
    xi <- a random sample from P(Xi | parents(Xi))
      given the values of parents(Xi) in x
  return x
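A direct transcription of PRIOR-SAMPLE for an assumed two-node network A → B: each variable is sampled given the values already drawn for its parents, in topological order.

```python
# PRIOR-SAMPLE sketch for a hypothetical network A -> B (numbers assumed):
# sample A first, then B conditioned on the sampled value of A.
import random

P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: 0.9, False: 0.2}   # P(B = true | A)

def prior_sample(rng=random):
    a = rng.random() < P_A[True]        # sample A from its prior
    b = rng.random() < P_B_given_A[a]   # sample B given its parent A
    return a, b

random.seed(0)
samples = [prior_sample() for _ in range(10000)]
# The empirical frequency of A = true approaches P(A = true) = 0.3.
print(round(sum(a for a, _ in samples) / len(samples), 2))
```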
Rejection sampling:
function REJECTION-SAMPLING(X, e, bn, N) returns an estimate of P(X | e)
  local variables: N, a vector of counts over X, initially zero
  for j = 1 to N do
    x <- PRIOR-SAMPLE(bn)
    if x is consistent with e then
      N[x] <- N[x] + 1 where x is the value of X in x
  return NORMALIZE(N[X])
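The same two-node setup (assumed numbers) gives a self-contained rejection-sampling sketch for P(A | B = true): draw prior samples and keep only those that agree with the evidence.

```python
# Rejection sampling for P(A | B = true) on an assumed two-node network:
# samples inconsistent with the evidence B = true are thrown away.
import random

P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: 0.9, False: 0.2}   # P(B = true | A)

def prior_sample(rng):
    a = rng.random() < P_A[True]
    b = rng.random() < P_B_given_A[a]
    return a, b

def rejection_sampling(n, rng):
    counts = {True: 0, False: 0}
    for _ in range(n):
        a, b = prior_sample(rng)
        if b:                           # consistent with evidence B = true
            counts[a] += 1
    return counts[True] / (counts[True] + counts[False])

estimate = rejection_sampling(50000, random.Random(0))
print(round(estimate, 2))  # close to the exact answer 0.27/0.41 ≈ 0.66
```

Note the inefficiency the text implies: all samples with B = false are wasted, which is what likelihood weighting avoids.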
Likelihood weighting:
function LIKELIHOOD-WEIGHTING(X, e, bn, N) returns an estimate of P(X | e)
  local variables: W, a vector of weighted counts over X, initially zero
  for j = 1 to N do
    x, w <- WEIGHTED-SAMPLE(bn, e)
    W[x] <- W[x] + w where x is the value of X in x
  return NORMALIZE(W[X])

function WEIGHTED-SAMPLE(bn, e) returns an event and a weight
  x <- an event with n elements; w <- 1
  for i = 1 to n do
    if Xi has a value xi in e
      then w <- w x P(Xi = xi | parents(Xi))
      else xi <- a random sample from P(Xi | parents(Xi))
  return x, w
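The sketch below applies this to the same assumed two-node network for P(A | B = true): the evidence variable B is never sampled; instead each sample is weighted by the likelihood of the evidence given its parents, so no sample is rejected.

```python
# Likelihood weighting for P(A | B = true) on an assumed two-node network:
# evidence is fixed, and each sample carries a weight.
import random

P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: 0.9, False: 0.2}   # P(B = true | A)

def weighted_sample(rng):
    a = rng.random() < P_A[True]        # A is not evidence: sample it
    w = P_B_given_A[a]                  # B = true is evidence: weight by it
    return a, w

def likelihood_weighting(n, rng):
    W = {True: 0.0, False: 0.0}         # weighted counts over A
    for _ in range(n):
        a, w = weighted_sample(rng)
        W[a] += w
    return W[True] / (W[True] + W[False])

estimate = likelihood_weighting(50000, random.Random(1))
print(round(estimate, 2))  # should be near 0.27/0.41 ≈ 0.66
```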
Approximate inference using MCMC:
“State” of network = current assignment to all variables.
Generate the next state by sampling one variable given its Markov blanket.
Sample each variable in turn, keeping the evidence fixed.
Algorithm:
function MCMC-ASK(X, e, bn, N) returns an estimate of P(X | e)
  local variables: N[X], a vector of counts over X, initially zero
                   Z, the non-evidence variables in bn
                   x, the current state of the network, initially copied from e
  initialize x with random values for the variables in Z
  for j = 1 to N do
    for each Zi in Z do
      sample the value of Zi in x from P(Zi | mb(Zi))
        given the values of MB(Zi) in x
      N[x] <- N[x] + 1 where x is the value of X in x
  return NORMALIZE(N[X])
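A minimal Gibbs-style sketch follows, again on the assumed two-node network for P(A | B = true). With only one non-evidence variable, its Markov-blanket distribution P(A | B = true) is itself the exact posterior, so this mainly illustrates the mechanics of the sampling loop rather than the power of MCMC on larger networks.

```python
# MCMC (Gibbs) sketch for P(A | B = true) on an assumed two-node network:
# repeatedly resample the non-evidence variable A from P(A | Markov blanket)
# while keeping the evidence B = true fixed, and count visited states.
import random

P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: 0.9, False: 0.2}   # P(B = true | A)

def p_a_given_blanket():
    # A's Markov blanket is just B; condition on the evidence B = true.
    num = P_A[True] * P_B_given_A[True]
    den = num + P_A[False] * P_B_given_A[False]
    return num / den

def mcmc_ask(n, rng):
    counts = {True: 0, False: 0}
    a = rng.random() < 0.5              # random initial state for A
    p = p_a_given_blanket()
    for _ in range(n):
        a = rng.random() < p            # resample A given its blanket
        counts[a] += 1                  # count the visited state
    return counts[True] / n

est = mcmc_ask(50000, random.Random(2))
print(round(est, 2))  # near 0.27/0.41 ≈ 0.66
```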
6. List the procedure for efficient representation of conditional distributions.
We can represent conditional distributions more efficiently by utilizing deterministic
nodes and noisy-OR relationships.
- A deterministic node has its value specified exactly by the values of its parents.
i.e. the relationship between the parent nodes Canadian, US, Mexican and the child
node NorthAmerican is simply that the child is the disjunction of the parents.
- In a noisy-OR relationship, the entire CPT can be built from individual
inhibition probabilities:
  q_cold = P(~fever | cold, ~flu, ~malaria) = 0.6
  q_flu = P(~fever | ~cold, flu, ~malaria) = 0.2
  q_malaria = P(~fever | ~cold, ~flu, malaria) = 0.1
- The general rule is:
  P(xi | parents(Xi)) = 1 – ∏ qj, where the product runs over the parents Xj
  that are true.
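The general rule can be applied directly to the q values given above to rebuild any CPT entry. The sketch below computes P(~fever | …) as the product of the q values of the causes that are present.

```python
# Rebuilding the fever CPT from the noisy-OR parameters given above:
# q_cold = 0.6, q_flu = 0.2, q_malaria = 0.1.
q = {"cold": 0.6, "flu": 0.2, "malaria": 0.1}

def p_no_fever(*present):
    # Noisy-OR rule: P(~fever | parents) is the product of the q values
    # of the parent causes that are present (1.0 if none are present).
    result = 1.0
    for cause in present:
        result *= q[cause]
    return result

print(round(p_no_fever("cold", "flu"), 4))      # 0.12
print(round(1 - p_no_fever("cold", "flu"), 4))  # P(fever | cold, flu) = 0.88
```

Three parameters thus replace the full eight-row CPT, which is the efficiency gain the question asks about.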
