IAI UNIT 6
Bayesian networks are probabilistic graphical models that encode the dependencies
among a set of variables. Their semantics define what the graph structure and the
probabilities attached to it mean.
Key Concepts
1. Graph Structure:
◦ A directed acyclic graph (DAG) whose nodes are random variables and whose edges
represent direct dependencies; each node stores a conditional distribution given its parents.
Efficient Representation of Conditional Distributions
Conditional distributions describe the probability of one variable given others. In AI, representing
them explicitly can be infeasible due to high dimensionality. Efficient representations make
computation and storage manageable while retaining accuracy.
Challenges
1. Exponential Growth:
◦ For a variable with n parents, its Conditional Probability Table (CPT) grows
exponentially: with Boolean variables it needs 2^n rows.
2. High Dimensionality:
◦ Large systems require massive storage and computation.
3. Redundancy:
◦ Many probabilities are unnecessary or repetitive.
Solutions
1. Factored Representations:
◦ Decompose joint distributions into smaller, local conditional probabilities using
Bayesian Networks.
2. Noisy-OR and Noisy-AND Models:
◦ Model a node with many independent causes using one parameter per parent, so the
representation grows linearly instead of exponentially (see the sketch below).
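A minimal Python sketch of both ideas, using a made-up two-cause network (the variable
names and probabilities are illustrative assumptions, not from a standard example): the
joint distribution is computed as a product of local conditionals, and the Fever node
uses a noisy-OR model with one parameter per parent.

```python
# Tiny illustrative network: Cold and Flu are independent causes of Fever.
P_cold = {True: 0.02, False: 0.98}   # P(Cold)
P_flu = {True: 0.05, False: 0.95}    # P(Flu)

def noisy_or(active_probs, leak=0.01):
    """P(effect = True) under noisy-OR: every active cause independently
    fails to produce the effect with probability 1 - p_i; the leak term
    covers causes outside the model."""
    p_no_effect = 1.0 - leak
    for p in active_probs:
        p_no_effect *= 1.0 - p
    return 1.0 - p_no_effect

# One parameter per parent (linear growth) instead of a full
# 2**n-row CPT (exponential growth in the number of parents n).
CAUSE_STRENGTH = {"cold": 0.4, "flu": 0.8}

def p_fever(cold, flu):
    active = [CAUSE_STRENGTH["cold"]] if cold else []
    if flu:
        active.append(CAUSE_STRENGTH["flu"])
    return noisy_or(active)

def joint(cold, flu, fever):
    """Factored representation: the joint probability is the product
    of each node's local conditional probability."""
    pf = p_fever(cold, flu)
    return P_cold[cold] * P_flu[flu] * (pf if fever else 1.0 - pf)

print(joint(cold=True, flu=False, fever=True))  # ≈ 0.0077
```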
Applications
1. Medical Diagnosis:
◦ Efficiently model symptoms given diseases.
2. Speech Recognition:
◦ Manage large-scale dependencies between phonemes and words.
3. Robotics:
◦ Simplify probabilistic reasoning for sensor data and actions.
4. Natural Language Processing:
◦ Handle complex conditional probabilities in language models.
Advantages
1. Scalability:
◦ Works well for large systems.
2. Efficiency:
◦ Reduces computation and storage costs.
3. Simplified Inference:
◦ Speeds up probabilistic reasoning.
4. Flexibility:
◦ Adapts to both discrete and continuous variables.
Efficient representations are crucial in AI, enabling large-scale systems to operate effectively
without overwhelming computational resources.
Approximate Inference in Bayesian Networks
Why Approximate Inference?
1. Complex Networks:
◦ The size of the state space grows exponentially with the number of variables, making
exact inference impractical.
2. Dynamic Environments:
◦ Real-world applications often require quick responses, which are better served by
approximate methods.
3. Efficiency:
◦ Approximate methods trade a small loss in accuracy for large savings in
computation time and memory.
Methods
1. Sampling-Based Methods:
◦ Estimate probabilities from randomly generated samples; likelihood weighting, for
example, focuses on the most important regions of the state space by weighting
samples based on their likelihood (see the sketch after this list).
a. Gibbs Sampling:
◦ A specific type of sampling where one variable at a time is resampled from its
conditional probability given the current values of the others (see the sketch
after this list).
2. Variational Inference:
◦ Approximate the true distribution with a simpler one to make computations easier.
a. Mean Field Approximation:
◦ Approximates the posterior with a fully factorized distribution, updating one
factor at a time while holding the others fixed; this reduces the problem into
smaller, manageable parts (sketched after this list).
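A minimal likelihood-weighting sketch on a two-node network (Rain → WetGrass; all
names and probabilities here are illustrative assumptions): evidence variables are
clamped rather than sampled, and each sample is weighted by how likely the evidence
is given the sampled values.

```python
import random

# Illustrative two-node network: Rain -> WetGrass.
P_RAIN = 0.2                           # P(Rain = True)
P_WET = {True: 0.9, False: 0.1}        # P(Wet = True | Rain)

def likelihood_weighting(n_samples=100_000):
    """Estimate P(Rain = True | Wet = True). The evidence (Wet) is
    fixed rather than sampled; each sample is weighted by the
    likelihood of the evidence, so probability mass concentrates on
    the regions of the state space consistent with it."""
    num = den = 0.0
    for _ in range(n_samples):
        rain = random.random() < P_RAIN   # sample the non-evidence variable
        weight = P_WET[rain]              # weight = P(Wet = True | rain)
        den += weight
        if rain:
            num += weight
    return num / den

print(likelihood_weighting())  # exact value: 0.18 / 0.26 ≈ 0.692
```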
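A Gibbs-sampling sketch for a slightly larger illustrative network (Rain and Sprinkler
as independent causes of WetGrass, with Wet = True observed; the numbers are made up):
each step resamples one variable from its conditional distribution given the current
value of the other variable and the evidence.

```python
import random

P_RAIN, P_SPRINKLER = 0.2, 0.3
P_WET = {  # P(Wet = True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.85, (False, False): 0.05,
}

def p_true_given_rest(prior, wet_if_true, wet_if_false):
    """Full conditional P(X = True | other variable, Wet = True),
    obtained by normalizing over the two possible states of X."""
    a = prior * wet_if_true
    b = (1.0 - prior) * wet_if_false
    return a / (a + b)

def gibbs(n_iters=100_000, burn_in=1_000):
    rain, sprinkler = True, True          # arbitrary starting state
    rain_hits = 0
    for i in range(n_iters):
        # Resample one variable at a time from its full conditional.
        rain = random.random() < p_true_given_rest(
            P_RAIN, P_WET[(True, sprinkler)], P_WET[(False, sprinkler)])
        sprinkler = random.random() < p_true_given_rest(
            P_SPRINKLER, P_WET[(rain, True)], P_WET[(rain, False)])
        if i >= burn_in:
            rain_hits += rain
    return rain_hits / (n_iters - burn_in)

print(gibbs())  # estimates P(Rain = True | Wet = True)
```

The burn_in parameter discards early samples drawn before the chain stabilizes, which
is the convergence issue noted under Limitations below.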
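And a mean-field sketch on the smallest model where the idea is visible, a pair of
coupled binary variables x1, x2 ∈ {-1, +1} with p(x1, x2) ∝ exp(J·x1·x2 + h1·x1 + h2·x2)
(the parameters are illustrative): the coupled joint is replaced by an independent
approximation q(x1)q(x2), and each factor's mean is updated in turn with the other
held fixed.

```python
import math

# Illustrative pairwise model over x1, x2 in {-1, +1}:
#   p(x1, x2) proportional to exp(J*x1*x2 + h1*x1 + h2*x2)
J, h1, h2 = 1.0, 0.5, -0.3

# Mean-field means m_i = E_q[x_i] under the factorized q(x1)q(x2).
m1 = m2 = 0.0
for _ in range(50):                 # coordinate-ascent updates
    m1 = math.tanh(h1 + J * m2)     # best q1 with q2 held fixed
    m2 = math.tanh(h2 + J * m1)     # best q2 with q1 held fixed

# Marginal approximations q(x_i = +1) = (1 + m_i) / 2.
print((1 + m1) / 2, (1 + m2) / 2)
```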
Applications
1. Medical Diagnosis:
◦ Estimating the probability of diseases given symptoms when exact calculations are
too slow.
2. Robotics:
◦ Making quick decisions about the robot’s position and actions based on noisy sensor
data.
3. Natural Language Processing (NLP):
◦ Approximating probabilities over large vocabularies and long word sequences
where exact inference is infeasible.
Advantages
1. Scalability:
◦ Works efficiently for large networks.
2. Speed:
◦ Delivers results faster than exact inference methods.
3. Flexibility:
◦ Can be adapted for both discrete and continuous variables.
Limitations
1. Accuracy:
◦ May not provide exact results, especially if the network is highly complex.
2. Convergence Issues:
◦ Some methods, like Gibbs sampling, may require many iterations to stabilize.
3. Implementation Complexity:
◦ Advanced methods like variational inference require careful design and tuning.