
Lecture 13: Bayesian networks I

CS221 / Spring 2019 / Charikar & Sadigh


Pac-Man competition

1. (1783) Adam Klein


2. (1769) Jonathan Hollenbeck
3. (1763) Michael Du

CS221 / Spring 2019 / Charikar & Sadigh 1


cs221.stanford.edu/q Question
Earthquakes and burglaries are independent events that will cause an
alarm to go off. Suppose you hear an alarm. How does hearing on the
radio that there’s an earthquake change your beliefs?

it increases the probability of burglary

it decreases the probability of burglary

it does not change the probability of burglary

CS221 / Spring 2019 / Charikar & Sadigh 2


• Situations like these arise all the time in practice: we have a lot of unknowns which are all dependent on
one another. If we obtain evidence on some of these unknowns, how does that affect our belief about the
other unknowns?
• In this lecture, we’ll see how we can perform this type of reasoning under uncertainty in a principled way
using Bayesian networks.
Review: definition

X1 X2 X3

f1 f2 f3 f4

Definition: factor graph

Variables:
X = (X1 , . . . , Xn ), where Xi ∈ Domaini
Factors:
f1 , . . . , fm , with each fj (X) ≥ 0
Weight(x) = ∏_{j=1}^{m} fj(x)

CS221 / Spring 2019 / Charikar & Sadigh 4


• Last week, we talked about factor graphs, which use local factors to specify a weight Weight(x) for each
assignment x in a compact way. The stated objective was to find the maximum weight assignment.
• Given any factor graph, we saw a number of algorithms (backtracking search, beam search, Gibbs sampling,
variable elimination) for (approximately) optimizing this objective.
Review: person tracking

Problem: person tracking

Sensors report positions: 0, 2, 2. Objects don’t move very fast and


sensors are a bit noisy. What path did the person take?
t1 t2
X1 X2 X3
o1 o2 o3

• Variables Xi : location of object at time i


• Transition factors ti (xi , xi+1 ): incorporate physics
• Observation factors oi (xi ): incorporate sensors
[demo: maxVariableElimination()]
What do the factors mean?

CS221 / Spring 2019 / Charikar & Sadigh 6


• As an example, recall the object tracking example. We defined observation factors to capture the fact that
the true object position is close to the sensor reading, and the transition factors to capture the fact that
the true object positions across time are close to each other.
• We just set them rather arbitrarily. Is there a more principled way to think about these factors beyond
being non-negative functions?
Course plan

Search problems
Constraint satisfaction problems
Markov decision processes
Adversarial games Bayesian networks

Reflex States Variables Logic


"Low-level intelligence"    "High-level intelligence"

Machine learning

CS221 / Spring 2019 / Charikar & Sadigh 8


• Much of this class has been on developing modeling frameworks. We started with state-based models,
where we cast real-world problems as finding paths or policies through a state graph.
• Then, we saw that for a large class of problems (such as scheduling), it was much more convenient to use
the language of factor graphs.
• While factor graphs could be reduced to state-based models by fixing the variable ordering, we saw that
they also led to notions of treewidth and variable elimination, which allowed us to understand our models
much better.
• In this lecture, we will introduce another modeling framework, Bayesian networks, which are factor graphs
imbued with the language of probability. This will give probabilistic life to the factors of factor graphs.
Roadmap

Basics

Probabilistic programs

Inference

CS221 / Spring 2019 / Charikar & Sadigh 10


• Bayesian networks were popularized in AI by Judea Pearl in the 1980s, who showed that having a coherent
probabilistic framework is important for reasoning under uncertainty.
• There is a lot to say about Bayesian networks (CS228 is an entire course about them and their cousins,
Markov networks). So we will devote most of this lecture to modeling.
Review: probability (example)
Random variables: sunshine S ∈ {0, 1}, rain R ∈ {0, 1}

Joint distribution P(S, R):

s r P(S = s, R = r)
0 0 0.20
0 1 0.08
1 0 0.70
1 1 0.02

Marginal distribution P(S) (aggregate rows):

s P(S = s)
0 0.28
1 0.72

Conditional distribution P(S | R = 1) (select rows, normalize):

s P(S = s | R = 1)
0 0.8
1 0.2

CS221 / Spring 2019 / Charikar & Sadigh 12


• Before introducing Bayesian networks, let’s review probability (at least the relevant parts). We start with
an example about the weather. Suppose we have two boolean random variables, S and R representing
sunshine and rain. Think of an assignment to (S, R) as representing a possible state of the world.
• The joint distribution specifies a probability for each assignment to (S, R) (state of the world). We
use lowercase letters (e.g., s and r) to denote values and uppercase letters (e.g., S and R) to denote
random variables. Note that P(S = s, R = r) is a probability (a number) while P(S, R) is a distribution (a
table of probabilities). We don’t know what state of the world we’re in, but we know what the probabilities
are (there are no unknown unknowns). The joint distribution contains all the information and acts as the
central source of truth.
• From it, we can derive a marginal distribution over a subset of the variables. We get this by aggregating
the rows that share the same value of S. The interpretation is that we are interested in S. We don’t
explicitly care about R, but we want to take into account R’s effect on S. We say that R is marginalized
out. This is a special form of elimination. In the last lecture, we leveraged max-elimination, where we
took the max over the eliminated variables; here, we are taking a sum.
• The conditional distribution selects rows of the table matching the condition (right of the bar), and then
normalizes the probabilities so that they sum to 1. The interpretation is that we observe the condition
(R = 1) and are interested in S. This is the conditioning that we saw for factor graphs, but where we
normalize the selected rows to get probabilities.
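The following is a minimal sketch (not from the slides) of these operations in Python: it represents the joint distribution P(S, R) from the example as a dictionary and derives the marginal and conditional distributions by aggregation and normalization; the variable names are illustrative.

joint = {  # (s, r) -> P(S = s, R = r)
    (0, 0): 0.20, (0, 1): 0.08,
    (1, 0): 0.70, (1, 1): 0.02,
}

# Marginal P(S): aggregate rows that share the same value of S.
marginal_S = {s: sum(p for (s2, r), p in joint.items() if s2 == s) for s in (0, 1)}
# {0: 0.28, 1: 0.72}

# Conditional P(S | R = 1): select rows with r = 1, then normalize.
selected = {s: joint[(s, 1)] for s in (0, 1)}
Z = sum(selected.values())
conditional = {s: p / Z for s, p in selected.items()}
# {0: 0.8, 1: 0.2}
print(marginal_S, conditional)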
Review: probability (general)
Random variables:
X = (X1 , . . . , Xn ) partitioned into (A, B)
Joint distribution:
P(X) = P(X1 , . . . , Xn )
Marginal distribution:
P(A) = ∑_b P(A, B = b)
Conditional distribution:
P(A | B = b) ∝ P(A, B = b)

CS221 / Spring 2019 / Charikar & Sadigh 14


• In general, we have n random variables X1 , . . . , Xn and let X denote all of them. Suppose X is partitioned
into A and B (e.g., A = (X1 , X3 ) and B = (X2 , X4 , X5 ) if n = 5).
• The marginal and conditional distributions can be defined over the subsets A and B rather than just single
variables.
• Of course, we can also have a hybrid: for n = 3, P(X1 | X3 = 1) marginalizes out X2 and conditions
on X3 = 1.
• It is important to remember the types of objects here: P(A) is a table where rows are possible assignments
to A, whereas P(A = a) is a number representing the probability of the row corresponding to assignment
a.
Probabilistic inference task
Random variables: unknown quantities in the world
X = (S, R, T, A)
In words:
• Observe evidence (traffic in autumn): T = 1, A = 1
• Interested in query (rain?): R
In symbols:

P(R | T = 1, A = 1)

(R is the query; T = 1, A = 1 is the condition; S is marginalized out)

CS221 / Spring 2019 / Charikar & Sadigh 16


• At this point, you should have all the definitions to compute any marginal or conditional distribution given
access to a joint probability distribution. But what is this really doing and how is this useful?
• We should think about each assignment x as a possible state of the world (it’s raining, it’s not sunny,
there is traffic, it is autumn, etc.). Think of the joint distribution as one giant database that contains full
information about how the world works.
• In practice, we’d like to ask questions by querying this probabilistic database. First, we observe some
evidence, which effectively fixes some of the variables. Second, we are interested in the distribution of
some set of variables which we didn’t observe. This forms a query, and the process of answering this query
(computing the desired distribution) is called probabilistic inference.
Challenges
Modeling: How to specify a joint distribution P(X1 , . . . , Xn ) com-
pactly?

Bayesian networks (factor graphs for probability distributions)

Inference: How to compute queries P(R | T = 1, A = 1) efficiently?

Variable elimination, Gibbs sampling, particle filtering (analogue of


algorithms for finding maximum weight assignment)

CS221 / Spring 2019 / Charikar & Sadigh 18


• In general, a joint distribution over n variables has size exponential in n. From a modeling perspective,
how do we even specify an object that large? Here, we will see that Bayesian networks, based on factor
graphs, offer an elegant solution.
• From an algorithms perspective, there is still the question of how we perform probabilistic inference ef-
ficiently. In the next lecture, we will see how we can adapt all of the algorithms that we saw before for
computing maximum weight assignments in factor graphs, essentially by replacing a max with a sum.
• The two desiderata are rather synergistic, and it is the same property — conditional independence — that
makes both possible.
Bayesian network (alarm)
P(B = b, E = e, A = a) = p(b) p(e) p(a | b, e)   (by definition)

B → A ← E

b   p(b)        e   p(e)
1   ε           1   ε
0   1 − ε       0   1 − ε

b  e  a  p(a | b, e)
0  0  0  1
0  0  1  0
0  1  0  0
0  1  1  1
1  0  0  0
1  0  1  1
1  1  0  0
1  1  1  1

p(b) = ε · [b = 1] + (1 − ε) · [b = 0]
p(e) = ε · [e = 1] + (1 − ε) · [e = 0]
p(a | b, e) = [a = (b ∨ e)]
CS221 / Spring 2019 / Charikar & Sadigh 20
• Let us try to model the situation. First, we establish that there are three variables, B (burglary), E
(earthquake), and A (alarm). Next, we connect up the variables to model the dependencies.
• Unlike in factor graphs, these dependencies are represented as directed edges. You can intuitively think
about the directionality as suggesting causality, though what this actually means is a deeper question and
beyond the scope of this class.
• For each variable, we specify a local conditional distribution (a factor) of that variable given its parent
variables. In this example, B and E have no parents while A has two parents, B and E. This local
conditional distribution is what governs how a variable is generated.
• We are writing the local conditional distributions using p, while P is reserved for the joint distribution over
all random variables, which is defined as the product.
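As a concrete illustration (a sketch under assumed names, not the course code), the joint distribution of the alarm network can be built in Python directly from its local conditional distributions; local normalization guarantees that the product sums to 1.

import itertools

eps = 0.05
def p_b(b): return eps if b == 1 else 1 - eps
def p_e(e): return eps if e == 1 else 1 - eps
def p_a(a, b, e): return 1.0 if a == (b | e) else 0.0  # a = b OR e, deterministic

joint = {(b, e, a): p_b(b) * p_e(e) * p_a(a, b, e)
         for b, e, a in itertools.product((0, 1), repeat=3)}
assert abs(sum(joint.values()) - 1.0) < 1e-9  # locally normalized factors => a valid distribution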
Bayesian network (alarm)
[Figure: left, the Bayesian network B → A ← E; right, the corresponding factor graph with factors p(b) on B, p(e) on E, and p(a | b, e) connecting A, B, E.]

P(B = b, E = e, A = a) = p(b)p(e)p(a | b, e)

Bayesian networks are a special case of factor graphs!

CS221 / Spring 2019 / Charikar & Sadigh 22


• Note that the local conditional distributions (e.g., p(a | b, e)) are non-negative so they can be thought
of simply as factors of a factor graph. The joint probability of an assignment is then the weight of that
assignment.
• In this light, Bayesian networks are just a type of factor graphs, but with additional structure and inter-
pretation.
Probabilistic inference (alarm)
Joint distribution:
b e a P(B = b, E = e, A = a)
0 0 0 (1 − ε)²
0 0 1 0
0 1 0 0
0 1 1 ε(1 − ε)
1 0 0 0
1 0 1 ε(1 − ε)
1 1 0 0
1 1 1 ε²

Queries: P(B)? P(B | A = 1)? P(B | A = 1, E = 1)?

[demo: ε = 0.05]

CS221 / Spring 2019 / Charikar & Sadigh 24


• Bayesian networks can be used to capture common reasoning patterns under uncertainty (which was one
of their first applications).
• Consider the following model: Suppose the probability of an earthquake is ε and the probability of a
burglary is ε and both are independent. Suppose that the alarm always goes off if either an earthquake or
a burglary occurs.
• In the prior, we can eliminate A and E and get that the probability of the burglary is ε.
• Now suppose we hear the alarm A = 1. The probability of burglary is now P(B = 1 | A = 1) = 1/(2 − ε).
• Now suppose that you hear on the radio that there was an earthquake (E = 1). Then the probability of
burglary goes down to P(B = 1 | A = 1, E = 1) = ε again (the sketch below reproduces these numbers for ε = 0.05).
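Here is a small sketch (assumed code, using the demo's ε = 0.05) that reproduces these numbers by brute force: conditioning amounts to selecting the assignments consistent with the evidence and normalizing.

import itertools

eps = 0.05
def prob(b, e, a):  # p(b) p(e) p(a | b, e) with A = B OR E
    return (eps if b else 1 - eps) * (eps if e else 1 - eps) * (1.0 if a == (b | e) else 0.0)

def query_B(evidence):
    # Return P(B = 1 | evidence); evidence maps variable names to observed values.
    weights = {0: 0.0, 1: 0.0}
    for b, e, a in itertools.product((0, 1), repeat=3):
        if all({'B': b, 'E': e, 'A': a}[k] == v for k, v in evidence.items()):
            weights[b] += prob(b, e, a)
    return weights[1] / (weights[0] + weights[1])

print(query_B({}))                # prior: 0.05
print(query_B({'A': 1}))          # 1 / (2 - eps), about 0.513
print(query_B({'A': 1, 'E': 1}))  # back down to 0.05: E = 1 explains away B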
Explaining away

B E

Key idea: explaining away

Suppose two causes positively influence an effect. Conditioned on


the effect, conditioning on one cause reduces the probability of the
other cause.

CS221 / Spring 2019 / Charikar & Sadigh 26


• This last phenomenon has a special name: explaining away. Suppose we have two cause variables B
and E, which are parents of an effect variable A. Assume the causes influence the effect positively (e.g.,
through the OR function).
• Conditioned on the effect A = 1, there is some posterior probability of B. Conditioned on the effect A = 1
and the other cause E = 1, the new posterior probability is reduced. We then say that the other cause E
has explained away B.
Definition

Definition: Bayesian network

Let X = (X1 , . . . , Xn ) be random variables.


A Bayesian network is a directed acyclic graph (DAG) that spec-
ifies a joint distribution over X as a product of local conditional
distributions, one for each node:
P(X1 = x1 , . . . , Xn = xn ) = ∏_{i=1}^{n} p(xi | xParents(i))   (by definition)

CS221 / Spring 2019 / Charikar & Sadigh 28


• Without further ado, let’s define a Bayesian network formally. A Bayesian network defines a large joint
distribution in a modular way, one variable at a time.
• First, the graph structure captures what other variables a given variable depends on.
• Second, we specify a local conditional distribution for variable Xi , which is a function that specifies a
distribution over Xi given an assignment xParents(i) to its parents in the graph (possibly none). The joint
distribution is simply defined to be the product of all of the local conditional distributions together.
• Notationally, we use lowercase p (in p(xi | xParents(i) )) to denote a local conditional distribution, and
uppercase P to denote the induced joint distribution over all variables. While the two can coincide, it is
important to keep these things separate in your head!
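A minimal generic sketch (an assumed representation, not a library API): a Bayesian network as a list of (variable, parents, local conditional distribution) triples, with the joint probability of a full assignment computed as the product in the definition above.

def joint_probability(network, assignment):
    # network: list of (name, parent_names, cpd), where cpd(value, parent_values) -> probability
    # assignment: dict mapping every variable name to a value
    prob = 1.0
    for name, parents, cpd in network:
        parent_values = tuple(assignment[p] for p in parents)
        prob *= cpd(assignment[name], parent_values)
    return prob

eps = 0.05
alarm_net = [
    ('B', (), lambda b, _: eps if b == 1 else 1 - eps),
    ('E', (), lambda e, _: eps if e == 1 else 1 - eps),
    ('A', ('B', 'E'), lambda a, pa: 1.0 if a == (pa[0] | pa[1]) else 0.0),
]
print(joint_probability(alarm_net, {'B': 1, 'E': 0, 'A': 1}))  # eps * (1 - eps)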
Special properties
Key difference from general factor graphs:

Key idea: locally normalized

All factors (local conditional distributions) satisfy:

∑_{xi} p(xi | xParents(i)) = 1   for each xParents(i)

Implications:
• Consistency of sub-Bayesian networks
• Consistency of conditional distributions

CS221 / Spring 2019 / Charikar & Sadigh 30


• But Bayesian networks are more than that. The key property is that all the local conditional distributions,
being distributions, sum to 1 over the first argument.
• This simple property results in two important properties of Bayesian networks that are not present in
general factor graphs.
Consistency of sub-Bayesian networks

B E

A short calculation:
P(B = b, E = e) = ∑_a P(B = b, E = e, A = a)
= ∑_a p(b) p(e) p(a | b, e)
= p(b) p(e) ∑_a p(a | b, e)
= p(b) p(e)

CS221 / Spring 2019 / Charikar & Sadigh 32


• First, let’s see what happens when we marginalize A (by performing algebra on the joint probability). We
see that we end up with p(b)p(e), which actually defines a sub-Bayesian network with one fewer variable,
and the same local conditional probabilities.
• If one marginalizes out all the variables, then one gets 1, which verifies that a Bayesian network actually
defines a probability distribution.
• The philosophical ramification of this property is that there could be many other variables that depend
on the variables you’ve modeled (earthquakes also impact traffic) but as long as you don’t observe them,
they can be ignored mathematically (ignorance is bliss). Note that this doesn’t mean that knowing about
the other things isn’t useful.
Consistency of sub-Bayesian networks

Key idea: marginalization

Marginalization of a leaf node yields a Bayesian network without


the node.

[Figure: left, the network B → A ← E with factors p(b), p(e), p(a | b, e); right, after marginalizing out the leaf A, the network over B and E with factors p(b), p(e) only.]
CS221 / Spring 2019 / Charikar & Sadigh 34
• This property is very attractive, because it means that whenever we have a large Bayesian network, where
we don’t care about some of the variables, we can just remove them (graph operations), and this encodes
the same distribution as we would have gotten from marginalizing out variables (algebraic operations).
The former, being visual, can be more intuitive.
Consistency of local conditionals

Key idea: local conditional distributions

Local conditional distributions (factors) are the true conditional


distributions.

A B C

D E

F G H

P(D = d | A = a, B = b) = p(d | a, b)
(the left-hand side is obtained from probabilistic inference; the right-hand side holds by definition)

CS221 / Spring 2019 / Charikar & Sadigh 36


• Note that the local conditional distributions p(d | a, b) are simply defined by the user. On the other hand,
the quantity P(D = d | A = a, B = b) is not defined, but follows from probabilistic inference on the joint
distribution defined by the Bayesian network.
• It’s not clear a priori that the two have anything to do with each other. The second special property that
we get from using Bayesian networks is that the two are actually the same.
• To show this, we can remove all the descendants of D by the consistency of sub-Bayesian networks,
leaving us with the Bayesian network P(A = a, B = b, D = d) = p(a)p(b)p(d | a, b). By the chain rule,
P(A = a, B = b, D = d) = P(A = a, B = b)P(D = d | A = a, B = b). If we marginalize out D, then
we are left with the Bayesian network P(A = a, B = b) = p(a)p(b). From this, we can conclude that
P(D = d | A = a, B = b) = p(d | a, b).
• This argument generalizes to any Bayesian network and local conditional distribution.
Medical diagnosis

Problem: cold or allergies?

You are coughing and have itchy eyes. Do you have a cold or
allergies?

[demo]
Variables: Cold, Allergies, Cough, Itchy eyes
Bayesian network: Factor graph:

C A C A

H I H I

CS221 / Spring 2019 / Charikar & Sadigh 38


• Here is another example (a cartoon version of Bayesian networks for medical diagnosis). Allergies and cold
are the two hidden variables that we’d like to infer (we have some prior over these two). Cough and itchy
eyes are symptoms that we observe as evidence, and we have some likelihood model of these symptoms
given the hidden causes.
• We can use the demo to infer the hidden state given the evidence.
Summary so far

B E

• Set of random variables capture state of world

• Local conditional distributions ⇒ joint distribution

• Probabilistic inference task: ask questions

• Captures reasoning patterns (e.g., explaining away)

• Factor graph interpretation (for inference later)

CS221 / Spring 2019 / Charikar & Sadigh 40


Roadmap

Basics

Probabilistic programs

Inference

CS221 / Spring 2019 / Charikar & Sadigh 41


Probabilistic programs
Goal: make it easier to write down complex Bayesian networks

Key idea: probabilistic program

Write a program to generate an assignment (rather than specifying


the probability of an assignment).

CS221 / Spring 2019 / Charikar & Sadigh 42


Probabilistic programs
B E

Probabilistic program: alarm

B ∼ Bernoulli(ε)
E ∼ Bernoulli(ε)
A=B∨E

Key idea: probabilistic program

A randomized program that sets the random variables.

import random

def Bernoulli(epsilon):
    return random.random() < epsilon

CS221 / Spring 2019 / Charikar & Sadigh 43


• There is another way of writing down Bayesian networks other than graphically or mathematically, and that
is as a probabilistic program. A probabilistic program is a randomized program that invokes a random
number generator to make random choices. Executing this program will assign values to a collection of
random variables X1 , . . . , Xn ; that is, generating an assignment.
• The probability (e.g., fraction of times) that the program generates that assignment is exactly the proba-
bility under the joint distribution specified by that program.
• We should think of this program as outputting the state of the world (or at least the part of the world
that we care about for our task).
• Note that the probabilistic program is only used to define joint distributions. We usually wouldn’t actually
run this program directly.
• For example, we show the probabilistic program for alarm. B ∼ Bernoulli(ε) simply means that P(B =
1) = ε. Here, we can think about Bernoulli(ε) as a randomized function (random() < epsilon) that
returns 1 with probability ε and 0 with probability 1 − ε; a runnable version of the full program is sketched below.
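A runnable version of the alarm program might look as follows (a sketch; sample_alarm and the sampling loop are illustrative additions): each call generates one assignment (b, e, a), and the fraction of runs producing a given assignment approaches its probability under the joint distribution.

import random

def bernoulli(epsilon):
    return 1 if random.random() < epsilon else 0

def sample_alarm(epsilon=0.05):
    b = bernoulli(epsilon)
    e = bernoulli(epsilon)
    a = b | e  # A = B OR E, deterministic given its parents
    return b, e, a

samples = [sample_alarm() for _ in range(100000)]
print(sum(b for b, e, a in samples) / len(samples))  # roughly 0.05 = P(B = 1)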
Probabilistic program: example

Probabilistic program: object tracking

X0 = (0, 0)
For each time step i = 1, . . . , n:
With probability α:
Xi = Xi−1 + (1, 0) [go right]
With probability 1 − α:
Xi = Xi−1 + (0, 1) [go down]

Bayesian network structure:

X1 X2 X3 X4 X5

CS221 / Spring 2019 / Charikar & Sadigh 45


• This is a more interesting generative model since it has a for loop, which allows us to determine the
distribution over a templatized set of n variables rather than just 3 or 4.
• In these cases, variables are generally indexed by something like time or location.
• We can also draw the Bayesian network. Each Xi only depends on Xi−1 . This is a chain-structured
Bayesian network, called a Markov model.
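A minimal sketch (assumed parameter values) of this program as Python: running it once generates one trajectory X1, . . . , Xn.

import random

def sample_trajectory(n=10, alpha=0.5):
    x = (0, 0)  # X_0
    trajectory = []
    for i in range(1, n + 1):
        if random.random() < alpha:
            x = (x[0] + 1, x[1])  # go right
        else:
            x = (x[0], x[1] + 1)  # go down
        trajectory.append(x)
    return trajectory

print(sample_trajectory())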
Probabilistic program: example

(press ctrl-enter to save)


Run

CS221 / Spring 2019 / Charikar & Sadigh 47


• Try clicking [Run]. Each time a new assignment of (X1 , . . . , Xn ) is chosen.
Probabilistic inference: example

Query: what are possible trajectories given evidence X10 = (8, 2)?

(press ctrl-enter to save)


Run

CS221 / Spring 2019 / Charikar & Sadigh 49


• This program only serves for defining the distribution. Now we can query that distribution and ask the
question: suppose the program set X10 = (8, 2); what is the distribution over the other variables?
• In the demo, note that all trajectories are constrained to go through (8, 2) at time step 10.
Application: language modeling

Probabilistic program: Markov model

For each position i = 1, 2, . . . , n:


Generate word Xi ∼ p(Xi | Xi−1 )

Wreck a nice beach

X1 X2 X3 X4

CS221 / Spring 2019 / Charikar & Sadigh 51


• In the context of natural language, a Markov model is known as a bigram model. A higher-order general-
ization of bigram models are n-gram models (more generally known as higher-order Markov models).
• Language models are often used to measure the "goodness" of a sentence, mostly within the context of a
larger system such as speech recognition or machine translation.
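A toy sketch of sampling from a bigram model (the vocabulary and probabilities below are made up, not from the lecture): each word is drawn from p(Xi | Xi−1) until an end-of-sentence marker is reached.

import random

bigram = {  # p(next word | previous word); each row sums to 1
    '<s>':   {'wreck': 0.5, 'a': 0.5},
    'wreck': {'a': 1.0},
    'a':     {'nice': 0.6, 'beach': 0.4},
    'nice':  {'beach': 1.0},
    'beach': {'</s>': 1.0},
}

def sample_sentence():
    word, sentence = '<s>', []
    while word != '</s>':
        word = random.choices(list(bigram[word]), weights=bigram[word].values())[0]
        if word != '</s>':
            sentence.append(word)
    return ' '.join(sentence)

print(sample_sentence())  # e.g. "wreck a nice beach"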
Application: object tracking

Probabilistic program: hidden Markov model (HMM)

For each time step t = 1, . . . , T :


Generate object location Ht ∼ p(Ht | Ht−1 )
Generate sensor reading Et ∼ p(Et | Ht )
H1 H2 H3 H4 H5
E1 E2 E3 E4 E5
[Figure also shows example values such as (3,1), (3,2) and 4, 5 attached to the nodes.]

Applications: speech recognition, information extraction, gene finding

CS221 / Spring 2019 / Charikar & Sadigh 53


• Markov models are limiting because they do not have a way of talking about noisy evidence (sensor
readings). They can be extended quite easily to hidden Markov models, which introduce a parallel sequence
of observation variables.
• For example, in object tracking, Ht denotes the true object location, and Et denotes the noisy sensor
reading, which might be (i) the location Ht plus noise, or (ii) the distance from Ht plus noise, depending
on the type of sensor.
• In speech recognition, Ht would be the phonemes or words and Et would be the raw acoustic signal.
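A sketch of the HMM generative process (the transition and noise models here are assumed for illustration: the hidden location takes a −1/0/+1 step and the sensor reports the location plus −1/0/+1 noise, both clamped to a small grid).

import random

def sample_hmm(T=5, grid=10):
    hidden, readings = [], []
    h = random.randrange(grid)  # H_1 drawn uniformly
    for t in range(T):
        if t > 0:  # H_t ~ p(H_t | H_{t-1}): random walk step
            h = min(grid - 1, max(0, h + random.choice([-1, 0, 1])))
        e = min(grid - 1, max(0, h + random.choice([-1, 0, 1])))  # E_t ~ p(E_t | H_t): noisy reading
        hidden.append(h)
        readings.append(e)
    return hidden, readings

print(sample_hmm())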
Application: multiple object tracking

Probabilistic program: factorial HMM

For each time step t = 1, . . . , T :


For each object o ∈ {a, b}:
Generate location H_t^o ∼ p(H_t^o | H_{t−1}^o)
Generate sensor reading E_t ∼ p(E_t | H_t^a, H_t^b)

H1a H2a H3a H4a

H1b H2b H3b H4b

E1 E2 E3 E4

CS221 / Spring 2019 / Charikar & Sadigh 55


• An extension of an HMM, called a factorial HMM, can be used to track multiple objects. We assume
that each object moves independently according to a Markov model, but that we get one sensor reading
which is some noisy aggregated function of the true positions.
• For example, Et could be the set {Hta , Htb }, which reveals where the objects are, but doesn’t say which
object is responsible for which element in the set.
Application: document classification
Question: given a text document, what is it about?

Probabilistic program: naive Bayes

Generate label Y ∼ p(Y )


For each position i = 1, . . . , L:
Generate word Wi ∼ p(Wi | Y )

travel

W1 W2 ... WL
beach Paris
CS221 / Spring 2019 / Charikar & Sadigh 57
• Naive Bayes is a very simple model which can be used for classification. For document classification, we
generate a label and all the words in the document given that label.
• Note that the words are all generated independently, which is not a very realistic model of language, but
naive Bayes models are surprisingly effective for tasks such as document classification.
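A toy sketch (assumed label prior and word distributions) of the naive Bayes story, together with the posterior over the label given the observed words, computed with Bayes' rule.

import math, random

p_label = {'travel': 0.6, 'sports': 0.4}
p_word = {  # p(word | label)
    'travel': {'beach': 0.5, 'paris': 0.3, 'goal': 0.2},
    'sports': {'beach': 0.1, 'paris': 0.1, 'goal': 0.8},
}

def generate(L=4):  # the generative story: label first, then words independently given the label
    y = random.choices(list(p_label), weights=p_label.values())[0]
    return y, [random.choices(list(p_word[y]), weights=p_word[y].values())[0] for _ in range(L)]

def posterior(words):  # P(Y | W_1, ..., W_L) by enumeration and normalization
    scores = {y: p_label[y] * math.prod(p_word[y][w] for w in words) for y in p_label}
    Z = sum(scores.values())
    return {y: s / Z for y, s in scores.items()}

print(generate())
print(posterior(['beach', 'paris']))  # heavily favors 'travel'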
Application: topic modeling
Question: given a text document, what topics is it about?

Probabilistic program: latent Dirichlet allocation

Generate a distribution over topics α ∈ R^K


For each position i = 1, . . . , L:
Generate a topic Zi ∼ p(Zi | α)
Generate a word Wi ∼ p(Wi | Zi )

α {travel:0.8,Europe:0.2}

travel Z1 Z2 ... ZL Europe

beach W1 W2 ... WL Euro


CS221 / Spring 2019 / Charikar & Sadigh 59
• A more sophisticated model of text is latent Dirichlet Allocation (LDA), which allows a document to not
just be about one topic (which was true in naive Bayes), but about multiple topics.
• Here, the distribution over topics α is chosen per document from a Dirichlet distribution. Note that α is a
continuous-valued random variable. For each position, we choose a topic according to that per-document
distribution and generate a word given that topic.
• Latent Dirichlet allocation (LDA) has been very influential for modeling not only text but images, videos,
music, etc.; any sort of data with hidden structure. It is very related to matrix factorization.
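A sketch of the LDA generative story for a single document (the topics, word distributions, and concentration parameters are invented for illustration); the Dirichlet draw is obtained by normalizing independent Gamma samples.

import random

topics = ['travel', 'Europe']
p_word = {'travel': {'beach': 0.7, 'Euro': 0.3}, 'Europe': {'beach': 0.2, 'Euro': 0.8}}

def generate_document(L=6, concentration=(1.0, 1.0)):
    gammas = [random.gammavariate(c, 1.0) for c in concentration]
    alpha = [g / sum(gammas) for g in gammas]  # per-document distribution over topics
    words = []
    for _ in range(L):
        z = random.choices(topics, weights=alpha)[0]  # Z_i ~ p(Z_i | alpha)
        words.append(random.choices(list(p_word[z]), weights=p_word[z].values())[0])  # W_i ~ p(W_i | Z_i)
    return alpha, words

print(generate_document())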
Application: medical diagnostics
Question: If a patient has a cough and fever, what disease(s) does
he/she have?
1 Pneumonia 0 Cold 0 Malaria

1 Fever 1 Cough 0 Vomit

Probabilistic program: diseases and symptoms

For each disease i = 1, . . . , m:


Generate activity of disease Di ∼ p(Di )
For each symptom j = 1, . . . , n:
Generate activity of symptom Sj ∼ p(Sj | D1:m )

CS221 / Spring 2019 / Charikar & Sadigh 61


• We already saw a special case of this model. In general, we would like to diagnose many diseases and
might have measured many symptoms and vitals.
Application: social network analysis
Question: Given a social network (graph over n people), what types of
people are there?
politician H1

0 E12 E13 0

scientist H2 E23 H3 scientist

Probabilistic program: stochastic block model

For each person i = 1, . . . , n:


Generate person type Hi ∼ p(Hi )
For each pair of people i ≠ j:
Generate connectedness Eij ∼ p(Eij | Hi , Hj )

CS221 / Spring 2019 / Charikar & Sadigh 63


• One can also model graphs such as social networks. A very naive-Bayes-like model is that each node
(person) has a ”type”. Whether two people interact with each other is determined solely by their types
and random chance.
• Note: there are extensions called mixed membership models which, like LDA, allow each person to have
multiple types.
• In summary, it is quite easy to come up with probabilistic programs that tell a story of how the world works
for the domain of interest. These probabilistic programs define joint distributions over assignments to a
collection of variables. Usually, these programs describe how some collection of hidden variables H that
you’re interested in behave, and then describe the generation of the evidence E that you see conditioned
on H. After defining the model, one can do probabilistic inference to compute P(H | E = e).
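A minimal sketch (assumed types and probabilities) of the stochastic block model's generative story: one type per person, then one edge variable per pair given the two types.

import itertools, random

types = ['politician', 'scientist']
p_type = [0.3, 0.7]
p_edge = {  # probability of a connection given the pair of types
    ('politician', 'politician'): 0.8,
    ('politician', 'scientist'): 0.1,
    ('scientist', 'politician'): 0.1,
    ('scientist', 'scientist'): 0.6,
}

def sample_network(n=4):
    H = [random.choices(types, weights=p_type)[0] for _ in range(n)]  # H_i ~ p(H_i)
    E = {(i, j): int(random.random() < p_edge[(H[i], H[j])])          # E_ij ~ p(E_ij | H_i, H_j)
         for i, j in itertools.combinations(range(n), 2)}
    return H, E

print(sample_network())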
Roadmap

Basics

Probabilistic programs

Inference

CS221 / Spring 2019 / Charikar & Sadigh 65


Review: probabilistic inference
Input
Bayesian network: P(X1 = x1 , . . . , Xn = xn )
Evidence: E = e where E ⊆ X is subset of variables
Query: Q ⊆ X is subset of variables

Output
P(Q = q | E = e) for all values q

Example: if coughing and itchy eyes, have a cold?


P(C | H = 1, I = 1)
CS221 / Spring 2019 / Charikar & Sadigh 66
Example: Markov model
X1 X2 X3 X4

Query: P(X3 = x3 | X2 = 5) for all x3


Tedious way:
P(X3 = x3 | X2 = 5) ∝ ∑_{x1, x4} p(x1) p(x2 = 5 | x1) p(x3 | x2 = 5) p(x4 | x3)

∝ (∑_{x1} p(x1) p(x2 = 5 | x1)) p(x3 | x2 = 5)

∝ p(x3 | x2 = 5)

Fast way:
[whiteboard]
CS221 / Spring 2019 / Charikar & Sadigh 67
• Let’s first compute the query the old-fashioned way by grinding through the algebra. Then we’ll see a
faster, more graphical way, of doing this.
• We start by transforming the query into an expression that references the joint distribution, which allows
us to rewrite as the product of the local conditional probabilities. To do this, we invoke the definition of
marginal and conditional probability.
• One convenient shortcut we will take is make use of the proportional-to (∝) relation. Note that in the end,
we need to construct a distribution over X3 . This means that any quantity (such as P(X2 = 5)) which
doesn’t depend on X3 can be folded into the proportionality constant. If you don’t believe this, keep it
around to convince yourself that it doesn’t matter. Using ∝ can save you a lot of work.
• Next, we do some algebra to push the summations inside. We notice that ∑_{x4} p(x4 | x3) = 1 because
it’s a local conditional distribution. The factor ∑_{x1} p(x1) p(x2 = 5 | x1) can also be folded into the
proportionality constant.
• The final result is p(x3 | x2 = 5), which matches the query as we expected by the consistency of local
conditional distributions.
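As a sanity check, the same conclusion can be verified numerically by brute-force enumeration (the initial and transition probabilities below are made up): conditioning on X2 = 5 and summing out X1 and X4 leaves exactly the transition row p(x3 | x2 = 5).

import itertools
from collections import defaultdict

states = [4, 5, 6]
p_init = {4: 0.2, 5: 0.5, 6: 0.3}
p_trans = {4: {4: 0.6, 5: 0.3, 6: 0.1},
           5: {4: 0.2, 5: 0.5, 6: 0.3},
           6: {4: 0.1, 5: 0.3, 6: 0.6}}

weights = defaultdict(float)
for x1, x2, x3, x4 in itertools.product(states, repeat=4):
    if x2 != 5:
        continue  # keep only assignments consistent with the evidence X2 = 5
    weights[x3] += p_init[x1] * p_trans[x1][x2] * p_trans[x2][x3] * p_trans[x3][x4]

Z = sum(weights.values())
print({x3: w / Z for x3, w in weights.items()})  # matches p_trans[5] = {4: 0.2, 5: 0.5, 6: 0.3}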
General strategy
Query:

P(Q | E = e)

Algorithm: general probabilistic inference strategy


• Remove (marginalize) variables that are not ancestors of Q
or E.
• Convert Bayesian network to factor graph.
• Condition on E = e (shade nodes + disconnect).
• Remove (marginalize) nodes disconnected from Q.
• Run probabilistic inference algorithm (manual, variable elimination, Gibbs sampling, particle filtering).

CS221 / Spring 2019 / Charikar & Sadigh 69


• Our goal is to compute the conditional distribution over the query variables Q ⊆ H given evidence E = e.
We can do this with our bare hands by chugging through all the algebra starting with the definition of
marginal and conditional probability, but there is an easier way to do this that exploits the structure of the
Bayesian network.
• Step 1: remove variables which are not ancestors of Q or E. Intuitively, these don’t have an influence on Q
and E, so they can be removed. Mathematically, this is due to the consistency of sub-Bayesian networks.
• Step 2: turn this Bayesian network into a factor graph by simply introducing one factor per node which
is connected to that node and its parents. It’s important to include all the parents and the child into one
factor, not separate factors. From here out, all we need to think about is factor graphs.
• Step 3: condition on the evidence variables. Recall that conditioning on nodes in a factor graph shades
them in, and as a graph operation, rips out those variables from the graph.
• Step 4: remove nodes which are not connected to Q. These are independent of Q, so they have no impact
on the results.
• Step 5: Finally, run a standard probabilistic inference algorithm on the reduced factor graph. We’ll do this
manually for now using variable elimination. Later we’ll see automatic methods for doing this.
Example: alarm
B → A ← E

b   p(b)        e   p(e)
1   ε           1   ε
0   1 − ε       0   1 − ε

b  e  a  p(a | b, e)
0  0  0  1
0  0  1  0
0  1  0  0
0  1  1  1
1  0  0  0
1  0  1  1
1  1  0  0
1  1  1  1

[whiteboard]
Query: P(B)
• Marginalize out A, E
Query: P(B | A = 1)

• Condition on A = 1

CS221 / Spring 2019 / Charikar & Sadigh 71


• Here is another example: the simple v-structured alarm network from last time.
• P(B) = p(b) trivially after marginalizing out A and E (step 1).
• For P(B | A = 1), step 1 doesn’t do anything. Conditioning (step 3) creates a factor graph with factors
p(b), p(e), and p(a = 1 | b, e). In step 5, we eliminate E by replacing it and its incident factors with a
new factor f(b) = ∑_e p(e) p(a = 1 | b, e). Then, we multiply all the factors (which should only be unary
factors on the query variable B) and normalize: P(B = b | A = 1) ∝ p(b) f(b).
• To flesh this out, for b = 1, we have ε(ε + (1 − ε)) = ε. For b = 0, we have (1 − ε)(ε + 0) = ε(1 − ε). The
normalized result is thus P(B = 1 | A = 1) = ε / (ε + ε(1 − ε)) = 1 / (2 − ε).
• For a probabilistic interpretation, note that all we’ve done is calculate P(B = b | A = 1) =
P(B = b) P(A = 1 | B = b) / P(A = 1) = p(b) f(b) / ∑_{bi ∈ Domain(B)} p(bi) f(bi), where the first equality
follows from Bayes’ rule and the second follows from the fact that the local conditional distributions are
the true conditional distributions. The Bayesian network has simply given us a methodical, algorithmic
way to calculate this probability.
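A short sketch (assumed code) of this manual variable elimination for ε = 0.05: eliminate E into f(b) = ∑_e p(e) p(a = 1 | b, e), multiply by p(b), and normalize.

eps = 0.05
p_b = {1: eps, 0: 1 - eps}
p_e = {1: eps, 0: 1 - eps}
def p_a(a, b, e): return 1.0 if a == (b | e) else 0.0

f = {b: sum(p_e[e] * p_a(1, b, e) for e in (0, 1)) for b in (0, 1)}  # eliminate E
unnorm = {b: p_b[b] * f[b] for b in (0, 1)}                          # multiply unary factors on B
Z = sum(unnorm.values())
print({b: w / Z for b, w in unnorm.items()})  # P(B = 1 | A = 1) = 1 / (2 - eps), about 0.513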
Example: A-H (section)
A B C

D E

F G H

[whiteboard]
Query: P(C | B = b)
• Marginalize out everything else; note C ⊥⊥ B
Query: P(C, H | E = e)
• Marginalize out A, D, F, G; note C ⊥⊥ H | E

CS221 / Spring 2019 / Charikar & Sadigh 73


• In the first example, once we marginalize out all variables we can, we are left with C and B, which are
disconnected. We condition on B, which just removes that node, and so we’re just left with P(C) = p(c),
as expected.
• In the second example, note that the two query variables are conditionally independent given E, so we can
compute them separately. The result is P(C = c, H = h | E = e) ∝ p(c) p(h | e) ∑_b p(b) p(e | b, c).
• If we had the actual values of these probabilities, we could compute these quantities.
Summary
Bayesian networks: modular definition of large joint distribution over
variables

Probabilistic inference: condition on evidence, query variables of interest

Next time: algorithms for probabilistic inference

CS221 / Spring 2019 / Charikar & Sadigh 75
