Expectation Maximization - Georgia Tech - Machine Learning - English
>> Well, because you said it was the maximum likelihood. Scenario.
>> Right.
the means, and that's just this calculation here. So that's computing
the expectation. Defining the Z variables from the mu's, the centers.
Within each cluster J. What's the likelihood it came from cluster J and
the things we assign to that cluster. But here, we actually are kind
points in there, and it only counts half towards the average, and we
place, and so we're just doing this weighted average of the data points.
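A minimal sketch of the two steps being described, assuming a 1-D mixture of k Gaussians with equal priors and a shared, fixed variance (the names e_step, m_step, mus, and z are illustrative, not from the lecture): the E step turns the current means into soft assignments z, and the M step recomputes each mean as the z-weighted average of the data points.

```python
# Illustrative sketch, not the lecture's code: EM for a 1-D equal-variance,
# equal-prior Gaussian mixture, updating only the means.
import numpy as np

def e_step(x, mus, sigma=1.0):
    """Expectation: z[i, j] = P(cluster j | x_i) from the current means."""
    lik = np.exp(-(x[:, None] - mus[None, :]) ** 2 / (2 * sigma ** 2))
    return lik / lik.sum(axis=1, keepdims=True)   # normalize over clusters

def m_step(x, z):
    """Maximization: each mean is the z-weighted average of the points."""
    return (z * x[:, None]).sum(axis=0) / z.sum(axis=0)

# Tiny example: two clumps of points, two means.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 50), rng.normal(3, 1, 50)])
mus = np.array([-1.0, 1.0])
for _ in range(20):
    z = e_step(x, mus)     # soft assignments from the current means
    mus = m_step(x, z)     # new means from the soft assignments
print(mus)                 # close to the true centers -3 and 3
```

A point sitting between two centers contributes partially to both means, which is the "counts half towards the average" behavior mentioned above.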
>> So,
even get that for the Gaussian case; the z_i variables will always be some
probability that they come from some Gaussian, because they have infinite extent. So if they
were ones and zeroes, you would end up with exactly k-means.
I think.
>> I dunno, I never really thought about that. Let's
>> Mm-hm. Then, what would happen? We send these means back, and what we do
>> Mm-hm.
>> Which is very similar, actually, to what this does, except that it does a
hard max or something. Then you would end up with exactly k-means.
>> I think
you're right.
>> Huh.
>> Okay.
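A sketch of the claim in this exchange, under the same illustrative setup as above: if the z values are forced to be exactly 0 or 1 with a hard max, the same mean update reduces to the k-means centroid update. With genuine Gaussians this never happens on its own, since every point gets some nonzero probability under every component.

```python
# Illustrative hard-assignment variant of the E step (my sketch, not the
# lecture's code).
import numpy as np

def hard_e_step(x, mus):
    """Hard assignment: each point goes entirely to its nearest mean,
    so z contains exact ones and zeroes instead of probabilities."""
    nearest = np.argmin(np.abs(x[:, None] - mus[None, :]), axis=1)
    z = np.zeros((len(x), len(mus)))
    z[np.arange(len(x)), nearest] = 1.0
    return z

# Plugging these 0/1 z's into m_step from the sketch above averages only the
# points assigned to each cluster -- exactly the k-means centroid update.
```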
The data is going to be more and more likely over time.
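That monotonicity can be checked directly in the earlier sketch (reusing e_step, m_step, x, and mus from above; the equal-prior, shared-variance assumptions are mine): track the log-likelihood of the data after each iteration and confirm it never goes down.

```python
# Illustrative check that EM never decreases the data's log-likelihood.
import numpy as np

def log_likelihood(x, mus, sigma=1.0):
    """Log-likelihood of the data under an equal-weight Gaussian mixture."""
    dens = np.exp(-(x[:, None] - mus[None, :]) ** 2 / (2 * sigma ** 2))
    dens /= np.sqrt(2 * np.pi) * sigma * len(mus)   # normalize and average
    return np.log(dens.sum(axis=1)).sum()

prev = -np.inf
for _ in range(20):
    z = e_step(x, mus)              # expectation
    mus = m_step(x, z)              # maximization
    cur = log_likelihood(x, mus)
    assert cur >= prev - 1e-9       # likelihood never decreases
    prev = cur
```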