Machine Learning Formulae
Leandro Minku
April 22, 2024
In general, I do not ask students to memorise formulas for the exam. Rather,
students are expected to demonstrate that they are able to understand the
formulas. However, some formulas directly represent the core ideas of their
underlying approaches. Therefore, if you understand the ideas, you should be
able to remember the corresponding formulas. Accordingly, this document
lists the formulas from Weeks 1–4 and 7 that you are expected to remember by
heart for the ML exam.
1 Logistic Regression
\[ \mathrm{logit}(p_1) = \ln \frac{p_1}{1 - p_1} = w^T x \]
\[ p_y = p(y \mid x, w) \]
If $w^T x \geq 0$, predict class 1. Otherwise, predict class 0.
\[ \mathrm{logit}(p_1) = \ln \frac{p_1}{1 - p_1} = w^T \phi(x) \]
\[ p_y = p(y \mid \phi(x), w) \]
If $w^T \phi(x) \geq 0$, predict class 1. Otherwise, predict class 0.
\[ p_0 = 1 - p_1 \]
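For illustration only, here is a minimal Python sketch of the prediction rule above, assuming numpy and a hypothetical basis expansion phi; with phi as the identity, it reduces to the raw-input case.

import numpy as np

def phi(x):
    # Hypothetical basis expansion: prepend a bias feature to the raw inputs.
    return np.concatenate(([1.0], x))

def predict(w, x):
    # w is assumed to include a weight for the bias feature added by phi.
    # The sign of the linear score w^T phi(x) gives the predicted class,
    # and its sigmoid gives p_1 = p(y = 1 | phi(x), w).
    score = w @ phi(x)
    p1 = 1.0 / (1.0 + np.exp(-score))
    p0 = 1.0 - p1
    return (1 if score >= 0 else 0), p1, p0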
\[ L(w) = \prod_{i=1}^{N} p_{y^{(i)}} \]
\[ \ln(L(w)) = \sum_{i=1}^{N} \ln p_{y^{(i)}} \]
\[ E(w) = -\ln(L(w)) \]
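For illustration only, a minimal Python sketch of the error above, assuming a numpy array y of 0/1 class labels and an array p1 with the corresponding predicted probabilities p_1 for the N training examples:

import numpy as np

def cross_entropy_error(y, p1):
    # p_{y^(i)} is p_1 for examples with y^(i) = 1 and p_0 = 1 - p_1 otherwise.
    p_y = np.where(y == 1, p1, 1.0 - p1)
    # E(w) = -ln(L(w)) = -sum_i ln p_{y^(i)}
    return -np.sum(np.log(p_y))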
2 Gradient Descent and IRLS
\[ w = w - \eta \nabla E(w) \]
\[ w = w - H_E^{-1}(w) \nabla E(w) \]
PS: You do not need to remember the equations corresponding to the gradient
and Hessian of the cross-entropy loss function, just the general update equations above.
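For illustration only, a minimal Python sketch of the two update rules above, assuming hypothetical functions grad_E and hessian_E that return the gradient and Hessian of E at w:

import numpy as np

def gradient_descent_step(w, grad_E, eta):
    # w = w - eta * gradient of E at w
    return w - eta * grad_E(w)

def newton_step(w, grad_E, hessian_E):
    # w = w - (Hessian of E at w)^(-1) * gradient of E at w
    # (the IRLS update has this form); solving the linear system
    # avoids forming the inverse explicitly.
    return w - np.linalg.solve(hessian_E(w), grad_E(w))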
3 Support Vector Machines
\[ h(x) = w^T \phi(x) + b \]
\[ h(x) = \sum_{n \in S} a^{(n)} y^{(n)} k(x, x^{(n)}) + b \]
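For illustration only, a minimal Python sketch of the kernelised form above, assuming arrays a, y and X_s holding a^(n), y^(n) and x^(n) for the examples in S, a bias b, and a hypothetical kernel function k:

import numpy as np

def k(x, x_n):
    # Hypothetical kernel; a simple linear kernel k(x, x') = x^T x' is used as a placeholder.
    return x @ x_n

def h(x, a, y, X_s, b):
    # h(x) = sum over n in S of a^(n) * y^(n) * k(x, x^(n)), plus b
    return sum(a_n * y_n * k(x, x_n) for a_n, y_n, x_n in zip(a, y, X_s)) + b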
4 General Formulas
Euclidean norm: $\|w\| = \sqrt{w^T w}$.
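For illustration only, this can be computed directly from the definition in Python with numpy:

import numpy as np

w = np.array([3.0, 4.0])
norm = np.sqrt(w @ w)   # sqrt(w^T w) = 5.0, equivalent to np.linalg.norm(w)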