
Formulas that you are expected to remember

Leandro Minku
April 22, 2024

In general, I do not ask students to memorise formulas for the exam. Rather,
students are expected to demonstrate that they understand the formulas.
However, some formulas directly represent the core ideas of their underlying
approaches, so if you understand the ideas, you should be able to remember
the corresponding formulas. Accordingly, this document lists the formulas
from Weeks 1–4 and 7 that you are expected to remember by heart for the ML
exam.

1 Logistic Regression
 
\[
\text{logit}(p_1) = \ln\left(\frac{p_1}{1 - p_1}\right) = w^T x
\]

\[
p_y = p(y \mid x, w)
\]

If w^T x ≥ 0, predict class 1. Otherwise, predict class 0.

\[
\text{logit}(p_1) = \ln\left(\frac{p_1}{1 - p_1}\right) = w^T \phi(x)
\]

\[
p_y = p(y \mid \phi(x), w)
\]

If w^T ϕ(x) ≥ 0, predict class 1. Otherwise, predict class 0.

\[
p_0 = 1 - p_1
\]
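
As an illustration of how the prediction formulas above fit together, here is a minimal NumPy sketch of the decision rule. The function names and the example weights and input are made up for illustration; they are not part of the module notes.

```python
import numpy as np

def sigmoid(z):
    # Inverse of the logit: maps w^T x to p1 = p(y = 1 | x, w).
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    # Decision rule: predict class 1 when w^T x >= 0, class 0 otherwise.
    z = w @ x
    p1 = sigmoid(z)
    return (1 if z >= 0 else 0), p1

# Hypothetical weights and input (the first feature acts as a bias term).
w = np.array([0.5, -1.2, 2.0])
x = np.array([1.0, 0.3, 0.8])
print(predict(w, x))  # class 1, since w^T x = 0.5 - 0.36 + 1.6 >= 0
```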

\[
L(w) = \prod_{i=1}^{N} p_{y^{(i)}}
\]

\[
\ln(L(w)) = \sum_{i=1}^{N} \ln p_{y^{(i)}}
\]

\[
E(w) = -\ln(L(w))
\]
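
Similarly, a short sketch of how the negative log-likelihood E(w) could be computed from predicted probabilities p1 and labels y in {0, 1}; the arrays below are illustrative values only, not taken from the notes.

```python
import numpy as np

def negative_log_likelihood(p1, y):
    # p_{y^(i)} is p1 when y^(i) = 1 and (1 - p1) when y^(i) = 0.
    p_y = np.where(y == 1, p1, 1.0 - p1)
    # E(w) = -ln L(w) = -sum_i ln p_{y^(i)}
    return -np.sum(np.log(p_y))

# Hypothetical predicted probabilities and true labels.
p1 = np.array([0.9, 0.2, 0.7])
y = np.array([1, 0, 1])
print(negative_log_likelihood(p1, y))
```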

2 Gradient Descent and IRLS
\[
w = w - \eta \nabla E(w)
\]

\[
w = w - H_E^{-1}(w) \nabla E(w)
\]
PS: you do not need to remember the equations for the gradient and Hessian of
the cross-entropy loss function, just the general update equations above.
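
For concreteness, here is a minimal sketch of both update rules, assuming functions grad_E and hess_E that return ∇E(w) and the Hessian H_E(w) are available; the quadratic example at the end is purely illustrative.

```python
import numpy as np

def gradient_descent_step(w, grad_E, eta=0.1):
    # w = w - eta * gradient of E at w
    return w - eta * grad_E(w)

def newton_irls_step(w, grad_E, hess_E):
    # w = w - H_E(w)^{-1} * gradient of E at w
    return w - np.linalg.solve(hess_E(w), grad_E(w))

# Illustrative example with E(w) = w^T w, so grad = 2w and Hessian = 2I.
grad_E = lambda w: 2 * w
hess_E = lambda w: 2 * np.eye(len(w))
w = np.array([1.0, -2.0])
print(gradient_descent_step(w, grad_E))      # moves towards the minimum at 0
print(newton_irls_step(w, grad_E, hess_E))   # reaches 0 in one step for a quadratic
```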

3 SVM and Soft Margin SVM


\[
h(x) = w^T x + b
\]

\[
h(x) = w^T \phi(x) + b
\]

\[
h(x) = \sum_{n \in S} a^{(n)} y^{(n)} k(x, x^{(n)}) + b
\]

When using the linear kernel, k(x, x^{(n)}) = x^T x^{(n)}.


If h(x) > 0, predict class +1. If h(x) < 0, predict class -1.
PS: you do not need to remember the formula for calculating b in the dual
representation.
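
To illustrate the dual form of the prediction function, here is a small sketch assuming the multipliers a^(n), labels y^(n), support vectors and bias b have already been obtained from training; all numbers below are made up.

```python
import numpy as np

def linear_kernel(x, x_n):
    # k(x, x^(n)) = x^T x^(n)
    return x @ x_n

def svm_predict(x, support_vectors, a, y, b, kernel=linear_kernel):
    # h(x) = sum over support vectors of a^(n) y^(n) k(x, x^(n)) + b
    h = sum(a_n * y_n * kernel(x, x_n)
            for a_n, y_n, x_n in zip(a, y, support_vectors)) + b
    return (+1 if h > 0 else -1), h  # h == 0 is not covered by the rule above

# Hypothetical support vectors, multipliers, labels and bias.
support_vectors = [np.array([1.0, 2.0]), np.array([-1.0, 0.5])]
a = [0.6, 0.4]
y = [+1, -1]
b = -0.2
print(svm_predict(np.array([0.5, 1.0]), support_vectors, a, y, b))
```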

4 General Formulas

Euclidean norm: ∥w∥ = √(w^T w).
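
As a quick sanity check of the norm formula, a tiny NumPy example with an arbitrary vector:

```python
import numpy as np

w = np.array([3.0, 4.0])
print(np.sqrt(w @ w))     # 5.0, matching ||w|| = sqrt(w^T w)
print(np.linalg.norm(w))  # same value via NumPy's built-in norm
```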
