Lesson 6: Practical Deep Learning For Coders (V2)
Key themes: stateful models, memory, representation
Basic NN with single hidden layer
[Diagram: Input (#channels x h x w) -> convolution stride 2 -> Hidden (Conv1: #filters x (h/2) x (w/2)) -> matrix product -> Output (FC1: #activations = #classes)]
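A minimal PyTorch sketch of what this diagram describes; the class name and all sizes (3-channel 28x28 input, 16 filters, 10 classes) are illustrative assumptions, not the lesson's actual code:

import torch
import torch.nn as nn

class SingleHiddenNet(nn.Module):
    # hypothetical sizes: 3-channel 28x28 input, 16 filters, 10 classes
    def __init__(self, channels=3, h=28, w=28, n_filters=16, n_classes=10):
        super().__init__()
        # Input -> Hidden: the stride-2 convolution halves h and w
        self.conv1 = nn.Conv2d(channels, n_filters, kernel_size=3, stride=2, padding=1)
        # Hidden -> Output: a matrix product over the flattened activations
        self.fc1 = nn.Linear(n_filters * (h // 2) * (w // 2), n_classes)

    def forward(self, x):                  # x: (batch, channels, h, w)
        x = torch.relu(self.conv1(x))      # Conv1: #filters x (h/2) x (w/2)
        return self.fc1(x.flatten(1))      # Output: #classes

For example, SingleHiddenNet()(torch.randn(64, 3, 28, 28)) returns a (64, 10) tensor of class scores.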
Predicting char 3 using chars 1 & 2
(NB: layer operations are not shown; remember that arrows represent layer operations.)
[Diagram: char 1 input (vocab size) -> Hidden (FC1: #activations); char 2 input feeds into the hidden state (FC2: #activations) -> Output (char 3 scores, vocab size)]
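A sketch of a two-character model in the spirit of this diagram (vocab_size=85, n_hidden=256 and all layer names are assumptions): each char is embedded, char 1 produces the first hidden state, and char 2 is folded in before predicting char 3.

import torch
import torch.nn as nn

class Char3Model(nn.Module):
    # hypothetical sizes and names
    def __init__(self, vocab_size=85, n_hidden=256):
        super().__init__()
        self.e = nn.Embedding(vocab_size, n_hidden)    # char index -> dense vector
        self.l_in = nn.Linear(n_hidden, n_hidden)      # input -> hidden (FC1/FC2 in the diagram)
        self.l_hidden = nn.Linear(n_hidden, n_hidden)  # hidden -> hidden
        self.l_out = nn.Linear(n_hidden, vocab_size)   # hidden -> output scores

    def forward(self, c1, c2):   # c1, c2: (batch,) LongTensors of char indices
        h = torch.relu(self.l_in(self.e(c1)))                     # hidden after char 1
        h = torch.tanh(self.l_hidden(h) + self.l_in(self.e(c2)))  # fold in char 2
        return self.l_out(h)                                      # scores for char 3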
Predicting char 4 using chars 1, 2 & 3
[Diagram: char inputs 1-3 each enter through a shared InputHidden weight matrix; the hidden states (FC1-FC3: #activations) are chained through a shared HiddenHidden matrix; the last hidden state goes through HiddenOutput to the char 4 output (vocab size)]
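The key idea in this diagram is weight sharing: the same three matrices are reused at every step, so the model generalizes to any number of input chars. A minimal sketch (sizes and the class name are assumptions):

import torch
import torch.nn as nn

class CharLoopModel(nn.Module):
    # the three shared weight matrices named in the diagram; sizes are assumptions
    def __init__(self, vocab_size=85, n_hidden=256):
        super().__init__()
        self.n_hidden = n_hidden
        self.e = nn.Embedding(vocab_size, n_hidden)
        self.input_hidden = nn.Linear(n_hidden, n_hidden)     # InputHidden
        self.hidden_hidden = nn.Linear(n_hidden, n_hidden)    # HiddenHidden
        self.hidden_output = nn.Linear(n_hidden, vocab_size)  # HiddenOutput

    def forward(self, *chars):   # e.g. forward(c1, c2, c3); each a (batch,) LongTensor
        h = torch.zeros(chars[0].size(0), self.n_hidden, device=chars[0].device)
        for c in chars:          # the same weights are applied at every step
            h = torch.tanh(self.hidden_hidden(h) + torch.relu(self.input_hidden(self.e(c))))
        return self.hidden_output(h)   # scores for the next char (char 4 here)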
Predicting chars 2 to n using chars 1 to n-1
[Diagram: the same three weight matrices (InputHidden, HiddenHidden, HiddenOutput) drawn as a recurrent loop; the hidden state is initialized to zeros, and every step of the loop emits an output]
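This loop is exactly what nn.RNN implements (it bundles the InputHidden and HiddenHidden matrices). A sketch that emits a prediction at every step, with the zero-initialized hidden state from the diagram; class name and sizes are assumptions:

import torch
import torch.nn as nn

class CharSeqRnn(nn.Module):
    # hypothetical sizes; nn.RNN bundles InputHidden and HiddenHidden
    def __init__(self, vocab_size=85, n_hidden=256, n_layers=1):
        super().__init__()
        self.e = nn.Embedding(vocab_size, n_hidden)
        self.rnn = nn.RNN(n_hidden, n_hidden, num_layers=n_layers, batch_first=True)
        self.l_out = nn.Linear(n_hidden, vocab_size)   # HiddenOutput

    def forward(self, x):   # x: (batch, seq_len) of char indices 1..n-1
        # "Initialize to zeros": one zero hidden state per RNN layer
        h0 = torch.zeros(self.rnn.num_layers, x.size(0), self.rnn.hidden_size, device=x.device)
        out, h = self.rnn(self.e(x), h0)
        return self.l_out(out)   # one prediction per position: chars 2..n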
Predicting chars 2 to n using chars 1 to n-1 using stacked RNNs
[Diagram: the same recurrent loop with a second RNN layer stacked on the first; each layer's hidden state is initialized to zeros]
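Stacking just means more recurrent layers feeding into each other. With the CharSeqRnn sketch above this is the n_layers argument (mirroring nn.RNN's num_layers); the batch and sequence sizes below are assumptions:

# two stacked RNN layers, each hidden state initialized to zeros
stacked = CharSeqRnn(vocab_size=85, n_hidden=256, n_layers=2)
x = torch.randint(0, 85, (32, 8))   # assumed: batch of 32 sequences of 8 chars
preds = stacked(x)                  # (32, 8, 85): scores for chars 2..n at every step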
Unrolled stacked RNNs for sequences
[Diagram: the stacked RNN unrolled over the char 1, char 2, char 3 inputs; every unrolled step shares the InputHidden, HiddenHidden and HiddenOutput matrices, a Loss is computed at each output, and Backprop flows back through the whole unrolled sequence]
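A sketch of one training step over the unrolled sequence, using the CharSeqRnn sketch above (all sizes are assumptions): the loss is computed at every unrolled output, and a single backward pass propagates through the whole sequence (backpropagation through time).

import torch
import torch.nn as nn

model = CharSeqRnn(vocab_size=85, n_hidden=256, n_layers=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randint(0, 85, (32, 8))   # chars 1..n-1
y = torch.randint(0, 85, (32, 8))   # targets: chars 2..n (inputs shifted by one)

preds = model(x)                                      # (32, 8, 85)
loss = loss_fn(preds.reshape(-1, 85), y.reshape(-1))  # a loss at every unrolled step
opt.zero_grad()
loss.backward()   # backprop through the whole unrolled sequence
opt.step()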