Lab 5
This tutorial will give a short introduction to PyTorch basics, and get you set up for writing your own
neural networks.
This notebook is part of a lecture series on Deep Learning at the University of
Amsterdam.
The full list of tutorials can be found at https://uvadlc-notebooks.rtfd.io.
Setup
This notebook requires some packages besides pytorch-lightning.
Welcome to our PyTorch tutorial for the Deep Learning course 2020 at the University of
Amsterdam!
The following notebook is meant to give a short introduction to PyTorch basics, and
get you set up for writing your own neural networks.
PyTorch is an open source machine learning
framework that allows you to write your own neural networks and optimize them efficiently.
However, PyTorch is not the only framework of its kind.
Alternatives to PyTorch include
[TensorFlow](https://www.tensorflow.org/), [JAX](https://github.com/google/jax) and [Caffe](http://caffe.berkeleyvision.org/).
We choose to teach PyTorch at the University of Amsterdam
because it is well established, has a huge developer community (originally developed by
Facebook), is very flexible, and is widely used in research.
Many current papers publish their code
in PyTorch, and thus it is good to be familiar with PyTorch as well.
Meanwhile, TensorFlow
(developed by Google) is usually known for being a production-grade deep learning library.
Still, if
you know one machine learning framework in depth, it is very easy to learn another one because
many of them use the same concepts and ideas.
For instance, TensorFlow's version 2 was heavily
inspired by the most popular features of PyTorch, making the frameworks even more similar.
If you
are already familiar with PyTorch and have created your own neural network projects, feel free to
just skim this notebook.
There are already many good tutorials on the official PyTorch website (https://pytorch.org/tutorials/).
Yet, we choose to create our own tutorial, which is designed
to give you the basics particularly necessary for the practicals, while still explaining how PyTorch
works under the hood.
Over the next few weeks, we will also keep exploring new PyTorch features
in the series of Jupyter notebook tutorials about deep learning.
We will use a set of standard libraries that are often used in machine learning projects.
If you are
running this notebook on Google Colab, all libraries should be pre-installed.
If you are running this
notebook locally, make sure you have installed our dl2020 environment
(https://github.com/uvadlc/uvadlc_practicals_2020/blob/master/environment.yml) and have
activated it.
Note: you may need to restart the kernel to use updated packages.
At the time of writing this tutorial (mid-August 2021), the current stable version is 1.9.
You should therefore see the output Using torch 1.9.0 , possibly with some extension for the CUDA
version on Colab.
In case you use the dl2020 environment, you should see Using torch 1.6.0
since the environment was provided in October 2020.
It is recommended to update the PyTorch version to the newest one.
If you see a lower version number than 1.6, make sure you
have installed the correct environment, or ask one of your TAs.
In case PyTorch 1.10 or newer is published during the course, don't worry.
The interface between PyTorch versions doesn't change too much, and hence all code should also
be runnable with newer versions.
As in every machine learning framework, PyTorch provides functions that are stochastic, like
generating random numbers.
However, a very good practice is to set up your code to be
reproducible with the exact same random numbers.
This is why we set a seed below.
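A minimal sketch of the seeding call (42 is just an example value; the original setup cell is not shown here):

import torch
torch.manual_seed(42)  # Setting the seed for reproducible random numbers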
Tensors
Tensors are the PyTorch equivalent to Numpy arrays, with the addition of support for
GPU acceleration (more on that later).
The name "tensor" is a generalization of concepts you
already know.
For instance, a vector is a 1-D tensor, and a matrix a 2-D tensor.
When working with
neural networks, we will use tensors of various shapes and numbers of dimensions.
Most common functions you know from numpy can be used on tensors as well.
Actually, since
numpy arrays are so similar to tensors, we can convert most tensors to numpy arrays (and back)
but we don't need it too often.
Initialization
In [16]: x = torch.Tensor(2, 3, 4)
print(x)
The function torch.Tensor allocates memory for the desired tensor, but reuses any values that
were already in that memory.
To directly assign values to the tensor during initialization, there
are many alternatives including:
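For example (a short sketch of standard PyTorch initializers; not necessarily the exact list from the original cell):

x = torch.zeros(2, 3, 4)            # Tensor filled with zeros
x = torch.ones(2, 3, 4)             # Tensor filled with ones
x = torch.rand(2, 3, 4)             # Uniform random values between 0 and 1
x = torch.randn(2, 3, 4)            # Values sampled from a standard normal distribution
x = torch.arange(6)                 # Values 0, 1, ..., 5
x = torch.Tensor([[1, 2], [3, 4]])  # Tensor initialized from a (nested) list
print(x)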
tensor([[1., 2.],
[3., 4.]])
In [18]: # Create a tensor with random values between 0 and 1 with the shape [2, 3, 4]
x = torch.rand(2, 3, 4)
print(x)
You can obtain the shape of a tensor in the same way as in numpy ( x.shape ), or using the
.size() method:
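For instance (a small sketch, assuming x is the [2, 3, 4] tensor from above):

print("Shape:", x.shape)          # torch.Size([2, 3, 4])
dim1, dim2, dim3 = x.size()       # .size() returns a torch.Size, which can be unpacked
print("Size:", dim1, dim2, dim3)  # prints: Size: 2 3 4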
Size: 2 3 4
Tensors can be converted to numpy arrays, and numpy arrays back to tensors.
To transform a
numpy array into a tensor, we can use the function torch.from_numpy :
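A minimal sketch (the input array here is just an example):

import numpy as np

np_arr = np.array([[1, 2], [3, 4]])
tensor = torch.from_numpy(np_arr)
print("Numpy array:", np_arr)
print("PyTorch tensor:", tensor)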
To transform a PyTorch tensor back to a numpy array, we can use the function .numpy() on
tensors:
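A minimal sketch (torch.arange(4) is an assumption that is consistent with the output below):

tensor = torch.arange(4)
np_arr = tensor.numpy()
print("Numpy array:", np_arr)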
Numpy array: [0 1 2 3]
The conversion of tensors to numpy requires the tensor to be on the CPU, and not the GPU (more
on GPU support in a later section).
In case you have a tensor on GPU, you need to call .cpu()
on the tensor beforehand.
Hence, you get a line like np_arr = tensor.cpu().numpy() .
Operations
In [22]: x1 = torch.rand(2, 3)
x2 = torch.rand(2, 3)
y = x1 + x2
print("X1", x1)
print("X2", x2)
print("Y", y)
Calling x1 + x2 creates a new tensor containing the sum of the two inputs.
However, we can
also use in-place operations that are applied directly on the memory of a tensor.
We therefore change the values of x2 without the chance of re-accessing its values before the
operation.
An example is shown below:
In [23]: x1 = torch.rand(2, 3)
x2 = torch.rand(2, 3)
print("X1 (before)", x1)
print("X2 (before)", x2)
x2.add_(x1)
print("X1 (after)", x1)
print("X2 (after)", x2)
In-place operations are usually marked with an underscore postfix (e.g. "add_" instead of "add").
In [24]: x = torch.arange(6)
print("X", x)
X tensor([0, 1, 2, 3, 4, 5])
In [25]: x = x.view(2, 3)
print("X", x)
X tensor([[0, 1, 2],
[3, 4, 5]])
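The dimensions of a tensor can also be swapped; a sketch that would produce the output below (assuming x from the previous cell):

x = x.permute(1, 0)   # Exchange dimension 0 and 1
print("X", x)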
X tensor([[0, 3],
[1, 4],
[2, 5]])
Other commonly used operations include matrix multiplications, which are essential for neural
networks.
Quite often, we have an input vector 𝐱, which is transformed using a learned weight matrix 𝐖.
There are multiple ways and functions to perform matrix multiplication, some of which
we list below:
torch.matmul : Performs the matrix product over two tensors, where the specific behavior
depends on the dimensions.
If both inputs are matrices (2-dimensional tensors), it performs
the standard matrix product.
For higher dimensional inputs, the function supports broadcasting
(for details see the documentation
(https://pytorch.org/docs/stable/generated/torch.matmul.html?
highlight=matmul#torch.matmul)).
Can also be written as a @ b , similar to numpy.
torch.mm : Performs the matrix product over two matrices, but doesn't support broadcasting
(see documentation (https://pytorch.org/docs/stable/generated/torch.mm.html?
highlight=torch%20mm#torch.mm))
torch.bmm : Performs the matrix product with an additional batch dimension.
If the first tensor 𝑇 is of shape (𝑏 × 𝑛 × 𝑚), and the second tensor 𝑅 of shape (𝑏 × 𝑚 × 𝑝),
the output 𝑂 is of shape (𝑏 × 𝑛 × 𝑝), and it is calculated by performing 𝑏 matrix
multiplications of the submatrices of 𝑇 and 𝑅: 𝑂ᵢ = 𝑇ᵢ @ 𝑅ᵢ
torch.einsum : Performs matrix multiplications and more (i.e. sums of products) using the
Einstein summation convention.
Explanation of the Einstein sum can be found in assignment
1.
In [37]: x = torch.arange(6)
x = x.view(2, 3)
print("X", x)
X tensor([[0, 1, 2],
[3, 4, 5]])
W tensor([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
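The matrix product itself could then be computed like this (a sketch, assuming x and W as printed above):

h = torch.matmul(x, W)   # Result has shape [2, 3]
print("h", h)            # Expected: [[15, 18, 21], [42, 54, 66]]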
Indexing
In [40]: x = torch.arange(12).view(3, 4)
print("X", x)
X tensor([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
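Indexing works just like in numpy; a sketch of calls that would produce the outputs below (an assumption based on the printed results):

print(x[:, 1])    # Second column
print(x[0])       # First row
print(x[:2, -1])  # First two rows, last column
print(x[1:])      # All rows except the first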
tensor([1, 5, 9])
tensor([0, 1, 2, 3])
tensor([3, 7])
tensor([[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
If our neural network were to output a single scalar value, we would talk about taking the derivative,
but you will see that quite often we will have multiple output variables ("values"); in that case we
talk about gradients.
It's a more general term.
Given an input 𝐱, we define our function by manipulating that input, usually by matrix
multiplications with weight matrices and additions with so-called bias vectors.
As we manipulate our input, we are automatically creating a computational graph.
This graph shows how to arrive at our output from our input.
So, to recap: the only thing we have to do is to compute the output, and then we can ask PyTorch
to automatically get the gradients.
In [47]: x = torch.ones((3,))
print(x.requires_grad)
False
We can change this for an existing tensor using the function requires_grad_() (the underscore
indicating that this is an in-place operation).
Alternatively, when creating a tensor, you can pass the
argument
requires_grad=True to most initializers we have seen above.
In [46]: x.requires_grad_(True)
print(x.requires_grad)
True
In order to get familiar with the concept of a computation graph, we will create one for the following
function, which matches the operations in the cell below:

𝑦 = (1/|𝑥|) Σᵢ [(𝑥ᵢ + 2)² + 3]
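For this cell, x needs to be a float tensor with gradients enabled; a sketch of a setup that is consistent with the printed value below and with 𝐱 = [0, 1, 2] used further down:

x = torch.arange(3, dtype=torch.float32, requires_grad=True)  # Only float tensors can have gradients
print("X", x)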
In [59]: a = x + 2
b = a**2
c = b + 3
y = c.mean()
print("Y", y)
Y tensor(12.6667, grad_fn=<MeanBackward0>)
In [62]: y.grad
Using the statements above, we have created a computation graph that looks similar to the figure
below:
(Figure: computation graph of the operations above, with nodes for the input x, the constants 2 and 3, and the intermediate results.)
We calculate 𝑎 based on the input 𝑥 and the constant 2, 𝑏 is 𝑎 squared, and so on.
The visualization is an abstraction of the dependencies between inputs and outputs of the operations
we have applied.
Each node of the computation graph has automatically defined a function for
calculating the gradients with respect to its inputs, grad_fn .
You can see this when we printed the output tensor 𝑦 above.
This is why the computation graph is usually visualized in the reverse direction
(arrows point from the result to the inputs).
We can perform backpropagation on the computation
graph by calling the
function backward() on the last output, which effectively calculates
the
gradients for each tensor that has the property
requires_grad=True :
In [63]: y.backward()
x.grad will now contain the gradient ∂𝑦/∂𝐱, and this gradient indicates how a change in 𝐱 will
affect the output 𝑦 given the current input 𝐱 = [0, 1, 2]:
In [64]: print(x.grad)
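As a quick sanity check, the gradient can also be derived by hand (assuming the function 𝑦 = (1/|𝑥|) Σᵢ [(𝑥ᵢ + 2)² + 3] from above): ∂𝑦/∂𝑥ᵢ = 2(𝑥ᵢ + 2)/3, which evaluates to [1.3333, 2.0000, 2.6667] for 𝐱 = [0, 1, 2] and should match the printed x.grad.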
GPU support
A crucial feature of PyTorch is the support of GPUs, short for Graphics Processing Unit.
A GPU
can perform many thousands of small operations in parallel, making it very well suited for
performing large matrix operations in neural networks.
When comparing GPUs to CPUs, we can
list the following main differences (credit: Kevin Krewell, 2009
(https://blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/))
CPUs and GPUs both have different advantages and disadvantages, which is why many
computers contain both components and use them for different tasks.
In case you are not familiar
with GPUs, you can read up more details in this NVIDIA blog post
(https://blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/) or here
(https://www.intel.com/content/www/us/en/products/docs/processors/what-is-a-gpu.html).
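Whether a GPU is visible to PyTorch can be checked with torch.cuda.is_available() (a minimal sketch):

gpu_avail = torch.cuda.is_available()
print(f"Is the GPU available? {gpu_avail}")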
If you have a GPU on your computer but the command above returns False, make sure you have
the correct CUDA-version installed.
The dl2020 environment comes with the CUDA-toolkit 10.1,
which is selected for the Lisa supercomputer.
Please change it if necessary (CUDA 10.2 is
currently common).
On Google Colab, make sure that you have selected a GPU in your runtime
setup (in the menu, check under Runtime -> Change runtime type ).
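A device object can then be created and used to push tensors to the GPU; a sketch of the usual pattern, consistent with the Device output below:

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print("Device", device)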
Device cpu
In [68]: x = torch.zeros(2, 3)
x = x.to(device)
print("X", x)
In case you have a GPU, you should now see the attribute device='cuda:0' being printed next
to your tensor.
The zero next to cuda indicates that this is the zero-th GPU device on your
computer.
PyTorch also supports multi-GPU systems, but this you will only need once you have
very big networks to train (if interested, see the PyTorch documentation
(https://pytorch.org/docs/stable/distributed.html#distributed-basics)).
We can also compare the
runtime of a large matrix multiplication on the CPU with an operation on the GPU:
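A sketch of such a comparison (matrix size and timing method are illustrative; device is assumed to be the device object created above):

import time

x = torch.randn(5000, 5000)

# CPU version
start_time = time.time()
_ = torch.matmul(x, x)
print(f"CPU time: {time.time() - start_time:6.5f}s")

# GPU version (only if a GPU is available)
if torch.cuda.is_available():
    x = x.to(device)
    torch.cuda.synchronize()  # Make sure all queued GPU work is finished before timing
    start_time = time.time()
    _ = torch.matmul(x, x)
    torch.cuda.synchronize()  # Wait for the GPU to finish the multiplication
    print(f"GPU time: {time.time() - start_time:6.5f}s")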
Depending on the size of the operation and the CPU/GPU in your system, the speedup of this
operation can be >50x.
As matmul operations are very common in neural networks, we can
already see the great benefit of training a NN on a GPU.
The time estimate can be relatively noisy
here because we haven't run it multiple times.
Feel free to extend this, but it also takes longer
to run.
When generating random numbers, the seed between CPU and GPU is not synchronized.
Hence,
we need to set the seed on the GPU separately to ensure reproducible code.
Note that due to
different GPU architectures, running the same code on different GPUs does not guarantee the
same random numbers.
Still, we don't want our code to give us a different output every time we
run it on the exact same hardware.
Hence, we also set the seed on the GPU:
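A sketch of the GPU seeding (42 is again just an example value):

if torch.cuda.is_available():
    torch.cuda.manual_seed(42)
    torch.cuda.manual_seed_all(42)  # Covers the multi-GPU case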
We will introduce the libraries and all additional parts you might need to train a neural network in
PyTorch, using a simple example classifier on a simple yet well known example: XOR.
Given two binary inputs 𝑥₁ and 𝑥₂, the label to predict is 1 if either 𝑥₁ or 𝑥₂ is 1 while the other
is 0, and the label is 0 in all other cases.
The example became famous because a single neuron, i.e. a
linear classifier, cannot learn this simple function.
Hence, we will learn how to build a small neural
network that can learn this function.
To make it a little bit more interesting, we move the XOR into
continuous space and introduce some Gaussian noise on the binary inputs.
Our desired separation
of an XOR dataset could look as follows:
The model
The package torch.nn defines a series of useful classes like linear network layers, activation
functions, loss functions, etc.
A full list can be found here (https://pytorch.org/docs/stable/nn.html).
In case you need a certain network layer, check the documentation of the package first before
writing the layer yourself as the package likely contains the code for it already.
We import it below:
In [ ]:
In [ ]:
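The import would typically look like this (a sketch; the alias nn matches its use in the rest of the notebook):

import torch.nn as nn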
nn.Module
The forward function is where the computation of the module takes place, and is executed when
you call the module ( nn = MyModule(); nn(x) ).
In the init function, we usually create the
parameters of the module, using nn.Parameter , or defining other modules that are used in the
forward function.
The backward calculation is done automatically, but could be overwritten as well
if wanted.
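A minimal skeleton of such a module could look like this (MyModule and its contents are purely illustrative):

class MyModule(nn.Module):

    def __init__(self):
        super().__init__()
        # Create the parameters and/or sub-modules of the module here

    def forward(self, x):
        # Perform the calculation of the module on the input x here
        return x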
Simple classifier
We can now make use of the pre-defined modules in the torch.nn package, and define our own
small neural network.
We will use a minimal network with an input layer, one hidden layer with tanh
as activation function, and an output layer.
In other words, our network should look something like
this:
(Figure: a small network with input neurons x1 and x2, a hidden layer, and a single output neuron.)
The input neurons are shown in blue and represent the coordinates 𝑥₁ and 𝑥₂ of a data point.
The hidden neurons including a tanh activation are shown in white, and the output neuron in red.
In PyTorch, we can define this as follows:
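The original cell is not shown here, but a sketch that is consistent with the constructor call SimpleClassifier(num_inputs=2, num_hidden=4, num_outputs=1) and the act_fn attribute printed further below could look like this:

class SimpleClassifier(nn.Module):

    def __init__(self, num_inputs, num_hidden, num_outputs):
        super().__init__()
        # Initialize the modules we need to build the network
        self.linear1 = nn.Linear(num_inputs, num_hidden)
        self.act_fn = nn.Tanh()
        self.linear2 = nn.Linear(num_hidden, num_outputs)

    def forward(self, x):
        # Perform the calculation of the model to determine the prediction
        x = self.linear1(x)
        x = self.act_fn(x)
        x = self.linear2(x)
        return x

An instance for this notebook would then be created as model = SimpleClassifier(num_inputs=2, num_hidden=4, num_outputs=1).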
For the examples in this notebook, we will use a tiny neural network with two input neurons and
four hidden neurons.
As we perform binary classification, we will use a single output neuron.
Note
that we do not apply a sigmoid on the output yet.
This is because other functions, especially the
loss, are more efficient and precise to calculate on the original outputs instead of the sigmoid
output.
We will discuss the detailed reason later.
SimpleClassifier(
(act_fn): Tanh()
Each linear layer has a weight matrix of the shape [output, input] , and a bias of the shape
[output] .
The tanh activation function does not have any parameters.
Note that parameters are
only registered for nn.Module objects that are direct object attributes, i.e. self.a = ... .
If you
define a list of modules, the parameters of those are not registered for the outer module and can
cause some issues when you try to optimize your module.
There are alternatives, like
nn.ModuleList , nn.ModuleDict and nn.Sequential , that allow you to have different data
structures of modules.
We will use them in a few later tutorials and explain them there.
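One way to inspect the registered parameters and their shapes (a sketch, assuming model is an instance of the SimpleClassifier above):

for name, param in model.named_parameters():
    print(f"Parameter {name}, shape {param.shape}")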
The data
PyTorch also provides a few functionalities to load the training and
test data efficiently, summarized
in the package torch.utils.data .
In [ ]:
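The usual import for this package (a sketch; the alias data matches its use below):

import torch.utils.data as data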
The data package defines two classes which are the standard interface for handling data in
PyTorch: data.Dataset , and data.DataLoader .
The dataset class provides a uniform
interface to access the
training/test data, while the data loader makes sure to efficiently load
and
stack the data points from the dataset into batches during training.
The dataset class summarizes the basic functionality of a dataset in a natural way.
To define a
dataset in PyTorch, we simply specify two functions: __getitem__ , and __len__ .
The get-item function has to return the 𝑖-th data point in the dataset, while the len function returns the size of the
dataset.
For the XOR dataset, we can define the dataset class as follows:
In [74]:
class XORDataset(data.Dataset):

    def __init__(self, size, std=0.1):
        """
        Inputs:
            size - Number of data points we want to generate
            std - Standard deviation of the noise (see generate_continuous_xor function)
        """
        super().__init__()
        self.size = size
        self.std = std
        self.generate_continuous_xor()

    def generate_continuous_xor(self):
        # Each data point in the XOR dataset has two variables, x and y, that can be either 0 or 1
        # The label is their XOR combination, i.e. 1 if only x or only y is 1 while the other is 0.
        # If x=y, the label is 0.
        data = torch.randint(low=0, high=2, size=(self.size, 2), dtype=torch.float32)
        label = (data.sum(dim=1) == 1).to(torch.long)
        # To make it slightly more challenging, we add a bit of Gaussian noise to the data points
        data += self.std * torch.randn(data.shape)
        self.data = data
        self.label = label

    def __len__(self):
        # Number of data points we have. Alternatively self.data.shape[0], or self.label.shape[0]
        return self.size

    def __getitem__(self, idx):
        # Return the idx-th data point of the dataset
        # If we have multiple things to return (data point and label), we can return them as a tuple
        data_point = self.data[idx]
        data_label = self.label[idx]
        return data_point, data_label
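The cells for creating the dataset and the data loader are not shown here, but a sketch that would produce a batch like the one printed below could look as follows (the dataset size and batch_size=8 are assumptions):

dataset = XORDataset(size=200)
data_loader = data.DataLoader(dataset, batch_size=8, shuffle=True)

# next(iter(...)) catches the first batch of the data loader
data_inputs, data_labels = next(iter(data_loader))
print("Data inputs", data_inputs.shape, "\n", data_inputs)
print("Data labels", data_labels.shape, "\n", data_labels)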
tensor([[-0.0890, 0.8608],
[ 1.0905, -0.0128],
[ 0.7967, 0.2268],
[-0.0688, 0.0371],
[ 0.8732, -0.2240],
[-0.0559, -0.0282],
[ 0.9277, 0.0978],
[ 1.0150, 0.9689]])
tensor([1, 1, 1, 0, 1, 0, 1, 0])
Optimization
After defining the model and the dataset, it is time to prepare the optimization of the model.
During
training, we will perform the following steps:
1. Get a batch from the data loader
2. Obtain the predictions from the model for the batch
3. Calculate the loss based on the difference between predictions and labels
4. Backpropagation: calculate the gradients for every parameter with respect to the loss
5. Update the parameters of the model in the direction of the gradients
We have seen how we can do step 1, 2 and 4 in PyTorch. Now, we will look at step 3 and 5.
Loss modules
We can calculate the loss for a batch by simply performing a few tensor operations as those are
automatically added to the computation graph.
For instance, for binary classification, we can use
Binary Cross Entropy (BCE) which is defined as follows:
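In its standard form, with labels 𝑦ᵢ and predicted probabilities 𝑥ᵢ, it reads

ℒ_BCE = −Σᵢ [ 𝑦ᵢ log 𝑥ᵢ + (1 − 𝑦ᵢ) log(1 − 𝑥ᵢ) ]

PyTorch provides this loss as nn.BCELoss, and as nn.BCEWithLogitsLoss for a numerically more stable variant that takes the raw (pre-sigmoid) model outputs, which fits our model since we did not apply a sigmoid on its output.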
For updating the parameters, PyTorch provides the package torch.optim that has most popular
optimizers implemented.
We will discuss the specific optimizers and their differences later in the
course, but will for now use the simplest of them: torch.optim.SGD .
Stochastic Gradient
Descent updates parameters by multiplying the gradients with a small constant, called learning
rate, and subtracting those from the parameters (hence minimizing the loss).
Therefore, we slowly
move towards the direction of minimizing the loss.
A good default value of the learning rate for a
small network such as ours is 0.1.
In [83]: # Input to the optimizer are the parameters of the model: model.parameters()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
The gradients computed by backward() are not overwritten but accumulated, because a parameter can occur multiple
times in a computation graph, and we need to sum the gradients in this case instead of replacing
them.
Hence, remember to call optimizer.zero_grad() before calculating the gradients of a
batch.
Training
Finally, we are ready to train our model.
As a first step, we create a slightly larger dataset and
specify a data loader with a larger batch size.
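The training loop cell itself is not shown here; a sketch of what it could look like, following the five steps above (the names train_model, loss_module, and num_epochs are illustrative, with loss_module being e.g. nn.BCEWithLogitsLoss()):

def train_model(model, optimizer, data_loader, loss_module, num_epochs=100):
    model.train()  # Set the model to training mode
    for epoch in range(num_epochs):
        for data_inputs, data_labels in data_loader:
            # Step 1+2: run the model on the input batch to obtain predictions
            preds = model(data_inputs)
            preds = preds.squeeze(dim=1)  # Output is [Batch, 1], but we want [Batch]
            # Step 3: calculate the loss
            loss = loss_module(preds, data_labels.float())
            # Step 4: perform backpropagation
            optimizer.zero_grad()  # Make sure the gradients are zero before the backward pass
            loss.backward()
            # Step 5: update the parameters
            optimizer.step()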
Out[85]: SimpleClassifier(
(act_fn): Tanh()
Saving a model
After finishing training a model, we save the model to disk so that we can load the same weights at a
later time.
For this, we extract the so-called state_dict from the model which contains all
learnable parameters.
For our simple model, the state dict contains the following entries:
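A sketch of how the state dict is obtained (state_dict is the variable name reused in the saving cells below):

state_dict = model.state_dict()
print(state_dict)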
[ 1.3066, -1.8463],
[-1.5089, -0.6550],
In [90]: # torch.save(object, filename). For the filename, any extension can be used
torch.save(state_dict, "our_model.tar")
In [92]: # Load state dict from the disk (make sure it is the same name as above)
state_dict = torch.load("our_model.tar")
# Create a new model and load the state
new_model = SimpleClassifier(num_inputs=2, num_hidden=4, num_outputs=1)
new_model.load_state_dict(state_dict)
# Verify that the parameters are the same
print("Original model\n", model.state_dict())
print("\nLoaded model\n", new_model.state_dict())
Original model
[ 1.3066, -1.8463],
[-1.5089, -0.6550],
Loaded model
[ 1.3066, -1.8463],
[-1.5089, -0.6550],
Evaluation
Once we have trained a model, it is time to evaluate it on a held-out test set.
As our dataset consists
of randomly generated data points, we need to
first create a test set with a corresponding data
loader.
As a metric, we use accuracy, computed as (TP + TN) / (TP + TN + FP + FN), where TP are the true positives, TN true negatives, FP false positives, and FN the false negatives.
When evaluating the model, we don't need to keep track of the computation graph as we don't
intend to calculate the gradients.
This reduces the required memory and speeds up the model.
In
PyTorch, we can deactivate the computation graph using with torch.no_grad(): ... .
Remember to additionally set the model to eval mode.
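A sketch of such an evaluation loop (the names eval_model and data_loader are illustrative; the loader would be the test data loader just mentioned):

def eval_model(model, data_loader):
    model.eval()  # Set the model to evaluation mode
    true_preds, num_preds = 0.0, 0.0
    with torch.no_grad():  # Deactivate gradient tracking for the following code
        for data_inputs, data_labels in data_loader:
            preds = model(data_inputs).squeeze(dim=1)
            preds = torch.sigmoid(preds)         # Map raw outputs to probabilities in [0, 1]
            pred_labels = (preds >= 0.5).long()  # Binarize the predictions
            true_preds += (pred_labels == data_labels).sum().item()
            num_preds += data_labels.shape[0]
    acc = true_preds / num_preds
    print(f"Accuracy of the model: {100.0 * acc:4.2f}%")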
If we trained our model correctly, we should see a score close to 100% accuracy.
However, this is
only possible because of our simple task, and
unfortunately, we usually don't get such high scores
on test sets of
more complex tasks.
To visualize what our model has learned, we can perform a prediction for every data point in a
range of [−0.5,1.5], and visualize the predicted class as in the sample figure at the beginning of
this section.
This shows where the model has created decision boundaries, and which points
would be classified as 0 and which as 1.
We therefore get a background image of blue (class 0) and orange (class 1).
In the spots where the model is uncertain, we will see a blurry overlap.
The
specific code is less relevant compared to the output figure which
should hopefully show us a clear
separation of classes:
The decision boundaries might not look exactly as in the figure in the preamble of this section,
which can be caused by running it on CPU or a different GPU architecture.
Nevertheless, the result on the accuracy metric should be approximately the same.