
PyTorch Fundamentals

What is PyTorch?
PyTorch is an open-source machine learning and deep learning framework.

What can PyTorch be used for?


PyTorch allows you to manipulate and process data and write machine learning algorithms using Python code.

Topics

Introduction to tensors: Tensors are the basic building block of all of machine learning and deep learning.
Creating tensors: Tensors can represent almost any kind of data (images, words, tables of numbers).
Getting information from tensors: If you can put information into a tensor, you'll want to get it out too.
Manipulating tensors: Machine learning algorithms (like neural networks) involve manipulating tensors in many different ways such as adding, multiplying, combining.
Dealing with tensor shapes: One of the most common issues in machine learning is dealing with shape mismatches (trying to mix wrong-shaped tensors with other tensors).
Indexing on tensors: If you've indexed on a Python list or NumPy array, it's very similar with tensors, except they can have far more dimensions.
Mixing PyTorch tensors and NumPy: PyTorch plays with tensors (torch.Tensor), NumPy likes arrays (np.ndarray), and sometimes you'll want to mix and match these.
Reproducibility: Machine learning is very experimental and since it uses a lot of randomness to work, sometimes you'll want that randomness to not be so random.
Running tensors on GPU: GPUs (Graphics Processing Units) make your code faster, and PyTorch makes it easy to run your code on GPUs.

# Importing PyTorch
import torch
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

torch.__version__

'2.5.1+cu121'
Introduction to tensors

A tensor is a mathematical object that generalizes the concepts of scalars, vectors, and matrices to higher dimensions.

Tensors are the fundamental building block of machine learning.

Their job is to represent data in a numerical way.

For example, you could represent an image as a tensor with shape [3, 224, 224] which would mean [colour_channels, height, width], as
in the image has 3 colour channels (red, green, blue), a height of 224 pixels and a width of 224 pixels.

Creating tensors
Types of tensors:
A scalar is a single number; in tensor-speak it's a zero-dimension tensor.
A vector is a single-dimension tensor but can contain many numbers.
A matrix is a grid of numbers arranged in rows and columns. Matrices are second-order tensors.
Third-order tensors and beyond extend this to higher dimensions.

PyTorch tensors are created using torch.tensor()

# Scalar or Zero-Order Tensor


scalar = torch.tensor(7)
scalar

tensor(7)

scalar.ndim

0

# Get the Python number within a tensor (only works with one-element tensors)
scalar.item()

7

# Vector or First-Order Tensor


vector = torch.tensor([7, 7])
vector

tensor([7, 7])

vector.ndim

1

vector.shape

torch.Size([2])
# Matrix or Second-Order Tensor
matrix = torch.tensor([[7, 8],
[8, 9]])
matrix

tensor([[7, 8],
[8, 9]])

matrix.ndim

2

# accessing matrix elements

print(matrix[1])    # which is like a vector
print(matrix[0][1]) # which is like a scalar

tensor([8, 9])
tensor(8)

matrix.shape

torch.Size([2, 2])

# TENSOR (a Third-Order Tensor)


TENSOR = torch.tensor([[[1, 2, 3],
[3, 6, 9],
[2, 4, 5]]])
TENSOR

tensor([[[1, 2, 3],
[3, 6, 9],
[2, 4, 5]]])

TENSOR.ndim

3

TENSOR.shape

torch.Size([1, 3, 3])

tensor_2 = torch.tensor([[[1, 2, 3],
                          [4, 5, 6],
                          [7, 8, 9]],
                         [[1, 2, 3],
                          [4, 5, 6],
                          [7, 8, 9]]])
tensor_2

tensor([[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]],

[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]])

tensor_2.ndim
3

tensor_2.shape

torch.Size([2, 3, 3])

print(tensor_2[0]) #which is like a matrix


print(tensor_2[0][1]) # which is like a vector
print(tensor_2[0][1][1]) # which is like a scalar

tensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
tensor([4, 5, 6])
tensor(5)

Random tensors
Why random tensors?

Random tensors are important because the way many neural networks learn is that they start with tensors full of random numbers and
then adjust those random numbers to better represent the data.

Start with random numbers -> look at data -> update random numbers -> look at data -> update random
numbers

# create a random vector tensor


random_vector = torch.rand(4)
random_vector

tensor([0.0643, 0.6392, 0.7479, 0.4048])

# create a random matrix tensor


random_matrix = torch.rand(3,3)
random_matrix

tensor([[2.6138e-01, 6.3623e-01, 7.2503e-01],


[5.3131e-01, 5.6378e-01, 8.9594e-01],
[7.2262e-02, 6.8501e-03, 6.0815e-04]])

# Create a random tensor with similar shape to an image tensor


random_image_tensor = torch.rand(size=(3, 244, 244))
random_image_tensor.shape, random_image_tensor.dtype

(torch.Size([3, 244, 244]), torch.float32)

Zeros and ones


# create a tensor of all zeros
zero_tensor = torch.zeros(3,4)
zero_tensor

tensor([[0., 0., 0., 0.],


[0., 0., 0., 0.],
[0., 0., 0., 0.]])

# create a tensor of all ones


ones_tensor = torch.ones(3,4)
ones_tensor

tensor([[1., 1., 1., 1.],


[1., 1., 1., 1.],
[1., 1., 1., 1.]])

Creating a range
You might want a range of numbers, such as 1 to 10 or 0 to 100.

You can use torch.arange(start, end, step) to do so.

Where:

start = start of range (e.g. 0)
end = end of range, exclusive (e.g. 10)
step = the step between each value (e.g. 1)

one_to_ten = torch.arange(start=1, end=11, step=1)
one_to_ten

tensor([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

Tensors like
Sometimes you might want one tensor of a certain type with the same shape as another tensor.

For example, a tensor of all zeros with the same shape as a previous tensor.

To do so you can use torch.zeros_like(input) or torch.ones_like(input) which return a tensor filled with zeros or ones
in the same shape as the input respectively.

# Creating zeros tensors like


ten_zero = torch.zeros_like(input=one_to_ten)
ten_zero

tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

# Creating ones tensors like


ten_ones = torch.ones_like(input=one_to_ten)
ten_ones

tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1])

Tensor datatypes

There are many different tensor datatypes available in PyTorch.
Some are specific to the CPU and some are better for the GPU.

Generally, if you see torch.cuda anywhere, the tensor is being used for GPU (since Nvidia GPUs use a computing toolkit called
CUDA).
The most common type (and generally the default) is torch.float32 or torch.float .
This is referred to as 32-bit floating point.
There’s also:
16-bit floating point: torch.float16 or torch.half
64-bit floating point: torch.float64 or torch.double
8-bit, 16-bit, 32-bit, and 64-bit integers.

Why So Many Datatypes?


The reason for having multiple datatypes is precision in computing.
Precision refers to the amount of detail used to describe a number:
Higher precision (e.g., 32, 64) provides more detail but requires more compute.
Lower precision (e.g., 8, 16) is faster to compute but sacrifices some accuracy.

This is important in deep learning because you're performing so many operations.


The trade-off is between speed and accuracy:

Lower precision is faster but less accurate.


Higher precision is slower but more accurate.
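
As a rough illustration of this trade-off (not part of the original notebook), you can compare how much memory the same values take at different precisions using element_size() and nelement():

# Bytes per element times number of elements: float32 uses 4 bytes, float16 uses 2
t32 = torch.tensor([3.0, 5.0, 6.0], dtype=torch.float32)
t16 = torch.tensor([3.0, 5.0, 6.0], dtype=torch.float16)
t32.element_size() * t32.nelement(), t16.element_size() * t16.nelement()

(12, 6)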

Common Errors with Tensors in PyTorch


When working with PyTorch and deep learning, you’ll often encounter 3 main types of tensor-related errors:

1. Tensors not of the right datatype


2. Tensors not of the right shape
3. Tensors not on the right device (CPU vs. GPU)

# Default datatype for tensors is float32


float_32_tensor = torch.tensor([3.0, 5.0, 6.0],
dtype=None, # defaults to None, which is torch.float32 or whatever datatype is passed
device=None, # defaults to None, which uses the default tensor type
requires_grad=False) # whether or not to track gradients with this tensors operations

float_32_tensor.dtype
torch.float32

float_16_tensor = torch.tensor([3.0, 5.0, 6.0],


dtype=torch.float16,
device=None,
requires_grad=False)
float_16_tensor

tensor([3., 5., 6.], dtype=torch.float16)

# convert from float32 to float16


float_16_tensor = float_32_tensor.type(torch.float16)
float_16_tensor

tensor([3., 5., 6.], dtype=torch.float16)

Getting information from tensors


1. Tensors not of the right datatype - to get the datatype from a tensor, you can use tensor.dtype
2. Tensors not of the right shape - to get the shape from a tensor, you can use tensor.shape
3. Tensors not on the right device - to get the device from a tensor, you can use tensor.device

# create a tensor
a_tensor = torch.rand(3, 4)
a_tensor

tensor([[0.4235, 0.6891, 0.0505, 0.0402],


[0.4072, 0.5102, 0.7070, 0.8010],
[0.6705, 0.6086, 0.8185, 0.8384]])

# find out details about the tensor

print(a_tensor)
print(f"Datatype of the tensor: {a_tensor.dtype}")
print(f"Shape of the tensor: {a_tensor.shape}")
print(f"Device the tensor is on: {a_tensor.device}")

tensor([[0.4235, 0.6891, 0.0505, 0.0402],
        [0.4072, 0.5102, 0.7070, 0.8010],
        [0.6705, 0.6086, 0.8185, 0.8384]])
Datatype of the tensor: torch.float32
Shape of the tensor: torch.Size([3, 4])
Device the tensor is on: cpu

Manipulating Tensors (tensor operations)


In deep learning, data (images, text, video, audio, protein structures, etc) gets represented as tensors.

A model learns by investigating those tensors and performing a series of operations (could be 1,000,000s+) on tensors to create a
representation of the patterns in the input data.

These operations are often a wonderful dance between:

Addition
Subtraction
Multiplication (element-wise)
Division
Matrix multiplication

# create a tensor and add 5 to it


tensor = torch.tensor([3, 2, 4])
tensor + 5

tensor([8, 7, 9])

# multiply tensor by 10
tensor * 10

tensor([30, 20, 40])

# subtract 10 from it
tensor - 10

tensor([-7, -8, -6])

PyTorch also has a bunch of built-in functions like torch.mul() (short for multiplication) and torch.add() to perform basic
operations.

# try to multiply with built-in function


torch.mul(tensor, 10)

tensor([30, 20, 40])

# try to add with built-in function


torch.add(tensor, 5)

tensor([8, 7, 9])

tensor # remains the same because we didn't reassign it

tensor([3, 2, 4])

# Element-wise multiplication (each element multiplies its equivalent, index 0->0, 1->1, 2->2)
print(tensor,'*',tensor)
print(f"Equals:{tensor * tensor }")

tensor([3, 2, 4]) * tensor([3, 2, 4])


Equals:tensor([ 9, 4, 16])

Matrix multiplication
Two main ways of performing multiplication in neural networks and deep learning:

1. Element-wise multiplication
2. Matrix multiplication (dot product)

More information on multiplying matrices - https://www.mathsisfun.com/algebra/matrix-multiplying.html

There are two main rules that performing matrix multiplication needs to satisfy:

1. The inner dimensions must match:

(3, 2) @ (3, 2) won't work


(2, 3) @ (3, 2) will work
(3, 2) @ (2, 3) will work

2. The resulting matrix has the shape of the outer dimensions:

(2, 3) @ (3, 2) -> (2, 2)


(3, 2) @ (2, 3) -> (3, 3)
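
As a quick, illustrative check of these two rules (using hypothetical random tensors, not cells from the original notebook):

# Inner dimensions match, so both multiplications work; the results take the outer shapes
A = torch.rand(2, 3)
B = torch.rand(3, 2)
torch.matmul(A, B).shape, torch.matmul(B, A).shape

(torch.Size([2, 2]), torch.Size([3, 3]))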

# matrix multiplication
torch.matmul(tensor, tensor) # ----> 3*3 + 2*2 + 4*4

tensor(29)

One of the most common errors in deep learning: shape errors


# Shapes for matrix multiplication
tensor_A = torch.tensor([[1, 2],
[3, 4],
[5, 6]])

tensor_B = torch.tensor([[7, 10],
                         [8, 11],
                         [9, 12]])

torch.matmul(tensor_A, tensor_B) # this will produce an error

---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-43-650f76402467> in <cell line: 1>()
----> 1 torch.matmul(tensor_A, tensor_B) # this will produce an error

RuntimeError: mat1 and mat2 shapes cannot be multiplied (3x2 and 3x2)

tensor_B.T, tensor_B.T.shape

(tensor([[ 7, 8, 9],
[10, 11, 12]]),
torch.Size([2, 3]))

# The matrix multiplication operation works when tensor_B is transposed


print(f"Original shapes: tensor_A = {tensor_A.shape}, tensor_B = {tensor_B.shape}")
print(f"New shapes: tensor_A = {tensor_A.shape} (same shape as above), tensor_B.T = {tensor_B.T.shape}")
print(f"Multiplying: {tensor_A.shape} @ {tensor_B.T.shape} <- inner dimensions must match")
print("Output:\n")
output = torch.matmul(tensor_A, tensor_B.T)
print(output)
print(f"\nOutput shape: {output.shape}")

Original shapes: tensor_A = torch.Size([3, 2]), tensor_B = torch.Size([3, 2])


New shapes: tensor_A = torch.Size([3, 2]) (same shape as above), tensor_B.T = torch.Size([2, 3])
Multiplying: torch.Size([3, 2]) @ torch.Size([2, 3]) <- inner dimensions must match
Output:

tensor([[ 27, 30, 33],


[ 61, 68, 75],
[ 95, 106, 117]])

Output shape: torch.Size([3, 3])

Neural networks are full of matrix multiplications and dot products.

The torch.nn.Linear() module (we'll see this in action later on), also known as a feed-forward layer or fully connected layer,
implements a matrix multiplication between an input x and a weights matrix A .

y = x ⋅ Aᵀ + b

Where:

x is the input to the layer (deep learning is a stack of layers like torch.nn.Linear() and others on top of each other).
A is the weights matrix created by the layer, this starts out as random numbers that get adjusted as a neural network learns to
better represent patterns in the data (notice the " T ", that's because the weights matrix gets transposed).
Note: You might also often see W or another letter like X used to showcase the weights matrix.
b is the bias term used to slightly offset the weights and inputs.
y is the output (a manipulation of the input in the hopes to discover patterns in it).

This is a linear function (you may have seen something like y = mx + b in high school or elsewhere), and can be used to draw a straight
line!

Let's play around with a linear layer.

Try changing the values of in_features and out_features below and see what happens.
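
That cell isn't reproduced here; the following is a minimal sketch of what it might look like (the seed, the in_features/out_features values and the input tensor are illustrative assumptions):

# torch.nn.Linear implements y = x @ A.T + b
torch.manual_seed(42)
linear = torch.nn.Linear(in_features=2,   # in_features = matches the inner dimension of the input
                         out_features=6)  # out_features = describes the outer value of the output
x_input = torch.tensor([[1., 2.],
                        [3., 4.],
                        [5., 6.]])        # shape: (3, 2)
output = linear(x_input)
print(f"Input shape: {x_input.shape}")
print(f"Output:\n{output}\n\nOutput shape: {output.shape}")

Changing in_features away from the input's inner dimension would raise a shape error, just like the matrix multiplication rules above.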

Finding the min, max, mean, sum, etc (tensor aggregation)


# create tensor
x = torch.arange(0,100,10)
x

tensor([ 0, 10, 20, 30, 40, 50, 60, 70, 80, 90])

# find the max


print(torch.max(x)) # which is a function on pytorch
print(x.max()) # which is a method of the object(tensor)

tensor(90)
tensor(90)

# Find the mean - note: the torch.mean() function requires a tensor of float32
torch.mean(x.type(torch.float32)) , x.type(torch.float32).mean()

(tensor(45.), tensor(45.))

# find the sum


torch.sum(x), x.sum()

(tensor(450), tensor(450))
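
The heading also mentions the minimum; as a quick sketch in the same style:

# find the min
torch.min(x), x.min()

(tensor(0), tensor(0))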

Finding the positional min and max


# Find the position in the tensor that has the minimum value with argmin() -> returns the index position of the target tensor where the minimum value occurs
x.argmin()

tensor(0)

x[0]

tensor(0)
# Find the position in tensor that has the maximum value with argmax()
x.argmax()

tensor(9)

x[9]

tensor(90)

Reshaping, stacking, squeezing and unsqueezing tensors


Reshaping - reshapes an input tensor to a defined shape
View - returns a view of an input tensor of a certain shape but keeps the same memory as the original tensor
Stacking - combines multiple tensors on top of each other (vstack) or side by side (hstack)
Squeeze - removes all dimensions of size 1 from a tensor
Unsqueeze - adds a dimension of size 1 to a target tensor
Permute - returns a view of the input with dimensions permuted (swapped) in a certain way

x = torch.arange(1., 10.)
x, x.shape

(tensor([1., 2., 3., 4., 5., 6., 7., 8., 9.]), torch.Size([9]))

x_reshaped = x.reshape(1, 9)
x_reshaped, x_reshaped.shape

(tensor([[1., 2., 3., 4., 5., 6., 7., 8., 9.]]), torch.Size([1, 9]))
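
The list above also mentions view; as a brief sketch (not one of the original cells), view returns a reshaped tensor that shares the same memory as the original, so modifying the view would also modify x:

# Return a view of x with a new shape but the same underlying data
x_view = x.view(1, 9)
x_view, x_view.shape

(tensor([[1., 2., 3., 4., 5., 6., 7., 8., 9.]]), torch.Size([1, 9]))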

# Stack tensors on top of each other


x_stacked = torch.stack([x, x, x, x], dim=0)
x_stacked

tensor([[1., 2., 3., 4., 5., 6., 7., 8., 9.],


[1., 2., 3., 4., 5., 6., 7., 8., 9.],
[1., 2., 3., 4., 5., 6., 7., 8., 9.],
[1., 2., 3., 4., 5., 6., 7., 8., 9.]])
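
The list above also mentions vstack and hstack; a brief illustrative sketch reusing x:

# torch.vstack stacks tensors as rows; torch.hstack joins 1-D tensors end to end
torch.vstack([x, x]).shape, torch.hstack([x, x]).shape

(torch.Size([2, 9]), torch.Size([18]))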

# torch.squeeze() - removes all single dimensions from a target tensor


print(f"Previous tensor: {x_reshaped}")
print(f"Previous shape: {x_reshaped.shape}")

# Remove extra dimensions from x_reshaped


x_squeezed = x_reshaped.squeeze()
print(f"\nNew tensor: {x_squeezed}")
print(f"New shape: {x_squeezed.shape}")

Previous tensor: tensor([[1., 2., 3., 4., 5., 6., 7., 8., 9.]])
Previous shape: torch.Size([1, 9])

New tensor: tensor([1., 2., 3., 4., 5., 6., 7., 8., 9.])
New shape: torch.Size([9])

# torch.unsqueeze() - adds a single dimension to a target tensor at a specific dim (dimension)


print(f"Previous target: {x_squeezed}")
print(f"Previous shape: {x_squeezed.shape}")

# Add an extra dimension with unsqueeze


x_unsqueezed = x_squeezed.unsqueeze(dim=0)
print(f"\nNew tensor: {x_unsqueezed}")
print(f"New shape: {x_unsqueezed.shape}")

Previous target: tensor([1., 2., 3., 4., 5., 6., 7., 8., 9.])
Previous shape: torch.Size([9])

New tensor: tensor([[1., 2., 3., 4., 5., 6., 7., 8., 9.]])
New shape: torch.Size([1, 9])

# torch.permute - rearranges the dimensions of a target tensor in a specified order


x_original = torch.rand(size=(224, 224, 3)) # [height, width, colour_channels]

# Permute the original tensor to rearrange the axis (or dim) order
x_permuted = x_original.permute(2, 0, 1) # shifts axis 0->1, 1->2, 2->0

print(f"Previous shape: {x_original.shape}")


print(f"New shape: {x_permuted.shape}") # [colour_channels, height, width]

Previous shape: torch.Size([224, 224, 3])


New shape: torch.Size([3, 224, 224])
Indexing (selecting data from tensors)
Indexing with PyTorch is similar to indexing with NumPy.

# create a tensor
x = torch.arange(1, 10).reshape(1,3,3)
x, x.shape

(tensor([[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]),
torch.Size([1, 3, 3]))

x[0]

tensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])

x[0][0]

tensor([1, 2, 3])

x[0][1][1]

tensor(5)

# You can also use ":" to select "all" of a target dimension


x[:, 0]

tensor([[1, 2, 3]])

# Get all values of 0th and 1st dimensions but only index 1 of 2nd dimension
x[:, :, 1]

tensor([[2, 5, 8]])

# Get all values of the 0 dimension but only the 1 index value of 1st and 2nd dimension
x[:, 1, 1]

tensor([5])

# Get index 0 of 0th and 1st dimension and all values of 2nd dimension
x[0, 0, :]

tensor([1, 2, 3])

# Index on x to return 9
print(x[0][2][2])

# Index on x to return 3, 6, 9
print(x[:, :, 2])

tensor(9)
tensor([[3, 6, 9]])

PyTorch tensors & NumPy


NumPy is a popular scientific Python numerical computing library.

And because of this, PyTorch has functionality to interact with it.

Data in NumPy, want in PyTorch tensor -> torch.from_numpy(ndarray)


Data in PyTorch tensor, want in NumPy array -> torch.Tensor.numpy()

# numpy array to tensor


array = np.arange(1.0, 8.0)
tensor = torch.from_numpy(array) # warning: when converting from numpy -> pytorch, pytorch reflects numpy's default datatype of float64 unless specified otherwise
array, tensor

(array([1., 2., 3., 4., 5., 6., 7.]),


tensor([1., 2., 3., 4., 5., 6., 7.], dtype=torch.float64))

# Tensor to NumPy array


tensor = torch.ones(7)
numpy_tensor = tensor.numpy()
tensor, numpy_tensor
(tensor([1., 1., 1., 1., 1., 1., 1.]),
array([1., 1., 1., 1., 1., 1., 1.], dtype=float32))

Reproducibility (trying to take the random out of random)


In short how a neural network learns:

start with random numbers -> tensor operations -> update random numbers to try and make them better
representations of the data -> again -> again -> again...

To reduce the randomness in neural networks, PyTorch has the concept of a random seed.

Essentially what the random seed does is "flavour" the randomness.

import torch

# Create two random tensors


random_tensor_A = torch.rand(3, 4)
random_tensor_B = torch.rand(3, 4)

print(random_tensor_A)
print(random_tensor_B)
print(random_tensor_A == random_tensor_B)

tensor([[0.5264, 0.6714, 0.9580, 0.2556],


[0.8588, 0.2442, 0.5977, 0.9406],
[0.1327, 0.3090, 0.3018, 0.6222]])
tensor([[0.9350, 0.4936, 0.6467, 0.6773],
[0.1987, 0.3748, 0.7114, 0.2130],
[0.8556, 0.3266, 0.4071, 0.3194]])
tensor([[False, False, False, False],
[False, False, False, False],
[False, False, False, False]])

# Let's make some random but reproducible tensors


import torch

# Set the random seed


RANDOM_SEED = 42
torch.manual_seed(RANDOM_SEED)
random_tensor_C = torch.rand(3, 4)

torch.manual_seed(RANDOM_SEED)
random_tensor_D = torch.rand(3, 4)

print(random_tensor_C)
print(random_tensor_D)
print(random_tensor_C == random_tensor_D)

tensor([[0.8823, 0.9150, 0.3829, 0.9593],


[0.3904, 0.6009, 0.2566, 0.7936],
[0.9408, 0.1332, 0.9346, 0.5936]])
tensor([[0.8823, 0.9150, 0.3829, 0.9593],
[0.3904, 0.6009, 0.2566, 0.7936],
[0.9408, 0.1332, 0.9346, 0.5936]])
tensor([[True, True, True, True],
[True, True, True, True],
[True, True, True, True]])

Running tensors and PyTorch objects on the GPUs (and making faster
computations)
GPUs = faster computation on numbers, thanks to CUDA + NVIDIA hardware + PyTorch working behind the scenes to make everything
hunky dory (good).

PyTorch can run on both CPUs and GPUs. However, if you plan to work on large-scale projects or complex neural networks, you might
find CPU training slower compared to GPU-accelerated setups.

Check for GPU access with PyTorch


# Check for GPU access with PyTorch
torch.cuda.is_available()

True

# Setup device agnostic code


device = "cuda" if torch.cuda.is_available() else "cpu"
device

'cuda'

Putting tensors (and models) on the GPU


The reason we want our tensors/models on the GPU is because using a GPU results in faster computations.

# Create a tensor (default on the CPU)


tensor = torch.tensor([1, 2, 3])

# Tensor not on GPU


print(tensor, tensor.device)

tensor([1, 2, 3]) cpu

# Move tensor to GPU (if available)


tensor_on_gpu = tensor.to(device)
tensor_on_gpu

tensor([1, 2, 3], device='cuda:0')

Moving tensors back to the CPU


# If tensor is on GPU, can't transform it to NumPy
tensor_on_gpu.numpy()

---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-77-b7da913938a5> in <cell line: 2>()
1 # If tensor is on GPU, can't transform it to NumPy
----> 2 tensor_on_gpu.numpy()

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory
first.

# To fix the GPU tensor with NumPy issue, we can first set it to the CPU
tensor_back_on_cpu = tensor_on_gpu.cpu().numpy()
tensor_back_on_cpu

array([1, 2, 3])