TensorFlow
Neural Network and Deep Learning
Samatrix Consulting Pvt Ltd
• In this course, we will focus on describing deep learning models.
• By the end of this course, you will not only understand how deep
neural networks work, but also develop the skill set required to build
these models from scratch.
• We will start our study of deep learning models by implementing some
of the basic concepts using the TensorFlow software library.
What is TensorFlow
TensorFlow is an open-source software library.
It was released by Google in 2015.
It helps developers design, build, and train deep learning models.
Initially, TensorFlow was used by Google developers to build models in-house.
TensorFlow is a Python library.
It lets developers express any computation as a dataflow graph.
In the graph, nodes represent mathematical operations, and edges represent the
data that is communicated from one node to another.
What are Tensors
• We represent data in TensorFlow as tensors, which are multidimensional
NumPy-like arrays.
• All machine learning and deep learning systems use tensors as their
basic data structure.
• They are so fundamental to learning systems that Google’s TensorFlow
was named after them.
• Tensors are containers of data (in most cases, numerical data).
• So we can also call them containers of numbers.
• In previous courses, we have learned about scalars (0D tensors),
vectors (1D tensors), and matrices (2D tensors).
0D Tensors - Scalars
• A scalar represents a single number.
• It is also called a scalar tensor, 0-dimensional tensor, or 0D tensor.
• A scalar tensor has 0 axes.
1D Tensors - Vectors
• A vector is an array of numbers. We call it a 1D tensor. It has only one
axis.
• An example is the vector [1, 2, 3].
• This vector has 3 entries, so we can call it a 3-dimensional vector.
• It should not be confused with a 3D tensor.
• A 3D vector has one axis and 3 dimensions along that axis, whereas a 3D
tensor has three axes.
2D Tensors - Matrix
• A matrix is an array of vectors.
• We call it a 2D tensor. It has 2 axes (rows and columns).
• An example is the matrix
  [[1, 2, 3],
   [4, 5, 6],
   [7, 8, 9]].
• It is a rectangular grid of numbers.
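These three ranks can be checked directly in NumPy (a minimal sketch, assuming NumPy is installed; `ndim` reports the number of axes):

```python
import numpy as np

# 0D tensor (scalar): a single number, 0 axes
scalar = np.array(7)
print(scalar.ndim)   # 0

# 1D tensor (vector): one axis with 3 dimensions along it
vector = np.array([1, 2, 3])
print(vector.ndim)   # 1

# 2D tensor (matrix): two axes (rows and columns)
matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])
print(matrix.ndim)   # 2
```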
Tensors – Real World Example
Vectors Data
• Vector data is the most common kind of dataset.
• We can encode each single data point as a vector and encode a batch of
data as a 2D tensor.
• An example is a dataset of students in a university, where we can
characterize each student by age, year of admission, and zip code of
home address.
• Hence, each student can be represented by a vector of 3 values, and the
entire dataset of 5000 students can be stored in a 2D tensor of shape
(5000, 3).
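A sketch of this layout in NumPy (the student values below are hypothetical, purely for illustration):

```python
import numpy as np

# Each student: (age, year of admission, zip code) -- a 3-value vector.
students = np.zeros((5000, 3))
students[0] = [22, 2021, 110001]   # first student's vector (hypothetical values)

print(students.shape)  # (5000, 3) -- 5000 samples, 3 features
print(students.ndim)   # 2 -- the whole dataset is a 2D tensor
```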
Timeseries Data
• We use 3D tensors for storing timeseries data or sequence data with an
explicit time axis.
• In this case, we encode each sample as a sequence of vectors (a 2D
tensor), so a batch of samples forms a 3D tensor.
• By convention, the time axis is the second axis.
• An example is a dataset of stock prices.
• Every minute, we store the current price of the stock, the highest price
in the past minute, and the lowest price in the past minute.
• Each trading day has 390 minutes of trading, making one day a 2D tensor of
shape (390, 3). 250 days of data make a 3D tensor of shape (250, 390, 3).
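The stock-price layout above can be sketched as follows (zeros stand in for real price data):

```python
import numpy as np

# 250 trading days x 390 minutes x 3 features (current, high, low)
prices = np.zeros((250, 390, 3))
print(prices.ndim)    # 3
print(prices.shape)   # (250, 390, 3)

# Slicing out one trading day gives the 2D tensor of shape (390, 3)
day = prices[0]
print(day.shape)      # (390, 3)
```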
Image Data
• Generally, images have 3 dimensions: height, width, and color depth.
• By convention, a single image tensor is 3D, so a batch of images is a 4D
tensor.
• A batch of 128 grayscale images of size 256 x 256 can be stored in a
tensor of shape (128, 256, 256, 1), whereas a batch of 128 color
images can be stored in a tensor of shape (128, 256, 256, 3).
• The TensorFlow framework places the color-depth axis at the end
(channels-last), i.e., (samples, height, width, color_depth).
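The two image-batch shapes can be sketched with placeholder arrays (zeros stand in for pixel data):

```python
import numpy as np

# 128 grayscale images, 256 x 256, 1 color channel (channels-last)
gray_batch = np.zeros((128, 256, 256, 1))

# 128 color images, 256 x 256, 3 color channels
color_batch = np.zeros((128, 256, 256, 3))

print(gray_batch.ndim)     # 4 -- a batch of images is a 4D tensor
print(color_batch.shape)   # (128, 256, 256, 3)
```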
Video Data
• For video data, you need 5D tensors.
• A video is basically a sequence of frames, where each frame is a color
image.
• Each frame can be stored in a 3D tensor (height, width, color_depth).
• A sequence of frames can be stored in a 4D tensor (frames, height,
width, color_depth).
• A batch of different videos can be stored in a 5D tensor of shape
(samples, frames, height, width, color_depth).
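A sketch of the 5D layout, using hypothetical sizes (4 videos of 60 frames, 144 x 256 color images):

```python
import numpy as np

# (samples, frames, height, width, color_depth)
videos = np.zeros((4, 60, 144, 256, 3))

print(videos.ndim)         # 5 -- a batch of videos
print(videos[0].shape)     # (60, 144, 256, 3) -- one video
print(videos[0, 0].shape)  # (144, 256, 3) -- one frame
```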
Installing TensorFlow
Installing TensorFlow
For a clean Python installation, you can install TensorFlow with a simple pip command:
pip install tensorflow
Windows users can install the GPU-enabled version (assuming you have CUDA 8) with:
pip install tensorflow-gpu
We can verify the installed version:
import tensorflow.compat.v1 as tf
print(tf.__version__)
2.5.0
TensorFlow 1.x
Most learners will have installed TensorFlow 2.x.
There have been significant changes between version 1 and version 2.
In this section, we discuss TensorFlow 1.x, so we will use the tensorflow.compat.v1
module and disable the eager execution of version 2 using tf.disable_eager_execution().
In [2]: tf.disable_eager_execution()
TensorFlow 1.x
To start with, we will write a simple program that combines the words “Hello” and
“World”.
Even though it is a simple program, it introduces many core elements of TensorFlow
and highlights the differences between a regular Python program and a TensorFlow
program.
In [3]: he = tf.constant("Hello")
In [4]: wo = tf.constant(" World")
In [5]: hewo = he + wo
In [6]: print(hewo)
Tensor("add:0", shape=(), dtype=string)
TensorFlow 1.x
The output of hewo is not 'Hello World' but some other details that we did not
expect.
We can compare this operation with a standard Python operation, where we
initialize two variables and then print the result.
In [7]: phe = "Hello"
In [8]: pwo = " World!"
In [9]: phewo = phe + pwo
In [10]: print(phewo)
Hello World!
To get the actual TensorFlow result, we must evaluate hewo inside a session:
In [11]: with tf.Session() as sess:
    ...:     ans = sess.run(hewo)
In [12]: print(ans)
b'Hello World'
Computation Graph
• A set of interconnected entities is known as a computation graph.
• It has nodes (or vertices), and edges connect the nodes.
• In a dataflow graph, data flows from one node to another in a directed
manner through the edges.
• Each node of the graph represents an operation, which is applied to its
input and generates an output.
• The output of one node is passed to the next node as input.
• Both simple arithmetic operations, such as subtraction and multiplication,
and complex functions can be represented by a graph.
Benefits of Computation Graph
• The computations of TensorFlow are
optimized using the graph’s connectivity
and node dependencies.
• If the input of a node d is affected by
the output of another node a, we say
that node d depends on node a.
• When the output of a feeds node d
directly, this is known as a direct
dependency.
• Otherwise, the dependency is known
as an indirect dependency.
Benefits of Computation Graph
• In the example, node e is directly
dependent on node c. Node e is
indirectly dependent on node a
whereas the node e is independent of
node d.
• From the graph we can identify the full
set of dependencies for each node.
• By locating dependencies between the
units, we can distribute the
computations across available
resources and avoid redundant
computations.
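This dependency-driven evaluation can be sketched in plain Python (a toy stand-in, not TensorFlow's actual machinery), using the same constants 5, 2, 3 and operations that appear in the graph example in this course:

```python
# Toy dataflow graph: each node maps to (operation, list of input nodes).
graph = {
    "a": (lambda: 5, []),
    "b": (lambda: 2, []),
    "c": (lambda: 3, []),
    "d": (lambda x, y: x * y, ["a", "b"]),   # d = a * b
    "e": (lambda x, y: x + y, ["c", "b"]),   # e = c + b
    "f": (lambda x, y: x - y, ["d", "e"]),   # f = d - e
}

def evaluate(node, graph, cache=None):
    """Evaluate `node` by recursively evaluating only its dependencies."""
    if cache is None:
        cache = {}
    if node not in cache:                    # avoid redundant computation
        op, deps = graph[node]
        cache[node] = op(*(evaluate(d, graph, cache) for d in deps))
    return cache[node]

print(evaluate("f", graph))  # 5: pulls in a, b, c, d, e
print(evaluate("e", graph))  # 5: touches only b and c, never a or d
```

Evaluating "e" never computes "d", illustrating how locating dependencies lets a runtime skip unneeded work and distribute independent nodes across resources.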
Graph Sessions and Fetches
• A TensorFlow program requires two main steps:
• (1) construct a graph
• (2) execute the graph.
• Let’s try to understand the operations using some examples.
Construct a Graph
The first step for any TensorFlow application is to import TensorFlow.
After the import command, an empty default graph is created, and all new nodes are
automatically associated with it.
We can create six nodes by using tf.<operator> methods and assign them to variables.
The first three nodes output constant values: we assign the values 5, 2, and 3 to a, b,
and c respectively.
In [13]: a = tf.constant(5)
In [14]: b = tf.constant(2)
In [15]: c = tf.constant(3)
Construct a Graph
The inputs of the next three nodes are two of the existing variables each. These nodes
perform simple arithmetic operations on their inputs.
In [16]: d = tf.multiply(a, b)
In [17]: e = tf.add(c, b)
In [18]: f = tf.subtract(d, e)
The node d multiplies the output of a and b. The node e adds the output of nodes b and c.
Node f subtracts the output of node e from the output of node d.
Please note that in this graph, we could also have used the operators +, -, and *
instead of tf.add, tf.subtract, and tf.multiply.
Sessions
Once the computation graph is ready, we can run the computations. We can create
and run a session as follows (f = d - e = 10 - 5 = 5):
In [19]: sess = tf.Session()
In [20]: print(sess.run(f))
5
In [21]: sess.close()
Constructing and Managing Graph
As soon as TensorFlow is imported, a default graph is constructed. We can
create additional graphs using the tf.Graph() command.
In [23]: print(tf.get_default_graph())
<tensorflow.python.framework.ops.Graph object at 0x7fd98ac671c0>
In [24]: g = tf.Graph()
In [25]: print(g)
<tensorflow.python.framework.ops.Graph object at 0x7fd98af43340>
Constructing and Managing Graph
In [26]: z = tf.constant(8)
In [27]: print(z.graph is g)
False
From the example given above, we can see how computational graphs work: z was
created in the default graph rather than in g, so z.graph is g evaluates to False.
However, we do not need such low-level programs while working on machine learning
or deep learning problems.
This exercise nevertheless provides good insight into TensorFlow.
Fetches
In the previous example, we executed the sess.run() command for one specific node
(node f) by passing the variable for node f as an argument.
This argument is known as fetches. The sess.run() method evaluates the tensors in the
fetches parameter.
Our example has node f in fetches.
The sess.run() method will execute every tensor and every operation in the graph that
leads to f. We can also use multiple nodes as fetches:
In [28]: sess = tf.Session()
In [29]: fetches = [a, b, c, d, e, f]
In [30]: outs = sess.run(fetches)
In [31]: print(outs)
[5, 2, 3, 10, 5, 5]
In [32]: sess.close()
Constants, Variables, and Placeholders
In [34]: a = tf.constant(3)
In [35]: b = tf.Variable(6)
Please note the difference between how we have defined the constant and the variable.
The tf.constant() function is spelled entirely in lowercase, while the tf.Variable()
function is spelled with a capital ‘V’.
Constants, Variables, and Placeholders
Now, we find the sum of the two values. We can do so by using either of the following
commands. We will get the same result.
In [36]: sum = tf.add(a, b)
In [37]: sum = a + b
Now, let’s assign a new value to variable b. We can use the tf.assign() function to do so.
If we try this function on the constant a, we will get an error. Let’s save this operation
under a new variable assign_val.
In [38]: assign_val = tf.assign(b, 4)
Before running anything, the variables must be initialized in a session:
In [39]: init_ops = tf.global_variables_initializer()
In [40]: sess = tf.Session()
In [41]: sess.run(init_ops)
Constants, Variables, and Placeholders
Now that the variables have been initialized, we can find the result of adding a and b.
In [42]: print(sess.run(sum))
9
In [43]: sess.run(assign_val)
Out[43]: 4
The value of b has changed to 4, so the sum now evaluates to 7.
In [44]: print(sess.run(sum))
7
Placeholders
We can use placeholders to train huge models with massive amounts of data. By using
placeholders, we can access the data in smaller chunks instead of loading the massive
dataset all at once (which may crash the computer).
We need not provide any initial value in the case of placeholders, as we did in the case
of constants and variables. But we need to specify the type of value that we want to
store in it. We can initialize our first placeholder as follows:
In [45]: ph = tf.placeholder(tf.float32)
With this command, we have specified that our placeholder will hold values of type
float32. At run time, it will accept a floating-point number into the placeholder.
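The idea of feeding data in smaller chunks, mentioned above, can be sketched with a simple batching helper (plain NumPy, independent of the placeholder API; the data and batch size are illustrative):

```python
import numpy as np

def batches(data, batch_size):
    """Yield successive chunks of `data` -- illustrative helper only."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

data = np.arange(10, dtype=np.float32)   # a tiny stand-in dataset
sizes = [len(chunk) for chunk in batches(data, 4)]
print(sizes)  # [4, 4, 2] -- the last chunk is smaller
```

Each chunk produced this way is what would be fed into a placeholder at run time, one batch per sess.run() call.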
Placeholders
Now let’s define a new equation using two placeholders.
If we try to execute this equation before assigning values, we will not get any result,
because the placeholders do not hold any values yet. We cannot assign the values using
tf.assign(), as we did in the case of variables. In the case of placeholders, we pass a
dictionary (feed_dict) to sess.run().
In [50]: x = tf.placeholder(tf.float32)
In [51]: y = tf.placeholder(tf.float32)
In [52]: sum = x + y
In [53]: sess = tf.Session()
In [54]: print(sess.run(sum, feed_dict={x: 5.0, y: 3.0}))  # illustrative values
8.0
TensorFlow 1.0 followed lazy execution, where we had to create the computational
graph before executing any line of code in a session.
TensorFlow 2.0, by contrast, uses eager execution: there is no need to create
computational graphs and run separate sessions for each command.
Introduction to Keras
TensorFlow 2.0 implements the Keras API for model building.
Keras supports eager execution and several other functionalities of TensorFlow.
Keras is an independent package that can be downloaded separately and used on
its own.
Keras has also been added to the TensorFlow framework as tf.keras.
Removal of Global Variables
• In TensorFlow 1.0, we had to initialize the variables using
tf.global_variables_initializer() before they could be used in a session.
• In TensorFlow 2.0, the global namespaces and mechanisms for tracking variables
have been removed.
• We need not initialize the variables in TensorFlow 2.0.
• We can use them directly.
Enhanced Deployment Capabilities
TensorFlow 2.0 provides enhanced capabilities to develop and train models across
several platforms and languages.
We can integrate a fully trained and saved model directly into an application. We
can also deploy using some important libraries, which include TensorFlow Serving,
TensorFlow Lite, and TensorFlow.js.
TensorFlow 1.0
In [1]: import tensorflow.compat.v1 as tf
In [2]: tf.disable_eager_execution()
In [3]: text = tf.constant("TensorFlow 1.0 Program")
In [4]: sess = tf.Session()
In [5]: print(sess.run(text))
b'TensorFlow 1.0 Program'
In [6]: sess.close()
TensorFlow 2.0
In [1]: import tensorflow as tf
In [2]: text = tf.constant("TensorFlow 2.0 Program")
In [3]: tf.print(text)
TensorFlow 2.0 Program
TensorFlow 1.0
In [1]: import tensorflow.compat.v1 as tf
In [2]: tf.disable_eager_execution()
In [3]: a = tf.constant("Hello, how are you?")
In [4]: b = tf.Variable(300)
In [5]: init_op = tf.global_variables_initializer()
In [6]: sess = tf.Session()
In [7]: print(sess.run(init_op))
None
In [8]: print(sess.run(a))
b'Hello, how are you?'
In [9]: print(sess.run(b))
300
TensorFlow 2.0
In TensorFlow 2.0, lazy execution was replaced by eager execution. Hence, we can
execute the code directly without building a computational graph and running
each node in a session.
In [1]: import tensorflow as tf
In [2]: a = tf.constant("Hello, how are you?")
In [3]: b = tf.Variable(300)
In [4]: tf.print(a)
Hello, how are you?
TensorFlow 2.0
In [5]: tf.print(b)
300
In [6]: c = 3 + 4
In [7]: tf.print(c)
7
The code for TensorFlow 2.0 is shorter than the TensorFlow 1.0 code.
Removal of tf.global_variables_initializer()
• In the previous TensorFlow 2.0 code, you may have noticed that
we did not use tf.global_variables_initializer() at all.
• In TensorFlow 2.0, there is no need to initialize the variables.
• We can start using the variables in our program immediately
after defining them.
TensorFlow 1.0
In [1]: import tensorflow.compat.v1 as tf
In [2]: tf.disable_eager_execution()
In [3]: var = tf.Variable(30)
In [4]: init_op = tf.global_variables_initializer()
In [5]: sess = tf.Session()
In [6]: sess.run(init_op)
In [7]: print(sess.run(var))
30
TensorFlow 2.0
In [1]: import tensorflow as tf
In [2]: var = tf.Variable(30)
In [3]: tf.print(var)
30
TensorFlow 1.0
In [1]: import tensorflow.compat.v1 as tf
In [2]: tf.disable_eager_execution()
In [3]: a = tf.constant(5.0)
In [4]: b = tf.placeholder(tf.float32)
In [5]: c = a * b
In [6]: sess = tf.Session()
In [7]: print(sess.run(c, feed_dict={b: 2.0}))  # illustrative feed value
10.0
In [8]: sess.close()
TensorFlow 2.0
In [1]: import tensorflow as tf
In [2]: a = tf.constant(6)
In [3]: b = tf.Variable(2)
In [4]: c = a * b
In [5]: tf.print(c)
12
We may notice that the program has become much simpler: it reads just like a
regular Python program.
Thanks
Samatrix Consulting Pvt Ltd