TensorFlow

The document provides an overview of neural networks and deep learning using TensorFlow. It discusses:
• TensorFlow is an open-source library for building and training deep learning models. It allows expressing computations as graphs with nodes representing operations.
• Data in TensorFlow is represented as multidimensional arrays called tensors. Examples include scalars, vectors, matrices, and higher-dimensional tensors for images, video, and more.
• A computation graph defines the operations and dependencies between nodes. A session runs the graph to perform the computations.
• The document gives examples of adding simple constants in a graph, running it in a session, and outputting the result. This demonstrates the basic process of building and executing graphs in TensorFlow.

Neural Network and Deep Learning
Samatrix Consulting Pvt Ltd

TensorFlow
Neural Network and Deep Learning
• In this course, we will focus on describing deep learning models.
• By the end of this course, you will not only understand how deep neural networks work, but also develop the skill set required to build these models from scratch.
• We will start our study of deep learning models by implementing some of the basic concepts using a software library, TensorFlow.
What is TensorFlow
TensorFlow is an open-source software library.
It was released by Google in 2015.
It helps developers design, build, and train deep learning models.
Initially, TensorFlow was used by Google developers to build models in-house.
TensorFlow provides a Python API.
It lets developers express any computation as a dataflow graph.
In the graph, we use nodes to represent the mathematical operations and edges to represent the data that is communicated from one node to another.
What are Tensors
• We represent data in TensorFlow as tensors, which are multidimensional arrays (similar to NumPy arrays).
• All machine learning and deep learning systems use tensors as their basic data structure.
• They are so fundamental to these learning systems that Google’s TensorFlow was named after them.
• Tensors are containers of data (in most cases, numerical data).
• So, we can also call them containers of numbers.
• In the previous courses, we learned about scalars (0D tensors), vectors (1D tensors), and matrices (2D tensors).
0D Tensors - Scalars
• A scalar represents a single number.
• It is also called a scalar tensor, 0-dimensional tensor, or 0D tensor.
• A scalar tensor has 0 axes.
1D Tensors - Vectors
• A vector is an array of numbers. We call it a 1D tensor. It has only one axis.
• An example is [1, 2, 3].
• This vector has 3 entries, so we can call it a 3-dimensional vector.
• It should not be confused with a 3D tensor.
• A 3D vector has one axis and 3 dimensions along that axis, whereas a 3D tensor has three axes.
2D Tensors - Matrix
• A matrix is an array of vectors.
• We call it a 2D tensor. It has 2 axes (rows and columns).
• An example is [[1, 2, 3], [4, 5, 6], [7, 8, 9]].
• It is a rectangular grid of numbers.
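These three ranks can be checked with NumPy, whose ndarrays are the usual concrete form of tensors (a quick sketch; assumes NumPy is installed):

```python
import numpy as np

s = np.array(5)                    # 0D tensor (scalar): 0 axes
v = np.array([1, 2, 3])            # 1D tensor (vector): 1 axis, 3 dimensions along it
m = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])          # 2D tensor (matrix): 2 axes (rows and columns)

print(s.ndim, v.ndim, m.ndim)      # 0 1 2
print(v.shape, m.shape)            # (3,) (3, 3)
```

The ndim attribute counts the axes, which is exactly the "0D/1D/2D" rank used above.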
Tensors – Real World Example
Vectors Data
• Vector data is the most common kind of dataset.
• We can encode a single data point as a vector and a batch of data points as a 2D tensor.
• An example is a dataset of students in a university, where we characterize each student by age, year of admission, and home-address zip code.
• Hence, each student can be represented by a vector of 3 values, and the entire dataset of 5000 students can be stored in a 2D tensor of shape (5000, 3).
Timeseries Data
• We use 3D tensors for storing timeseries data or sequence data with an explicit time axis.
• In this case, we encode each sample as a sequence of vectors (a 2D tensor), so a batch of samples forms a 3D tensor.
• By convention, the time axis is the second axis.
• An example is a dataset of stock prices.
• Every minute, the current price of the stock, the highest price in the past minute, and the lowest price in the past minute are stored.
• Each trading day has 390 minutes of trading, making one day a 2D tensor of shape (390, 3). 250 days of data make a 3D tensor of shape (250, 390, 3).
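The stock-price layout can be sketched with NumPy, using zeros as stand-ins for real prices (illustration only):

```python
import numpy as np

# one trading day: 390 minutes x 3 features (current, high, low) -> 2D tensor
day = np.zeros((390, 3))

# 250 trading days stacked along a new samples axis -> 3D tensor
year = np.zeros((250, 390, 3))

print(day.shape)   # (390, 3)
print(year.shape)  # (250, 390, 3)
```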
Image Data
• Generally, images have 3 dimensions: width, height, and color depth.
• By convention, image tensors are always 3D, so a batch of images forms a 4D tensor.
• A batch of 128 grayscale images of size 256 x 256 could be stored in a tensor of shape (128, 256, 256, 1), whereas a batch of 128 color images could be stored in a tensor of shape (128, 256, 256, 3).
• The TensorFlow framework places the color-depth axis at the end, i.e., (samples, width, height, color_depth).
Video Data
• For video data, you need 5D tensors.
• A video is basically a sequence of frames, where each frame is a color image.
• Each frame can be stored in a 3D tensor (width, height, color_depth).
• A sequence of frames can be stored in a 4D tensor (frames, width, height, color_depth).
• A batch of different videos can be stored in a 5D tensor of shape (samples, frames, width, height, color_depth).
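The image and video shapes above can be verified the same way (a NumPy sketch; the batch size of 4 clips and the 144 x 256 frame size are made-up values):

```python
import numpy as np

gray_batch  = np.zeros((128, 256, 256, 1))     # 128 grayscale 256x256 images
color_batch = np.zeros((128, 256, 256, 3))     # 128 color 256x256 images
video_batch = np.zeros((4, 240, 144, 256, 3))  # 4 clips of 240 frames, 144x256, RGB

print(gray_batch.ndim, color_batch.ndim, video_batch.ndim)  # 4 4 5
```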
Installing TensorFlow
Installing TensorFlow
For a clean Python installation, you can install TensorFlow with a simple pip command.

$ pip install tensorflow

For Windows users, the CPU version:

pip install tensorflow

For Windows users, the GPU-enabled version (assuming you have CUDA 8):

pip install tensorflow-gpu


Installing TensorFlow
After successful installation, we can check the installation and the installed version using the following code.

import tensorflow.compat.v1 as tf

print(tf.__version__)
2.5.0
TensorFlow 1.x
Most learners will have installed TensorFlow 2.x.
There have been significant changes between version 1 and version 2.
In this section, we discuss TensorFlow 1.x, so we will use the tensorflow.compat.v1 module and disable the eager execution of version 2 using tf.disable_eager_execution().

In [1]: import tensorflow.compat.v1 as tf

In [2]: tf.disable_eager_execution()
TensorFlow 1.x
To start with, we will write a simple program that combines the strings "Hello" and " World".
Even though it is a simple program, it introduces many core elements of TensorFlow and highlights the differences between a regular Python program and a TensorFlow program.
In [3]: he = tf.constant("Hello")
In [4]: wo = tf.constant(" World")
In [5]: hewo = he + wo
In [6]: print(hewo)
Tensor("add:0", shape=(), dtype=string)
TensorFlow 1.x
The output of hewo is not 'Hello World' but some other details that we did not expect.
We can compare this operation with a standard Python operation, where we initialize two variables and then print the result.
In [7]: phe = "Hello"
In [8]: pwo = " World!"
In [9]: phewo = phe + pwo
In [10]: print(phewo)
Hello World!

In this case we easily get the output.


TensorFlow 1.x
• This difference is due to the computation graph model of TensorFlow 1.x, which we will discuss in the next section.
• A computational graph in TensorFlow 1.x first defines what computations are required and then executes them through an external mechanism.
• Hence, the operation hewo = he + wo did not compute the sum of he and wo.
• Rather, it added a summation operation to a graph of computations, to be executed later.
TensorFlow 1.x
The Session object helps run the parts of the computation graph that we have already defined.
The statement ans = sess.run(hewo) actually computes hewo as the concatenation of he and wo, and finally produces the 'Hello World' message.

In [11]: with tf.Session() as sess:
    ...:     ans = sess.run(hewo)
    ...:

In [12]: print(ans)
b'Hello World'
Computation Graph
• A set of interconnected entities is known as a computation graph.
• It has nodes (or vertices), connected by edges.
• In a dataflow graph, the data flows from one node to another in a directed manner through the edges.
• Each node of the graph represents an operation that is applied to an input and generates an output.
• The output of one node is passed to the next node as input.
• Simple arithmetic functions, such as subtraction and multiplication, as well as complex functions, can be represented by a graph.
Benefits of Computation Graph
• TensorFlow optimizes its computations using the graph’s connectivity and node dependencies.
• If the input of a node d is fed by the output of another node a, we say that node d is dependent on node a.
• When the connection is immediate, this is known as a direct dependency.
• When the influence passes through intermediate nodes, the dependency is known as an indirect dependency.
Benefits of Computation Graph
• In the example, node e is directly dependent on node c. Node e is indirectly dependent on node a, whereas node e is independent of node d.
• From the graph, we can identify the full set of dependencies for each node.
• By locating the dependencies between the units, we can distribute the computations across the available resources and avoid redundant computations.
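The dependency idea can be sketched without TensorFlow at all. The Node class below is a hypothetical, minimal deferred-computation graph (not a TensorFlow API): running node e evaluates only the nodes it depends on, so node d is never computed.

```python
class Node:
    """A minimal deferred-computation node (illustration only, not TensorFlow API)."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self, trace):
        trace.append(self)                        # record which nodes were computed
        vals = [i.run(trace) for i in self.inputs]
        return self.op(*vals)

a = Node(lambda: 5)
b = Node(lambda: 2)
c = Node(lambda x, y: x * y, a, b)   # c depends directly on a and b
d = Node(lambda x: x + 1, b)         # d depends only on b
e = Node(lambda x: x * 2, c)         # e depends directly on c, indirectly on a and b

trace = []
print(e.run(trace))   # 20
print(d in trace)     # False -- d was never evaluated
```

This is the behavior the text describes: running one node pulls in exactly its direct and indirect dependencies, which is what lets a runtime skip redundant work.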
Graph Sessions and Fetches
• Working with TensorFlow 1.x requires two main steps:
• (1) construct a graph
• (2) execute the graph
• Let’s try to understand these operations using some examples.
Construct a Graph
The first step of any TensorFlow application is to import TensorFlow.
After the import command, an empty default graph is created, and all new nodes are automatically associated with that graph.
We can create six nodes using tf.<operator> methods and assign them to variables.
The first three nodes each output a constant value; we assign the values 5, 2, and 3 to a, b, and c respectively.
In [13]: a = tf.constant(5)

In [14]: b = tf.constant(2)

In [15]: c = tf.constant(3)
Construct a Graph
Each of the next three nodes takes two existing variables as input and performs a simple arithmetic operation on them.

In [16]: d = tf.multiply(a, b)

In [17]: e = tf.add(c, b)

In [18]: f = tf.subtract(d, e)

The node d multiplies the output of a and b. The node e adds the output of nodes b and c.
Node f subtracts the output of node e from the output of node d.
Please note that in this graph, we could also have used the operators +, -, and * instead of tf.add, tf.subtract, and tf.multiply.
Sessions
Once the computation graph is ready, we can run the computations. We can create
and run a session by using the following code

In [19]: sess = tf.Session()

In [20]: res = sess.run(f)

In [21]: sess.close()

In [22]: print("Result = {}".format(res))


Result = 5
Sessions
In this case, we launched the graph in a tf.Session. The Session object allocates memory for the objects and stores the variables.
Commands are executed with the run() method of the Session object. When the run() method is called, the computation starts with the requested output and works backwards.
It computes the nodes based on their dependencies. In the commands above, node f is computed and its value is stored in res, which is printed afterwards.
Finally, we close the session, as good practice:

sess.close()
Constructing and Managing Graph
As soon as TensorFlow is imported, a default graph is constructed. We can create additional graphs using the tf.Graph() command.

In [23]: print(tf.get_default_graph())
<tensorflow.python.framework.ops.Graph object at 0x7fd98ac671c0>

In [24]: g = tf.Graph()

In [25]: print(g)
<tensorflow.python.framework.ops.Graph object at 0x7fd98af43340>
Constructing and Managing Graph
In [26]: z = tf.constant(8)

In [27]: print(z.graph is g)
False

In [28]: print(z.graph is tf.get_default_graph())


True

From the example given above, we can see how computational graphs work.
However, we rarely need such programs while working on machine learning or deep learning problems.
This exercise nevertheless provides good insight into TensorFlow.
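As a short check of graph membership, nodes created inside a graph's as_default() block belong to that graph rather than to the default graph (same tensorflow.compat.v1 setup as the rest of this section):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():          # nodes created in this block are attached to g
    z = tf.constant(8)

print(z.graph is g)                       # True
print(z.graph is tf.get_default_graph())  # False (we are outside the with block)
```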
Fetches
In the previous example, we executed the sess.run() command for one specific node (node f) by passing the variable for node f as an argument.
This argument is known as fetches. The sess.run() method evaluates the tensors in the fetches parameter.
Our example had node f in fetches.
The sess.run() method will execute every tensor and every operation in the graph that leads to f. We can also use multiple nodes as fetches.

In [29]: sess = tf.Session()

In [30]: fetches = [a, b, c, d, e, f]


Fetches
In [31]: res = sess.run(fetches)

In [32]: sess.close()

In [33]: print("Result ={}".format(res))


Result =[5, 2, 3, 10, 5, 5]
Constants, Variables, and Placeholders
• We have used constants, variables, and placeholders in several other programming languages. In TensorFlow they play an important role. We can define them as follows:
• Constants: values that never change
• Variables: values that can change throughout the program
• Placeholders: empty variables that are assigned values at a later stage in the program
• The differences between constants, variables, and placeholders:
• Constants: have an initial value; the value never changes; no need to specify the type of values. Example: a = tf.constant(8)
• Variables: have an initial value; the value can change; no need to specify the type of values. Example: b = tf.Variable(6)
• Placeholders: do not have an initial value; the value can change; the type of value needs to be specified. Example: c = tf.placeholder(tf.float32)
Constants, Variables, and Placeholders
Let’s understand the features of constants and variables with the help of an example:

First we define one constant and one variable.

In [34]: a = tf.constant(3)

In [35]: b = tf.Variable(6)

Please note the difference between how we have defined the constant and the variable: the tf.constant() function is spelled entirely in lowercase, while the tf.Variable() function is spelled with a capital ‘V’.
Constants, Variables, and Placeholders
Now, we find the sum of the two values. We can do so by using either of the following commands.
We will get the same result.

In [36]: sum = tf.add(a,b)

In [37]: sum = a + b

Now, let’s assign a new value to variable b. We can use the tf.assign() function to do so. If we try this function on the constant a, we will get an error. Let’s save this operation under a new variable, assign_val.

In [38]: assign_val = tf.assign(b,4)


Constants, Variables, and Placeholders
Please note that the operation assign_val cannot be executed on its own; we need a session to run it.
Moreover, while working with variables, we need to initialize them before using them in sessions. If we try to use variables without initializing them, we will get an error. We can initialize the variables as follows:

In [39]: init_ops = tf.global_variables_initializer()


Now that we have declared the constants and variables and initialized the variables, it’s time to run this within a session. Let’s create the session.

In [40]: sess = tf.Session()

In [41]: sess.run(init_ops)
Constants, Variables, and Placeholders
Now the variables have been initialized. We can find the result of adding a and b.

In [42]: print(sess.run(sum))
9

We can assign the new value to b as follows:

In [43]: sess.run(assign_val)
Out[43]: 4

The value of b has now changed to 4, so the sum changes to 7.

In [44]: print(sess.run(sum))
7
Placeholders
We can use placeholders to train huge models with massive amounts of data. With placeholders, we can feed the data in smaller chunks instead of loading the massive dataset all at once (which may crash the computer).

We need not provide an initial value for a placeholder, as we did for constants and variables, but we do need to specify the type of value it will hold. We can initialize our first placeholder as follows:

In [45]: ph = tf.placeholder(tf.float32)

With this command, we have specified that our placeholder will hold values of type float32. At run time, it will accept a floating-point number into the placeholder.
Placeholders
Now let’s define our new equation which is a basic multiplication.

In [46]: eqn = ph*2

If we try to execute this equation, we will not get any result because ph does not yet have a value. We need to assign a value to ph. We cannot assign values using tf.assign(), as we did for variables; in the case of placeholders, we use a feed dictionary.

We can assign the value to ph as follows.

In [47]: sess.run(eqn, feed_dict={ph:4.0})


Out[47]: 8.0
Placeholders
When we execute this statement, the value provided in feed_dict (4.0 in this case) is first assigned to the placeholder ph; then the operation eqn is executed.

We can try feeding a list into our placeholder.


In [48]: sess.run(eqn, feed_dict={ph:[1,2,3,4,5]})
Out[48]: array([ 2., 4., 6., 8., 10.], dtype=float32)

We can try feeding a multi-dimensional array as well.


In [49]: sess.run(eqn, feed_dict={ph:[[1,2,3,4,5],[6,7,8,9,8],[7,3,2,1,0]]})
Out[49]:
array([[ 2., 4., 6., 8., 10.],
[12., 14., 16., 18., 16.],
[14., 6., 4., 2., 0.]], dtype=float32)
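This elementwise behavior matches NumPy broadcasting, so the same computation can be reproduced without a session (a quick check; assumes NumPy is installed):

```python
import numpy as np

ph = np.array([[1, 2, 3, 4, 5],
               [6, 7, 8, 9, 8],
               [7, 3, 2, 1, 0]], dtype=np.float32)
eqn = ph * 2          # multiplies every element, just like the feed_dict run above
print(eqn[0])         # [ 2.  4.  6.  8. 10.]
```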
Placeholders
We can try another program

In [50]: x = tf.placeholder(tf.float32)

In [51]: y = tf.placeholder(tf.float32)

In [52]: sum = x + y

In [53]: prd = sum * 5

In [54]: sess = tf.Session()

In [55]: sess.run(prd, feed_dict = {x:[3], y:[4]})


Out[55]: array([35.], dtype=float32)
Placeholders
In this program, we have done the following:
1. Declared two placeholders, x and y.
2. Defined sum, which adds the two placeholders.
3. Defined prd, which multiplies the value of sum by 5.
4. Ran prd in a session and fed values to x and y.
TensorFlow 2.x
TensorFlow 2.0
• TensorFlow 2.0 was announced at the TensorFlow Dev Summit in 2019.
• TensorFlow 2.0 provides significant improvements over TensorFlow 1.x.
• Hence, TensorFlow 2.0 has become very popular among developers.
• TensorFlow 2.0 is a machine learning library that is easy to deploy on any platform.
• TensorFlow 2.0 is an enhanced version of TensorFlow 1.x, so the majority of the features are the same.
• However, there are a few areas where the features differ. The key features of TensorFlow 2.0 are:
Eager Execution
The TensorFlow official website states:

“TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later.”

This means that there is no need to create computational graphs and run separate sessions for each command.
TensorFlow 1.x followed lazy execution, where we had to create the computational graph before executing the code in a session.
Introduction to Keras
TensorFlow 2.0 adopted the Keras API for model building.
Keras supports eager execution and several other TensorFlow functionalities.

Keras is also an independent package that can be downloaded separately and used on its own.
Within TensorFlow, Keras is available as tf.keras.
Removal of Global Variables
• In TensorFlow 1.x, we had to initialize variables using tf.global_variables_initializer() before they could be used in a session.
• In TensorFlow 2.0, the global namespaces and mechanisms for tracking variables have been removed.
• We need not initialize variables in TensorFlow 2.0.
• We can use them directly.
Enhanced Deployment Capabilities
TensorFlow 2.0 provides enhanced capabilities to develop and train models across several platforms and languages.
We can integrate a fully trained and saved model directly into an application. We can also deploy using several important libraries, including:

TensorFlow Serving: used to serve models over HTTP/REST
TensorFlow Lite: used to deploy models on mobile devices such as Android or iOS, or embedded systems like the Raspberry Pi
TensorFlow.js: used to deploy models in JavaScript environments
Code Comparison
Code Comparison
• In this section, we will compare four areas in which there is a significant difference between TensorFlow 1.0 and TensorFlow 2.0:

1. The tf.print() function
2. Lazy execution vs. eager execution
3. Removal of tf.global_variables_initializer()
4. No placeholders
Function – tf.print()
• TensorFlow 2.0 introduced the tf.print() command, which is similar to the print() command in regular Python.
TensorFlow 1.0
• Since we have installed TensorFlow 2.0, we need to import tensorflow.compat.v1 for compatibility and run the tf.disable_eager_execution() function to disable eager execution.
TensorFlow 1.0
In [1]: import tensorflow.compat.v1 as tf

In [2]: tf.disable_eager_execution()

In [3]: text = tf.constant("TensorFlow 1.0 Program")

In [4]: sess = tf.Session()

In [5]: print(sess.run(text))
b'TensorFlow 1.0 Program'

In [6]: sess.close()
TensorFlow 2.0
In [1]: import tensorflow as tf

In [2]: text = tf.constant("TensorFlow 2.0 Program")

In [3]: tf.print(text)
TensorFlow 2.0 Program

We can use even simpler statements:

In [4]: tf.print("TensorFlow 2.0 Program")


TensorFlow 2.0 Program
Lazy Execution vs Eager Execution
TensorFlow 1.0 follows lazy execution, which means the code does not execute immediately.
The code only builds the graph; the actual computation happens when a particular node of the graph is run within a session.
TensorFlow 1.0
In [1]: import tensorflow.compat.v1 as tf

In [2]: tf.disable_eager_execution()

In [3]: a = tf.constant("Hello, how are you?")

In [4]: b = tf.Variable(300)
TensorFlow 1.0
In [5]: init_op = tf.global_variables_initializer()

In [6]: sess = tf.Session()

In [7]: print(sess.run(init_op))
None

In [8]: print(sess.run(a))
b'Hello, how are you?'

In [9]: print(sess.run(b))
300
TensorFlow 2.0
In TensorFlow 2.0, lazy execution was replaced by eager execution. Hence, we can execute the code directly, without building a computational graph and running each node in a session.

In [1]: import tensorflow as tf

In [2]: a = tf.constant("Hello, how are you?")

In [3]: b = tf.Variable(300)

In [4]: tf.print(a)
Hello, how are you?
TensorFlow 2.0
In [5]: tf.print(b)
300

In [6]: c = 3 + 4

In [7]: tf.print(c)
7

The code for TensorFlow 2.0 is shorter than the TensorFlow 1.0 code.
Removal of tf.global_variables_initializer()
• In the previous TensorFlow 2.0 code, you may have noticed that we did not use tf.global_variables_initializer() at all.
• In TensorFlow 2.0, there is no need to initialize variables.
• We can directly start using the variables in our program, immediately after defining them.
TensorFlow 1.0
In [1]: import tensorflow.compat.v1 as tf

In [2]: tf.disable_eager_execution()

In [3]: var = tf.Variable(30)

In [4]: init_op = tf.global_variables_initializer()

In [5]: sess = tf.Session()

In [6]: sess.run(init_op)

In [7]: print(sess.run(var))
30
TensorFlow 2.0
In [1]: import tensorflow as tf

In [2]: var = tf.Variable(30)

In [3]: tf.print(var)
30

The code is much shorter in TensorFlow 2.0


No Placeholders
• In the previous section, we talked about placeholders, which can be assigned a value at a later stage of the program.
• Placeholders were important in TensorFlow 1.0 to support the lazy style of execution.
• With the eager execution of TensorFlow 2.0, there is no need for placeholders.
• In eager execution, the operations are created and evaluated immediately.
TensorFlow 1.0
In [1]: import tensorflow.compat.v1 as tf

In [2]: tf.disable_eager_execution()

In [3]: a = tf.constant(5.0)

In [4]: b = tf.placeholder(tf.float32)

In [5]: c = a * b
TensorFlow 1.0
In [6]: sess = tf.Session()

In [7]: print(sess.run(c, feed_dict = {b:3}))


15.0

In [8]: sess.close()
TensorFlow 2.0
In [1]: import tensorflow as tf

In [2]: a = tf.constant(6)

In [3]: b = tf.Variable(2)

In [4]: c = a * b

In [5]: tf.print(c)
12
We may notice that the program has become much simpler. It is just like a regular Python program.
Thanks
Samatrix Consulting Pvt Ltd
