
UNIT-3

NEURAL NETWORKS
ANATOMY OF NEURAL NETWORKS
1. Explanation of neural networks.
2. MNIST digit classification.
Introduction to Keras
Keras is a high-level deep learning API developed by Google for implementing neural
networks. It is written in Python and is designed to make the implementation of neural
networks easy. It also supports multiple backend engines for neural network computation.
Keras is a deep learning API for Python, built on top of TensorFlow, that provides a
convenient way to define and train any kind of deep learning model. Keras was initially
developed for research, with the aim of enabling fast deep learning experimentation.
Through TensorFlow, Keras can run on top of different types of hardware (see figure 3.1):
GPU, TPU, or plain CPU. It can also be seamlessly scaled to thousands of machines.
Keras is known for prioritizing the developer experience. It's an API for human
beings, not machines. It follows best practices for reducing cognitive load: it offers
consistent and simple workflows, it minimizes the number of actions required for common
use cases, and it provides clear and actionable feedback upon user error. This
makes Keras easy to learn for a beginner, and highly productive to use for an expert.
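
As an illustration of topic 2 above, the following is a minimal sketch of MNIST digit
classification with Keras. It assumes a TensorFlow 2.x installation; the layer sizes,
optimizer, and number of epochs are common choices, not the only ones possible.

from tensorflow import keras
from tensorflow.keras import layers

# Load the MNIST digits and scale pixel values to [0, 1].
(train_images, train_labels), (test_images, test_labels) = keras.datasets.mnist.load_data()
train_images = train_images.reshape((60000, 28 * 28)).astype("float32") / 255
test_images = test_images.reshape((10000, 28 * 28)).astype("float32") / 255

# A dense ReLU hidden layer followed by a 10-way softmax classifier.
model = keras.Sequential([
    layers.Dense(512, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="rmsprop",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(train_images, train_labels, epochs=5, batch_size=128)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print("test accuracy:", test_acc)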

Keras user experience


1. Keras is an API designed for humans, not machines
Keras follows best practices to decrease cognitive load: it keeps its workflows consistent
and its APIs simple, and it provides clear feedback upon the occurrence of any error,
minimizing the number of user actions needed for the majority of common use cases.

2. Easy to learn and use

3. Highly flexible
Keras provides high flexibility to all of its developers by integrating with low-level deep
learning languages such as TensorFlow or Theano, which ensures that anything
written in the base language can be implemented in Keras.

Keras Backend

Keras is a model-level library: it helps in developing deep learning models by offering
high-level building blocks. Low-level computations such as tensor products and
convolutions are not handled by Keras itself; they are delegated to a specialized tensor
manipulation library that is well optimized to serve as a backend engine. Rather than
tying itself to one particular tensor library, Keras handles this in a modular way, so that
different backend engines can be plugged into Keras.
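
As a sketch of how this plugging works in classic multi-backend Keras (before Keras was
bundled into TensorFlow as tf.keras), the backend can be chosen through the KERAS_BACKEND
environment variable; it must be set before keras is imported:

import os
os.environ["KERAS_BACKEND"] = "tensorflow"   # or "theano", "cntk"

import keras
print(keras.backend.backend())               # name of the active backend engine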

Keras supports three backend engines, which are as follows:

TensorFlow
TensorFlow is a Google product and one of the most famous deep learning tools, widely
used in machine learning and deep neural network research. It was released on 9 November
2015 under the Apache License 2.0. It is built so that it can easily run on multiple CPUs
and GPUs as well as on mobile operating systems, and it provides wrappers in distinct
languages such as Java, C++, and Python.

TensorFlow is accessible to everyone, and its library integrates different APIs for creating
deep learning architectures at scale, such as CNNs (Convolutional Neural Networks) or
RNNs (Recurrent Neural Networks).
TensorFlow is based on graph computation, which allows the developer to visualize the
construction of a neural network with TensorBoard, a tool that also helps in debugging the
program. TensorFlow runs on both the CPU (Central Processing Unit) and the GPU
(Graphics Processing Unit).

TensorFlow is mainly used for deep learning and machine learning problems such as
classification, perception, understanding, discovering, prediction, and creation. Its most
common application areas are listed below.

1. Voice/Sound Recognition

Voice and sound recognition applications are among the best-known use cases of deep
learning. Given a proper feed of input data, neural networks are capable of understanding
audio signals.

For example:

Voice recognition is used in the Internet of Things, automotive, security, and UX/UI.

Sentiment analysis is mostly used in customer relationship management (CRM).

Flaw detection (engine noise) is mostly used in automotive and aviation.

Voice search is mostly used in customer relationship management (CRM).

2. Image Recognition

Image recognition was the first application that made deep learning and machine learning
popular. Telecom and social media companies and handset manufacturers make heavy use of
image recognition. It is also used for face recognition, image search, motion detection,
machine vision, and photo clustering.

For example, image recognition is used to recognize and identify people and objects in
images, and to understand an image's context and content. For object recognition,
TensorFlow helps to classify and identify arbitrary objects within larger images.

Image recognition is also used in engineering applications to identify shapes for modeling
purposes (3D reconstruction from 2D images), and by Facebook for photo tagging.

For example, TensorFlow can be used to analyze thousands of photos of cats; a deep
learning algorithm can learn to identify a cat by finding the general features of objects,
animals, or people.

3. Time Series

Deep learning uses time series algorithms to examine time series data and extract
meaningful statistics. For example, time series models have been used to predict the stock
market.

Recommendation is the most common use case for time series. Amazon, Google, Facebook,
and Netflix use deep learning for suggestions: a deep learning algorithm analyzes customer
activity and compares it to that of millions of other users to determine what the customer
may like to purchase or watch.

For example, it can be used to recommend TV shows or movies based on the TV shows or
movies we have already watched.

4. Video Detection

Deep learning algorithms are used for video detection: motion detection, and real-time
threat detection in gaming, security, airports, and the UI/UX field.

For example, NASA is developing a deep learning network for object clustering of asteroids
and orbit classification, so that it can classify and predict NEOs (Near-Earth Objects).

5. Text-Based Applications

Text-based applications are another popular use of deep learning. Sentiment analysis,
social media monitoring, threat detection, and fraud detection are examples of text-based
applications.

For example, Google Translate supports over 100 languages.

Some companies currently using TensorFlow are Google, Airbnb, eBay, Intel, Dropbox,
DeepMind, Airbus, CEVA, Snapchat, SAP, Uber, Twitter, Coca-Cola, and IBM.

Features of TensorFlow

TensorFlow has an interactive, multiplatform programming interface that is scalable and
reliable compared with the other deep learning libraries that are available. The following
features explain TensorFlow's popularity.


1. Responsive Construct

We can visualize each part of the computation graph, which is not an option while
using NumPy or scikit-learn. To develop a deep learning application, only two or three such
components and a programming language are required.

2. Flexible

This is one of the most important TensorFlow features in terms of its operability:
TensorFlow is modular, and the parts of it that we want to use can be made standalone.

3. Easily Trainable

Models are easily trainable on the CPU as well as on GPUs in distributed computing.

4. Parallel Neural Network Training

TensorFlow offers pipelining in the sense that we can train multiple neural networks on
multiple GPUs, which makes the models very efficient on large-scale systems.
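
A hedged sketch of this with tf.distribute.MirroredStrategy, the data-parallel multi-GPU
strategy in TensorFlow 2.x (the model itself here is illustrative):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()    # mirrors variables across all visible GPUs
with strategy.scope():                         # build and compile the model inside the scope
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

# model.fit(...) will now split each training batch across the available GPUs.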
5. Large Community

Google developed it, and a large team of software engineers continuously works on
stability improvements.

6. Open Source

The best thing about this machine learning library is that it is open source, so anyone can
use it as long as they have internet connectivity. People can manipulate the library and come
up with a fantastic variety of useful products, and it has grown a DIY community with
massive forums both for people getting started and for those who find it hard to use.

7. Feature Columns

TensorFlow has feature columns, which can be thought of as intermediaries between raw
data and estimators, bridging the input data with the model. (The original figure illustrating
how a feature column is implemented is not reproduced here.)
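
As a sketch, this is how columns were declared with the tf.feature_column API (now
deprecated in recent TensorFlow releases); the column names "age" and "city" are
hypothetical:

import tensorflow as tf

age = tf.feature_column.numeric_column("age")
city = tf.feature_column.indicator_column(
    tf.feature_column.categorical_column_with_vocabulary_list(
        "city", ["london", "paris"]))

# The columns translate raw input features into tensors a model can consume.
feature_layer = tf.keras.layers.DenseFeatures([age, city])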


8. Availability of Statistical Distributions

This library provides distribution functions, including Bernoulli, Beta, Chi2, Uniform, and
Gamma, which are essential especially when considering probabilistic approaches such as
Bayesian models.
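
A brief sketch using TensorFlow Probability, where these distributions are maintained
today (assuming pip install tensorflow-probability):

import tensorflow_probability as tfp

coin = tfp.distributions.Bernoulli(probs=0.5)
samples = coin.sample(10)                      # ten simulated coin flips

beta = tfp.distributions.Beta(concentration1=2.0, concentration0=5.0)
print(beta.mean())                             # 2 / (2 + 5) ≈ 0.286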

9. Layered Components

TensorFlow produces layered operations of weights and biases through functions such as
tf.contrib.layers, which also provides batch normalization, convolution layers, and dropout
layers. tf.contrib.layers.optimizers includes optimizers such as Adagrad, SGD, and
Momentum, which are often used in solving optimization problems for numerical analysis.
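
Note that tf.contrib was removed in TensorFlow 2.x; a sketch of the modern tf.keras
equivalents of the same layers and optimizers:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # convolution layer
    tf.keras.layers.BatchNormalization(),              # batch normalization
    tf.keras.layers.Dropout(0.5),                      # dropout layer
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(momentum=0.9),
              loss="sparse_categorical_crossentropy")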

10. Visualizer (With TensorBoard)

We can inspect different representations of a model and make the necessary changes while
debugging it with the help of TensorBoard.

11.Event Logger (With TensorBoard)

It is just like UNIX, where we use tail -f to monitor the output of tasks at the command
line. It logs events and summaries from the graph, which can then be monitored over the
course of training with TensorBoard.
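
A sketch covering both points 10 and 11: the Keras TensorBoard callback writes event logs
during training, and the tensorboard command then visualizes them (the log directory name
is arbitrary):

import tensorflow as tf

tb = tf.keras.callbacks.TensorBoard(log_dir="./logs")
# model.fit(x_train, y_train, epochs=5, callbacks=[tb])

# Then, from a shell, launch the visualizer on the logged events:
#   tensorboard --logdir ./logs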

o Theano
Theano was developed by the MILA group at the University of Montreal, Quebec, Canada.
It is an open-source Python library widely used for performing mathematical operations
on multi-dimensional arrays, incorporating SciPy and NumPy. It utilizes GPUs for faster
computation and efficiently computes gradients by building symbolic graphs automatically.
It has turned out to be very suitable for numerically unstable expressions, as it first
detects them and then computes them with more stable algorithms.
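
A minimal sketch of Theano's symbolic graphs and automatic gradients (classic Theano;
active development has since moved to community forks such as Aesara):

import theano
import theano.tensor as T

x = T.dscalar("x")              # symbolic scalar
y = x ** 2                      # symbolic expression
grad = T.grad(y, x)             # symbolic derivative dy/dx = 2x
f = theano.function([x], grad)  # compile the graph into a callable
print(f(4.0))                   # 8.0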

o CNTK
The Microsoft Cognitive Toolkit (CNTK) is Microsoft's open-source deep learning
framework. It consists of all the basic building blocks required to form a neural network.
Models are trained using C++ or Python, but C# or Java can be used to load a trained
model for making predictions.

Setting up Deep Learning Work Stations

Before you can get started developing deep learning applications, you need to set up
your development environment. It's highly recommended, although not strictly necessary,
that you run deep learning code on a modern NVIDIA GPU rather than your computer's CPU.
Some applications, in particular image processing with convolutional networks, will be
excruciatingly slow on CPU, even a fast multicore CPU. And even for applications that can
realistically be run on CPU, you'll generally see the speed increase by a factor of 5 or 10
by using a recent GPU.

Colaboratory is the easiest way to get started, as it requires no hardware purchase and
no software installation—just open a tab in your browser and start coding. It’s the
option we recommend for running the code examples in this book. However, the free
version of Colaboratory is only suitable for small workloads. If you want to scale up,
you’ll have to use the first or second option.
Using Colaboratory
Colaboratory (or Colab for short) is a free Jupyter notebook service that requires no
installation and runs entirely in the cloud. Effectively, it’s a web page that lets you
write and execute Keras scripts right away. It gives you access to a free (but limited)
GPU runtime and even a TPU runtime, so you don’t have to buy your own GPU.
Colaboratory is what we recommend for running the code examples in this book.
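
Once a GPU runtime is selected (Runtime > Change runtime type), a quick sanity check in a
code cell confirms that TensorFlow can actually see the GPU:

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))  # an empty list means no GPU is attached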

FIRST STEPS WITH COLABORATORY


To get started with Colab, go to https://colab.research.google.com and click the New
Notebook button. You’ll see the standard Notebook interface shown in figure 3.2.
INSTALLING PACKAGES WITH PIP
The default Colab environment already comes with TensorFlow and Keras installed,
so you can start using it right away without any installation steps required. But if you
ever need to install something with pip, you can do so by using the following syntax
in a code cell (note that the line starts with ! to indicate that it is a shell command
rather than Python code):
!pip install package_name

First steps with TensorFlow


As you saw in the previous chapters, training a neural network revolves around the
following concepts:

First, low-level tensor manipulation, the infrastructure that underlies all modern
machine learning. This translates to TensorFlow APIs:
– Tensors, including special tensors that store the network's state (variables)
– Tensor operations, such as addition, relu, matmul
– Backpropagation, a way to compute the gradient of mathematical expressions
(handled in TensorFlow via the GradientTape object)

Second, high-level deep learning concepts. This translates to Keras APIs:
– Layers, which are combined into a model
– A loss function, which defines the feedback signal used for learning
– An optimizer, which determines how learning proceeds
– Metrics to evaluate model performance, such as accuracy
– A training loop that performs mini-batch stochastic gradient descent
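
A minimal sketch tying the low-level pieces together: a variable, a tensor operation, and a
gradient computed with GradientTape:

import tensorflow as tf

w = tf.Variable(3.0)                   # a tensor that stores state
with tf.GradientTape() as tape:
    loss = tf.square(w) + 2.0 * w      # the expression y = w^2 + 2w
grad = tape.gradient(loss, w)          # dy/dw = 2w + 2, i.e. 8.0 at w = 3
print(grad)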
