Unit-3 AIML
NEURAL NETWORKS
ANATOMY OF NEURAL NETWORKS
1. Explanation of neural networks.
2. MNIST digit classification.
Introduction to Keras
Keras is a high-level deep learning API developed by Google for implementing neural
networks. It is written in Python and makes the implementation of neural networks
easy. It also supports multiple backend engines for neural network computation.
Keras is a deep learning API for Python, built on top of TensorFlow, that provides a
convenient way to define and train any kind of deep learning model. Keras was initially
developed for research, with the aim of enabling fast deep learning experimentation.
Through TensorFlow, Keras can run on top of different types of hardware (GPU, TPU, or
plain CPU; see figure 3.1) and can be seamlessly scaled to thousands of machines.
Keras is known for prioritizing the developer experience. It is an API for human
beings, not machines. It follows best practices for reducing cognitive load: it offers
consistent and simple workflows, it minimizes the number of actions required for
common use cases, and it provides clear and actionable feedback upon user error. This
makes Keras easy for a beginner to learn, and highly productive for an expert to use.
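The workflow above can be made concrete with a short sketch. The model below is a minimal dense network for the MNIST digit classification task listed at the top of this unit; the layer sizes (512 hidden units) and the rmsprop optimizer are illustrative choices, not the only valid ones.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# A minimal dense network for 28x28 grayscale digits (MNIST-style),
# defined with the Keras Sequential API.
model = keras.Sequential([
    layers.Input(shape=(28 * 28,)),          # flattened 28x28 image
    layers.Dense(512, activation="relu"),    # hidden layer
    layers.Dense(10, activation="softmax"),  # one probability per digit class
])

# Compiling attaches an optimizer, a loss, and metrics to the model.
model.compile(optimizer="rmsprop",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training would then be a single call such as `model.fit(train_images, train_labels, epochs=5)`, which is the "minimal number of actions for common use cases" that Keras advertises.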
Keras Backend
Keras is a model-level library: it helps in developing deep learning models by offering
high-level building blocks. Low-level computations such as tensor products and
convolutions are not handled by Keras itself; they are delegated to a specialized,
well-optimized tensor manipulation library that serves as the backend engine. Rather
than being tied to a single tensor library and its operations, Keras allows different
backend engines to be plugged in.
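To see what kind of low-level work a backend engine actually performs, here is a plain NumPy sketch of the computation behind a single Dense layer: a tensor product, a bias addition, and an element-wise activation. The function name and shapes are illustrative.

```python
import numpy as np

# The low-level operations a backend computes for one Dense layer:
# a tensor product (matmul), a broadcast bias add, and an element-wise relu.
def dense_forward(x, W, b):
    z = x @ W + b              # tensor product + bias
    return np.maximum(z, 0.0)  # relu activation

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))   # batch of 2 inputs, 4 features each
W = rng.standard_normal((4, 3))   # weights: 4 inputs -> 3 units
b = np.zeros(3)

out = dense_forward(x, W, b)
print(out.shape)  # (2, 3)
```

Keras describes layers at this level of abstraction once, and the chosen backend supplies an optimized implementation of each such operation.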
TensorFlow
TensorFlow is a Google product and one of the most widely used deep learning tools in
machine learning and deep neural network research. It was released on 9 November 2015
under the Apache License 2.0. It is built so that it can easily run on multiple CPUs and
GPUs as well as on mobile operating systems. It provides wrappers in several languages,
such as Java, C++, and Python.
TensorFlow is accessible to everyone, and its library integrates different APIs for
building deep learning architectures at scale, such as CNNs (Convolutional Neural
Networks) and RNNs (Recurrent Neural Networks).
TensorFlow is based on graph computation; it allows the developer to visualize the
construction of the neural network with TensorBoard, a tool that helps in debugging the
program. It runs on CPUs (Central Processing Units) and GPUs (Graphics Processing Units).
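The two points above, graph computation and CPU/GPU execution, can be illustrated in a few lines. `tf.config.list_physical_devices` reports the available devices, and `@tf.function` traces ordinary Python code into a TensorFlow graph, the same graph structure that TensorBoard can visualize. The tensor shapes here are arbitrary.

```python
import tensorflow as tf

# List the devices TensorFlow can execute on (a CPU is always present;
# a GPU appears only if one is available and configured).
print(tf.config.list_physical_devices("CPU"))
print(tf.config.list_physical_devices("GPU"))

# @tf.function traces this Python function into a TensorFlow graph.
@tf.function
def affine(x, w, b):
    return tf.matmul(x, w) + b

x = tf.ones((2, 3))
w = tf.ones((3, 4))
b = tf.zeros((4,))
y = affine(x, w, b)
print(y.shape)  # (2, 4)
```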
Applications of TensorFlow
1. Voice/Sound Recognition
Voice and sound recognition applications are among the best-known use cases of deep
learning. If neural networks are fed proper input data, they are capable of
understanding audio signals.
For example, voice recognition is used in the Internet of Things, automotive, security,
and UX/UI applications.
2. Image Recognition
Image recognition is the first application that made deep learning and machine learning
popular. Telecom, Social Media, and handset manufacturers mostly use image recognition. It
is also used for face recognition, image search, motion detection, machine vision, and photo
clustering.
For example, image recognition is used to recognize and identify people and objects in
images, and to understand the context and content of an image.
For object recognition, TensorFlow helps to classify and identify arbitrary objects within
larger images.
This is also used in engineering applications to identify shapes for modeling purposes
(3D reconstruction from 2D images), and by Facebook for photo tagging.
For example, TensorFlow can be used to analyze thousands of photos of cats; a deep
learning algorithm learns to identify a cat by discovering the general features of
objects, animals, or people.
3. Time Series
Deep learning uses time series algorithms to examine time series data and extract
meaningful statistics. For example, time series analysis has been used to predict the
stock market. It can also be used to recommend TV shows or movies based on the shows or
movies we have already watched.
4. Video Detection
Deep learning algorithms are used for video detection: motion detection and real-time
threat detection in gaming, security, airports, and the UI/UX field.
For example, NASA is developing a deep learning network for object clustering of asteroids
and orbit classification. So, it can classify and predict NEOs (Near Earth Objects).
5. Text-Based Applications
Text-based applications are another popular use of deep learning. Sentiment analysis,
social media monitoring, threat detection, and fraud detection are examples of
text-based applications.
Some companies currently using TensorFlow are Google, Airbnb, eBay, Intel, Dropbox,
DeepMind, Airbus, CEVA, Snapchat, SAP, Uber, Twitter, Coca-Cola, and IBM.
Features of TensorFlow
1. Responsive Construct
With TensorFlow, we can visualize each part of the graph, which is not an option when
using NumPy or scikit-learn. Developing a deep learning application requires only a few
such components, together with a programming language.
2. Flexible
This is one of the essential TensorFlow features in terms of operability: TensorFlow is
modular, and the parts we need can be made standalone.
3. Easily Trainable
TensorFlow offers pipelining, in the sense that we can train multiple neural networks on
multiple GPUs, which makes the models very efficient on large-scale systems.
5. Large Community
Google has developed it, and there already is a large team of software engineers who work on
stability improvements continuously.
6. Open Source
The best thing about this machine learning library is that it is open source, so anyone
with internet connectivity can use it. People can extend the library and come up with a
wide variety of useful products. It has also grown a DIY community with a massive forum,
both for people getting started and for those who find it hard to use.
7. Feature Columns
TensorFlow has feature columns, which can be thought of as intermediaries between raw
data and estimators, bridging the input data with our model.
8. Statistical Distributions
The library provides distribution functions, including Bernoulli, Beta, Chi2, Uniform,
and Gamma, which are essential when considering probabilistic approaches such as
Bayesian models.
9. Layered Components
TensorFlow produces layered operations of weights and biases through functions such as
tf.contrib.layers, and also provides batch normalization, convolution, and dropout
layers. tf.contrib.layers.optimizers includes optimizers such as Adagrad, SGD, and
Momentum, which are often used to solve numerical optimization problems.
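Note that the tf.contrib module was removed in TensorFlow 2.x; in current releases the same optimizers live under tf.keras.optimizers. The sketch below, with an illustrative learning rate and a toy loss f(v) = v², shows one hand-written gradient step:

```python
import tensorflow as tf

# tf.contrib no longer exists in TensorFlow 2.x; the optimizers named
# above are available under tf.keras.optimizers instead.
sgd      = tf.keras.optimizers.SGD(learning_rate=0.01)
momentum = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
adagrad  = tf.keras.optimizers.Adagrad(learning_rate=0.01)

# One SGD step on a single variable, minimizing f(v) = v^2.
v = tf.Variable(2.0)
with tf.GradientTape() as tape:
    loss = v * v
grads = tape.gradient(loss, [v])      # d(v^2)/dv = 2v = 4.0
sgd.apply_gradients(zip(grads, [v]))
print(v.numpy())  # 2.0 - 0.01 * 4.0 = 1.96
```

In everyday Keras use this loop is hidden inside `model.compile(...)` and `model.fit(...)`; writing it out only shows what the optimizer does per step.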
With the help of TensorBoard, we can inspect different representations of a model and
make the changes necessary while debugging it. This is just like UNIX, where tail -f is
used to monitor the output of a task at the command line: TensorBoard checks the logging
events and summaries emitted from the graph during training and production.
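From Keras, those logging events are produced by attaching the TensorBoard callback to training. The tiny model, the random data, and the `logs` directory name below are all illustrative; after running, `tensorboard --logdir logs` serves the dashboard.

```python
import numpy as np
import tensorflow as tf

# A throwaway model trained on random data, purely to generate event logs.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")

# The callback writes event files that TensorBoard reads, tail -f style.
tb = tf.keras.callbacks.TensorBoard(log_dir="logs")
history = model.fit(x, y, epochs=1, verbose=0, callbacks=[tb])
```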
o Theano
Theano was developed by the MILA group at the University of Montreal, Quebec,
Canada. It is an open-source Python library that is widely used for performing
mathematical operations on multi-dimensional arrays, building on SciPy and
NumPy. It utilizes GPUs for faster computation and efficiently computes gradients
by building symbolic graphs automatically. It has proven very suitable for
numerically unstable expressions, as it first detects them and then computes them
with more stable algorithms.
o CNTK
Microsoft Cognitive Toolkit is an open-source deep learning framework. It consists of
all the basic building blocks required to form a neural network. Models are trained
using C++ or Python, but C# or Java can be used to load a trained model for
making predictions.
Before you can get started developing deep learning applications, you need to set up
your development environment. It is highly recommended, although not strictly
necessary, that you run deep learning code on a modern NVIDIA GPU rather than your
computer's CPU.
Some applications, in particular image processing with convolutional networks,
will be excruciatingly slow on CPU, even a fast multicore CPU. And even for
applications that can realistically be run on CPU, you will generally see the speed
increase by a factor of 5 or 10 by using a recent GPU.
Colaboratory is the easiest way to get started, as it requires no hardware purchase and
no software installation—just open a tab in your browser and start coding. It’s the
option we recommend for running the code examples in this book. However, the free
version of Colaboratory is only suitable for small workloads. If you want to scale up,
you will have to buy your own GPU or rent GPU instances in the cloud.
Using Colaboratory
Colaboratory (or Colab for short) is a free Jupyter notebook service that requires no
installation and runs entirely in the cloud. Effectively, it’s a web page that lets you
write and execute Keras scripts right away. It gives you access to a free (but limited)
GPU runtime and even a TPU runtime, so you don’t have to buy your own GPU.
Colaboratory is what we recommend for running the code examples in this book.