INFORMATION MANAGEMENT Unit 5

This document discusses new IT initiatives taught in a management studies course, including deep learning, big data, quantum computing, and other advanced technologies. It provides details on deep learning such as how it works by building hierarchies of knowledge like humans, common deep learning methods, and applications. Big data is defined as large, growing data that cannot be processed by traditional tools, and the document outlines the types, characteristics, and advantages of big data processing.


BA 4106

INFORMATION
MANAGEMENT

Dr. C. THIRUMAL AZHAGAN
ASSISTANT PROFESSOR
MANAGEMENT STUDIES
ANNA UNIVERSITY BIT CAMPUS
TIRUCHIRAPPALLI
UNIT V NEW IT INITIATIVES

1. INTRODUCTION TO DEEP LEARNING, BIG DATA, PERVASIVE COMPUTING, CLOUD COMPUTING – Lecture method
2. ADVANCEMENTS IN AI, IoT, BLOCK CHAIN, CRYPTO CURRENCY
3. QUANTUM COMPUTING – Case study
1. DEEP LEARNING

Deep learning is a type of machine learning and Artificial Intelligence (AI) that imitates the way humans gain certain types of knowledge.
Deep learning is an important element of data science, which includes statistics and predictive modeling. It is extremely beneficial to data scientists who are tasked with collecting, analyzing and interpreting large amounts of data; deep learning makes this process faster and easier.
Example
A preschool kid whose first word is "dog".
The kid learns what a dog is -- and is not -- by pointing to objects and saying the word dog.
The parent says, "Yes, that is a dog," or "No, that is not a dog."
As the kid continues to point to objects, he becomes more aware of the features that all dogs possess. What the kid does, without knowing it, is clarify a complex abstraction -- the concept of dog -- by building a hierarchy in which each level of abstraction is created with knowledge gained from the preceding layer of the hierarchy.
HOW DEEP LEARNING WORKS
• Computer programs that use deep learning go through much the same
process as the preschool kid learning to identify the dog.
• Each algorithm in the hierarchy applies a nonlinear transformation to
its input and uses what it learns to create a statistical model as
output. Iterations continue until the output has reached an acceptable
level of accuracy. The number of processing layers through which
data must pass is what inspired the label deep.
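
As a rough, illustrative sketch of this idea (not taken from the slides), the short Python/NumPy program below stacks a few layers, each applying a nonlinear transformation to its input, and repeats weight updates until the output on a toy problem reaches an acceptable level of accuracy. All names, sizes and constants here are hypothetical.

```python
import numpy as np

# Illustrative sketch only: a tiny network with two hidden layers, trained
# on the XOR problem. Each layer applies a nonlinear transformation (tanh),
# and the weight updates iterate until the error is small.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(scale=0.5, size=(2, 8))   # input  -> hidden layer 1
W2 = rng.normal(scale=0.5, size=(8, 8))   # hidden -> hidden layer 2
W3 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output
lr = 0.5                                  # learning rate (a hyperparameter)

for step in range(5000):
    # Forward pass: each layer transforms the output of the previous one
    h1 = np.tanh(X @ W1)
    h2 = np.tanh(h1 @ W2)
    out = 1.0 / (1.0 + np.exp(-(h2 @ W3)))   # sigmoid output layer

    # Error between prediction and target, propagated back layer by layer
    err = out - y
    d_out = err * out * (1 - out)
    d_h2 = (d_out @ W3.T) * (1 - h2 ** 2)
    d_h1 = (d_h2 @ W2.T) * (1 - h1 ** 2)

    # Gradient-descent weight updates
    W3 -= lr * h2.T @ d_out
    W2 -= lr * h1.T @ d_h2
    W1 -= lr * X.T @ d_h1

print(out.round(2))   # approaches [[0], [1], [1], [0]] as training iterates
```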
DEEP LEARNING METHODS

• Learning rate decay. The learning rate is a hyperparameter -- a factor that defines the system or sets conditions for its operation prior to the learning process -- and it controls how much the model changes in response to the estimated error each time the model weights are updated. Learning rates that are too high may result in unstable training processes or the learning of a suboptimal set of weights. Learning rates that are too small may produce a lengthy training process that has the potential to get stuck.
• The learning rate decay method -- also called learning rate annealing or adaptive learning rates -- is the process of adapting the learning rate to increase performance and reduce training time. The easiest and most common adaptations of the learning rate during training are techniques that reduce the learning rate over time (see the sketch after this list).
• Transfer learning. This process involves perfecting a previously trained model; it requires an
interface to the internals of a preexisting network. First, users feed the existing network new data
containing previously unknown classifications. Once adjustments are made to the network, new
tasks can be performed with more specific categorizing abilities. This method has the advantage
of requiring much less data than others, thus reducing computation time to minutes or hours.
• Training from scratch. This method requires a developer to collect a large
labeled data set and configure a network architecture that can learn the features
and model. This technique is especially useful for new applications, as well as
applications with a large number of output categories. However, overall, it is a
less common approach, as it requires inordinate amounts of data, causing
training to take days or weeks.
• Dropout. This method attempts to solve the problem of overfitting in networks with large numbers of parameters by randomly dropping units and their connections from the neural network during training (see the sketch after this list). It has been proven that the dropout method can improve the performance of neural networks on supervised learning tasks in areas such as speech recognition, document classification and computational biology.
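
As an illustrative sketch only (not from the slides), the snippet below expresses two of the methods above as small NumPy helpers: an exponential learning-rate-decay schedule and an inverted-dropout mask. The specific constants (decay rate, drop probability) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def decayed_lr(initial_lr, step, decay_rate=0.96, decay_every=100):
    """Learning rate decay: shrink the learning rate as training progresses."""
    return initial_lr * decay_rate ** (step // decay_every)

def dropout(activations, p=0.5):
    """Dropout: randomly zero units during training and rescale the rest
    so the expected activation stays the same (inverted dropout)."""
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

print(decayed_lr(0.1, 0), decayed_lr(0.1, 1000))   # 0.1 shrinks over time
print(dropout(np.ones((2, 4))))                    # some units zeroed, rest scaled up
```

In practice, deep learning frameworks ship these as built-in schedulers and layers; the helpers above only show the underlying arithmetic.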
DEEP LEARNING APPLICATIONS

• Deep learning is currently used in most common image recognition tools, natural language processing (NLP) and speech
recognition software. These tools are starting to appear in applications as diverse as self-driving cars and language
translation services.

• Medical diagnosis, stock market trading signals, network security and image recognition.

• Customer experience (CX). Deep learning models are already being used for chatbots. And, as it continues to mature, deep learning is expected to be implemented in various businesses to improve CX and increase customer satisfaction.

• Text generation. Machines are being taught the grammar and style of a piece of text and are then using this model to
automatically create a completely new text matching the proper spelling, grammar and style of the original text.

• Aerospace and military. Deep learning is being used to detect objects from satellites that identify areas of interest, as
well as safe or unsafe zones for troops.

• Industrial automation. Deep learning is improving worker safety in environments like factories and warehouses by
providing services that automatically detect when a worker or object is getting too close to a machine.

• Adding color. Color can be added to black-and-white photos and videos using deep learning models. In the past, this was
an extremely time-consuming, manual process.

• Medical research. Cancer researchers have started implementing deep learning into their practice as a way to
automatically detect cancer cells.
2. BIG DATA
• Big data is a collection of data that is huge in volume, yet growing exponentially with time.
• It is data of such large size and complexity that none of the traditional data management tools can store or process it efficiently.
• In short, big data is still data, but of huge size.
• Examples
• The New York Stock Exchange is an example of big data that generates about one terabyte of new trade
data per day.
• Social media
• Statistics show that 500+ terabytes of new data get ingested into the databases of the social media site Facebook every day. This data is mainly generated in the form of photo and video uploads, message exchanges, posting of comments, etc.
• A single jet engine can generate 10+ terabytes of data in 30 minutes of flight time. With many thousands of flights per day, the data generated reaches many petabytes.
TYPES OF BIG DATA
• Structured
• Any data that can be stored, accessed and processed in a fixed format is termed 'structured' data. Over time, computer science has achieved great success in developing techniques for working with such data (where the format is well known in advance) and deriving value out of it. However, we are now foreseeing issues when the size of such data grows to a huge extent, with typical sizes being in the range of multiple zettabytes.

An ‘Employee’ table in a database is an example of Structured Data:

Employee_ID   Employee_Name     Gender   Department   Salary_In_lacs
2365          Rajesh Kulkarni   Male     Finance      650000
3398          Pratibha Joshi    Female   Admin        650000
7465          Shushil Roy       Male     Admin        500000
7500          Shubhojit Das     Male     Finance      500000
• Unstructured
• Any data with an unknown form or structure is classified as unstructured data. In addition to being huge in size, unstructured data poses multiple challenges in terms of processing it to derive value from it. A typical example of unstructured data is a heterogeneous data source containing a combination of simple text files, images, videos, etc. Nowadays, organizations have a wealth of data available to them but, unfortunately, they do not know how to derive value from it since this data is in its raw, unstructured form.
• Examples of unstructured data
• The output returned by a ‘Google search’
• Semi-structured
• Semi-structured data can contain both forms of data. We can see semi-structured data as structured in form, but it is not actually defined by, for example, a table definition as in a relational DBMS. An example of semi-structured data is data represented in an XML file.
• Examples of semi-structured data
• Personal data stored in an XML file (a hypothetical sketch follows below)
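
The XML example itself is not included in this copy of the slides; the following is a hypothetical stand-in showing what such personal data might look like in XML and how it could be read in Python. The element names and values are made up (they reuse the employee names from the table above).

```python
import xml.etree.ElementTree as ET

# Hypothetical "personal data stored in an XML file": tags describe the
# fields, but no relational table definition enforces the structure,
# which is why XML is considered semi-structured.
xml_text = """<employees>
    <employee id="2365">
        <name>Rajesh Kulkarni</name>
        <department>Finance</department>
    </employee>
    <employee id="3398">
        <name>Pratibha Joshi</name>
        <department>Admin</department>
    </employee>
</employees>"""

root = ET.fromstring(xml_text)
for emp in root.findall("employee"):
    print(emp.get("id"), emp.findtext("name"), emp.findtext("department"))
```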
CHARACTERISTICS OF BIG DATA

Big data can be described by the following characteristics:


• Volume
• Variety
• Velocity
• Variability
Advantages of Big Data processing
• Businesses can utilize outside intelligence when making decisions
Access to social data from search engines and sites like Facebook and Twitter is enabling organizations to fine-tune their business strategies.
• Improved customer service
Traditional customer feedback systems are getting replaced by new systems designed with big data
technologies. In these new systems, big data and natural language processing technologies are
being used to read and evaluate consumer responses.
• Early identification of risk to the product/services
• Better operational efficiency
• Big data technologies can be used to create a staging area or landing zone for new data before identifying which data should be moved to the data warehouse. In addition, such integration of big data technologies with the data warehouse helps an organization offload infrequently accessed data.
