INFORMATION MANAGEMENT Unit 5
INFORMATION
MANAGEMENT
Dr.C.THIRUMAL AZHAGAN
ASSISTANT PROFESSOR
MANAGEMENT STUDIES
ANNA UNIVERSITY BIT CAMPUS
TIRUCHIRAPPALLI
UNIT V NEW IT INITIATIVES
• Learning rate decay. The learning rate is a hyperparameter -- a factor set before the learning
process that defines the system or conditions its operation -- that controls how much the model
changes in response to the estimated error each time the model weights are updated. A learning
rate that is too high may result in an unstable training process or a suboptimal set of learned
weights. A learning rate that is too small may produce a lengthy training process that has the
potential to get stuck.
• The learning rate decay method -- also called learning rate annealing or adaptive learning
rates -- adapts the learning rate during training to increase performance and reduce training
time. The easiest and most common approach is to reduce the learning rate gradually over time.
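The schedules described above can be sketched in a few lines of Python; the constants below (initial rate, decay factor, step interval) are illustrative choices, not prescribed values.

```python
# Two common ways to reduce the learning rate over time.
# All constants here are illustrative, not recommended defaults.

def exponential_decay(lr0, decay_rate, decay_steps, step):
    """Smoothly shrink the rate: lr0 * decay_rate ** (step / decay_steps)."""
    return lr0 * decay_rate ** (step / decay_steps)

def step_decay(lr0, drop_factor, epochs_per_drop, epoch):
    """Cut the rate by a fixed factor every few epochs."""
    return lr0 * drop_factor ** (epoch // epochs_per_drop)

# The rate starts at 0.1 and shrinks as training progresses.
schedule = [step_decay(0.1, 0.5, 10, e) for e in range(30)]
```

Either schedule keeps early updates large (fast initial learning) while making late updates small (fine adjustment near a good set of weights).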
• Transfer learning. This process involves fine-tuning a previously trained model; it requires an
interface to the internals of a preexisting network. First, users feed the existing network new data
containing previously unknown classifications. Once adjustments are made to the network, new
tasks can be performed with more specific categorizing abilities. This method has the advantage
of requiring much less data than others, reducing computation time to minutes or hours.
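A minimal NumPy sketch of the idea, under toy assumptions: a randomly initialized matrix stands in for the pretrained network's feature extractor and stays frozen, while only a small new output head is trained on the new task's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor; its weights stay frozen.
W_frozen = rng.normal(size=(4, 8))
W_before = W_frozen.copy()

def features(x):
    # Fixed representation produced by the "old" network.
    return np.tanh(x @ W_frozen)

# Toy data for the new task with previously unknown labels.
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the new head (w, b) is trained, which is why far less
# data and compute are needed than training from scratch.
w, b, lr = np.zeros(8), 0.0, 0.5
for _ in range(500):
    h = features(X)
    p = 1.0 / (1.0 + np.exp(-(h @ w + b)))  # sigmoid output
    grad = p - y                            # gradient of log-loss w.r.t. logits
    w -= lr * h.T @ grad / len(y)
    b -= lr * grad.mean()

accuracy = float(((p > 0.5) == (y > 0.5)).mean())
```

In practice the frozen part would be a real pretrained network (and might be partially unfrozen later), but the structure -- reuse the old representation, train only a new head -- is the same.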
• Training from scratch. This method requires a developer to collect a large
labeled data set and configure a network architecture that can learn the features
and model. This technique is especially useful for new applications, as well as
applications with a large number of output categories. However, overall, it is a
less common approach, as it requires inordinate amounts of data, causing
training to take days or weeks.
• Dropout. This method attempts to solve the problem of overfitting in networks
with large amounts of parameters by randomly dropping units and their
connections from the neural network during training. It has been proven that the
dropout method can improve the performance of neural networks on supervised
learning tasks in areas such as speech recognition, document classification and
computational biology.
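A minimal sketch of (inverted) dropout in NumPy; the drop probability of 0.5 below is an illustrative choice.

```python
import numpy as np

def dropout(activations, p_drop, rng, training=True):
    """Randomly zero units during training and rescale the survivors so
    the expected activation is unchanged (inverted dropout). At
    inference time the layer is a no-op."""
    if not training or p_drop == 0.0:
        return activations
    keep_mask = rng.random(activations.shape) >= p_drop
    return activations * keep_mask / (1.0 - p_drop)

rng = np.random.default_rng(42)
h = np.ones(10_000)
h_train = dropout(h, 0.5, rng)                  # roughly half the units zeroed
h_eval = dropout(h, 0.5, rng, training=False)   # unchanged at inference
```

Because each training pass drops a different random subset of units, no unit can rely on any particular co-adapted partner, which is what combats overfitting.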
DEEP LEARNING APPLICATIONS
• Deep learning is currently used in most common image recognition tools, natural language processing (NLP) and speech
recognition software. These tools are starting to appear in applications as diverse as self-driving cars and language
translation services.
• Other applications include medical diagnosis, stock market trading signals, network security and image recognition.
• Customer experience (CX). Deep learning models are already being used in chatbots. And, as the technology continues to mature,
deep learning is expected to be implemented in various businesses to improve CX and increase customer satisfaction.
• Text generation. Machines are taught the grammar and style of a piece of text and then use this model to
automatically create entirely new text matching the spelling, grammar and style of the original.
• Aerospace and military. Deep learning is being used to detect objects from satellites that identify areas of interest, as
well as safe or unsafe zones for troops.
• Industrial automation. Deep learning is improving worker safety in environments like factories and warehouses by
providing services that automatically detect when a worker or object is getting too close to a machine.
• Adding color. Color can be added to black-and-white photos and videos using deep learning models. In the past, this was
an extremely time-consuming, manual process.
• Medical research. Cancer researchers have started implementing deep learning into their practice as a way to
automatically detect cancer cells.
2. BIG DATA
• Big data is a collection of data that is huge in volume and growing exponentially
with time.
• It is data of such size and complexity that no traditional data
management tool can store or process it efficiently.
• Examples
• The New York Stock Exchange is an example of big data that generates about one terabyte of new trade
data per day.
• Social media
• Statistics show that 500+ terabytes of new data are ingested into the databases of the social media
site Facebook every day. This data is mainly generated from photo and video uploads, message
exchanges, comments, etc.
• A single jet engine can generate 10+ terabytes of data in 30 minutes of flight time. With many
thousands of flights per day, data generation reaches many petabytes.
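The jet-engine figure can be sanity-checked with back-of-envelope arithmetic; the flight count and average duration below are assumptions for illustration only.

```python
# 10+ TB per 30 minutes of flight time -> at least 20 TB per flight hour.
tb_per_flight_hour = 10 * 2

# Assumed for illustration: several thousand flights, ~2 hours each.
flights_per_day = 5_000
avg_flight_hours = 2

tb_per_day = tb_per_flight_hour * avg_flight_hours * flights_per_day
pb_per_day = tb_per_day / 1024  # 1 PB = 1024 TB
```

Even with these conservative assumptions the daily total lands well into the petabyte range, consistent with the claim above.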
TYPES OF BIG DATA
• Structured
• Any data that can be stored, accessed and processed in a fixed format is termed
'structured' data. Over time, computer science has achieved great success in developing
techniques for working with such data (where the format is known in advance) and
deriving value from it. However, issues now arise as the size of such data grows to a
huge extent, with typical sizes in the range of multiple zettabytes.
Employee_ID   Employee_Name     Gender   Department   Salary_In_lacs
2365          Rajesh Kulkarni   Male     Finance      650000
3398          Pratibha Joshi    Female   Admin        650000
7465          Shushil Roy       Male     Admin        500000
7500          Shubhojit Das     Male     Finance      500000
• Unstructured
• Any data with unknown form or structure is classified as unstructured data. In addition to its huge
size, unstructured data poses multiple challenges in terms of processing it to derive value. A
typical example of unstructured data is a heterogeneous data source containing a combination of simple text
files, images, videos, etc. Nowadays organizations have a wealth of data available to them but, unfortunately,
they don't know how to derive value from it since this data is in raw, unstructured form.
• Examples of unstructured data
• The output returned by ‘Google search’
• Semi-structured
• Semi-structured data can contain both forms of data. We can see semi-structured data as
structured in form, but it is not actually defined by, e.g., a table definition as in a relational
DBMS. An example of semi-structured data is data represented in an XML file.
• Examples of semi-structured data
• Personal data stored in an XML file:
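A minimal example of such a record (the field names are illustrative), and how its self-describing tags can be read with Python's standard library even though no relational table definition exists:

```python
import xml.etree.ElementTree as ET

# Hypothetical personal-data record: the tags describe each value,
# but no fixed relational schema (table definition) is imposed.
xml_doc = """
<person>
    <name>Pratibha Joshi</name>
    <gender>Female</gender>
    <department>Admin</department>
</person>
"""

root = ET.fromstring(xml_doc)
record = {child.tag: child.text for child in root}
```

Unlike the structured-data table above, fields can be added, omitted or nested per record without breaking a schema -- the structure travels with the data itself.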
CHARACTERISTICS OF BIG DATA