IET - Chapter 2
Introduction to Emerging Technology: Chapter 2 Lecture Note

Chapter 2: Data Science

Chala B. (MTech) CSE


College of Computing
Madda Walabu University
Introduction
In this chapter, you are going to learn more about
 Data Science
 Data vs. Information
 Data Types and Representation
 Data Value Chain
 Basic concepts of Big Data.
2.1. An Overview of Data Science

Activity 2.1
 What is data science? Can you describe the role of
data in emerging technology?
 What are data and information?
 What is big data?
An Overview of Data Science ….Cont’d
Data science is:
 A multi-disciplinary field that uses scientific methods,
processes, algorithms, and systems to extract knowledge
and insights from structured, semi-structured, and
unstructured data.
 It is much more than simply analyzing data.
 It offers a range of roles and requires a range of skills.

An Overview of Data Science ….Cont’d
• Let’s consider this idea by thinking about some of the data
involved in buying a box of cereal from the store or
supermarket:
 Whatever your cereal preference—teff, wheat, or barley—
you prepare for the purchase by writing “cereal” in your
notebook. (This planned purchase is a piece of data.)
 When you get to the store, you use your data as a reminder
to grab the item and put it in your cart.


An Overview of Data Science …Cont’d
 In addition to the computers, lots of other pieces of
hardware—such as the barcode scanner—were involved in
collecting, manipulating, transmitting, and storing the data.
 In addition, many different pieces of software were used to
organize, aggregate, visualize, and present the data.
 Finally, many different human systems were involved in
working with the data.
Activity 2.2
• Describe in some detail the main disciplines that contribute
to data science.
• Let the teacher explain the role of data scientists; students
may then write a short report on the same topic.
What are Data and Information?
Data
 Is a representation of facts, concepts, or instructions in a
formalized manner.
 Is represented with the help of characters such as letters
(A-Z, a-z), digits (0-9), or special characters (+, -, /, *, <, >, =,
etc.).
Whereas information
 Is the processed data on which decisions and actions are
based.
 Is data that has been processed into a form that is
meaningful.
 Is interpreted data, created from organized, structured, and
processed data in a particular context.
Data Processing Cycle

• Data processing is the restructuring or reordering of data by
people or machines to increase its usefulness and add value
for a particular purpose.
• Data processing consists of the following basic steps:
 input, processing, and output. These three steps constitute
the data processing cycle.

Figure 2.1 Data Processing Cycle
Data Processing Cycle….Cont’d
• Input − data is prepared in some convenient form for
processing. The form will depend on the processing machine.
• Processing − input data is changed to produce data in a more
useful form.
• Output − the result of the preceding processing step is
collected. The particular form of the output data depends on
the use of the data.
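As a minimal sketch (the function name and sample values here are invented for illustration, not taken from the lecture), the three steps of the cycle can be traced in a few lines of Python:

```python
# A minimal sketch of the input -> processing -> output cycle.
def data_processing_cycle(raw_prices):
    # Input: data prepared in a convenient form (price strings from a receipt)
    cleaned = [p.strip().lstrip("$") for p in raw_prices]
    # Processing: data changed into a more useful form (numbers)
    amounts = [float(p) for p in cleaned]
    # Output: the collected result, ready for use
    return sum(amounts)

print(data_processing_cycle(["$2.50 ", "$1.25", "$4.00"]))  # 7.75
```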

Activity 2.3
 Discuss the main differences between data and information with
examples.
 Can we process data manually using a pencil and paper? Discuss the
differences with data processing using the computer.
Data types and their representation
• A data type is simply an attribute of data that tells the compiler
or interpreter how the programmer intends to use the data.
Data types from a computer programming perspective
Common data types include:
 Integers (int) – used to store whole numbers
 Booleans (bool) – used to represent either true or false
 Characters (char) – used to store a single character
 Floating-point numbers (float) – used to store real numbers
 Alphanumeric strings (string) – used to store a combination
of characters and numbers
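A quick illustration in Python (note this is a sketch: Python has no separate char type, so a one-character string stands in for it):

```python
# Common data types, shown with Python's built-in types.
quantity = 3            # int: whole number
in_stock = True         # bool: true or false
grade = "A"             # one-character str, standing in for char
price = 12.99           # float: real number
product = "Cereal42"    # str: alphanumeric string
for value in (quantity, in_stock, grade, price, product):
    print(type(value).__name__, value)
```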


Data types and their representation ….Cont’d
Data types from a data analytics perspective
From a data analytics point of view, it is important to
understand that there are three common data types or
structures:
 Structured,
 Semi-structured, and
 Unstructured data types.
Fig. 2.2 below describes the three types of data and
metadata.

Figure 2.2 Data types from a data analytics perspective


Structured Data
• has a pre-defined data model and is therefore
straightforward to analyze.
• conforms to a tabular format with relationships
between the different rows and columns.
 Examples include Excel files and SQL databases.
Semi-structured Data
• is data that does not conform to the formal
structure of data models,
• however, it contains tags or other markers,
• and is therefore known as a self-describing structure.
 Examples include JSON and XML.
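As a small illustration (the record below is invented), a JSON document carries its own tags, so its structure travels with the data even though no table schema is imposed:

```python
import json

# A hypothetical semi-structured record: the tags ("name", "price",
# "tags") describe the data without a fixed relational schema.
record = '{"name": "cereal", "price": 4.5, "tags": ["teff", "wheat"]}'
parsed = json.loads(record)
print(parsed["name"], parsed["tags"])  # cereal ['teff', 'wheat']
```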
Unstructured Data
• is information that either does not have a
predefined data model or is not organized in a
predefined manner.
• Unstructured information is typically text-heavy
but may contain data such as dates, numbers, and
facts as well.
 Common examples of unstructured data include
 audio,
 video files, or
 NoSQL databases.
Metadata – Data about Data
• is data about data; it provides additional information
about a specific set of data.
 In a set of photographs, for example, metadata could
describe when and where the photos were taken.
• provides fields, such as dates and locations, which, by
themselves, can be considered structured data.
• For this reason, metadata is frequently used by Big
Data solutions for initial analysis.
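For instance (a minimal sketch with invented field names), the metadata attached to a photo can be represented as structured key-value fields, even though the image itself is unstructured:

```python
# Hypothetical metadata for a photo file: the pixel data is
# unstructured, but these descriptive fields are structured.
photo_metadata = {
    "filename": "IMG_0042.jpg",
    "taken_at": "2023-05-14T09:30:00",
    "location": {"lat": 7.05, "lon": 40.00},
}
print(photo_metadata["taken_at"], photo_metadata["location"])
```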
Activity 2.4
Discuss data types from programming and analytics
perspectives.
Compare metadata with structured, unstructured, and
semi-structured data.
Give at least one example each of structured, unstructured,
and semi-structured data types.
Data Value Chain
• is introduced to describe the information flow
within a big data system as a series of steps needed
to generate value and useful insights from data.
• The Big Data Value Chain identifies the following
key high-level activities: Data Acquisition, Data Analysis,
Data Curation, Data Storage, and Data Usage.
Data Acquisition
• process of gathering, filtering, and cleaning data before it is
put in a data warehouse on which data analysis can be carried
out.
• is one of the major big data challenges in terms of infrastructure
requirements.
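A minimal sketch of the acquisition step (the sample records and cleaning rules are invented for illustration): gathering raw values, filtering out invalid entries, and cleaning the rest before storage:

```python
# Gather, filter, and clean raw records before they reach storage.
raw_records = ["  42.5 ", "", "n/a", "17.0", "3.14  "]
cleaned = []
for r in raw_records:
    r = r.strip()                 # clean: trim whitespace
    if r and r != "n/a":          # filter: drop empty/invalid entries
        cleaned.append(float(r))  # convert to a usable numeric form
print(cleaned)  # [42.5, 17.0, 3.14]
```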
Data Analysis
• is concerned with making the raw data acquired amenable to
use in decision-making.
• involves exploring, transforming, and modeling data with the
goal of highlighting relevant data, synthesizing and extracting
useful hidden information with high potential from a business
point of view.
• Related areas include data mining, business intelligence, and
machine learning.
Data Curation
• active management of data over its life cycle to
ensure it meets the necessary data quality
requirements.
• It can be categorized into different activities such
as content creation, selection, classification,
transformation, validation, and preservation.
• is performed by expert curators.
• The curators have the responsibility of ensuring
that data are trustworthy, discoverable, accessible,
reusable, and fit for their purpose.
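One curation activity, validation, can be sketched in a couple of lines (the records and the quality rule are invented for illustration):

```python
# Validate records against a simple quality rule before preserving them.
records = [{"id": 1, "price": 4.5}, {"id": 2, "price": -1.0}]
valid = [r for r in records if r["price"] >= 0]  # rule: price must be non-negative
print(valid)  # only the first record passes
```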


Data Storage
• is the persistence and management of data in a scalable way.
• Relational Database Management Systems (RDBMS) have been the main,
and almost unique, solution to the storage paradigm for nearly 40 years.
• However, the ACID (Atomicity, Consistency, Isolation, and Durability)
properties that guarantee database transactions offer little flexibility with
regard to schema changes, and performance and fault tolerance suffer when
data volumes and complexity grow.
• NoSQL technologies have been designed with the scalability goal in mind
and present a wide range of solutions based on alternative data models.
Data Usage
• It covers the data-driven business activities that
need access to data, its analysis, and the tools
needed to integrate the data analysis within the
business activity.
• Data usage in business decision-making can
enhance competitiveness through the reduction of
costs, increased added value, or any other parameter
that can be measured against existing performance
criteria.

Activity 2.5
 Which information flow step in the data value chain do you
think is the most labor-intensive? Why?
 What are the different data types and their value chains?
Basic concepts of big data

Big data:
• is a blanket term for the non-traditional strategies and
technologies needed to gather, organize, and process large
datasets, and to gain insights from them.
• The pervasiveness, scale, and value of this type of
computing have greatly expanded in recent years.
Basic concepts of big data …
• is the term for a collection of data sets so large and complex
that they become difficult to process using on-hand database
management tools or traditional data processing applications.
• A “large dataset” means a dataset too large to reasonably
process or store.
• The scale of big datasets is constantly shifting and may vary
significantly from organization to organization.
Basic concepts of big data …
Big data is characterized by the 3Vs and more:
 Volume: large amounts of data (zettabyte-scale/massive
datasets)
 Velocity: data is live, streaming, or in motion
 Variety: data comes in many different forms from diverse
sources
 Veracity: can we trust the data? How accurate is it? etc.
Clustered Computing and Hadoop Ecosystem
 Clustered Computing
 Individual computers are often inadequate for handling big
data at most stages.
 To address this issue, computer clusters are a better fit.
 Big data clustering software combines the resources of many
smaller machines, seeking to provide a number of benefits:
• Resource Pooling: combining the available storage space
of the member machines
• High Availability: fault tolerance and availability
guarantees
• Easy Scalability: clusters make it easy to scale horizontally,
meaning the system can react to changes in resource
requirements without expanding the physical resources of
any one machine.
 Clustered Computing… cont’d
• Using clusters requires a solution for managing cluster
membership, coordinating resource sharing, and scheduling
actual work on individual nodes.
• Cluster membership and resource allocation can be handled
by software like Hadoop’s YARN (which stands for Yet Another
Resource Negotiator).
Activity 2.6
 List and discuss the characteristics of big data.
 Describe the big data life cycle. Which step do you think is
the most useful, and why?
 List and describe each technology or tool used in the big data
life cycle.
 Discuss the three methods of computing over a large dataset.
Hadoop and its Ecosystem
 Hadoop:
• is an open-source framework.
• intended to make interaction with big data easier.
• allows for the distributed processing of large datasets across
clusters of computers.
• It is inspired by technical papers published by Google (on the
Google File System and MapReduce).
 The four key characteristics of Hadoop are:
• Economical: ordinary computers can be used for data
processing.
• Reliable: stores copies of the data on different machines
and is resistant to hardware failure.
• Scalable: a few extra nodes help in scaling up the
framework.
• Flexible: you can store as much structured and unstructured
data as you need and decide how to use it later.
Hadoop and its Ecosystem …. Cont’d
• Hadoop has an ecosystem that has evolved from its four core
components: data management, access, processing, and storage.
• It comprises the following components and many others:
 HDFS: Hadoop Distributed File System
 YARN: Yet Another Resource Negotiator
 MapReduce: Programming based Data Processing
 Spark: In-Memory data processing
 PIG, HIVE: Query-based processing of data services
 HBase: NoSQL Database
 Mahout, Spark MLlib: Machine Learning algorithm
libraries
 Solr, Lucene: Searching and Indexing
 ZooKeeper: Managing the cluster
 Oozie: Job Scheduling
Figure 2.5 Hadoop Ecosystem

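As a minimal sketch of the MapReduce idea from the list above (plain Python for illustration; real Hadoop jobs use the Java or Streaming APIs), a word count maps each word to a count of 1 and then reduces by summing:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) pairs for every word in the input
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Reduce: sum the counts for each distinct word
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

print(reduce_phase(map_phase(["big data", "Big clusters"])))
# {'big': 2, 'data': 1, 'clusters': 1}
```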
Activity 2.7
 In groups, discuss the purpose of each Hadoop Ecosystem component.
Big Data Life Cycle with Hadoop
 Ingesting data into the system
 The first stage of the Big Data life cycle.
 The data is ingested into, or transferred to, Hadoop from various sources.
 Sqoop transfers data from RDBMS to HDFS, whereas Flume transfers event
data.
 Processing the data in storage
 In this stage, the data is stored and processed.
 It is stored in the distributed file system, HDFS, and in the NoSQL distributed
database, HBase.
 Spark and MapReduce perform the data processing.
 Computing and analyzing data
 The data is analyzed by processing frameworks such as Pig, Hive, and Impala.
 Pig converts the data using map and reduce operations and then analyzes it.
 Hive is also based on map and reduce programming and is best suited for
structured data.
 Visualizing the results
 In this stage, the analyzed data is made accessible to users.
 Visualization is performed by tools such as Hue and Cloudera Search.
Chapter Two Review Questions (Assignment 10%)
1. Define data science. What are the roles of a data scientist?
2. Discuss data and its types from computer programming
and data analytics perspectives.
3. Discuss the series of steps needed to generate value and
useful insights from data.
4. What is the principal goal of data science?
5. List out and discuss the characteristics of Big Data.
6. How do we ingest streaming data into a Hadoop cluster?
