Chapter 2 Data Science (4)
Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms, and
systems to extract knowledge and insights from structured, semi-structured and unstructured data.
Data science is much more than simply analyzing data. It offers a range of roles and requires a
range of skills.
Input − in this step, the input data is prepared in a form convenient for processing. The
form depends on the processing machine. For example, when electronic computers are
used, the input data can be recorded on any one of several types of storage media,
such as a hard disk, CD, flash disk and so on.
Processing − in this step, the input data is changed to produce data in a more useful form.
For example, interest can be calculated on a deposit to a bank, or a summary of sales for the
month can be calculated from the sales orders.
Output − at this stage, the result of the preceding processing step is collected. The
particular form of the output data depends on the use of the data. For example, the output data
may be the payroll for employees (a small sketch of this input-processing-output cycle is given below).
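To make the three steps concrete, here is a minimal Python sketch of the cycle using the interest and monthly-sales examples mentioned above. The file name sales.csv, its columns, and the interest rate are illustrative assumptions, not part of the chapter.

# Input: read raw records from a storage medium (here, a CSV file on disk).
import csv

def read_sales(path):
    with open(path, newline="") as f:
        return [{"month": r["month"], "amount": float(r["amount"])}
                for r in csv.DictReader(f)]

# Processing: turn the raw records into a more useful form (a monthly summary),
# and compute interest on a bank deposit.
def monthly_totals(records):
    totals = {}
    for r in records:
        totals[r["month"]] = totals.get(r["month"], 0.0) + r["amount"]
    return totals

def interest(deposit, annual_rate=0.05):   # 5% rate is an illustrative assumption
    return deposit * annual_rate

# Output: collect the processed result in the form the user needs (a printed report).
if __name__ == "__main__":
    sales = read_sales("sales.csv")        # hypothetical input file
    for month, total in monthly_totals(sales).items():
        print(month, total)
    print("Interest on a 1,000 deposit:", interest(1000))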
i. Structured Data
Structured data is data that adheres to a pre-defined data model and is therefore straightforward to
analyze. Structured data conforms to a tabular format with a relationship between the different
rows and columns. Common examples of structured data are Excel files or SQL databases. Each
of these has structured rows and columns that can be sorted.
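As a small illustration of structured data, the sketch below (Python with the built-in sqlite3 module) creates a table with a pre-defined set of columns and sorts its rows by one of them. The table name, column names, and sample values are made up for the example.

import sqlite3

# A structured table: every row follows the same pre-defined columns (schema).
conn = sqlite3.connect(":memory:")          # in-memory database for the example
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Abebe", "Finance", 9500.0), ("Sara", "IT", 11200.0), ("Lema", "HR", 8700.0)],
)

# Because the data is tabular, rows can be sorted and filtered by column.
for row in conn.execute("SELECT name, salary FROM employees ORDER BY salary DESC"):
    print(row)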
2.4.1. Data Acquisition
It is the process of gathering, filtering, and cleaning data before it is put in a data warehouse or any
other storage solution on which data analysis can be carried out. Data acquisition is one of the
major big data challenges in terms of infrastructure requirements. The infrastructure required to
support the acquisition of big data must deliver low, predictable latency both in capturing data
and in executing queries; be able to handle very high transaction volumes, often in a distributed
environment; and support flexible and dynamic data structures.
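A minimal Python sketch of the acquisition idea: gathering records from a source, filtering out the ones that cannot be used, and cleaning the rest before they are stored. The hard-coded record list and the field names are assumptions used only for illustration.

# Gather: records arriving from some source (here a hard-coded list stands in
# for a sensor feed, log stream, or external API).
raw_records = [
    {"id": 1, "temp": "21.5"},
    {"id": 2, "temp": ""},        # incomplete record
    {"id": 3, "temp": "19.0"},
]

def acquire(records):
    cleaned = []
    for r in records:
        if not r.get("temp"):     # Filter: drop records with missing values
            continue
        cleaned.append({"id": r["id"], "temp": float(r["temp"])})  # Clean: fix types
    return cleaned

# The cleaned records would then be loaded into a data warehouse or other storage.
print(acquire(raw_records))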
2.4.2. Data Analysis
It is concerned with making the acquired raw data amenable to use in decision-making as well as
domain-specific usage. Data analysis involves exploring, transforming, and modeling data with
the goal of highlighting relevant data and synthesizing and extracting useful hidden information with
high potential from a business point of view. Related areas include data mining, business
intelligence, and machine learning.
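As a toy illustration of exploring and transforming data to surface useful information, the Python sketch below aggregates sales records by region and flags the strongest one. The records and the region field are invented for the example.

from collections import defaultdict

sales = [
    {"region": "North", "amount": 120.0},
    {"region": "South", "amount": 340.0},
    {"region": "North", "amount": 210.0},
    {"region": "South", "amount": 95.0},
]

# Transform: aggregate raw rows into totals per region.
totals = defaultdict(float)
for s in sales:
    totals[s["region"]] += s["amount"]

# Highlight: extract the insight with the highest business potential.
best = max(totals, key=totals.get)
print(dict(totals))
print("Best-performing region:", best)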
2.4.3. Data Curation
It is the active management of data over its life cycle to ensure it meets the necessary data quality
requirements for its effective usage. Data curation processes can be categorized into different
activities such as content creation, selection, classification, transformation, validation, and
preservation.
Data curation is performed by expert curators who are responsible for improving the accessibility
and quality of data. Data curators (also known as scientific curators or data annotators) hold the
responsibility of ensuring that data are trustworthy, discoverable, accessible, reusable, and fit for their
purpose. A key trend in the curation of big data is the use of community and crowdsourcing
approaches.
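A small Python sketch of one curation activity, validation: checking that records meet simple quality rules before they are accepted and preserved. The rules and fields shown are assumptions; real curation pipelines are far richer.

# Validation rules a curator might apply before accepting a record.
def validate(record):
    errors = []
    if not record.get("id"):
        errors.append("missing id")
    if record.get("year") is not None and not (1900 <= record["year"] <= 2100):
        errors.append("implausible year")
    return errors

records = [{"id": "A1", "year": 2019}, {"id": "", "year": 3050}]
for r in records:
    problems = validate(r)
    print(r, "->", "ok" if not problems else problems)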
2.4.4. Data Storage
It is the persistence and management of data in a scalable way that satisfies the needs of
applications that require fast access to the data. Relational Database Management Systems
(RDBMS) have been the main, and almost only, solution to the storage paradigm for nearly 40
years. However, systems that guarantee the ACID (Atomicity, Consistency, Isolation, and Durability)
properties for database transactions lack flexibility with regard to schema changes, and their
performance and fault tolerance suffer as data volumes and complexity grow, making them unsuitable
for many big data scenarios. NoSQL technologies have been designed with scalability in mind
and offer a wide range of solutions based on alternative data models.
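To illustrate the contrast the paragraph draws, the Python sketch below stores similar records first in a fixed-schema SQL table (sqlite3) and then in a schema-flexible, document-style structure of the kind NoSQL systems use. It is a conceptual toy, not a real NoSQL deployment, and the table and field names are invented.

import sqlite3, json

# Relational storage: a fixed schema must be declared up front.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Hana')")
# Adding a new attribute later requires a schema change (ALTER TABLE).

# Document-style storage: each record carries its own structure,
# so new fields can appear without changing any schema.
documents = [
    {"id": 1, "name": "Hana"},
    {"id": 2, "name": "Bekele", "email": "bekele@example.com"},  # extra field, no ALTER needed
]
print(json.dumps(documents, indent=2))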
2.4.5. Data Usage
It covers the data-driven business activities that need access to data, its analysis, and the tools
needed to integrate the data analysis within the business activity. Data usage in business
decision-making can enhance competitiveness through the reduction of costs, increased added value, or any
other parameter that can be measured against existing performance criteria.
2.5.2. Clustered Computing and Hadoop Ecosystem
2.5.2.1. Clustered Computing
Because of the characteristics of big data, individual computers are often inadequate for handling the
data at most stages. Computer clusters are a better fit for the high storage and computational
needs of big data.
Big data clustering software combines the resources of many smaller machines, seeking to provide
a number of benefits:
Resource Pooling: Combining the available storage space to hold data is a clear benefit,
but CPU and memory pooling are also extremely important. Processing large datasets
requires large amounts of all three of these resources.
High Availability: Clusters can provide varying levels of fault tolerance and availability
guarantees to prevent hardware or software failures from affecting access to data and
processing. This becomes increasingly important as we continue to emphasize the
importance of real-time analytics.
Easy Scalability: Clusters make it easy to scale horizontally by adding additional machines
to the group. This means the system can react to changes in resource requirements without
expanding the physical resources on a machine.
Using clusters requires a solution for managing cluster membership, coordinating resource sharing,
and scheduling actual work on individual nodes. Cluster membership and resource allocation can
be handled by software like Hadoop’s YARN (which stands for Yet Another Resource
Negotiator).
The assembled computing cluster often acts as a foundation that other software interfaces with to
process the data. The machines involved in the computing cluster are also typically involved with
the management of a distributed storage system, which we will talk about when we discuss data
persistence.
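The Python sketch below is only an analogy for resource pooling: it splits one large job across several worker processes on a single machine using the standard multiprocessing module. A real cluster, coordinated by software such as YARN, applies the same idea across many machines and their combined storage, CPU, and memory.

from multiprocessing import Pool

def partial_sum(chunk):
    # Each "node" (worker process) handles one slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]     # split the work four ways
    with Pool(processes=4) as pool:             # pooled CPU resources
        total = sum(pool.map(partial_sum, chunks))
    print(total)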
Scalable: It is easily scalable, both horizontally and vertically. A few extra nodes help in
scaling up the framework.
Flexible: It is flexible; you can store as much structured and unstructured data as you
need and decide how to use it later.
Hadoop has an ecosystem that has evolved from its four core components: data management,
access, processing, and storage. It is continuously growing to meet the needs of Big Data. It
comprises the following components and many others:
HDFS: Hadoop Distributed File System
YARN: Yet Another Resource Negotiator
MapReduce: Programming-based data processing (a word-count sketch follows this list)
Spark: In-Memory data processing
PIG, HIVE: Query-based processing of data services
HBase: NoSQL Database
Mahout, Spark MLLib: Machine Learning algorithm libraries
Solr, Lucene: Searching and Indexing
Zookeeper: Managing the cluster
Oozie: Job Scheduling
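To make the MapReduce entry above concrete, here is a minimal, single-machine Python simulation of the map-reduce word-count pattern. On a real Hadoop cluster the map and reduce functions would run on different nodes over data stored in HDFS; the sample lines below are invented for the example.

from collections import defaultdict

def map_phase(line):
    # Map: emit (word, 1) pairs for each word in a line.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts for each word (the shuffle step groups keys).
    counts = defaultdict(int)
    for word, one in pairs:
        counts[word] += one
    return dict(counts)

lines = ["big data needs big clusters", "hadoop processes big data"]
pairs = [p for line in lines for p in map_phase(line)]
print(reduce_phase(pairs))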
2.5.3. Big Data Life Cycle with Hadoop
2.5.3.1. Ingesting data into the system
The first stage of Big Data processing is Ingest. The data is ingested or transferred to Hadoop from
various sources such as relational databases, systems, or local files. Sqoop transfers data from
RDBMS to HDFS, whereas Flume transfers event data.
2.5.3.2. Processing the data in storage
The second stage is processing. In this stage, the data is stored and processed. The data is stored in
the distributed file system, HDFS, and in the NoSQL distributed database, HBase. Spark and MapReduce
perform the data processing.
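Since Spark is named as one of the processing engines, here is a minimal PySpark sketch of the word-count idea from the previous section. It assumes PySpark is installed and that the input path (an HDFS or local file) exists; both the path and the application name are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

# Read lines from storage (an HDFS path on a real cluster).
lines = spark.sparkContext.textFile("hdfs:///data/input.txt")   # hypothetical path

counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

for word, n in counts.take(10):
    print(word, n)

spark.stop()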
2.5.3.3. Computing and analyzing data
The third stage is to Analyze. Here, the data is analyzed by processing frameworks such as Pig,
Hive, and Impala. Pig converts the data using map and reduce operations and then analyzes it. Hive is also
based on map and reduce programming and is most suitable for structured data.
2.5.3.4. Visualizing the results
The fourth stage is Access, which is performed by tools such as Hue and Cloudera Search. In this
stage, the analyzed data can be accessed by users.