
UNIT III MAP REDUCE FRAMEWORK 9

Developing a Map Reduce Application-How Map Reduce Works-Anatomy of a MapReduce Job Run-Failures-Job
Scheduling-Shuffle and Sort – Task execution - MapReduce Types and Formats- Map Reduce Features-Hadoop
Environment. YARN – Failures in Classic MapReduce and YARN – job scheduling – Shuffle and sort – Task
execution – MapReduce types – Input formats – Output formats.

How Map Reduce Works


Understanding MapReduce in Hadoop

MapReduce is a component of the Apache Hadoop ecosystem, a framework for large-scale data processing.
Other components of Apache Hadoop include the Hadoop Distributed File System (HDFS), YARN, and Apache Pig.

The MapReduce component processes massive volumes of data using distributed, parallel algorithms within the
Hadoop ecosystem. This programming model is applied on social platforms and in e-commerce to analyze the huge
amounts of data collected from online users.

This article provides an understanding of MapReduce in Hadoop. It will enable readers to gain insight into how vast
volumes of data are processed and how MapReduce is used in real-life applications.

Introduction to MapReduce in Hadoop

MapReduce is a Hadoop framework used for writing applications that can process vast amounts of data on large
clusters. It can also be described as a programming model for processing large datasets across clusters of computers.
It works on data stored in a distributed form and simplifies large-scale computation over enormous volumes of data.

There are two primary tasks in MapReduce: map and reduce. The map task is performed before the reduce task. In the
map job, the input dataset is split into chunks, and the map tasks process these chunks in parallel. The outputs of the
map tasks serve as inputs to the reduce tasks. Reducers process the intermediate data from the maps into a smaller
set of tuples, which forms the final output of the framework.

The MapReduce framework handles the scheduling and monitoring of tasks and re-executes failed tasks. The
framework can be used easily, even by programmers with little expertise in distributed processing. MapReduce
applications can be written using various languages and tools such as Java, Hive, Pig, Scala, and Python.
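
As an illustration (a minimal word-count sketch in Java, the most common language for Hadoop MapReduce; the class and variable names are chosen only for this example), the map task emits a (word, 1) pair for every word in its input split, and the reduce task sums the counts for each word:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Mapper: the key is the byte offset of a line in the split, the value is the line text.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);      // emit the intermediate pair (word, 1)
            }
        }
    }

    // Reducer: receives each word together with all the counts emitted for it by the mappers.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();                // aggregate the counts for this word
            }
            result.set(sum);
            context.write(key, result);        // final pair (word, total count)
        }
    }
}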

How MapReduce works in Hadoop


An overview of MapReduce Architecture and MapReduce’s phases will help us understand how MapReduce in
Hadoop works.

MapReduce architecture

The following diagram shows a MapReduce architecture.


Fig: MapReduce architecture

Components of MapReduce architecture

Job: This is the actual work that needs to be executed or processed.

Task: This is a piece of the actual work that needs to be executed or processed. A MapReduce job comprises many
small tasks that need to be executed.

Job Tracker: This tracker plays the role of scheduling jobs and tracking all jobs assigned to the task tracker.

Task Tracker: This tracker plays the role of tracking tasks and reporting the status of tasks to the job tracker.

Input data: This is the data processed in the mapping phase.
Output data: This is the final result obtained from mapping and reducing.

Client: This is a program or Application Programming Interface (API) that submits jobs to MapReduce. MapReduce
can accept jobs from many clients.

Hadoop MapReduce Master: This plays the role of dividing jobs into job-parts.
Job-parts: These are sub-jobs that result from the division of the main job.

In the MapReduce architecture, clients submit jobs to the MapReduce Master. This master will then sub-divide the
job into equal sub-parts. The job-parts will be used for the two main tasks in MapReduce:
1. Mapping and
2. Reducing.
The developer will write logic that satisfies the requirements of the organization or company. The input data will be
split and mapped.

The intermediate data will then be sorted and merged. The resulting output is processed by the reducer, which
generates a final output that is stored in HDFS.

Fig: Data flow in MapReduce program:
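
As a sketch of how a client submits such a job (it reuses the hypothetical WordCount classes from the earlier example; the input and output HDFS paths are supplied as command-line arguments), the driver below configures the map and reduce classes and waits for the job to complete:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);

        // The map and reduce classes from the word-count sketch shown earlier.
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setReducerClass(WordCount.IntSumReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));    // input files in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory in HDFS

        // Submit the job to the cluster and poll its progress until it finishes.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}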

How do the JobTracker and TaskTrackers work?


For every job submitted for execution in the system, there is one JobTracker that resides on the NameNode, and
there are multiple TaskTrackers which reside on DataNodes.
● A job is divided into multiple tasks which are then run onto multiple data nodes in a cluster.
● It is the responsibility of job tracker to coordinate the activity by scheduling tasks to run on different data
nodes.
● Execution of individual tasks is then looked after by the task tracker, which resides on every data node
executing part of the job.
● Task tracker’s responsibility is to send the progress report to the job tracker.
● In addition, the task tracker periodically sends a ‘heartbeat’ signal to the JobTracker so as to notify it of
the current state of the system.
● Thus the job tracker keeps track of the overall progress of each job. In the event of task failure, the job
tracker can reschedule it on a different task tracker.

Components of Task Tracker (TT): It consists of a map task and a reduce task. Task trackers report the status
of each assigned task to the job tracker. The following diagram summarizes how the job tracker and task trackers work.

Fig: Jobtracker and task trackers work

3.3. Anatomy of a MapReduce program


Input data is split into small subsets of data, and the map tasks work on these data splits. The intermediate output
from the map tasks is then passed to the reduce tasks after an intermediate process called 'shuffle'. The reduce task(s)
work on this intermediate data to generate the result of a MapReduce job.

Hadoop MapReduce jobs are divided into a set of map tasks and reduce tasks that run in a distributed fashion on a
cluster of computers. Each task works on a small subset of the data it has been assigned so that the load is spread
across the cluster.

The input to a MapReduce job is a set of files in the data store that are spread out over HDFS. In Hadoop, these
files are split by an input format, which defines how to separate the files into input splits. You can think of an input
split as a byte-oriented view of a chunk of the files to be loaded by a map task.
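
As a minimal sketch (additions to a driver such as the word-count driver above; the 128 MB figure is only an illustrative value), the input format and an upper bound on split size can be set explicitly. By default TextInputFormat is used, each split normally corresponds to one HDFS block, and one map task is launched per split:

import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// TextInputFormat is the default; setting it explicitly documents the choice.
job.setInputFormatClass(TextInputFormat.class);

// Cap splits at 128 MB so no single map task is handed a larger chunk.
FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);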

The map task generally performs loading, parsing, transformation and filtering operations, whereas the reduce
task is responsible for grouping and aggregating the data produced by the map tasks to generate the final output. In
this way, a wide range of problems can be solved with such a straightforward paradigm, from simple numerical
aggregation to complex join operations and Cartesian products.

Phases of MapReduce: A MapReduce program is executed in three main phases, with an optional fourth:


1. Mapping,
2. Shuffling,
3. Reducing.
4. Combiner phase (Optional phase)

1. Mapping Phase
● This is the first phase of the program.
● There are two steps in this phase:
● Splitting
● Mapping.

A dataset is split into equal units called chunks (input splits) in the splitting step. Hadoop provides a
RecordReader that uses TextInputFormat to transform the input splits into key-value pairs. These key-value pairs are
then used as inputs in the mapping step.

This is the only data format that a mapper can read or understand. The mapping step applies coding logic to these
data blocks: the mapper processes the key-value pairs and produces an output of the same form (key-value pairs).
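
For example (a hypothetical input line), with TextInputFormat a line such as "Hadoop processes big data" that starts at byte offset 120 of the input file reaches the mapper as the pair (120, "Hadoop processes big data"): the key is the byte offset of the line within the file and the value is the line itself.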

2. Shuffling phase

This is the second phase that takes place after the completion of the Mapping phase. It consists of two main steps:

1. Sorting
2. Merging.
● Sorting step: The key-value pairs are sorted using the keys.
● Merging step: It ensures that key-value pairs are combined.

The shuffling phase groups together the values that share a key, merging duplicate keys into a single entry. The
output of this phase is again a set of keys and values, just like in the Mapping phase.
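
For instance, in a hypothetical word-count run in which the mappers emit (cat, 1), (dog, 1) and (cat, 1), the shuffling phase sorts the pairs by key and merges them so that the reducer receives (cat, [1, 1]) and (dog, [1]).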

3. Reducer phase

In the reducer phase, the output of the shuffling phase is used as the input. The reducer processes this input further
to reduce the intermediate values into smaller values. It provides a summary of the entire dataset. The output from
this phase is stored in the HDFS.

Example of a MapReduce with the three main phases. Splitting is often included in the mapping stage.

Combiner phase

This is an optional phase that is used to optimize the MapReduce process. It reduces the volume of map output
at the node level: duplicate map outputs are combined into a single output before they are transferred. The combiner
phase speeds up the Shuffling phase and improves the overall performance of jobs.
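
As a minimal sketch (one line added to a driver such as the word-count driver shown earlier), enabling a combiner is a single configuration call; for word count the reducer class can double as the combiner because summing partial counts is associative and commutative:

// Run partial aggregation on each node before the map output is shuffled.
// Reusing IntSumReducer as the combiner is safe here because addition is
// associative and commutative; other jobs may need a dedicated combiner class.
job.setCombinerClass(WordCount.IntSumReducer.class);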
Output Format:

The output format takes the final key/value pairs from the reduce function and writes them out to a file through a
record writer. By default, it separates the key and value with a tab and separates records with a newline character.
How to write your own customized output format is discussed in a later article.
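
As an illustrative sketch (the property name below is the one used by TextOutputFormat in Hadoop 2 and later, and the setting must be placed on the Configuration before the Job object is created), the default tab separator can be overridden:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// TextOutputFormat writes one "key<TAB>value" record per line by default.
Configuration conf = new Configuration();
conf.set("mapreduce.output.textoutputformat.separator", ",");   // use a comma instead

Job job = Job.getInstance(conf, "word count with comma-separated output");
job.setOutputFormatClass(TextOutputFormat.class);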

The following diagram shows how all four phases of MapReduce are applied.

Benefits of Hadoop MapReduce


● Speed: MapReduce can process huge unstructured data in a short time.
● Fault-tolerance: The MapReduce framework can handle failures.
● Cost-effective: Hadoop has a scale-out feature that enables users to process or store data in a cost-effective
manner.
● Scalability: Hadoop provides a highly scalable framework. MapReduce allows users to run applications across
many nodes.
● Data availability: Replicas of data are sent to various nodes within the network. This ensures copies of the
data are available in the event of failure.
● Parallel Processing: In MapReduce, multiple job-parts of the same dataset can be processed in a parallel
manner. This reduces the time taken to complete a task.

Applications of Hadoop MapReduce


The following are some of the practical applications of the MapReduce program.

E-commerce: E-commerce companies such as Walmart, eBay, and Amazon use MapReduce to analyze buying
behavior. MapReduce provides meaningful information that is used as the basis for developing product
recommendations. Some of the information used includes site records, e-commerce catalogs, purchase history, and
interaction logs.

Social networks: The MapReduce programming tool can evaluate certain information on social media platforms
such as Facebook, Twitter, and LinkedIn. It can evaluate important information such as who liked your status and
who viewed your profile.

Entertainment: Netflix uses MapReduce to analyze the clicks and logs of online customers. This information helps
the company suggest movies based on customers’ interests and behavior.

Conclusion: MapReduce is a crucial processing component of the Hadoop framework. It’s a quick, scalable, and
cost-effective program that can help data analysts and developers process huge data.

This programming model is a suitable tool for analyzing usage patterns on websites and e-commerce platforms.
Companies providing online services can utilize this framework to improve their marketing strategies.

Big Data Assignment - III

1. Explain architecture and components of Map Reduce Framework with its phases by an example process.(20)
2. Explain How Job Tracker(JT) and Task Trackers(TT) work in Map Reduce Framework?(20)
3. Explain Shuffling and Sorting and the types of I/O Formats in MapReduce process.(20)
4. Explain Task Execution in Hadoop YARN with its workflow architecture.(20)
5. Explain job scheduling types in YARN Framework.(20)
