Hadoop Interview Questions

BIG DATA

Big data is a collection of data sets so huge and complex that it becomes very tedious to
capture, store, process, retrieve and analyze them with on-hand database management
tools or traditional data processing techniques. In fact, the notion of BIG DATA may vary from
company to company depending upon its size, capacity, competence, human resources, techniques
and so on. For some companies it may be a cumbersome job to manage a few gigabytes, while for
others it may be some terabytes creating a hassle for the entire organization.

Big Data is characterized by: Volume, Velocity and Variety!


1. Volume: BIG DATA depends upon how gigantic it is. It could amount to hundreds of terabytes or
even petabytes of information. For instance, 15 terabytes of Facebook posts or 400 billion annual
medical records could mean Big Data!
2. Velocity: Velocity means the rate at which data is flowing into the companies. Big data requires fast
processing. The time factor plays a very crucial role in several organizations. For instance, processing 2
million records at the share market or evaluating the results of lakhs of students who applied for competitive
exams could mean Big Data!
3. Variety: Big Data may not belong to a specific format. It could be in any form such as structured,
unstructured, text, images, audio, video, log files, emails, simulations, 3D models, etc. New research
shows that a substantial amount of an organization's data is not numeric; however, such data is
equally important for the decision-making process. So, organizations need to think beyond stock
records, documents, personnel files, finances, etc.

Big Data Opportunities


Why is it important to harness Big Data?
Data has never been as crucial as it is today! In fact, we can see a transition from the
old saying "Customer is King" to "Data is King"! This is because, for efficient decision making,
it is very important to analyze the right amount and the right type of data. Companies, whether in
healthcare, banking, the public sector, pharmaceuticals or IT, all need to look beyond the concrete data
stored in their databases and study the intangible data in the form of sensors, images, weblogs, etc.
In fact, what sets smart organizations apart from others is their ability to scan data effectively to
allocate resources properly, increase productivity and inspire innovation!

Some points why Big Data analysis is crucial:


1. Just like labor and capital, data has become one of the factors of production in almost all
industries.
2. Big data can unveil some really useful and crucial information which can change the decision-making
process entirely into a more fruitful one.
3. Big data makes customer segmentation easier and more visible, enabling companies to focus
on more profitable and loyal customers.
4. Big data can be an important criterion for deciding upon the next line of products and services
required by future customers. Thus, companies can follow a proactive approach at every step.
5. The way in which big data is explored and used can directly impact the growth and development
of an organization and give it an edge over its competitors. Data-driven strategies are
fast becoming the latest trend at the management level!

How to Harness Big Data?


As the name suggests, it is not an easy task to capture, store, process and analyze big data.
Optimizing big data is a daunting affair that requires a robust infrastructure and state-of-the-art
technology which should take care of the privacy, security, intellectual property, and even liability
issues related to Big Data. Big data will help you answer those questions that have been lingering for a
long time! It is not the amount of big data that matters the most; it is what you are able to do with it
that draws the line between the achievers and the losers.
Some Recent Technologies:
Companies are relying on the following technologies to do Big data analysis:
Speedy and efficient processors.
Modern storage and processing technologies, especially for unstructured data
Robust server processing capacities
Cloud computing
Clustering, high connectivity, parallel processing, MPP
Apache Hadoop/ Hadoop Big Data

HDFS
What is BIG DATA?
Big Data is nothing but an assortment of such a huge and complex data that it becomes very
tedious to capture, store, process, retrieve and analyze it with the help of on-hand database
management tools or traditional data processing techniques.

Can you give some examples of Big Data?


There are many real-life examples of Big Data! Facebook is generating 500+ terabytes of
data per day, NYSE (New York Stock Exchange) generates about 1 terabyte of new trade data per
day, and a jet airline collects 10 terabytes of sensor data for every 30 minutes of flying time. All these are
day-to-day examples of Big Data!

Can you give a detailed overview about the Big Data being generated by
Facebook?
As of December 31, 2012, there are 1.06 billion monthly active users on Facebook and 680
million mobile users. On an average, 3.2 billion likes and comments are posted every day on
Facebook. 72% of the web audience is on Facebook. And why not! There are so many activities going
on on Facebook: wall posts, sharing images and videos, writing comments, liking posts, etc. In fact,
Facebook started using Hadoop in mid-2009 and was one of its initial users.

According to IBM, what are the three characteristics of Big Data?


According to IBM, the three characteristics of Big Data are: Volume: Facebook generating
500+ terabytes of data per day. Velocity: Analyzing 2 million records each day to identify the reason
for losses. Variety: images, audio, video, sensor data, log files, etc.

How Big is Big Data?


With time, data volume is growing exponentially. Earlier we used to talk about megabytes or
gigabytes. But the time has arrived when we talk about data volume in terms of terabytes, petabytes
and even zettabytes! Global data volume was around 1.8 ZB in 2011 and is expected to be 7.9 ZB in
2015. It is also known that global information doubles every two years!

How analysis of Big Data is useful for organizations?


Effective analysis of Big Data provides a lot of business advantage as organizations will
learn which areas to focus on and which areas are less important. Big data analysis provides some
early key indicators that can prevent a company from a huge loss or help in grasping a great
opportunity with open arms! A precise analysis of Big Data helps in decision making. For instance,
nowadays people rely so much on Facebook and Twitter before buying any product or service. All
thanks to the Big Data explosion.

Who are Data Scientists?


Data scientists are soon replacing business analysts or data analysts. Data scientists are
experts who find solutions to analyze data. Just as in web analysis, we have data scientists who have
good business insight into how to handle a business challenge. Sharp data scientists are not only
involved in dealing with business problems, but also in choosing the relevant issues that can bring value addition to the organization.

What is Hadoop?
Hadoop is a framework that allows for distributed processing of large data sets across
clusters of commodity computers using a simple programming model.
Technically speaking, Hadoop is an open-source software framework that supports data-intensive
distributed applications. Hadoop is licensed under the Apache v2 license. It is therefore generally
known as Apache Hadoop. Hadoop has been developed based on a paper originally written by
Google on the MapReduce system and applies concepts of functional programming. Hadoop is written
in the Java programming language and is a top-level Apache project built and
used by a global community of contributors. Hadoop was developed by Doug Cutting and Michael J.
Cafarella. And the charming yellow elephant you see is basically named after Doug's son's toy
elephant!

Hadoop Ecosystem:
Once you are familiar with what Hadoop is, let's probe into its ecosystem. The Hadoop Ecosystem is
nothing but the various components that make Hadoop so powerful, among which HDFS and
MapReduce are the core components!

1. HDFS:
The Hadoop Distributed File System (HDFS) is a very robust feature of Apache Hadoop.
HDFS is designed to store gigantic amounts of data reliably, to transfer the data at an
amazing speed among nodes, and to let the system continue working smoothly even if some of
the nodes fail to function. HDFS is very competent at storing data, handling its allocation across nodes,
serving it to processing jobs and persisting the final outcomes. In fact, HDFS manages around 40 petabytes
of data at Yahoo! The key components of HDFS are the NameNode, DataNodes and Secondary
NameNode.

2. MapReduce:
It all started with Google applying the concept of functional programming to solve the
problem of how to manage large amounts of data on the internet. Google named it the
MapReduce system and described it in a paper it published. With the ever-increasing
amount of data generated on the web, MapReduce was created in 2004, and Yahoo stepped in to
develop Hadoop in order to implement the MapReduce technique. The
function of MapReduce is to help in searching and indexing a large quantity of web pages
in a matter of a few seconds or even a fraction of a second. The key components of MapReduce
are the JobTracker, TaskTrackers and JobHistoryServer.

3. Apache Pig:
Apache Pig is another component of Hadoop, which is used to analyze huge data sets
using a high-level language. In fact, Pig was initiated with the idea of creating and executing
commands on Big Data sets. The basic attribute of Pig programs is parallelization, which helps
them to manage large data sets. Apache Pig consists of a compiler that generates a series of
MapReduce programs and a Pig Latin language layer that facilitates SQL-like queries to be run on
distributed data in Hadoop.

4. Apache Hive:
As the name suggests, Hive is Hadoop's data warehouse system. It enables quick data
summarization for Hadoop, handles queries and evaluates huge data sets located in
Hadoop's file systems, and maintains full support for map/reduce. Another striking feature of
Apache Hive is that it provides indexes, such as bitmap indexes, in order to speed up queries. Apache
Hive was originally developed by Facebook, but it is now developed and used by other companies
too, including Netflix.

5. Apache HCatalog
Apache HCatalog is another important component of Apache Hadoop. It provides a table
and storage management service for data created with the help of Apache Hadoop. HCatalog offers
features like a shared schema and data type mechanism, a table abstraction for users, and smooth
interoperation with other components of Hadoop such as Pig, MapReduce, Streaming, and
Hive.

6. Apache HBase
HBase stands for Hadoop Database. HBase is a distributed, column-oriented database
that uses HDFS for storage. On one hand it supports batch-style computations using
MapReduce and on the other hand it handles point queries (random reads). The key components of
Apache HBase are the HBase Master and the RegionServer.
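
As a rough illustration (not from this document), the classic pre-1.0 HBase Java client can be used along these lines; the table name "users" and column family "info" are made-up examples:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml from the classpath
        HTable table = new HTable(conf, "users");            // hypothetical table name

        // Random write: a point insert into the "info" column family
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("alice"));
        table.put(put);

        // Random read: a point query on the same row key
        Result result = table.get(new Get(Bytes.toBytes("row1")));
        System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));

        table.close();
    }
}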

7. Apache Zookeeper
Apache ZooKeeper is another significant part of the Hadoop ecosystem. Its major function is to keep a
record of configuration information and naming, and to provide distributed synchronization and
group services, which are immensely crucial for various distributed systems. In fact, HBase is
dependent upon ZooKeeper for its functioning.

Why the name Hadoop?


Hadoop is not an acronym and does not have any expanded form. The charming yellow elephant you
see is basically named after Doug's son's toy elephant!

Why do we need Hadoop?


Every day a large amount of unstructured data is getting dumped into our machines. The
major challenge is not to store large data sets in our systems but to retrieve and analyze the big data
in the organizations, and that too for data present in different machines at different locations. In this situation
a necessity for Hadoop arises. Hadoop has the ability to analyze the data present in different
machines at different locations very quickly and in a very cost-effective way. It uses the concept of
MapReduce, which enables it to divide the query into small parts and process them in parallel. This is
also known as parallel computing.

What are some of the characteristics of Hadoop framework?


The Hadoop framework is written in Java. It is designed to solve problems that involve analyzing
large data (e.g. petabytes). The programming model is based on Google's MapReduce. The
infrastructure is based on Google's distributed file system (GFS). Hadoop handles large
files/data throughput and supports data-intensive distributed applications. Hadoop is scalable, as
more nodes can easily be added to it.

Give a brief overview of Hadoop history.


In 2002, Doug Cutting created an open-source web crawler project (Nutch). In 2004, Google
published its MapReduce and GFS papers. In 2006, Doug Cutting developed the open-source MapReduce
and HDFS project. In 2008, Yahoo ran a 4,000-node Hadoop cluster and Hadoop won the terabyte sort
benchmark. In 2009, Facebook launched SQL support for Hadoop.

Give examples of some companies that are using Hadoop structure?


A lot of companies are using the Hadoop structure such as Cloudera, EMC, MapR,
Hortonworks, Amazon, Facebook, eBay, Twitter, Google and so on.

What is the basic difference between traditional RDBMS and Hadoop?


A traditional RDBMS is used for transactional systems, to report and archive the data,
whereas Hadoop is an approach for storing huge amounts of data in a distributed file system and
processing it. An RDBMS will be useful when you want to seek one record from big data, whereas
Hadoop will be useful when you want big data in one shot and will perform analysis on it later.

What is structured and unstructured data?


Structured data is the data that is easily identifiable as it is organized in a structure. The most
common form of structured data is a database where specific information is stored in tables, that
is, rows and columns. Unstructured data refers to any data that cannot be identified easily. It could
be in the form of images, videos, documents, email, logs and random text. It is not in the form of
rows and columns.

What are the core components of Hadoop?


Core components of Hadoop are HDFS and MapReduce. HDFS is basically used to store
large data sets and MapReduce is used to process such large data sets.

What is HDFS?
HDFS is a file system designed for storing very large files with streaming data access
patterns, running on clusters of commodity hardware.
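
For illustration only, a client can exercise this write-once/read-many pattern through the FileSystem Java API; the path below is a hypothetical example:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();        // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/user/demo/sample.txt");   // hypothetical path

        // Write once
        FSDataOutputStream out = fs.create(path);
        out.writeUTF("hello hdfs");
        out.close();

        // Read many (streaming access from the start of the file)
        FSDataInputStream in = fs.open(path);
        System.out.println(in.readUTF());
        in.close();
    }
}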

What are the key features of HDFS?


HDFS is highly fault-tolerant, with high throughput, suitable for applications with large data
sets, streaming access to file system data and can be built out of commodity hardware.

What is Fault Tolerance?


Suppose you have a file stored in a system, and due to some technical problem that file gets
destroyed. Then there is no way to get back the data present in that file. To avoid such
situations, Hadoop has introduced the feature of fault tolerance in HDFS. In Hadoop, when we store

a file, it automatically gets replicated at two other locations as well. So even if one or two of the
systems collapse, the file is still available on the third system.

Replication causes data redundancy, then why is it pursued in HDFS?


HDFS works with commodity hardware (systems with average configurations) that has high
chances of crashing at any time. Thus, to make the entire system highly fault-tolerant, HDFS
replicates and stores data in different places. Any data on HDFS gets stored at at least 3 different
locations. So, even if one of them is corrupted and another is unavailable for some time for any
reason, the data can still be accessed from the third one. Hence, there is no chance of losing the data.
This replication factor helps us to attain the Hadoop feature called fault tolerance.

Since the data is replicated thrice in HDFS, does it mean that any calculation
done on one node will also be replicated on the other two?
Since there are 3 nodes, when we send the MapReduce programs, calculations will be done
only on the original data. The master node will know which node exactly has that particular data. If
one of the nodes is not responding, it is assumed to have failed. Only then will the required
calculation be done on the second replica.

What is throughput? How does HDFS get a good throughput?


Throughput is the amount of work done in a unit time. It describes how fast the data is
getting accessed from the system and it is usually used to measure performance of the system. In
HDFS, when we want to perform a task or an action, then the work is divided and shared among
different systems. So all the systems will be executing the tasks assigned to them independently and
in parallel. So the work will be completed in a very short period of time. In this way, the HDFS gives
good throughput. By reading data in parallel, we decrease the actual time to read data tremendously.

What is streaming access?


As HDFS works on the principle of Write Once, Read Many, the feature of streaming access
is extremely important in HDFS. HDFS focuses not so much on storing the data but how to retrieve it
at the fastest possible speed, especially while analyzing logs. In HDFS, reading the complete data is
more important than the time taken to fetch a single record from the data.

What is a commodity hardware? Does commodity hardware include RAM?


Commodity hardware is an inexpensive system which is not of high quality or high availability. Hadoop can be installed on any average commodity hardware. We don't need supercomputers or high-end hardware to work on Hadoop. Yes, commodity hardware includes RAM,
because there will be some services running in RAM.

What is a Namenode?
Namenode is the master node on which the job tracker runs; it holds the metadata. It
maintains and manages the blocks which are present on the datanodes. It is a high-availability
machine and the single point of failure in HDFS.

Is Namenode also a commodity?


No. Namenode can never be commodity hardware because the entire HDFS relies on it. It is
the single point of failure in HDFS. Namenode has to be a high-availability machine.

What is a metadata?
Metadata is the information about the data stored in datanodes such as location of the file,
size of the file and so on.

What is a Datanode?
Datanodes are the slaves which are deployed on each machine and provide the actual
storage. These are responsible for serving read and write requests for the clients.

Why do we use HDFS for applications having large data sets and not when there
are lots of small files?
HDFS is more suitable for a large amount of data in a single file than for small
amounts of data spread across multiple files. This is because the Namenode is a very expensive, high-performance system, so it is not prudent to fill it up with the unnecessary
amount of metadata that gets generated for a multitude of small files. When there is a large amount of
data in a single file, the Namenode occupies less space. Hence, for optimized performance,
HDFS favors large data sets over many small files.

What is a daemon?
A daemon is a process or service that runs in the background. In general, we use this word in
the UNIX environment. The equivalent of a daemon in Windows is a service, and in DOS it is a TSR.

What is a job tracker?


The job tracker is a daemon that runs on the namenode for submitting and tracking MapReduce
jobs in Hadoop. It assigns the tasks to the different task trackers. In a Hadoop cluster, there will be
only one job tracker but many task trackers. It is the single point of failure for the Hadoop
MapReduce service: if the job tracker goes down, all the running jobs are halted. It receives
heartbeats from the task trackers, based on which the job tracker decides whether an assigned task is
completed or not.

What is a task tracker?


The task tracker is also a daemon, and it runs on datanodes. Task trackers manage the execution
of individual tasks on the slave nodes. When a client submits a job, the job tracker will initialize the job,
divide the work and assign the pieces to different task trackers to perform MapReduce tasks. While
performing this work, the task tracker simultaneously communicates with the job tracker by
sending heartbeats. If the job tracker does not receive a heartbeat from a task tracker within the specified
time, it will assume that the task tracker has crashed and will assign that task to another task tracker in
the cluster.

Is Namenode machine same as datanode machine as in terms of hardware?


It depends upon the cluster you are trying to create. The Hadoop VM can be there on the
same machine or on another machine. For instance, in a single node cluster, there is only one
machine, whereas in the development or in a testing environment, Namenode and datanodes are on
different machines.

What is a heartbeat in HDFS?


A heartbeat is a signal indicating that a node is alive. A datanode sends heartbeats to the Namenode
and a task tracker sends its heartbeats to the job tracker. If the Namenode or job tracker does not
receive heartbeats, they conclude that there is some problem with the datanode or that the task tracker is
unable to perform the assigned task.

Are Namenode and job tracker on the same host?


No, in a practical environment, the Namenode is on one host and the job tracker is on a
separate host.

What is a block in HDFS?


A block is the minimum amount of data that can be read or written. In HDFS, the default
block size is 64 MB, in contrast to the block size of 8192 bytes in Unix/Linux. Files in HDFS are
broken down into block-sized chunks, which are stored as independent units. HDFS blocks are large
compared to disk blocks, particularly to minimize the cost of seeks. If a particular file is 50 MB,
will the HDFS block still consume 64 MB as the default size? No, not at all! 64 MB is just the unit in
which the data is stored. In this particular situation, only 50 MB will be consumed by the HDFS
block and 14 MB will be free to store something else. It is the MasterNode that does data allocation
in an efficient manner.
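
A small stand-alone sketch of the arithmetic described above (this is not HDFS code, just an illustration of how a file maps onto 64 MB blocks; the class name is invented):

public class BlockMath {
    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;   // default HDFS block size (64 MB)
        long fileSize  = 50L * 1024 * 1024;   // the 50 MB file from the example above

        long fullBlocks = fileSize / blockSize;   // 0 full 64 MB blocks
        long lastBlock  = fileSize % blockSize;   // 50 MB stored in the final (partial) block

        System.out.println("Full 64 MB blocks: " + fullBlocks);
        System.out.println("Last block holds : " + lastBlock / (1024 * 1024) + " MB");
        // The partial block occupies only 50 MB on disk; the remaining 14 MB is not reserved.
    }
}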

What are the benefits of block transfer?


A file can be larger than any single disk in the network. There's nothing that requires the
blocks from a file to be stored on the same disk, so they can take advantage of any of the disks in
the cluster. Making the unit of abstraction a block rather than a file simplifies the storage

subsystem. Blocks provide fault tolerance and availability. To insure against corrupted blocks and
disk and machine failure, each block is replicated to a small number of physically separate machines
(typically three). If a block becomes unavailable, a copy can be read from another location in a way
that is transparent to the client.

If we want to copy 10 blocks from one machine to another, but another machine
can copy only 8.5 blocks, can the blocks be broken at the time of replication?
In HDFS, blocks cannot be broken down. Before copying the blocks from one machine to
another, the master node will figure out the actual amount of space required, how many
blocks are being used and how much space is available, and it will allocate the blocks accordingly.

How indexing is done in HDFS?


Hadoop has its own way of indexing. Depending upon the block size, once the data is
stored, HDFS keeps storing the last part of the data, which indicates where the next part of the
data will be. In fact, this is the basis of HDFS.

If a datanode is full, how is it identified?


When data is stored in datanode, then the metadata of that data will be stored in the
Namenode. So Namenode will identify if the data node is full.

If datanodes increase, then do we need to upgrade Namenode?


While installing the Hadoop system, the Namenode is determined based on the size of the
cluster. Most of the time we do not need to upgrade the Namenode because it does not store the
actual data, only the metadata, so such a requirement rarely arises.

Are job tracker and task trackers present in separate machines?


Yes, the job tracker and task trackers are present on different machines. The reason is that the job
tracker is a single point of failure for the Hadoop MapReduce service. If it goes down, all running jobs are
halted.

When we send data to a node, do we allow settling-in time before sending
more data to that node?


Yes, we do.

Does Hadoop always require digital data to process?


Yes. Hadoop always requires digital data to be processed.

On what basis Namenode will decide which datanode to write on?


As the Namenode has the metadata (information) related to all the data nodes, it knows
which datanode is free.

Doesn't Google have its very own version of DFS?


Yes, Google owns a DFS known as Google File System (GFS) developed by Google Inc.
for its own use.

Who is a user in HDFS?


A user is like you or me, who has some query or who needs some kind of data.

Is client the end user in HDFS?


No, Client is an application which runs on your machine, which is used to interact with the
Namenode (job tracker) or datanode (task tracker).

What is the communication channel between client and namenode/datanode?


The client communicates with the Namenode and datanodes over TCP/IP using Hadoop's RPC mechanism; SSH is used only by the cluster scripts to start and stop the daemons.

What is a rack?
A rack is a physical collection of datanodes stored at a single location; the datanodes in a rack
are housed together, while different racks can be physically located at different places.
There can be multiple racks at a single location.

On what basis data will be stored on a rack?


When the client is ready to load a file into the cluster, the content of the file will be divided
into blocks. The client then consults the Namenode and gets 3 datanodes for every block of the file,
which indicates where each block should be stored. While placing the blocks on datanodes, the key rule followed
is: for every block of data, two copies will exist in one rack and the third copy in a different rack. This rule is
known as the Replica Placement Policy.

Do we need to place 2nd and 3rd data in rack 2 only?


Yes, this is to avoid datanode failure.

What if rack 2 and datanode fails?


If both rack 2 and the datanode present in rack 1 fail, then there is no chance of getting the data
back. In order to avoid such situations, we need to replicate the data more times
instead of replicating it only thrice. This can be done by changing the value of the replication factor, which
is set to 3 by default.
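
Two hedged ways of raising the replication factor; the property name dfs.replication is the Hadoop 1.x one, and the file path below is hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationSketch {
    public static void main(String[] args) throws Exception {
        // Cluster-wide default for newly written files (normally set in hdfs-site.xml)
        Configuration conf = new Configuration();
        conf.setInt("dfs.replication", 4);

        // Per-file change for data that already exists
        FileSystem fs = FileSystem.get(conf);
        fs.setReplication(new Path("/user/demo/important.log"), (short) 4);   // hypothetical path
    }
}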

What is a Secondary Namenode? Is it a substitute to the Namenode?


The Secondary Namenode periodically reads the file system metadata from the RAM of the Namenode and
writes it into the hard disk or the file system as a checkpoint. It is not a substitute for the Namenode, so if the
Namenode fails, the entire Hadoop system goes down.

What is the difference between Gen1 and Gen2 Hadoop with regards to the
Namenode?
In Gen 1 Hadoop, the Namenode is the single point of failure. In Gen 2 Hadoop, we have an
Active and Passive Namenode structure: if the active Namenode fails, the passive
Namenode takes over.

What is MapReduce?
MapReduce is the heart of Hadoop and consists of two parts: map and reduce. Maps
and reduces are programs for processing data. The map processes the data first to give an
intermediate output, which is further processed by the reduce to generate the final output.
Thus, MapReduce allows for distributed processing of the map and reduction operations.

Can you explain how do map and reduce work?


The Namenode takes the input, divides it into parts and assigns them to data nodes. These
datanodes process the tasks assigned to them, produce key-value pairs and return the
intermediate output to the reducer. The reducer collects the key-value pairs from all the datanodes,
combines them and generates the final output.

What is Key value pair in HDFS?


A key-value pair is the intermediate data generated by maps and sent to reduces for
generating the final output.

What is the difference between MapReduce engine and HDFS cluster?


HDFS cluster is the name given to the whole configuration of master and slaves where data
is stored. Map Reduce Engine is the programming module which is used to retrieve and analyze
data.

Is map like a pointer?


No, Map is not like a pointer.

Do we require two servers for the Namenode and the datanodes?


Yes, we need two different kinds of servers for the Namenode and the datanodes. This is because
the Namenode requires a highly configured system, as it stores information about the location details of

all the files stored in different datanodes, whereas the datanodes only require low-configuration
systems.

Why are the number of splits equal to the number of maps?


The number of maps is equal to the number of input splits because we want the key and
value pairs of all the input splits.

Is a job split into maps?


No, a job is not split into maps. A split is created for the file. The file is placed on datanodes in
blocks. For each split, a map is needed.

Which are the two types of writes in HDFS?


There are two types of writes in HDFS: posted and non-posted writes. A posted write is when
we write it and forget about it, without worrying about the acknowledgement. It is similar to our
traditional Indian post. In a non-posted write, we wait for the acknowledgement. It is similar to
today's courier services. Naturally, a non-posted write is more expensive than a posted write,
though both kinds of writes are asynchronous.

Why Reading is done in parallel and Writing is not in HDFS?


Reading is done in parallel because by doing so we can access the data fast. But we do not
perform the write operation in parallel. The reason is that if we performed the write operation in parallel,
it might result in data inconsistency. For example, if you have a file and two nodes are trying to
write data into the file in parallel, then the first node does not know what the second node has written
and vice versa. So this makes it unclear which data should be stored and accessed.

Can Hadoop be compared to NOSQL database like Cassandra?


Though NOSQL is the closest technology that can be compared to Hadoop, it has its own
pros and cons. There is no DFS in NOSQL. Hadoop is not a database: it is a file system (HDFS) and a
distributed programming framework (MapReduce).

HADOOP CLUSTER
Which are the three modes in which Hadoop can be run?
The three modes in which Hadoop can be run are:
1. Standalone (local) mode
2. Pseudo-distributed mode
3. Fully distributed mode

What are the features of Standalone (local) mode?


In stand-alone mode there are no daemons, everything runs on a single JVM. It has no DFS
and utilizes the local file system. Stand-alone mode is suitable only for running MapReduce
programs during development. It is one of the least used environments.

What are the features of Pseudo mode?


Pseudo mode is used both for development and in the QA environment. In the Pseudo mode
all the daemons run on the same machine.

Can we call VMs as pseudos?


No, VMs are not pseudos, because a VM is something different and pseudo is very specific to
Hadoop.

What are the features of Fully Distributed mode?


Fully Distributed mode is used in the production environment, where we have n number of
machines forming a Hadoop cluster. Hadoop daemons run on a cluster of machines. There is one
host onto which Namenode is running and another host on which datanode is running and then there
are machines on which task tracker is running. We have separate masters and separate slaves in
this distribution.

Does Hadoop follow the UNIX pattern?


Yes, Hadoop closely follows the UNIX pattern. Hadoop also has the conf directory as in the
case of UNIX.

In which directory Hadoop is installed?


Cloudera and Apache have the same directory structure. Hadoop is installed in
/usr/lib/hadoop-0.20/.

What are the port numbers of Namenode, job tracker and task tracker?
The default web UI port number for the Namenode is 50070, for the job tracker it is 50030 and for the task tracker it is 50060.

What is the Hadoop-core configuration?


Hadoop core used to be configured by two XML files:
1. hadoop-default.xml and
2. hadoop-site.xml.
These files are written in XML format. We have certain properties in these XML files, which
consist of a name and a value. But these files do not exist any more.

What are the Hadoop configuration files at present?


There are 3 configuration files in Hadoop:
1. core-site.xml
2. hdfs-site.xml
3. mapred-site.xml
These files are located in the conf/ subdirectory.

How to exit the Vi editor?


To exit the Vi Editor, press ESC and type :q and then press enter.

What is a spill factor with respect to the RAM?


Spill factor is the size after which your files move to the temp file. Hadoop-temp directory is
used for this.

Is fs.mapr.working.dir a single directory?


Yes, fs.mapr.working.dir is just one directory.

Which are the three main hdfs-site.xml properties?


The three main hdfs-site.xml properties are:
1. dfs.name.dir, which gives you the location where the metadata will be stored and where
the DFS is located, on disk or on a remote machine.
2. dfs.data.dir, which gives you the location where the data is going to be stored.
3. fs.checkpoint.dir, which is for the Secondary Namenode.
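
For illustration, these properties can be read back through the Configuration API once the conf/ directory is on the classpath (the property names are the Hadoop 0.20/1.x ones used above; the class name is invented):

import org.apache.hadoop.conf.Configuration;

public class HdfsSiteSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();   // loads core-site.xml, hdfs-site.xml, mapred-site.xml

        System.out.println("NameNode metadata dir   : " + conf.get("dfs.name.dir"));
        System.out.println("DataNode storage dir    : " + conf.get("dfs.data.dir"));
        System.out.println("Secondary NN checkpoint : " + conf.get("fs.checkpoint.dir"));
    }
}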

How to come out of the insert mode?


To come out of the insert mode, press ESC, type :q (if you have not written anything) OR
type :wq (if you have written anything in the file) and then press ENTER.

What is Cloudera and why it is used?


Cloudera is a company that provides a commercial distribution of Apache Hadoop (CDH); cloudera is also the
user created on the VM by default. It packages Apache Hadoop and related tools and is used for data processing.

What happens if you get a connection refused java exception when you type
hadoop fsck /?
It could mean that the Namenode is not working on your VM.

We are using Ubuntu operating system with Cloudera, but from where we can
download Hadoop or does it come by default with Ubuntu?
This is a default configuration of Hadoop that you have to download from Cloudera or from
Edurekas dropbox and the run it on your systems. You can also proceed with your own configuration
but you need a Linux box, be it Ubuntu or Red hat. There are installation steps present at the
Cloudera location or in Edurekas Drop box. You can go either ways.

What does jps command do?


This command checks whether your Namenode, datanode, task tracker, job tracker, etc are
working or not.

How can I restart Namenode?


1. Run stop-all.sh and then run start-all.sh, OR
2. Write sudo hdfs (press enter), su-hdfs (press enter), /etc/init.d/ha (press enter) and
then /etc/init.d/hadoop-0.20-namenode start (press enter).

What is the full form of fsck?


Full form of fsck is File System Check.

How can we check whether Namenode is working or not?


To check whether Namenode is working or not, use the command /etc/init.d/hadoop-0.20namenode status or as simple as jps.

What does the command mapred.job.tracker do?


mapred.job.tracker is a configuration property rather than a command; its value tells you which of your nodes is acting as the job tracker.

What does /etc/init.d do?


/etc/init.d specifies where daemons (services) are placed, and it is where you can see the status of these
daemons. It is very Linux-specific and has nothing to do with Hadoop.

How can we look for the Namenode in the browser?


If you have to look for the Namenode in the browser, you don't use localhost:8021; the
port number to look for the Namenode in the browser is 50070.

How to change from SU to Cloudera?


To change from su back to the cloudera user, just type exit.

Which files are used by the startup and shutdown commands?


Slaves and Masters are used by the startup and the shutdown commands.

What do slaves consist of?


Slaves consist of a list of hosts, one per line, that host datanode and task tracker servers.

What do masters consist of?


Masters contain a list of hosts, one per line, that are to host secondary namenode servers.

What does hadoop-env.sh do?


hadoop-env.sh provides the environment for Hadoop to run. JAVA_HOME is set over here.

Can we have multiple entries in the master files?


Yes, we can have multiple entries in the Master files.

Where is hadoop-env.sh file present?


hadoop-env.sh file is present in the conf location.

In HADOOP_PID_DIR, what does PID stand for?


PID stands for Process ID.

What does /var/hadoop/pids do?


It stores the PID.

What does hadoop-metrics.properties file do?


hadoop-metrics.properties is used for Reporting purposes. It controls the reporting for
Hadoop. The default status is not to report.

What are the network requirements for Hadoop?


The Hadoop core uses the shell (SSH) to launch the server processes on the slave nodes. It
requires a password-less SSH connection between the master and all the slaves and the secondary
machines.

Why do we need a password-less SSH in Fully Distributed environment?


We need password-less SSH in a fully distributed environment because when the
cluster is live and running in fully distributed mode, the communication is very frequent. The job
tracker should be able to send a task to a task tracker quickly.

Does this lead to security issues?


No, not at all. A Hadoop cluster is an isolated cluster, and generally it has nothing to do with the
internet. It has a different kind of configuration. We needn't worry about that kind of security
breach, for instance, someone hacking through the internet, and so on. Hadoop has a very secure
way to connect to other machines to fetch and to process data.

On which port does SSH work?


SSH works on Port No. 22, though it can be configured. 22 is the default Port number.

Can you tell us more about SSH?


SSH is nothing but a secure shell communication, it is a kind of a protocol that works on a
Port No. 22, and when you do an SSH, what you really require is a password.

Why password is needed in SSH localhost?


Password is required in SSH for security and in a situation where passwordless communication is not set.

Do we need to give a password, even if the key is added in SSH?


Yes, password is still required even if the key is added in SSH.

What if a Namenode has no data?


If a Namenode has no data it is not a Namenode. Practically, Namenode will have some
data.

What happens to job tracker when Namenode is down?


When Namenode is down, your cluster is OFF, this is because Namenode is the single point
of failure in HDFS.

What happens to a Namenode, when job tracker is down?


When a job tracker is down, it will not be functional but Namenode will be present. So,
cluster is accessible if Namenode is working, even if the job tracker is not working.

Can you give us some more details about SSH communication between Masters
and the Slaves?
SSH is a password-less secure communication where data packets are sent across the
slave. It has some format into which data is sent across. SSH is not only between masters and
slaves but also between two hosts.

What is formatting of the DFS?


Just like we do for Windows, DFS is formatted for proper structuring. It is not usually done as
it formats the Namenode too.

Does the HDFS client decide the input split or Namenode?


No, the Client does not decide. It is already specified in one of the configurations through
which input split is already configured.

In Cloudera there is already a cluster, but if I want to form a cluster on Ubuntu,
can we do it?


Yes, you can go ahead with this! There are installation steps for creating a new cluster. You
can uninstall your present cluster and install the new cluster.

Can we create a Hadoop cluster from scratch?


Yes we can do that also once we are familiar with the Hadoop environment.

Can we use Windows for Hadoop?


Actually, Red Hat Linux or Ubuntu are the best Operating Systems for Hadoop. Windows
is not used frequently for installing Hadoop as there are many support problems attached with
Windows. Thus, Windows is not a preferred environment for Hadoop.

MAP REDUCE
What is MapReduce?
It is a framework or a programming model that is used for processing large data sets over
clusters of computers using distributed programming.

What are maps and reduces?


Maps and reduces are two phases of solving a query in HDFS. The map is responsible for
reading data from the input location and, based on the input type, generating a key-value pair, that is,
an intermediate output, on the local machine. The reducer is responsible for processing
the intermediate output received from the mapper and generating the final output.

What are the four basic parameters of a mapper?


The four basic parameters of a mapper are LongWritable, Text, Text and IntWritable.
The first two represent the input parameters and the second two represent the intermediate output
parameters.

What are the four basic parameters of a reducer?


The four basic parameters of a reducer are Text, IntWritable, Text and IntWritable. The first two
represent the intermediate output parameters and the second two represent the final output parameters.
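
A minimal word-count style sketch with the classic org.apache.hadoop.mapred API shows these four parameters in place; the class names are made up for the example:

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Mapper: <LongWritable, Text> in, <Text, IntWritable> out (intermediate)
class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            output.collect(word, ONE);              // emit the intermediate key-value pair
        }
    }
}

// Reducer: <Text, IntWritable> in (intermediate), <Text, IntWritable> out (final)
class WordCountReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));  // final output for this key
    }
}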

What do the master class and the output class do?


Master is defined to update the Master or the job tracker and the output class is defined to
write data onto the output location.

What is the input type/format in MapReduce by default?


By default, the input type in MapReduce is text.

Is it mandatory to set input and output type/format in MapReduce?


No, it is not mandatory to set the input and output type/format in MapReduce. By default, the
cluster takes the input and the output type as text.

What does the text input format do?


In the text input format, each line of the file is a record and creates a line object. The key is the
byte offset of the line within the file and the value is the whole line of text. This is how the data gets
processed by the mapper: the mapper receives the key as a LongWritable parameter and the value
as a Text parameter.

What does job conf class do?


MapReduce needs to logically separate different jobs running on the same cluster. The job conf
class helps to do job-level settings, such as declaring a job in the real environment. It is recommended
that the job name be descriptive and represent the type of job being executed.

What does conf.setMapperClass do?


conf.setMapperClass sets the mapper class and everything related to the map job, such as
reading the data and generating key-value pairs out of the mapper.
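
Putting it together, a hedged driver sketch for the word-count classes shown earlier might look like this; the input and output paths are placeholders:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountDriver.class);
        conf.setJobName("word-count");                     // descriptive job name, as recommended above

        conf.setMapperClass(WordCountMapper.class);        // map-side settings
        conf.setReducerClass(WordCountReducer.class);      // reduce-side settings

        conf.setOutputKeyClass(Text.class);                // key type of the output pairs
        conf.setOutputValueClass(IntWritable.class);       // value type of the output pairs

        FileInputFormat.setInputPaths(conf, new Path("/user/demo/input"));     // placeholder paths
        FileOutputFormat.setOutputPath(conf, new Path("/user/demo/output"));

        JobClient.runJob(conf);                            // submit to the job tracker and wait
    }
}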

What do sorting and shuffling do?


Sorting and shuffling are responsible for creating a unique key and a list of values. Making
similar keys at one location is known as Sorting. And the process by which the intermediate output
of the mapper is sorted and sent across to the reducers is known as Shuffling.

What does a split do?


Before the data is transferred from its hard disk location to the map method, there is a phase or
method called the split method. The split method pulls a block of data from HDFS into the framework.
The Split class does not write anything; it reads data from the block and passes it to the mapper. By
default, the split is taken care of by the framework. The split size is equal to the block size and is used to
divide the blocks into a bunch of splits.

How can we change the split size if our commodity hardware has less
storage space?
If our commodity hardware has less storage space, we can change the split size by writing
a custom splitter. Customization is a feature of Hadoop which can be invoked from the
main method.
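
A simplified sketch of the split-size logic used by FileInputFormat; the minimum can be raised through the mapred.min.split.size property in Hadoop 1.x, and the class name here is invented:

public class SplitSizeSketch {
    // Simplified version of how FileInputFormat picks a split size:
    // the block size, clamped by a configurable minimum and the per-job goal size.
    static long computeSplitSize(long goalSize, long minSize, long blockSize) {
        return Math.max(minSize, Math.min(goalSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;   // 64 MB HDFS block
        long minSize   = 1;                   // default minimum (mapred.min.split.size)
        long goalSize  = 32L * 1024 * 1024;   // e.g. the job asked for smaller splits

        System.out.println("Split size: " + computeSplitSize(goalSize, minSize, blockSize) + " bytes");
    }
}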

What does a MapReduce partitioner do?


A MapReduce partitioner makes sure that all the values of a single key go to the same
reducer, thus allowing an even distribution of the map output over the reducers. It redirects the mapper
output to the reducer by determining which reducer is responsible for a particular key.

How is Hadoop different from other data processing tools?


In Hadoop, based upon your requirements, you can increase or decrease the number of
mappers without bothering about the volume of data to be processed. This is the beauty of
parallel processing, in contrast to the other data processing tools available.

Can we rename the output file?


Yes, we can rename the output file by implementing the multiple format output class.

Why can't we do aggregation (addition) in a mapper? Why do we require a
reducer for that?


We cannot do aggregation (addition) in a mapper because sorting is not done in a mapper;
sorting happens only on the reducer side. Mapper initialization depends upon each input
split. While doing aggregation, we would lose the value of the previous instance: for each row, a new
mapper gets initialized and the input split is handled afresh, so we do not
have a track of the previous row's value.

What is Streaming?
Streaming is a feature of the Hadoop framework that allows us to program
MapReduce jobs in any programming language which can accept standard input and produce
standard output. It could be Perl, Python or Ruby; it need not be Java. However, customization
of MapReduce internals can only be done using Java and not any other programming language.

What is a Combiner?
A Combiner is a mini reducer that performs the local reduce task. It receives the input from
the mapper on a particular node and sends the output to the reducer. Combiners help
in enhancing the efficiency of MapReduce by reducing the quantum of data that is required to be
sent to the reducers.
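
When the reduce logic is associative and commutative (as in word count), the same reducer class can typically be reused as the combiner in the driver, for example:

// In the JobConf driver: reuse the reducer as a local, map-side "mini reducer"
conf.setCombinerClass(WordCountReducer.class);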

What is the difference between an HDFS Block and Input Split?


HDFS Block is the physical division of the data and Input Split is the logical division of the
data.

What happens in a textinputformat?


In textinputformat, each line in the text file is a record. The key is the byte offset of the line
and the value is the content of the line. For instance: key: LongWritable, value: Text.

What do you know about keyvaluetextinputformat?


In keyvaluetextinputformat, each line in the text file is a record. The first separator
character divides each line: everything before the separator is the key and everything after the
separator is the value. For instance: key: Text, value: Text.

What do you know about Sequencefileinputformat?


Sequencefileinputformat is an input format for reading sequence
files. Key and value are user defined. It is a specific compressed binary file format which is
optimized for passing data from the output of one MapReduce job to the input of another
MapReduce job.

What do you know about NLineInputFormat?


NLineInputFormat splits N lines of input as one split.
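
A hedged sketch of selecting among these input formats with the classic org.apache.hadoop.mapred API (the linespermap property name is the Hadoop 1.x one; the class name is invented):

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.lib.NLineInputFormat;

public class InputFormatSketch {
    static void configure(JobConf conf) {
        conf.setInputFormat(TextInputFormat.class);              // key = byte offset, value = line (the default)
        // conf.setInputFormat(KeyValueTextInputFormat.class);   // key/value split on the first separator
        // conf.setInputFormat(SequenceFileInputFormat.class);   // binary key/value (sequence) files
        // conf.setInputFormat(NLineInputFormat.class);          // N lines of input per split
        // conf.setInt("mapred.line.input.format.linespermap", 10);
    }
}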

PIG
Can you give us some examples how Hadoop is used in real time environment?
Let us assume that we have an exam consisting of 10 multiple-choice questions and 20
students appear for that exam. Every student will attempt each question. For each question and
each answer option, a key will be generated. So we have a set of key-value pairs for all the
questions and all the answer options for every student. Based on the options that the students have
selected, you have to analyze and find out how many students have answered correctly. This isn't an
easy task. Here Hadoop comes into the picture! Hadoop helps you in solving these problems quickly
and without much effort. You may also take the case of how many students have wrongly attempted
a particular question.

What is BloomMapFile used for?


The BloomMapFile is a class that extends MapFile, so its functionality is similar to
MapFile. BloomMapFile uses dynamic Bloom filters to provide a quick membership test for the keys. It
is used in the HBase table format.

What is PIG?
Pig is a platform for analyzing large data sets. It consists of a high-level language for
expressing data analysis programs, coupled with infrastructure for evaluating these programs. Pig's
infrastructure layer consists of a compiler that produces sequences of MapReduce programs.

What is the difference between logical and physical plans?


Pig undergoes some steps when a Pig Latin Script is converted into MapReduce jobs. After
performing the basic parsing and semantic checking, it produces a logical plan. The logical
plan describes the logical operators that have to be executed by Pig during execution. After this, Pig

produces a physical plan. The physical plan describes the physical operators that are needed to
execute the script.

Does ILLUSTRATE run MR job?


No, ILLUSTRATE will not run any MR job; it pulls internal sample data. On the console, ILLUSTRATE
just shows the output of each stage and not the final output.

Is the keyword DEFINE like a function name?


Yes, the keyword DEFINE is like a function name. Once you have registered, you have to define it.
Whatever logic you have written in Java program, you have an exported jar and also a jar registered
by you. Now the compiler will check the function in exported jar. When the function is not present in
the library, it looks into your jar.

Is the keyword FUNCTIONAL a User Defined Function (UDF)?


No, the keyword FUNCTIONAL is not a User Defined Function (UDF). While using UDF, we
have to override some functions. Certainly you have to do your job with the help of these functions
only. But the keyword FUNCTIONAL is a built-in function i.e a pre-defined function, therefore it does
not work as a UDF.

Why do we need MapReduce during Pig programming?


Pig is a high-level platform that makes many Hadoop data analysis issues easier to execute.
The language we use for this platform is Pig Latin. A program written in Pig Latin is like a query
written in SQL, where we need an execution engine to execute the query. So, when a program is
written in Pig Latin, the Pig compiler will convert the program into MapReduce jobs.
Here, MapReduce acts as the execution engine.

Are there any problems which can only be solved by MapReduce and cannot be
solved by PIG? In which kind of scenarios MR jobs will be more useful than PIG?
Let us take a scenario where we want to count the population in two cities. I have a data set
and a sensor list of different cities. I want to count the population using one MapReduce job for two
cities. Let us assume that one is Bangalore and the other is Noida. I need to make the key of
Bangalore behave like the key of Noida, so that I can bring the population data of these two cities to
one reducer. The idea behind this is that somehow I have to instruct the MapReduce program: whenever
you find a city with the name Bangalore and a city with the name Noida, create an alias name

which will be the common name for these two cities, so that you create a common key for both
cities and it gets passed to the same reducer. For this, we have to write a custom partitioner.
In MapReduce, when you create a key for a city, you have to consider the city as the key. So,
whenever the framework comes across a different city, it considers it a different key. Hence, we
need to use a customized partitioner. There is a provision in MapReduce only, where you can write your
custom partitioner and specify that if city = bangalore or noida then pass a similar hashcode. However,
we cannot create a custom partitioner in Pig. As Pig is not a framework, we cannot direct the execution
engine to customize the partitioner. In such scenarios, MapReduce works better than Pig.
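
A hedged sketch of such a custom partitioner with the classic API; the city names and the idea of routing Bangalore and Noida to one reducer come straight from the scenario above, while the class name is invented:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Routes both "bangalore" and "noida" records to the same reducer task,
// so their population counts end up on one reducer.
public class CityPartitioner implements Partitioner<Text, IntWritable> {
    public void configure(JobConf job) { }

    public int getPartition(Text key, IntWritable value, int numPartitions) {
        String city = key.toString().toLowerCase();
        if (city.equals("bangalore") || city.equals("noida")) {
            return 0;                                              // common partition for the two cities
        }
        return (city.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

It would be registered in the driver with conf.setPartitionerClass(CityPartitioner.class).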

Does Pig give any warning when there is a type mismatch or missing field?
No, Pig will not show any warning if there is a missing field or a mismatch. Even if you assume
that Pig gives such a warning, it would be difficult to find in the log file. If any mismatch is found, Pig
assumes a null value.

What co-group does in Pig?


Co-group joins the data set by grouping one particular data set only. It groups the elements
by their common field and then returns a set of records containing two separate bags. The first bag
consists of the record of the first data set with the common data set and the second bag consists of
the records of the second data set with the common data set.

Can we say cogroup is a group of more than 1 data set?


Cogroup is a group of one data set. But in the case of more than one data sets, cogroup will
group all the data sets and join them based on the common field. Hence, we can say that cogroup is
a group of more than one data set and join of that data set as well.

What does FOREACH do?


FOREACH is used to apply transformations to the data and to generate new data items. The
name itself indicates that for each element of a data bag, the respective action will be performed.
Syntax: FOREACH bagname GENERATE expression1, expression2, ...

The meaning of this statement is that the expressions mentioned after GENERATE will be applied to
the current record of the data bag.

What is bag?

A bag is one of the data models present in Pig. It is an unordered collection of tuples with
possible duplicates. Bags are used to store collections while grouping. The size of a bag is bounded by the size of
the local disk, which means the size of the bag is limited. When the bag is full, Pig will spill
the bag onto the local disk and keep only some parts of the bag in memory. It is not necessary that
the complete bag fit into memory. We represent bags with {}.
