Features of HDFS
It runs on commodity hardware. Unlike other distributed systems, HDFS is
highly fault-tolerant and is designed using low-cost hardware.
HDFS holds a very large amount of data and provides easy access. To store
such huge data, the files are stored across multiple machines. These files
are stored in a redundant fashion to rescue the system from possible data
loss in case of failure.
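The redundant storage described above can be sketched as a small placement routine. This is an illustrative toy, not the HDFS API: the function name and the round-robin policy are assumptions, and real HDFS placement also considers racks and free disk space.

```python
# Hypothetical sketch of HDFS-style redundant block placement: each block
# is copied to `replication` distinct nodes, so the failure of any single
# node does not lose data.

def place_replicas(block_id, nodes, replication=3):
    """Choose `replication` distinct nodes to hold copies of one block."""
    if replication > len(nodes):
        raise ValueError("not enough nodes to satisfy the replication factor")
    # Round-robin placement keyed on the block id spreads copies across
    # distinct nodes; real HDFS is rack-aware and load-aware.
    start = block_id % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replication)]

nodes = ["node1", "node2", "node3", "node4"]
placement = {b: place_replicas(b, nodes) for b in range(3)}
# Every block now lives on three distinct nodes.
```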
HDFS Architecture
HDFS follows the master-slave architecture and it has the following
elements.
Namenode
The namenode is the commodity hardware that contains the GNU/Linux
operating system and the namenode software. It is software that can be
run on commodity hardware. The system having the namenode acts as the
master server and it does the following tasks: it manages the file system
namespace, regulates clients' access to files, and executes file system
operations such as renaming, closing, and opening files and directories.
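The namenode's role as a metadata master can be sketched as a tiny in-memory namespace. Everything here is illustrative (the class and method names are invented for this sketch, not HDFS APIs); the point is that the namenode tracks which blocks make up each file but never holds file data itself.

```python
# Toy sketch of a namenode: a mapping from file paths to block ids.
# Namespace operations (create, rename, lookup) touch only metadata.

class ToyNameNode:
    def __init__(self):
        self.namespace = {}  # file path -> list of block ids

    def create_file(self, path, block_ids):
        self.namespace[path] = list(block_ids)

    def rename(self, old, new):
        # A rename changes only the metadata entry, never the blocks.
        self.namespace[new] = self.namespace.pop(old)

    def get_blocks(self, path):
        return self.namespace[path]

nn = ToyNameNode()
nn.create_file("/logs/app.log", [101, 102])
nn.rename("/logs/app.log", "/logs/app.1.log")
```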
Datanode
Datanodes perform read-write operations on the file systems, as per client
request. They also perform operations such as block creation, deletion,
and replication according to the instructions of the namenode.
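These datanode operations can be sketched as follows. The class and its methods are hypothetical stand-ins for illustration, not the real HDFS datanode protocol.

```python
# Toy sketch of a datanode's block operations: create, delete, and
# replicate, carried out as if instructed by the namenode.

class ToyDataNode:
    def __init__(self, name):
        self.name = name
        self.blocks = {}  # block id -> bytes

    def create_block(self, block_id, data):
        self.blocks[block_id] = data

    def delete_block(self, block_id):
        self.blocks.pop(block_id, None)

    def replicate_to(self, block_id, other):
        # Copy one block to another datanode, as the namenode would instruct.
        other.create_block(block_id, self.blocks[block_id])

dn1, dn2 = ToyDataNode("dn1"), ToyDataNode("dn2")
dn1.create_block(7, b"payload")
dn1.replicate_to(7, dn2)
```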
Block
Generally, the user data is stored in the files of HDFS. A file in the file
system will be divided into one or more segments and/or stored in
individual data nodes. These file segments are called blocks. In other
words, the minimum amount of data that HDFS can read or write is called
a block. The default block size is 64 MB, but it can be increased as per the
need by changing the HDFS configuration.
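The split into fixed-size blocks can be worked through with a short calculation. Using the 64 MB default from the text, a 200 MB file occupies four blocks, the last of which is only partially filled.

```python
# Sketch: how many blocks a file of a given size occupies.
BLOCK_SIZE = 64 * 1024 * 1024  # default HDFS block size (64 MB)

def block_count(file_size, block_size=BLOCK_SIZE):
    # Ceiling division: the final block may be smaller than block_size.
    return -(-file_size // block_size)

mb = 1024 * 1024
assert block_count(200 * mb) == 4  # 64 + 64 + 64 + 8 MB
```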
Goals of HDFS
Huge datasets : HDFS should have hundreds of nodes per cluster to
manage the applications having huge datasets.
Hardware at data : A requested task can be done efficiently when the
computation takes place near the data. Especially where huge datasets
are involved, it reduces the network traffic and increases the throughput.
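The "hardware at data" idea can be sketched as a simple scheduling choice: given the namenode's knowledge of which nodes hold a block, run the computation on one of those nodes so the block never crosses the network. The function and data shapes below are illustrative assumptions, not a real scheduler.

```python
# Sketch of data locality: pick a node that already stores the block.

def pick_node(block_locations, block_id, preferred=None):
    """Return a node holding block_id, preferring `preferred` if it has it."""
    holders = block_locations[block_id]
    if preferred in holders:
        return preferred
    return holders[0]  # any holder avoids a remote read entirely

# block id -> nodes holding a replica (as the namenode would report)
locations = {1: ["node2", "node3"], 2: ["node1", "node3"]}
```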