Various Filesystems in Hadoop
Last Updated: 10 Sep, 2020

Hadoop is an open-source software framework, written mainly in Java with some shell scripting and C code, for performing computation over very large datasets. It is used for batch/offline processing across a network of many machines forming a physical cluster, and the framework provides both distributed storage and distributed processing over that same cluster. It is designed to run on inexpensive systems, commonly known as commodity hardware, where each machine contributes its local storage and computing power.

Hadoop can run over a variety of file systems, and HDFS is just one concrete implementation among them. In Java, a Hadoop file system is represented by the abstract class org.apache.hadoop.fs.FileSystem, which has several concrete implementations:

Filesystem | URI scheme | Java implementation (all under org.apache.hadoop) | Description
Local | file | fs.LocalFileSystem | Used for a locally connected disk with client-side checksumming. The local filesystem uses RawLocalFileSystem when no checksums are needed.
HDFS | hdfs | hdfs.DistributedFileSystem | The Hadoop Distributed File System, designed to work efficiently with MapReduce.
HFTP | hftp | hdfs.HftpFileSystem | Provides read-only access to HDFS over HTTP. Despite the name, it has no connection with FTP. Commonly used with distcp to copy data between HDFS clusters running different versions.
HSFTP | hsftp | hdfs.HsftpFileSystem | Provides read-only access to HDFS over HTTPS. It also has no connection with FTP.
HAR | har | fs.HarFileSystem | A filesystem layered on another filesystem for archiving purposes. Hadoop Archives reduce NameNode memory usage by packing many HDFS files into one archive.
KFS (Cloud-Store) | kfs | fs.kfs.KosmosFileSystem | Cloud-Store (KFS, the Kosmos File System) is a distributed file system written in C++, similar to HDFS and GFS (the Google File System).
FTP | ftp | fs.ftp.FTPFileSystem | A filesystem backed by an FTP server.
S3 (native) | s3n | fs.s3native.NativeS3FileSystem | A filesystem backed by Amazon S3.
S3 (block-based) | s3 | fs.s3.S3FileSystem | Also backed by Amazon S3, but stores files in blocks (much like HDFS) to overcome S3's 5 GB file-size limit.

Hadoop provides many interfaces to these filesystems and generally uses the URI scheme to pick the correct filesystem instance to communicate with. You can use any of these filesystems with MapReduce when processing very large datasets, but distributed file systems with data-locality features, such as HDFS and KFS, are preferable.

Author: dikshantmalidev
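To make the scheme-based selection concrete, here is a minimal, self-contained sketch of how a URI scheme can be mapped to an implementation class name. Note this is an illustration only: Hadoop's real resolution goes through FileSystem.get() and fs.<scheme>.impl configuration properties, not a hard-coded map like the one below.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of URI-scheme dispatch: look up a filesystem
// implementation class by the scheme of the path's URI.
public class FsResolver {
    private static final Map<String, String> IMPLS = new HashMap<>();
    static {
        IMPLS.put("file", "org.apache.hadoop.fs.LocalFileSystem");
        IMPLS.put("hdfs", "org.apache.hadoop.hdfs.DistributedFileSystem");
        IMPLS.put("s3n",  "org.apache.hadoop.fs.s3native.NativeS3FileSystem");
    }

    // Return the implementation class name for a path's URI scheme.
    // A path with no scheme (e.g. "/tmp/a") falls back to the local filesystem.
    static String resolve(String path) {
        String scheme = URI.create(path).getScheme();
        if (scheme == null) scheme = "file";
        String impl = IMPLS.get(scheme);
        if (impl == null) {
            throw new IllegalArgumentException("No filesystem for scheme: " + scheme);
        }
        return impl;
    }

    public static void main(String[] args) {
        System.out.println(resolve("hdfs://namenode:8020/user/data"));
        System.out.println(resolve("/tmp/local.txt"));
    }
}
```

This is why the same MapReduce job can read from `hdfs://`, `file://`, or `s3n://` paths without code changes: only the URI scheme differs.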
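The table above mentions that the Local filesystem performs client-side checksumming. The sketch below illustrates the general idea with a CRC32 checksum over a byte buffer: a checksum is computed when data is written and re-verified when it is read back, so silent corruption can be detected. This is a simplified standalone illustration, not Hadoop's actual checksum code.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Simplified illustration of client-side checksumming:
// compute a CRC32 on write, recompute and compare on read.
public class ChecksumDemo {
    static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] block = "hello hadoop".getBytes(StandardCharsets.UTF_8);
        long stored = checksum(block);           // computed when the block is written
        boolean ok = checksum(block) == stored;  // re-verified when the block is read
        System.out.println("checksum ok: " + ok);
    }
}
```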