Hadoop 2.x Administration Cookbook

Introduction

In this chapter, we will look at the storage layer, HDFS, and how it can be configured for storing data. It is important to keep this distributed filesystem healthy and to ensure that the data it holds remains available even in the case of failures. We will cover replication, quota setup, and the balanced distribution of data across nodes, along with recipes on rack awareness and the heartbeats the DataNodes use to communicate with the master (the NameNode).

The recipes in this chapter assume that you already have a running cluster and have completed the steps given in Chapter 1, Hadoop Architecture and Deployment.

Note

While the recipes in this chapter will give you an overview of a typical configuration, we encourage you to adapt it to your own needs. The block size plays an important role in performance, as it determines the amount of data worked on by a single mapper. It is also good practice to set up passphrase-less SSH access between nodes, so that the user does not need to enter a password when performing operations across nodes.
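
As a quick sketch, passphrase-less access can be set up with the standard OpenSSH tools; the hadoop user and the hostname dn1.cluster.com below are placeholders for your own environment:

    # Generate an RSA key pair with an empty passphrase (run once as the hadoop user)
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

    # Append the public key to authorized_keys on the target node
    ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@dn1.cluster.com

    # Verify that the login now succeeds without a password prompt
    ssh hadoop@dn1.cluster.com hostname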

Overview of HDFS

The Hadoop Distributed File System (HDFS) is inspired by the Google File System (GFS). The fundamental idea is to split files into smaller chunks called blocks and distribute them across the nodes in the cluster. HDFS is not the only filesystem used with Hadoop; there are others as well, such as MapR-FS, Isilon OneFS, and so on.
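
To make the block idea concrete, the following sketch writes a file into HDFS with a per-file block size and then lists where its blocks landed; the paths and the 256 MB value are illustrative choices, not recommendations:

    # Write a local file into HDFS, overriding the block size for this file only
    # (dfs.blocksize is given in bytes; 268435456 = 256 MB)
    hdfs dfs -D dfs.blocksize=268435456 -put sample.txt /data/

    # Show the file's blocks and the DataNodes holding each replica
    hdfs fsck /data/sample.txt -files -blocks -locations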

HDFS is a pseudo-filesystem layered on top of a native filesystem, such as ext3, ext4, or xfs. An important thing to keep in mind is that we cannot store data in Hadoop by writing to the native filesystem directly; all reads and writes must go through the HDFS client, as the sketch below illustrates. In this chapter, we will cover recipes to configure the properties of HDFS.
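
To illustrate the layering, data enters HDFS through the client, while on each DataNode the blocks are stored as opaque files on the native filesystem under the directories configured by dfs.datanode.data.dir. The paths below are placeholders for your own layout:

    # Supported path: copy a local file into HDFS through the client
    hdfs dfs -put /tmp/report.csv /user/hadoop/report.csv

    # On a DataNode, the same data appears only as anonymous blk_* files
    # under the configured data directory; it is not browsable as report.csv
    find /data/hdfs/datanode -name 'blk_*' | head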