How much memory does your Namenode need?

This is going to be a very short post. When you are building a cluster from scratch, Hadoop developers and admins often debate how much memory should be allocated to the Namenode.

Here is a rule of thumb – allocate 1,000 MB to the Namenode per million blocks stored in HDFS.

This means that if the block size of your cluster is 128 MB, a million blocks equate to:

128 MB * 1,000,000 blocks = 128,000,000 MB = 128 TB

So 1,000 MB allocated to the Namenode is what is required to manage a cluster with 128 TB of raw disk space. Please note that the 1,000 MB is used just by the Namenode process for holding the block metadata in memory. The node itself will need additional memory to cater to the OS and other services running on the node.
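If you want to turn the rule of thumb into numbers for your own cluster, here is a minimal Python sketch. This is not an official Hadoop utility; the function name and defaults are illustrative assumptions, and it assumes every block is full-size, so clusters with lots of small files will need more heap than this estimate.

def namenode_heap_mb(raw_capacity_tb, block_size_mb=128):
    # 1 TB = 1,000,000 MB in the decimal units used above
    raw_capacity_mb = raw_capacity_tb * 1_000_000
    # Number of blocks, assuming every block is full-size
    blocks = raw_capacity_mb / block_size_mb
    # Rule of thumb: 1,000 MB of heap per 1,000,000 blocks
    return blocks / 1_000_000 * 1_000

# Reproduces the example above: 128 TB of raw disk at 128 MB blocks
print(namenode_heap_mb(128))  # 1000.0 MB of Namenode heap

On a real cluster you would then set this heap size on the Namenode process, typically via the -Xmx option in HADOOP_NAMENODE_OPTS in hadoop-env.sh, on top of the memory the OS and other services on the node need.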

For more details, refer to this Hadoop JIRA.

Big Data In Real World
We are a group of Big Data engineers who are passionate about Big Data and related technologies. We have designed, developed, deployed and maintained Big Data applications ranging from batch to real-time streaming platforms, and we have seen a wide range of real world big data problems and implemented some innovative and complex (or simple, depending on how you look at it) solutions.
