Data node block size in HDFS, why 64MB?
Answer / Dharmendra Meena
The block size in HDFS defaults to 64MB (in Hadoop 1; Hadoop 2 raised the default to 128MB) for several reasons: efficiency (with large blocks, the time spent seeking is small relative to the time spent transferring data, so reads run close to the disk transfer rate), reduced metadata overhead (fewer blocks per file means less metadata on the NameNode, which keeps all block metadata in memory), and scalability (larger blocks let HDFS handle very large files with fewer block lookups and fewer map tasks).
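The default is also configurable per cluster. A minimal hdfs-site.xml sketch, assuming Hadoop 2 (where the property is named dfs.blocksize; Hadoop 1 used dfs.block.size):

```xml
<!-- hdfs-site.xml: set the default block size to 128 MB.
     The value is in bytes; Hadoop 2 also accepts suffixed
     values such as 128m. -->
<configuration>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
</configuration>
```

The block size can also be overridden for a single write, e.g. `hdfs dfs -D dfs.blocksize=268435456 -put largefile.dat /data/` to store one file with 256MB blocks; existing files keep the block size they were written with.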
What is the difference between MapReduce engine and HDFS cluster?
Why does HDFS perform replication, even though it results in data redundancy?
What does a heartbeat in HDFS mean?
Does HDFS allow a client to read a file that is already open for writing?
How to transfer data from Hive to HDFS?
What alternate way does HDFS provide to recover data if a NameNode without a backup fails and cannot be recovered?
How to create Users in hadoop HDFS?
How much Metadata will be created on NameNode in Hadoop?
What is the difference between namenode, backup node and checkpoint namenode?
Can we set a different replication factor for existing files in HDFS?
What is a block in HDFS? what is the default size in Hadoop 1 and Hadoop 2? Can we change the block size?
Explain the key features of HDFS.