What is a block in HDFS, why block size 64MB?
Answer / Rita Devi
A block in HDFS (Hadoop Distributed File System) is the smallest unit of data storage. Each file is divided into blocks, which are distributed across DataNodes for parallel processing and fault tolerance. The default block size was 64MB in Hadoop 1.x (raised to 128MB in Hadoop 2.x). A large block size is chosen so that the time spent seeking to the start of a block is small compared with the time spent transferring it, and so that the NameNode, which keeps metadata for every block in memory, does not have to track an enormous number of tiny blocks. Note that a file smaller than the block size occupies only as much disk as it needs; the block size is an upper bound, not a fixed allocation.
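As a rough illustration of the splitting described above (this is a sketch, not actual Hadoop code), the following assumes the 64MB default and shows how a file maps onto blocks, with the last block holding only the remainder:

```python
# Sketch only: how HDFS conceptually splits a file into blocks.
BLOCK_SIZE = 64 * 1024 * 1024  # 64MB, the Hadoop 1.x default


def split_into_blocks(file_size_bytes):
    """Return the sizes of the blocks a file of this size would occupy."""
    full = file_size_bytes // BLOCK_SIZE
    sizes = [BLOCK_SIZE] * full
    remainder = file_size_bytes % BLOCK_SIZE
    if remainder:
        sizes.append(remainder)  # last block stores only the leftover bytes
    return sizes


# A 200MB file maps to three full 64MB blocks plus one 8MB block.
blocks = split_into_blocks(200 * 1024 * 1024)
```

The last element shows why small files are cheap on disk but expensive on the NameNode: each block, however small, costs one metadata entry.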
Replication causes data redundancy, so why is it pursued in HDFS?
What is secondary namenode?
How will you perform the inter cluster data copying work in hdfs?
What is the command for archiving a group of files in hdfs?
How much Metadata will be created on NameNode in Hadoop?
Can you explain heartbeat in hdfs?
How does hdfs give great throughput?
Is the hdfs block size reduced to achieve faster query results?
Explain how are file systems checked in hdfs?
What is hdfs block size?
How is indexing done in Hadoop HDFS?
How to change the replication factor of data which is already stored in HDFS?