Suppose there is a file of size 514 MB stored in HDFS (Hadoop 2.x) using the default block size configuration and the default replication factor. How many blocks will be created in total, and what will be the size of each block?
Answer / Vicky Sharma
The number of blocks created depends on the block size configuration, which defaults to 128 MB in Hadoop 2.x. A 514 MB file is split into 5 blocks: four full blocks of 128 MB each (512 MB) plus a fifth block of 2 MB holding the remainder. HDFS does not pad the last block out to the full block size, so it occupies only 2 MB on disk. With the default replication factor of 3, a total of 15 block replicas are stored across the cluster.
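The arithmetic above can be sketched in a few lines of Python. This is a minimal illustration of how HDFS-style block splitting works, not actual Hadoop code; the function name and defaults are made up for this example:

```python
def hdfs_block_sizes(file_mb, block_mb=128):
    """Split a file into HDFS-style blocks.

    Every block is block_mb except possibly the last, which holds
    whatever remains (HDFS does not pad the final block).
    """
    full_blocks = file_mb // block_mb          # number of full-size blocks
    remainder = file_mb % block_mb             # leftover bytes form a short last block
    sizes = [block_mb] * full_blocks
    if remainder:
        sizes.append(remainder)
    return sizes

sizes = hdfs_block_sizes(514)
print(len(sizes), sizes)   # 5 blocks: [128, 128, 128, 128, 2]
print(len(sizes) * 3)      # 15 replicas with replication factor 3
```

Note that the block count is a ceiling division (`ceil(514 / 128) = 5`), never a rounding down.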