If a particular file is 50 MB, will the HDFS block still consume 64 MB as the default size?
Answer / Amit Kanawjia
No. HDFS blocks are not padded: a 50 MB file stored with a 64 MB block size occupies a single block, but that block consumes only 50 MB of actual disk space on the DataNode. This is a key difference from traditional file systems, where a block is a fixed allocation unit. (Note that 64 MB was the default block size in Hadoop 1.x; in Hadoop 2.x and later the default is 128 MB, but the behavior is the same.)
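A minimal sketch of the arithmetic above; `hdfs_block_usage` is a hypothetical helper for illustration, not part of any Hadoop API:

```python
import math

def hdfs_block_usage(file_size_mb, block_size_mb):
    """Return (number of blocks, MB actually stored) for one file replica.

    HDFS does not pad the final block, so a file smaller than the
    block size still consumes only its own size on disk.
    """
    blocks = max(1, math.ceil(file_size_mb / block_size_mb))
    return blocks, file_size_mb

# A 50 MB file under the old 64 MB default: one block, 50 MB on disk.
print(hdfs_block_usage(50, 64))   # (1, 50)
# The same file under the newer 128 MB default: still one block, 50 MB.
print(hdfs_block_usage(50, 128))  # (1, 50)
```

(Replication is ignored here; with the default replication factor of 3, the cluster-wide footprint would be 3 × 50 MB.)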
While processing data from hdfs, does it execute code near data?
What is throughput? How does HDFS provide good throughput?
What is the procedure to create users in HDFS and how to allocate Quota to them?
What are the different file permissions in the HDFS for files or directory levels?
Clarify the difference between NAS and HDFS.
How is HDFS block size different from traditional file system block size?
Can multiple clients write into an HDFS file concurrently?
What alternate way does HDFS provide to recover data if a NameNode without a backup fails and cannot be recovered?
What is Fault Tolerance in Hadoop HDFS?
What is the difference between an "HDFS Block" and an "Input Split"?
Why do we need HDFS?
Which one is the master node in HDFS? Can it be commodity hardware?