Can you change the block size of hdfs files?
Answer / Manish Jha
Yes, you can change the block size of HDFS files. The default block size is 128 MB, and it can be adjusted through the dfs.blocksize configuration property. Note that changing the property only affects files written after the change; existing files keep the block size they were written with. The block size can also be set per file at write time.
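A minimal sketch of how this looks in practice (the 256 MB value here is just an example, not a recommendation): the cluster-wide default goes in hdfs-site.xml.

```xml
<!-- hdfs-site.xml: default block size for newly written files.
     Existing files keep the block size they were written with. -->
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value> <!-- 256 MB; size suffixes such as 256m are also accepted -->
</property>
```

You can also override the block size for a single file at write time using Hadoop's generic -D option, e.g. `hdfs dfs -D dfs.blocksize=268435456 -put largefile.dat /data/` (the file and directory names here are hypothetical).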
Why is HDFS suited to applications with large data sets rather than many small files?
Explain what happens if, during a PUT operation, an HDFS block is assigned a replication factor of 1 instead of the default value of 3?
Describe HDFS Federation?
If a particular file is 50 MB, will the HDFS block still consume 64 MB as the default size?
Can you change the block size of hdfs files?
What is throughput? How does HDFS achieve high throughput?
What is the benefit of the distributed cache? Why can't we just keep the file in HDFS and have the application read it?
What is the difference between Hadoop and HDFS?
Explain the process that overwrites the replication factors in HDFS?
What is the difference between an input split and hdfs block?
Replication causes data redundancy, so why is it pursued in HDFS?
How do you create users in Hadoop HDFS?