How can one copy a file into HDFS with a block size different from the configured default?
Answer / Ram Saheli
To copy a file into HDFS with a different block size, pass the `dfs.blocksize` property with the generic `-D` option when running `hadoop fs -copyFromLocal`. Note that the property is `dfs.blocksize` (in older releases, `dfs.block.size`), not `dfs.blocksize.size`, and the `-D` option must come before the command-specific arguments. For example: `hadoop fs -D dfs.blocksize=<desired_block_size> -copyFromLocal inputfile hdfs://namenode/outputdir`
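A minimal sketch of the command and a follow-up check, assuming a local file `inputfile` and an existing HDFS directory `/outputdir` on the default filesystem (both names are illustrative):

```shell
# Copy a local file into HDFS with a 64 MB block size instead of the
# configured default. dfs.blocksize takes a value in bytes; recent
# Hadoop releases also accept suffixed values such as 64m.
hadoop fs -D dfs.blocksize=67108864 -copyFromLocal inputfile /outputdir

# Verify the block size that was actually applied to the stored file
# (%o prints the file's block size in bytes).
hadoop fs -stat "%o" /outputdir/inputfile
```

The block size is a per-file attribute fixed at write time, so this only affects the file being copied; files already in HDFS keep their original block size unless rewritten.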
What is the throughput? How does hdfs give great throughput?
Why do we need hdfs?
What is the optimal block size in HDFS?
If I create a folder in HDFS, will there be metadata created corresponding to the folder? If yes, what will be the size of metadata created for a directory?
Why HDFS performs replication, although it results in data redundancy in Hadoop?
What do you mean by metadata in HDFS? Where is it stored in Hadoop?
Explain the difference between mapreduce engine and hdfs cluster?
How do you do a file system check in hdfs?
How to create directory in HDFS?
What is secondary namenode?
Can multiple clients write into an HDFS file concurrently in hadoop?
Explain what is a difference between an input split and hdfs block?