How can one change the replication factor when data is already stored in HDFS?
Answer / Satya Prakash
To change the replication factor of a file that already exists in HDFS, use the -setrep command:

$ hadoop fs -setrep -w <number_of_replicas> /path/to/your/file

The -w flag makes the command wait until re-replication has actually completed. (The older `hadoop dfs` form is deprecated; `hadoop fs` or `hdfs dfs` is preferred.)
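A minimal end-to-end sketch of checking and then changing a file's replication factor, assuming a running HDFS cluster; the path and the target factor of 2 are illustrative placeholders:

```shell
# Show the current replication factor of the file (%r prints replication).
hdfs dfs -stat %r /path/to/your/file

# Change the replication factor to 2; -w blocks until the NameNode
# reports that re-replication (or over-replica deletion) is done.
hdfs dfs -setrep -w 2 /path/to/your/file

# Verify the new replication factor.
hdfs dfs -stat %r /path/to/your/file
```

Note that -setrep changes only the file(s) it is applied to; new files continue to use the cluster-wide default from dfs.replication in hdfs-site.xml.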
Why is HDFS suited to applications with large data sets rather than many small files?
How is indexing done in Hadoop HDFS?
What is hdfs in big data?
Distinguish between an HDFS Block and an InputSplit?
Write the command to copy a file from linux to hdfs?
How can one copy a file into HDFS with a different block size to that of existing block size configuration?
Is the hdfs block size reduced to achieve faster query results?
What is a task tracker?
Can multiple clients write into an HDFS file concurrently in hadoop?
How does hdfs get a good throughput?
How does the NameNode handle DataNode failures in HDFS?
How is the hdfs put command used to transfer data from Flume to HDFS?