What happens if the block in HDFS is corrupted?
Answer / Ashish Kumar Pankaj
If a block replica in HDFS is found to be corrupt — checksum verification fails when a client reads it, or the DataNode's background block scanner detects a mismatch — the corruption is reported to the NameNode. The NameNode marks that replica as corrupt and stops directing clients to it. Because HDFS stores multiple replicas of each block (three by default), the NameNode then schedules re-replication: a healthy copy is read from another DataNode and copied until the replication factor is restored. The corrupt replica is deleted once a good copy is in place, so a single corrupted replica does not cause data loss.
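The detection step above relies on per-chunk checksums stored alongside each block. As a rough illustration (not HDFS's actual code — real HDFS uses CRC32C over 512-byte chunks stored in a sidecar `.meta` file), here is a minimal Python sketch of the idea:

```python
import zlib

CHUNK = 512  # HDFS checksums data in small chunks (512 bytes by default)

def make_checksums(block: bytes) -> list:
    """Compute a CRC32 per chunk, as done when a block is first written."""
    return [zlib.crc32(block[i:i + CHUNK]) for i in range(0, len(block), CHUNK)]

def verify(block: bytes, checksums: list) -> bool:
    """Re-compute checksums on read; any mismatch means the replica is corrupt."""
    return make_checksums(block) == checksums

# A block is written with its checksums, then one replica bit-flips on disk.
data = b"some HDFS block payload" * 100
sums = make_checksums(data)

corrupted = bytearray(data)
corrupted[10] ^= 0xFF  # simulate a single-bit on-disk corruption

assert verify(data, sums)                   # healthy replica passes verification
assert not verify(bytes(corrupted), sums)   # corrupt replica is detected (and would be reported to the NameNode)
```

In practice, an operator can list files with corrupt blocks cluster-wide using `hdfs fsck <path> -list-corruptFileBlocks`.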
What are file permissions in HDFS?
If a particular file is 50 MB, will the HDFS block still consume 64 MB as the default size?
What are the two most commonly used commands in HDFS?
How do you create users in Hadoop HDFS?
What are the different file permissions in HDFS at the file and directory levels?
Clarify the difference between NAS and HDFS.
How do you delete a file from HDFS?
What is the difference between an HDFS block and an input split?
What is a TaskTracker?
If the source data gets updated every now and then, how will you synchronize the data in HDFS that was imported by Sqoop?
What is throughput? How does HDFS provide good throughput?
Which one is the master node in HDFS? Can it be commodity hardware?