What do you mean by block scanner in HDFS?
Answer / Rohit Kumar Kukreti
In HDFS, the block scanner is a background process that runs on every DataNode and periodically verifies the integrity of the data blocks stored on that node. It re-reads each block and compares its checksums against the checksums recorded when the block was written; if they do not match, the block is reported as corrupt to the NameNode, which can then re-replicate a healthy copy. This process helps ensure data integrity in HDFS.
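The core idea can be shown with a minimal Python sketch. This is not actual Hadoop code: the function names are illustrative, and only the 512-byte chunk size mirrors HDFS's default checksum granularity (CRC32 per chunk, stored in a sidecar .meta file).

```python
import zlib

CHUNK_SIZE = 512  # HDFS checksums data in 512-byte chunks by default

def compute_checksums(data: bytes) -> list[int]:
    """Compute a CRC32 checksum per chunk, as done at write time."""
    return [zlib.crc32(data[i:i + CHUNK_SIZE])
            for i in range(0, len(data), CHUNK_SIZE)]

def scan_block(data: bytes, stored_checksums: list[int]) -> bool:
    """Re-read the block and compare fresh checksums to the stored ones.
    Returns True if the block is intact, False if corruption is detected."""
    return compute_checksums(data) == stored_checksums

# Simulate a write followed by a later scan
block = b"some replicated block data" * 100
checksums = compute_checksums(block)        # these would live in the .meta file
print(scan_block(block, checksums))         # intact block passes

corrupted = b"X" + block[1:]                # flip the first byte
print(scan_block(corrupted, checksums))     # corruption is caught
```

In real HDFS the scan is throttled so it does not compete with client I/O, and a failed comparison triggers a corrupt-block report to the NameNode rather than a simple boolean.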