How is data or a file read in HDFS?
Answer / Saurabh Saran Diwakar
When a client reads a file from HDFS, it first contacts the NameNode, which returns the file's metadata: the list of blocks that make up the file and the DataNodes holding each block's replicas (along with details such as the replication factor). The client then reads each block directly from a nearby DataNode and reassembles the blocks, in order, into the file. The NameNode serves only metadata; the actual block data never flows through it.
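The read path described above can be sketched as a toy simulation. Note this is a minimal illustration, not the real Hadoop API: `ToyNameNode`, `ToyDataNode`, and the file path are hypothetical names invented for the sketch.

```python
class ToyDataNode:
    """Stores block data keyed by block id (stand-in for a real DataNode)."""
    def __init__(self):
        self.blocks = {}

    def read_block(self, block_id):
        return self.blocks[block_id]


class ToyNameNode:
    """Holds metadata only: which blocks make up a file, and which
    DataNodes hold a replica of each block. No file data lives here."""
    def __init__(self):
        self.metadata = {}  # filename -> list of (block_id, [datanode replicas])

    def get_block_locations(self, filename):
        return self.metadata[filename]


def hdfs_read(namenode, filename):
    # Step 1: ask the NameNode for the file's block list and locations.
    # Step 2: fetch each block from one of its DataNodes, in order,
    #         and concatenate the blocks to reconstruct the file.
    data = b""
    for block_id, replicas in namenode.get_block_locations(filename):
        datanode = replicas[0]  # a real client would prefer the nearest replica
        data += datanode.read_block(block_id)
    return data


# Usage: a two-block file replicated across two DataNodes.
nn = ToyNameNode()
dn1, dn2 = ToyDataNode(), ToyDataNode()
dn1.blocks["blk_1"] = b"hello "
dn2.blocks["blk_1"] = b"hello "   # second replica of block 1
dn2.blocks["blk_2"] = b"world"
nn.metadata["/user/demo/file.txt"] = [("blk_1", [dn1, dn2]),
                                      ("blk_2", [dn2])]
print(hdfs_read(nn, "/user/demo/file.txt"))  # b'hello world'
```

The key design point the sketch mirrors: the metadata lookup and the data transfer are separate conversations, which is why a single NameNode can coordinate a cluster whose aggregate read bandwidth far exceeds its own.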
What tools are available to send streaming data to HDFS?
Define HDFS and describe its respective components.
How is data or a file read in HDFS?
Why is HDFS used for applications with large data sets rather than many small files?
Why does HDFS perform replication, although it results in data redundancy in Hadoop?
When does the NameNode enter Safe Mode?
What do you mean by metadata in Hadoop?
What is a block in HDFS? What is the default size in Hadoop 1 and Hadoop 2? Can we change the block size?
How is data or a file written into HDFS?
Explain the difference between an input split and an HDFS block.
Name two of the most commonly used commands in HDFS.
What is the benefit of Distributed Cache? Why can't we just have the file in HDFS and have the application read it?