How is data or a file written into HDFS?
Answer / Dharmendra Kumar Kannaujiya
Data or a file is written into HDFS (Hadoop Distributed File System) in the following steps: 1. The client asks the NameNode to create the file; the NameNode checks permissions and records the new file in its namespace. 2. The client's data stream is split into blocks, 128 MB by default in Hadoop 2 (64 MB in Hadoop 1). 3. For each block, the NameNode returns a pipeline of DataNodes (three by default, one per replica); the client streams the block to the first DataNode, which forwards it to the second, and so on, with acknowledgements flowing back up the pipeline. 4. The NameNode keeps track of which blocks belong to which files and where their replicas are stored on the DataNodes.
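The write path above can be sketched in plain Python. This is a mock model for illustration only, not Hadoop code: `MockNameNode`, `split_into_blocks`, and `write_file` are invented names, and real HDFS uses rack-aware replica placement rather than the round-robin used here.

```python
BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB default block size in Hadoop 2
REPLICATION = 3                  # default replication factor

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return (offset, length) pairs: how the client chunks the stream."""
    blocks = []
    offset = 0
    while offset < file_size:
        length = min(block_size, file_size - offset)
        blocks.append((offset, length))
        offset += length
    return blocks

class MockNameNode:
    """Tracks the file -> block -> DataNode mapping (metadata only, no data)."""
    def __init__(self, datanodes):
        self.datanodes = datanodes
        self.block_map = {}  # filename -> list of (block_id, pipeline)

    def allocate_block(self, filename, block_id):
        # Pick REPLICATION DataNodes for the write pipeline. Round-robin
        # here for simplicity; real HDFS placement is rack-aware.
        start = block_id % len(self.datanodes)
        pipeline = [self.datanodes[(start + i) % len(self.datanodes)]
                    for i in range(REPLICATION)]
        self.block_map.setdefault(filename, []).append((block_id, pipeline))
        return pipeline

def write_file(namenode, filename, file_size):
    """Client-side write: one NameNode allocation per block, then a
    pipeline copy (client -> DN1 -> DN2 -> DN3, acks flow back)."""
    for block_id, (offset, length) in enumerate(split_into_blocks(file_size)):
        namenode.allocate_block(filename, block_id)
    return namenode.block_map[filename]

# A 300 MB file yields three blocks: 128 MB + 128 MB + 44 MB,
# each replicated to a pipeline of three DataNodes.
nn = MockNameNode(["dn1", "dn2", "dn3", "dn4"])
layout = write_file(nn, "/data/sample.txt", 300 * 1024 * 1024)
print(len(layout))  # 3 blocks
```

Note the division of labor the sketch mirrors: the NameNode only ever handles metadata (block IDs and locations), while the actual bytes flow directly between the client and the DataNodes.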
What is a block in HDFS? What is the default size in Hadoop 1 and Hadoop 2? Can we change the block size?
Can HDFS go wrong? If so, how?
Is namenode also a commodity?
What is HDFS?
Since the data is replicated thrice in HDFS, does it mean that any calculation done on one node will also be replicated on the other two?
What is the difference between an "HDFS Block" and an "Input Split"?
What are the main hdfs-site.xml properties?
How do you forcefully take the NameNode out of safe mode in HDFS?
What is throughput? How does HDFS get a good throughput?
Compare HBase vs HDFS.
What do you mean by meta information in HDFS? List the files related to metadata.
What are file permissions in HDFS? How does HDFS check permissions for files/directories?