Explain the HDFS “Write Once, Read Many” pattern.
Answer / Deep Prakash
The Write Once, Read Many (WORM) pattern is a core design assumption of HDFS rather than an optional feature: once a file is written and closed, its contents cannot be modified in place. The file can be read any number of times, appended to (supported since Hadoop 2.x), or deleted and rewritten as a whole, but existing bytes are never overwritten. This assumption simplifies data coherency (readers never see a block change underneath them), makes block replication cheap and safe, and enables high-throughput streaming reads. It fits HDFS's target workloads well: batch analytics and archival storage, where data is written once and processed many times.
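To make the semantics concrete, here is a minimal Python sketch that models write-once-read-many behavior. This is an illustrative model only, not the HDFS client API: the `WormFile` class and its methods are invented for this example.

```python
# Illustrative model of HDFS write-once-read-many semantics.
# NOT the HDFS API: WormFile and its methods are hypothetical.

class WormFile:
    def __init__(self):
        self._data = bytearray()
        self._closed = False  # once closed, in-place writes are forbidden

    def write(self, chunk: bytes) -> None:
        # Writing is only allowed while the original writer holds the
        # file open, mirroring HDFS's single-writer model.
        if self._closed:
            raise PermissionError(
                "file is closed: contents cannot be modified in place")
        self._data.extend(chunk)

    def close(self) -> None:
        self._closed = True

    def append(self, chunk: bytes) -> None:
        # HDFS (2.x+) permits appending new bytes to a closed file,
        # but never overwriting existing bytes.
        self._data.extend(chunk)

    def read(self) -> bytes:
        # Reads may happen any number of times.
        return bytes(self._data)


f = WormFile()
f.write(b"log line 1\n")
f.close()
f.append(b"log line 2\n")    # appending after close is allowed
try:
    f.write(b"oops")         # in-place modification is rejected
except PermissionError as e:
    print("rejected:", e)
print(f.read().decode(), end="")
```

The key point the sketch captures is the asymmetry: the read path stays unrestricted, while the write path is collapsed to "create once, optionally append," which is what lets HDFS avoid the locking and cache-invalidation machinery a general read-write file system needs.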