How is HDFS different from traditional file systems?
Answer / Shilpi Goswami
HDFS is a distributed file system designed to run on commodity hardware. Unlike a traditional file system, which stores data on a single machine, HDFS spreads data across many nodes in a cluster, giving applications high-throughput access to large datasets. It also stores files in large blocks (128 MB by default) rather than the small blocks of a local file system, and it provides fault tolerance by replicating each block across multiple machines so the data remains available when a node fails.
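The block-splitting and replication described above can be sketched with a toy model. This is a simplified illustration only: the block size and replication factor match common HDFS defaults, but the round-robin placement below is an assumption for demonstration, not Hadoop's actual rack-aware placement policy.

```python
# Toy illustration of how HDFS splits a file into fixed-size blocks
# and replicates each block across distinct DataNodes.
# NOTE: simplified sketch, not Hadoop's real placement algorithm.

BLOCK_SIZE = 128 * 1024 * 1024  # HDFS default block size (128 MB)
REPLICATION = 3                 # HDFS default replication factor

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the sizes of the blocks a file of file_size bytes occupies."""
    full, rem = divmod(file_size, block_size)
    return [block_size] * full + ([rem] if rem else [])

def place_replicas(num_blocks, nodes, replication=REPLICATION):
    """Assign each block to `replication` distinct nodes (toy round-robin policy)."""
    placement = {}
    for b in range(num_blocks):
        placement[b] = [nodes[(b + r) % len(nodes)] for r in range(replication)]
    return placement

# A 300 MB file becomes three blocks: 128 MB, 128 MB, and a 44 MB tail.
blocks = split_into_blocks(300 * 1024 * 1024)
nodes = ["dn1", "dn2", "dn3", "dn4"]  # hypothetical DataNode names
placement = place_replicas(len(blocks), nodes)
```

On a real cluster you could inspect actual block placement with `hdfs fsck <path> -files -blocks -locations`, which reports each block and the DataNodes holding its replicas.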
What is the rack awareness algorithm and why is it used in Hadoop?
How does HDFS ensure Data Integrity of data blocks stored in HDFS?
How data or file is read in HDFS?
How does the HDFS client divide a file into blocks while storing it in HDFS?
Does HDFS allow a client to read a file that is already opened for writing?
What is secondary namenode?
What is the command for archiving a group of files in HDFS?
What is Block in HDFS?
In HDFS, how does the NameNode determine which DataNode to write to?
What is throughput? How does HDFS provide high throughput?
Can HDFS fail? If so, how?