Why do we use HDFS for applications with large data sets and not when there are lots of small files?

Answer / chitti

The main reason is NameNode memory, not the replication factor. The NameNode keeps the metadata for every file, directory, and block in RAM (roughly 150 bytes per object), so millions of small files exhaust the NameNode's heap long before the DataNodes run out of disk. HDFS is also built for streaming reads of large files stored in large blocks (128 MB by default): with lots of small files, every read costs an extra NameNode lookup and DataNode connection, and MapReduce launches one map task per file, so scheduling overhead swamps the useful work. Large data sets in a few big files avoid all of this, which is why HDFS targets them.
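
To make the memory pressure concrete, here is a back-of-the-envelope sketch in Java. It is not part of any Hadoop API; it just assumes the commonly quoted figure of about 150 bytes of NameNode heap per namespace object (one per file plus one per block) and a 128 MB block size, so treat the output as an order-of-magnitude estimate.

public class NameNodeHeapEstimate {

    // One metadata object per file plus one per block it occupies.
    // 150 bytes per object is an assumed rule of thumb, not an exact figure.
    static long estimateHeapBytes(long files, long blocksPerFile) {
        return (files + files * blocksPerFile) * 150L;
    }

    public static void main(String[] args) {
        // The same 12.8 TB of data stored two ways (128 MB blocks assumed):
        long small = estimateHeapBytes(12_800_000L, 1); // 12.8M files of 1 MB each
        long large = estimateHeapBytes(1_000L, 100);    // 1,000 files of 12.8 GB each

        System.out.printf("12.8M x 1 MB files   : ~%.2f GB of NameNode heap%n", small / 1e9);
        System.out.printf("1,000 x 12.8 GB files: ~%.3f GB of NameNode heap%n", large / 1e9);
    }
}

Under these assumptions, the same 12.8 TB of data costs about 3.84 GB of NameNode heap as small files but only about 0.015 GB as large files, a gap of more than two orders of magnitude.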


More Apache Hadoop Interview Questions

What is a 'block' in HDFS?

What is Sqoop in Hadoop?

What is a 'key-value pair' in HDFS?

What is the difference between an RDBMS and Hadoop?

What is the default partitioner in Hadoop?

Explain the features of standalone (local) mode.

How would you use Map/Reduce to split a very large graph into smaller pieces and parallelize the computation of edges in response to fast, dynamic changes in the data? (Twitter)

How can one increase the replication factor to a desired value in Hadoop?

How do you keep an HDFS cluster balanced?

Have you ever used counters in Hadoop? Give an example scenario.

What is a heartbeat in HDFS?

What is Apache Hadoop? Why is Hadoop essential for every Big Data application?
