Why do we use HDFS for applications with large data sets and not when there are lots of small files?




Answer / chitti

HDFS is designed for large files, not for lots of small ones, mainly because of the NameNode. The NameNode keeps the metadata for every file, directory, and block in RAM (roughly 150 bytes per object), so millions of small files exhaust its heap long before the DataNodes run out of disk. Small files also defeat the purpose of the large default block size (128 MB): every file occupies at least one block and typically spawns its own map task, so a job over many small files launches a huge number of short-lived tasks. Replication factor is not the reason; it applies equally to large and small files.
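The NameNode memory argument can be sketched with some back-of-the-envelope arithmetic. The ~150-bytes-per-object figure below is a commonly cited rule of thumb, not an exact measurement, and `namenode_bytes` is a hypothetical helper written for this illustration:

```python
BYTES_PER_OBJECT = 150          # assumed rule-of-thumb metadata cost per file/block
BLOCK_SIZE = 128 * 1024 ** 2    # default HDFS block size: 128 MB

def namenode_bytes(num_files: int, file_size: int) -> int:
    """Approximate NameNode heap used by num_files files of file_size bytes each."""
    blocks_per_file = max(1, -(-file_size // BLOCK_SIZE))  # ceiling division
    # one metadata object for the file entry plus one per block
    return num_files * (1 + blocks_per_file) * BYTES_PER_OBJECT

# Roughly 1 TB stored as 10 million 100 KB files vs. eight 128 GB files
small = namenode_bytes(10_000_000, 100 * 1024)
large = namenode_bytes(8, 128 * 1024 ** 3)
print(f"10M small files: ~{small / 1024**2:.0f} MB of NameNode heap")
print(f"8 large files:   ~{large / 1024:.0f} KB of NameNode heap")
```

The same 1 TB of data costs gigabytes of NameNode heap when split into small files, but only about a megabyte when stored as a handful of large files.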



More Apache Hadoop Interview Questions

What is safe mode in Hadoop?



What factors does the block size depend on before a file is created?



Explain the features of the fully distributed mode?



Cloudera already provides a preconfigured cluster, but can we also set up our own cluster on Ubuntu?



Why can't we do aggregation (e.g. addition) in the mapper? Why do we need a reducer for that?







Explain the use of the .media class?



What are the port numbers of the NameNode, JobTracker, and TaskTracker?



Explain InputFormat?



What is Apache Hadoop?



How is Hadoop different from other data processing tools?



Have you ever built a production process in Hadoop? If yes, what was your procedure when a Hadoop job failed for any reason?



What are combiners and what is their purpose?


