How would you use MapReduce to split a very large graph into smaller pieces and parallelize the computation over its edges when the underlying data changes rapidly?
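
A minimal sketch of one possible approach, assuming the graph is stored as a tab-separated edge list ("src<TAB>dst" per line) and that a simple hash partition by source vertex is acceptable; the class name EdgePartitionJob and the partition count are illustrative, not from any particular codebase. The mapper assigns each edge to a partition, so every reducer receives one subgraph and can run its edge computation in parallel; because the hash is deterministic, files containing only newly added or changed edges can be fed through the same job incrementally instead of repartitioning the whole graph.

// Illustrative sketch: hash-partition a large edge list with Hadoop MapReduce.
// Assumes input lines of the form "<srcVertexId>\t<dstVertexId>".
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class EdgePartitionJob {

    // Number of graph partitions; illustrative value, tune to cluster size.
    static final int NUM_PARTITIONS = 64;

    // Map each edge to the partition that owns its source vertex.
    public static class EdgeMapper
            extends Mapper<LongWritable, Text, IntWritable, Text> {
        private final IntWritable partition = new IntWritable();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            String[] edge = line.toString().split("\t");
            if (edge.length != 2) {
                return; // skip malformed lines
            }
            // Deterministic hash partitioning: the same source vertex always
            // lands in the same partition, so freshly arrived (dynamic) edges
            // can be processed incrementally without touching old partitions.
            int p = (edge[0].hashCode() & Integer.MAX_VALUE) % NUM_PARTITIONS;
            partition.set(p);
            context.write(partition, line);
        }
    }

    // Each reducer receives one subgraph partition; here it only counts
    // edges per partition, but any per-partition edge computation fits here.
    public static class PartitionReducer
            extends Reducer<IntWritable, Text, IntWritable, LongWritable> {
        @Override
        protected void reduce(IntWritable partition, Iterable<Text> edges,
                              Context context)
                throws IOException, InterruptedException {
            long edgeCount = 0;
            for (Text edge : edges) {
                edgeCount++;
            }
            context.write(partition, new LongWritable(edgeCount));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "edge-partition");
        job.setJarByClass(EdgePartitionJob.class);
        job.setMapperClass(EdgeMapper.class);
        job.setReducerClass(PartitionReducer.class);
        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(LongWritable.class);
        job.setNumReduceTasks(NUM_PARTITIONS);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. only the new edge files
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

If edge locality matters more than simple load balance, the plain hash can be swapped for a custom Partitioner or a smarter scheme (range-based or METIS-style partitioning) without changing the overall job structure.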


More Apache Hadoop Interview Questions

What is a Hadoop custom partitioner?

What are the different types of filesystems?

Where do you specify the Mapper implementation?

What is the use of combiners in the Hadoop framework?

What is the spill factor with respect to RAM?

How do you write a custom key class?

What are InputSplit and RecordReader?

What are the different types of znodes?

How would you tackle counting words in several text documents?

What will you do when the NameNode is down?

Is the Secondary NameNode a substitute for the NameNode?

What is the salary of a Hadoop developer?
