Why Hadoop MapReduce?
What are reduce-only jobs?
What is a "reducer" in Hadoop?
Where is sorting done in a Hadoop MapReduce job?
How does the Hadoop classpath play a vital role in starting or stopping Hadoop daemons?
When are the reducers started in a MapReduce job?
Can the number of combiners be changed in MapReduce?
Does the MapReduce programming model provide a way for reducers to communicate with each other? In a MapReduce job, can a reducer communicate with another reducer?
What are combiners? When should I use a combiner in my MapReduce job?
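To make the combiner question concrete, here is a minimal, framework-free sketch of what a combiner does: it pre-aggregates each mapper's output locally so that less data crosses the network to the reducers. This is plain Java, not the Hadoop API; all class and method names are illustrative. A combiner is safe to use only when the reduce function is associative and commutative (like summing counts), because the framework may run it zero or more times.

```java
import java.util.*;

// Illustrative sketch of map -> combine -> reduce for word counting.
// Not Hadoop code: it just models why a combiner shrinks shuffle traffic.
public class CombinerSketch {
    // "Map" phase: emit (word, 1) pairs for one input split.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.split("\\s+")) {
            out.add(new AbstractMap.SimpleEntry<>(word, 1));
        }
        return out;
    }

    // "Combine" phase: sum counts locally, per mapper, before the shuffle.
    static Map<String, Integer> combine(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> partial = new HashMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            partial.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return partial;
    }

    // "Reduce" phase: merge the pre-aggregated partials from all mappers.
    static Map<String, Integer> reduce(List<Map<String, Integer>> partials) {
        Map<String, Integer> result = new HashMap<>();
        for (Map<String, Integer> partial : partials) {
            partial.forEach((k, v) -> result.merge(k, v, Integer::sum));
        }
        return result;
    }

    public static void main(String[] args) {
        // Two simulated input splits, one per "mapper"; each is combined
        // locally before the final reduce merges them.
        Map<String, Integer> counts = reduce(Arrays.asList(
                combine(map("the cat sat on the mat")),
                combine(map("the dog sat"))));
        System.out.println(counts.get("the")); // 3
        System.out.println(counts.get("sat")); // 2
    }
}
```

In real Hadoop code the same idea is enabled with `job.setCombinerClass(...)`, typically reusing the reducer class when the operation is a pure sum.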
Is the output of the mapper or the output of the partitioner written to local disk?
What is the Hadoop MapReduce API's contract for a key and value class?
How will you submit extra files or data (like JARs, static files, etc.) for a MapReduce job at runtime?
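One common way to ship extra files is the generic options that Hadoop's `GenericOptionsParser`/`ToolRunner` recognize at submission time. The JAR, class, and file names below are hypothetical, shown only to illustrate the flags; the listed files are localized on every task node via the distributed cache.

```shell
# Hypothetical job JAR, main class, and paths; only the flags are real.
# -files    ships side files (e.g. lookup tables) to each task node
# -libjars  adds extra JARs to the tasks' classpath
# -archives ships archives, which are unpacked on each task node
hadoop jar my-job.jar com.example.MyJob \
  -files lookup.txt \
  -libjars extra-lib.jar \
  -archives dictionaries.zip \
  /input /output
```

These options only work if the job's driver parses arguments through `ToolRunner`/`GenericOptionsParser`.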
What platform and Java version is required to run Hadoop?