Explain the sequence of execution of the MapReduce components: InputSplit, RecordReader, map, combiner, partitioner, sort, shuffle, and reduce.
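To make the ordering concrete, here is a hedged, self-contained Python sketch of that pipeline for a word count. It simulates the Hadoop flow (split → RecordReader → map → combiner → partitioner → sort/shuffle → reduce) in plain Python; the function names and the two-reducer setup are illustrative, not Hadoop APIs:

```python
from collections import defaultdict

def record_reader(split):
    """Turn a raw text split into (offset, line) records,
    analogous to a line-oriented RecordReader."""
    offset = 0
    for line in split.splitlines():
        yield offset, line
        offset += len(line) + 1

def mapper(_offset, line):
    # Map phase: emit (word, 1) for every word in the record.
    for word in line.split():
        yield word, 1

def combiner(pairs):
    """Optional local pre-aggregation of one mapper's output."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return counts.items()

def partitioner(key, num_reducers):
    """Hash partitioning: decides which reducer receives the key."""
    return hash(key) % num_reducers

def run_job(splits, num_reducers=2):
    # Map side: RecordReader -> map -> combine -> partition.
    partitions = [[] for _ in range(num_reducers)]
    for split in splits:
        map_output = []
        for offset, line in record_reader(split):
            map_output.extend(mapper(offset, line))
        for key, value in combiner(map_output):
            partitions[partitioner(key, num_reducers)].append((key, value))
    # Reduce side: shuffle has routed pairs to partitions; each reducer
    # sorts its input by key, groups values, and reduces each group.
    result = {}
    for part in partitions:
        part.sort()  # sort phase: keys reach the reduce function in order
        grouped = defaultdict(list)
        for key, value in part:
            grouped[key].append(value)
        for key, values in grouped.items():
            result[key] = sum(values)  # reduce: sum the per-word counts
    return result
```

Running `run_job(["a b a", "b c"])` yields `{"a": 2, "b": 2, "c": 1}`; note that the combiner already collapses the two `a`s inside the first split before anything crosses the (simulated) network.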
How do you specify more than one input directory for a MapReduce job?
Where is sorting done in a Hadoop MapReduce job?
How do you set the number of mappers and reducers for a MapReduce job?
What is the relationship between MapReduce and Hive?
What is a mapper in MapReduce?
What are the main configuration parameters a user needs to specify to run a MapReduce job?
For a job in Hadoop, is it possible to change the number of mappers to be created?
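The number of mappers is not set directly; it follows from the number of input splits, which can be influenced by tuning the minimum and maximum split sizes around the formula splitSize = max(minSize, min(maxSize, blockSize)). A hedged Python sketch of that calculation (sizes here are illustrative byte counts, not a Hadoop API):

```python
def compute_splits(file_size, block_size, min_split=1, max_split=None):
    """Sketch of split computation: splitSize = max(minSize,
    min(maxSize, blockSize)). One map task runs per split."""
    if max_split is None:
        max_split = float("inf")
    split_size = max(min_split, min(max_split, block_size))
    splits = []
    offset = 0
    while offset < file_size:
        length = min(split_size, file_size - offset)
        splits.append((offset, length))
        offset += length
    return splits
```

With a 300-unit file and a 128-unit block size, `compute_splits(300, 128)` produces three splits, hence three mappers; raising `min_split` to 256 shrinks that to two, which is how the mapper count is changed indirectly.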
Describe the phases of the Reducer in detail.
What does conf.setMapperClass() do in MapReduce?
What is the default maximum number of attempts for map and reduce tasks?
Define Writable data types in Hadoop MapReduce.
Is it legal to set the number of reduce tasks to zero? Where will the output be stored in that case?
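Yes, a map-only job is legal; with zero reducers, each mapper's output is written directly to the output directory, and no combiner, partitioner, sort, or shuffle runs. A hedged Python sketch of that behavior (the `part-m-NNNNN` file naming mirrors Hadoop's convention; everything else is a simulation):

```python
def run_map_only_job(splits):
    """Sketch of a job with numReduceTasks == 0: each mapper writes
    its output straight to the output directory, one part-m file per
    map task, with records kept in map-emission order (no sorting)."""
    output_files = []
    for task_id, split in enumerate(splits):
        records = [(word, 1) for word in split.split()]
        output_files.append((f"part-m-{task_id:05d}", records))
    return output_files
```

Running `run_map_only_job(["b a"])` returns one file whose records are still in emission order, `[("b", 1), ("a", 1)]`, illustrating that the sort phase never happened.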