Answer Posted / Manvendra Kumar
"Spark can run on top of a Hadoop cluster by using YARN (Yet Another Resource Negotiator) as the resource manager. In this setup, YARN manages resources for both Hadoop MapReduce and Spark jobs. When a Spark job is submitted to the cluster, it is executed by a Spark executor launched on worker nodes that communicate with the YARN ResourceManager."n