Answer Posted / Neeraj Sahu
The runtime architecture of Apache Spark consists of three main components:

(1) Driver Program: the main process that coordinates the application. It creates the SparkContext, requests executors from the cluster manager, and schedules tasks onto them.

(2) Executors: JVM processes running on worker nodes that execute the tasks assigned by the driver, cache data in memory or on disk, and report results and status back to the driver. Executors also exchange shuffle data with one another.

(3) Cluster Manager: the external service that allocates resources (CPU cores and memory) across worker nodes. Spark can run on several cluster managers, including Hadoop YARN, Apache Mesos, Kubernetes, or its own Standalone mode.
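The choice of cluster manager is made when submitting the application via the `--master` flag of `spark-submit`. A minimal sketch (the application name `my_app.py` and resource sizes are illustrative, and running it requires a Spark installation):

```shell
# Run locally: driver and executors in one JVM, using 4 threads
spark-submit --master "local[4]" my_app.py

# Run on a Spark Standalone cluster manager
spark-submit --master spark://master-host:7077 \
  --executor-memory 2G --total-executor-cores 8 my_app.py

# Run on Hadoop YARN; YARN allocates containers for the executors
spark-submit --master yarn --deploy-mode cluster \
  --num-executors 4 --executor-cores 2 my_app.py
```

In `cluster` deploy mode the driver itself also runs inside the cluster, while in `client` mode (the default) it runs on the machine that invoked `spark-submit`.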