Answer Posted / Supriya Suman
"RDD Fault Tolerance" in Apache Spark ensures that data processing can continue even if one or more worker nodes fail. By default, Spark stores each RDD partition on multiple replicas across different nodes to ensure high availability. When a task fails, the system restores the data from replicas and recomputes the failed task.