Answer Posted / Apurav Garg
Apache Spark achieves fault tolerance primarily through RDD lineage, not by storing multiple copies of data (data replication is a property of storage layers like HDFS, not of Spark itself). Each RDD records the sequence of transformations used to build it from its source, forming a dependency graph. When a task fails or a cached partition is lost, Spark reschedules the task on another node and recomputes only the lost partition by replaying that lineage. Replication is optional: persisting an RDD with a replicated storage level (e.g. MEMORY_ONLY_2) or checkpointing to reliable storage can shorten recovery when the lineage chain grows long.
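The lineage idea can be sketched in plain Python. This is a conceptual illustration, not the actual Spark API: a minimal `LineageRDD` class (a hypothetical name) records its parent and transformation so a "lost" partition can be recomputed rather than restored from a replica.

```python
# Conceptual sketch of lineage-based recovery (NOT the real Spark API).
class LineageRDD:
    def __init__(self, parent=None, transform=None, source=None):
        self.parent = parent        # upstream RDD in the lineage graph
        self.transform = transform  # function applied to the parent's data
        self.source = source        # base data for the root RDD
        self._cache = None          # simulated in-memory partition

    def map(self, fn):
        # Record the transformation lazily; nothing is computed yet.
        return LineageRDD(parent=self,
                          transform=lambda data: [fn(x) for x in data])

    def compute(self):
        # If the cached partition is gone, rebuild it from lineage.
        if self._cache is None:
            if self.parent is None:
                self._cache = list(self.source)
            else:
                self._cache = self.transform(self.parent.compute())
        return self._cache

    def lose_partition(self):
        # Simulate a node failure wiping this partition from memory.
        self._cache = None


base = LineageRDD(source=[1, 2, 3])
doubled = base.map(lambda x: x * 2)
print(doubled.compute())  # [2, 4, 6]
doubled.lose_partition()  # "failure": cached data is gone
print(doubled.compute())  # recomputed from lineage, no replica needed
```

The point of the sketch: recovery needs no second copy of the data, only the recipe for rebuilding it, which is why lineage-based recovery is cheaper in memory than replication.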