Answer Posted / Ankur Mittal
RDDs (Resilient Distributed Datasets) in Apache Spark are immutable by design for fault tolerance. Once an RDD is created, its data cannot be modified in place; transformations such as `map` or `filter` always return a new RDD. Because each RDD also records the lineage of transformations that produced it, a failed task can be re-executed on another node and deterministically recompute exactly the same data.
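A toy sketch of this idea in plain Python (this is an illustration of immutability plus lineage, not Spark's actual API; the `ToyRDD` class and its methods are invented for the example):

```python
# Toy model of RDD immutability and lineage (illustration only,
# not Spark's real API): each "RDD" keeps a pointer to its parent
# and the transformation applied, and never mutates data in place.
class ToyRDD:
    def __init__(self, data=None, parent=None, fn=None):
        self._data = data        # only the source RDD holds raw data
        self._parent = parent    # lineage pointer
        self._fn = fn            # transformation to replay

    def map(self, fn):
        # Returns a NEW ToyRDD; the original is left untouched.
        return ToyRDD(parent=self, fn=fn)

    def compute(self):
        # Recompute from lineage -- this replay is what lets Spark
        # re-execute a lost partition on another node after a failure.
        if self._parent is None:
            return list(self._data)
        return [self._fn(x) for x in self._parent.compute()]

base = ToyRDD(data=[1, 2, 3])
doubled = base.map(lambda x: x * 2)
print(doubled.compute())  # [2, 4, 6]
print(base.compute())     # [1, 2, 3] -- base is unchanged
```

In real Spark the equivalent would be `sc.parallelize([1, 2, 3]).map(lambda x: x * 2)`, which likewise leaves the source RDD untouched and returns a new one.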