Answer Posted / Dharm Prakash Pawan
PySpark RDDs have no store() or continue() methods; the pair being asked about is persist() and cache(). persist() stores an RDD (Resilient Distributed Dataset) at a storage level you choose (memory, disk, or both), so later actions reuse the data instead of recomputing the whole lineage. cache() is simply shorthand for persist() with the default MEMORY_ONLY storage level, so the difference is only whether you can pick the storage level explicitly.