What is the job of store() and continue()?
Answer / Dharm Prakash Pawan
PySpark's public API has no store() or continue() methods; the question is almost certainly about cache() and persist(). persist() marks an RDD or DataFrame to be kept in memory and/or on disk after its first computation, so later actions reuse the stored partitions instead of recomputing the whole lineage. cache() is simply persist() with the default storage level (MEMORY_ONLY for RDDs, MEMORY_AND_DISK for DataFrames). Note that continue is a Python loop-control keyword, not a Spark function, and it cannot be used to chain transformations such as map() or reduce().
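A minimal sketch of the difference, assuming a local Spark installation is available (`pip install pyspark`); the app name and dataset are illustrative only:

```python
# Hypothetical local demo of cache() vs persist() in PySpark.
from pyspark.sql import SparkSession
from pyspark import StorageLevel

spark = (SparkSession.builder
         .master("local[*]")
         .appName("cache-vs-persist-demo")  # illustrative name
         .getOrCreate())

rdd = spark.sparkContext.parallelize(range(1_000_000))
squares = rdd.map(lambda x: x * x)   # transformation: nothing runs yet

# cache() == persist(StorageLevel.MEMORY_ONLY) for RDDs.
squares.cache()
squares.count()   # first action computes the map() and stores the partitions
squares.sum()     # second action reuses the cached partitions, no recompute

# persist() lets you choose the storage level explicitly,
# e.g. spill partitions that do not fit in memory to disk:
squares.unpersist()
squares.persist(StorageLevel.MEMORY_AND_DISK)

spark.stop()
```

In practice, cache() is the common case and persist() is reserved for tuning memory pressure (serialized or disk-backed levels).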
What file systems does Spark support?
What is parallelize in pyspark?
Why do we use pyspark?
What optimizations can a developer make while working with Spark?
What is pyspark sql?
Is pyspark faster than pandas?
What is Lazy Evaluation?
What is pyspark rdd?
What is YARN?
What is rdd in pyspark?
What is map in pyspark?
What is the difference between spark and pyspark?