Explain the process to trigger automatic clean-up in Spark to manage accumulated metadata.
Answer Posted / Sandeep Nigam
In Apache Spark, automatic cleanup of accumulated metadata (shuffle files, RDD state, broadcast variables, and accumulators) is handled by the ContextCleaner, which is enabled by default through `spark.cleaner.referenceTracking`. The cleaner uses weak references to detect driver-side objects that have gone out of scope and asynchronously removes their associated state from the executors. Because cleanup is only triggered when the JVM garbage collector runs, `spark.cleaner.periodicGC.interval` (default `30min`) controls how often Spark forces a GC on the driver; lowering it on long-running applications reclaims metadata more aggressively. Older Spark 1.x releases also offered a time-based `spark.cleaner.ttl` setting, but it was removed in Spark 2.0. Note that `spark.eventLog.enabled` controls event logging for the history server and is unrelated to metadata cleanup.
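As a minimal sketch, these properties can be set at submission time. The class name, JAR name, and the 15-minute interval below are placeholder examples, not recommendations:

```shell
# Force a driver-side GC every 15 minutes so the ContextCleaner
# can detect out-of-scope RDDs, shuffles, and broadcasts sooner.
spark-submit \
  --conf spark.cleaner.referenceTracking=true \
  --conf spark.cleaner.periodicGC.interval=15min \
  --class com.example.MyApp \
  my-application.jar
```

The same properties can equally be set on `SparkConf` in application code or in `spark-defaults.conf`.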