How can you trigger automatic clean-ups in Spark to handle accumulated metadata?
Answer Posted / Mukesh Kumar Gangwar
Automatic clean-up of accumulated metadata in Apache Spark is handled by the ContextCleaner, which runs on the driver and removes state for out-of-scope RDDs, shuffles, broadcast variables, and accumulators. It is enabled by default through the "spark.cleaner.referenceTracking" configuration property (default: true). Because cleanup is driven by JVM garbage collection on the driver, Spark also forces a periodic GC via "spark.cleaner.periodicGC.interval" (default: 30min) so that clean-ups still fire on long-running applications where the driver heap rarely fills. In very old Spark versions (before 2.0), the "spark.cleaner.ttl" property could set a time-to-live after which metadata was forgotten, but it has since been removed in favor of the reference-tracking cleaner.
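A minimal configuration sketch for the cleaner (these are standard Spark properties; the interval values below are illustrative and should be tuned to your workload):

```properties
# spark-defaults.conf (sketch)

# Enable reference-tracking clean-up of RDD/shuffle/broadcast metadata (on by default)
spark.cleaner.referenceTracking              true

# Block the cleaner thread until each clean-up task finishes (on by default)
spark.cleaner.referenceTracking.blocking     true

# Also clean checkpoint files when the reference goes out of scope (off by default)
spark.cleaner.referenceTracking.cleanCheckpoints  true

# Force a driver-side GC at this interval so clean-ups trigger on long-running apps
spark.cleaner.periodicGC.interval            30min
```

The same properties can be set programmatically on a SparkConf or passed with `--conf` to spark-submit before the SparkContext is created.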