By Default, how many partitions are created in RDD in Apache Spark?
Answer Posted / Deepak Diwakar
By default, when an RDD is created from a local file with `textFile()`, Spark uses `defaultMinPartitions`, which is `min(defaultParallelism, 2)` — so typically 2 partitions unless you pass an explicit `minPartitions` argument. An RDD created with `parallelize()` gets `defaultParallelism` partitions (roughly the number of cores available, or the cluster's total cores). When reading from HDFS, Spark creates one partition per HDFS block of the file (with the default block size of 128 MB, a 1 GB file yields 8 partitions). `wholeTextFiles()` is different again: it produces one record per file and packs small files together into fewer partitions.