Answer Posted / Vivek Prasad
RDDs in Spark are split into partitions that are distributed across the cluster's nodes. A plain RDD has no partitioner attached by default; its partition count simply follows the input splits or spark.default.parallelism. For key-value (pair) RDDs, shuffle operations such as reduceByKey and groupByKey use HashPartitioner by default, which assigns each record to a partition based on the hash of its key (sortByKey uses RangePartitioner instead). Note that hash partitioning does not guarantee equal-sized partitions; skewed keys can produce uneven ones.
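A minimal sketch of the idea behind HashPartitioner: the target partition is the key's hash taken modulo the number of partitions, so identical keys always land in the same partition. This is plain Python for illustration only (Spark's actual implementation is in Scala and uses Java's hashCode, which differs from Python's hash()):

```python
def get_partition(key, num_partitions):
    """Assign a key to a partition, HashPartitioner-style.

    Spark sends null keys to partition 0; Python's % operator
    already yields a non-negative result for a positive modulus.
    """
    if key is None:
        return 0
    return hash(key) % num_partitions

# Group some (key, value) pairs into 4 partitions
pairs = [("a", 1), ("b", 2), ("a", 3), ("c", 4)]
num_partitions = 4
partitions = {i: [] for i in range(num_partitions)}
for k, v in pairs:
    partitions[get_partition(k, num_partitions)].append((k, v))

# Records with the same key ("a") end up in the same partition,
# which is what lets reduceByKey combine them without a further shuffle.
```

Because all records sharing a key are co-located, a subsequent reduceByKey on an already-hash-partitioned RDD can avoid an extra shuffle.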