Answer Posted / Shadab Faisal
"aggregateByKey is a Spark pair-RDD function that performs a custom aggregation of the values for each key. It takes a zero value, a seqOp that merges a value into the accumulator within a partition, and a combOp that merges accumulators across partitions, and returns a new RDD of (key, aggregated value) pairs. For example, you can use aggregateByKey to count the occurrences of each word in a text file like so: `textRDD.flatMap(_.split(" ")).map(word => (word, 1)).aggregateByKey(0)(_ + _, _ + _)`"
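A fuller runnable sketch of the word-count example above (a minimal sketch assuming a local Spark setup; the input lines are illustrative stand-ins for a text file read with `sc.textFile`):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object AggregateByKeyExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("AggregateByKeyExample").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Stand-in for a text file; sc.textFile("path") would yield the same RDD of lines.
    val textRDD = sc.parallelize(Seq("spark makes rdds", "spark aggregates rdds"))

    val counts = textRDD
      .flatMap(_.split(" "))   // split each line into words
      .map(word => (word, 1))  // pair each word with an initial count of 1
      .aggregateByKey(0)(
        _ + _,                 // seqOp: add a value into the per-partition accumulator
        _ + _                  // combOp: merge accumulators from different partitions
      )

    counts.collect().foreach(println)
    sc.stop()
  }
}
```

For a simple sum like this, `reduceByKey(_ + _)` gives the same result; aggregateByKey earns its keep when the accumulator type differs from the value type (for example, building a Set or a (sum, count) pair per key).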