Answer Posted / Awashesh Kumar Tiwari
Transformations in Apache Spark are operations that return a new RDD rather than executing immediately; they build up a lineage of computations without running anything on the cluster. Examples include map(), filter(), flatMap(), and reduceByKey(). Because transformations are lazy, nothing is computed until an action (e.g., count(), collect()) is called, at which point Spark executes the whole lineage.