Answer Posted / Raza Hussain Khan
To process big data with Apache Spark, you first create a SparkContext (or, in modern Spark, a SparkSession), then load your data into a Resilient Distributed Dataset (RDD) and apply transformations and actions to it. Transformations are lazy and return a new RDD; actions trigger the actual computation and return a value to the driver.