Answer Posted / Anuj Mishra
A Spark program reads data, transforms it using operations such as map, filter, reduce, and join, and then writes the results back to a storage system like HDFS. The Resilient Distributed Dataset (RDD) is Spark's core abstraction: a fault-tolerant, partitioned collection of records distributed across the cluster. Transformations on an RDD are lazy; they build a lineage graph and are only executed when an action (such as count or saveAsTextFile) is triggered, and the lineage lets Spark recompute lost partitions after a failure.
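As a minimal illustration of the map/filter/reduce pattern described above, here is a word-count-style pipeline in plain Python (using only the standard library rather than PySpark, so the data and function names are purely illustrative). In a real Spark job, each of these steps would correspond to an RDD transformation such as flatMap, filter, or reduceByKey, executed in parallel across partitions.

```python
from functools import reduce

# Illustrative input; in Spark this would come from e.g. sc.textFile("hdfs://...")
lines = ["spark reads data", "spark transforms data"]

# map/flatMap step: split each line into words
words = [w for line in lines for w in line.split()]

# filter step: keep only tokens longer than four characters
long_words = [w for w in words if len(w) > 4]

# reduce(ByKey) step: count occurrences per word
counts = reduce(
    lambda acc, w: {**acc, w: acc.get(w, 0) + 1},
    long_words,
    {},
)

print(counts)  # {'spark': 2, 'reads': 1, 'transforms': 1}
```

In Spark itself, the same pipeline would run distributed and lazily; nothing executes until an action like `collect()` or a write to HDFS is called.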