Answer Posted / Rishu Chaudhary
Spark is designed to process large volumes of data quickly. It achieves this through in-memory caching, lazy evaluation (transformations are only recorded until an action actually requires a result, so unnecessary work is skipped), and resilient distributed datasets (RDDs), whose lineage information lets lost partitions be recomputed, providing fault tolerance.
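The lazy-evaluation idea can be sketched in plain Python using generators. This is only a conceptual analogy, not the real PySpark API: the generator functions here stand in for Spark transformations (`map`, `filter`), and consuming the generator stands in for an action (`collect`, `sum`).

```python
def lazy_map(func, data):
    # "Transformation": yields results on demand, computes nothing up front.
    for item in data:
        yield func(item)

def lazy_filter(pred, data):
    # "Transformation": likewise lazy, only filters as items are pulled.
    for item in data:
        if pred(item):
            yield item

numbers = range(1, 6)

# Build the pipeline: no computation has happened yet, just like a
# Spark lineage graph of chained transformations.
pipeline = lazy_filter(lambda x: x % 2 == 0, lazy_map(lambda x: x * x, numbers))

# "Action": iterating finally drives the whole pipeline end to end.
total = sum(pipeline)
print(total)  # -> 20  (the even squares 4 and 16)
```

Real Spark works the same way at cluster scale: chaining `rdd.map(...).filter(...)` builds a plan, and nothing executes until an action such as `collect()` or `count()` triggers it.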