Answer Posted / Mukesh Chaurasia
There are multiple ways to create RDDs (Resilient Distributed Datasets) in Spark. Some common methods include:
* parallelize(iterable): Creates an RDD from a local collection.
* textFile(path): Reads text files and converts them into an RDD of lines.
* wholeTextFiles(path): Reads text files as key-value pairs, where keys are file paths and values are the contents of the files.
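The three methods above can be sketched as follows in Scala; this is a minimal illustration, and the file paths (`data/input.txt`, `data/`) are placeholders you would replace with real locations:

```scala
import org.apache.spark.sql.SparkSession

object RddExamples {
  def main(args: Array[String]): Unit = {
    // Local SparkSession for demonstration; master/appName are example settings
    val spark = SparkSession.builder()
      .appName("rdd-examples")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // parallelize: build an RDD from a local in-memory collection
    val nums = sc.parallelize(Seq(1, 2, 3, 4, 5))

    // textFile: each element of the RDD is one line of the file(s)
    val lines = sc.textFile("data/input.txt")   // placeholder path

    // wholeTextFiles: each element is a (filePath, fileContent) pair
    val files = sc.wholeTextFiles("data/")      // placeholder path

    println(nums.count())   // number of elements in the parallelized RDD
    spark.stop()
  }
}
```

Note that `textFile` is preferable for line-oriented processing of large files, while `wholeTextFiles` suits many small files whose entire contents must stay together.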