What are transformations in Spark?
Answer / Awashesh Kumar Tiwari
Transformations in Apache Spark are operations on an RDD that return a new RDD; they describe how the data should be manipulated without immediately executing anything on the cluster. Examples include map(), filter(), flatMap(), and reduceByKey(). Because transformations are lazy, the computation only runs when it is followed by an action such as count() or collect().
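The lazy behaviour described above can be sketched with a small toy class in plain Python (this is an illustration of the concept only, not Spark's actual RDD implementation): transformations merely record pending work, and the action collect() is what finally executes the pipeline. In real PySpark the equivalent would be sc.parallelize([...]).map(...).filter(...).collect().

```python
# Toy model of lazy RDD transformations (illustration only, not Spark itself):
# each transformation records a step; nothing runs until an action is called.
class ToyRDD:
    def __init__(self, data, steps=None):
        self.data = data
        self.steps = steps or []          # pending transformations, not yet run

    def map(self, f):                     # transformation: returns a new ToyRDD
        return ToyRDD(self.data, self.steps + [("map", f)])

    def filter(self, p):                  # transformation: also lazy
        return ToyRDD(self.data, self.steps + [("filter", p)])

    def collect(self):                    # action: triggers the whole pipeline
        out = self.data
        for kind, fn in self.steps:
            if kind == "map":
                out = [fn(x) for x in out]
            else:
                out = [x for x in out if fn(x)]
        return out

rdd = ToyRDD([1, 2, 3, 4, 5])
pipeline = rdd.map(lambda x: x * 2).filter(lambda x: x > 4)  # nothing executed yet
print(pipeline.collect())  # only now does the work run: [6, 8, 10]
```

The key point the toy captures is that chaining map() and filter() is cheap (it only builds a plan), while collect() pays the cost of the computation; Spark applies the same idea at cluster scale.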
What is executor memory in a Spark application?
What are the various libraries available on top of Apache Spark?
Do we need Scala for Spark?
What are the ways in which Apache Spark handles accumulated metadata?
What is a stage and a task in Spark?
Explain the pipe() operation. How does it write the result to standard output?
How do we create RDDs in Spark?
How can you use Akka with Spark?
What is Spark reduceByKey?
Which language is not supported by Spark?
List the advantages of Parquet files.
What is meant by in-memory processing in Spark?