What are transformations in Spark?
Answer / Mitesh Shandilaya
Transformations in Apache Spark are operations that create a new Dataset or RDD from an existing one. Transformations are lazy: they do not trigger computation, but instead return a new Dataset or RDD that records the operation to be applied. Once a series of transformations has been specified, the user calls an action (such as collect() or count()) to actually execute the pipeline and obtain results.
What is Spark MLlib?
How does the pipe operation write its result to standard output in Apache Spark?
Different Running Modes of Apache Spark
In a given Spark program, how will you identify whether a given operation is a transformation or an action?
Is Spark secure?
What is a tuple in Spark?
What are the file formats supported by Spark?
What is the difference between Spark and Kafka?
Is Spark based on Hadoop?
What is the difference between persist() and cache()?
Can an RDD be shared between SparkContexts?
Define Partition and Partitioner in Apache Spark.