How does Apache Spark work?
Answer / Abhishek Kumar Rai
Apache Spark works by building resilient distributed datasets (RDDs): immutable, partitioned collections of data spread across a cluster. The RDD is Spark's basic processing unit. Transformations (such as map and filter) are applied lazily to produce new RDDs, and actions (such as collect and count) trigger actual computation and return results. Spark provides high-level APIs in Scala, Java, Python, and R.
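A minimal plain-Python sketch of the transformation/action model described above; this is an illustration of the idea, not the Spark API itself (real code would use `pyspark`, e.g. `sc.parallelize(data).map(f).collect()`). Transformations only record a plan; the action finally executes it:

```python
# Plain-Python sketch of Spark's lazy transformation/action model.
# Transformations (map, filter) only build up a plan of work; nothing
# runs until an action (collect) is called. MiniRDD is a hypothetical
# stand-in for a Spark RDD, used purely for illustration.

class MiniRDD:
    def __init__(self, data, plan=None):
        self._data = data            # source data (a plain list here)
        self._plan = plan or []      # recorded transformations, not yet run

    def map(self, f):
        # Transformation: returns a new MiniRDD with an extended plan.
        return MiniRDD(self._data, self._plan + [("map", f)])

    def filter(self, pred):
        # Transformation: also lazy, also returns a new (immutable) dataset.
        return MiniRDD(self._data, self._plan + [("filter", pred)])

    def collect(self):
        # Action: only now is the recorded plan actually executed.
        out = self._data
        for kind, fn in self._plan:
            if kind == "map":
                out = [fn(x) for x in out]
            else:  # filter
                out = [x for x in out if fn(x)]
        return out

rdd = MiniRDD([1, 2, 3, 4])
squares_of_evens = rdd.filter(lambda x: x % 2 == 0).map(lambda x: x * x)
print(squares_of_evens.collect())  # [4, 16]
```

Note that chaining `filter` and `map` does no work by itself; the list comprehension loops run only inside `collect`, mirroring how Spark defers execution until an action is invoked.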
How is Spark different from Hadoop?
What is the difference between map and flatMap?
How is streaming implemented in Spark?
What are the components of Spark?
What is the task of the Spark Engine?
What is data skew and how do you fix it?
How does Apache Spark handle accumulated metadata?
What is lazy evaluation in Spark?
What is the difference between Spark and Python?
What is Spark lineage?
What are the disadvantages of using Spark?
What is a DAG (directed acyclic graph)?