What is the difference between Spark and MapReduce?
Answer / Navneet Sahu
While both are frameworks for large-scale data processing, Apache Spark differs from MapReduce in several ways. Spark provides a more user-friendly, higher-level API and supports iterative computation efficiently because it can cache intermediate data in memory as RDDs, whereas MapReduce writes every intermediate result to disk between stages. This in-memory execution model, combined with DAG-based scheduling of multi-step jobs, makes Spark considerably faster for iterative and interactive workloads. Spark also unifies batch, streaming, SQL, and machine-learning processing in a single engine.
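To illustrate why Spark's API is considered more user-friendly, here is a minimal pure-Python sketch (not actual Spark code, and the helper names `flat_map`, `map_pairs`, and `reduce_by_key` are illustrative stand-ins) of the word-count pattern that Spark expresses in a few chained calls, while classic MapReduce requires separate Mapper and Reducer classes plus job configuration:

```python
def flat_map(records, fn):
    # Spark-style flatMap: apply fn to each record and flatten the results
    return [item for rec in records for item in fn(rec)]

def map_pairs(records, fn):
    # Spark-style map: transform each record into a (key, value) pair
    return [fn(rec) for rec in records]

def reduce_by_key(pairs, fn):
    # Spark-style reduceByKey: merge all values that share a key with fn
    acc = {}
    for key, value in pairs:
        acc[key] = fn(acc[key], value) if key in acc else value
    return acc

lines = ["spark is fast", "spark is simple"]
words = flat_map(lines, lambda line: line.split())
pairs = map_pairs(words, lambda w: (w, 1))
counts = reduce_by_key(pairs, lambda a, b: a + b)
print(counts)  # {'spark': 2, 'is': 2, 'fast': 1, 'simple': 1}
```

In real Spark the same pipeline would be written against an RDD (e.g. `rdd.flatMap(...).map(...).reduceByKey(...)`), with the framework handling partitioning and shuffling across the cluster; the sketch above only shows the single-machine semantics of those operations.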
Explain how you can minimize data transfers when working with Spark.
Can you explain Apache Spark?
What is a Spark standalone cluster?
Explain the popular use cases of Apache Spark.
Explain the sum(), max(), and min() operations in Apache Spark.
What is Spark's reduceByKey?
What is JavaRDD?
Name some internal daemons used in Spark.
What do you know about SchemaRDD?
What is the bottom layer of abstraction in the Spark Streaming API?
How does Spark run on Hadoop?
Name the Spark library which allows reliable file sharing at memory speed across different cluster frameworks.