Does Spark replace Hadoop?
Answer / Sachin Gaurav
Spark's repartition() redistributes the data of an RDD (or DataFrame) across the cluster by performing a full shuffle. It can increase or decrease the number of partitions and is typically used to balance data evenly across nodes for better parallelism; when only reducing the partition count, coalesce() is usually preferred because it avoids a full shuffle.
In which languages does Apache Spark provide APIs?
What is PyArrow?
What is an RDD?
Explain the core components of a distributed Spark application.
What is an RDD in Apache Spark? How are RDDs computed, and in what ways can they be created?
Explain the repartition() operation in Spark.
Explain the first() operation in Apache Spark.
Why is Spark used?
What is meant by RDD lazy evaluation?
Who created Spark?
Can an RDD be shared between SparkContexts?
How are tasks created in Spark?