How does Spark achieve fault tolerance?
Answer / Apurav Garg
Apache Spark achieves fault tolerance through RDD lineage rather than by replicating data across nodes. Each RDD records the deterministic sequence of transformations (its lineage) used to derive it from stable storage such as HDFS. If a partition is lost because a node fails, Spark re-executes the relevant part of the lineage on another node to recompute just that partition, so no extra copies of intermediate data are needed. For jobs with very long lineage chains (for example, Spark Streaming workloads), checkpointing can persist intermediate state to reliable storage so recovery does not have to replay the entire chain from the beginning.
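The lineage idea can be sketched in plain Python. This is an illustration of the concept only, not Spark's API or implementation; the class name `LineageRDD` and its methods are made up for the example:

```python
# Sketch of lineage-based recovery: an RDD-like object remembers how it
# was derived from stable source data, so a lost result can always be
# recomputed by replaying the transformations instead of being replicated.

class LineageRDD:
    """Records the chain of transformations needed to rebuild its data."""

    def __init__(self, source, transforms=None):
        self.source = source                 # stable input data
        self.transforms = transforms or []   # recorded lineage

    def map(self, fn):
        # Transformations are lazy: they only append a step to the lineage.
        return LineageRDD(self.source, self.transforms + [("map", fn)])

    def filter(self, pred):
        return LineageRDD(self.source, self.transforms + [("filter", pred)])

    def compute(self):
        # An action replays the lineage from the stable source; after a
        # failure, the same replay recomputes the lost result.
        data = list(self.source)
        for kind, fn in self.transforms:
            if kind == "map":
                data = [fn(x) for x in data]
            else:
                data = [x for x in data if fn(x)]
        return data

rdd = LineageRDD(range(5)).map(lambda x: x * 2).filter(lambda x: x > 2)
print(rdd.compute())  # prints [4, 6, 8], recomputable at any time
```

In real Spark you can inspect an RDD's recorded lineage with `rdd.toDebugString()`, which prints the chain of parent RDDs that would be replayed on failure.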