What are the different levels of persistence in Spark?
Answer / Dharmveer Singh Saini
Spark lets you persist an RDD at several storage levels, chosen via RDD.persist(StorageLevel); RDD.cache() is shorthand for the default level, MEMORY_ONLY. The main levels are: MEMORY_ONLY (store deserialized partitions in memory; recompute any that do not fit), MEMORY_AND_DISK (spill partitions that do not fit in memory to disk), MEMORY_ONLY_SER and MEMORY_AND_DISK_SER (store partitions in serialized form to save memory, at extra CPU cost), DISK_ONLY (keep partitions only on disk), and OFF_HEAP (store serialized data in off-heap memory). Appending _2 (e.g. MEMORY_ONLY_2) replicates each partition on two nodes. Checkpointing is a separate mechanism, not a storage level: after setting a checkpoint directory with sc.setCheckpointDir(...), calling rdd.checkpoint() writes the RDD to reliable storage and truncates its lineage.
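The levels above can be sketched in Scala. This is a minimal sketch, not a complete program: it assumes an existing SparkContext named sc, and the HDFS paths are hypothetical placeholders. Note that an RDD can have only one storage level at a time, so each level is shown on a separate RDD.

```scala
import org.apache.spark.storage.StorageLevel

// Assumes `sc` is an existing SparkContext; paths are placeholders.
val a = sc.textFile("hdfs:///data/input")
a.cache()                                  // shorthand for persist(StorageLevel.MEMORY_ONLY)

val b = sc.textFile("hdfs:///data/input")
b.persist(StorageLevel.MEMORY_AND_DISK)    // spill partitions that don't fit in memory to disk

val c = sc.textFile("hdfs:///data/input")
c.persist(StorageLevel.DISK_ONLY)          // keep partitions only on disk

// Checkpointing is separate from persistence: it truncates the lineage
// by writing the RDD to reliable storage on the next action.
sc.setCheckpointDir("hdfs:///checkpoints")
val d = b.map(_.length)
d.checkpoint()
d.count()                                  // action triggers the checkpoint write
```

Calling persist a second time with a different level on the same RDD throws an UnsupportedOperationException, which is why the sketch uses separate RDDs.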
Explain the different cluster managers in Apache Spark.
What is CoarseGrainedExecutorBackend?
What is Hadoop technology?
Explain the concept of a Resilient Distributed Dataset (RDD).
Is Spark Streaming real time?
Is spark a special attack?
Explain the lineage graph.
Do we need to install Spark on all nodes?
Is RDD type safe?
What are the actions in Spark?
What are the ways to create RDDs in Apache Spark? Explain.
What is Hive on Spark?