Can Spark work without Hadoop?
Answer / Sonu Vaishya
Yes, Spark can work without Hadoop. Although Spark was originally designed to run on top of Hadoop, using YARN for resource management and HDFS for storage, neither is required. Spark ships with its own cluster manager (Spark Standalone) and can also run in local mode, and it can read data from sources other than HDFS, such as the local file system, Amazon S3, or Cassandra. Note that RDDs are Spark's in-memory data abstraction for distributed computation, not a storage system; persistent storage still comes from whichever file system or data store you point Spark at.
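As a minimal sketch of running Spark with no Hadoop installation, the standalone scripts shipped in a standard Spark binary distribution can bring up a master and worker, and a job can read input via a `file://` URL instead of HDFS (the input path here is hypothetical):

```shell
# Start the standalone master; it logs a spark://HOST:7077 URL
./sbin/start-master.sh

# Start a worker and attach it to that master
./sbin/start-worker.sh spark://localhost:7077

# Submit the bundled word-count example, reading from the local
# file system (file:// needs no HDFS or Hadoop cluster)
./bin/spark-submit \
  --master spark://localhost:7077 \
  examples/src/main/python/wordcount.py file:///tmp/input.txt
```

In local mode the first two steps can be skipped entirely by passing `--master "local[*]"` to `spark-submit`.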
Explain Spark SQL caching and uncaching?
Is cache an action in spark?
What is spark vectorization?
Explain Accumulator in Spark?
What is the difference between spark and apache spark?
What is executor spark?
How do I install spark?
How can you remove elements with a key present in another RDD in Apache Spark?
What is pipelined rdd?
What is the default spark executor memory?
Explain lineage graph
Why spark is used?