Is it possible to run Apache Spark without Hadoop?
Answer / Bijay Kumar
Yes, it is possible to run Apache Spark without Hadoop. Spark ships with its own standalone cluster manager and can also run under Mesos, YARN, or Kubernetes, so Hadoop is not a prerequisite. Spark does need some storage layer for input and output, but that can be the local filesystem or an object store such as Amazon S3 rather than HDFS. Running Spark on Hadoop is still common because it gives seamless access to data already stored in HDFS and lets YARN manage cluster resources.
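As a minimal sketch of the point above: on a machine with only a plain Spark download (no Hadoop installed), you can start the standalone master and a worker, then run a job against a local file. Paths and the host name here are illustrative, not prescribed.

```shell
# Start Spark's own standalone cluster manager -- no Hadoop/YARN involved.
# Assumes Spark is unpacked at /opt/spark (an illustrative path).
/opt/spark/sbin/start-master.sh
/opt/spark/sbin/start-worker.sh spark://localhost:7077

# Run a shell against the standalone master, reading from the local
# filesystem instead of HDFS (note the file:// scheme, not hdfs://).
/opt/spark/bin/spark-shell --master spark://localhost:7077
# scala> spark.read.textFile("file:///tmp/input.txt").count()
```

For a quick single-machine test you can skip the cluster entirely and use `--master local[*]`, which runs everything in one JVM and likewise needs no Hadoop services.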