Do we need Hadoop for Spark?
Answer / Om Prakash Divakar
"No. Apache Spark can run in standalone mode or on Hadoop YARN (Yet Another Resource Negotiator); it does not depend on Hadoop MapReduce. However, running Spark on a Hadoop cluster lets you leverage HDFS, the Hadoop distributed file system, and work with data that is already stored there."
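The distinction in the answer can be sketched with `spark-submit` invocations. This is an illustrative sketch only: the application file `my_job.py`, the namenode host `namenode`, and the paths are made-up examples, not values from the original answer.

```shell
# Spark WITHOUT Hadoop: standalone local mode, no HDFS and no YARN.
# Data is read from and written to the local filesystem.
spark-submit --master "local[4]" my_job.py

# Spark ON an existing Hadoop cluster: submit the same job to YARN,
# where it can read data already sitting in HDFS.
spark-submit --master yarn --deploy-mode cluster my_job.py

# Inside the job, the storage layer is selected by the URI scheme:
#   spark.read.text("file:///data/input.txt")                # local file
#   spark.read.text("hdfs://namenode:8020/data/input.txt")   # HDFS path
```

The application code itself is unchanged between the two modes; only the `--master` setting and the data URIs decide whether Hadoop components are involved.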
How is Spark used in Hadoop?
What is Apache Spark used for?
In how many ways can we create an RDD?
What is meant by Spark in big data?
What is the biggest shortcoming of Spark?
What is meant by Transformation? Give some examples.
What can skew the mean?
What are the different levels of persistence in Spark?
List the functions of Spark SQL.
Explain the Spark Driver.
What is SparkContext in Apache Spark?
Explain SchemaRDD.