To use Spark on an existing Hadoop cluster, do we need to install Spark on all of the Hadoop nodes?
Answer / Nikhil Verma
No, you don't have to install Spark on every node. When you submit applications through YARN (Hadoop's resource manager), Spark only needs to be installed on the client or edge node you submit from; YARN distributes the Spark runtime and your application's jars to the worker nodes for each job. What you do need is for the submitting machine to point at the cluster's Hadoop configuration (HADOOP_CONF_DIR), and for any extra dependencies to be shipped with the application or already be available on the cluster.
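As a rough illustration, here is a minimal sketch of a job that could be submitted to such a cluster with `spark-submit --master yarn --deploy-mode cluster`, assuming HADOOP_CONF_DIR is set on the submitting machine; the object name and app name are made up for the example:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical smoke-test job: it only needs Spark installed on the
// machine it is submitted from. YARN ships the Spark runtime and this
// application's jar to whatever worker nodes run the executors.
object YarnSmokeTest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("yarn-smoke-test")
      .getOrCreate()

    // A trivial distributed computation to confirm that executors
    // actually launched on the cluster's worker nodes.
    val sum = spark.sparkContext.parallelize(1 to 1000).sum()
    println(s"sum = $sum")

    spark.stop()
  }
}
```

To avoid re-uploading the Spark runtime on every submission, the Spark jars can be staged on HDFS once and referenced via the spark.yarn.archive (or spark.yarn.jars) configuration property.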