How does Spark run on Hadoop?
Answer / Manvendra Kumar
Spark can run on top of a Hadoop cluster by using YARN (Yet Another Resource Negotiator) as the resource manager. In this setup, YARN manages resources for both Hadoop MapReduce and Spark jobs. When a Spark job is submitted to the cluster, an ApplicationMaster negotiates resources with the YARN ResourceManager, and Spark executors are launched in YARN containers on the worker nodes to run the job's tasks.
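As a sketch of what this looks like in practice, a Spark job can be submitted to YARN with spark-submit. The application JAR, class name, and resource sizes below are placeholders, not values from the original answer:

```shell
# Submit a Spark application to a Hadoop cluster via YARN.
# --master yarn          : use YARN as the cluster manager
# --deploy-mode cluster  : run the driver inside a YARN container
#                          (use "client" to keep the driver on the local machine)
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --num-executors 4 \
  --executor-memory 2g \
  --executor-cores 2 \
  my-app.jar
```

This requires HADOOP_CONF_DIR (or YARN_CONF_DIR) to point at the cluster's configuration so Spark can locate the ResourceManager.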
If certain data will be reused across multiple transformations, what can be done to improve performance?
What is the difference between Spark and Scala?
What is a cluster manager in Spark?
Does Spark use Java?
List the advantages of the Parquet file format in Apache Spark.
List the functions of Spark SQL.
What is a Sparse Vector?
Which languages does Spark support?
How is Spark SQL different from HQL and SQL?
What is "GraphX" in Spark?
What database does Spark use?
How does YARN work with Spark?