How many partitions are created by default in Apache Spark RDD?
Answer / Chhavi Singh
An RDD always has at least one partition; the default number depends on how the RDD is created. When you create an RDD with sc.parallelize(), Spark uses spark.default.parallelism, which typically equals the total number of cores across the cluster (or the number of local cores in local mode). When you read a file from a storage system like HDFS, Spark creates roughly one partition per HDFS block (128 MB by default). You can also pass an explicit partition count to parallelize() or textFile(), and you can check the result with rdd.getNumPartitions().
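As a rough illustration of the HDFS case, the partition count for a file read is approximately the file size divided by the block size. The sketch below is a simplified, hypothetical model (the function name is mine, and Hadoop's real input-split computation handles min/max split sizes and remainder slack); in actual PySpark code you would simply call rdd.getNumPartitions().

```python
import math

def hdfs_read_partitions(file_size_bytes: int,
                         block_size_bytes: int = 128 * 1024 * 1024) -> int:
    """Approximate number of partitions Spark creates when reading a
    file from HDFS: roughly one partition per 128 MB HDFS block.
    Simplified model for illustration only."""
    return max(1, math.ceil(file_size_bytes / block_size_bytes))

# A 1 GiB file with the default 128 MiB block size maps to 8 blocks,
# so Spark would create about 8 partitions.
print(hdfs_read_partitions(1024 * 1024 * 1024))  # 8

# A file smaller than one block still yields a single partition.
print(hdfs_read_partitions(10 * 1024 * 1024))  # 1
```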