Did you maintain the Hadoop cluster in-house, or did you use Hadoop in the cloud?
No answer has been posted for this question yet.
Why do we use HDFS for applications with large data sets but not when there are a lot of small files?
How to change the replication factor of files already stored in HDFS?
What are the different methods to run Spark over Apache Hadoop?
What is the HDFS block size, and what did you choose in your project?
What is a task instance in Hadoop? Where does it run?
What is the process for changing files at arbitrary locations in HDFS?
How is security achieved in Apache Hadoop?
How to exit the vi editor?
What are column families? What happens if you alter the block size of a ColumnFamily on an already populated database?
What are Hadoop's three configuration files?
On what basis does the NameNode decide which DataNode to write to?
How is Hadoop different from other data processing tools?
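For the configuration-files question above, the three files usually meant are `core-site.xml`, `hdfs-site.xml`, and `mapred-site.xml` (modern clusters also add `yarn-site.xml`). All three use the same property format; a minimal `core-site.xml` sketch, with an example hostname, might look like:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Default filesystem URI; "namenode-host" is a placeholder -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>
```

`hdfs-site.xml` holds HDFS-specific settings such as `dfs.replication`, and `mapred-site.xml` holds MapReduce settings such as `mapreduce.framework.name`.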
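For the replication-factor question above, the standard answer is the `hdfs dfs -setrep` command, which changes the replication factor of files that are already in HDFS (the path below is a hypothetical example; running this requires a live cluster):

```shell
# Set the replication factor to 2 for an existing file.
# -w waits until the re-replication actually completes.
hdfs dfs -setrep -w 2 /user/data/events.log

# -R applies the change recursively to every file under a directory.
hdfs dfs -setrep -R 3 /user/data/
```

Note that `-setrep` affects only existing files; the default for newly written files comes from the `dfs.replication` property in `hdfs-site.xml`.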