Answer Posted / Gama Yadav
A common use case for Apache Flume is collecting log data from distributed systems and moving it into the Hadoop Distributed File System (HDFS) for analysis. With Flume, organizations can efficiently gather large volumes of log data generated by applications running across many servers. Once the data is stored in HDFS, it can be processed with Apache Hadoop MapReduce or other big data frameworks for insights and reporting.
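As a minimal sketch, a Flume agent for this use case wires together a source, a channel, and a sink in a properties file. The paths, host names, and agent name below (`agent1`, `/var/log/app/app.log`, `hdfs://namenode:8020`) are hypothetical placeholders, not values from the original answer:

```properties
# Hypothetical Flume agent: tail an application log and land events in HDFS.
# Name the components on this agent.
agent1.sources = logsrc
agent1.channels = memch
agent1.sinks = hdfssink

# Source: tail a local log file (exec source is simple but not reliable
# across restarts; Taildir or Spooling Directory sources are alternatives).
agent1.sources.logsrc.type = exec
agent1.sources.logsrc.command = tail -F /var/log/app/app.log
agent1.sources.logsrc.channels = memch

# Channel: buffer events in memory between source and sink.
agent1.channels.memch.type = memory
agent1.channels.memch.capacity = 10000
agent1.channels.memch.transactionCapacity = 1000

# Sink: write events into HDFS, partitioned by date via escape sequences.
agent1.sinks.hdfssink.type = hdfs
agent1.sinks.hdfssink.channel = memch
agent1.sinks.hdfssink.hdfs.path = hdfs://namenode:8020/logs/app/%Y-%m-%d
agent1.sinks.hdfssink.hdfs.fileType = DataStream
agent1.sinks.hdfssink.hdfs.rollInterval = 300
agent1.sinks.hdfssink.hdfs.useLocalTimeStamp = true
```

The agent could then be started with something like `flume-ng agent --name agent1 --conf-file flume.conf`, after which the date-partitioned files under `/logs/app/` are ready for MapReduce or other processing.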