Answer Posted / Sarvesh Raghuvanshi
Data flow in Apache Flume follows this sequence: 1. Events are generated by external sources such as log files, network streams, or spooling directories. 2. A Flume Source within the agent receives these events and places them on a Channel. 3. The Channel acts as a transactional buffer, holding events until the Sink consumes them (the Sink drains the channel continuously; the channel's capacity setting is only an upper bound on how many events it can buffer, not a trigger for delivery). 4. The Sink takes events from the Channel and writes or forwards them to their final destination, such as HDFS, HBase, Kafka, or the Source of a downstream agent.
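The Source → Channel → Sink pipeline above can be sketched as a minimal agent configuration. This is an illustrative example, not a production setup: the agent name `a1`, the spool directory path, and the HDFS path are all placeholder assumptions; the component types (`spooldir`, `memory`, `hdfs`) are standard Flume types.

```properties
# Hypothetical agent "a1": reads files dropped into a spool
# directory and delivers the events to HDFS.
a1.sources  = r1
a1.channels = c1
a1.sinks    = k1

# Source: watches a local directory for new files (path is a placeholder)
a1.sources.r1.type     = spooldir
a1.sources.r1.spoolDir = /var/log/incoming
a1.sources.r1.channels = c1

# Channel: in-memory buffer; capacity caps how many events it can
# hold, it does not delay delivery to the sink
a1.channels.c1.type                = memory
a1.channels.c1.capacity            = 10000
a1.channels.c1.transactionCapacity = 1000

# Sink: writes events to HDFS (path is a placeholder)
a1.sinks.k1.type          = hdfs
a1.sinks.k1.channel       = c1
a1.sinks.k1.hdfs.path     = hdfs://namenode:8020/flume/events
a1.sinks.k1.hdfs.fileType = DataStream
```

An agent with this file would be started with something like `flume-ng agent --conf-file a1.conf --name a1`; a durable channel type (e.g. `file`) would be the safer choice when events must survive an agent crash.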