Answer Posted / Abhay Kumar Roy
Talend does not provide a way to hand-write native MapReduce jobs the way you would when coding directly against the Hadoop API. However, you can use Talend Big Data Integration to build ETL (Extract, Transform, Load) jobs that process large datasets stored on the Hadoop Distributed File System (HDFS), and you can apply the MapReduce paradigm by implementing custom mapper and reducer logic inside tJavaRow components.
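To make the mapper/reducer split concrete, here is a minimal plain-Java sketch of the classic word-count example, structured as separate map and reduce phases. This is illustrative only, not actual Talend component code: the class and method names are invented for this sketch, but the division of logic is what you would place into separate tJavaRow components.

```java
import java.util.*;

// Illustrative sketch (not Talend component code): word count expressed
// as distinct map and reduce phases, mirroring the MapReduce paradigm.
public class WordCountSketch {

    // Map phase: emit a (word, 1) pair for every word in the input line.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) {
                pairs.add(new AbstractMap.SimpleEntry<>(word, 1));
            }
        }
        return pairs;
    }

    // Reduce phase: group pairs by key and sum the counts.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> pair : pairs) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Simulate the pipeline on one input line.
        List<Map.Entry<String, Integer>> intermediate =
                map("to be or not to be");
        Map<String, Integer> counts = reduce(intermediate);
        System.out.println(counts); // {be=2, not=1, or=1, to=2}
    }
}
```

In a real Talend job the framework would handle reading from HDFS, shuffling intermediate pairs, and writing results back; only the per-record logic above would live in your components.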