Answer Posted / Rita Devi
A block in HDFS (Hadoop Distributed File System) is the smallest unit of data storage. Each file is split into blocks, which are distributed (and replicated) across DataNodes for parallel processing and fault tolerance. The default block size was 64 MB in Hadoop 1.x and is 128 MB in Hadoop 2.x and later. A large block size is chosen deliberately: it keeps the NameNode's metadata small (fewer blocks to track per file) and amortizes disk-seek and connection-setup costs over large sequential reads, which suits the big, streaming workloads HDFS is designed for.
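As a minimal sketch of how the block size is configured, assuming a standard Hadoop 2.x+ deployment, the `dfs.blocksize` property can be set in `hdfs-site.xml` (the 134217728 value shown is 128 MB expressed in bytes):

```xml
<!-- hdfs-site.xml: sets the default block size for newly written files.
     Existing files keep the block size they were written with. -->
<configuration>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value> <!-- 128 MB; can also be written as 128m -->
  </property>
</configuration>
```

The setting can also be overridden per write, e.g. `hdfs dfs -D dfs.blocksize=268435456 -put bigfile.dat /data/`, so individual large files can use bigger blocks without changing the cluster-wide default.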