Answer Posted / Arvind Saini
The Leaky ReLU (Rectified Linear Unit) function is a variation of the standard ReLU that aims to address the "dying ReLU" problem, where neurons that receive only negative inputs output a zero gradient and stop learning. Instead of outputting 0 for negative inputs as the standard ReLU does, the Leaky ReLU outputs a small negative value proportional to the input. The formula is: f(x) = x for x > 0, and f(x) = ax for x ≤ 0, which can be written compactly as f(x) = max(ax, x) for 0 < a < 1. Here, 'a' is the slope of the line for negative values, usually a small constant such as 0.01; there is no bias term in the activation itself.
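The piecewise definition above can be sketched in NumPy as follows (a minimal illustration; the default slope of 0.01 is the commonly used value, not mandated by the definition):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: returns x for x > 0, alpha * x otherwise."""
    return np.where(x > 0, x, alpha * x)

# Negative inputs are scaled by alpha instead of being zeroed out.
print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # [-0.02  0.    3.  ]
```

Because the negative branch has slope `alpha` rather than 0, gradients still flow for negative activations during backpropagation.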