I would like to extend my sincere gratitude for this impressive work. However, I find myself deeply puzzled by the pretraining process.
In this phase, I observe that the final convolutional output of the DepthAnything model is followed by a ReLU activation. Given the zero-translation and unit-scale normalization described in the paper, the ground truth inherently contains negative values. This raises a critical concern: since ReLU restricts the decoder's outputs to non-negative values, it seems impossible for the model to match such targets.
I would greatly appreciate any clarification or response from the authors regarding this issue. Thank you very much.
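To make the concern concrete, here is a minimal sketch (my own illustration, not DepthAnything's actual code) of the affine-invariant normalization as the paper describes it: subtract the per-image median (zero translation) and divide by the mean absolute deviation (unit scale). Roughly half of the normalized ground-truth values come out negative, which a ReLU-terminated head can never produce:

```python
import numpy as np

def normalize(d):
    """Affine-invariant normalization: zero translation, unit scale.
    (Hypothetical helper, written from the paper's description.)"""
    t = np.median(d)                 # zero translation
    s = np.mean(np.abs(d - t))       # unit scale
    return (d - t) / s

depth = np.array([1.0, 2.0, 3.0, 4.0, 10.0])  # toy disparity values
target = normalize(depth)

print(np.min(target) < 0)                 # True: normalized GT has negatives
relu_pred = np.maximum(target, 0.0)       # what a ReLU head can emit at best
print(np.min(relu_pred) >= 0)             # True: ReLU output is non-negative
```

So unless the prediction is itself normalized the same way before the loss is computed, the ReLU head appears unable to reach the negative part of the target range.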