Degrees of freedom in ReluComplementarityFormulation #148
Sakshi21299 asked this question in Q&A (unanswered, 0 replies)
I noticed in the example notebook https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/neural_network_formulations.ipynb that the ReluComplementarityFormulation reports nonzero degrees of freedom. Any thoughts on why that is the case? If we fix the inputs x, shouldn't the output of this formulation be uniquely determined for a trained surrogate model?
Thanks!
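For context, here is the standard complementarity reformulation of a ReLU activation, written as a sketch; OMLT's exact implementation may differ in variable names and in how the complementarity condition is transformed for the solver. For a node with pre-activation $\hat z = w^\top x + b$ and post-activation $z$:

$$
z \ge 0, \qquad z - \hat z \ge 0, \qquad z\,(z - \hat z) = 0
\quad\Longleftrightarrow\quad
0 \le z \;\perp\; (z - \hat z) \ge 0 ,
$$

which implies $z = \max(0, \hat z)$. Note that if the complementarity product is handled by an MPEC-style transformation or relaxation rather than kept as an equality constraint, a simple variables-minus-equality-constraints count can report degrees of freedom even though $z$ is uniquely determined once $x$ is fixed; whether that is what happens in this notebook would need confirmation from the OMLT developers.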