Linearize function explanation #153
eeshaan-ravi-tivari asked this question in Q&A (Unanswered)
Replies: 1 comment, 4 replies
-
Correct, and you can read more about it in https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html#jacobian-vector-products-jvps-aka-forward-mode-autodiff or in the papers about JAX (https://arxiv.org/abs/2105.09469, https://arxiv.org/abs/2204.10923). Torch has https://pytorch.org/docs/stable/generated/torch.autograd.functional.jvp.html, but please note their warning, so perhaps https://github.com/pytorch/functorch#jvp is a better entry point.
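A minimal sketch (not part of the original reply) of what that looks like in practice, assuming the functorch jvp linked above; the function f and the inputs below are made up for illustration. Unlike jax.linearize, this re-runs the forward pass on every call rather than caching the linearization point:

```python
import torch
from functorch import jvp  # on newer PyTorch versions this lives at torch.func.jvp

def f(x):
    return torch.sin(x) * x

x = torch.tensor([0.5, 1.0, 2.0])

def f_lin(v):
    # Forward-mode AD: push the tangent v through f at the point x,
    # returning J_f(x) @ v without materializing the full Jacobian.
    _, tangent_out = jvp(f, (x,), (v,))
    return tangent_out

y = f(x)
print(y, f_lin(torch.ones_like(x)))
```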
-
Can someone explain exactly how the linearize() function works: what does it take as inputs, and what and how does it calculate the thing it is supposed to return, i.e. the first-order Taylor approximation of the function? An example answering all of this in detail would be very helpful. Also, if I want to replicate its behaviour in PyTorch, how can I do that?
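A minimal sketch (not from the original discussion, with a made-up function f) of what jax.linearize returns: it evaluates f at the given point once and gives back a linear map f_jvp, so that y + f_jvp(v) is the first-order Taylor approximation asked about:

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x

x = jnp.array([0.5, 1.0, 2.0])

# linearize evaluates f at x once and returns (f(x), f_jvp), where
# f_jvp is the linear map v -> J_f(x) @ v with the forward pass
# already done and its residuals cached.
y, f_jvp = jax.linearize(f, x)

v = 1e-3 * jnp.ones_like(x)                        # a small perturbation / tangent vector
taylor = y + f_jvp(v)                              # first-order Taylor approximation at x
print(jnp.allclose(taylor, f(x + v), atol=1e-5))   # close for small v
```

Calling f_jvp repeatedly with different tangents reuses the residuals from that single forward pass, which is the main difference from calling jax.jvp from scratch each time.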