Open source two-simplicial attention kernels #4445
Conversation
✅ Deploy Preview for pytorch-fbgemm-docs ready!
This pull request was exported from Phabricator. Differential Revision: D77756574
Summary:
Pull Request resolved: pytorch#4445
X-link: facebookresearch/FBGEMM#1508
Kernels for two-simplicial attention of the form:
L = Q @ (K1 X K2)
P = softmax(L, axis=[-1, -2])
O = P @ (V1 X V2)
Reviewed By: jiecaoyu, sijiac
Differential Revision: D77756574
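For readers skimming the summary, the three formulas say that each query attends jointly over two key/value sequences: L holds one logit per (query, key1, key2) triple, the softmax normalizes over both key axes at once, and the output contracts the joint probabilities against the pairwise product of the two value tensors. Below is a minimal, unfused PyTorch sketch of that computation. The function name, tensor shapes, and 1/sqrt(D) scaling are illustrative assumptions, not taken from this PR's kernels.

```python
import torch

def two_simplicial_attention_ref(q, k1, k2, v1, v2):
    """Reference (unfused) two-simplicial attention.

    Hypothetical shapes, for illustration only:
      q:       [B, H, N, D]   queries
      k1, k2:  [B, H, M, D]   the two key tensors
      v1, v2:  [B, H, M, D]   the two value tensors
    Returns:   [B, H, N, D]
    """
    # L = Q @ (K1 X K2): one logit per (query i, key1 j, key2 k) triple,
    # contracting the feature dimension d.
    logits = torch.einsum("bhid,bhjd,bhkd->bhijk", q, k1, k2)
    logits = logits * q.shape[-1] ** -0.5  # assumed scaling, as in standard attention

    # P = softmax(L, axis=[-1, -2]): normalize jointly over both key axes
    # by flattening them, applying softmax, and restoring the shape.
    b, h, n, m1, m2 = logits.shape
    p = logits.reshape(b, h, n, m1 * m2).softmax(dim=-1).reshape(b, h, n, m1, m2)

    # O = P @ (V1 X V2): contract the joint probabilities against the
    # pairwise product of the two value tensors.
    return torch.einsum("bhijk,bhjd,bhkd->bhid", p, v1, v2)

# Smoke test with small sizes. Note the [B, H, N, M, M] logits tensor makes
# this reference O(N * M^2) in memory, which is why fused kernels matter here.
q = torch.randn(2, 4, 16, 64)
k1, k2 = torch.randn(2, 4, 32, 64), torch.randn(2, 4, 32, 64)
v1, v2 = torch.randn(2, 4, 32, 64), torch.randn(2, 4, 32, 64)
out = two_simplicial_attention_ref(q, k1, k2, v1, v2)
assert out.shape == (2, 4, 16, 64)
```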