Implicit Q8_1 quantization for matrix multiplications? #1288

Answered by JohannesGaessler
tlemo asked this question in Q&A

Because CUDA provides the __dp4a instruction for per-byte dot products, as well as tensor core instructions for int8 matrix multiplications.

Answer selected by tlemo