
[QST] How to do concurrent GEMMs ? #1418

jomivaan opened this issue Mar 23, 2024 · 8 comments

Comments

@jomivaan

Hello,
Is it possible to launch concurrent GEMMs from the host (resorting to extra CPU threads only as a last resort)? I have used streams, but the GEMMs run sequentially rather than concurrently, which is what I expected given how the code is written. Is there a way to make them overlap, or is there a more efficient approach using another kind of GEMM instead of the basic template? Thank you; below is the code I am using to call the GEMMs.

for (int i = 0; i < 4; i++) {
  for (int j = 0; j < 4; j++) {
    // One GEMM per (i, j) pair, each enqueued on its own stream
    CutlassGemm(m, n, k, W1[i], m, Input_8bit_pointer[j], m,
                Interm1[i * 4 + j], m, cuda_streams[i][j], i, j);
  }
}
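
For reference, a minimal sketch of what such a wrapper could look like (this is not the poster's actual CutlassGemm; fp32 column-major operands and the CUTLASS 2.x basic device GEMM are assumed). It shows where the CUDA stream is passed: the launch itself is asynchronous with respect to the host, so back-to-back calls only serialize if the wrapper synchronizes or if the grids cannot overlap on the device.

#include "cutlass/gemm/device/gemm.h"

// Hypothetical wrapper: enqueues one SGEMM on the given stream.
cutlass::Status CutlassGemmSketch(int m, int n, int k,
                                  float const *A, int lda,
                                  float const *B, int ldb,
                                  float *C, int ldc,
                                  cudaStream_t stream) {
  using Gemm = cutlass::gemm::device::Gemm<
      float, cutlass::layout::ColumnMajor,   // A
      float, cutlass::layout::ColumnMajor,   // B
      float, cutlass::layout::ColumnMajor>;  // C/D

  Gemm gemm_op;
  Gemm::Arguments args({m, n, k},
                       {A, lda}, {B, ldb},
                       {C, ldc}, {C, ldc},   // C is read and written in place
                       {1.0f, 0.0f});        // alpha, beta

  // The third parameter of operator() is the stream; the call returns as soon
  // as the kernel is enqueued, it does not wait for completion.
  return gemm_op(args, /*workspace=*/nullptr, stream);
}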
@thakkarV
Collaborator

Whether multiple grids from different streams run concurrently or not is a property of their occupancy, among other things. What are you trying to achieve? You had another thread a few days ago where it seemed like batched GEMM was sufficient for your use case. Generally speaking, horizontal fusions like that will yield the best results. Based on your requirements you can pick batched GEMM, pointer-array batched GEMM, or grouped GEMM, in that order of preference.
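
As a rough illustration of the first option, here is a minimal strided batched GEMM sketch modeled on CUTLASS example 05 (05_batched_gemm); fp32 column-major operands and the CUTLASS 2.x argument order are assumptions, so check the Arguments signature of the version you build against.

#include "cutlass/gemm/device/gemm_batched.h"

// All batch entries share one problem size; entry i reads/writes the base
// pointers offset by i * batch_stride_*.
using BatchedGemm = cutlass::gemm::device::GemmBatched<
    float, cutlass::layout::ColumnMajor,
    float, cutlass::layout::ColumnMajor,
    float, cutlass::layout::ColumnMajor>;

BatchedGemm batched_op;
cutlass::Status status = batched_op({
    {m, n, k},
    {A, lda}, batch_stride_A,
    {B, ldb}, batch_stride_B,
    {C, ldc}, batch_stride_C,
    {C, ldc}, batch_stride_C,   // output D aliases C here
    {alpha, beta},
    batch_count});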

@jomivaan
Author

Hello,
The other thread was about a different part of the work I am doing. In this case, what I am trying to do is the following, taking as an example 4 matrices A and 4 matrices B (with max{m, n, k} = 256):

A0 x {B0,B1,B2,B3}, A1 x {B0,B1,B2,B3}, A2 x {B0,B1,B2,B3}, A3 x {B0,B1,B2,B3}

Would doing this as 4 batched GEMMs be a good option (by setting the A stride to 0 in each GEMM)?
What I tried to do in the for loop above was to define each of these GEMMs individually. However, since each call to the GEMM is synchronous, I was only doing 16 GEMMs sequentially and not taking full advantage of the GPU's parallelism. Thank you for the help.
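
If strided batched GEMM is used for this pattern, the usual trick (worth verifying against your CUTLASS version; it mirrors the zero-stride broadcast convention of strided batched GEMM in cuBLAS) is to give A a batch stride of 0 so every batch entry reuses the same A, while B and C advance by one full matrix per entry. A sketch reusing the BatchedGemm alias from the earlier snippet, and assuming the four B and C matrices are laid out contiguously; if they are scattered, the pointer-array variant discussed later fits better.

// One call computes A0 x {B0, B1, B2, B3}; repeat per Ai (4 calls total).
cutlass::Status status = batched_op({
    {m, n, k},
    {A0, lda}, /*batch_stride_A=*/0,                 // reuse the same A for every entry
    {B,  ldb}, /*batch_stride_B=*/int64_t(ldb) * n,  // one k x n matrix per entry
    {C,  ldc}, /*batch_stride_C=*/int64_t(ldc) * n,  // one m x n matrix per entry
    {C,  ldc}, int64_t(ldc) * n,
    {alpha, beta},
    /*batch_count=*/4});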

@hwu36
Collaborator

hwu36 commented Apr 18, 2024

If the sizes of A[0-3] are the same and B[0-3] are the same, use batched GEMM; otherwise use grouped GEMM. Your sizes are too small for multiple streams to run at the same time: by the time the 2nd grid is launched, the first one has already finished.
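
A quick back-of-the-envelope check of why such a problem cannot keep the device busy (the 128x128 threadblock tile is an assumption matching the default SGEMM tiling; the SM count is illustrative):

// A 256x256x256 GEMM with 128x128 threadblock tiles launches only
// ceil(256/128) * ceil(256/128) = 4 threadblocks, while a modern data-center
// GPU has on the order of 100 SMs. Each kernel therefore finishes in
// microseconds, so by the time the next grid is launched from another stream,
// the previous one has already completed.
constexpr int m = 256, n = 256, tile_m = 128, tile_n = 128;
constexpr int grid_blocks = ((m + tile_m - 1) / tile_m) * ((n + tile_n - 1) / tile_n);  // = 4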

@jomivaan
Author

I ended up using GemmArray, since that allowed me to pass pointers instead of allocating the same matrices multiple times. I did try using streams, and the reported behaviour was that they did not run concurrently. The Nvidia profiler reports only 50% compute throughput for the kernel; is that normal for this kind of GEMM implementation?
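
For later readers, a minimal pointer-array sketch along the lines of what GemmArray expects, again modeled on CUTLASS example 05 with fp32 column-major operands assumed (check the exact Arguments order for your CUTLASS version). Here ptr_A, ptr_B, and ptr_C are device arrays of device pointers, one entry per batch element, so nothing has to be copied into a regular strided layout:

#include "cutlass/gemm/device/gemm_array.h"

// Every batch entry has the same {m, n, k} but may live at an arbitrary
// address; the kernel dereferences the pointer arrays per batch index.
using ArrayGemm = cutlass::gemm::device::GemmArray<
    float, cutlass::layout::ColumnMajor,
    float, cutlass::layout::ColumnMajor,
    float, cutlass::layout::ColumnMajor>;

ArrayGemm array_op;
cutlass::Status status = array_op({
    {m, n, k},
    ptr_A, lda,   // float const * const *ptr_A (device array of pointers)
    ptr_B, ldb,
    ptr_C, ldc,
    ptr_C, ldc,   // output D aliases C
    {alpha, beta},
    batch_count});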

@hwu36
Collaborator

hwu36 commented Apr 19, 2024

Your kernels are too tiny; they are memory bound. 50% sounds reasonable.

@jomivaan
Author

jomivaan commented Apr 25, 2024

Even when using a batch size on the order of hundreds? Thank you.


This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.


This issue has been labeled inactive-90d due to no recent activity in the past 90 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed.
