Replies: 3 comments 2 replies
-
Sorry for the delayed response!
Wgpu is the most portable backend, but not the fastest. It should still be performing much better than the numbers reported here, though. Are you sure wgpu is detecting your GPU correctly and not falling back to the CPU? On my machine (laptop with an RTX 4050), I get the following training stats on wgpu and tch-gpu: [stats tables not captured in this export]
Also, the initial iter/sec readings might be skewed by autotune when using wgpu.
-
I also tried the vulkan and cuda features, but they behave basically the same: same speed, same GPU index displayed, and same GPU load. My system does have an integrated AMD GPU (built into the Ryzen 9), but I believe it is not in use.
-
Tried the text-classification examples with different backends, and found that wgpu trains at about 0.25 iters/sec, the CPU (Ryzen 9) at about 1.3 iters/sec, and tch-gpu at about 15-17 iters/sec.
Is this expected? Tried with different env vars, but no luck.
Also, to compile db-pedia-train with wgpu, you need to specify:
#![recursion_limit = "256"]
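For reference, `recursion_limit` is a crate-level inner attribute, so it has to sit at the very top of the crate root (`main.rs` for an example binary), before any `use` statements:

```rust
#![recursion_limit = "256"]

// ... imports and the rest of the db-pedia-train example follow here.
fn main() {
    println!("compiles with the raised recursion limit");
}
```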