1 parent a68f400 commit 6923658
docs/src/usage/multitasking.md
@@ -140,7 +140,8 @@ end
By using the `Threads.@spawn` macro, the tasks will be scheduled to be run on different CPU
threads. This can be useful when you are calling a lot of operations that "block" in CUDA,
-e.g., memory copies to or from unpinned memory. Generally though, operations that
+e.g., memory copies to or from unpinned memory. The same result will occur when using a
+`Threads.@threads for ... end` block. Generally, though, operations that
synchronize GPU execution (including the call to `synchronize` itself) are implemented in a
way that they yield back to the Julia scheduler, to enable concurrent execution without
requiring the use of different CPU threads.
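
For context, here is a minimal sketch of the `Threads.@threads for ... end` pattern the new sentence refers to. It assumes CUDA.jl is installed, a GPU is available, and Julia was started with multiple threads (e.g. `julia --threads=4`); the function name and array size are illustrative, not part of the documented API.

```julia
using CUDA

# Illustrative sketch: each CPU thread performs a blocking device-to-host copy
# (to unpinned memory), so spreading the work across threads lets the copies overlap.
function blocking_copies_threaded(n)
    results = Vector{Vector{Float32}}(undef, Threads.nthreads())
    Threads.@threads for i in 1:Threads.nthreads()
        d = CUDA.rand(Float32, n)   # generate data on the GPU
        results[i] = Array(d)       # blocking copy back to unpinned host memory
    end
    return results
end

blocking_copies_threaded(2^20)
```

Each loop iteration runs on a different CPU thread, mirroring the `Threads.@spawn` example in the surrounding documentation.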