
[WIP] Provide an efficient way for toPixels #5914

Open
wants to merge 13 commits into master

Conversation

qjia7 (Contributor) commented on Dec 2, 2021:

This PR is still in draft. I just want to showcase how the webgpu backend writes the tensor image data to a canvas.

Ideally, we just need a compute pass to read from the input tensor's GPUBuffer and write to the webgpu canvas's swap-chain texture. Due to a Chrome bug, I had to split it into two passes: 1) a compute pass that reads from the input tensor's GPUBuffer and writes to an intermediate texture, and 2) a draw of the intermediate texture to the canvas. But maybe I can combine them into one render pass.
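
For illustration, here is a minimal sketch of that two-pass structure using current WebGPU API names; the pipelines and bind groups are assumed to be created elsewhere, and this is not the PR's actual code:

  // A minimal sketch of the two-pass structure described above. Pipeline and
  // bind-group creation are omitted; the arguments are assumed to exist.
  function tensorToCanvas(
      device: GPUDevice, ctx: GPUCanvasContext,
      computePipeline: GPUComputePipeline, computeBindGroup: GPUBindGroup,
      renderPipeline: GPURenderPipeline, renderBindGroup: GPUBindGroup,
      width: number, height: number) {
    const encoder = device.createCommandEncoder();

    // Pass 1: read the tensor's GPUBuffer (bound in computeBindGroup) and
    // write the pixels to an intermediate storage texture.
    const computePass = encoder.beginComputePass();
    computePass.setPipeline(computePipeline);
    computePass.setBindGroup(0, computeBindGroup);
    computePass.dispatchWorkgroups(Math.ceil(width / 8), Math.ceil(height / 8));
    computePass.end();

    // Pass 2: sample the intermediate texture (bound in renderBindGroup) and
    // draw a full-screen quad to the canvas's current swap-chain texture.
    const renderPass = encoder.beginRenderPass({
      colorAttachments: [{
        view: ctx.getCurrentTexture().createView(),
        loadOp: 'clear',
        storeOp: 'store',
      }],
    });
    renderPass.setPipeline(renderPipeline);
    renderPass.setBindGroup(0, renderBindGroup);
    renderPass.draw(6);  // two triangles covering the canvas
    renderPass.end();

    device.queue.submit([encoder.finish()]);
  }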

Currently, two rank-3 tensor cases with 4 channels still fail. I am investigating the reason. In the final version, the new API I have in mind would look like:

  function kernelName(img: Tensor2D|Tensor3D|TensorLike): HTMLCanvasElement|OffscreenCanvas

The webgpu backend will return a canvas with a webgpu context; the webgl backend will return a canvas with a webgl context.
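
As a rough usage sketch of that shape (kernelName is the placeholder above; pixelData, height, and width are assumed inputs):

  import * as tf from '@tensorflow/tfjs';

  // Hypothetical usage; `kernelName` is the placeholder from above, not a real API.
  const img = tf.tensor3d(pixelData, [height, width, 4], 'int32');
  const canvas = kernelName(img);  // webgpu backend: a webgpu context canvas
  document.body.appendChild(canvas as HTMLCanvasElement);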


qjia7 (Contributor, Author) commented on Dec 2, 2021:

All the changes are still based on the existing `toPixels`. I just want to test the buffer -> canvas path. For an efficient path, we can add a new kernel that directly returns the canvas.

CC @pyu10055 @lina128

  // (snippet completed for readability; the loadOp/storeOp values are assumed)
  const renderPassDescriptor: GPURenderPassDescriptor = {
    colorAttachments: [{
      view: ctx.getCurrentTexture().createView(),
      loadOp: 'clear',
      storeOp: 'store',
    }],
  };
Collaborator:

Is this drawing to a different webgpu context? Can something similar be done with WebGL?

qjia7 (Contributor, Author):

This webgpu context is configured with the GPUDevice currently in use, so I can draw to any webgpu context canvas as long as it is configured with the same GPU device. The webgpu context needs to be configured as below:

  const gpuContext = canvas.getContext('webgpu');
  gpuContext.configure({
    device: backend.device,
    ...
  });

For webgl, you can't draw to a different webgl context, so we have to adjust the default webgl context's size and draw to it. I think webgl supports adjusting the default framebuffer's size; I will check.
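
For reference, resizing the default framebuffer in WebGL amounts to resizing the canvas and updating the viewport; a general WebGL sketch, not code from this PR:

  // Resizing a WebGL canvas resizes its default framebuffer; the viewport
  // must be updated separately to cover the new size.
  function resizeDefaultFramebuffer(
      canvas: HTMLCanvasElement, gl: WebGLRenderingContext,
      width: number, height: number) {
    canvas.width = width;
    canvas.height = height;
    gl.viewport(0, 0, width, height);
  }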

  const [height, width] = $img.shape.slice(0, 2);

  const outShape = [height, width, 4];
  const program = new ToCanvasProgram(outShape, $img.dtype);
pyu10055 (Collaborator):

Should the caller provide the canvas, given that some platforms do not have the `document` variable?

qjia7 (Contributor, Author):

If the caller provides the canvas, it may need one or two extra copies, and we also need to handle different canvas contexts.
For example, if we directly return the webgpu context canvas and the user wants to use it in a webgl program, they can just call texImage2D with the webgpu context canvas as the source to get a GL texture and continue to use that texture in their program.
But if we let the user pass a canvas and that canvas is a webgl canvas, we need to 1) use texImage2D to get a GL texture from the webgpu context canvas, then 2) draw the GL texture to the canvas the user provided; 3) then, to use this canvas, the user has to call texImage2D again with the canvas they provided to get the GL texture.
If the canvas is a 2d canvas, we need to draw the webgpu context canvas to the 2d canvas.
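
For step 1 above, the upload is plain WebGL; a minimal sketch, assuming `gl` is the user's WebGL context and `webgpuCanvas` is the canvas returned by the backend (texImage2D accepts a canvas element as its pixel source):

  // Upload the contents of the webgpu context canvas into a WebGL texture.
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(
      gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, webgpuCanvas);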

pyu10055 (Collaborator):

I think a canvas can serve both webgl and 2d; only the context is fixed to one or the other?

qjia7 (Contributor, Author):

@pyu10055 Are you suggesting the new kernel API as

  1. function [async] kernelName(img: Tensor2D|Tensor3D|TensorLike): HTMLCanvasElement|OffscreenCanvas

or

  2. function [async] kernelName(img: Tensor2D|Tensor3D|TensorLike, canvas: HTMLCanvasElement|OffscreenCanvas)

For 1, we can return a different canvas context based on the backend (I prefer this one). For 2, once the canvas the user passed has been bound to one context, we can't change it. That's why I said extra copying would be needed for this case.

> Should the caller provide the canvas, given that some platforms do not have the `document` variable?

Can OffscreenCanvas resolve that?

> I think a canvas can serve both webgl and 2d; only the context is fixed to one or the other?

Yes, but once it's fixed to one, we can't change it to another, and we need to do different processing for each.
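
This is easy to verify; once a canvas has handed out one kind of context, a request for another kind returns null:

  const c = document.createElement('canvas');
  const gl = c.getContext('webgl');    // the context is now fixed to webgl
  const ctx2d = c.getContext('2d');    // returns null on the same canvas
  console.log(gl !== null, ctx2d === null);  // true true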

pyu10055 (Collaborator):

I see. I was thinking of option 2. Do we need a 2d or webgl canvas?
Can we ask users to provide the exact type of canvas that we need?

qjia7 (Contributor, Author):

I think for option 2, the best way is to let the user pass a canvas with no context yet and let each backend decide what kind of context it wants. In that case there is no difference for the webgpu backend, since it needs an extra canvas as the draw target either way. But the webgl backend would need an extra copy to move the data from the default webgl context canvas to the passed canvas.
If we instead allow the user to provide a canvas with a specific context, the extra copy is needed for both webgl and webgpu, unless we also expose the GPU device to users. For example, when users register the webgpu backend, we could let them pass a device to the backend, and the backend would use that device directly instead of creating a new one. If the passed canvas's context is configured with the same device as the webgpu backend, no extra copying is needed.
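
A sketch of that device-sharing idea, assuming `userCanvas` is a canvas supplied by the user (current WebGPU API names; the registration hook shown is hypothetical and does not exist in this PR):

  // Hypothetical: the user creates the GPUDevice, hands it to the backend, and
  // configures their own canvas with the same device, so the backend can draw
  // into that canvas with no extra copy.
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter!.requestDevice();
  // registerWebGPUBackendWithDevice(device);  // hypothetical hook, not a real API
  const userCtx = userCanvas.getContext('webgpu') as GPUCanvasContext;
  userCtx.configure({device, format: navigator.gpu.getPreferredCanvasFormat()});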
