Replies: 4 comments 1 reply
-
Easier conversion with Nx tensors.

Current:

```elixir
nx_tensor =
  Nx.iota({256, 1}, type: :u8)
  |> Nx.broadcast({256, 256, 3})

nx_tensor
|> Evision.Mat.from_nx_2d()
|> Evision.resize({400, 400})
|> Evision.Mat.to_nx()
```

Easier:

```elixir
nx_tensor =
  Nx.iota({256, 1}, type: :u8)
  |> Nx.broadcast({256, 256, 3})

Evision.resize(nx_tensor, {400, 400}, to_nx: true)
```

And I'd also like to be able to specify the backend:

```elixir
nx_tensor =
  Nx.iota({256, 1}, type: :u8)
  |> Nx.broadcast({256, 256, 3})

Evision.resize(nx_tensor, {400, 400}, to_nx: EXLA.Backend)
```

With plain `to_nx: true`, the result should use the Nx default backend.
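For what it's worth, a `to_nx` option like this could be resolved with a small dispatch helper after the OpenCV call. The sketch below is only illustrative: `ToNxSketch.maybe_to_nx/2` is a hypothetical name, not part of Evision's API; it only assumes `Evision.Mat.to_nx/1` (used above) and the standard `Nx.backend_transfer/2` and `Nx.default_backend/0`.

```elixir
# Minimal sketch of how a `to_nx` option could be dispatched.
# `maybe_to_nx/2` is hypothetical, not part of Evision's API.
defmodule ToNxSketch do
  # no `to_nx` option -> return the Evision.Mat untouched
  def maybe_to_nx(mat, nil), do: mat

  # `to_nx: true` -> convert and place on the Nx default backend
  def maybe_to_nx(mat, true) do
    mat
    |> Evision.Mat.to_nx()
    |> Nx.backend_transfer(Nx.default_backend())
  end

  # `to_nx: SomeBackend` -> convert, then move to the requested backend
  def maybe_to_nx(mat, backend) when is_atom(backend) do
    mat
    |> Evision.Mat.to_nx()
    |> Nx.backend_transfer(backend)
  end
end
```

Usage would look like `Evision.resize(mat, {400, 400}) |> ToNxSketch.maybe_to_nx(EXLA.Backend)`; the clause order matters so that `nil` and `true` (both atoms) are matched before the generic backend-module clause.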
-
Actually, I think I can perhaps return a tensor that stays on the same backend as the input. For example:

```elixir
exla_tensor =
  Nx.iota({256, 1}, type: :u8)
  |> Nx.broadcast({256, 256, 3})

result_exla_tensor =
  Evision.resize(exla_tensor, {400, 400}, from_nx_2d: true)
```

And to switch to another backend, we can call `Nx.backend_transfer/2` explicitly:

```elixir
exla_tensor =
  Nx.iota({256, 1}, type: :u8)
  |> Nx.broadcast({256, 256, 3})

result_torch_tensor =
  Evision.resize(exla_tensor, {400, 400}, from_nx_2d: true)
  |> Nx.backend_transfer(Torchx.Backend)
```

Or, as you proposed, we can support both:

```elixir
exla_tensor =
  Nx.iota({256, 1}, type: :u8)
  |> Nx.broadcast({256, 256, 3})

torch_tensor_1 =
  Evision.resize(exla_tensor, {400, 400}, from_nx_2d: true, to_nx: Torchx.Backend)

exla_tensor_2 =
  Evision.imread("demo.png", to_nx: EXLA.Backend) # returns an EXLA tensor instead of an Evision.Mat

exla_tensor_3 =
  Evision.imread("demo.png", to_nx: EXLA.Backend) # returns an EXLA tensor
  |> Evision.resize({400, 400}, from_nx_2d: true) # and it should stay an EXLA tensor

exla_tensor_4 =
  Evision.imread("demo.png") # <= this returns an Evision.Mat
  |> Evision.resize({400, 400}, to_nx: EXLA.Backend) # and we can pass `to_nx: EXLA.Backend` so that it returns an EXLA tensor
```

What do you think?
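One way to realize "stay on the same backend" is to record the input tensor's backend before the roundtrip and transfer back afterwards. The following is only a sketch: `resize_same_backend/2` is a hypothetical helper, and reading `tensor.data.__struct__` pokes at Nx's internal representation rather than a public API.

```elixir
# Sketch: keep the result on the same backend as the input tensor.
# NOTE: `tensor.data.__struct__` relies on Nx internals (the `data` field
# holds the backend struct, e.g. %EXLA.Backend{}) and is shown purely
# for illustration.
defmodule SameBackendSketch do
  def resize_same_backend(%Nx.Tensor{} = tensor, size) do
    backend = tensor.data.__struct__

    tensor
    |> Evision.Mat.from_nx_2d()
    |> Evision.resize(size)
    |> Evision.Mat.to_nx()
    |> Nx.backend_transfer(backend)
  end
end
```

With that, `SameBackendSketch.resize_same_backend(exla_tensor, {400, 400})` would hand back an EXLA tensor, and a Torchx input would come back on Torchx.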
-
I found that there are some difficulties in applying this universally. I'll look into it further!
-
A warning appears during the build when CUDA is enabled.
Excluding older GPU architectures from the build should reduce build time. See opencv/opencv#20576.
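For reference, OpenCV's CMake build can restrict which CUDA compute capabilities it compiles for via its `CUDA_ARCH_BIN` option; whether and how Evision forwards such options to its bundled OpenCV build is an assumption here, not something this thread confirms.

```shell
# Restrict OpenCV's CUDA build to specific compute capabilities
# (here 7.5 and 8.6). CUDA_ARCH_BIN is OpenCV's standard CMake option;
# wiring it into Evision's build script is left as an assumption.
cmake -D WITH_CUDA=ON -D CUDA_ARCH_BIN="7.5;8.6" ..
```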
-
Please feel free to share your ideas 💡 for new features!