Replies: 3 comments 6 replies
-
That's because Filament materials are expected to output values with no transfer function applied ("in linear space"). When reading from a texture, this is usually handled automatically by marking the texture as an sRGB texture, so the sampler applies the EOTF for you. Otherwise, you have to do it yourself in your material.
-
I am absolutely positive I am using linear tone-mapping.

The texture for the camera preview is created using […] (all on the native side, using the […]). I create a stream for the […]. This is rendered using an […]. To take a screenshot, I do […], where […], and feed it the pixel buffer just filled by […]. Then render this again using an […].

I would say that, assuming I haven't forgotten anything in the summary above, this […]. I was suspecting that this might be an issue in an automatic tone-mapping taking place […]
-
I can get this to work right by doing an […].

In that case the rendered camera-stream texture looks the same as the rendered texture which I generate from the data obtained from […]. I can't say I understand why, because the […]
-
On Android, in order to render the camera feed, I am using `Stream` with `.stream(...)` set to the camera external texture, and use this in `Texture::setStream`. This works fine. I render this texture on a full-screen quad, and then call `readPixels` on the stream to get a bitmap of the full-resolution camera image (as opposed to calling `readPixels` on the renderer).

If I now take that bitmap image and use it as the source for a new `Texture`, it seems that there is a brightness or gamma mismatch. In particular, if I make a camera shot of a dark room, the preview renders fine (the camera knows how to set the exposure or correct for low light), but once I stick the image obtained from `Stream::readPixels` into a new texture and render that, it is a lot darker. For a bright scene the preview and the final bitmap are much closer or indistinguishable. I am using linear tone-mapping.

What is the right way to compensate for this, so that the `Stream::readPixels` result gives what I actually see on the screen when I render this stream through a texture?

(I need a solution that works on Android versions down to 5.x, so I cannot use the more modern hardware-buffer-based approach.)