Replies: 1 comment 2 replies
-
No, the Add Messages node has no access to the 'role: user' input dictionary that contains the nested image URL/file reference; that's a whole different part of the processing procedure. I have it set to auto, in that I don't specify it and auto is the default. I didn't read the docs as saying that 512x512 would automatically be rated low by the auto setting; it seemed to be unstated exactly what criteria are used for making this determination. It sounded like it makes a low-res 512x512 version of the image and, additionally, 512x512 crops using more tokens if it determines that 'high' would be the appropriate setting. It might even look at the prompt to determine if it can provide a reasonable response using just a low-res version of the image, I don't know. Anyway, this is something that is going to be outside the scope of this project for the time being. You could experiment with the image and prompt and see if you could trigger a low-res/low-token response, like maybe just asking for the background color or something else that doesn't require much resolution.
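For reference, here is a sketch of the token-cost formula from OpenAI's vision docs as I read them (the exact scaling rules are an assumption on my part; note that a 512x512 image at high detail works out to 255 tokens, which matches the "3x more" observation below, since low detail is a flat 85 tokens):

```python
import math

def vision_tokens(width: int, height: int, detail: str = "high") -> int:
    """Estimate vision input tokens per OpenAI's published formula.

    Assumptions: "low" is a flat 85 tokens; "high" scales the image to
    fit within 2048x2048, downscales so the shortest side is at most
    768px, then charges 170 tokens per 512px tile plus an 85-token base.
    """
    if detail == "low":
        return 85
    # Fit within a 2048x2048 square (downscale only).
    scale = min(1.0, 2048 / max(width, height))
    w, h = width * scale, height * scale
    # Downscale so the shortest side is 768px (no upscaling assumed).
    scale = 768 / min(w, h)
    if scale < 1.0:
        w, h = w * scale, h * scale
    tiles = math.ceil(w / 512) * math.ceil(h / 512)
    return 85 + 170 * tiles

print(vision_tokens(512, 512, "low"))    # 85
print(vision_tokens(512, 512, "high"))   # 255
print(vision_tokens(1024, 1024, "high")) # 765
```

So even if auto picks high for a 512x512 image, the difference you'd be looking for in the usage numbers is roughly 255 vs. 85 prompt tokens.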
-
The vision models on OpenAI's API will accept low- or high-fidelity image processing. Supposedly, if you send a 512x512 image, it should automatically use the low-fidelity setting. But I haven't found that to be the case. It still seems to process 512x512 images using the high-fidelity setting, and charges you 3x more in tokens. So to use the low setting we need to explicitly set the `detail` parameter to `low`. Here are the docs:
How can we set the `detail` parameter in Plush? From the looks of it, you have to set this parameter inside the messages array; it is not just another top-level parameter. Could we add these parameters using the Add Parameters node, like this? `messages::content::image_url::detail::low`