JSON mode in offline batch inference #7482
amritap-ef
announced in
Q&A
Hi,
Is it possible to use a custom decoding config for outlines / lm-enforcer JSON output per sample in a batch when running offline inference?
I have a set of samples for which I want to enforce JSON mode, but the JSON structure differs per sample. From the docs, it seems it's only possible to pass a single decoding config that is then applied to all prompts. Is there a way to specify one per prompt? A sketch of what I'm hoping for is below.
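To make the intent concrete, here is roughly what I'd like to be able to do. This is only a sketch, not something I've confirmed works: I'm assuming a `GuidedDecodingParams` object that can be attached to `SamplingParams` via a `guided_decoding` field, and that `LLM.generate()` accepts a list of per-prompt `SamplingParams`; the schemas and model name are placeholders.

```python
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams  # assumed import path

# Placeholder schemas: in my case every sample has a different JSON structure.
schema_a = {"type": "object", "properties": {"name": {"type": "string"}}, "required": ["name"]}
schema_b = {"type": "object", "properties": {"price": {"type": "number"}}, "required": ["price"]}

prompts = [
    "Extract the person's name as JSON: Alice went to the market.",
    "Extract the price as JSON: The book costs 12.99 dollars.",
]

# One SamplingParams per prompt, each carrying its own guided-decoding schema.
per_prompt_params = [
    SamplingParams(max_tokens=128, guided_decoding=GuidedDecodingParams(json=schema_a)),
    SamplingParams(max_tokens=128, guided_decoding=GuidedDecodingParams(json=schema_b)),
]

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # placeholder model
# Assumes generate() accepts a list of SamplingParams matching len(prompts).
outputs = llm.generate(prompts, per_prompt_params)
for out in outputs:
    print(out.outputs[0].text)
```

If something like this (or an equivalent per-request option for the outlines / lm-enforcer backends) is already supported, a pointer to the relevant docs would be much appreciated.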