Description
When using the following code, I don't know what will be sent to the LLM endpoint.
import openai
from guardrails import Guard

guard = Guard.from_pydantic(output_class=Pet, prompt=prompt)
raw_output, validated_output, *rest = guard(
    llm_api=openai.completions.create,
    engine="gpt-3.5-turbo-instruct",
)
Even the history doesn't show it, and in any case I don't want to be able to view the prompt only after it has already been sent successfully:
guard_llm.history.last.compiled_instructions
This outputs something like:
You are a helpful assistant, able to express yourself purely through JSON, strictly and precisely adhering to the provided XML schemas.
I don't see where the provided XML schemas actually are.
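For what it's worth, the compiled prompt (which is where the XML schema gets substituted in) also seems to be recorded on the call history, but again only after the request has already gone out. A minimal sketch, assuming history.last exposes a compiled_prompt property alongside compiled_instructions:

# Only inspectable after the call has already been made -- which is the problem.
print(guard.history.last.compiled_instructions)  # system instructions
print(guard.history.last.compiled_prompt)        # user prompt, with the XML schema substituted in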
Why is this needed
I would like to get the actual prompt for debugging purposes.
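The closest thing I have found so far is a workaround: pass a stub callable as llm_api so the compiled prompt gets captured without ever hitting OpenAI. This is only a sketch, and it assumes guardrails hands the compiled prompt to a custom llm_api callable as its prompt argument; the Pet model and prompt template below are stand-ins.

from pydantic import BaseModel
from guardrails import Guard

class Pet(BaseModel):
    name: str
    species: str

prompt = "Describe a pet.\n\n${gr.complete_json_suffix_v2}"

captured = {}

def capture_prompt(prompt, *args, **kwargs):
    # Stub "LLM": record whatever guardrails compiled instead of calling OpenAI.
    # Keep only the first prompt, in case guardrails issues re-ask calls.
    captured.setdefault("prompt", prompt)
    return "{}"  # dummy output; only the prompt matters here

guard = Guard.from_pydantic(output_class=Pet, prompt=prompt)
guard(llm_api=capture_prompt)

print(captured["prompt"])  # the fully compiled prompt, including the XML schema

This is obviously clumsy compared to a first-class way of previewing the prompt before calling the real LLM.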
Implementation details
[If known, describe how this change should be implemented in the codebase]
End result
guard = Guard.from_pydantic(output_class=Pet, prompt=prompt)
guard.compiled_prompt_to_be_sent