
[feat] what's the prompt generated from Guard.from_pydantic #919

@jack2684

Description

When using the following code, I don't know what will be sent to the LLM endpoint.

guard = Guard.from_pydantic(output_class=Pet, prompt=prompt)

raw_output, validated_output, *rest = guard(
    llm_api=openai.completions.create,
    model="gpt-3.5-turbo-instruct"
)
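
For context, assume Pet and prompt are defined along these lines (an illustrative sketch following the usual Guardrails Pydantic pattern; the actual definitions are not shown above):

import openai
from pydantic import BaseModel, Field
from guardrails import Guard

class Pet(BaseModel):
    # Hypothetical output model, just for illustration.
    name: str = Field(description="The pet's name")
    species: str = Field(description="The pet's species, e.g. dog or cat")

prompt = """
Describe a pet.

${gr.complete_json_suffix_v2}
"""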

Even the history doesn't show the full prompt, not to mention that I don't want to be able to view the prompt only after it has already been sent successfully:

guard.history.last.compiled_instructions

This will output something like:

You are a helpful assistant, able to express yourself purely through JSON, strictly and precisely adhering to the provided XML schemas.

I don't see where the provided XML schemas actually are.
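
The schema seems to be compiled into the prompt rather than the instructions. Assuming the call history exposes a compiled_prompt attribute that mirrors compiled_instructions above (it appears to in recent Guardrails versions, as far as I can tell), the schema should at least be visible there after a call:

print(guard.history.last.compiled_prompt)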

Why is this needed
I would like to get the original prompt for debugging purposes.
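
One workaround for now (a sketch, assuming Guardrails forwards the compiled prompt and any extra keyword arguments straight to the llm_api callable) is to wrap the real API call in a thin logger:

def logged_completions_create(*args, **kwargs):
    # Print exactly what Guardrails passes through, compiled prompt included,
    # before forwarding the call to the real OpenAI endpoint.
    print("llm_api args:", args)
    print("llm_api kwargs:", kwargs)
    # Depending on the Guardrails version, extra kwargs may need filtering
    # before forwarding to openai.completions.create.
    return openai.completions.create(*args, **kwargs)

raw_output, validated_output, *rest = guard(
    llm_api=logged_completions_create,
    model="gpt-3.5-turbo-instruct",
)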

Implementation details
[If known, describe how this change should be implemented in the codebase]

End result

guard = Guard.from_pydantic(output_class=Pet, prompt=prompt)
guard.compiled_prompt_to_be_sent
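
Until something like this exists, a stub llm_api can capture the compiled prompt without any network call (again only a sketch: I'm assuming the prompt reaches the callable as an argument, and depending on the version Guardrails may retry or wrap the raised exception):

class PromptCaptured(Exception):
    """Raised by the stub to abort before any real API call."""

captured = {}

def capturing_llm_api(*args, **kwargs):
    # Record whatever Guardrails would have sent, then bail out.
    captured["args"] = args
    captured["kwargs"] = kwargs
    raise PromptCaptured

try:
    guard(llm_api=capturing_llm_api, model="gpt-3.5-turbo-instruct")
except PromptCaptured:
    pass  # expected: we aborted on purpose

print(captured)  # the compiled prompt that would have been sent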
