Avoiding ambiguity between formatted ChatPromptTemplate messages? #28979
chrispy-snps asked this question in Q&A (Unanswered)
Example Code
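The code block from the original post was not captured in this page. Below is a minimal sketch, with assumed message contents, that reproduces the behavior described under "Description": both messages end in Markdown block elements, so trailing newlines are significant.

```python
from langchain_core.prompts import ChatPromptTemplate

# Both messages end in Markdown list items, so the blank line (or lack
# of one) before the next role delimiter matters to the LLM.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Answer using the following rules:\n\n* Be concise.\n* Use Markdown."),
        ("human", "Summarize these fruits:\n\n* apples\n* oranges"),
    ]
)

# Render the whole chat prompt to a single string, as a
# text-completion pipeline or a debugging hook might do.
print(prompt.format_prompt().to_string())
```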
Description
When a `ChatPromptTemplate` is rendered to text, all messages are string-concatenated directly adjacent to each other. But when newlines are significant within messages (such as to delineate Markdown block elements or assistant/user paragraphs), this causes ambiguity at the message boundaries, because the role delimiters now occur in the middle of what the LLM might interpret as "paragraphs".

The rendered `ChatPromptTemplate` output from the provided code is shown below, with markers added to show how the LLM might interpret paragraphs after processing the Markdown content. We are working around this by post-processing all our messages to manually add trailing newlines, but is this something that LangChain should be handling natively?
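The rendered output block was lost when this page was captured. The following is what the sketch above produces (messages joined with a single newline, so no blank line separates one message's last block from the next role delimiter), with markers added in the spirit the author describes:

```text
System: Answer using the following rules:

* Be concise.
* Use Markdown.                  <-- no blank line here, so the LLM may
Human: Summarize these fruits:   <-- read these two lines as one paragraph

* apples
* oranges
```

For illustration, a minimal sketch of the kind of post-processing workaround described above (this is not the author's actual code; the helper name and the `model_copy` approach are assumptions):

```python
from langchain_core.messages import BaseMessage

def pad_message_boundaries(messages: list[BaseMessage]) -> list[BaseMessage]:
    """Ensure each message ends with a blank line so that role boundaries
    also act as Markdown paragraph boundaries when the messages are later
    rendered to a single string."""
    padded = []
    for msg in messages:
        content = msg.content
        if isinstance(content, str) and not content.endswith("\n\n"):
            # Copy rather than mutate, normalizing to exactly one blank line.
            msg = msg.model_copy(update={"content": content.rstrip("\n") + "\n\n"})
        padded.append(msg)
    return padded

# Usage: pad the formatted messages before rendering them to text.
padded = pad_message_boundaries(prompt.format_prompt().to_messages())
```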
System Info
langchain 0.3.9, using Python 3.10 on Linux