Last message context for "prompt enchancer" tool #5622
VooDisss started this conversation in Feature Requests
Currently, the prompt enhancer tool has a `${userInput}` variable. We need to implement a `${lastllmoutput}` variable as well, so the enhancer has the context of the last message that was sent. When using the prompt enhancement tool, it would take that context into consideration and produce much better, conversation-targeted prompt enhancements.

LLM-generated explanation (with examples) of how this is useful:
Of course. The suggestion to add a `${lastllmoutput}` variable to RooCode's prompt enhancer is an excellent one for improving conversational context. Here's a breakdown of why it's needed, with simple examples of how it would work and where the current system falls short.

The Core Idea: Why Conversational Context Matters
RooCode's "prompt enhancer" is a tool that takes your input (`${userInput}`) and makes it more detailed and effective before sending it to the AI. This is great for one-off commands.

However, conversations with an AI are rarely a single command. They are a back-and-forth dialogue. The proposed `${lastllmoutput}` variable would allow the prompt enhancer to "remember" what the AI just said, leading to much smarter and more relevant follow-up prompts.

Example 1: Refining a Broad Request
Imagine you're starting with a very general idea and want the AI to help you narrow it down.
Scenario: You have a Python script and you want to make it better.
How it works now (without `${lastllmoutput}`):

- Your input (`${userInput}`): "Improve this Python script."
- The AI's response (`${lastllmoutput}`): "The script is functional, but it could be improved by adding error handling and by refactoring the main function to be more modular."
- Your follow-up (`${userInput}`): "Okay, do that."

Without the context of the AI's last message, the prompt enhancer can only generate a generic prompt. That is vague and relies on the AI to remember its own previous suggestion.
How it would work with `${lastllmoutput}`:

With access to the AI's last response, the prompt enhancer can be much more specific.

- Your input (`${userInput}`): "Improve this Python script."
- The AI's response (`${lastllmoutput}`): "The script is functional, but it could be improved by adding error handling and by refactoring the main function to be more modular."
- Your follow-up (`${userInput}`): "Okay, do that."

Now the enhancer can fold the AI's concrete suggestions (error handling, a more modular main function) into the enhanced prompt, producing a much clearer and more direct instruction.
Example 2: Asking a Follow-Up Question
This is a very common use case where the current system struggles.
Scenario: You're asking the AI for recommendations.
How it works now (without `${lastllmoutput}`):

- Your input (`${userInput}`): "What database should I use for my project?"
- The AI's response (`${lastllmoutput}`): "For your project's needs, I recommend using PostgreSQL."
- Your follow-up (`${userInput}`): "What are the pros and cons of that?"

Without knowing what "that" refers to, the prompt enhancer is at a disadvantage and can only restate the question as asked. The AI will likely know you're talking about PostgreSQL from the conversation history, but the enhanced prompt itself isn't helping to specify that.
How it would work with `${lastllmoutput}`:

- Your input (`${userInput}`): "What database should I use for my project?"
- The AI's response (`${lastllmoutput}`): "For your project's needs, I recommend using PostgreSQL."
- Your follow-up (`${userInput}`): "What are the pros and cons of that?"

The enhanced prompt can now name PostgreSQL explicitly, which creates a much more robust and contextually aware instruction.
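Example 2 also shows why the substitution has to degrade gracefully: on the very first turn there is no previous message to insert. One way this could be handled, sketched with Example 2's dialogue (the `buildEnhancerPrompt` helper is hypothetical):

```typescript
// Illustrative only: prepend the last assistant message when one exists,
// so follow-ups like "the pros and cons of that" can be resolved.
function buildEnhancerPrompt(userInput: string, lastLlmOutput?: string): string {
  const context = lastLlmOutput
    ? `The assistant's last message was: "${lastLlmOutput}"\n\n`
    : "";
  return context + `Rewrite as a detailed, actionable prompt: "${userInput}"`;
}

// First turn: no previous message, so no context block is added.
const firstTurn = buildEnhancerPrompt(
  "What database should I use for my project?"
);

// Follow-up turn: "that" can now be resolved to PostgreSQL.
const followUp = buildEnhancerPrompt(
  "What are the pros and cons of that?",
  "For your project's needs, I recommend using PostgreSQL."
);
```

On the first turn the enhancer behaves exactly as it does today; the new variable only changes follow-up turns.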
Example 3: When Things Go Wrong (Debugging)
Scenario: The AI's suggestion didn't work as expected.
How it works now (without `${lastllmoutput}`):

- Your input (`${userInput}`): "This code is giving me an error."
- The AI's response (`${lastllmoutput}`): "It seems to be a `KeyError`. Here is a suggested fix to check if the key exists before accessing it." (provides a code snippet)
- Your follow-up (`${userInput}`): "That didn't work, it's still crashing."

The current prompt enhancer can only create a vague follow-up.
How it would work with `${lastllmoutput}`:

- Your input (`${userInput}`): "This code is giving me an error."
- The AI's response (`${lastllmoutput}`): "It seems to be a `KeyError`. Here is a suggested fix to check if the key exists before accessing it." (provides a code snippet)
- Your follow-up (`${userInput}`): "That didn't work, it's still crashing."

The enhanced prompt, aware of the failed solution, can be much more intelligent: it can quote the fix that was already tried and ask for a different approach.
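For the debugging case, the value of `${lastllmoutput}` is that the enhanced prompt can quote the attempt that already failed, so the model is steered away from repeating it. A sketch of what the expanded prompt might look like (the wording is illustrative, not a prescribed template):

```typescript
// Illustrative: an enhanced prompt that rules out the failed fix.
const lastLlmOutput =
  "It seems to be a KeyError. Here is a suggested fix to check if the key " +
  "exists before accessing it.";
const userInput = "That didn't work, it's still crashing.";

const enhancedPrompt = [
  "The previous suggestion did not resolve the issue. It was:",
  lastLlmOutput,
  `The user reports: "${userInput}"`,
  "Diagnose why the suggested fix may be insufficient and propose a",
  "different approach.",
].join("\n");
```

Compare this with the context-free version, which can only say something like "the previous fix didn't work" without being able to state what that fix was.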
In summary, the addition of a `${lastllmoutput}` variable would be a significant step up for RooCode's prompt enhancer, transforming it from a tool that improves single commands into one that excels at enhancing an entire conversation.