fix:added the agent prompt to passed , while doing the web search #781
Conversation
Walkthrough

Updates adjust how agent prompts are handled. webSearchQuestion now appends a formatted Name/Description/Prompt block to the system prompt when an agent prompt exists. In chat routing, agentPromptValue becomes mutable and may be replaced with the agent's configured prompt after lookup, affecting the stored chat agentId and AgentMessageApi routing/logging.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant ChatAPI as Chat API (server/api/chat/chat.ts)
    participant Agents as Agent Store
    participant AgentMsg as AgentMessageApi
    User->>ChatAPI: Create chat request (agentPromptValue)
    ChatAPI->>Agents: Load agentDetails by id/name
    Agents-->>ChatAPI: agentDetails { prompt? }
    alt agentDetails.prompt present
        ChatAPI->>ChatAPI: agentPromptValue = agentDetails.prompt
    else
        ChatAPI->>ChatAPI: agentPromptValue unchanged
    end
    ChatAPI->>ChatAPI: Create chat (agentId = agentPromptValue)
    ChatAPI->>AgentMsg: Route message (using agentPromptValue)
    AgentMsg-->>ChatAPI: Response
    ChatAPI-->>User: Chat created and first response
```

```mermaid
sequenceDiagram
    autonumber
    participant Provider as webSearchQuestion (server/ai/provider/index.ts)
    participant Parser as Agent Prompt Parser
    participant Model as LLM
    Provider->>Provider: base = webSearchSystemPrompt(userCtx)
    alt agentPrompt provided AND no systemPrompt
        Provider->>Parser: parse(agentPrompt)
        Parser-->>Provider: { name, description, prompt }
        Provider->>Provider: systemPrompt = base + "\n\n" + formatted block (Name/Description/Prompt)
    else
        Provider->>Provider: systemPrompt = provided systemPrompt OR base
    end
    Provider->>Model: Invoke with systemPrompt
    Model-->>Provider: Answer
```
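The prompt-construction branch in the second diagram can be sketched as follows. This is a minimal sketch based only on the walkthrough above; the function name `buildWebSearchSystemPrompt` and the assumption that the agent prompt is a JSON string with `name`/`description`/`prompt` fields are illustrative, not the actual source.

```typescript
// Sketch of the described behavior: append a Name/Description/Prompt block
// to the base system prompt when an agent prompt exists and no explicit
// systemPrompt was supplied. Shapes are assumptions, not the real API.
interface AgentPromptBlock {
  name: string;
  description: string;
  prompt: string;
}

function buildWebSearchSystemPrompt(
  base: string,
  agentPrompt?: string,
  systemPrompt?: string,
): string {
  // A caller-supplied systemPrompt wins outright (per the diagram's else branch).
  if (systemPrompt) return systemPrompt;
  if (!agentPrompt) return base;
  const parsed: AgentPromptBlock = JSON.parse(agentPrompt);
  return (
    base +
    "\n\n" +
    `Name: ${parsed.name}\nDescription: ${parsed.description}\nPrompt: ${parsed.prompt}`
  );
}

const out = buildWebSearchSystemPrompt(
  "You are a web search assistant.",
  JSON.stringify({
    name: "Docs Bot",
    description: "Answers docs questions",
    prompt: "Be terse.",
  }),
);
console.log(out);
```

Note how the agent block is appended rather than replacing the base prompt, so the web-search instructions and the agent persona coexist in one system prompt.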
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Summary of Changes
Hello @naSim087, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a critical issue where the AI agent's specific prompt was not being fully utilized during web search operations, leading to inconsistent agent behavior. The changes ensure that the agent's defined persona and instructions are properly incorporated into the system prompt for web searches, thereby maintaining the agent's intended functionality and improving the accuracy of its responses.
Highlights
- Agent Prompt Integration: Ensured that the agent's custom prompt is correctly passed and respected during web search operations, resolving an issue where the agent's defined behavior was not maintained.
- System Prompt Construction: Modified the `webSearchQuestion` function to explicitly include the parsed agent prompt's name, description, and prompt in the system prompt, ensuring comprehensive context for web searches.
- Agent Prompt Value Retrieval: Updated the chat API to retrieve and use the full agent prompt string from agent details, rather than just the agent ID, when an agent is involved in a web search.
Code Review
This pull request correctly addresses the issue of not passing the agent prompt during web searches. The changes ensure that the agent's configuration is respected. The refactoring in server/ai/provider/index.ts to use an if/else block instead of a ternary operator improves code readability. My main feedback is on variable naming and reuse in server/api/chat/chat.ts to enhance maintainability.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
server/ai/provider/index.ts (1)
172-181: Avoid logging full agent prompts (possible PII/secret leakage).

Both warn/info paths log the entire raw agentPrompt string. These can contain secrets or internal policy text and will end up in logs.

Apply:

```diff
-      Logger.warn(
-        `Agent prompt string is valid JSON but did not match expected structures. Treating as literal prompt: '${agentPromptString}'`,
-      )
+      Logger.warn(
+        `Agent prompt string is valid JSON but did not match expected structures. Treating as literal prompt.`
+      )
 ...
-      Logger.info(
-        `Agent prompt string is not valid JSON or is empty. Treating as literal prompt: '${agentPromptString}'`,
-      )
+      Logger.info(
+        `Agent prompt string is not valid JSON or is empty. Treating as literal prompt.`
+      )
```

Optionally add a helper to redact/truncate before any future logging.
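A redact/truncate helper along the suggested lines could look like this. The name `redactForLog` and the 80-character cap are assumptions for illustration only, not part of the codebase.

```typescript
// Hypothetical helper: collapse whitespace and truncate before logging so a
// raw agent prompt never reaches the log sink in full.
function redactForLog(value: string, maxLen = 80): string {
  const clean = value.replace(/\s+/g, " ").trim();
  return clean.length <= maxLen
    ? clean
    : `${clean.slice(0, maxLen)}… [${clean.length - maxLen} chars redacted]`;
}

const sample = "You are an internal policy agent. ".repeat(10);
console.log(redactForLog(sample));
```

Callers would then log `redactForLog(agentPromptString)` instead of the raw string wherever prompt text must appear in a message at all.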
server/api/chat/chat.ts (5)
4085-4107: Bug: agentPromptValue now conflates agent ID and prompt; the DB stores the prompt in chat.agentId and logs prompt content.

Reassigning agentPromptValue to the agent's prompt causes:

- the chat insert to persist the prompt string into agentId (lines 4188-4191), breaking agent-to-chat relations.
- the log line "Routing to AgentMessageApi for agent ${agentPromptValue}" to emit the entire prompt (PII/leak).

Keep the agent ID and prompt separate.

Apply:

```diff
-    let agentPromptValue = agentId && isCuid(agentId) ? agentId : undefined // Use undefined if not a valid CUID
+    const agentIdValue = agentId && isCuid(agentId) ? agentId : undefined
+    let agentPromptString: string | undefined
@@
-      const agentDetails = await getAgentByExternalId(
+      const agentDetails = await getAgentByExternalId(
         db,
-        agentPromptValue,
+        agentIdValue!,
         userAndWorkspaceCheck.workspace.id,
       )
-      agentPromptValue = agentDetails?.prompt || agentPromptValue
+      agentPromptString = agentDetails?.prompt
       if (!isAgentic && !enableWebSearch && agentDetails) {
-        Logger.info(`Routing to AgentMessageApi for agent ${agentPromptValue}.`)
+        Logger.info(
+          `Routing to AgentMessageApi for agent ${agentDetails.externalId} (${agentDetails.name}).`
+        )
         return AgentMessageApi(c)
       }
```

Also fix chat creation/storage to keep the true agent ID:

```diff
-        agentId: agentPromptValue,
+        agentId: agentIdValue,
```

And pass the prompt string (not the ID) into the LLM params:

```diff
-        searchOrAnswerIterator = webSearchQuestion(message, ctx, {
+        searchOrAnswerIterator = webSearchQuestion(message, ctx, {
           modelId: Models.Gemini_2_5_Flash,
           stream: true,
           json: false,
-          agentPrompt: agentPromptValue,
+          agentPrompt: agentPromptString,
           reasoning:
             userRequestsReasoning &&
             ragPipelineConfig[RagPipelineStages.AnswerOrSearch].reasoning,
           messages: llmFormattedMessages,
           webSearch: true,
         })
```

And for the non-web-search path:

```diff
-          {
+          {
             modelId:
               ragPipelineConfig[RagPipelineStages.AnswerOrSearch].modelId,
             stream: true,
             json: true,
-            agentPrompt: agentPromptValue,
+            agentPrompt: agentPromptString,
             reasoning:
               userRequestsReasoning &&
               ragPipelineConfig[RagPipelineStages.AnswerOrSearch].reasoning,
             messages: llmFormattedMessages,
           },
```
4181-4191: Store the agent's externalId, not the prompt, in chats.

This line currently writes whatever agentPromptValue holds (now a prompt). Use the stable agent ID for relational integrity and analytics.

Apply:

```diff
-        agentId: agentPromptValue,
+        agentId: agentIdValue,
```
4668-4678: Pass the resolved prompt string to web search; avoid passing raw agent IDs.

Ensure only a prompt string is sent. See the diff in the earlier comment (lines 4085-4107) changing agentPrompt to agentPromptString.

4681-4703: Same for the conversation path: pass the prompt string, not the agent ID.

Align this param with the separation fix. See the diff in the earlier comment.

4106-4110: Redact logs that include prompt text.

Logging the full prompt risks leaking internal instructions. Use the agent's externalId/name instead. See the diff in the earlier comment replacing the log line.
🧹 Nitpick comments (1)
server/ai/provider/index.ts (1)
1814-1822: Good fix: the agent prompt now reaches the web-search system prompt. Add delimiting and a size guard.

Appending Name/Description/Prompt solves the original issue. To reduce prompt-injection risk and accidental blending with the base instructions, wrap the appended block in a clearly delimited section and optionally clip very long prompts.

Apply:

```diff
-        params.systemPrompt =
-          webSearchSystemPrompt(userCtx) +
-          "\n\n" +
-          `Name: ${parsed.name}\nDescription: ${parsed.description}\nPrompt: ${parsed.prompt}`
+        const maxAppend = 6000
+        const safe = (s?: string) =>
+          (s ?? "").toString().replace(/\u0000/g, "").slice(0, maxAppend)
+        const { name, description, prompt } = parsed
+        params.systemPrompt = `${webSearchSystemPrompt(userCtx)}\n\n[Agent Context]\nName: ${safe(name) || "(unnamed)"}\nDescription: ${safe(description) || "(none)"}\nPrompt:\n"""${safe(prompt)}"""`
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- server/ai/provider/index.ts (1 hunks)
- server/api/chat/chat.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
server/ai/provider/index.ts (1)
server/ai/prompts.ts (1)
webSearchSystemPrompt(2348-2352)
🔇 Additional comments (3)
server/api/chat/chat.ts (3)
229-229: No functional change. Type-only import formatting is fine.
1813-1819: Minor: ensure systemPrompt augmentation always happens when desired.

webSearchQuestion only appends the agent prompt if params.systemPrompt is falsy. If upstream sets a custom systemPrompt, agent context won't be injected. Confirm that's intended; otherwise, append conditionally or merge.
Would you like me to prepare a follow-up PR to merge agent context even when a custom systemPrompt is provided (behind a flag)?
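One way such a flag-gated merge could look is sketched below. `resolveSystemPrompt`, the `[Agent Context]` delimiter, and the `mergeAgentContext` option are all assumptions for illustration, not the actual API.

```typescript
// Sketch: merge agent context into a caller-supplied systemPrompt when a
// hypothetical `mergeAgentContext` flag is set, instead of silently
// dropping the agent prompt as the current falsy-check does.
interface PromptParams {
  systemPrompt?: string;
  agentPrompt?: string;
  mergeAgentContext?: boolean; // assumed flag, not in the current codebase
}

function resolveSystemPrompt(base: string, params: PromptParams): string {
  const agentBlock = params.agentPrompt
    ? `\n\n[Agent Context]\n${params.agentPrompt}`
    : "";
  if (params.systemPrompt) {
    // Today a custom systemPrompt wins outright; behind the flag, the
    // agent block is appended to it as well.
    return params.mergeAgentContext
      ? params.systemPrompt + agentBlock
      : params.systemPrompt;
  }
  return base + agentBlock;
}

const merged = resolveSystemPrompt("base", {
  systemPrompt: "custom",
  agentPrompt: "Be terse.",
  mergeAgentContext: true,
});
console.log(merged);
```

Keeping the merge behind a flag preserves backward compatibility for callers that deliberately override the system prompt.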
4085-4191: Verify all agentId references across code and DB.

Run without type filters to catch every occurrence:

```shell
rg -n '\bagentId\b' -g '*.ts' -g '*.sql' || true
rg -n 'agent_id' -g '*.sql' || true
```

Ensure any reads/writes (migrations, Prisma schema, model definitions, API handlers) that treat chat.agentId as an external ID are updated to handle the new prompt value.
Description

Earlier we were not passing the agent prompt while doing the web search, which resulted in the agent not respecting the prompt given to it at creation.

fix: added the agent prompt to the final prompt construction
Testing
Additional Notes