Conversation

@naSim087 naSim087 commented Sep 4, 2025

Description

Earlier we were not passing the agent prompt while doing the web search, which resulted in the agent not respecting the prompt it was given at creation.
fix: added the agent prompt to the final prompt construction
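
A minimal sketch of the corrected prompt construction (the declared stubs such as parseAgentPrompt are stand-ins for the real helpers; the actual logic lives in webSearchQuestion in server/ai/provider/index.ts):

// Sketch only — declared stubs stand in for the real helpers.
declare function webSearchSystemPrompt(userCtx: string): string
declare function parseAgentPrompt(raw: string): {
  name?: string
  description?: string
  prompt?: string
}

function buildWebSearchSystemPrompt(
  userCtx: string,
  systemPrompt?: string,
  agentPrompt?: string,
): string {
  // A caller-supplied system prompt still takes precedence.
  if (systemPrompt) return systemPrompt

  const base = webSearchSystemPrompt(userCtx)
  if (agentPrompt) {
    // Previously the agent prompt was dropped here; now it is appended.
    const parsed = parseAgentPrompt(agentPrompt)
    return (
      base +
      "\n\n" +
      `Name: ${parsed.name}\nDescription: ${parsed.description}\nPrompt: ${parsed.prompt}`
    )
  }
  return base
}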

Testing

Additional Notes

Summary by CodeRabbit

  • New Features
    • Improved agent handling in chats: when an agent has a configured prompt, it’s now used consistently across routing and stored conversations for more reliable behavior.
    • Enhanced web search prompts: when an agent is used without a system prompt, a clear Name/Description/Prompt block is appended with proper spacing for better context.
  • Refactor
    • Minor internal cleanup with no user-facing impact.

coderabbitai bot commented Sep 4, 2025

Walkthrough

Updates adjust how agent prompts are handled. webSearchQuestion now appends a formatted Name/Description/Prompt block to the system prompt when an agent prompt exists. In chat routing, agentPromptValue becomes mutable and may be replaced with the agent’s configured prompt after lookup, affecting stored chat agentId and AgentMessageApi routing/logging.

Changes

Cohort / File(s) Summary
Provider: web search system prompt composition
server/ai/provider/index.ts
Reworks agent prompt handling in webSearchQuestion: when systemPrompt is absent and agentPrompt exists, parse agent prompt and append a formatted block (Name, Description, Prompt) to the base system prompt (webSearchSystemPrompt(userCtx)). Uses explicit if/else and inserts a separating newline.
Chat API: agent prompt resolution and routing
server/api/chat/chat.ts
Changes agentPromptValue from const to mutable; after fetching agent details, sets `agentPromptValue = agentDetails?.prompt || agentPromptValue`, which then flows into the stored chat agentId and AgentMessageApi routing/logging.
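
A simplified reconstruction of that change, based on the summary above and the review diffs further down (the wrapper function and declared stubs are illustrative, not the actual code layout):

// Illustrative stubs for the real imports used in server/api/chat/chat.ts.
declare function isCuid(id: string): boolean
declare function getAgentByExternalId(
  db: unknown,
  externalId: string,
  workspaceId: number,
): Promise<{ prompt?: string | null } | null>

async function resolveAgentPromptValue(
  db: unknown,
  workspaceId: number,
  agentId?: string,
): Promise<string | undefined> {
  // Before this PR, agentPromptValue was a const holding only the agent's CUID.
  let agentPromptValue = agentId && isCuid(agentId) ? agentId : undefined
  if (agentPromptValue) {
    const agentDetails = await getAgentByExternalId(db, agentPromptValue, workspaceId)
    // New: prefer the agent's configured prompt over the raw CUID when it exists.
    agentPromptValue = agentDetails?.prompt || agentPromptValue
  }
  return agentPromptValue
}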

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant ChatAPI as Chat API (server/api/chat/chat.ts)
  participant Agents as Agent Store
  participant AgentMsg as AgentMessageApi

  User->>ChatAPI: Create chat request (agentPromptValue)
  ChatAPI->>Agents: Load agentDetails by id/name
  Agents-->>ChatAPI: agentDetails { prompt? }
  alt agentDetails.prompt present
    ChatAPI->>ChatAPI: agentPromptValue = agentDetails.prompt
  else
    ChatAPI->>ChatAPI: agentPromptValue unchanged
  end
  ChatAPI->>ChatAPI: Create chat (agentId = agentPromptValue)
  ChatAPI->>AgentMsg: Route message (using agentPromptValue)
  AgentMsg-->>ChatAPI: Response
  ChatAPI-->>User: Chat created and first response
sequenceDiagram
  autonumber
  participant Provider as webSearchQuestion (server/ai/provider/index.ts)
  participant Parser as Agent Prompt Parser
  participant Model as LLM

  Provider->>Provider: base = webSearchSystemPrompt(userCtx)
  alt agentPrompt provided AND no systemPrompt
    Provider->>Parser: parse(agentPrompt)
    Parser-->>Provider: { name, description, prompt }
    Provider->>Provider: systemPrompt = base + "\n\n" + formatted block (Name/Description/Prompt)
  else
    Provider->>Provider: systemPrompt = provided systemPrompt OR base
  end
  Provider->>Model: Invoke with systemPrompt
  Model-->>Provider: Answer

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • zereraz
  • shivamashtikar
  • kalpadhwaryu

Poem

A whisk of prompts, a hop of thought,
I stitched a Name and Prompt I caught—
Into the system’s gentle loom,
Then bounced the chat to find its room.
Carrots aligned, the routes are clear—
Agent whispers, answers near. 🥕✨


@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @naSim087, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue where the AI agent's specific prompt was not being fully utilized during web search operations, leading to inconsistent agent behavior. The changes ensure that the agent's defined persona and instructions are properly incorporated into the system prompt for web searches, thereby maintaining the agent's intended functionality and improving the accuracy of its responses.

Highlights

  • Agent Prompt Integration: Ensured that the agent's custom prompt is correctly passed and respected during web search operations, resolving an issue where the agent's defined behavior was not maintained.
  • System Prompt Construction: Modified the webSearchQuestion function to explicitly include the parsed agent prompt's name, description, and prompt into the system prompt, ensuring comprehensive context for web searches.
  • Agent Prompt Value Retrieval: Updated the chat API to retrieve and utilize the full agent prompt string from agent details, rather than just the agent ID, when an agent is involved in a web search.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request correctly addresses the issue of not passing the agent prompt during web searches. The changes ensure that the agent's configuration is respected. The refactoring in server/ai/provider/index.ts to use an if/else block instead of a ternary operator improves code readability. My main feedback is on variable naming and reuse in server/api/chat/chat.ts to enhance maintainability.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
server/ai/provider/index.ts (1)

172-181: Avoid logging full agent prompts (possible PII/secret leakage).

Both warn/info paths log the entire raw agentPrompt string. These can contain secrets or internal policy text and will end up in logs.

Apply:

-    Logger.warn(
-      `Agent prompt string is valid JSON but did not match expected structures. Treating as literal prompt: '${agentPromptString}'`,
-    )
+    Logger.warn(
+      `Agent prompt string is valid JSON but did not match expected structures. Treating as literal prompt.`
+    )
...
-    Logger.info(
-      `Agent prompt string is not valid JSON or is empty. Treating as literal prompt: '${agentPromptString}'`,
-    )
+    Logger.info(
+      `Agent prompt string is not valid JSON or is empty. Treating as literal prompt.`
+    )

Optionally add a helper to redact/truncate before any future logging.
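
One possible shape for such a helper (illustrative only; the name and character limit are assumptions, not part of the PR):

// Illustrative helper, not part of this PR: truncate prompt text before logging.
const MAX_LOGGED_PROMPT_CHARS = 120

function redactPromptForLog(prompt: string): string {
  const trimmed = prompt.trim()
  if (trimmed.length <= MAX_LOGGED_PROMPT_CHARS) return trimmed
  // Keep a short prefix so logs stay useful without leaking the full prompt.
  return `${trimmed.slice(0, MAX_LOGGED_PROMPT_CHARS)}… [truncated, ${trimmed.length} chars total]`
}

// Example: Logger.warn(`Treating as literal prompt: '${redactPromptForLog(agentPromptString)}'`)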

server/api/chat/chat.ts (5)

4085-4107: Bug: agentPromptValue now conflates agent ID and prompt; DB stores prompt in chat.agentId and logs prompt content.

Reassigning agentPromptValue to the agent’s prompt causes:

  • chat insert to persist the prompt string into agentId (Line 4188-4191), breaking agent-to-chat relations.
  • log line “Routing to AgentMessageApi for agent ${agentPromptValue}” to emit the entire prompt (PII/leak).

Keep agent ID and prompt separate.

Apply:

-    let agentPromptValue = agentId && isCuid(agentId) ? agentId : undefined // Use undefined if not a valid CUID
+    const agentIdValue = agentId && isCuid(agentId) ? agentId : undefined
+    let agentPromptString: string | undefined
@@
-      const agentDetails = await getAgentByExternalId(
+      const agentDetails = await getAgentByExternalId(
         db,
-        agentPromptValue,
+        agentIdValue!,
         userAndWorkspaceCheck.workspace.id,
       )
-      agentPromptValue = agentDetails?.prompt || agentPromptValue
+      agentPromptString = agentDetails?.prompt
       if (!isAgentic && !enableWebSearch && agentDetails) {
-        Logger.info(`Routing to AgentMessageApi for agent ${agentPromptValue}.`)
+        Logger.info(
+          `Routing to AgentMessageApi for agent ${agentDetails.externalId} (${agentDetails.name}).`
+        )
         return AgentMessageApi(c)
       }

Also fix chat creation/storage to keep the true agent ID:

-            agentId: agentPromptValue,
+            agentId: agentIdValue,

And pass the prompt string (not the ID) into LLM params:

-              searchOrAnswerIterator = webSearchQuestion(message, ctx, {
+              searchOrAnswerIterator = webSearchQuestion(message, ctx, {
                 modelId: Models.Gemini_2_5_Flash,
                 stream: true,
                 json: false,
-                agentPrompt: agentPromptValue,
+                agentPrompt: agentPromptString,
                 reasoning:
                   userRequestsReasoning &&
                   ragPipelineConfig[RagPipelineStages.AnswerOrSearch].reasoning,
                 messages: llmFormattedMessages,
                 webSearch: true,
               })

And for the non-web search path:

-                  {
+                  {
                     modelId:
                       ragPipelineConfig[RagPipelineStages.AnswerOrSearch]
                         .modelId,
                     stream: true,
                     json: true,
-                    agentPrompt: agentPromptValue,
+                    agentPrompt: agentPromptString,
                     reasoning:
                       userRequestsReasoning &&
                       ragPipelineConfig[RagPipelineStages.AnswerOrSearch]
                         .reasoning,
                     messages: llmFormattedMessages,
                   },

4181-4191: Store the agent’s externalId, not the prompt, in chats.

This line currently writes whatever agentPromptValue holds (now a prompt). Use the stable agentId for relational integrity and analytics.

Apply:

-            agentId: agentPromptValue,
+            agentId: agentIdValue,

4668-4678: Pass the resolved prompt string to web search; avoid passing raw agent IDs.

Ensure only a prompt string is sent.

See diff in earlier comment (lines 4085-4107) changing agentPrompt to agentPromptString.


4681-4703: Same here for conversation path: pass prompt string, not agent ID.

Align this param with the separation fix.

See diff in earlier comment.


4106-4110: Redact logs that include prompt text.

Logging the full prompt risks leaking internal instructions. Use agent externalId/name instead.

See diff in earlier comment replacing the log line.

🧹 Nitpick comments (1)
server/ai/provider/index.ts (1)

1814-1822: Good fix: agent prompt now reaches web-search system prompt. Add delimiting and size guard.

Appending Name/Description/Prompt solves the original issue. To reduce prompt injection risk and accidental blending with the base instructions, wrap the appended block in a clearly delimited section and optionally clip very long prompts.

Apply:

-        params.systemPrompt =
-          webSearchSystemPrompt(userCtx) +
-          "\n\n" +
-          `Name: ${parsed.name}\nDescription: ${parsed.description}\nPrompt: ${parsed.prompt}`
+        const maxAppend = 6000
+        const safe = (s?: string) =>
+          (s ?? "").toString().replace(/\u0000/g, "").slice(0, maxAppend)
+        const { name, description, prompt } = parsed
+        params.systemPrompt = `${webSearchSystemPrompt(userCtx)}\n\n[Agent Context]\nName: ${safe(name) || "(unnamed)"}\nDescription: ${safe(description) || "(none)"}\nPrompt:\n"""${safe(prompt)}"""`
📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 5684dc4 and accc40a.

📒 Files selected for processing (2)
  • server/ai/provider/index.ts (1 hunks)
  • server/api/chat/chat.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
server/ai/provider/index.ts (1)
server/ai/prompts.ts (1)
  • webSearchSystemPrompt (2348-2352)
🔇 Additional comments (3)
server/api/chat/chat.ts (3)

229-229: No functional change.

Type-only import formatting is fine.


1813-1819: Minor: ensure systemPrompt augmentation always happens when desired.

webSearchQuestion only appends agent prompt if params.systemPrompt is falsy. If upstream sets a custom systemPrompt, agent context won’t be injected. Confirm that’s intended; otherwise, append conditionally or merge.

Would you like me to prepare a follow-up PR to merge agent context even when a custom systemPrompt is provided (behind a flag)?
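
For reference, such an opt-in merge might look roughly like this (the mergeAgentContext flag and formatAgentBlock helper are hypothetical, not existing options in the codebase):

// Hypothetical sketch: mergeAgentContext is an assumed opt-in flag.
declare function webSearchSystemPrompt(userCtx: string): string
declare function formatAgentBlock(agentPrompt: string): string

interface PromptParams {
  systemPrompt?: string
  agentPrompt?: string
  mergeAgentContext?: boolean // assumed flag, not in the current codebase
}

function composeSystemPrompt(userCtx: string, params: PromptParams): string {
  const base = params.systemPrompt ?? webSearchSystemPrompt(userCtx)
  const shouldAppend =
    !!params.agentPrompt && (!params.systemPrompt || !!params.mergeAgentContext)
  // Append agent context when no custom prompt exists (current behavior)
  // or when the caller explicitly opts in via the flag.
  return shouldAppend ? `${base}\n\n${formatAgentBlock(params.agentPrompt!)}` : base
}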


4085-4191: Verify all agentId references across code and DB
Run without type filters to catch every occurrence:

rg -n '\bagentId\b' -g '*.ts' -g '*.sql' || true  
rg -n 'agent_id' -g '*.sql' || true  

Ensure any reads/writes (migrations, Prisma schema, model definitions, API handlers) that treat chat.agentId as an external ID are updated to handle the new prompt value.

@zereraz zereraz merged commit 05a7e38 into main Sep 4, 2025
4 checks passed
@zereraz zereraz deleted the fix/agentPromptConstruction branch September 4, 2025 14:00