Conversation

robertjdominguez
Collaborator

Description

Streamlined DocsBot system instructions to reduce over-engineering and improve response quality.

  • Removed excessive validation protocols that were causing the bot to over-query documentation
  • Simplified query classification from complex rules to clear patterns
  • Introduced smart response routing based on question type
  • Eliminated redundant validation requirements that slowed down simple answers

Previously, the bot was configured with heavy-handed validation protocols that required documentation queries for every response:

````yaml path=pql/globals/metadata/promptql-config.hml mode=EXCERPT
<validation_protocols description="Instructions for validating CLI and metadata content">
- CLI Validation Rule: ANY response containing CLI commands MUST validate those commands against documentation pages BEFORE crafting the response - this is a blocking requirement
- Metadata Validation Rule: ANY response containing metadata examples MUST validate those examples against reference pages BEFORE crafting the response - this is a blocking requirement
- MANDATORY VALIDATION GATE: Cannot provide response until validation is complete for all CLI commands, flags, and configuration examples
````
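
To make the cost of that gate concrete, here is a minimal sketch in Python (not the bot's actual implementation; the function names are illustrative) of what "validate everything before answering" amounts to:

```python
def query_documentation(topic: str) -> None:
    """Stand-in for a real documentation search; here it only logs the lookup."""
    print(f"querying docs for: {topic}")


def answer_with_blocking_gate(draft: str,
                              cli_commands: list[str],
                              metadata_examples: list[str]) -> str:
    """Old flow: the draft cannot be returned until every lookup has completed."""
    # Blocking requirement: validate each CLI command mentioned in the draft.
    for command in cli_commands:
        query_documentation(f"CLI reference for {command}")
    # Blocking requirement: validate each metadata example in the draft.
    for example in metadata_examples:
        query_documentation(f"metadata reference for {example}")
    return draft
```

Every lookup is a round trip before the user sees anything, so the more content the bot flagged for validation, the slower even simple answers became.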

Now, the bot uses intelligent response patterns that only query documentation when actually needed:

````yaml path=pql/globals/metadata/promptql-config.hml mode=EXCERPT
<general_information_pattern>
For conceptual questions about PromptQL:
1. Answer directly using natural language knowledge
2. Keep response concise (1-3 sentences + key points as bullets if needed)
3. ALWAYS end with: "Would you like me to show you how to set this up, or would you prefer to see a specific example?"
4. NO data queries unless user requests guide/example in follow-up
</general_information_pattern>
````

The key insight here is that most users asking "what is PromptQL?" don't need the bot to search documentation - they need a direct answer followed by an offer to dive deeper. This change eliminates unnecessary latency for conceptual questions while reserving rigorous validation for responses that include specific commands or examples.

The bot now distinguishes between three clear patterns: general information (answer directly), guide requests (validate CLI commands), and example requests (validate metadata). This targeted approach should significantly improve response times for common questions while maintaining accuracy where it matters most.
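
For a rough sketch of that routing in Python (using a naive keyword classifier; the function names and trigger phrases are illustrative, not taken from the actual configuration), the decision of whether to query documentation might look like this:

```python
def classify_question(question: str) -> str:
    """Route a question to one of the three response patterns."""
    q = question.lower()
    if any(phrase in q for phrase in ("how do i", "set up", "install", "cli")):
        return "guide"      # guide request: validate CLI commands against the docs
    if any(phrase in q for phrase in ("example", "metadata", "snippet")):
        return "example"    # example request: validate metadata against reference pages
    return "general"        # general information: answer directly, no doc query


def needs_documentation_query(question: str) -> bool:
    return classify_question(question) != "general"


# "What is PromptQL?" hits the general pattern: no documentation query,
# just a direct answer plus the follow-up offer from the excerpt above.
assert not needs_documentation_query("What is PromptQL?")
assert needs_documentation_query("How do I set up PromptQL with the CLI?")
```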

github-actions bot commented Aug 1, 2025

🚀 PromptQL Build Complete

Build Version: 4fb7e287e5
Project: pql-docs
PromptQL Playground: Open Playground

Description: PR #26: PQL: Streamline responses and improve validation steps

robertjdominguez merged commit 78f8e95 into main Aug 1, 2025
1 check passed
robertjdominguez deleted the rob/pql/streamline-responses branch August 1, 2025 15:34

github-actions bot commented Aug 1, 2025

✅ PromptQL Build Applied

Build Version: 4fb7e287e5
Status: Successfully applied to production
Applied at: 2025-08-01T15:34:44.099Z

robertjdominguez added a commit that referenced this pull request Aug 10, 2025