docs(update) | design principles / why-dspy / optimizers #8433


Open: wants to merge 10 commits into main

Conversation

@ammirsm (Collaborator) commented Jun 21, 2025

Overview

Note: GitHub Copilot generated these details based on the changes.

This PR significantly expands and improves DSPy's documentation, especially around prompt optimization and the project’s core philosophy. It introduces detailed docs for major optimizers, new community/introductory material, and improved support for mathematical notation in the docs site.


Major Changes

1. Optimizer Documentation Improvements

  • BootstrapFewShotWithRandomSearch

    • Expanded documentation explaining the problem it solves, how it works (with staged breakdown), technical details, strengths, limitations, and best practices.
    • Added practical examples for text classification and math problem solving.
    • Clarified key parameters, cost/performance expectations, and when to use vs. alternatives.
  • MIPROv2

    • Added a high-level explanation of its purpose, how it differs from other optimizers, and its technical approach (including Bayesian optimization).
    • Detailed practical usage examples for zero-shot and full optimization.
    • Clarified expectations, best practices, and how to interpret results.
    • Linked to the relevant research paper for further details.
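The two optimizer workflows documented above can be sketched in a few lines of DSPy. The metric and all parameter values below are illustrative assumptions, not taken from the new docs, and actually running the `compile` calls requires a configured language model plus a training set:

```python
from types import SimpleNamespace

# Hypothetical metric: case-insensitive exact match on an `answer` field
# (an assumption for illustration; the new docs define their own metrics).
def exact_match(example, prediction, trace=None):
    return example.answer.strip().lower() == prediction.answer.strip().lower()

def bootstrap_random_search(trainset):
    """Sketch of BootstrapFewShotWithRandomSearch: bootstrap few-shot demos,
    then randomly search over candidate programs, keeping the best by metric."""
    import dspy  # imported lazily so the metric above works without dspy installed

    program = dspy.ChainOfThought("question -> answer")
    optimizer = dspy.BootstrapFewShotWithRandomSearch(
        metric=exact_match,
        max_bootstrapped_demos=4,    # demos bootstrapped per candidate (illustrative)
        num_candidate_programs=10,   # random-search candidates to evaluate
    )
    return optimizer.compile(program, trainset=trainset)

def mipro_zero_shot(trainset):
    """Sketch of MIPROv2 in 0-shot mode: propose instruction candidates and
    search over them with Bayesian optimization, using no few-shot examples."""
    import dspy

    program = dspy.ChainOfThought("question -> answer")
    optimizer = dspy.MIPROv2(metric=exact_match, auto="light")
    return optimizer.compile(
        program,
        trainset=trainset,
        max_bootstrapped_demos=0,  # 0-shot: optimize instructions only
        max_labeled_demos=0,
    )
```

The new docs cover when each is appropriate; roughly, the random-search bootstrapper is the simpler, few-shot-focused choice, while MIPROv2 optimizes instructions and demos jointly.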

2. New Conceptual & Community Docs

  • design-principles.md

    • Articulates the design philosophy behind DSPy: modularity, abstraction, information flow, polymorphic modules, optimization-as-first-class, and future vision.
    • Explains the rationale for DSPy’s abstractions and how they address common pain points in LLM development.
  • why-dspy.md

    • Explains the motivation for DSPy, common issues with manual prompt engineering, and how DSPy’s approach is different.
    • Describes target users and the value proposition for individuals, researchers, and engineering teams.

3. Docs Site Enhancements

  • mathjax-config.js & MathJax Integration

    • Adds MathJax configuration and scripts to enable LaTeX/Math rendering in documentation.
    • Ensures docs can include mathematical notation seamlessly, supporting new technical sections.
  • mkdocs.yml Navigation & Scripts

    • Adds the new community pages (Why DSPy? and Design Principles) to the navigation.
    • Integrates MathJax and polyfill scripts for improved docs rendering.
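For readers unfamiliar with this setup, a `mathjax-config.js` for MkDocs typically follows the pattern below. This is a minimal sketch based on the configuration MkDocs Material documents for its Arithmatex extension; the actual file in this PR may differ:

```javascript
// Sketch of a MathJax 3 configuration for MkDocs (illustrative, not the PR's file).
// Arithmatex wraps math in elements with class "arithmatex", so MathJax is told
// to process only those and to use the \( \) and \[ \] delimiters it emits.
window.MathJax = {
  tex: {
    inlineMath: [["\\(", "\\)"]],   // inline-math delimiters
    displayMath: [["\\[", "\\]"]],  // display-math delimiters
    processEscapes: true,
    processEnvironments: true
  },
  options: {
    ignoreHtmlClass: ".*|",         // skip everything by default...
    processHtmlClass: "arithmatex"  // ...except Arithmatex-marked elements
  }
};
```

In `mkdocs.yml`, this script would then be listed under `extra_javascript` alongside the polyfill and the MathJax bundle itself, which matches the navigation/scripts changes described above.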

Motivation

  • Make DSPy’s optimizers more accessible and understandable to new and advanced users.
  • Share the project’s philosophy and vision for better LLM programming, helping onboard new community members.
  • Equip users with clear guidance, best practices, and concrete examples for prompt optimization.
  • Prepare the documentation site for more advanced technical content.

Impact

  • End users: Easier onboarding, deeper understanding of optimizers, and better guidance for practical use.
  • Researchers/Contributors: Clearer articulation of DSPy's design, goals, and distinguishing features.
  • Documentation: Improved readability, navigation, and support for mathematical concepts.

Please review for clarity, accuracy, and completeness. Feedback and suggestions welcome!


ammirsm added 9 commits June 20, 2025 19:56
Include MathJax configuration script and necessary polyfills in the MkDocs setup. Update `mkdocs.yml` to load `mathjax-config.js` and relevant polyfills for enhanced math rendering capabilities.
…ription, outlining the optimization process, and clarifying functionality. Include links to related resources and specific usage examples for zero-shot and full optimization scenarios.
…y its functionality, usage, and benefits. Enhance sections on problem-solving, solution methodology, operational stages, practical examples, expectations, strengths, limitations, and best practices. Include technical details on optimization challenges and strategies, as well as comparisons with other optimizers, ensuring comprehensive guidance for users.
@ammirsm ammirsm requested a review from Copilot June 27, 2025 03:18
@ammirsm ammirsm marked this pull request as ready for review June 27, 2025 03:18
@ammirsm ammirsm requested review from okhat and chenmoneygithub June 27, 2025 03:18
@Copilot Copilot AI (Contributor) left a comment

Pull Request Overview

This PR enriches DSPy’s documentation by adding conceptual guides, enhancing optimizer references, and enabling mathematical notation in the docs site.

  • Added new community pages (“Why DSPy?” and “Design Principles”)
  • Integrated MathJax for LaTeX support and updated site navigation
  • Expanded optimizer docs with problem statements, detailed stages, and usage examples

Reviewed Changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| docs/mkdocs.yml | Added nav entries for new community pages and MathJax scripts |
| docs/docs/why-dspy.md | Introduced “Why DSPy?” conceptual overview |
| docs/docs/js/mathjax-config.js | Added MathJax configuration for in-doc LaTeX rendering |
| docs/docs/design-principles.md | Introduced DSPy philosophy and design principles |
| docs/docs/api/optimizers/MIPROv2.md | Expanded MIPROv2 reference with problem context, technical details, and examples |
| docs/docs/api/optimizers/BootstrapFewShotWithRandomSearch.md | Expanded bootstrapping optimizer reference with stages, examples, and best practices |
Comments suppressed due to low confidence (1)

docs/docs/api/optimizers/MIPROv2.md:3

  • The phrase "an prompt optimizer" is grammatically incorrect; it should be "a prompt optimizer."
`MIPROv2` (<u>M</u>ultiprompt <u>I</u>nstruction <u>PR</u>oposal <u>O</u>ptimizer Version 2) is an prompt optimizer capable of optimizing both instructions and few-shot examples jointly. It does this by bootstrapping few-shot example candidates, proposing instructions grounded in different dynamics of the task, and finding an optimized combination of these options using Bayesian Optimization. It can be used for optimizing few-shot examples & instructions jointly, or just instructions for 0-shot optimization.

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>