Adding a Chatbot #211

gadhvirushiraj
Addition of Chatbot to Website [Idea]

This PR adds a documentation-assist chatbot to the website, built with an LLM + RAG pipeline. It is intended to help users, especially those new to programming who find the dense API docs a barrier, navigate the documentation more easily. The chatbot also supports multilingual queries, which can be helpful.

Note: The chatbot is designed for informational support only; it is not intended to generate executable code.
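As a rough illustration of the retrieve-then-generate flow described above (all function and variable names here are hypothetical, not taken from the PR's code):

```python
# Minimal sketch of a RAG query flow. Names are hypothetical; in the PR,
# retrieval would hit the Pinecone index and generation would call Groq.

def build_prompt(question: str, chunks: list[str]) -> str:
    """Assemble an LLM prompt from the user question and retrieved doc chunks."""
    context = "\n\n".join(f"[doc {i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using only the QuTiP documentation excerpts below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def answer(question: str, retrieve, generate, top_k: int = 4) -> str:
    """retrieve(question, top_k) -> list of doc chunks; generate(prompt) -> str."""
    chunks = retrieve(question, top_k)
    return generate(build_prompt(question, chunks))
```

The retriever and generator are injected as plain callables here so the flow itself can be exercised without API keys; the real backend would bind them to the Pinecone and Groq clients.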

DEMO

(Three screenshots of the chatbot demo, taken 2025-06-07)

Note: This implementation requires further testing. Initial tests show occasional inaccuracies. For example, the mcsolve argument is correctly listed as tlist in the API reference, but the chatbot refers to it as "times", because it appears as "times" in a code sample in the user guide. Chunking the docs directly from the PDF may be causing this; the API reference is also available in JSON format, so this should be easy to fix.
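To illustrate the JSON-based fix mentioned above: chunking the structured API reference per function keeps exact signatures (such as mcsolve's tlist) inside each retrieval chunk, instead of mixing them with user-guide prose. The JSON schema below is an assumption for illustration, not the real file's layout:

```python
import json

# Sketch: build one retrieval chunk per documented function from a JSON API
# reference, so the exact parameter names travel with the chunk.
# The entry schema (name/params/doc) is assumed, not taken from the PR.

def chunk_api_json(raw: str) -> list[dict]:
    """One chunk per function, signature included in the chunk text."""
    entries = json.loads(raw)
    chunks = []
    for entry in entries:
        params = ", ".join(entry.get("params", []))
        text = f"{entry['name']}({params}): {entry.get('doc', '')}"
        chunks.append({"id": entry["name"], "text": text})
    return chunks
```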

Setup Instructions (Demo-Ready)

A working demo is available with minimal setup. To try it locally:

  1. API Keys:
    Get free-tier API keys for Groq and Pinecone.
    Add these keys to both infer_api.py and pc_upcert.py.

  2. Pinecone Index Setup:

    • Create a Pinecone index named qutip-shrody (if you change the name, update the references in the code).
    • Set the vector dimension to 384.

  3. Documentation Data: Download the QuTiP documentation in PDF format. I used the QuTiP 5.1.1 docs only.

  4. Install Dependencies: install the requirements under ./backend.

  5. Run the Uploader:

    python pc_upcert.py

    This uploads the vectors to Pinecone; the first run may take ~2–3 minutes and only needs to be done once.

  6. Start the Inference Server:

    python infer_api.py

  7. Host the website as before, or use the new Dockerfile.
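The uploader in step 5 boils down to sending the embedded chunks to Pinecone in fixed-size batches (Pinecone rejects oversized upsert requests). A minimal sketch of that batching logic, with hypothetical names and the actual client call injected so it stays runnable offline:

```python
# Sketch of the uploader's batching (step 5). Names are hypothetical;
# pc_upcert.py would pass each batch to the Pinecone index's upsert() call.

def batched(records: list, size: int = 100):
    """Yield successive fixed-size batches of records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

def upload(records: list, upsert) -> int:
    """upsert(batch) sends one batch; returns the number of batches sent."""
    count = 0
    for batch in batched(records):
        upsert(batch)
        count += 1
    return count
```

In the real script, `upsert` would be bound to `index.upsert(vectors=batch)` on the qutip-shrody index, with each record carrying a 384-dimensional embedding to match the index dimension from step 2.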


Further Improvements

This is a basic integration and can be expanded with:

  • Support for more versions of QuTiP, so answers can be version specific.
  • Coverage of all tutorials, for additional context and better answers.
  • Better embeddings and models; in my tests so far, Llama 3.3 70B has worked best.
  • Auto-updated embeddings for each new release.
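For the version-specific answers mentioned above, one common approach (a sketch, not the PR's implementation) is to tag each vector with the QuTiP version at upload time and filter on it at query time; Pinecone supports MongoDB-style metadata filters such as `{"version": {"$eq": "5.1.1"}}`:

```python
# Sketch of version-filtered retrieval. The helper only builds the keyword
# arguments; the real call would be index.query(**version_query(...)).

def version_query(embedding: list[float], version: str, top_k: int = 4) -> dict:
    """Build kwargs for a Pinecone query restricted to one docs version."""
    return {
        "vector": embedding,
        "top_k": top_k,
        "filter": {"version": {"$eq": version}},
        "include_metadata": True,
    }
```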
