- Develop a Streamlit web application that:
  - Allows selection of previously parsed PDF content or new PDF files.
  - Utilizes Large Language Models (LLMs) such as GPT-4o, via LiteLLM, to:
    - Provide summaries of the document content.
    - Answer user-submitted questions about the document content.
- Integrate FastAPI to handle backend API interactions between Streamlit and the LLM services.
- Implement and manage API integrations with LiteLLM to simplify connections to LLM providers.
- User-friendly interface with the following features:
  - Select the LLM of choice.
  - Select from previously parsed PDF markdown files or upload new PDF documents.
  - Text input for asking specific questions.
  - Buttons to trigger the summarization and Q&A functionalities.
  - Clear display areas for generated summaries and answers.
- REST API endpoints using FastAPI to manage requests from Streamlit:
  - `/select_pdfcontent` → Select previously parsed PDF content.
  - `/upload_pdf` → Accept new PDF content.
  - `/summarize` → Generate summaries.
  - `/ask_question` → Process user questions and return answers.
- Implement appropriate JSON response formats for communication.
- Use Redis streams for communication between FastAPI and other services.
- Manage all LLM API interactions using LiteLLM.
- Document pricing and token usage for each query's input and output tokens.
- Implement error handling and logging for API calls.
- Use Docker Compose and deploy all components to the cloud.
- Well-organized and structured code.
- README.md with detailed setup instructions.
- Diagrammatic representations of architecture and data flows.
- AIUseDisclosure.md → Transparent documentation of all AI tools used.
- Task tracking via GitHub Issues.
- Technical and architectural diagrams.
- Final Codelab with a step-by-step implementation guide.
| Number | Model | Documentation |
|---|---|---|
| 1 | GPT-4o | OpenAI GPT-4o Documentation |
| 2 | Gemini 2.0 Flash | Google Gemini 2.0 Flash Documentation |
| 3 | DeepSeek | DeepSeek LLM Documentation |
| 4 | Claude | Anthropic Claude Documentation |
| 5 | Grok | xAI Grok Documentation |
- Python 3.7+
- Docker & Docker Compose
- Redis Server
- FastAPI & LiteLLM
- Streamlit
- Clone the repository:
git clone [repo_link]
cd [repo_name]
- Install dependencies:
pip install -r requirements.txt
- Start Redis:
redis-server
- Start FastAPI backend:
uvicorn main:app --host 0.0.0.0 --port 8000
- Start Streamlit frontend:
streamlit run app.py
# Build and start services
docker-compose up --build -d
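A compose file wiring the three services together might look like the sketch below; the build paths, image tags, ports, and environment variable names are assumptions to adapt to the actual repository layout:

```yaml
# Hypothetical docker-compose.yml sketch -- service names, build paths,
# and env vars are assumptions, not the project's confirmed config.
services:
  redis:
    image: redis:7
    ports: ["6379:6379"]
  fastapi:
    build: ./backend
    ports: ["8000:8000"]
    environment:
      - REDIS_HOST=redis
    depends_on: [redis]
  streamlit:
    build: ./frontend
    ports: ["8501:8501"]
    environment:
      - BACKEND_URL=http://fastapi:8000
    depends_on: [fastapi]
```

Referencing services by name (`redis`, `fastapi`) works because Compose puts all services on a shared network with DNS-based service discovery.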
- FASTAPI URL = http://34.58.87.68:8080/docs
- STREAMLIT URL = https://llm-interactor-gpt.streamlit.app/
Contributions are welcome! Please open an Issue or submit a Pull Request on GitHub.
WE ATTEST THAT WE HAVEN’T USED ANY OTHER STUDENTS’ WORK IN OUR ASSIGNMENT AND ABIDE BY THE POLICIES LISTED IN THE STUDENT HANDBOOK.
- Member 1: Shushil Girish
- Member 2: Yash Khavnekar
- Member 3: Riya Mate
This project is licensed under the MIT License. See the LICENSE file for details.