An AI-powered academic guidance system that provides personalized advice through specialized advisor personas. Get diverse perspectives on your PhD journey from multiple AI advisors, each bringing unique expertise in methodology, theory, and practical guidance.
- Multiple AI Advisor Personas: Chat with specialized advisors including Methodologist (methodology expert), Theorist (conceptual frameworks), and Pragmatist (action-focused guidance)
- Document Upload Support: Upload PDFs, Word documents, and text files to provide context for your questions
- Multi-LLM Backend: Supports both Gemini API and Ollama models with seamless provider switching
- Real-time Chat Interface: Modern, responsive chat interface with advisor-specific styling
- Context-Aware Responses: Maintains conversation history and document context across the session
- Sequential Advisor Responses: Get input from all advisors in a structured sequence
- Individual Advisor Chat: Have focused conversations with specific advisors
- Technology: React 18 with modern hooks and functional components
- Styling: CSS custom properties with dark/light theme support
- Components: Modular component architecture with reusable UI elements
- State Management: React hooks for local state management
- Icons: Lucide React for consistent iconography
- Framework: FastAPI with automatic API documentation
- LLM Integration: Support for multiple providers (Gemini, Ollama)
- Document Processing: PDF, DOCX, and text file extraction
- Session Management: Global session context with file upload tracking
- CORS Support: Configured for React development server
- Node.js 16+ and npm
- Python 3.8+
- (Optional) Gemini API key for Google's models
- (Optional) Ollama installation for local models
- Clone and navigate to backend directory
cd multi_llm_chatbot_backend
- Install dependencies
pip install -r requirements.txt
- Set up environment variables
Create a .env file:
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-2.0-flash
- Start the backend server
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
The API will be available at http://localhost:8000, with interactive docs at http://localhost:8000/docs
- Navigate to frontend directory
cd phd-advisor-frontend
- Install dependencies
npm install
- Start the development server
npm start
The application will open at http://localhost:3000
Methodologist
- Focus: Research design, validity, sampling, methodological rigor
- Style: Precise, analytical, methodology-focused
- Best for: Research design questions, data collection methods, validity concerns

Theorist
- Focus: Theoretical positioning, epistemological assumptions, conceptual clarity
- Style: Thoughtful, intellectually rigorous, theory-oriented
- Best for: Literature review, theoretical frameworks, conceptual development

Pragmatist
- Focus: Practical next steps, immediate actions, progress over perfection
- Style: Warm, motivational, results-oriented
- Best for: Getting unstuck, prioritizing tasks, actionable advice
- POST /chat-sequential - Get responses from all advisors in sequence
- POST /chat/{persona_id} - Chat with a specific advisor
- POST /reply-to-advisor - Reply to a specific advisor's message
- POST /upload-document - Upload documents (PDF, DOCX, TXT)
- GET /uploaded-files - List uploaded filenames
- GET /context - View current session context
- GET /current-provider - Get current LLM provider info
- POST /switch-provider - Switch between Gemini and Ollama
- GET /debug/personas - Debug persona configurations
- GET / - API health check
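The endpoint paths above can be exercised with any HTTP client. A minimal Python sketch for calling a single advisor follows; the paths and port come from this README, but the JSON body field name ("message") is an assumption — check the schema at /docs before relying on it:

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000"  # default backend address from this README

def build_chat_request(persona_id: str, message: str) -> request.Request:
    """Build a POST request for /chat/{persona_id}.

    The "message" field name is an assumption; the actual request
    schema is documented at http://localhost:8000/docs.
    """
    body = json.dumps({"message": message}).encode("utf-8")
    return request.Request(
        f"{BASE_URL}/chat/{persona_id}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it (requires the backend to be running):
# with request.urlopen(build_chat_request("methodologist", "How should I sample?")) as resp:
#     print(resp.read().decode())
```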
Gemini (Default)
- Requires GEMINI_API_KEY environment variable
- Uses gemini-2.0-flash model by default
- Cloud-based, requires internet connection
Ollama (Local)
- Requires Ollama installation
- Uses llama3.2:1b model by default
- Runs locally, no internet required
- Maximum file size: 10MB per file
- Total session limit: 50MB
- Supported formats: PDF, DOCX, TXT
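These limits can be enforced client-side before sending a file. A minimal pre-flight check, using the limit values stated above (the helper name and signature are ours, not part of the API):

```python
import os

MAX_FILE_BYTES = 10 * 1024 * 1024     # 10MB per file
MAX_SESSION_BYTES = 50 * 1024 * 1024  # 50MB total per session
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".txt"}

def can_upload(path: str, size: int, session_total: int) -> bool:
    """Return True if the file fits the documented upload limits."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False
    if size > MAX_FILE_BYTES:
        return False
    if session_total + size > MAX_SESSION_BYTES:
        return False
    return True
```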
# Required for Gemini
GEMINI_API_KEY=your_api_key
# Optional configurations
GEMINI_MODEL=gemini-2.0-flash
OLLAMA_BASE_URL=http://localhost:11434
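The backend reads these variables at startup. A sketch of how they might be consumed, with the README's stated defaults as fallbacks (the function name and dict shape are illustrative, not the backend's actual code):

```python
import os

def load_llm_config() -> dict:
    """Read the environment variables documented above.

    GEMINI_API_KEY has no default (it is required for Gemini); the
    other defaults mirror the values stated in this README.
    """
    return {
        "gemini_api_key": os.environ.get("GEMINI_API_KEY"),
        "gemini_model": os.environ.get("GEMINI_MODEL", "gemini-2.0-flash"),
        "ollama_base_url": os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
    }
```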
phd-advisor-frontend/
├── public/
├── src/
│ ├── components/ # Reusable UI components
│ ├── data/ # Advisor configurations
│ ├── pages/ # Main page components
│ ├── styles/ # CSS stylesheets
│ └── utils/ # Helper functions
multi_llm_chatbot_backend/
├── app/
│ ├── api/ # API routes
│ ├── core/ # Business logic
│ ├── llm/ # LLM client implementations
│ ├── models/ # Data models
│ ├── tests/ # Test files
│ └── utils/ # Utility functions
- Create the persona in app/api/routes.py with a system prompt
- Add the advisor configuration in src/data/advisors.js in the frontend
- Update advisor styling in src/styles/
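As a rough illustration of the first step: a persona pairs an identifier and display name with a system prompt. The dict shape below is purely hypothetical — the real structure lives in app/api/routes.py, so mirror the existing personas there:

```python
# Hypothetical persona definition; the real structure in
# app/api/routes.py may differ -- check the existing personas there.
NEW_PERSONA = {
    "id": "statistician",  # illustrative persona id, not an existing advisor
    "name": "Statistician",
    "system_prompt": (
        "You are a statistics advisor for PhD students. "
        "Focus on study power, effect sizes, and analysis choices."
    ),
}

def register_persona(registry: dict, persona: dict) -> None:
    """Add a persona to an id-keyed registry."""
    registry[persona["id"]] = persona
```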
Use the test script:
cd multi_llm_chatbot_backend/app/tests
python test_document_upload.py
- Navigate to the home page
- Click "Start Conversation"
- Type your PhD-related question
- Receive responses from all three advisors
- In the chat interface, click the upload button
- Select your PDF, DOCX, or TXT file
- The document content will be added to the conversation context
- Ask questions about your uploaded documents
curl -X POST "http://localhost:8000/switch-provider" \
-H "Content-Type: application/json" \
-d '{"provider": "ollama"}'
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
- Check the API documentation at http://localhost:8000/docs
- Review the debug endpoints for troubleshooting
- Ensure all environment variables are properly configured
- Verify that your LLM provider (Gemini/Ollama) is accessible