A modern, AI-powered portfolio website showcasing full-stack development expertise with an intelligent chatbot assistant
- Responsive Design: Fully responsive across all devices with modern UI/UX
- Dark/Light Mode: Seamless theme switching with system preference detection
- Dynamic Sections: Experience, Education, Projects, Publications, and Contact
- Interactive Components: Smooth animations and hover effects
- SEO Optimized: Meta tags, structured data, and performance optimized
- RAG (Retrieval-Augmented Generation): Intelligent chatbot with portfolio knowledge
- Vector Search: Semantic search through portfolio content using embeddings
- Redis Caching: 97% speed improvement with Upstash Redis for repeated queries
- Real-time Responses: Powered by Google Gemini 2.5 Flash model
- Context-Aware: Understands portfolio context for accurate responses
- Edge Runtime: Optimized API routes for fast response times
- TypeScript: Full type safety throughout the application
- Component-Based: Modular and reusable React components
- Performance Optimized: Image optimization, lazy loading, and efficient bundling
Next.js 15 (App Router) + React 19
├── TypeScript for type safety
├── TailwindCSS for styling
├── Custom UI components
└── Responsive design system
AI-Powered Chatbot
├── Google Gemini 2.5 Flash (LLM)
├── AstraDB Vector Database
├── Google text-embedding-004
├── Upstash Redis (Caching)
└── RAG Implementation
Portfolio Data
├── JSON-based content (experiences, projects, etc.)
├── Vector embeddings (26+ documents)
├── Homepage content integration
└── Social media profiles
- Node.js 18+
- npm/yarn/pnpm
- AstraDB account
- Google AI Studio API key
- Upstash Redis account (optional, has fallback)
- Clone the repository
git clone https://github.com/Rahul-lalwani-learner/rahullalwani.com.git
cd personal-portfolio
- Install dependencies
npm install --legacy-peer-deps
# or
yarn install --legacy-peer-deps
- Environment Setup: Create a `.env.local` file in the root directory:
# Google AI Configuration
GOOGLE_AI_API_KEY=your_google_ai_api_key
# AstraDB Configuration
ASTRA_DB_APPLICATION_TOKEN=your_astra_db_token
ASTRA_DB_API_ENDPOINT=your_astra_db_endpoint
# Upstash Redis (Optional - has memory fallback)
UPSTASH_REDIS_REST_URL=your_upstash_redis_url
UPSTASH_REDIS_REST_TOKEN=your_upstash_redis_token
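Since Redis is optional, a startup check might distinguish the required keys from the optional Redis pair. A minimal sketch, assuming the variable names above (the `checkEnv` helper itself is illustrative, not part of the repo):

```typescript
// Hypothetical startup check: required keys must be set, Redis keys are optional.
const REQUIRED = ["GOOGLE_AI_API_KEY", "ASTRA_DB_APPLICATION_TOKEN", "ASTRA_DB_API_ENDPOINT"];
const OPTIONAL_REDIS = ["UPSTASH_REDIS_REST_URL", "UPSTASH_REDIS_REST_TOKEN"];

export function checkEnv(env: Record<string, string | undefined>): {
  ok: boolean;
  redisEnabled: boolean;
  missing: string[];
} {
  const missing = REQUIRED.filter((k) => !env[k]);
  // Redis caching is only enabled when BOTH of its variables are present;
  // otherwise the system falls back to in-memory caching.
  const redisEnabled = OPTIONAL_REDIS.every((k) => !!env[k]);
  return { ok: missing.length === 0, redisEnabled, missing };
}
```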
- Build Vector Embeddings
npm run build-embeddings
- Start Development Server
npm run dev
Open http://localhost:3000 to view the portfolio.
personal-portfolio/
├── app/ # Next.js App Router
│ ├── api/ # API Routes
│ │ └── chat/ # Chatbot API endpoint
│ ├── components/ # React Components
│ │ ├── ChatBot.tsx # AI Chatbot component
│ │ ├── MainSection.tsx # Hero section
│ │ ├── ExperienceEducation.tsx
│ │ ├── FeaturedSection.tsx
│ │ └── Navbar.tsx # Navigation
│ ├── context/ # React Context
│ ├── ui/ # UI Components & Icons
│ └── page.tsx # Homepage
├── lib/ # Utility Libraries
│ ├── cache.ts # Redis caching system
│ └── vectordb.ts # AstraDB vector operations
├── scripts/ # Build Scripts
│ ├── build-embeddings.ts # Generate vector embeddings
│ └── cache-manager.mjs # Cache management
├── src/ # Data Sources
│ ├── experiences.json # Work experience
│ ├── projects.json # Project portfolio
│ ├── educations.json # Education background
│ ├── publications.json # Research publications
│ └── socials.json # Social media links
├── docs/ # Documentation
│ ├── CACHING.md # Caching implementation
│ ├── ENHANCED-EMBEDDINGS.md # Embedding system
│ └── CACHE-RESOLUTION.md # Troubleshooting
└── public/ # Static Assets
# Development
npm run dev # Start development server with Turbopack
# Production
npm run build # Build for production
npm run start # Start production server
# AI System
npm run build-embeddings # Generate/update vector embeddings
# Maintenance
npm run lint # Run ESLint
npm run cache:clear # Clear Redis cache
- Vector Embeddings: Portfolio content is converted into vector embeddings using Google's text-embedding-004 model
- Semantic Search: User queries are embedded and searched against the vector database
- Context Retrieval: Relevant portfolio information is retrieved based on similarity
- AI Response: Google Gemini 2.5 Flash generates contextual responses
- Caching: Responses are cached in Redis for 97% faster subsequent queries
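The semantic-search step above can be sketched in a few lines. The toy vectors here stand in for real text-embedding-004 outputs, and `retrieveTopK` is an illustrative name, not the repo's actual function:

```typescript
// Sketch of the retrieval step: rank stored chunks by cosine similarity
// to the query embedding, then keep the k best matches.
type Doc = { text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

export function retrieveTopK(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}
```

The retrieved chunks are then injected into the Gemini prompt as context.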
- Work experience and skills
- Project details and technologies
- Education background
- Contact information
- Technical expertise
- Publications and research
- First Query: ~2-3 seconds (includes AI processing)
- Cached Query: ~0.3 seconds (Redis retrieval)
- Vector Search: Sub-second semantic matching
- Cache Hit Rate: ~85% for common queries
- Update the relevant JSON files in the `src/` directory
- Run `npm run build-embeddings` to update the vector database
- Content will be automatically available to the chatbot
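The embedding build presumably flattens each JSON record into a plain-text chunk before embedding. A simplified sketch with illustrative field names (not the repo's actual schema):

```typescript
// Illustrative: flatten a project record into one text chunk for embedding.
// Field names are examples only, not the actual structure of projects.json.
type Project = { title: string; description: string; technologies: string[] };

export function toChunk(p: Project): string {
  return `Project: ${p.title}. ${p.description} Technologies: ${p.technologies.join(", ")}.`;
}
```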
- TailwindCSS: Modify `tailwind.config.js` for design system changes
- Dark Mode: Configured with the `class` strategy for manual switching
- Components: Individual component styling in respective files
- Model: Switch between Gemini models in `app/api/chat/route.ts`
- Embeddings: Configure embedding models in `lib/vectordb.ts`
- Caching: Adjust cache TTL in `lib/cache.ts`
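The in-memory fallback with a TTL can be sketched as below; `MemoryCache` is an illustrative class, not the actual code in `lib/cache.ts`:

```typescript
// Minimal in-memory TTL cache, similar in spirit to the memory fallback
// layer: entries expire ttlMs milliseconds after being set.
export class MemoryCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }
}
```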
| Variable | Description | Required |
|---|---|---|
| `GOOGLE_AI_API_KEY` | Google AI Studio API key | ✅ |
| `ASTRA_DB_APPLICATION_TOKEN` | AstraDB application token | ✅ |
| `ASTRA_DB_API_ENDPOINT` | AstraDB endpoint URL | ✅ |
| `UPSTASH_REDIS_REST_URL` | Upstash Redis URL | |
| `UPSTASH_REDIS_REST_TOKEN` | Upstash Redis token | |

*Redis is optional; the system falls back to in-memory caching.*
- Edge Runtime: API routes optimized for edge deployment
- Image Optimization: Next.js automatic image optimization
- Code Splitting: Automatic code splitting for optimal loading
- Caching Strategy: Multi-level caching (Redis + Memory + Browser)
- Bundle Optimization: Tree shaking and dead code elimination
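Opting a route into the Edge Runtime is a one-line declaration in the Next.js App Router. A sketch of the shape `app/api/chat/route.ts` might take (the handler body here is a placeholder, not the repo's actual RAG logic):

```typescript
// app/api/chat/route.ts (sketch): opt the chat API into the Edge Runtime.
export const runtime = "edge";

export async function POST(req: Request): Promise<Response> {
  const { message } = await req.json();
  // ...vector search, context injection, and the Gemini call would go here...
  return Response.json({ reply: `echo: ${message}` });
}
```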
- Performance: 95+
- Accessibility: 100
- Best Practices: 95+
- SEO: 100
- Connect your GitHub repository to Vercel
- Add environment variables in Vercel dashboard
- Deploy automatically on every push
- Netlify: Supports Next.js with edge functions
- Railway: Full-stack deployment with database support
- Self-hosted: Docker container with Node.js runtime
The chatbot uses a sophisticated RAG (Retrieval-Augmented Generation) system:
- Document Processing: Portfolio content is chunked and processed
- Vector Generation: Text chunks are converted to embeddings
- Similarity Search: User queries find relevant content via cosine similarity
- Context Injection: Retrieved content is injected into AI prompts
- Response Generation: Gemini generates contextually accurate responses
Multi-tier caching for optimal performance:
Query → Redis Cache → Memory Cache → Vector Search → AI Generation
        (fast path)    (fallback)     (semantic)     (generation)
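The tiered lookup above can be sketched as a single fall-through function; `redisGet` and `generate` are stand-ins for the real Upstash and Gemini calls, not the repo's actual API:

```typescript
// Sketch of the tiered lookup: try Redis, then the in-process memory map,
// then fall through to vector search + AI generation.
type Fetcher = (key: string) => Promise<string | null>;

export async function getCachedAnswer(
  key: string,
  redisGet: Fetcher,                          // tier 1: Upstash Redis (optional)
  memory: Map<string, string>,                // tier 2: in-memory fallback
  generate: (key: string) => Promise<string>  // tier 3: RAG + Gemini
): Promise<string> {
  const fromRedis = await redisGet(key).catch(() => null); // tolerate missing Redis
  if (fromRedis !== null) return fromRedis;
  const fromMemory = memory.get(key);
  if (fromMemory !== undefined) return fromMemory;
  const answer = await generate(key);
  memory.set(key, answer); // populate the fallback tier for next time
  return answer;
}
```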
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is open source and available under the MIT License.
- Next.js Team for the amazing framework
- Google AI for the Gemini models and embeddings
- DataStax for AstraDB vector database
- Upstash for serverless Redis
- Vercel for deployment platform
Rahul Lalwani
- 🌐 Portfolio: rahullalwani.com
- 💼 LinkedIn: rahul-lalwani-learner
- 🐙 GitHub: Rahul-lalwani-learner
- 📧 Email: rahul.lalwani.learner@gmail.com
Built with ❤️ by Rahul Lalwani | Powered by AI & Modern Web Technologies