AI Helper is a web application that uses computer vision and AI to analyze questions in real time. It captures images from the device camera, performs OCR (Optical Character Recognition) to extract the text, and then sends the extracted question to AI models for an answer.
- Real-time image capture using device camera
- OCR text extraction from captured images
- Question analysis using multiple AI models via OpenRouter
- Responsive design for both desktop and mobile devices
- Dark mode support
- Settings configuration for API keys and model selection
- Progressive Web App (PWA) support for offline access and native app-like experience
- OpenGraph metadata for rich social media sharing
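The capture → OCR → answer flow described above can be sketched as follows. The function names are illustrative placeholders, not the app's actual API; the browser-specific parts (camera capture, OCR engine) are injected as dependencies so they stay swappable:

```typescript
// Illustrative sketch of the capture → OCR → answer pipeline.
// `capture`, `ocr`, and `ask` are placeholders, not the app's real API.

/** Clean up raw OCR output before it is sent to a model. */
function normalizeOcrText(raw: string): string {
  return raw.replace(/\s+/g, " ").trim(); // collapse whitespace and newlines
}

/** Run one round: capture a frame, OCR it, ask a model about the result. */
async function answerFromCamera(
  capture: () => Promise<Blob>,               // e.g. a canvas snapshot of the video feed
  ocr: (image: Blob) => Promise<string>,      // e.g. an OCR engine such as tesseract.js
  ask: (question: string) => Promise<string>, // e.g. an OpenRouter chat call
): Promise<string> {
  const frame = await capture();
  const question = normalizeOcrText(await ocr(frame));
  if (!question) throw new Error("No text detected in the captured image");
  return ask(question);
}
```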
To use the AI features, follow these steps:
- Visit the OpenRouter Keys page
- Create an account if you haven't already
- Click "Create Key" and give it any name
- Copy the generated API key value
- Paste the key into the application's settings panel to start using AI features
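With a key in hand, a request to OpenRouter looks roughly like this. The endpoint and payload follow OpenRouter's OpenAI-compatible chat API; the model name and prompt are just examples, not the app's defaults:

```typescript
// Minimal OpenRouter chat completion call; model name and prompt are examples.
const OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions";

/** Build the fetch options for an OpenRouter chat completion request. */
function buildOpenRouterRequest(apiKey: string, model: string, question: string) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // the key from the settings panel
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: question }],
    }),
  };
}

async function askOpenRouter(apiKey: string, model: string, question: string): Promise<string> {
  const res = await fetch(OPENROUTER_URL, buildOpenRouterRequest(apiKey, model, question));
  if (!res.ok) throw new Error(`OpenRouter error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```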
**Desktop (Chrome, Edge, and other Chromium browsers):**
- Open the website in your browser
- Look for the install icon (↓) in the address bar
- Click "Install" when prompted
- The app will install and create a desktop shortcut
**Android (Chrome):**
- Open the website in Chrome
- Tap the three-dot menu (⋮)
- Select "Add to Home screen"
- Follow the installation prompts
**iOS (Safari):**
- Open the website in Safari
- Tap the share button (□↑)
- Scroll down and tap "Add to Home Screen"
- Tap "Add" to confirm
The PWA will now behave like a native app with its own window/instance and can be accessed from your device's app launcher or home screen.
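The install flows above depend on the app shipping a web app manifest. In Next.js 15 with the App Router, one conventional place for it is an `app/manifest.ts` file; the values below are illustrative, not the project's actual configuration:

```typescript
// app/manifest.ts — example values only; the real name, colors, and icon
// paths belong to the project, not this sketch.
import type { MetadataRoute } from "next";

export default function manifest(): MetadataRoute.Manifest {
  return {
    name: "AI Helper",
    short_name: "AI Helper",
    display: "standalone", // gives the PWA its own window, as described above
    start_url: "/",
    theme_color: "#000000",
    background_color: "#ffffff",
    icons: [
      { src: "/icon-192.png", sizes: "192x192", type: "image/png" },
      { src: "/icon-512.png", sizes: "512x512", type: "image/png" },
    ],
  };
}
```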
- Framework: Next.js 15 with App Router
- UI: React 19, Tailwind CSS, shadcn/ui components
- State Management: Zustand
- Animation: Framer Motion
- API Integration: OpenRouter for AI model access
- Styling: Tailwind CSS with custom components
- SEO & Sharing: OpenGraph protocol implementation
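As a sketch of how the Zustand settings store might look: the field names below are assumptions, and a tiny stand-in for Zustand's `create` (with `getState`/`setState`/`subscribe`) is included so the snippet runs without the library.

```typescript
// Self-contained sketch: `createStore` mimics the core of Zustand's store
// API so this snippet runs without the library installed.
type SetFn<S> = (partial: Partial<S>) => void;

function createStore<S>(init: (set: SetFn<S>) => S) {
  let state: S;
  const listeners = new Set<(s: S) => void>();
  const set: SetFn<S> = (partial) => {
    state = { ...state, ...partial };      // shallow-merge, like Zustand's set
    listeners.forEach((l) => l(state));
  };
  state = init(set);
  return {
    getState: () => state,
    setState: set,
    subscribe: (l: (s: S) => void) => (listeners.add(l), () => listeners.delete(l)),
  };
}

// Hypothetical shape of the app's settings: API key and selected model.
interface Settings {
  apiKey: string;
  model: string;
  setApiKey: (apiKey: string) => void;
  setModel: (model: string) => void;
}

const settingsStore = createStore<Settings>((set) => ({
  apiKey: "",
  model: "openai/gpt-4o-mini", // example default, not the app's actual default
  setApiKey: (apiKey) => set({ apiKey }),
  setModel: (model) => set({ model }),
}));
```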
First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```
Open http://localhost:3000 with your browser to see the result.
This project is licensed under the MIT License - see the LICENSE file for details.