A modern, full-stack weather dashboard application built with Laravel 12 backend and React 18 frontend. Features real-time weather data, AI-powered weather summaries, and secure authentication.
## Features

- 🌐 Real-time Weather Data - Get current weather for any city worldwide
- 🤖 AI Weather Summaries - Intelligent weather insights powered by local LLM
- 🔐 Secure Authentication - Laravel Sanctum-based login/register system
- 🎨 Modern UI/UX - Beautiful, responsive design with weather-themed backgrounds
- ⚡ Fast Performance - Built with Vite for lightning-fast development
- 📱 Mobile Responsive - Works perfectly on all devices
## Project Structure

- `backend/` - Laravel 12+ API backend
- `frontend/` - React 18+ frontend (Vite + TypeScript)
## Backend Setup

1. Install dependencies:

   ```bash
   cd backend
   composer install
   ```

2. Copy `.env.example` to `.env` and set your environment variables:

   ```bash
   cp .env.example .env
   ```

3. Generate the application key:

   ```bash
   php artisan key:generate
   ```

4. Run database migrations:

   ```bash
   php artisan migrate
   ```

5. Start the server:

   ```bash
   php artisan serve
   ```
## Useful Artisan Commands

- `php artisan serve`: Start the local development server
- `php artisan migrate`: Run database migrations
- `php artisan migrate:rollback`: Roll back the last database migration
- `php artisan db:seed`: Seed the database with test data
- `php artisan make:model ModelName`: Create a new Eloquent model
- `php artisan make:controller ControllerName`: Create a new controller
- `php artisan make:migration create_table_name`: Create a new migration file
- `php artisan make:seeder SeederName`: Create a new seeder
- `php artisan route:list`: List all registered routes
- `php artisan cache:clear`: Clear the application cache
- `php artisan config:cache`: Cache the configuration files
- `php artisan queue:work`: Process jobs on the queue
- `php artisan storage:link`: Create a symbolic link from `public/storage` to `storage/app/public`
- `php artisan tinker`: Interact with your application from the command line
## Frontend Setup

1. Install dependencies:

   ```bash
   cd frontend
   npm install   # or: yarn install
   ```

2. Start the development server:

   ```bash
   npm run dev   # or: yarn dev
   ```
- The backend API will be available at `http://localhost:8000` by default.
- The frontend will be available at `http://localhost:3000` by default.
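With the two servers running on these ports, the frontend fetches weather data from the backend API. As a minimal sketch, a helper like the one below could build the request URL; note that the `/api/weather` path and the `city` query parameter are illustrative assumptions, not routes confirmed by this README:

```typescript
// Hypothetical helper: build the URL for a backend weather request.
// The "/api/weather" path and "city" parameter are assumptions --
// adjust them to match the routes actually registered in the backend.
function buildWeatherUrl(base: string, city: string): string {
  const url = new URL("/api/weather", base);
  // searchParams handles percent-encoding of city names for us
  url.searchParams.set("city", city);
  return url.toString();
}

console.log(buildWeatherUrl("http://localhost:8000", "London"));
// → http://localhost:8000/api/weather?city=London
```

Using the `URL` API rather than string concatenation keeps city names with spaces or accents safely encoded.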
## Security Notes

- Never commit your `.env` files or any secrets to version control.
- For production, set up proper environment variables and secure your keys.
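As a sketch of what belongs in the (untracked) backend `.env`, the fragment below shows typical local-development values. The `OPENAI_*` entries come from the LLM setup in this README; `WEATHER_API_KEY` is a hypothetical name for the weather provider's key, used here only to illustrate the kind of secret that must stay out of version control:

```env
APP_ENV=local
# APP_KEY is filled in by `php artisan key:generate`
APP_KEY=
APP_URL=http://localhost:8000

# Hypothetical key for the weather data provider - never commit real values
WEATHER_API_KEY=your-api-key-here

# Local LLM (OpenAI-compatible) settings used for AI summaries
OPENAI_BASE=http://localhost:11434/v1
OPENAI_MODEL=tinyllama
```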
## License

MIT
## Local LLM Setup

This project uses a local Large Language Model (LLM) for AI weather summaries.
### Prerequisites

- Ollama (or a similar local LLM runner)
- The model you want to use (e.g., `tinyllama`, `mistral`), downloaded locally
### Setup Steps

1. Install Ollama: follow the instructions at https://ollama.com/download

2. Pull the model:

   ```bash
   ollama pull tinyllama
   # or: ollama pull mistral
   ```

3. Start the Ollama server:

   ```bash
   ollama serve
   ```

4. Configure your `.env`:

   ```env
   OPENAI_BASE=http://localhost:11434/v1
   OPENAI_MODEL=tinyllama
   ```

5. Restart your Laravel backend. The backend will now use your local LLM for AI summaries.
- To use a different model, update `OPENAI_MODEL` in your `.env` and pull the model with Ollama.
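`OPENAI_BASE` points at Ollama's OpenAI-compatible API, so the backend sends standard chat-completion requests to `{OPENAI_BASE}/chat/completions`. As a rough sketch of that request shape, the Python helper below builds such a payload; the prompt wording and the `build_summary_request` function are illustrative, not the project's actual code:

```python
import json

def build_summary_request(model: str, weather: dict) -> dict:
    """Build an OpenAI-style chat-completion payload for a weather summary.

    This mirrors the request body accepted at OPENAI_BASE/chat/completions
    (e.g. http://localhost:11434/v1/chat/completions when Ollama is running).
    """
    # Hypothetical prompt; the real backend may phrase this differently.
    prompt = (
        f"Summarize the weather in {weather['city']}: "
        f"{weather['temp_c']} degrees C, {weather['condition']}."
    )
    return {
        "model": model,  # e.g. "tinyllama" or "mistral", from OPENAI_MODEL
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_summary_request(
    "tinyllama", {"city": "London", "temp_c": 14, "condition": "light rain"}
)
print(json.dumps(payload, indent=2))
```

Because the endpoint speaks the OpenAI wire format, switching between a local Ollama model and a hosted provider is mostly a matter of changing `OPENAI_BASE` and `OPENAI_MODEL`.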