A free, open-source web application that analyzes websites against Google Search Console best practices, providing comprehensive SEO audits and actionable recommendations.
- Technical SEO Analysis: Robots.txt, sitemap validation, crawlability checks
- Core Web Vitals: LCP, INP, CLS measurements and optimization tips
- Mobile-Friendliness: Responsive design validation and mobile UX analysis
- Performance Metrics: Page speed analysis and performance recommendations
- Security Checks: HTTPS validation, SSL certificate verification
- Multi-tenant Architecture: Support for agencies managing multiple clients
- Caching System: Fast repeated analyses with intelligent cache management
- Backend: Flask (Python)
- Frontend: React.js
- Database: SQLite (easily upgradeable to PostgreSQL)
- Analysis Engines: Custom Python modules for each analysis aspect
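Each aspect of the analysis is implemented as its own engine module under `src/engines/`. As a rough sketch of that pattern (class and method names here are hypothetical, not the project's actual interface):

```python
# Hypothetical sketch of the per-aspect engine pattern; the real interfaces
# live in src/engines/ and may differ.
from abc import ABC, abstractmethod


class AnalysisEngine(ABC):
    """One engine per analysis aspect (SEO, performance, security, ...)."""

    name: str

    @abstractmethod
    def analyze(self, url: str, html: str) -> dict:
        """Return findings for one URL, e.g. {"score": 80, "issues": [...]}."""


class HttpsEngine(AnalysisEngine):
    """Toy security engine: flags pages not served over HTTPS."""

    name = "security"

    def analyze(self, url: str, html: str) -> dict:
        issues = []
        if not url.startswith("https://"):
            issues.append("Page is not served over HTTPS")
        return {"score": 100 - 50 * len(issues), "issues": issues}
```

Keeping every engine behind a common interface makes it straightforward to add new checks without touching the API layer.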
- Python 3.8+
- Node.js 14+
- Git
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/url-scanner-gsc.git
  cd url-scanner-gsc
  ```

- Set up a Python virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install Python dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up environment variables:

  ```bash
  cp .env.example .env
  # Edit .env and set your SECRET_KEY
  ```

- Install frontend dependencies:

  ```bash
  cd frontend
  npm install
  cd ..
  ```

- Initialize the database:

  ```bash
  python create_test_client.py
  ```

- Run the application:

  ```bash
  python main.py
  ```
The application will be available at http://localhost:5001
The application uses environment variables for configuration. Copy `.env.example` to `.env` and update the values:

```bash
# Flask Configuration
FLASK_ENV=development
SECRET_KEY=your-secret-key-here  # IMPORTANT: Change this to a random secret key!

# Database Configuration
DATABASE_URL=sqlite:///database/app.db

# API Keys (Optional - app works without these)
FIRECRAWL_API_KEY=your-firecrawl-api-key  # Optional
APIFY_API_KEY=your-apify-api-key          # Optional fallback
PAGESPEED_API_KEY=your-pagespeed-api-key  # Optional

# Server Configuration
HOST=0.0.0.0
PORT=5001
DEBUG=True
```

Important: Always set a strong `SECRET_KEY` for production deployments!
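For reference, here is a minimal sketch of how a Flask app typically consumes these variables, assuming `python-dotenv` is installed; the project's actual startup code in `main.py` may differ:

```python
# Minimal sketch of env-based configuration, assuming python-dotenv
# (pip install python-dotenv). The project's actual startup code in
# main.py may differ.
import os

from dotenv import load_dotenv
from flask import Flask

load_dotenv()  # reads .env from the working directory

app = Flask(__name__)
app.config["SECRET_KEY"] = os.environ["SECRET_KEY"]  # fail fast if unset
app.config["SQLALCHEMY_DATABASE_URI"] = os.getenv(
    "DATABASE_URL", "sqlite:///database/app.db"
)

if __name__ == "__main__":
    app.run(
        host=os.getenv("HOST", "0.0.0.0"),
        port=int(os.getenv("PORT", "5001")),
        debug=os.getenv("DEBUG", "False").lower() == "true",
    )
```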
For enhanced crawling capabilities, you can add:
- Firecrawl API - Primary web crawler
- Apify - Fallback crawler
The application works without these APIs but with limited crawling features.
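To illustrate that fallback behavior, one plausible wiring is sketched below; the `crawl_with_*` helpers are hypothetical stand-ins for the real service wrappers:

```python
# Illustrative crawler fallback: Firecrawl first, then Apify, then a plain
# HTTP fetch. The crawl_with_* helpers are hypothetical placeholders; only
# `requests` is a real dependency here.
import os

import requests


def crawl_with_firecrawl(url: str) -> str:
    """Placeholder for the Firecrawl API wrapper."""
    raise NotImplementedError


def crawl_with_apify(url: str) -> str:
    """Placeholder for the Apify fallback wrapper."""
    raise NotImplementedError


def fetch_page(url: str) -> str:
    """Fetch page HTML, degrading gracefully when no crawler key is set."""
    if os.getenv("FIRECRAWL_API_KEY"):
        return crawl_with_firecrawl(url)
    if os.getenv("APIFY_API_KEY"):
        return crawl_with_apify(url)
    # No API keys: a basic fetch without JS rendering - the "limited" mode.
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.text
```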
- Single URL Analysis: Enter a URL in the dashboard to analyze it
- Batch Analysis: Upload multiple URLs for bulk processing
- View Results: Get detailed reports with:
  - Compliance scores
  - Issue identification
  - Actionable recommendations
  - Priority rankings
- `POST /api/analysis/analyze` - Analyze a single URL
- `GET /api/analysis/results/{url}` - Get analysis results
- `GET /api/clients` - List all clients (multi-tenant)
- `POST /api/clients` - Create new client
```bash
curl -X POST http://localhost:5001/api/analysis/analyze \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```
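The same call from Python looks like this; the response fields shown are illustrative, based on the report contents listed under Usage (scores, issues, recommendations, priorities), so inspect the actual payload for the exact schema:

```python
# Same request as the curl example above. Field names in the response
# handling are hypothetical; check the real payload for the exact schema.
import requests

resp = requests.post(
    "http://localhost:5001/api/analysis/analyze",
    json={"url": "https://example.com"},
    timeout=120,
)
resp.raise_for_status()
report = resp.json()

print(report.get("compliance_score"))   # hypothetical field name
for issue in report.get("issues", []):  # hypothetical field name
    print(issue.get("priority"), issue.get("recommendation"))
```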
```
url-scanner-gsc/
├── src/
│   ├── models/          # Database models
│   ├── routes/          # API endpoints
│   ├── services/        # Business logic
│   └── engines/         # Analysis engines
├── static/              # Frontend build
├── database/            # SQLite database
├── tests/               # Test suite
├── main.py              # Application entry point
└── requirements.txt     # Python dependencies
```
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes
- Run tests with `python -m pytest` (a minimal example test follows this list)
- Submit a pull request
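As a starting point for new tests, here is a minimal sketch using Flask's test client. It assumes the `app` object is importable from `main.py` and that the endpoint rejects requests without a URL; adjust imports and expected status codes to the real layout:

```python
# tests/test_analyze.py - hypothetical example; import paths and expected
# status codes depend on the actual application.
import pytest

from main import app


@pytest.fixture
def client():
    app.config["TESTING"] = True
    with app.test_client() as test_client:
        yield test_client


def test_analyze_rejects_missing_url(client):
    response = client.post("/api/analysis/analyze", json={})
    assert response.status_code == 400  # assumes the endpoint validates input
```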
- Google Search Console API integration
- Enhanced performance monitoring
- Competitor analysis features
- Scheduled monitoring
- Email alerts for issues
- Export to PDF reports
- WordPress/CMS plugins
This project is licensed under the MIT License - see LICENSE file for details.
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Wiki
- Google Search Console documentation
- Core Web Vitals guidelines
- Open source community
Note: This tool is not affiliated with Google. It implements publicly documented best practices for search engine optimization.