A FastAPI application that integrates with monitoring and infrastructure management tools
ServiceMesh integrates with multiple monitoring and infrastructure management tools:
- Uptime Kuma for monitoring - using uptime-kuma-api by Lucas Held
- Prometheus for metrics collection - using prometheus-api-client by Anand Sanmukhani
- Grafana for visualization - using grafana-client by Panodata
- Proxmox for infrastructure management - using proxmoxer by Proxmoxer Team
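For orientation, this is roughly how those four client libraries are initialized on their own; the hosts and credentials below are placeholders, and ServiceMesh's actual wiring (which loads credentials from the database) will differ:

```python
# A sketch of the upstream clients in isolation -- not ServiceMesh's
# integration code. All hosts and credentials are placeholders.
from uptime_kuma_api import UptimeKumaApi
from prometheus_api_client import PrometheusConnect
from grafana_client import GrafanaApi
from proxmoxer import ProxmoxAPI

kuma = UptimeKumaApi("http://uptime-kuma:3001")
kuma.login("admin", "secret")

prom = PrometheusConnect(url="http://prometheus:9090", disable_ssl=True)

grafana = GrafanaApi.from_url("http://grafana:3000", credential="api-token")

proxmox = ProxmoxAPI("proxmox.example", user="root@pam",
                     password="secret", verify_ssl=False)
```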
Features:

- FastAPI backend with high-performance, asynchronous API design
- Integration with various monitoring and infrastructure management tools
- Comprehensive error handling and logging
- Health check endpoints for system monitoring
- Modular architecture with clear separation of concerns
- Documentation with OpenAPI/Swagger
- Containerization and orchestration support
- CI/CD pipeline configuration
- Database integration for storing credentials and configurations
- API for managing service credentials and monitors
Requirements:

- Python 3.10+
- FastAPI
- Pydantic
- SQLAlchemy
- MySQL
- uptime-kuma-api
- prometheus-api-client
- grafana-client
- proxmoxer
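If you prefer to work outside Docker, the dependencies can be installed directly. A minimal sketch, assuming the names above are the published PyPI package names (a MySQL driver such as PyMySQL would also be needed; that choice is an assumption):

```bash
pip install fastapi "uvicorn[standard]" pydantic sqlalchemy \
    uptime-kuma-api prometheus-api-client grafana-client proxmoxer
```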
The easiest way to get started with local development is using Docker and Docker Compose. This approach ensures consistent development environments and minimizes setup issues.
Copy the example environment file and modify it with your settings:
```bash
cp .env.example .env
```
Edit the `.env` file to set your API keys and service credentials.
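A sketch of what the edited file might contain. The `DB_*` names appear in the MySQL command later in this README; the service-credential keys are hypothetical placeholders (check `.env.example` for the real ones):

```dotenv
# Database settings (DB_* names taken from the MySQL command below)
DB_USER=servicemesh
DB_PASSWORD=changeme
DB_NAME=servicemesh

# Service credentials (hypothetical keys -- see .env.example for the real ones)
UPTIME_KUMA_URL=http://uptime-kuma:3001
GRAFANA_API_KEY=your-grafana-api-key
```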
Then start the stack:

```bash
docker compose up -d
```
This will:
- Build the API container
- Start a MySQL database container
- Initialize the database
- Start the FastAPI application with hot-reload enabled
- The API will be available at http://localhost:6000
- API Documentation:
  - Swagger UI: http://localhost:6000/docs
  - ReDoc: http://localhost:6000/redoc
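For reference, a compose file along these lines would produce the behavior described above. The `api` and `db` service names are taken from the `docker compose exec` commands used throughout this README; the module path, image, and volume layout are assumptions, not the project's actual `docker-compose.yml`:

```yaml
# A sketch only -- not the project's actual compose file.
services:
  api:
    build: .
    ports:
      - "6000:5000"            # host 6000 -> container 5000
    env_file: .env
    volumes:
      - .:/app                 # mount source so --reload picks up changes
    command: uvicorn app.main:app --host 0.0.0.0 --port 5000 --reload
    depends_on:
      - db
  db:
    image: mysql:8
    environment:
      MYSQL_DATABASE: ${DB_NAME}
      MYSQL_USER: ${DB_USER}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
    volumes:
      - db_data:/var/lib/mysql

volumes:
  db_data:
```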
The development setup uses `uvicorn` with the `--reload` flag, so any changes to your code will automatically reload the server.
The container automatically runs migrations at startup, but if you need to run them manually:
```bash
docker compose exec api alembic upgrade head
```
If you're setting up the project from scratch or need to initialize Alembic:
```bash
docker compose exec api alembic init migrations
```
This creates the initial migration environment with a `migrations` folder and an `alembic.ini` file. Then:
- Update the `alembic.ini` file to point to your database URL, or use environment variables.
- Edit the `migrations/env.py` file to use the SQLAlchemy models from your application (see the sketch below).
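For the `env.py` step, the usual change is to point Alembic's `target_metadata` at your declarative `Base`. A sketch, assuming the models are reachable via `app.db.base` (adjust the import to match the project):

```python
# migrations/env.py (excerpt) -- the import path is an assumption.
from app.db.base import Base

# Alembic compares this metadata against the live database
# when autogenerating migrations.
target_metadata = Base.metadata
```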
Next, generate the initial migration from your models:

```bash
docker compose exec api alembic revision --autogenerate -m "fresh migration"
```

This command scans your SQLAlchemy models and generates migration scripts to create the corresponding database schema.
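For orientation, the generated file looks roughly like this; the revision id, table, and columns below are illustrative only, since the real ones come from your models:

```python
# migrations/versions/3f1c2b7a9d10_fresh_migration.py -- illustrative only.
from alembic import op
import sqlalchemy as sa

revision = "3f1c2b7a9d10"   # generated identifier (illustrative)
down_revision = None        # the first migration has no parent

def upgrade() -> None:
    op.create_table(
        "service_credentials",  # hypothetical table name
        sa.Column("id", sa.Integer(), primary_key=True),
        sa.Column("service_type", sa.String(length=50), nullable=False),
        sa.Column("api_key", sa.String(length=255), nullable=True),
    )

def downgrade() -> None:
    op.drop_table("service_credentials")
```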
Then apply all pending migrations to bring your database schema up to date:

```bash
docker compose exec api alembic upgrade head
```
Other useful Alembic commands:

- Create a new migration manually: `docker compose exec api alembic revision -m "description of changes"`
- Generate a migration based on model changes: `docker compose exec api alembic revision --autogenerate -m "description of changes"`
- Upgrade to a specific version: `docker compose exec api alembic upgrade <revision>`
- Downgrade to a previous version: `docker compose exec api alembic downgrade <revision>`
- View the migration history: `docker compose exec api alembic history`
- View the current database version: `docker compose exec api alembic current`
The application is configured to connect to either an external MySQL database or another MySQL container. The connection details are specified in your `.env` file.
To connect directly to the MySQL database:
```bash
docker compose exec db mysql -u <DB_USER> -p<DB_PASSWORD> <DB_NAME>
```
To stop the containers:

```bash
docker compose down
```
To completely remove all data (including database volumes):
```bash
docker compose down -v
```
Once the application is running, you can access the OpenAPI documentation at:
- Swagger UI: http://localhost:6000/docs
- ReDoc: http://localhost:6000/redoc
- `GET /api/v1/health` - Check the health of the API

Service Credentials:

- `GET /api/v1/credentials` - List all service credentials
- `GET /api/v1/credentials/{id}` - Get a specific credential by ID
- `GET /api/v1/credentials/service/{service_type}` - Get credentials by service type
- `POST /api/v1/credentials` - Create a new service credential
- `PATCH /api/v1/credentials/{id}` - Update a service credential
- `DELETE /api/v1/credentials/{id}` - Delete a service credential
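For orientation, the health endpoint is about the smallest possible FastAPI route; a sketch, with the router wiring and status payload assumed rather than taken from the project:

```python
from fastapi import APIRouter

router = APIRouter(prefix="/api/v1")

@router.get("/health")
async def health_check() -> dict:
    # A real implementation would likely also probe the database
    # and the downstream monitoring integrations.
    return {"status": "ok"}
```

Creating a credential could then look like this (the payload fields are hypothetical; the real schema lives in the project's Pydantic models):

```bash
curl -X POST http://localhost:6000/api/v1/credentials \
  -H "Content-Type: application/json" \
  -d '{"service_type": "grafana", "url": "http://grafana:3000", "api_key": "secret"}'
```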
The application includes deployment configurations for both Docker and Kubernetes environments.
To deploy the application using Docker:
```bash
# Build the Docker image
docker build -t service-mesh:latest .

# Run the container
docker run -d -p 6000:5000 --env-file .env --name monitoring-api service-mesh:latest
```
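The image above implies a Dockerfile roughly along these lines; the base image and `app.main:app` module path are assumptions (container port 5000 follows the `-p 6000:5000` mapping):

```dockerfile
# A sketch of a plausible Dockerfile -- not necessarily the project's own.
FROM python:3.10-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "5000"]
```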
Kubernetes manifests are provided in the `kubernetes/` directory:
```bash
# Apply the ConfigMap
kubectl apply -f kubernetes/configmap.yaml

# Apply the Deployment
kubectl apply -f kubernetes/deployment.yaml

# Apply the Service
kubectl apply -f kubernetes/service.yaml
```
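For reference, the deployment manifest plausibly looks something like this; the names, replica count, and port are assumptions (the container port again follows the `-p 6000:5000` mapping above), not the repository's actual manifest:

```yaml
# kubernetes/deployment.yaml -- a sketch only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-mesh
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service-mesh
  template:
    metadata:
      labels:
        app: service-mesh
    spec:
      containers:
        - name: api
          image: service-mesh:latest
          ports:
            - containerPort: 5000
          envFrom:
            - configMapRef:
                name: service-mesh-config   # assumed to match configmap.yaml
```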
Then verify that everything is running:

```bash
kubectl get deployments
kubectl get pods
kubectl get services
```
This project includes GitHub Actions workflows for both CI and CD:
- CI Pipeline: Automatically runs on pull requests to test the codebase
- CD Pipeline: Deploys to your Kubernetes cluster when changes are merged to the main branch
| Workflow | Description | Trigger |
| --- | --- | --- |
| CI | Runs tests, linting, and security checks | Pull requests to main |
| CD | Builds and deploys to production | Push to main branch |
| Release | Creates a new release with version tag | Manual trigger |
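For reference, a CI workflow matching the table above could look roughly like this; the file path, job layout, and tool choices (pytest, ruff) are assumptions, not the repository's actual workflow:

```yaml
# .github/workflows/ci.yml -- a sketch only.
name: CI

on:
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - run: pip install -r requirements.txt
      - run: ruff check .   # linting
      - run: pytest         # tests
```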
This project follows Semantic Versioning:
- MAJOR version for incompatible API changes
- MINOR version for new functionality in a backward compatible manner
- PATCH version for backward compatible bug fixes
To publish a new release:

- Update the version in `pyproject.toml`
- Create a new tag: `git tag v1.0.0`
- Push the tag: `git push origin v1.0.0`
- The GitHub Actions workflow will automatically create a release
To set up the CD pipeline, you'll need to add the following secrets to your GitHub repository:
- `KUBE_CONFIG`: Your Kubernetes configuration file (base64 encoded)
- `DOCKER_USERNAME`: Your Docker Hub username
- `DOCKER_PASSWORD`: Your Docker Hub password
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.