Cloud-native Microservices mit Go - SoSe 2025
The script `/scripts/deploy-to-aca.sh` deploys the newest images to Azure Container Apps automatically.
- Navigate from the repository root to `/scripts`
- Make the file executable: `chmod +x deploy-to-aca.sh`
- Execute it: `./deploy-to-aca.sh`
- Azure CLI installed and configured: `az login`
- Docker installed and the Docker daemon running
- Permissions to access the ACR (Contributor or Owner role). To obtain these, you must be added to the Azure resource group by an administrator.
- Uses `az acr login` to authenticate Docker with your ACR
- Builds all images
- Pushes images to the Azure Container Registry if they aren't up to date and tags them as `latest`
- Updates all Container Apps with the newest images from the Azure Container Registry
The script `/scripts/manage-acr-manually.sh` was created to configure the Azure Container Registry manually.
- Azure CLI installed and configured: `az login`
- Docker installed and the Docker daemon running
- Permissions to access the ACR (Contributor or Owner role). To obtain these, you must be added to the Azure resource group by an administrator.
- Navigate from the repository root to `/scripts`
- Make it executable: `chmod +x manage-acr-manually.sh`
- Execute it: `./manage-acr-manually.sh`
```
1 - Login to ACR        # Authenticates with your registry
2 - Push specific image # Tags and uploads a local image
3 - Push all images     # Pushes all images and tags them as `latest`
4 - Delete image        # Removes tags or purges an entire repository
5 - Exit
```
- Uses `az acr login` to authenticate Docker with your ACR (required only once per session)
- Enter the local image name (e.g., `cmg-ss2025-job-service`)
- Enter the target tag (e.g., `v1`)
- Actions performed:

```
docker tag my-app:latest cmgss2025.azurecr.io/my-app:v1
docker push cmgss2025.azurecr.io/my-app:v1
```
- Safe workflow:
  - First lists all existing tags
  - Choose between:
    - Single tag deletion (`untag`)
    - Full purge (`--purge`) with confirmation
```
$ ./manage-acr-manually.sh
Choose action (1-5): 2
Local image name: cmg-ss2025-job-service
Tag: v1
✔ Successfully pushed: cmgss2025.azurecr.io/cmg-ss2025-job-service:v1
```
```
$ ./manage-acr-manually.sh
Choose action (1-5): 4
Image name: my-app
Existing tags: latest, v1, v2
Tag to delete: v1
✔ Deleted tag: my-app:v1
```
This document outlines the authentication flows, JWT usage, registration restrictions, and secure secret handling for the Green Load Shifting Platform.
We use Auth0 as our centralized identity provider.
- Token URL: `https://dev-jqhwcu7xuwgdqi56.eu.auth0.com/oauth/token`
- Audience: `https://green-load-shifting-platform/`
External users:

- Are not permitted to self-register.
- Must contact the system admin for access.
- Receive JWTs via internal issuance if authorized.
The User Management Provider:

- Is the only service allowed to call the `/auth/login` endpoint.
- Uses the `client_id`/`client_secret` flow to obtain a JWT.
- Enforces role-based access so that only the provider can manage users.
- Use the Client Credentials grant to authenticate.
- Tokens are attached as `Authorization: Bearer <token>` in requests.
A sample JWT issued to an internal service using the Client Credentials flow:
```json
{
  "https://green-load-shifting-platform/role": "dummy_role",
  "https://green-load-shifting-platform/client_id": "QgXJrkSv5Z5dF8hc8wrfODv2VOHeWBj9",
  "iss": "https://dev-jqhwcu7xuwgdqi56.eu.auth0.com/",
  "sub": "QgXJrkSv5Z5dF8hc8wrfODv2VOHeWBj9@clients",
  "aud": "https://green-load-shifting-platform/",
  "iat": 1746911116,
  "exp": 1746997516,
  "gty": "client-credentials",
  "azp": "QgXJrkSv5Z5dF8hc8wrfODv2VOHeWBj9",
  "permissions": []
}
```
- Only the User Management Provider is allowed to access the `/register` endpoint.
- Manual onboarding: external users must contact the administrator to request access.
- Unauthorized clients are rejected at the gateway level.
- Endpoint: `POST /login`
- Flow: Resource Owner Password (for users) or Client Credentials (for services)
- Response: JWT with custom claims used for RBAC
- Roles: `consumer` / `provider` / `job_scheduler`
Secrets such as DB credentials, certificates, or external API keys are never stored in code. They are securely managed using Azure Key Vault and injected into container apps via managed identity.
- Go to Key Vault → Secrets → + Generate/Import
- Add your secret name and value.
- Save the secret.
- Go to your Container App → Security → Identity
- Turn the system-assigned managed identity On and save.
- Go back to Key Vault → Access Control (IAM)
- Add a role assignment:
  - Role: Key Vault Secrets User
  - Assign access to: Managed identity
  - Select the identity of your Container App
- Go to Container App → Security → Secrets → + Add
- Set:
  - Name: internal reference key (e.g. `db-password`)
  - Type: Key Vault reference
  - Key Vault Secret URI: from your secret's Current Version
  - Identity: System-assigned
- Add the secret.
- Go to Application → Containers
- Scroll to Environment Variables → + Add
- Set:
  - Name: final environment variable name (e.g. `DB_PASSWORD`)
  - Value Source: Secret
  - Value: select the secret key (e.g. `db-password`)
| Component | Description |
|---|---|
| Identity Provider | Auth0 (`client_id`/`client_secret` + Resource Owner Password) |
| JWT Role Mapping | Custom claims under the `https://green-load-shifting-platform/*` namespace |
| Registration Flow | Only via the internal provider; no public access |
| JWT Auth | All services authenticate using JWT headers |
| Secret Management | Azure Key Vault + system-assigned managed identity |
| Secrets to Runtime | Secrets referenced in the Container App → injected as environment variables |
This project uses Azure DevOps Pipelines for automated testing, building, and containerization of all microservices. The pipeline system is designed to work with Makefiles that are specifically configured for the CI/CD environment.
The `azure-pipelines.yml` file defines our CI/CD workflow:

- Dependency Installation: installs Go, make, Docker, and the required testing tools (`go-junit-report`, `gocover-cobertura`)
- Change Detection: identifies which services have been modified in pull requests
- Integration Testing: runs tests, coverage analysis, builds, and containerization for affected services
- Test Reporting: publishes JUnit test results and code coverage to Azure DevOps
- Image Building: creates Docker images for changed services and pushes them to the Azure Container Registry
The project uses a two-level Makefile system:
The root `Makefile` orchestrates operations across all microservices:
- Detects which services have been modified
- Runs integration checks on affected services
- Coordinates containerization of all services
Each microservice has its own Makefile (e.g., `services/job/Makefile`):
```makefile
BINARY_NAME=job-binary
DOCKER_IMAGE=job

.PHONY: all build test containerize clean integrationcheck deployment junit coverage

integrationcheck: clean test build

deployment: clean test build containerize

clean:
	rm -f $(BINARY_NAME)
	rm -f report.xml
	rm -f coverage.out
	rm -f coverage.xml

test: junit coverage

junit:
	go test -v ./... | go-junit-report > report.xml

coverage:
	go test -coverprofile=coverage.out ./...
	gocover-cobertura < coverage.out > coverage.xml

build:
	go build -o $(BINARY_NAME) .

containerize:
	docker build -t $(DOCKER_IMAGE) .
```
The Makefiles are designed exclusively for the CI/CD pipeline environment and are NOT intended for local development.

Why you shouldn't run `make` commands locally:

- Missing Dependencies: the pipeline installs specific tools (`go-junit-report`, `gocover-cobertura`) that may not be available locally
- Environment Differences: the pipeline runs on Ubuntu with specific configurations and a running Docker daemon
- File Path Assumptions: some commands assume the pipeline's directory structure
- Azure Credentials: image pushing requires Azure Container Registry credentials that are only available in the pipeline
| Target | Purpose | When Used |
|---|---|---|
| `integrationcheck` | Run tests with coverage and build the binary | Every PR and commit |
| `containerize` | Build the Docker image for the service | Part of the `deployment` target |
| `clean` | Remove build artifacts and old images | Before each build |
| `test` | Run unit tests with coverage reporting | Part of integration check |
- Developer creates a PR → pipeline detects changed services
- Runs `make integrationcheck` on each affected service
- Runs `go test` and builds the binary for the selected microservice
- Reports test results and coverage to Azure DevOps
- Blocks the merge if tests fail
- Code merged to `main` → pipeline runs the full integration check
- Builds Docker images for all changed services
- Pushes images to the Azure Container Registry with the `latest` tag
- Manual deployment is still required using `/scripts/deploy-to-aca.sh`
The pipeline only builds and pushes Docker images. Actual deployment to Azure Container Apps must be done manually using the deployment script.
After the pipeline completes successfully:
- Navigate to `/scripts`
- Run `./deploy-to-aca.sh` to deploy the newly built images to Azure Container Apps
| Package | Description |
|---|---|
| `github.com/gorilla/mux` | HTTP request router and dispatcher |
| `github.com/google/uuid` | UUID generation (e.g., for user identifiers) |
| `go.opentelemetry.io/otel` | Core OpenTelemetry API for tracing and metrics |
| `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp` | Sends trace data to an OTLP-compatible backend over HTTP |
| `go.opentelemetry.io/otel/sdk` | SDK implementation for OpenTelemetry (tracer provider, processors, etc.) |
| `go.opentelemetry.io/otel/trace` | Trace-related types and interfaces (e.g., Tracer, Span) from OpenTelemetry |
| `github.com/jackc/pgx/v5` | PostgreSQL driver and toolkit |
| `github.com/cenkalti/backoff/v5` | Transitive dependency |
| `github.com/go-logr/logr` | Transitive dependency |
| `github.com/go-logr/stdr` | Transitive dependency |
| `github.com/grpc-ecosystem/grpc-gateway/v2` | Transitive dependency |
| `go.opentelemetry.io/auto/sdk` | Transitive dependency |
| `go.opentelemetry.io/otel/exporters/otlp/otlptrace` | Transitive dependency |
| `go.opentelemetry.io/otel/metric` | Transitive dependency |
| `go.opentelemetry.io/proto/otlp` | Transitive dependency |
| `golang.org/x/net` | Transitive dependency |
| `golang.org/x/sys` | Transitive dependency |
| `golang.org/x/text` | Transitive dependency |
| `google.golang.org/genproto/googleapis/api` | Transitive dependency |
| `google.golang.org/genproto/googleapis/rpc` | Transitive dependency |
| `google.golang.org/grpc` | Transitive dependency |
| `google.golang.org/protobuf` | Transitive dependency |