gatekeeper-k8s

A lightweight setup that uses Envoy as both an API Gateway and Kubernetes Ingress Controller in front of two small Go services — one for login/auth (with gRPC for Envoy’s external authorization) and one for a backend with public/private routes. Everything runs on Kubernetes using Helm.

This project started as a playground for testing service-to-service authentication and route protection, and it grew into a reusable, multi-environment chart you can install with a single make command.

Architecture


What’s in the box

  • Envoy – serves as the Ingress Controller and API gateway, handling routing, load balancing, logging, and calling the auth service for token validation via gRPC (a minimal ext_authz sketch follows this list)
  • Auth (Go) – REST endpoint for /login, gRPC endpoint for Envoy ext_authz, issues JWTs, and /healthcheck for probes
  • Backend (Go) – public /public, protected /private, and /healthcheck for probes
  • Helm chart – one chart for all services, with values for ops and stg environments; Envoy is exposed via Kubernetes LoadBalancer (no separate Ingress CRD).
  • Makefile – shortcuts so you don't have to type the long Helm commands repeatedly
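
For reference, the gateway calls the auth service through Envoy's ext_authz HTTP filter over gRPC. A minimal sketch of the relevant part of envoy.yaml, assuming a gRPC cluster named auth_ext (that name matches the stats sample further down; the timeout and other settings here are illustrative, not copied from the chart):

http_filters:
- name: envoy.filters.http.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
    transport_api_version: V3
    grpc_service:
      envoy_grpc:
        cluster_name: auth_ext   # gRPC cluster pointing at the auth service
      timeout: 0.25s
    failure_mode_allow: false    # reject requests if the auth service is unreachable
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router

With failure_mode_allow set to false, requests are rejected when the auth service is down rather than passed through unauthenticated.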

Structure

.
├── deploy
│   ├── charts
│   │   ├── auth
│   │   ├── backend
│   │   └── envoy
│   ├── Chart.yaml
│   ├── values.ops.yaml
│   ├── values.stg.yaml
│   └── values.yaml
├── scripts
│   └── test.sh
├── services
│   ├── auth
│   │   ├── Dockerfile
│   │   └── main.go
│   ├── backend
│   │   ├── Dockerfile
│   │   └── main.go
│   └── envoy
│       ├── Dockerfile
│       └── envoy.yaml
├── Makefile
└── README.md

1. Prerequisites

Start Minikube:

minikube start

2. Build and Load Docker Images

make build-images
make load-images

This builds the auth, backend, and envoy images and loads them into Minikube’s Docker daemon.
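
If you prefer to run the steps by hand, the rough equivalent is below (the image names and tags are assumptions; the Makefile is authoritative):

docker build -t auth:latest services/auth
docker build -t backend:latest services/backend
docker build -t envoy:latest services/envoy
minikube image load auth:latest
minikube image load backend:latest
minikube image load envoy:latest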


3. Install with Helm

Ops environment:

make helm-install-ops

Staging environment:

make helm-install-stg
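
Each target wraps a helm install of the chart in deploy/ with the per-environment values override. For the ops environment it is roughly equivalent to the following (release name, namespace, and flags are assumptions based on the repo layout):

helm install ops ./deploy -f ./deploy/values.ops.yaml -n ops --create-namespace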

4. Test the Helm Template

make helm-test-ops
make helm-test-stg

5. Preview the Helm Template (Dry-run)

make helm-template-ops
make helm-template-stg
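
These render the manifests locally without touching the cluster, roughly equivalent to (release name and paths assumed as above):

helm template ops ./deploy -f ./deploy/values.ops.yaml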

6. Run Tests

6.1. Test the system with curl

After deployment, the LoadBalancer Service distributes incoming (downstream) requests across all Envoy pods.

Port-forward ops-envoy:

kubectl port-forward deployment/ops-envoy 8060:8060 -n ops

or port-forward stg-envoy:

kubectl port-forward deployment/stg-envoy 8060:8060 -n stg

Then run the test script:

./scripts/test.sh functional

This script:

  • calls the / endpoint to get the landing page information
  • calls the /login endpoint on the auth service to fetch a JWT
  • calls the /public endpoint without a token
  • calls the /private endpoint without and with a token
  • prints the results; Envoy round-robins requests across the auth and backend pods (a manual example follows this list)
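
If you want to poke at the endpoints by hand instead, something along these lines works once the port-forward is up (the credentials, request body, and the token field name in the /login response are assumptions; scripts/test.sh and services/auth/main.go are the source of truth):

TOKEN=$(curl -s -X POST http://localhost:8060/login -d '{"username":"demo","password":"demo"}' | jq -r .token)
curl http://localhost:8060/public
curl -H "Authorization: Bearer $TOKEN" http://localhost:8060/private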

You can also collect the Envoy gateway's stats:

kubectl port-forward deployment/ops-envoy 9901:9901 -n ops

and call the admin endpoint:

curl http://localhost:9901/stats

Here are some samples of the stats:

cluster.auth_http.upstream_rq_200: 5
cluster.backend_service.upstream_rq_200: 10
cluster.auth_ext.internal.upstream_rq_time: P0(nan,0) P25(nan,0) P50(nan,0) P75(nan,0) P90(nan,0) P95(nan,1.05) P99(nan,1.09) P99.5(nan,1.095) P99.9(nan,1.099) P100(nan,1.1)

6.2. Performance Test to trigger the HPA

Once basic functionality is verified, you can stress the system to prove that autoscaling works. This test generates a flood of requests against the /public endpoint, driving CPU utilization above the HPA threshold (in the ops environment) and causing the Backend and Envoy pods to scale out automatically.

How it works

  • The script fires off a configurable number of concurrent curl requests (PERF_REQUESTS) with a fixed parallelism (PERF_CONCURRENCY) for a set duration (PERF_DURATION).
  • Each request hits /public, which does minimal work but still consumes CPU.
  • The HorizontalPodAutoscaler watches CPU usage (via metrics-server) and spins up new replicas once average utilization exceeds the configured target (a sketch of such an HPA follows this list).
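
A sketch of the kind of HPA object involved, assuming it targets the backend Deployment in the ops namespace (names, replica bounds, and the 50% CPU target are illustrative; the chart's values files are authoritative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ops-backend
  namespace: ops
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ops-backend
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50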

Usage

  • Enable metrics-server in Minikube:
    minikube addons enable metrics-server
    kubectl get deploy metrics-server -n kube-system
  • Run only the performance test (you can watch the autoscaler react while it runs; see below):
    ./scripts/test.sh perf
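
While the perf test runs, you can watch the autoscaler and replica counts change (namespace assumed to be ops):

kubectl get hpa -n ops -w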

6.3. Smoke Tests

Quick “on/off” tests to verify that each sub-chart's enabled flag works as expected.
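
The noauth smoke install presumably turns off the auth sub-chart while leaving the others on; in values terms it would look roughly like this (the enabled key names are assumptions based on the sub-chart layout):

auth:
  enabled: false
backend:
  enabled: true
envoy:
  enabled: true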

make helm-smoke-noauth

Verify that only the backend service and Envoy are running:

kubectl get deployments -n smoke-test

Clean up the smoke test resources:

make cleanup-smoke

7. Upgrade Chart with New Changes

make helm-upgrade-ops
make helm-upgrade-stg    

8. Cleanup Resources

To uninstall the Helm releases and delete the Docker images from both the local Docker daemon and Minikube:

make cleanup

This runs:

  • helm uninstall
  • docker rmi locally
  • minikube ssh -- ctr images rm ...

9. Rollback (Optional)

Revert to a previous Helm release:

make helm-rollback-ops
make helm-rollback-stg

Notes

  • Envoy's configuration is rendered by the Helm chart into a Kubernetes ConfigMap
  • Auth and backend service names are set per environment via values.stg.yaml and values.ops.yaml (see the sketch below)
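
A rough illustration of what such a per-environment override could look like (the key names here are invented for illustration; only the actual values files are authoritative):

# values.ops.yaml (illustrative excerpt)
envoy:
  authService:
    name: ops-auth       # service Envoy routes /login and ext_authz calls to
  backendService:
    name: ops-backend    # service Envoy routes /public and /private to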
