This repository demonstrates how to use ArgoCD ApplicationSet to automatically deploy multiple Helm chart instances across different Kubernetes clusters and namespaces using a GitOps approach.
This project sets up an automated deployment system where:
- Multiple Kubernetes clusters can be managed from a single Git repository
- Helm charts are deployed automatically based on the directory structure
- Each application instance can have its own configuration
- Changes to the repository automatically trigger updates in your clusters
Before you begin, you need:
- Kubernetes clusters - at least one; this example assumes three
- ArgoCD installed on your management cluster
- kubectl configured to access your clusters
- argocd CLI installed on your local machine
- Git installed on your local machine
Add this repository (or your fork) to ArgoCD:
argocd repo add https://github.com/digitalstudium/argocd-example.git
For private repositories, you'll need to add credentials:
argocd repo add git@github.com:digitalstudium/argocd-example.git --ssh-private-key-path ~/.ssh/id_rsa
Register each cluster with ArgoCD (replace with your actual cluster names):
# If using multiple clusters, add them to ArgoCD
argocd cluster add cluster-name-1
argocd cluster add cluster-name-2
argocd cluster add cluster-name-3
# List registered clusters to verify
argocd cluster list
Note: The cluster where ArgoCD is installed is automatically registered as `in-cluster`.
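Under the hood, `argocd cluster add` stores each registered cluster as a Kubernetes Secret in the `argocd` namespace, labeled `argocd.argoproj.io/secret-type: cluster`. A sketch with placeholder server address and credentials (all values here are illustrative, not from this repository):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cluster-name-1
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster  # how ArgoCD discovers cluster secrets
type: Opaque
stringData:
  name: cluster-name-1            # the name used in "argocd cluster list"
  server: https://1.2.3.4:6443    # placeholder API server address
  config: |
    {
      "bearerToken": "<token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded-ca-cert>"
      }
    }
```

Knowing this format is useful if you prefer to register clusters declaratively (e.g. from Git) instead of via the CLI.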
clusters/
├── cluster-name-1/                        # Your Kubernetes cluster name
│   └── namespaces/
│       ├── bar/                           # Kubernetes namespace
│       │   └── charts/
│       │       ├── nginx/                 # Helm chart name
│       │       │   ├── instance1/         # Instance name
│       │       │   │   ├── metadata.yaml  # Chart version & repo info
│       │       │   │   └── values.yaml    # Helm values for this instance
Each level has a specific meaning:
- Cluster name: Must match the cluster name registered in ArgoCD
- Namespace: The Kubernetes namespace where the chart will be deployed
- Chart name: The name of the Helm chart to deploy
- Instance name: Allows multiple instances of the same chart
- metadata.yaml: Contains chart version and repository URL
- values.yaml: Custom Helm values for this specific instance
The ApplicationSet (`appset.yaml`) automatically:
- Scans the repository for all `metadata.yaml` files
- Creates an ArgoCD Application for each instance found
- Deploys the specified Helm chart version to the correct cluster and namespace
- Applies the custom values from `values.yaml`
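The repository's `appset.yaml` is the source of truth, but for orientation, a Git file generator of roughly this shape can produce the behavior described above. This is a sketch, not a copy of the actual manifest: the resource name, the `path[n]` templating, and the multi-source layout for pulling values from Git are assumptions.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: helm-charts          # assumed name
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: git@github.com:YOUR_USERNAME/YOUR_REPO.git
        revision: HEAD
        files:
          # One Application per metadata.yaml found; its keys (version,
          # repoUrl) become template parameters.
          - path: "clusters/*/namespaces/*/charts/*/*/metadata.yaml"
  template:
    metadata:
      # path[1]=cluster, path[3]=namespace, path[5]=chart, path[6]=instance
      name: "{{path[1]}}-{{path[3]}}-{{path[5]}}-{{path[6]}}"
    spec:
      project: default
      sources:
        # Chart from the Helm repository named in metadata.yaml...
        - repoURL: "{{repoUrl}}"
          chart: "{{path[5]}}"
          targetRevision: "{{version}}"
          helm:
            valueFiles:
              - "$values/{{path}}/values.yaml"
        # ...values from this Git repository.
        - repoURL: git@github.com:YOUR_USERNAME/YOUR_REPO.git
          targetRevision: HEAD
          ref: values
      destination:
        name: "{{path[1]}}"        # must match the registered cluster name
        namespace: "{{path[3]}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```

Combining a chart from a Helm repository with values from Git requires ArgoCD's multi-source Applications (the `sources` list with a `ref`), available since ArgoCD 2.6.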
git clone https://github.com/digitalstudium/argocd-example.git
cd argocd-example
Replace `cluster-name-1`, `cluster-name-2`, and `cluster-name-3` with your actual cluster names as shown in `argocd cluster list`.
In `appset.yaml`, replace the Git repository URLs with your own:
repoURL: git@github.com:YOUR_USERNAME/YOUR_REPO.git
kubectl apply -f appset.yaml
Check ArgoCD UI or use:
kubectl get applications -n argocd
# or
argocd app list
To deploy a new application, create the directory structure:
# Example: Deploy PostgreSQL to cluster-name-1 in namespace "database"
mkdir -p clusters/cluster-name-1/namespaces/database/charts/postgresql/instance1
# Create metadata.yaml
cat > clusters/cluster-name-1/namespaces/database/charts/postgresql/instance1/metadata.yaml << EOF
version: 12.5.8
repoUrl: https://charts.bitnami.com/bitnami
EOF
# Create values.yaml with your custom configuration
cat > clusters/cluster-name-1/namespaces/database/charts/postgresql/instance1/values.yaml << EOF
auth:
  postgresPassword: "mysecretpassword"
  database: "myapp"
EOF
# Commit and push
git add .
git commit -m "Add PostgreSQL to cluster-name-1"
git push
ArgoCD will automatically detect the new files and create the application.
Each instance has its own `values.yaml` file. For example, to configure nginx:
# clusters/cluster-name-1/namespaces/foo/charts/nginx/instance1/values.yaml
replicaCount: 3
service:
  type: LoadBalancer
  port: 80
resources:
  limits:
    cpu: 200m
    memory: 256Mi
The `metadata.yaml` file tells ArgoCD which Helm chart to use:
version: 21.0.3 # Helm chart version
repoUrl: https://mirror.yandex.ru/helm/charts.bitnami.com # Helm repository URL
- Edit the `metadata.yaml` file and change the version
- Commit and push the change
- ArgoCD will automatically update the deployment
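The version bump itself is a one-line change to `metadata.yaml`. A small shell sketch — the path, chart, and version numbers here are illustrative, not taken from this repository:

```shell
# Illustrative instance path; adjust to your layout.
f=clusters/cluster-name-1/namespaces/foo/charts/nginx/instance1/metadata.yaml
mkdir -p "$(dirname "$f")"

# Starting state: chart pinned at an example version
printf 'version: 21.0.3\nrepoUrl: https://charts.bitnami.com/bitnami\n' > "$f"

# Bump the pinned chart version in place (GNU sed)
sed -i 's/^version: .*/version: 21.0.4/' "$f"
cat "$f"
```

After you commit and push the change, the ApplicationSet controller regenerates the Application with the new chart version.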
Simply create the directory structure under a different namespace folder.
Delete the instance directory and push the change. ArgoCD will automatically remove the application.
- Check that cluster names match exactly with those registered in ArgoCD
- Verify the ApplicationSet is created:
kubectl get applicationset -n argocd
- Check ApplicationSet logs:
kubectl logs -n argocd deployment/argocd-applicationset-controller
- Check the ArgoCD UI for error messages
- Verify the Helm repository URL is accessible
- Ensure the chart version exists in the repository
- Check your `values.yaml` for syntax errors
# List all registered clusters
argocd cluster list
# Or check secrets
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster
- The namespace will be automatically created if it doesn't exist (due to `CreateNamespace=true`)
- Applications are set to auto-sync and self-heal
- Deleting files from Git will automatically remove the corresponding applications (prune is enabled)
- Make sure your Git repository is accessible by ArgoCD with proper credentials
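The behaviors listed above map onto the `syncPolicy` that the ApplicationSet stamps into each generated Application. A sketch of the relevant fields (assumed to match the repository's settings — check `appset.yaml` for the authoritative values):

```yaml
syncPolicy:
  automated:
    prune: true     # deleting an instance directory in Git removes its resources
    selfHeal: true  # manual drift in the cluster is reverted to the Git state
  syncOptions:
    - CreateNamespace=true  # create the target namespace if it doesn't exist
```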
- Explore adding more charts from different Helm repositories
- Set up different environments (dev, staging, prod) using different clusters
- Implement Helm value overlays for common configurations
- Add health checks and notifications for your deployments
For more information about ArgoCD ApplicationSets, visit the official documentation.