
Nebuly Platform (GCP)

Terraform module for provisioning Nebuly Platform resources on GCP.

Available on Terraform Registry.

Prerequisites

Nebuly Credentials

Before using this Terraform module, ensure that you have your Nebuly credentials ready. These credentials are necessary to activate your installation and should be provided as input via the nebuly_credentials input.
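The nebuly_credentials input is an object with a client_id and a client_secret field. As a minimal illustration (placeholder values shown), it is passed to the module like this:

nebuly_credentials = {
  client_id     = "<your-nebuly-client-id>"
  client_secret = "<your-nebuly-client-secret>"
}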

Required GCP APIs

Before using this Terraform module, ensure that the required GCP APIs are enabled in your Google Cloud project.

You can enable the APIs using either the GCP Console or the gcloud CLI, as explained in the GCP Documentation.
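If you prefer to manage API enablement with Terraform as well, a sketch using google_project_service is shown below. The API list here is only inferred from the resources this module provisions (GKE, Compute, Service Networking, Cloud SQL, Secret Manager) and is not the authoritative list, so double-check it against the official documentation:

# Enables the GCP service APIs this module is assumed to rely on.
# The list is inferred from the provisioned resources and may be incomplete.
resource "google_project_service" "required" {
  for_each = toset([
    "compute.googleapis.com",
    "container.googleapis.com",
    "servicenetworking.googleapis.com",
    "sqladmin.googleapis.com",
    "secretmanager.googleapis.com",
  ])

  service            = each.value
  disable_on_destroy = false # keep the APIs enabled when this resource is destroyed
}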

Required GCP Quotas

Ensure that your GCP project has the necessary quotas for the following resources in the regions where you plan to deploy Nebuly:

  • Name: GPUs (all regions)

    Min Value: 2

  • Name: NVIDIA L4 GPUs

    Min Value: 1

For more information on how to check and increase quotas, refer to the GCP Documentation.

Quickstart

To get started with installing Nebuly on GCP, follow the steps below. This guide uses the standard configuration provided by the official Nebuly Helm chart.

For advanced configurations or support, feel free to reach out via the Nebuly Slack channel or email us at support@nebuly.ai.

Additional examples are available:

  • Basic: Minimal setup with default settings.
  • Microsoft SSO: Setup with Microsoft SSO authentication.

1. Terraform setup

Import Nebuly into your Terraform root module, provide the necessary variables, and apply the changes.

For configuration examples, you can refer to the Examples.
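As a rough sketch, a root module using only the required inputs could look like the following. The registry source shown assumes the standard naming convention for this repository; all values are placeholders to be replaced with your own:

provider "google" {
  project = "<your-gcp-project-id>"
  region  = "us-central1" # example region
}

module "nebuly_platform" {
  source = "nebuly-ai/nebuly-platform/google"
  # Pin `version` to the latest release published on the Terraform Registry.

  region          = "us-central1"
  resource_prefix = "nebuly"
  platform_domain = "nebuly.example.com"

  gke_cluster_admin_users = ["admin@example.com"]

  nebuly_credentials = {
    client_id     = "<your-nebuly-client-id>"
    client_secret = "<your-nebuly-client-secret>"
  }

  # OpenAI settings: endpoint and deployment names depend on your own OpenAI setup.
  openai_api_key                     = "<your-openai-api-key>"
  openai_endpoint                    = "<your-openai-endpoint>"
  openai_gpt4o_deployment_name       = "gpt-4o"
  openai_translation_deployment_name = "gpt-4o-mini" # an empty string disables translations
}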

Once the Terraform changes are applied, proceed with the next steps to deploy Nebuly on the provisioned Google Kubernetes Engine (GKE) cluster.

2. Connect to the GKE Cluster

To connect to the created GKE cluster, follow the steps below. For more information, refer to the GKE Documentation.

  • Install kubectl:

    gcloud components install kubectl
  • Install the gke-gcloud-auth-plugin:

    gcloud components install gke-gcloud-auth-plugin
  • Fetch the command for retrieving the credentials from the module outputs:

    terraform output gke_cluster_get_credentials
  • Run the command returned by the previous step.

3. Create image pull secret

Create a Kubernetes Image Pull Secret for authenticating with your Docker registry and pulling the Nebuly Docker images. The auto-generated Helm values use the name defined in the k8s_image_pull_secret_name input variable for the Image Pull Secret; if you prefer a custom name, update either the Terraform variable or your Helm values accordingly.

Example:

kubectl create secret generic nebuly-docker-pull \
  -n nebuly \
  --from-file=.dockerconfigjson=dockerconfig.json \
  --type=kubernetes.io/dockerconfigjson

4. Bootstrap GKE cluster

Install the bootstrap Helm chart to set up all the dependencies required for installing the Nebuly Platform Helm chart on GKE.

Refer to the chart documentation for all the configuration details.

helm install nebuly-bootstrap oci://ghcr.io/nebuly-ai/helm-charts/bootstrap-gcp \
  --namespace nebuly-bootstrap \
  --create-namespace 

5. Create Secret Provider Class

Create a Secret Provider Class to allow GKE to fetch credentials from the provisioned Google Secret Manager secrets.

  • Get the Secret Provider Class YAML definition from the Terraform module outputs:

    terraform output secret_provider_class
  • Copy the output of the command into a file named secret-provider-class.yaml.

  • Run the following commands to install Nebuly in the Kubernetes namespace nebuly:

    kubectl create ns nebuly
    kubectl apply --server-side -f secret-provider-class.yaml

6. Install nebuly-platform chart

Retrieve the auto-generated values from the Terraform outputs and save them to a file named values.yaml:

terraform output helm_values

Install the Nebuly Platform Helm chart. Refer to the chart documentation for detailed configuration options.

helm install <your-release-name> oci://ghcr.io/nebuly-ai/helm-charts/nebuly-platform \
  --namespace nebuly \
  -f values.yaml \
  --timeout 45m 

ℹ️ During the initial installation of the chart, all required Nebuly LLMs are uploaded to your model registry. This process can take approximately 5 minutes. If the helm install command appears to be stuck, don't worry: it's simply waiting for the upload to finish.

7. Access Nebuly

Retrieve the external Load Balancer IP address to access the Nebuly Platform:

kubectl get svc -n nebuly-bootstrap -o jsonpath='{range .items[?(@.status.loadBalancer.ingress)]}{.status.loadBalancer.ingress[0].ip}{"\n"}{end}'

You can then register a DNS A record pointing to the Load Balancer IP address to access Nebuly via the custom domain you provided in the input variable platform_domain.
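If you manage the domain's DNS zone with Cloud DNS in the same project, a minimal sketch of such a record is shown below; the managed zone name is hypothetical, and any DNS provider that can create an A record works just as well:

# Hypothetical Cloud DNS A record pointing the platform domain at the Load Balancer IP.
resource "google_dns_record_set" "nebuly_platform" {
  managed_zone = "<your-managed-zone-name>" # existing Cloud DNS managed zone
  name         = "nebuly.example.com."      # must match platform_domain, with a trailing dot
  type         = "A"
  ttl          = 300
  rrdatas      = ["<load-balancer-ip>"]     # IP returned by the kubectl command above
}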

Examples

You can find examples of code that uses this Terraform module in the examples directory.

Providers

Name Version
google ~>6.3.0
random ~>3.6
tls ~>4.0

Outputs

Name Description
gke_cluster_get_credentials The command for connecting to the provisioned GKE cluster.
helm_values The values.yaml file for installing Nebuly with Helm. The default standard configuration is used, which uses Nginx as the ingress controller and exposes the application to the Internet; it can be customized according to specific needs.
secret_provider_class The secret-provider-class.yaml file that lets Kubernetes reference the secrets stored in Google Secret Manager.

Inputs

Name Description Type Default Required
gke_cluster_admin_users The list of email addresses of the users who will have admin access to the GKE cluster. set(string) n/a yes
gke_delete_protection Whether the GKE Cluster should have delete protection enabled. bool true no
gke_kubernetes_version The Kubernetes version used for the GKE cluster. string "1.32.4" no
gke_nebuly_namespaces The namespaces used by the Nebuly installation. Update this if you use custom namespaces in the Helm chart installation. set(string)
[
"nebuly",
"nebuly-bootstrap"
]
no
gke_node_pools The node pools used by the GKE cluster.
map(object({
machine_type = string
min_nodes = number
max_nodes = number
node_count = number
resource_labels = optional(map(string), {})
disk_type = optional(string, "pd-balanced")
disk_size_gb = optional(number, 128)
node_locations = optional(set(string), null)
preemptible = optional(bool, false)
labels = optional(map(string), {})
taints = optional(set(object({
key = string
value = string
effect = string
})), null)
guest_accelerator = optional(object({
type = string
count = number
}), null)
}))
{
"gpu-primary": {
"guest_accelerator": {
"count": 1,
"type": "nvidia-l4"
},
"labels": {
"gke-no-default-nvidia-gpu-device-plugin": true,
"nebuly.com/accelerator": "nvidia-l4"
},
"machine_type": "g2-standard-8",
"max_nodes": 1,
"min_nodes": 0,
"node_count": null,
"resource_labels": {
"goog-gke-accelerator-type": "nvidia-l4",
"goog-gke-node-pool-provisioning-model": "on-demand"
}
},
"web-services": {
"machine_type": "n2-highmem-4",
"max_nodes": 1,
"min_nodes": 1,
"node_count": 1,
"resource_labels": {
"goog-gke-node-pool-provisioning-model": "on-demand"
}
}
}
no
gke_private_cluster_config Configuration for the GKE private cluster.
- enable_private_nodes: Prevents nodes from having public IP addresses
- enable_private_endpoint: Prevents access to the GKE master via public endpoint.
- master_ipv4_cidr_block: Must be a /28 block not overlapping others.
- authorized_cidr_blocks: A set of CIDR blocks that are allowed to access the GKE master.
object({
enable_private_nodes : bool
enable_private_endpoint : bool
master_ipv4_cidr_block : string
authorized_cidr_blocks : optional(map(string), {})
})
null no
gke_service_account_name The name of the Kubernetes Service Account used by the Nebuly installation. string "nebuly" no
k8s_image_pull_secret_name The name of the Kubernetes Image Pull Secret to use.
This value will be used to auto-generate the values.yaml file for installing the Nebuly Platform Helm chart.
string "nebuly-docker-pull" no
labels Common labels that will be applied to all resources. map(string) {} no
microsoft_sso Settings for configuring the Microsoft Entra SSO integration.
object({
tenant_id : string
client_id : string
client_secret : string
})
null no
nebuly_credentials The credentials provided by Nebuly are required for activating your platform installation.
If you haven't received your credentials or have lost them, please contact support@nebuly.ai.
object({
client_id : string
client_secret : string
})
n/a yes
network_cidr_blocks The CIDR blocks of the VPC network used by Nebuly.

- primary: The primary CIDR block of the VPC network.
- secondary_gke_pods: The secondary CIDR block used by GKE for pods.
- secondary_gke_services: The secondary CIDR block used by GKE for services.
object({
primary : string
secondary_gke_pods : string
secondary_gke_services : string
})
{
"primary": "10.0.0.0/16",
"secondary_gke_pods": "10.4.0.0/16",
"secondary_gke_services": "10.6.0.0/16"
}
no
openai_api_key The API Key used for authenticating with OpenAI. string n/a yes
openai_endpoint The endpoint of the OpenAI API. string n/a yes
openai_gpt4o_deployment_name The name of the deployment to use for the GPT-4o model. string n/a yes
openai_translation_deployment_name The name of the deployment to use for enabling the translations feature. Recommended to use gpt-4o-mini.
Provide an empty string to disable the translations feature.
string n/a yes
platform_domain The domain on which the deployed Nebuly platform is made accessible. string n/a yes
postgres_server_backup_configuration The backup settings of the PostgreSQL server.
object({
enabled = bool
point_in_time_recovery_enabled = bool
n_retained_backups = number
})
{
"enabled": true,
"n_retained_backups": 14,
"point_in_time_recovery_enabled": true
}
no
postgres_server_delete_protection Whether the PostgreSQL server should have delete protection enabled. bool true no
postgres_server_disk_size The size of the disk in GB for the PostgreSQL server.
object({
initial = number
limit = number
})
{
"initial": 16,
"limit": 1000
}
no
postgres_server_edition The edition of the PostgreSQL server. Possible values are ENTERPRISE, ENTERPRISE_PLUS. string "ENTERPRISE" no
postgres_server_high_availability The high availability configuration for the PostgreSQL server.
object({
enabled : bool
})
{
"enabled": true
}
no
postgres_server_maintenance_window Time window when the PostgreSQL server can automatically restart to apply updates. Specified in UTC time.
object({
day : string
hour : number
})
{
"day": "6",
"hour": 23
}
no
postgres_server_tier The tier of the PostgreSQL server. Default value: 4 vCPU, 16GB memory. string "db-custom-4-16384" no
region The region where the resources will be created. string n/a yes
resource_prefix The prefix that is used for generating resource names. string n/a yes
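For the more structured inputs above, concrete values can help. As an illustration, a gke_private_cluster_config override (all values are placeholders chosen for this sketch) might look like:

gke_private_cluster_config = {
  enable_private_nodes    = true            # nodes get no public IP addresses
  enable_private_endpoint = false           # keep the master reachable via its public endpoint
  master_ipv4_cidr_block  = "172.16.0.0/28" # must be a non-overlapping /28 block
  authorized_cidr_blocks = {
    "office" = "203.0.113.0/24"             # example CIDR allowed to reach the GKE master
  }
}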

Resources

  • resource.google_compute_global_address.main (/terraform-docs/main.tf#43)
  • resource.google_compute_network.main (/terraform-docs/main.tf#38)
  • resource.google_compute_network_peering_routes_config.main (/terraform-docs/main.tf#73)
  • resource.google_compute_subnetwork.main (/terraform-docs/main.tf#50)
  • resource.google_container_cluster.main (/terraform-docs/main.tf#215)
  • resource.google_container_node_pool.main (/terraform-docs/main.tf#287)
  • resource.google_project_iam_binding.gke_cluster_admin (/terraform-docs/main.tf#374)
  • resource.google_project_iam_member.gke_secret_accessors (/terraform-docs/main.tf#351)
  • resource.google_secret_manager_secret.jwt_signing_key (/terraform-docs/main.tf#391)
  • resource.google_secret_manager_secret.microsoft_sso_client_id (/terraform-docs/main.tf#443)
  • resource.google_secret_manager_secret.microsoft_sso_client_secret (/terraform-docs/main.tf#459)
  • resource.google_secret_manager_secret.nebuly_client_id (/terraform-docs/main.tf#417)
  • resource.google_secret_manager_secret.nebuly_client_secret (/terraform-docs/main.tf#429)
  • resource.google_secret_manager_secret.openai_api_key (/terraform-docs/main.tf#405)
  • resource.google_secret_manager_secret.postgres_analytics_password (/terraform-docs/main.tf#150)
  • resource.google_secret_manager_secret.postgres_analytics_username (/terraform-docs/main.tf#138)
  • resource.google_secret_manager_secret.postgres_auth_password (/terraform-docs/main.tf#191)
  • resource.google_secret_manager_secret.postgres_auth_username (/terraform-docs/main.tf#179)
  • resource.google_secret_manager_secret_version.jwt_signing_key (/terraform-docs/main.tf#399)
  • resource.google_secret_manager_secret_version.microsoft_sso_client_id (/terraform-docs/main.tf#453)
  • resource.google_secret_manager_secret_version.microsoft_sso_client_secret (/terraform-docs/main.tf#469)
  • resource.google_secret_manager_secret_version.nebuly_client_id (/terraform-docs/main.tf#425)
  • resource.google_secret_manager_secret_version.nebuly_client_secret (/terraform-docs/main.tf#437)
  • resource.google_secret_manager_secret_version.openai_api_key (/terraform-docs/main.tf#413)
  • resource.google_secret_manager_secret_version.postgres_analytics_password (/terraform-docs/main.tf#158)
  • resource.google_secret_manager_secret_version.postgres_analytics_username (/terraform-docs/main.tf#146)
  • resource.google_secret_manager_secret_version.postgres_auth_password (/terraform-docs/main.tf#199)
  • resource.google_secret_manager_secret_version.postgres_auth_username (/terraform-docs/main.tf#187)
  • resource.google_service_account.gke_node_pool (/terraform-docs/main.tf#283)
  • resource.google_service_networking_connection.main (/terraform-docs/main.tf#68)
  • resource.google_sql_database.analytics (/terraform-docs/main.tf#122)
  • resource.google_sql_database.auth (/terraform-docs/main.tf#163)
  • resource.google_sql_database_instance.main (/terraform-docs/main.tf#82)
  • resource.google_sql_user.analytics (/terraform-docs/main.tf#133)
  • resource.google_sql_user.auth (/terraform-docs/main.tf#174)
  • resource.google_storage_bucket.main (/terraform-docs/main.tf#478)
  • resource.google_storage_bucket_iam_binding.gke_storage_object_user (/terraform-docs/main.tf#362)
  • resource.random_password.analytics (/terraform-docs/main.tf#128)
  • resource.random_password.auth (/terraform-docs/main.tf#169)
  • resource.tls_private_key.jwt_signing_key (/terraform-docs/main.tf#387)
  • data source.google_compute_zones.available (/terraform-docs/main.tf#23)
  • data source.google_container_engine_versions.main (/terraform-docs/main.tf#24)
  • data source.google_project.current (/terraform-docs/main.tf#22)
