`kubectl-ai` acts as an intelligent interface, translating user intent into precise Kubernetes operations, making Kubernetes management more accessible and efficient.
First, ensure that kubectl is installed and configured. Then run the install script:
curl -sSL https://raw.githubusercontent.com/GoogleCloudPlatform/kubectl-ai/main/install.sh | bash
Other Installation Methods
- Download the latest release from the releases page for your target machine.
- Untar the release, make the binary executable, and move it to a directory in your `$PATH` (as shown below).
tar -zxvf kubectl-ai_Darwin_arm64.tar.gz
chmod a+x kubectl-ai
sudo mv kubectl-ai /usr/local/bin/
First of all, you need to have krew installed; refer to the krew documentation for more details. Then you can install with krew:
kubectl krew install ai
Now you can invoke `kubectl-ai` as a kubectl plugin like this: `kubectl ai`.
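Arguments after the plugin name are passed through to `kubectl-ai`, so the same flags and queries used with the standalone binary work here too; for example (an illustrative query):
kubectl ai --quiet "show me all pods in the default namespace"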
`kubectl-ai` supports AI models from `gemini`, `vertexai`, `azopenai`, `openai`, `grok`, and local LLM providers such as `ollama` and `llama.cpp`.
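The provider and model are generally selected with the `--llm-provider` and `--model` flags, as the examples below show; a generic placeholder form looks like this:
kubectl-ai --llm-provider=<provider> --model=<model-name>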
Set your Gemini API key as an environment variable. If you don't have a key, get one from Google AI Studio.
export GEMINI_API_KEY=your_api_key_here
kubectl-ai
# Use different gemini model
kubectl-ai --model gemini-2.5-pro-exp-03-25
# Use 2.5 flash (faster) model
kubectl-ai --quiet --model gemini-2.5-flash-preview-04-17 "check logs for nginx app in hello namespace"
Use other AI models
You can use `kubectl-ai` with AI models running locally; it supports `ollama` and `llama.cpp` as local providers.
Additionally, the `modelserving` directory provides tools and instructions for deploying your own `llama.cpp`-based LLM serving endpoints locally or on a Kubernetes cluster. This allows you to host models like Gemma directly in your environment.
An example of using Google's `gemma3` model with `ollama`:
# assuming ollama is already running and you have pulled one of the gemma models
# ollama pull gemma3:12b-it-qat
# if your ollama server is remote, use the OLLAMA_HOST variable to specify the host
# export OLLAMA_HOST=http://192.168.1.3:11434/
# --enable-tool-use-shim because these models require special prompting to enable tool calling
kubectl-ai --llm-provider ollama --model gemma3:12b-it-qat --enable-tool-use-shim
# you can use `models` command to discover the locally available models
>> models
You can use X.AI's Grok model by setting your X.AI API key:
export GROK_API_KEY=your_xai_api_key_here
kubectl-ai --llm-provider=grok --model=grok-3-beta
You can also use an Azure OpenAI deployment by setting your Azure OpenAI API key and specifying the provider:
export AZURE_OPENAI_API_KEY=your_azure_openai_api_key_here
export AZURE_OPENAI_ENDPOINT=https://your_azure_openai_endpoint_here
kubectl-ai --llm-provider=azopenai --model=your_azure_openai_deployment_name_here
# or
az login
kubectl-ai --llm-provider=openai://your_azure_openai_endpoint_here --model=your_azure_openai_deployment_name_here
You can also use OpenAI models by setting your OpenAI API key and specifying the provider:
export OPENAI_API_KEY=your_openai_api_key_here
kubectl-ai --llm-provider=openai --model=gpt-4.1
For example, you can use Aliyun's qwen-xxx models as follows:
export OPENAI_API_KEY=your_openai_api_key_here
export OPENAI_ENDPOINT=https://dashscope.aliyuncs.com/compatible-mode/v1
kubectl-ai --llm-provider=openai --model=qwen-plus
Run interactively:
kubectl-ai
The interactive mode allows you to have a chat with `kubectl-ai`, asking multiple questions in sequence while maintaining context from previous interactions. Simply type your queries and press Enter to receive responses. To exit the interactive shell, type `exit` or press Ctrl+C.
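As an illustration, a session might look like the following (the queries are hypothetical examples; model responses are omitted):
>> show me pods that are not ready in the default namespace
>> fetch logs for the first of those pods
>> reset
>> exit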
Or, run with a task as input:
kubectl-ai --quiet "fetch logs for nginx app in hello namespace"
Combine it with other Unix commands:
kubectl-ai < query.txt
# OR
echo "list pods in the default namespace" | kubectl-ai
You can even combine a positional argument with stdin input. The positional argument will be used as a prefix to the stdin content:
cat error.log | kubectl-ai "explain the error"
`kubectl-ai` leverages LLMs to suggest and execute Kubernetes operations using a set of powerful tools. It comes with built-in tools like `kubectl`, `bash`, and `trivy`.
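Inside the interactive shell, the currently available tools (built-in and custom) can be listed with the `tools` keyword, described under special keywords below:
>> tools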
You can also extend its capabilities by defining your own custom tools. By default, `kubectl-ai` looks for your tool configurations in `~/.config/kubectl-ai/tools.yaml`.
To specify tools configuration files or directories containing tools configuration files, use:
kubectl-ai --custom-tools-config=YOUR_CONFIG
You can include multiple tools in a single configuration file, or use a directory with multiple configuration files, each defining one or more tools. Define your custom tools using the following schema:
- name: tool_name
  description: "A clear description that helps the LLM understand when to use this tool."
  command: "your_command" # For example: 'gcloud' or 'gcloud container clusters'
  command_desc: "Detailed information for the LLM, including command syntax and usage examples."
A custom tool definition for `helm` could look like the following example:
- name: helm
  description: "Helm is the Kubernetes package manager and deployment tool. Use it to define, install, upgrade, and roll back applications packaged as Helm charts in a Kubernetes cluster."
  command: "helm"
  command_desc: |
    Helm command-line interface, with the following core subcommands and usage patterns:
    - helm install <release-name> <chart> [flags]
      Install a chart into the cluster.
    - helm upgrade <release-name> <chart> [flags]
      Upgrade an existing release to a new chart version or configuration.
    - helm list [flags]
      List all releases in one or all namespaces.
    - helm uninstall <release-name> [flags]
      Uninstall a release and clean up associated resources.
    Use `helm --help` or `helm <subcommand> --help` to see full syntax, available flags, and examples for each command.
You can use the following special keywords for specific actions:
- `model`: Display the currently selected model.
- `models`: List all available models.
- `tools`: List all available tools.
- `version`: Display the `kubectl-ai` version.
- `reset`: Clear the conversational context.
- `clear`: Clear the terminal screen.
- `exit` or `quit`: Terminate the interactive shell (Ctrl+C also works).
You can also run `kubectl ai`: `kubectl` treats any executable file in your `PATH` whose name begins with `kubectl-` as a plugin.
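To confirm that `kubectl-ai` is discoverable as a plugin, you can use the standard `kubectl plugin list` command, which prints every `kubectl-*` executable found on your `PATH`:
kubectl plugin list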
You can also use `kubectl-ai` as an MCP server that exposes `kubectl` as one of its tools for interacting with the locally configured Kubernetes environment. See the mcp docs for more details.
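As a minimal sketch, assuming the server mode is enabled with an `--mcp-server` flag (the exact flag and any additional options are described in the mcp docs):
kubectl-ai --mcp-server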
The kubectl-ai project includes k8s-bench, a benchmark to evaluate the performance of different LLM models on Kubernetes-related tasks. Here is a summary from our last run:
| Model | Success | Fail |
|---|---|---|
| gemini-2.5-flash-preview-04-17 | 10 | 0 |
| gemini-2.5-pro-preview-03-25 | 10 | 0 |
| gemma-3-27b-it | 8 | 2 |
| Total | 28 | 2 |
See the full report for more details.
We welcome contributions to `kubectl-ai` from the community. Take a look at our contribution guide to get started.
Note: This is not an officially supported Google product. This project is not eligible for the Google Open Source Software Vulnerability Rewards Program.