AI for Kubernetes – part 2: Setting up K8sGPT with Ollama

In our previous blog, AI for Kubernetes – The big picture, we wrote about AI in general and its applications for Kubernetes. One of the tools that we came across was K8sGPT.

In this blog I show how to install K8sGPT with Ollama and the Llama 3 LLM. I tested K8sGPT against my local Minikube cluster to avoid sending my cluster information to public AI providers, a security concern mentioned in the previous blog.

By: Stephan Duivelshof
Reviewed by: Yosuf Haydary

As part of Blogtober 2025

Installing K8sGPT

To set it up, I used Homebrew from my terminal:

# brew install k8sgpt

I verified the installation by running it with the help flag:

# k8sgpt --help

Kubernetes debugging powered by AI

Available Commands:
  analyze         This command will find problems within your Kubernetes cluster
  auth            Authenticate with your chosen backend
  cache           For working with the cache the results of an analysis
  completion      Generate the autocompletion script for the...
  custom-analyzer Manage a custom analyzer
  dump            Creates a dumpfile for debugging issues with K8sGPT
  filters         Manage filters for analyzing Kubernetes resources
  generate        Generate Key for your chosen backend (opens browser)
  help            Help about any command
  integration     Integrate another tool into K8sGPT
  serve           Runs k8sgpt as a server
  version         Print the version number of k8sgpt

Integration with Kubernetes

K8sGPT uses the currently active kube-context, just like kubectl, Helm, and other tools. Alternatively, you can pass the path to a kubeconfig manually.
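
For example, if your kubeconfig lives in a non-default location, you can point K8sGPT at it directly (the --kubeconfig flag was available in the version I used; check k8sgpt --help for yours):

# k8sgpt analyze --kubeconfig ~/.kube/config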

What can it do out of the box?

The most interesting subcommand is analyze.

# k8sgpt analyze

It analyzes the cluster, finds problems, and reports them. By default it sums up all the errors and warnings found in the cluster, but it can be scoped to a single namespace.

# k8sgpt analyze -n poc-1
W1021 14:45:06.357546   61267 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
AI Provider: AI not used; --explain not set

0: ConfigMap poc-1/kube-root-ca.crt()
- Error: ConfigMap kube-root-ca.crt is not used by any pods in the namespace

It can also be instructed to output JSON with the -o json flag.

# k8sgpt analyze -n poc-1 -o json
W1021 14:46:08.442471   61498 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
{
  "provider": "",
  "errors": null,
  "status": "ProblemDetected",
  "problems": 1,
  "results": [
    {
      "kind": "ConfigMap",
      "name": "poc-1/kube-root-ca.crt",
      "error": [
        {
          "Text": "ConfigMap kube-root-ca.crt is not used by any pods in the namespace",
          "KubernetesDoc": "",
          "Sensitive": []
        }
      ],
      "details": "",
      "parentObject": ""
    }
  ]
}
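
Since the output is plain JSON, it can be post-processed with standard tools such as jq. For example, to pull out just the error messages (the field names come from the output above):

# k8sgpt analyze -n poc-1 -o json | jq -r '.results[].error[].Text'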

So far, it feels familiar, much like using kubectl and Helm.

Analyzing Operational Problems

Although the default installation without an LLM is somewhat useful, the real potential of K8sGPT comes from pairing it with an LLM.

Integration with Ollama

Integrating it with an LLM is easy. I am using the Llama 3 model served by Ollama. I installed the macOS version of Ollama from Ollama.com, then pulled Llama 3 as follows:

# ollama pull llama3

Verifying successful installation of the model:

# ollama run llama3 "what is kubernetes in just one sentence?"

Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications in a 
distributed environment.
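
K8sGPT will talk to Ollama over its local HTTP API, which listens on port 11434 by default. A quick way to confirm that the API is reachable and the model is available is to query the /api/tags endpoint, which lists locally installed models:

# curl http://localhost:11434/api/tags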

There are two ways to use Ollama and Llama 3 with K8sGPT.

  1. Create a file called k8sgpt.yaml in your user home with the following:
apiVersion: v1
kind: K8sGPT
metadata:
  name: k8sgpt
spec:
  backend: ollama
  model: llama3
  baseurl: http://localhost:11434
  secretRef:
    name: k8sgpt-secret
  2. Manual Setup
# k8sgpt auth add --backend ollama --model llama3 --baseurl http://localhost:11434

To see all the available backend providers use the following command:

# k8sgpt auth list
Default: 
> openai
Active: 
> ollama
Unused: 
> openai
> localai
> azureopenai
...
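
If you don't want to pass --backend on every invocation, k8sgpt also has an auth default subcommand (present in the version I used; check k8sgpt auth --help for yours) that makes Ollama the default provider:

# k8sgpt auth default --provider ollama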

Analyze with an LLM

# k8sgpt analyze --explain --backend ollama

When you run this command with the backend configured, it gives you a list like the one before, but this time it also suggests potential ways to fix each issue. You don't have to look up fixes online; they are shown right in the terminal. The output looks like this:

3: Pod k8sgpt-test/crashloop-pod()
- Error: the last termination reason is Error container=crashloop pod=crashloop-pod
Error: The crashloop pod is stuck in an infinite loop, indicating a problem with the container.

Solution: 
1. Check the container logs for errors.
2. Verify the container's command and arguments.
3. Inspect the pod's environment variables.
4. Run `kubectl describe pod <pod_name>` to gather more information.
5. If necessary, delete the pod and recreate it.

As you can see, it gives me the error, an explanation, and the steps to fix it! This can save you time because you don't have to search the internet for all this information; instead, it is handed to you and you can start applying the suggested changes directly. One caveat, which may depend on my machine's specs: running analyze with the --explain flag makes the analysis of my cluster take noticeably longer.
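
For reference, the problem in this example can be reproduced with a deliberately crashing pod; the namespace and pod name below simply mirror the output above:

# kubectl create namespace k8sgpt-test
# kubectl run crashloop-pod -n k8sgpt-test --image=busybox -- /bin/sh -c "exit 1"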

Wrapping Up

In this blog I showed how to set up K8sGPT on my local machine, how to install Ollama, and how to integrate K8sGPT with my Kubernetes (Minikube) cluster and the Ollama LLM.

So far, K8sGPT seems helpful for getting an overview of a cluster's problems, and its suggestions make sense. However, I would love to create some problems deliberately and then test how much K8sGPT really helps.

In the next blog, I will go deeper and see how this setup can be helpful for my day-to-day Kubernetes Operations. Stay tuned.
