Running Humio on Kubernetes
Beta

If you are looking for information about shipping data from a Kubernetes cluster to Humio without running Humio in Kubernetes, please see our Kubernetes platform integration documentation.

Installation using Helm

The easiest way to install Humio in Kubernetes is to use the official Humio Helm charts.

Directions for installing Helm for your particular OS flavor can be found on the Helm GitHub page.
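On Linux and macOS, one common shortcut is the installer script published in the Helm repository (shown here as a convenience; review any script before piping it to a shell):

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash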

Once Helm is installed, add the Humio chart repository and update your local chart cache. This repository contains the main Humio chart and its subcharts.

The chart depends on the Confluent Helm charts, which are included automatically when running the installation below.

helm repo add humio https://humio.github.io/humio-helm-charts
helm repo update
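To verify that the repository was added, search it for the Humio chart (helm search repo is the Helm v3 form; Helm v2 uses helm search):

helm search repo humio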

Now create a values.yaml file. Adjust the version, resources, and JVM memory as appropriate for the nodes on which the pods will be scheduled. jvm.xmx and jvm.maxDirectMemorySize should each be set to half of the memory allocated to the pod.
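If you are unsure what the nodes can provide, a quick check of allocatable CPU and memory uses standard kubectl output (the grep window is just a convenience):

kubectl describe nodes | grep -A 5 Allocatable

With those numbers in hand, a values.yaml for large nodes might look like the following: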

---
humio-core:
  enabled: true

  # The number of Humio pods
  replicas: 3

  # Use a custom version of Humio.
  image: humio/humio-core:<version>

  # Custom partitions
  ingest:
    initialPartitionsPerNode: 4
  storage:
    initialPartitionsPerNode: 4

  # Custom CPU/Memory resources
  resources:
    limits:
      cpu: 30
      memory: 220Gi
    requests:
      cpu: 28
      memory: 220Gi

  # Custom JVM memory settings (these will depend on resources defined)
  jvm:
    xss: 2m
    xms: 4g
    xmx: 110g
    maxDirectMemorySize: 110g
    extraArgs: -XX:+UseParallelOldGC -XX:+UnlockDiagnosticVMOptions -XX:CompileCommand=dontinline,com/humio/util/HotspotUtilsJ.dontInline -Xlog:gc+jni=debug:stdout -Dakka.log-config-on-start=on -Xlog:gc*:stdout:time,tags

  affinity:
    # Affinity policy to prevent multiple Humio pods per node (recommended)
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - humio-core
        topologyKey: "kubernetes.io/hostname"

global:
  sharedTokens:
    fluentbit: {kubernetes: in-cluster}

These settings tell Helm to create a default three-node Humio cluster with Kafka and ZooKeeper. Helm will also create a Fluent Bit daemonset that collects logs from any pods running in the Kubernetes cluster and autodiscovers the Humio endpoint and token. We recommend installing Humio into its own namespace; in this example we’re using the logging namespace.

# Helm v3+
helm install humio humio/humio-helm-charts --namespace logging -f values.yaml
# Helm v2
helm install humio/humio-helm-charts --name humio --namespace logging -f values.yaml
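The pods can take a few minutes to become ready. You can watch them come up with a standard kubectl command; the Humio, Kafka, ZooKeeper, and Fluent Bit pods will all appear in the logging namespace:

kubectl get pods -n logging -w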

Logging in after installation

There are a few ways to get the URL for a Humio cluster. In most cases, grabbing the load balancer URL is sufficient:

kubectl get service humio-humio-core-http -n logging -o go-template --template='http://{{(index .status.loadBalancer.ingress 0 ).ip}}:8080'

If you’re running in Minikube, run this command instead:

minikube service humio-humio-core-http -n logging --url
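Either way, you can confirm the cluster responds before logging in. This is a minimal check, assuming the URL from one of the commands above has been stored in HUMIO_URL; /api/v1/status is Humio's status endpoint:

export HUMIO_URL=http://<load-balancer-ip>:8080  # substitute the URL from above
curl -s "$HUMIO_URL/api/v1/status"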

If humio-core.authenticationMethod is set to single-user (the default), you will need to supply a username and password when logging in. The default username is developer, and the password can be retrieved with the following command:

kubectl get secret developer-user-password -n logging -o=template --template={{.data.password}} | base64 -D

The base64 command may vary depending on OS and distribution.
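On most Linux distributions the decode flag is lowercase (-d or --decode), so the equivalent command would be:

kubectl get secret developer-user-password -n logging -o=template --template={{.data.password}} | base64 -d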

Additional Helm customization

For a full list of customizations, reference the Helm chart.
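Individual values can also be overridden on the command line without editing values.yaml. For example, a hypothetical one-off change scaling the cluster to five Humio pods (humio-core.replicas is the value shown in the example above):

helm upgrade humio humio/humio-helm-charts --namespace logging -f values.yaml --set humio-core.replicas=5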

Upgrading with Kubernetes

For more information on steps to update Humio with Kubernetes, see updating Humio.
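As a rough sketch under Helm v3, a chart upgrade follows the usual pattern of refreshing the repository and re-running the release with your existing values (see the linked page for Humio-specific upgrade steps):

helm repo update
helm upgrade humio humio/humio-helm-charts --namespace logging -f values.yaml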

Uninstalling

This will destroy all Humio data.

# Helm v3+
helm uninstall humio --namespace logging
# Helm v2
helm delete --purge humio

kubectl delete namespace logging --cascade=true