Humio is a fast and flexible platform for logs and metrics, available self-hosted or through Humio Cloud.

Humio Cloud

Humio Cloud is currently offered in two regions, EU and US. Within the documentation, we will refer to “http://$YOUR_HUMIO_URL/”, where $YOUR_HUMIO_URL is the URL for your particular Humio Cloud installation.

Humio Self-hosted

If you choose to self-host your Humio instance, there are two primary ways to install it:

  • Running it in a Docker container, or
  • Running it as a JAR file

If you are getting started with Humio, we recommend running it as a Docker container, since the Docker image bundles the external dependencies Humio needs: Kafka and ZooKeeper. If you plan to run Humio on bare metal, please refer to our Bare Metal Installation Guide.

For information on how to choose hardware, and how to size your Humio installation, see Humio instance sizing.


Hardware Requirements

Hardware requirements depend on:

  • how much data you will be ingesting, and
  • how many concurrent searches you will be running

Scaling Your Environment

Humio was built to scale, and scales well across the nodes in a cluster. Running a cluster of three or more Humio nodes provides higher capacity in terms of both ingest and search performance, and also allows high availability by replicating data to more than one node.

If you want to run Humio as a cluster, please review Cluster Setup.

Estimating Resources

Here are a few guidelines to help you determine what hardware you’ll need.

  1. Assume data compresses 9x on ingest. Test your installation; better compression means better performance.
  2. You need to be able to hold 48 hours of compressed data in 80% of your RAM.
  3. You want enough hyper-threads/vCPUs (each giving you 1 GB/s search) to be able to search 24 hours of data in less than 10 seconds.
  4. You need disk space to hold your compressed data. Never fill your disk more than 80%.

Example Setup

Your machine has 64 GB of RAM, 8 hyper-threads (4 cores), and 1 TB of storage. Assuming 9x compression, 80% of its RAM can hold roughly 460 GB of ingested data in compressed form, and its 8 hyper-threads can search 8 GB/s. Ten seconds of query time therefore runs through 80 GB of data, so this machine fits an 80 GB/day ingest, with more than 5 days' data available for fast querying. You can store 7.2 TB of ingested data before your disk is 80% full, corresponding to 90 days at an 80 GB/day ingest rate.
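The arithmetic behind this example can be sketched as a small back-of-envelope calculation. This is a hypothetical helper, not part of Humio; the constants simply restate the guidelines above (9x compression, 80% of RAM, 1 GB/s per hyper-thread, 10-second query target, disk no more than 80% full).

```python
# Back-of-envelope Humio sizing, following the guidelines above.
# Hypothetical helper for illustration only.

COMPRESSION = 9          # assumed ingest compression ratio (guideline 1)
RAM_FRACTION = 0.8       # fraction of RAM usable for compressed data (guideline 2)
GBPS_PER_THREAD = 1      # search throughput per hyper-thread (guideline 3)
QUERY_SECONDS = 10       # target time to search 24 hours of data (guideline 3)
DISK_FRACTION = 0.8      # never fill the disk beyond this (guideline 4)

# The example machine: 64 GB RAM, 8 hyper-threads, 1 TB disk.
ram_gb, threads, disk_gb = 64, 8, 1000

# Ingested (uncompressed) data that fits, compressed, in 80% of RAM.
ram_ingest_gb = ram_gb * RAM_FRACTION * COMPRESSION        # ~460 GB

# Daily ingest that can be searched within the 10-second target.
daily_ingest_gb = threads * GBPS_PER_THREAD * QUERY_SECONDS  # 80 GB/day

# How many days of data stay available for fast (in-RAM) querying.
fast_query_days = ram_ingest_gb / daily_ingest_gb            # ~5.8 days

# Retention before the disk reaches 80% full.
retention_days = disk_gb * DISK_FRACTION * COMPRESSION / daily_ingest_gb  # 90 days
```

Plugging in your own RAM, thread count, and disk size gives a first estimate; measure your actual compression ratio, since real data often compresses better or worse than 9x.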

This example assumes that all data has the same retention settings. But you can configure Humio to automatically delete some events before others, allowing some data to be kept for several years while other data gets deleted after one week, for example.

For more details, refer to our Instance Sizing Reference.

Configuration Options

Please refer to the configuration reference page.