Humio can run as a distributed system in a cluster. Root users can access the Cluster Node Administration Page in the UI from the account menu.
All nodes in the cluster run the same software, which can be configured to assume a combination of different logical roles: Ingest, HTTP API, Digest, and Storage.
A node can have any combination of the four roles, and all play a part in Humio’s data ingest flow.
Below is a short overview of the node types. For a more detailed explanation, refer to the data ingest flow page.
It can be beneficial to specialize a node to assume only one role, since that allows you to better tune cluster performance and cost; see the examples below.
Nodes in the Ingest or HTTP API roles are usually the only nodes exposed to external systems and are placed behind a load balancer.
An Ingest node is a node that is responsible for servicing:
The ingest node receives data from external systems, parses it into events, and passes the data on to the digest nodes.
A node can be configured as a stateless ingest-only node by adding NODE_ROLES=ingestonly to its configuration.
To remain stateless, a node in this role does not join the cluster as seen from the rest of the cluster: it does not show up in the cluster management UI, and it does not get a node ID. This means that TCP/UDP ingest listeners that need to run on these nodes must be configured to run on all nodes, rather than tied to a specific node.
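In a deployment that configures Humio through an environment file, selecting this role comes down to a single variable. A minimal sketch (the file location and any surrounding settings depend on your deployment):

```shell
# Environment for a stateless ingest-only node.
# The node accepts ingest requests, parses them into events, and forwards
# them to the digest nodes; it keeps no cluster state of its own.
NODE_ROLES=ingestonly
```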
An HTTP API node is a node that is responsible for servicing:
The HTTP API node handles all types of HTTP requests, including those of the ingest node. An HTTP API node is visible in the cluster management user interface. It uses the local data directory for cache storage of files.
A node can be configured as an HTTP API node by adding NODE_ROLES=httponly to its configuration.
A digest node is responsible for:
Once a segment file is completed it is passed on to storage nodes.
Digest nodes are designated by adding them to the cluster’s Digest Rules. Any node that appears in the digest rules is a Digest Node.
A digest node must have NODE_ROLES=all in its configuration, but as that is the default value, leaving it out works too.
A storage node is a node that saves data to disk. It is responsible for:
The data directory of a storage node is used to store the segment files, which make up the bulk of all data in Humio.
Storage nodes are configured using the cluster’s Storage Rules. Any node that appears in the storage rules is considered a Storage Node. A node that was previously in a storage rule can still contain segment files that are used for querying.
The Storage Rules are used to configure data replication.
A storage node must have NODE_ROLES=all in its configuration, but as that is the default value, leaving it out works too.
| Features for NODE_ROLES | all | httponly | ingestonly |
|---|---|---|---|
| Accept ingest request on HTTP | Yes | Yes | Yes |
| Accept ingest request on TCP/UDP | Yes | Yes | Yes |
| Accept full set of HTTP API requests | Yes | Yes | No |
| Accept file uploads | Yes | Yes | No |
| Can coordinate queries | Yes | Yes | No |
| Can coordinate alerts | Yes | Yes | No |
| Visible in cluster management UI | Yes | Yes | No |
| Requires significant local storage | Yes | No | No |
| Can store segment files (events) | Yes | No | No |
| Can upload segment files to bucket | Yes | No | No |
| Executes queries on events | Yes | No | No |
| Can be considered stateless | No | No | Yes |
Here are a few examples of how the roles may be applied when setting up a cluster.
A single Humio node is a cluster of just one node, which needs to assume all roles.
In this configuration, all nodes are equal.
All nodes run on similar hardware configurations and all have the default node role, all. The load balancer has all the cluster nodes in its set of backend nodes and dispatches HTTP requests to all of them. The digest and storage partitions should be assigned so that all nodes have a fair share of the partitions where they act as the primary node.
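As a sketch, the load-balancer side of this setup could look like the following nginx fragment. The hostnames are assumptions, and the backend port depends on your configuration (8080 shown here):

```nginx
# All cluster nodes are equal, so every node is a backend.
upstream humio {
    server humio-01.example.com:8080;
    server humio-02.example.com:8080;
    server humio-03.example.com:8080;
}

server {
    listen 80;

    # Dispatch all HTTP requests across the cluster nodes.
    location / {
        proxy_pass http://humio;
    }
}
```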
This configuration allows using potentially cheaper nodes with limited and slow storage as frontend nodes, relieving the more expensive nodes with fast local storage of the tasks that do not require it.
The backend nodes with fast local storage are configured with the node role all and are the ones configured as digest and storage nodes in the cluster.
The cheaper frontend nodes are configured with the node role httponly, and only these are added to the set of nodes known by the load balancer. The backend nodes then never see HTTP requests from outside the cluster.
As the number of cluster nodes required to handle the ingest traffic grows, it may be convenient to add stateless ingest nodes to the cluster, as those do not show up in the cluster management UI and do not need a persistent disk for their data directory. This makes it easier to add and remove this kind of node as demand changes. The new nodes are configured with the role ingestonly.
The load balancing configuration should direct ingest traffic primarily to the current set of stateless ingest nodes and all other HTTP traffic to the HTTP API nodes. Using a separate DNS name or port for this split is recommended, but splitting the traffic based on matching substrings in the URL is also possible.
The extra complexity of also managing this split of the HTTP API requests means that adding dedicated ingest nodes is not worth the effort for smaller clusters.
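A sketch of the URL-based split as an nginx fragment is shown below. The upstream names, hostnames, and port are assumptions, and the /api/v1/ingest prefix is illustrative; verify the exact ingest paths against the API documentation for your version:

```nginx
upstream humio_ingest {
    server ingest-01.example.com:8080;
    server ingest-02.example.com:8080;
}

upstream humio_http {
    server http-01.example.com:8080;
    server http-02.example.com:8080;
}

server {
    listen 80;

    # Ingest requests go to the stateless ingest nodes.
    location /api/v1/ingest {
        proxy_pass http://humio_ingest;
    }

    # Everything else (UI, search, remaining API) goes to the HTTP API nodes.
    location / {
        proxy_pass http://humio_http;
    }
}
```

Using a separate DNS name instead of URL matching keeps the load-balancer configuration simpler, at the cost of pointing ingest clients at a different hostname.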
A cluster node is identified in the cluster by its UUID (universally unique identifier). The UUID is automatically generated the first time a node starts and is stored in $HUMIO_DATA_DIR/cluster_membership.uuid. When moving or replacing a node, you can use this file to ensure the node rejoins the cluster with the same identity.
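As a minimal sketch of such a move: copy the UUID file into the new data directory before the new node's first start, so it rejoins under the old identity. The directories below are temporary stand-ins for the real $HUMIO_DATA_DIR on each machine, and the UUID value is a placeholder:

```shell
set -eu

# Stand-ins for the old and new node's $HUMIO_DATA_DIR.
OLD_DATA=$(mktemp -d)
NEW_DATA=$(mktemp -d)

# Humio generated this file on the old node's first start.
echo "example-uuid-0001" > "$OLD_DATA/cluster_membership.uuid"

# Copy the UUID file into the new data directory *before* starting Humio
# there, so the new node keeps the old node's cluster identity.
cp "$OLD_DATA/cluster_membership.uuid" "$NEW_DATA/cluster_membership.uuid"

cat "$NEW_DATA/cluster_membership.uuid"
```

In a real migration the rest of the data directory (or the segment files) would typically be moved alongside this file, or re-replicated from the cluster.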