Ingest Flow

When data arrives at Humio it needs to be processed. The journey data takes from arriving at a Humio node until it is presented in search results and saved to disk is called the ingest flow.

If you are planning a large system or tuning the performance of your Humio cluster, it helps to understand the flow of data. For example, if you understand the different phases of the ingest flow, you can ensure that the right machines have the optimal hardware configuration.

In this section we’ll explain the different ingest phases and how nodes participate.

Parse, Digest, Store

There are three phases incoming data goes through:

```mermaid
graph LR;
  x{"Data"} --> Parse
  Parse --> Digest
  Digest --> Store
```
  1. Parse: Receiving messages over the wire and processing them with parsers.
  2. Digest: Building segment files and buffering data for real-time queries.
  3. Store: Replicating the segment files to designated storage nodes.

These phases may be handled by different nodes in a Humio cluster, but any node can take part in any combination of the three phases.

The Parse Phase

```mermaid
graph LR;
  Data{"Data"} --> Parse
  Parse --> Digest
  Digest --> Store
  style Parse fill:#2ac76d;
```

When a system sends data (e.g. logs) to Humio over one of the Ingest APIs or through an ingest listener, the cluster node that receives the request is called the arrival node. The arrival node parses the incoming data (using the configured parsers) and puts the result (called events) on Humio's humio-ingest Kafka queue.
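
To make this concrete, here is a minimal sketch of a client shipping a small batch of events to Humio's structured ingest endpoint. The payload shape follows the structured ingest format, but the host name, token, tags, and event fields below are hypothetical placeholders:

```python
import requests

# Hypothetical host and ingest token -- substitute your own values.
HUMIO_URL = "https://humio.example.com/api/v1/ingest/humio-structured"
INGEST_TOKEN = "your-ingest-token"

# A batch in the structured ingest format: each element carries a set
# of tags plus the events that share them.
payload = [
    {
        "tags": {"host": "web-01", "source": "access-log"},
        "events": [
            {
                "timestamp": "2023-03-23T12:00:00+00:00",
                "attributes": {"method": "GET", "status": 200},
            }
        ],
    }
]

# The arrival node that receives this request runs the configured
# parsers and places the resulting events on the humio-ingest queue.
response = requests.post(
    HUMIO_URL,
    json=payload,
    headers={"Authorization": f"Bearer {INGEST_TOKEN}"},
)
response.raise_for_status()
```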

If you are not familiar with Kafka, don't worry.

```mermaid
graph LR;
  subgraph External
    Ext1[External Service]
  end
  subgraph Ingest
    Ext1 --> A1("Arrival Node")
  end
  subgraph Kafka
    P1["Partition #1"]
    A1 --> P2["Partition #2"]
    PN["Partition #N"]
  end
```
Events are parsed and placed on a random digest partition.

The events are now ready to be processed by a Digest Node.
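
For illustration, the hand-off to Kafka might be modeled like this: the arrival node serializes each parsed event and sends it to a randomly chosen partition of the humio-ingest topic. This sketch uses the kafka-python client; Humio's actual producer is internal, and the broker address, partition count, and serialization here are assumptions:

```python
import json
import random

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",  # hypothetical broker address
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

NUM_PARTITIONS = 24  # illustrative; set by the cluster configuration

def enqueue(event: dict) -> None:
    """Place one parsed event on a random digest partition."""
    partition = random.randrange(NUM_PARTITIONS)
    producer.send("humio-ingest", value=event, partition=partition)
```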

The Digest Phase

```mermaid
graph LR;
  Data{"Data"} --> Parse
  Parse --> Digest
  Digest --> Store
  style Digest fill:#2ac76d;
```

After the events are placed on the humio-ingest queue, a Digest Node will grab them off the queue as soon as possible. A queue in Kafka is configured with a number of partitions (parallel streams), and each such Kafka partition is consumed by a digest node. A single node can consume multiple partitions, and exactly which node handles which digest partition is defined in the cluster's Digest Rules.
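
Conceptually, the digest rules boil down to a table mapping each partition to the node responsible for consuming it. The sketch below is a deliberately simplified, hypothetical model of that lookup, not Humio's actual data structure:

```python
# Hypothetical digest rules: partition number -> ordered list of node
# IDs. The first node is the current digester; the rest can take over.
DIGEST_RULES = {
    0: [1, 2],
    1: [2, 3],
    2: [3, 1],
    3: [1, 3],
}

def digest_node_for(partition: int) -> int:
    """Return the node currently responsible for a digest partition."""
    return DIGEST_RULES[partition][0]

# Node 1 consumes partitions 0 and 3; node 2 consumes partition 1, etc.
assert digest_node_for(0) == digest_node_for(3) == 1
```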

Constructing Segment Files

Digest nodes are responsible for buffering new events and compiling segment files (the files that are written to disk in the Store phase).

Once a segment file is full, it is passed on to Storage Nodes in the Store Phase.
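
One way to picture this: a digest node accumulates events in an in-memory buffer and flushes it to a segment file once a threshold is reached. The following toy model illustrates the idea; the threshold, file naming, and compressed-JSON format are stand-ins for Humio's internal segment format:

```python
import gzip
import json
import uuid

class SegmentBuilder:
    """Toy model of a digest node buffering events into segment files."""

    def __init__(self, max_events: int = 100_000):
        self.max_events = max_events  # illustrative flush threshold
        self.buffer: list[dict] = []

    def add(self, event: dict) -> None:
        self.buffer.append(event)
        if len(self.buffer) >= self.max_events:
            self.flush()

    def flush(self) -> str:
        # Stand-in for a real segment file: compressed JSON on disk.
        path = f"segment-{uuid.uuid4()}.json.gz"
        with gzip.open(path, "wt") as f:
            json.dump(self.buffer, f)
        self.buffer = []
        return path  # a finished segment moves on to the Store phase
```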

Real-Time Query Results

Digest nodes also process the Real-Time part of search results. Whenever a new event is pulled off the humio-ingest queue, the digest node examines it and updates the results of any matching live searches that are currently in progress. This is what makes events appear in search results almost instantly after they arrive in Humio.
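
A conceptual model of the live part: each live search holds a filter predicate and a running result, and every event coming off the queue is tested against all of them. This sketch is purely illustrative and does not reflect the internals of Humio's query engine:

```python
# Each live search keeps a predicate and a running result (a count here).
live_searches = {
    "errors": {"match": lambda e: e.get("level") == "ERROR", "count": 0},
    "slow": {"match": lambda e: e.get("duration_ms", 0) > 500, "count": 0},
}

def on_event(event: dict) -> None:
    """Called for each event pulled off the humio-ingest queue."""
    for search in live_searches.values():
        if search["match"](event):
            search["count"] += 1  # live results update immediately

on_event({"level": "ERROR", "duration_ms": 42})
assert live_searches["errors"]["count"] == 1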

The Store Phase

```mermaid
graph LR;
  Data{"Data"} --> Parse
  Parse --> Digest
  Digest --> Store
  style Store fill:#2ac76d;
```

The final phase of the ingest flow is saving segment files to storage. Once a segment file has been completed in the digest phase, it is saved in X replicas; how many depends on how your cluster is configured (see Storage Rules).
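
Replica placement can be thought of as choosing X storage nodes per segment. The sketch below picks replicas deterministically from a hash of the segment ID; this is only an illustration, as the real placement is governed by the cluster's storage rules:

```python
import hashlib

STORAGE_NODES = ["node-1", "node-2", "node-3", "node-4"]  # hypothetical
REPLICATION_FACTOR = 2  # the "X replicas" from the storage rules

def replica_nodes(segment_id: str) -> list[str]:
    """Choose which storage nodes receive a copy of a segment."""
    digest = hashlib.sha256(segment_id.encode()).hexdigest()
    start = int(digest, 16) % len(STORAGE_NODES)
    return [
        STORAGE_NODES[(start + i) % len(STORAGE_NODES)]
        for i in range(REPLICATION_FACTOR)
    ]

print(replica_nodes("segment-abc"))  # two nodes hold this segment
```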

Detailed Flow Diagram

Now that we have covered all the phases, let’s put the pieces together and give you a more detailed diagram of the complete ingest flow:

```mermaid
graph LR;
  subgraph External Systems
    Ext1[Data Producer]
    Ext2[Log System]
  end
  subgraph Humio Cluster
    subgraph Parse
      A1("Arrival Node #1<br>Parsing")
      A2("Arrival Node #2")
      AN("Arrival Node #N")
      Ext1 -->|Ingest| A1
      Ext2 -->|Ingest| A1
    end
    subgraph Kafka
      A1 -.-> P1["Partition #1"]
      A1 --> P2["Partition #2"]
      A1 --> P3["Partition #3"]
      A1 -.-> PN["Partition #M"]
    end
    subgraph Digest
      P2 --> D1("Digest Node 1<br>Real-Time Query Processing<br>Builds Segment Files")
      P3 --> D1
      D2("Digest Node #2")
      DN("Digest Node #P")
    end
    subgraph Store
      D1 --> S1("Storage Node #1")
      D1 --> S2("Storage Node #2")
      D1 --> SN("Storage Node #Q")
    end
  end
  classDef faded opacity:0.3;
  class A2,AN,P1,PN,D2,DN,S1 faded;
```
Figure 2: The diagram shows a more detailed view of the ingestion process, with two external systems sending data to Humio. The incoming data is first parsed by one of the Arrival Nodes and then put on the ingest queue for a Digest Node. The Digest Node writes the data to segment files, and finally the segment files are sent to Storage Nodes to be saved to disk.