After establishing a Humio Cloud account or installing Humio on a server, you’ll want to set up a system that feeds data into Humio automatically. There are a few significant steps to doing that:
Create a Repository
It’s a simple task to create a repository in Humio. The heading here contains a link that takes you to the page explaining how.
Generate Ingest Tokens
This is a special API token used only for the Ingest API. The data shipper software on your servers (see the next step) uses it to authenticate when sending data. A small example of how the token is used appears just after this list of steps.
Install & Configure Data Shippers
This is introduced below, but covered in more detail in the Data Shippers section of this Documentation.
Parse the Data
To view server logs and metrics, you’ll need to parse the data. This is covered in the Parse the Data section.
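Before looking at the shipping options in detail, it may help to see what the ingest token is actually used for. Below is a minimal sketch, in Python with the `requests` library, of sending a single event to the structured ingest endpoint. The host, token value, tags, and attributes are placeholders, and the endpoint path and payload shape are assumptions based on the humio-structured Ingest API; check the Ingest API documentation for the exact format.

```python
import requests

# Placeholders -- substitute your own Humio host and the ingest token for your repository.
HUMIO_URL = "https://cloud.humio.com/api/v1/ingest/humio-structured"
INGEST_TOKEN = "your-ingest-token"

# One batch of events; the tags let Humio route and group the data.
payload = [
    {
        "tags": {"host": "webserver-1", "source": "example-app"},
        "events": [
            {
                "timestamp": "2024-01-01T12:00:00+00:00",
                "attributes": {"message": "user logged in", "userID": "42"},
            }
        ],
    }
]

# The ingest token is passed as a bearer token; it authorizes ingestion only.
response = requests.post(
    HUMIO_URL,
    json=payload,
    headers={"Authorization": f"Bearer {INGEST_TOKEN}"},
)
response.raise_for_status()
```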
Data shipping can be done in a couple of ways:
In most cases you will want to use a data shipper or one of our platform integrations. If you are interested in getting some data into Humio quickly, take a look at the Ingesting Application Logs guide.
Below is an overview of how the respective flows of sending data to Humio work:
Note that Humio is optimized for live streaming of events in real time. If you ship data that is not live, you need to follow some basic rules so that the resulting events are stored in Humio as efficiently as if they had arrived live. See Backfilling.
A Data Shipper is a system tool that looks at files and system properties on a server and sends them to Humio. Data shippers take care of buffering, retransmitting lost messages, log file rolling, network disconnects, and a slew of other things, so your data or logs are sent to Humio reliably.
In Figure 1 here, “Your Application” is writing logs to a log file. The data shipper reads the data and pre-processes it (for example, converting a multiline stack trace into a single event). It then ships the data to Humio using one of the Ingest APIs.
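To make the pre-processing step concrete, here is a deliberately simplified sketch of what a data shipper does: it reads a log file, folds indented continuation lines (such as the lines of a stack trace) into the preceding event, and posts the result in batches. The host, token, file path, and field names are placeholders, and the endpoint and payload shape are assumptions based on the unstructured Ingest API. A real data shipper also handles buffering, retries, file rotation, and network failures, which this sketch deliberately leaves out.

```python
import requests

# Placeholders -- substitute your own Humio host and ingest token.
HUMIO_URL = "https://cloud.humio.com/api/v1/ingest/humio-unstructured"
INGEST_TOKEN = "your-ingest-token"

def send_batch(messages):
    """Ship a batch of raw log lines; the parser assigned to the token turns them into events."""
    payload = [{"fields": {"host": "webserver-1"}, "messages": messages}]
    response = requests.post(
        HUMIO_URL,
        json=payload,
        headers={"Authorization": f"Bearer {INGEST_TOKEN}"},
    )
    response.raise_for_status()

def read_events(path):
    """Read a log file and merge continuation lines (for example, the indented
    lines of a stack trace) into the preceding event."""
    current = None
    with open(path) as log_file:
        for line in log_file:
            line = line.rstrip("\n")
            if line.startswith((" ", "\t")) and current is not None:
                current += "\n" + line      # continuation of a multiline event
            else:
                if current is not None:
                    yield current
                current = line              # start of a new event
    if current is not None:
        yield current

# Collect merged events and ship them in batches of 100.
batch = []
for event in read_events("/var/log/example-app.log"):
    batch.append(event)
    if len(batch) >= 100:
        send_batch(batch)
        batch = []
if batch:
    send_batch(batch)
```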
You can find a list of supported data shippers in the Data Shippers section of the Documentation.
If you want to get logs and metrics from your deployment platform, such as a Kubernetes cluster or your company’s PaaS, see the Provisioning and Containers sections.
Depending on your platform, the data flow will look slightly different from Figure 2 here. Some systems use a built-in logging subsystem, while others have you run a container with a data shipper in it. Usually you mark containers or instances in some way (for example, with labels or annotations) to indicate which repository and parser should be used at ingest.
Take a look at the individual integrations pages for more details.
If your needs are simple and you don’t care too much about potential data loss due to, for example, network problems, you can also use Humio’s Ingest API directly or through one of Humio’s client libraries.
As you can see, this is by far the simplest flow, and is completely appropriate for some scenarios, like analytics.
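As a rough illustration of the direct flow, the sketch below sends a single analytics-style event straight from application code, with a simple retry loop as the only safeguard. The host, token, tags, and event fields are placeholders, and the endpoint and payload shape are the same assumptions as in the earlier examples; without buffering or a persistent queue, an event whose retries all fail is simply lost.

```python
import time
from datetime import datetime, timezone

import requests

# Placeholders -- substitute your own Humio host and ingest token.
HUMIO_URL = "https://cloud.humio.com/api/v1/ingest/humio-structured"
INGEST_TOKEN = "your-ingest-token"

def send_event(attributes, retries=3):
    """Send one event directly from application code.

    There is no buffering or persistent queue: if every retry fails,
    the event is lost. That is the trade-off accepted by skipping a
    data shipper."""
    payload = [{
        "tags": {"source": "analytics"},
        "events": [{
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "attributes": attributes,
        }],
    }]
    for attempt in range(retries):
        try:
            response = requests.post(
                HUMIO_URL,
                json=payload,
                headers={"Authorization": f"Bearer {INGEST_TOKEN}"},
                timeout=5,
            )
            response.raise_for_status()
            return
        except requests.RequestException:
            time.sleep(2 ** attempt)  # simple backoff before the next attempt

send_event({"event": "page_view", "page": "/pricing", "userID": "42"})
```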