S3 Archiving

Humio supports archiving ingested logs to Amazon S3. The archived logs are then available for further processing in any external system that integrates with S3.

Archiving works by running a periodic job inside all Humio nodes which looks for new, unarchived segment files. The segment files are read from disk, streamed to an S3 bucket, and marked as archived in Humio.

Archiving must be set up per repository by an admin user. Select a repository on the Humio front page; the configuration page is then available under Settings.

For slow-moving datasources it can take some time before segment files are completed on disk and made available to the archiving job. In the worst case, a segment file is not completed until it either contains a gigabyte of uncompressed data or seven days have passed. This limitation will be removed in a future version of Humio.

For more information on segment files and datasources, see segment files and datasources.

S3 Layout

When uploading a segment file, Humio creates the S3 object key based on the tags, start date, and repository name of the segment file. The resulting object key makes the archived data browseable through the S3 management console.

Humio uses the following pattern:

REPOSITORY/TYPE/TAG_KEY_1/TAG_VALUE_1/../TAG_KEY_N/TAG_VALUE_N/YEAR/MONTH/DAY/START_TIME-SEGMENT_ID.gz
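
As a purely hypothetical illustration, a segment from a repository named example-repo with the tag host=webserver-1 and a start date of 2019-03-07 could be archived under a key such as the following (the repository and tag values are made up; TYPE, START_TIME, and SEGMENT_ID are left as placeholders from the pattern above):

example-repo/TYPE/host/webserver-1/2019/03/07/START_TIME-SEGMENT_ID.gz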

Read more about Tags.

Format

The default archiving format is NDJSON; raw log lines are available as an option. When using NDJSON, the parsed fields are archived along with the raw log line. This incurs some extra storage cost compared to raw log lines, but makes the logs easier to process in an external system.
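
As a purely illustrative sketch, one archived NDJSON event from a hypothetical access-log parser could look like the line below. The parsed field names (method, statuscode) and values are assumptions for illustration; the exact fields depend on the parser and the event.

{"@timestamp": 1551967200000, "@rawstring": "127.0.0.1 GET /index.html 200", "method": "GET", "statuscode": "200"}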

On-site Setup

For an on-site installation of Humio, you need an IAM user with write access to the buckets used for archiving. That user must have programmatic access to S3, so when adding a new user through the AWS console, make sure Programmatic access is ticked:

Adding user with programmatic access.

Later in the process you can retrieve the access key and secret key:

Access key and secret key.

In Humio, the keys are needed in the following configuration:

S3_ARCHIVING_ACCESSKEY=$ACCESS_KEY
S3_ARCHIVING_SECRETKEY=$SECRET_KEY

The keys are used to authenticate the user against the S3 service. For more guidance on how to retrieve S3 access keys, see AWS access keys. For more details, see creating a new user in IAM.
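
As a sketch of how a dedicated user and its keys could be provisioned programmatically rather than through the console, the following uses the AWS SDK for Python; the user name humio-archiver is hypothetical and only chosen for illustration.

import boto3

iam = boto3.client("iam")

# Create a dedicated IAM user for archiving (the name is hypothetical).
iam.create_user(UserName="humio-archiver")

# Create programmatic credentials for the user; these values correspond to
# the S3_ARCHIVING_ACCESSKEY / S3_ARCHIVING_SECRETKEY settings shown above.
key = iam.create_access_key(UserName="humio-archiver")["AccessKey"]
print("S3_ARCHIVING_ACCESSKEY=" + key["AccessKeyId"])
print("S3_ARCHIVING_SECRETKEY=" + key["SecretAccessKey"])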

Configuring the user to have write access to a bucket can be done by attaching a policy to the user.

IAM User Example Policy

The following JSON is an example policy configuration.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        }
    ]
}

The policy can be used as an inline policy attached directly to the user through the AWS console:

Attach inline policy.
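
Alternatively, the same policy could be attached as an inline policy programmatically. The following is a minimal sketch using the AWS SDK for Python; the bucket name, user name, and policy name are hypothetical.

import json
import boto3

iam = boto3.client("iam")

bucket_name = "humio-archive-bucket"  # hypothetical bucket name

# Same statements as the example policy above, with the bucket name filled in.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:ListBucket"],
         "Resource": [f"arn:aws:s3:::{bucket_name}"]},
        {"Effect": "Allow", "Action": ["s3:PutObject", "s3:GetObject"],
         "Resource": [f"arn:aws:s3:::{bucket_name}/*"]},
    ],
}

# Attach the policy inline to the archiving user (names are hypothetical).
iam.put_user_policy(
    UserName="humio-archiver",
    PolicyName="humio-s3-archiving",
    PolicyDocument=json.dumps(policy),
)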

Cloud Setup

Enabling Humio Cloud to write to your S3 bucket means setting up AWS cross-account access:

  1. Log in to the AWS console and navigate to the S3 service page.
  2. Click the name of the bucket where archived logs should be written.
  3. Click Permissions > Add User.
  4. Enter the canonical ID for Humio (f2631ff87719416ac74f8d9d9a88b3e3b67dc4e7b1108902199dea13da892780), check the Object Access Read and Write permissions, then click Save. A programmatic sketch of this grant is shown after this list.
  5. In Humio, go to the repository you want to archive and select Settings > S3 Archiving. Configure it with the bucket name and region, then click Save.
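
For reference, granting the canonical ID read and write object access through the bucket ACL roughly corresponds to the sketch below, written with the AWS SDK for Python. This is only an illustration under the assumption of a hypothetical bucket name; the console steps above are the documented way to set it up.

import boto3

s3 = boto3.client("s3")
bucket = "humio-archive-bucket"  # hypothetical bucket name
humio_canonical_id = "f2631ff87719416ac74f8d9d9a88b3e3b67dc4e7b1108902199dea13da892780"

# Fetch the existing ACL so current grants and the bucket owner are preserved.
acl = s3.get_bucket_acl(Bucket=bucket)
grantee = {"Type": "CanonicalUser", "ID": humio_canonical_id}
grants = acl["Grants"] + [
    {"Grantee": grantee, "Permission": "WRITE"},
    {"Grantee": grantee, "Permission": "READ"},
]

# Write the ACL back with the Humio grants added.
s3.put_bucket_acl(
    Bucket=bucket,
    AccessControlPolicy={"Grants": grants, "Owner": acl["Owner"]},
)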

Tag Grouping

If tag grouping is defined for a repository, the segment files will be split by each unique combination of tags present in a file. This results in one file in S3 per unique combination of tags. The same layout pattern is used as in the normal case. The reason for doing this is to make it easier for a human operator to determine whether a log file is relevant.
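
As an illustration of how the layout supports browsing, archived files for one tag combination could be listed with a prefix query, for example with the AWS SDK for Python. The bucket, repository, and tag names below are hypothetical, and the prefix simply follows the REPOSITORY/TYPE/TAG_KEY/TAG_VALUE layout described above.

import boto3

s3 = boto3.client("s3")

# List archived segments for one tag combination in a hypothetical repository.
resp = s3.list_objects_v2(
    Bucket="humio-archive-bucket",                   # hypothetical bucket name
    Prefix="example-repo/accesslog/host/webserver-1/",  # hypothetical repo and tag
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])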

S3 archiving is only available in Humio version 1.1.27 and later.