To configure Humio’s basic functionality, you’ll set environment variables. The example configuration file below contains comments describing each option.
When running Humio in Docker, you can keep your configuration in a file and pass it with the
`--env-file=` flag. For a quick introduction to setting configuration options,
see Running Humio as a Docker Container.
Docker only loads the environment file when the container is first created.
If you change the settings in your environment file, simply
stopping and starting the container will not apply them. You need to
`docker rm` the container and
`docker run` it again to pick up the changes.
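For example, the recreate cycle can be sketched as follows. The container name, env-file path, data directory, and image tag here are illustrative assumptions, not required values:

```shell
# Stop and remove the old container. Data lives on the mounted
# volume, not inside the container, so this does not delete data.
docker stop humio
docker rm humio

# Recreate the container; --env-file is only read at creation time.
docker run -d --name humio \
  --env-file=/etc/humio/humio.env \
  -v /data/humio-data:/data/humio-data \
  -p 8080:8080 \
  humio/humio:latest
```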
# The stack size should be at least 2M.
HUMIO_JVM_ARGS=-Xss2M

# Make Humio write a backup of the data files:
# Backup files are written to mount point "/backup" by default (when run in the Humio Docker containers).
# Otherwise the backup directory can be specified.
# By default, data in backup is deleted 7 days after it has been deleted in Humio. This behavior is configurable.
BACKUP_NAME=my-backup-name
BACKUP_DIR="/backup"
BACKUP_KEY=my-secret-key-used-for-encryption
DELETE_BACKUP_AFTER_MILLIS=604800000

# ID to choose for this server when starting up the first time.
# Leave commented out to autoselect the next available ID.
# If set, the server refuses to run unless the ID matches the state in data.
# If set, it must be a positive nonzero integer.
# Numbers in the range of 1 through 511 are recommended.
BOOTSTRAP_HOST_ID=1

# The URL that other Humio hosts in the cluster can use to reach this server.
# Required for clustering. Examples: https://humio01.example.com or http://humio01:8080
# Security: We recommend using a TLS endpoint.
# If all servers in the Humio cluster share a closed LAN, using plain endpoints may be OK.
EXTERNAL_URL=https://humio01.example.com

# The URL which users/browsers will use to reach the server.
# This URL is used to create links to the server.
# It is important to set this property when using OAuth authentication or alerts.
PUBLIC_URL=https://humio.mycompany.com

## How long dashboard queries should be kept running if they are not polled.
## When opening a dashboard, results will be immediately ready if queries are running.
## Default is 3 days.
IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES=4320

## Warn when ingest is delayed.
## How far ingest may fall behind before a warning is shown in the search UI.
WARN_ON_INGEST_DELAY_MILLIS=30000

# Specify the replication factor for the Kafka ingest queue.
INGEST_QUEUE_REPLICATION_FACTOR=2

# Kafka bootstrap servers list. Used as `bootstrap.servers` towards Kafka.
# Should be set to a comma-separated string of host:port pairs.
# Example: `my-kafka01:9092` or `kafkahost01:9092,kafkahost02:9092`
KAFKA_SERVERS=kafkahost01:9092,kafkahost02:9092

# By default, Humio will create topics and manage the number of replicas in Kafka for the topics being used.
# If you run Humio on top of an existing Kafka or want to manage this outside of Humio, set this to false.
KAFKA_MANAGED_BY_HUMIO=true

# Deletes events from the ingest queue when they have been saved in Humio.
# It is still important to configure Kafka retention on the ingest queue.
# The Kafka retention defines how long data can be kept on the ingest queue and, thus, how much time Humio has to read the data and store it internally.
DELETE_ON_INGEST_QUEUE=true

# It is possible to add extra Kafka configuration properties by creating a properties file and pointing to it.
# These properties are added to all Kafka producers and consumers in Humio.
# For example, this enables Humio to connect to a Kafka cluster using SSL and SASL.
# Note: the file must be mapped into Humio's Docker container if running Humio as a Docker container.
EXTRA_KAFKA_CONFIGS_FILE=extra_kafka_properties.properties

# Add a prefix to the topic names in Kafka.
# Adding a prefix is recommended if you share the Kafka installation with applications other than Humio.
# The default is not to add a prefix.
HUMIO_KAFKA_TOPIC_PREFIX=

# Zookeeper servers.
# Defaults to "localhost:2181", which is OK for a single-server system, but
# should be set to a comma-separated string of host:port pairs.
# Example: zoohost01:2181,zoohost02:2181,zoohost03:2181
# Note: there is no security on the Zookeeper connections. Keep them inside a trusted LAN.
ZOOKEEPER_URL=localhost:2181

# Maximum number of datasources (unique tag combinations) in a repo.
# There will be a sub-directory for each combination that exists.
# (Since v1.1.10)
MAX_DATASOURCES=10000

# Strategy for compression: compress fast in the digest pipeline, or compress highly later.
# fast: Compress using LZ4 in the digest pipeline. This is what all versions up to 1.5.x did.
# high: Compress using LZ4 in the digest pipeline, then recompress using Zstd when merging mini-segments into proper segments later.
# extreme: Compress using Zstd in the digest pipeline, then recompress using Zstd when merging mini-segments into proper segments later.
# The recommended setting depends on the hardware and use case. The rule
# of thumb is that high provides a 2x compression ratio over fast at the
# cost of using more CPU time for decompressing while searching.
# Go for high as the default on fresh installs and keep fast on existing systems to allow rolling back to 1.5.x.
# Default: fast
COMPRESSION_TYPE=fast

# Compression level for data in segment files. Range is [0 ; 9].
# Defaults to 6 for COMPRESSION_TYPE=fast and 9 for COMPRESSION_TYPE=high and extreme.
COMPRESSION_LEVEL=6

# For COMPRESSION_TYPE=high and extreme, this sets the compression level of the mini-segments.
# Defaults to 0. Range is [0 ; 6].
COMPRESSION_LEVEL_MINI=0

# (Approximate) limit on the number of hours a segment file can be open for writing
# before being flushed even if it is not full. (Full is set using BLOCKS_PER_SEGMENT.)
# Default: versions before 1.4.x had 720; 1.4.x has 24.
MAX_HOURS_SEGMENT_OPEN=24

# How long a mini-segment can stay open. How far back is a fail-over likely to go?
FLUSH_BLOCK_SECONDS=1800

# Desired number of blocks (each ~1 MB before compression) in a final segment after merge.
# Segments will get closed earlier if expired due to MAX_HOURS_SEGMENT_OPEN.
# Defaults to 2000.
BLOCKS_PER_SEGMENT=2000

# Desired number of blocks (each ~1 MB before compression)
# in a mini-segment before merge. Defaults to 64.
# Mini-segments will get closed earlier if expired due to FLUSH_BLOCK_SECONDS.
#BLOCKS_PER_MINISEGMENT=64

# Minimum size in KB to target for blocks in a segment. Range: [128; 2048].
# Blocks may flush due to time, size, or pre-filter bits.
# Default value: 384 KB
# From v1.5.14.
#BLOCK_SIZE_MIN_KB=384

# Maximum size in KB to target for blocks in a segment. Range: [128; 2048].
# Blocks may flush due to time, size, or pre-filter bits.
# Default value: 1024 KB
# From v1.5.14.
#BLOCK_SIZE_MAX_KB=1024

# Target fill percentage of the pre-filter. Default value: 30.
# Percent of the bits to be set in the pre-filters. Range: [10; 100].
# Influences block size: lower values may trigger smaller blocks.
# From v1.5.14.
#HASHFILTER_FILL=30

# Select roles for the node; current options are "all" or
# "httponly". The latter allows the node to avoid spending CPU time on
# tasks that are irrelevant to a node that has never had any local
# segment files and will never have any assigned either. Leave as
# "all" unless the node is a stateless HTTP frontend or ingest
# listener only.
NODE_ROLES=all

# How long the digest worker thread should keep working on
# flushing the contents of in-memory buffers when Humio is told to shut down
# using "sigterm" (normal shutdown). Defaults to 300 seconds, as millis.
# If too low, the next startup will need to start further back in
# time on the ingest queue.
#SHUTDOWN_ABORT_FLUSH_TIMEOUT_MILLIS=300000

# Optional: Allow the Humio JVM to exit if it detects more time is spent on GC than on actual computations.
# Defaults to not being set.
# GC_KILL_FACTOR=1.0

# Let Humio send emails using the Postmark service.
# Create a Postmark account and insert the token here.
POSTMARK_SERVER_SECRET=abc2454232
POSTMARK_FROM=Humio

# Let Humio send emails using an SMTP server. ONLY put a password here
# if you also enable STARTTLS. Otherwise you will expose your password.
#
# Example using GMail:
#SMTP_HOST=smtp.gmail.com
#SMTP_PORT=587
#SMTP_SENDER_ADDRESS=firstname.lastname@example.org
#SMTP_USE_STARTTLS=true
#SMTP_USERNAME=email@example.com
#SMTP_PASSWORD=your-secret-password
#
# Example using a local clear-text non-authenticated SMTP server:
SMTP_HOST=localhost
SMTP_PORT=25
SMTP_SENDER_ADDRESS=firstname.lastname@example.org
SMTP_USE_STARTTLS=false

# Use an HTTP proxy for sending alert notifications.
# This can be useful if Humio is not allowed direct access to the internet.
HTTP_PROXY_HOST=proxy.myorganisation.com
HTTP_PROXY_PORT=3129
HTTP_PROXY_USERNAME=you
HTTP_PROXY_PASSWORD=your-secret-password

# Select the TCP port to listen on for HTTP.
HUMIO_PORT=8080
# Select the TCP port for the Elasticsearch Bulk API.
ELASTIC_PORT=9200
# Select the TCP port for exporting Prometheus metrics. This is disabled by default.
PROMETHEUS_METRICS_PORT=8081

# Select the IP to bind the UDP/TCP/HTTP listening sockets to.
# Each listener entity has a listen-configuration. This ENV is used when
# that is not set.
HUMIO_SOCKET_BIND=0.0.0.0
# Select the IP to bind the HTTP listening socket to.
# (Defaults to HUMIO_SOCKET_BIND)
HUMIO_HTTP_BIND=0.0.0.0

# Verify the checksum of segment files when reading them. Defaults to true.
# Allows detecting partial and malformed files.
# (Since v1.1.16)
#VERIFY_CRC32_ON_SEGMENT_FILES=true

# S3 access keys for archiving of ingested logs in an export format.
S3_ARCHIVING_ACCESSKEY=$ACCESS_KEY
S3_ARCHIVING_SECRETKEY=$SECRET_KEY
# Optionally point to your own hosting endpoint for S3 to use for archiving. To use a non-AWS endpoint:
# S3_ARCHIVING_ENDPOINT_BASE=http://my-own-s3:8080
# Number of parallel workers for upload. Default is 1.
S3_ARCHIVING_WORKERCOUNT=1

# Bucket storage (S3 variant. For the Google variant, replace "S3" with "GCP" in all the following keys.)
# - infinite storage using local disks as cache.
# See the page on Bucket Storage for more information.
# These two take precedence over all other AWS access methods.
S3_STORAGE_ACCESSKEY=$ACCESS_KEY
S3_STORAGE_SECRETKEY=$SECRET_KEY
# Also supported, same as (S3_STORAGE_ACCESSKEY, S3_STORAGE_SECRETKEY):
# AWS_ACCESS_KEY_ID=$ACCESS_KEY
# AWS_SECRET_ACCESS_KEY=$SECRET_KEY
# Optionally point to your own hosting endpoint for S3 to use for storage
# - in order to use a non-AWS endpoint. Comment out for AWS.
S3_STORAGE_ENDPOINT_BASE=http://my-own-s3:8080
# Number of parallel workers for upload. Defaults to 1.
S3_STORAGE_WORKERCOUNT=1
S3_STORAGE_BUCKET=$BUCKET_NAME
S3_STORAGE_REGION=$BUCKET_REGION
# The secret can be any UTF-8 string. The suggested value is 64 or more random ASCII characters.
S3_STORAGE_ENCRYPTION_KEY=$ENCRYPTION_SECRET
# Optional prefix for all object keys, empty if not set.
# Allows sharing a bucket between several Humio clusters by letting each write to a unique prefix.
# Note! There is a performance penalty when using a non-empty prefix. Humio recommends an unset prefix.
# S3_STORAGE_OBJECT_KEY_PREFIX=/basefolder

# Performance tuning settings for S3/GCP storage.
# Size of chunks for upload. The default is 8 MB. Min is 5 MB, max is 8 MB.
S3_STORAGE_CHUNK_SIZE=8388608
# Number of parallel chunks at a time for each file (S3 only).
S3_STORAGE_CHUNK_COUNT=4
# Number of concurrently uploading files.
S3_STORAGE_UPLOAD_CONCURRENCY=vcores/2
# Number of concurrently downloading files.
S3_STORAGE_DOWNLOAD_CONCURRENCY=vcores/2

# Makes Humio assume that the primary data storage may be lost when restarting Humio.
# Setting this to true makes Humio attempt to delay shutdown until all required files have been copied to bucket storage.
# It also affects calculations on replicas to take into account the fact that replicas listed on other hosts cannot be trusted.
# This setting should be set on all nodes in the cluster or on none, not on individual nodes.
# This setting requires S3/GCP storage.
# USING_EPHEMERAL_DISKS=false

# The following settings only take effect when S3/GCP storage is enabled.
# BETA testing feature. Not ready for production use yet.
# Note! This allows Humio to *delete* files from local storage,
# using the assumption that it can fetch the file from S3/GCP if it needs it at some point.
# Fetching the file from S3/GCP is much slower than using local storage.
#
# Percentage of disk full that Humio aims to keep the disk at.
# Not enabled by default.
BETATESTING_LOCAL_STORAGE_PERCENTAGE=80

# Minimum number of days to keep a fresh segment file before allowing
# it to get deleted locally after upload to a bucket.
# Mini-segment files are kept in any case until their merge result also exists.
# (The age is determined using the timestamp of the most recent event in the file.)
# Make sure to leave most of the free space as room for the system to
# manage as a mix of recent and old files.
# Note! Min age takes precedence over the fill percentage, so increasing this
# implies the risk of overflowing the local file system!
LOCAL_STORAGE_MIN_AGE_DAYS=0

# Disable shared dashboards (wall monitors).
# The main reason to do this is if your organization requires
# stricter security than what is afforded by the URL shared
# secret used for shared dashboards.
# SHARED_DASHBOARDS_ENABLED=false

# Users need to be created in Humio before they can log in with external
# authentication methods like SAML/LDAP/OAUTH.
# Set this parameter to true - then users are automatically created in
# Humio when successfully logging in with external authentication methods.
# Users will not have access to any existing repositories except for a
# personal sandbox repository when they are created.
# If false - users must be explicitly created in Humio before they can log in.
AUTO_CREATE_USER_ON_SUCCESSFUL_LOGIN=false

# Allows disabling use of personal API tokens. This may be relevant when
# LDAP or SAML is set as the authentication mechanism, as personal API tokens
# never expire and thus allow a user to access Humio even when the LDAP/SAML
# account has been closed or deleted. Defaults to true.
# ENABLE_PERSONAL_API_TOKENS=true

# Initial partition count for storage partitions.
# Takes effect ONLY on the first start of the first node in the cluster.
DEFAULT_PARTITION_COUNT=24

# Initial partition count for digest partitions.
# Takes effect ONLY on the first start of the first node in the cluster.
INGEST_QUEUE_INITIAL_PARTITIONS=24

# How big a backlog of events in Humio is allowed before Humio starts responding
# with http-status=503 on the HTTP interfaces and rejecting ingested messages on HTTP.
# Measured in seconds' worth of latency from an event's arrival at Humio until it has
# been fully processed.
# (Note that typical latency in normal conditions is zero to one second.)
# Set to a large number, such as 31104000 (~1 year as seconds), to avoid
# having this kind of backpressure towards the ingest clients.
# Range: Min=300, Max=2147483647.
MAX_INGEST_DELAY_SECONDS=3600

# How big a backlog of events in Humio is allowed before Humio starts
# cancelling live queries in order to catch up with the presumed spike in inbound traffic.
# The check occurs every 30 seconds and cancels the given percentage of the
# locally running live queries on each node, selecting those with the highest cost since the last check.
# LIVEQUERY_CANCEL_TRIGGER_DELAY_MS=60000
# LIVEQUERY_CANCEL_COST_PERCENTAGE=10

# How big a backlog of events in Humio is allowed before Humio starts
# dropping stale live queries in order to catch up with the presumed spike in inbound traffic.
# Stale live queries are those that have not been refreshed in any UI for more than the required keep-alive interval.
# The check occurs every 30 seconds and cancels the given percentage of the
# running live queries on each node, selecting those with the highest cost since the last check.
# LIVEQUERY_STALE_CANCEL_TRIGGER_DELAY_MS=20000
# LIVEQUERY_STALE_CANCEL_COST_PERCENTAGE=10

# A configuration flag to limit state in Humio searches.
# This is used to limit the number of groups in the groupBy function.
# This is necessary to limit how much memory searches can use and avoid out-of-memory errors.
MAX_STATE_LIMIT=20000

# The maximum allowed value for the limit parameter on timechart (and bucket).
MAX_SERIES_LIMIT=50

# Maximum allowed file size when uploading CSV or JSON files to Humio.
MAX_FILEUPLOAD_SIZE=104857600

# Limits how many entries are allowed when using the match and lookup functions.
EXACT_MATCH_LIMIT=1000000

# The maximum allowed number of points in a timechart (or bucket result).
# When this is hit, the result will become approximate and discard input.
MAX_BUCKET_POINTS=10000

# SECONDARY_DATA_DIRECTORY enables using a secondary file system to
# store segment files. When to move the files is controlled by
# PRIMARY_STORAGE_PERCENTAGE.
# Secondary storage is not enabled by default.
# Note! When using Docker, make sure to mount the volume
# into the container as well.
# See the page on Secondary Storage for more information.
SECONDARY_DATA_DIRECTORY=/secondaryMountPoint/humio-data2
PRIMARY_STORAGE_PERCENTAGE=80

# CACHE_STORAGE_DIRECTORY enables a local cache of segment files copied
# from the primary/secondary storage.
# It only makes sense if the local NVMe is ephemeral while the
# primary data dir is trustworthy but slow.
#
# Enable caching of files from a slow EBS file system or from a
# file system on spinning disks.
# The cache should be placed on local NVMe or similar drives, providing
# more than 200 MB/s/core in the machine.
# CACHE_STORAGE_PERCENTAGE defaults to 90 and controls how full the cache
# file system is allowed to become.
# Humio manages the files in the cache directory and will delete files
# when there is too little space remaining.
# (Do not add a RAM disk as cache: RAM is better kept for page cache.)
# Caching is disabled by default, as the location of the cache needs to be known.
CACHE_STORAGE_DIRECTORY=/humio-cache
CACHE_STORAGE_PERCENTAGE=90

# Humio will write thread dumps to humio-threaddumps.log with the interval specified here.
# If not specified, Humio writes thread dumps every 10 seconds.
DUMP_THREADS_SECONDS=10
You can supplement or tune the Java virtual machine parameters used
when running Humio with the
HUMIO_JVM_ARGS environment variable.
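A minimal sketch of overriding the JVM arguments; the heap flag is an illustrative assumption, while `-Xss2M` matches the stack-size requirement from the sample configuration:

```shell
# Stack size of at least 2M, plus an explicit max heap (illustrative value).
export HUMIO_JVM_ARGS="-Xss2M -Xmx4g"
echo "$HUMIO_JVM_ARGS"
```

When running in Docker, the same setting goes into the env file or is passed with `-e HUMIO_JVM_ARGS=...` on `docker run`.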
You can specify the number of processors for the machine running Humio
by setting the
CORES property. Humio uses this number when
parallelizing queries and other internal tasks.
By default, Humio uses Java's `availableProcessors()` function to get the number of CPU cores. This is usually the optimal number. Be aware that the auto-detected number can be too high when running in a containerized environment, where the JVM does not always detect the proper number of cores.
Derived from the number of CPU cores, Humio internally sets
DIGEST_EXECUTOR_CORES to half that number
(but a minimum of 2) to reduce pressure from context switching, since
the reported number of CPU cores usually includes
hyperthreads. If the number set through CORES is the number of
actual physical cores and not hyperthreads, you may want to set
DIGEST_EXECUTOR_CORES to the same number as CORES. Note that raising
this number above the default may lead to an unstable and slow system:
context-switching costs can grow to a point where no real work gets
done when the system is loaded, while it may appear to work fine when
it is not.
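The derivation above (half the core count, with a floor of 2) can be sketched in shell arithmetic; `CORES` is fixed at 16 here purely for illustration, and Humio performs this computation internally:

```shell
# Illustrative computation of the DIGEST_EXECUTOR_CORES default:
# half of CORES, but never below 2.
CORES=16
DIGEST_EXECUTOR_CORES=$(( CORES / 2 < 2 ? 2 : CORES / 2 ))
echo "$DIGEST_EXECUTOR_CORES"
```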
Humio supports different ways of authenticating users. Read more in the Authentication Documentation.
Humio requires NTP to be installed, configured, and in-sync across nodes for all clustered deployments.
PUBLIC_URL is the URL where the Humio instance is reachable from a browser.
Leave out trailing slashes.
The URL may only be reachable behind a VPN; that is not a problem, as long as the user's browser can access it.
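Since the value must not end with a slash, a small normalization step when generating the env file can help; the variable names here are illustrative:

```shell
# Strip a single trailing slash, if present, with shell parameter expansion.
RAW_URL="https://humio.mycompany.com/"
PUBLIC_URL="${RAW_URL%/}"
echo "$PUBLIC_URL"
```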
By default, Humio sends its own logs to the internal humio repository and to a number of log files. If you would rather avoid the external log files and get the output on stdout, use the LOG4J_CONFIGURATION setting.
# This needs to be set through an environment variable along with the rest of the configuration.
# Note that the name is case sensitive and the value must be given exactly as provided here.
# Valid values for built-in configurations:
# LOG4J_CONFIGURATION=log4j2.xml (This is the default when not set)
# LOG4J_CONFIGURATION=log4j2-stdout.xml
# LOG4J_CONFIGURATION=log4j2-stdout-json.xml
# Alternatively, you can provide your own log4j2 configuration file outside the jar file.
# Be aware that the file needs to be similar to the one within the jar, as the internal logging
# into the humio repository depends on some of the configuration in it.
# You can extract the internal version as a starting point using "unzip humio-assembly-0.1.jar log4j2.xml"
LOG4J_CONFIGURATION=file:///path/to/your/log4j2-custom-config.xml