hertzbeat-helm-chart

Helm Chart for HertzBeat


What is HertzBeat?

HertzBeat is an open-source, real-time monitoring system with custom monitoring, high-performance clustering, Prometheus compatibility, and agentless capabilities.

Features

HertzBeat's powerful customization, multi-type support, high performance, easy extensibility, and low coupling help users quickly build their own monitoring system.
We also provide SaaS Monitoring Cloud, so users no longer need to deploy a cumbersome monitoring system to monitor their resources. Get started online for free.

Helm Chart for HertzBeat

This Helm chart installs HertzBeat in a Kubernetes cluster. Contributions to the Helm Chart for HertzBeat are welcome.

Prerequisites

A running Kubernetes cluster, and the Helm CLI installed and configured to access it.

Installation

Add Helm repository

helm repo add hertzbeat https://charts.hertzbeat.com/
helm repo update

Configure the chart

The following items can be set via the --set flag during installation, or configured by editing values.yaml directly (download the chart first).
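
For example, both approaches might look like the following (the overridden keys are taken from the configuration table below; the values themselves are illustrative):

```shell
# Option 1: override individual values with --set at install time
helm install hertzbeat hertzbeat/hertzbeat \
  --set expose.type=nodePort \
  --set manager.account.password=changeit

# Option 2: download and unpack the chart, then edit values.yaml
helm pull hertzbeat/hertzbeat --untar
# ... edit hertzbeat/values.yaml ...
helm install hertzbeat ./hertzbeat
```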

Configure how to expose HertzBeat service

Configure the external URL

The external URL for HertzBeat core service is used to:

  1. populate the docker/helm commands shown on the portal
  2. populate the token service URL returned to the docker client

Format: protocol://domain[:port], for example https://hertzbeat.example.com.

If HertzBeat is deployed behind a proxy, set it to the URL of the proxy.

Configure how to persist data
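
As a sketch, the persistence keys from the configuration table below could be set in values.yaml like this (the sizes are illustrative; the chart default is 5Gi):

```yaml
database:
  persistence:
    enabled: true          # persist database data
    resourcePolicy: keep   # keep PVCs across helm delete
    size: 10Gi
tsdb:
  persistence:
    enabled: true
    size: 10Gi
```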

Configure the other items listed in configuration section

Install the chart

Install the HertzBeat Helm chart with the release name hertzbeat:

helm install hertzbeat hertzbeat/hertzbeat
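
A variant installing into a dedicated namespace with an exposure override (the namespace name and the expose.type value are illustrative):

```shell
helm install hertzbeat hertzbeat/hertzbeat \
  --namespace hertzbeat --create-namespace \
  --set expose.type=loadBalancer
```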

Uninstallation

To uninstall/delete the hertzbeat release:

helm uninstall hertzbeat
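
Note that with database.persistence.resourcePolicy set to keep (the default), PVCs survive the uninstall. A cleanup sketch, assuming the chart labels its PVCs with the standard app.kubernetes.io/instance label (an assumption — verify with kubectl get pvc first):

```shell
# list PVCs left behind by the release (label selector is an assumption)
kubectl get pvc -l app.kubernetes.io/instance=hertzbeat
# delete them once the data is no longer needed
kubectl delete pvc -l app.kubernetes.io/instance=hertzbeat
```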

Configuration

The following table lists the configurable parameters of the HertzBeat chart and the default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| **Expose** | | |
| `expose.type` | How to expose the service: `ingress`, `clusterIP`, `nodePort` or `loadBalancer`. Other values are ignored and creation of the service is skipped. | `ingress` |
| `expose.clusterIP.name` | The name of the ClusterIP service | `hertzbeat` |
| `expose.clusterIP.annotations` | The annotations attached to the ClusterIP service | `{}` |
| `expose.clusterIP.ports.port` | The service port HertzBeat listens on when serving HTTP | `80` |
| `expose.nodePort.name` | The name of the NodePort service | `hertzbeat` |
| `expose.nodePort.ports.port` | The service port HertzBeat listens on when serving HTTP | `80` |
| `expose.nodePort.ports.nodePort` | The node port HertzBeat listens on when serving HTTP | `30002` |
| `expose.loadBalancer.IP` | The IP of the load balancer. Only effective when the load balancer supports assigning an IP | `""` |
| `expose.loadBalancer.ports.port` | The service port HertzBeat listens on when serving HTTP | `80` |
| `expose.loadBalancer.sourceRanges` | List of IP address ranges to assign to `loadBalancerSourceRanges` | `[]` |
| **Manager** | | |
| `manager.account.username` | The HertzBeat account username | `admin` |
| `manager.account.password` | The HertzBeat account password | `hertzbeat` |
| `manager.resources` | The resources to allocate for the container | `undefined` |
| `manager.nodeSelector` | Node labels for pod assignment | `{}` |
| `manager.tolerations` | Tolerations for pod assignment | `[]` |
| `manager.affinity` | Node/Pod affinities | `{}` |
| `manager.podAnnotations` | Annotations to add to the manager pod | `{}` |
| **Collector** | | |
| `collector.replicaCount` | The replica count | `1` |
| `collector.autoscaling.enable` | Whether to enable autoscaling of collector replicas | `1` |
| `collector.resources` | The resources to allocate for the container | `undefined` |
| `collector.nodeSelector` | Node labels for pod assignment | `{}` |
| `collector.tolerations` | Tolerations for pod assignment | `[]` |
| `collector.affinity` | Node/Pod affinities | `{}` |
| `collector.podAnnotations` | Annotations to add to the collector pod | `{}` |
| **Database** | | |
| `database.timezone` | The database system timezone | `1` |
| `database.rootPassword` | The database root user password | `1` |
| `database.persistence.enabled` | Enable data persistence or not | `true` |
| `database.persistence.resourcePolicy` | Set to `keep` to avoid removing PVCs during a helm delete operation. Leave empty to delete PVCs after the chart is deleted. Does not affect PVCs created for internal database and redis components. | `keep` |
| `database.persistence.existingClaim` | Use an existing PVC, which must be created manually before binding; specify the `subPath` if the PVC is shared with other components | |
| `database.persistence.storageClass` | Specify the `storageClass` used to provision the volume, otherwise the default StorageClass is used. Set to `-` to disable dynamic provisioning | |
| `database.persistence.subPath` | The sub path used in the volume | |
| `database.persistence.accessMode` | The access mode of the volume | `ReadWriteOnce` |
| `database.persistence.size` | The size of the volume | `5Gi` |
| `database.persistence.annotations` | The annotations of the volume | |
| `database.resources` | The resources to allocate for the container | `undefined` |
| `database.nodeSelector` | Node labels for pod assignment | `{}` |
| `database.tolerations` | Tolerations for pod assignment | `[]` |
| `database.affinity` | Node/Pod affinities | `{}` |
| `database.podAnnotations` | Annotations to add to the database pod | `{}` |
| **TSDB** | | |
| `tsdb.timezone` | The database system timezone | `1` |
| `tsdb.persistence.enabled` | Enable data persistence or not | `true` |
| `tsdb.persistence.resourcePolicy` | Set to `keep` to avoid removing PVCs during a helm delete operation. Leave empty to delete PVCs after the chart is deleted. Does not affect PVCs created for internal database and redis components. | `keep` |
| `tsdb.persistence.existingClaim` | Use an existing PVC, which must be created manually before binding; specify the `subPath` if the PVC is shared with other components | |
| `tsdb.persistence.storageClass` | Specify the `storageClass` used to provision the volume, otherwise the default StorageClass is used. Set to `-` to disable dynamic provisioning | |
| `tsdb.persistence.subPath` | The sub path used in the volume | |
| `tsdb.persistence.accessMode` | The access mode of the volume | `ReadWriteOnce` |
| `tsdb.persistence.size` | The size of the volume | `5Gi` |
| `tsdb.persistence.annotations` | The annotations of the volume | |
| `tsdb.resources` | The resources to allocate for the container | `undefined` |
| `tsdb.nodeSelector` | Node labels for pod assignment | `{}` |
| `tsdb.tolerations` | Tolerations for pod assignment | `[]` |
| `tsdb.affinity` | Node/Pod affinities | `{}` |
| `tsdb.podAnnotations` | Annotations to add to the tsdb pod | `{}` |
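
Putting it together, a minimal values.yaml sketch built only from parameters in the table above (the overridden values are illustrative):

```yaml
expose:
  type: nodePort
  nodePort:
    ports:
      port: 80
      nodePort: 30002
manager:
  account:
    username: admin
    password: changeit   # replace the default password
collector:
  replicaCount: 2        # illustrative override of the default 1
database:
  persistence:
    enabled: true
    size: 5Gi
```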