Deploy Uptrace on K8s with Helm
Kubernetes
This guide requires a Kubernetes cluster. You can create a local K8s cluster using K3s, Kind, or minikube.
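For example, to create a single-node local cluster with Kind (the cluster name is arbitrary):
kind create cluster --name uptrace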
Helm
Helm is a package manager for Kubernetes. Think of it like apt or yum for your Linux system, but specifically designed for deploying and managing applications on Kubernetes clusters.
A Helm chart is a packaged Kubernetes application that contains all the necessary resources (YAML manifests) and configurations needed to deploy an application on Kubernetes.
Uptrace maintains a Helm chart containing the Uptrace application and all the necessary dependencies, including Redis, PostgreSQL, ClickHouse, and the OpenTelemetry Collector.
To install the Uptrace Helm chart:
helm repo add uptrace https://charts.uptrace.dev --force-update
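You can verify that the repository was added and see the available chart versions (the --devel flag includes pre-releases):
helm search repo uptrace --devel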
Configuration
Helm uses a values.yaml file to customize the deployment of a Helm chart without modifying the chart's original templates. It acts as a central place to define variables that populate placeholders in Helm templates.
To customize the Uptrace chart, you need to create an uptrace-values.yaml file that overrides the default chart values. To view the chart's default values:
helm show values uptrace/uptrace --devel
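A convenient way to bootstrap your overrides is to redirect the default values into a file and then trim it down to just the settings you want to change:
helm show values uptrace/uptrace --devel > uptrace-values.yaml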
Later, you will use the created file to install the chart:
helm install uptrace uptrace/uptrace -f uptrace-values.yaml -n monitoring
Similarly, you can create a redis-values.yaml file to customize the Redis chart, and so on.
To get the default *-values.yaml files, clone the helm-charts repository:
git clone git@github.com:uptrace/helm-charts.git
cd helm-charts
cat uptrace-values.yaml
Namespace
A K8s namespace is a virtual cluster or a logical partitioning mechanism that allows you to organize and isolate resources within a single physical Kubernetes cluster.
This guide uses the monitoring namespace to install Uptrace and all its dependencies. To create the namespace:
kubectl create ns monitoring
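To confirm the namespace was created:
kubectl get namespace monitoring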
At any point later in this guide, you can delete all created resources by deleting the namespace:
kubectl delete ns monitoring
Redis
Uptrace uses Redis for in-memory caching, so you don't need to configure a persistent volume. Any instance with 32MB of free RAM should be fine.
If you don't have a Redis Server already, you can create one using the Bitnami chart:
helm install redis oci://registry-1.docker.io/bitnamicharts/redis -f redis-values.yaml -n monitoring
The redis-values.yaml file is mostly empty, but you can find all the available parameters in the Bitnami Redis chart documentation:
auth:
  enabled: false
To connect to the Redis database:
kubectl port-forward service/redis-master 6379:6379 -n monitoring
redis-cli
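Inside redis-cli, a quick PING should return PONG, confirming the cache is reachable:
127.0.0.1:6379> PING
PONG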
If you already have a Redis database, just make sure to provide correct credentials in your uptrace-values.yaml file:
uptrace:
  config:
    redis_cache:
      addrs:
        alpha: 'redis-master:6379'
PostgreSQL
Uptrace uses a PostgreSQL database to store metadata such as users, projects, monitors, etc. The metadata is small and a 1GB persistent volume should be sufficient.
If you don't have a PostgreSQL database already, you can create one using the CloudNativePG PostgreSQL operator.
To install the PostgreSQL operator:
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --install cnpg \
  --namespace cnpg-system \
  --create-namespace \
  cnpg/cloudnative-pg
You can verify that the operator is running with:
kubectl get all -n cnpg-system
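The operator itself does not create a database; the Uptrace chart can provision one for you, or you can define your own PostgreSQL cluster. As a minimal sketch (the cluster name uptrace-pg and the 1Gi size are illustrative assumptions, not chart defaults):
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: uptrace-pg # hypothetical name
  namespace: monitoring
spec:
  instances: 1
  storage:
    size: 1Gi # metadata is small, so 1GB is usually enough
Apply the manifest with kubectl apply -f, then point uptrace.config.pg.addr at the resulting uptrace-pg-rw service.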
If you already have a PostgreSQL database, just make sure to disable the one that comes with Uptrace and provide correct credentials in your uptrace-values.yaml file:
postgresql:
  enabled: false

uptrace:
  config:
    pg:
      addr: 'uptrace-postgresql:5432'
      user: uptrace
      password: uptrace
      database: uptrace
ClickHouse
Uptrace uses a ClickHouse database to store observability data such as spans, logs, events, and metrics. You can start with a pod that has 4 CPUs, 1GB of RAM, and 10GB of disk space, and scale it vertically as needed.
If you don't have a ClickHouse database already, you can create one using the ClickHouse Operator by Altinity:
kubectl apply -f https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle.yaml
Check that the clickhouse-operator pod is running:
kubectl get pods -n kube-system
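You can also confirm that the operator's custom resource definitions were registered:
kubectl get crd | grep clickhouse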
If you already have a ClickHouse database, just make sure to disable the one that comes with Uptrace and provide correct credentials in your uptrace-values.yaml file:
clickhouse:
  enabled: false

uptrace:
  config:
    ch_cluster:
      cluster: 'uptrace1'
      # Whether to use ClickHouse replication.
      # Cluster name is required when replication is enabled.
      replicated: false
      # Whether to use ClickHouse distributed tables.
      distributed: false
      shards:
        - replicas:
            - addr: 'uptrace-clickhouse:9000'
              database: uptrace
              user: uptrace
              password: uptrace
              dial_timeout: 3s
              write_timeout: 5s
              max_retries: 3
              max_execution_time: 15s
              query_settings:
                session_timezone: UTC
                async_insert: 1
                query_cache_nondeterministic_function_handling: 'save'
                allow_suspicious_types_in_group_by: 1
                allow_suspicious_types_in_order_by: 1
If there is an issue with ClickHouse running in K8s, check this troubleshooting guide.
OpenTelemetry Collector
This chart uses the OpenTelemetry Operator to deploy an OpenTelemetry Collector that collects pod metrics.
To install cert-manager:
helm repo add jetstack https://charts.jetstack.io --force-update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.17.2 \
  --set crds.enabled=true
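Check that the cert-manager pods are up before proceeding:
kubectl get pods -n cert-manager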
To install OpenTelemetry Operator:
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts --force-update
helm install otel-operator open-telemetry/opentelemetry-operator \
  --set "manager.collectorImage.repository=otel/opentelemetry-collector-k8s" \
  --set "manager.collectorImage.tag=0.123.0" \
  --set admissionWebhooks.certManager.enabled=false \
  --set admissionWebhooks.autoGenerateCert.enabled=true \
  --namespace opentelemetry \
  --create-namespace
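Verify that the operator pod is running:
kubectl get pods -n opentelemetry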
You can also disable the OpenTelemetry Collector in your uptrace-values.yaml file:
otelcol:
  enabled: false

otelcolDaemonset:
  enabled: false
Uptrace
Once you have all the dependencies, you can install Uptrace using the overrides from the uptrace-values.yaml file:
helm install uptrace uptrace/uptrace -f uptrace-values.yaml -n monitoring --devel
You can then view the available resources:
kubectl get all -n monitoring
And inspect Uptrace logs:
kubectl logs uptrace-0 -n monitoring
To connect to the ClickHouse database:
kubectl port-forward service/chi-uptrace1-uptrace1-0-0 9000:9000 -n monitoring
clickhouse-client
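Inside clickhouse-client, a quick sanity query confirms the connection and shows the server version:
SELECT version()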
Ingress
If all is good, you can start using Uptrace at http://uptrace.local with login admin@uptrace.local and password admin.
But first, you need to add the uptrace.local domain to your /etc/hosts file:
127.0.0.1 uptrace.local
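You can then verify that the ingress responds (this assumes your ingress controller is listening on 127.0.0.1):
curl -I http://uptrace.local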
Minikube
With Minikube, you need to enable the ingress controller:
minikube addons enable ingress
Then, make sure the pods are running:
kubectl get pods -n ingress-nginx
Minikube does not listen on 127.0.0.1 and instead provides the IP via the CLI:
minikube ip
Then use the Minikube IP to update your /etc/hosts file:
$(minikube ip) uptrace.local
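For example, to append the entry in one command:
echo "$(minikube ip) uptrace.local" | sudo tee -a /etc/hosts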
AWS EKS
To deploy Uptrace on AWS EKS and provide external access, configure the service with AWS Load Balancer Controller annotations:
service:
  type: LoadBalancer
  port: 80
  loadBalancerSourceRanges:
    - '0.0.0.0/0'
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: 'external'
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: 'ip'
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: 'preserve_client_ip.enabled=true'
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'http'
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: 'http'
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: '80'
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: '/'
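Once the load balancer is provisioned, you can retrieve its hostname (assuming the service is named uptrace):
kubectl get service uptrace -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'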
Scaling
You can scale Uptrace by increasing the number of replicas in your uptrace-values.yaml file:
uptrace:
  replicaCount: 2
And re-deploying Uptrace:
helm upgrade uptrace uptrace/uptrace -f uptrace-values.yaml -n monitoring --devel
Upgrading
To upgrade to the latest available version:
helm repo update uptrace
helm upgrade uptrace uptrace/uptrace -f uptrace-values.yaml -n monitoring --devel
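You can check the deployed chart and app versions with:
helm list -n monitoring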
Uninstallation
To uninstall the Uptrace release:
helm uninstall uptrace -n monitoring
To delete the whole namespace:
kubectl delete namespace monitoring