# Uptrace Configuration
You can customize Uptrace settings and change the generated ClickHouse database schema with a single YAML config file.
## Config file
All Uptrace configuration is done with a single YAML file that can be downloaded from GitHub:
```bash
wget https://raw.githubusercontent.com/uptrace/uptrace/master/config/uptrace.dist.yml
mv uptrace.dist.yml /etc/uptrace/config.yml
```
## Config location
You can specify the location of the Uptrace config using a CLI flag:

```bash
uptrace --config=/path/to/uptrace.yml serve
```

Or using an environment variable:

```bash
UPTRACE_CONFIG=/path/to/uptrace.yml uptrace serve
```
When you don't explicitly specify the config location, Uptrace will try to use the config at `/etc/uptrace/config.yml`.
## PostgreSQL
Uptrace requires a PostgreSQL database to store metadata such as metric names and alerts. It is typically very small, occupying only a few megabytes of disk space.
You can configure the PostgreSQL database credentials in the config:
```yaml
##
## PostgreSQL db that is used to store metadata such as metric names, dashboards, alerts,
## and so on.
##
pg:
  addr: localhost:5432
  user: uptrace
  password: uptrace
  database: uptrace
  # TLS configuration. Uncomment to enable.
  #tls:
  #  insecure_skip_verify: true # only for self-signed certificates
```
See TLS for details.
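If you don't have the database yet, here is a minimal sketch for creating the role and database referenced in the config above, assuming a local PostgreSQL with a `postgres` superuser:

```bash
# Create the role and database that the config above points at.
sudo -u postgres psql <<'EOF'
CREATE USER uptrace WITH PASSWORD 'uptrace';
CREATE DATABASE uptrace OWNER uptrace;
EOF
```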
## Environment variables
You can use environment variables in the YAML config file, for example:
```yaml
pg:
  addr: ${UPTRACE_PG_ADDR}
  user: ${UPTRACE_PG_USER}
  password: ${UPTRACE_PG_PASSWORD}
  database: ${UPTRACE_PG_DATABASE:uptrace}
```
Environment variables can have a default value, for example, `${ENV_VAR_NAME:default_value}`.
Environment variables are expanded before parsing the YAML content using Go's `os.Expand` function.
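For example, you could export the variables from the snippet above before starting Uptrace:

```bash
# Provide values for the placeholders used in the YAML config.
export UPTRACE_PG_ADDR=localhost:5432
export UPTRACE_PG_USER=uptrace
export UPTRACE_PG_PASSWORD=uptrace
# UPTRACE_PG_DATABASE is not set, so the default value "uptrace" is used.

uptrace serve
```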
## ClickHouse
You can configure the ClickHouse database credentials in the config:
```yaml
ch_cluster:
  cluster: 'uptrace1'
  # Whether to use ClickHouse replication.
  # Cluster name is required when replication is enabled.
  replicated: false
  # Whether to use ClickHouse distributed tables.
  distributed: false

  shards:
    - replicas:
        - addr: localhost:9000
          user: default
          password:
          database: uptrace
```
To use TLS connections, you need to enable the secure TCP port (`9440`) in the ClickHouse config:
```xml
<?xml version="1.0" ?>
<clickhouse>
    <tcp_port_secure>9440</tcp_port_secure>
    <openSSL>
        <server>
            <certificateFile>/etc/clickhouse-server/server.crt</certificateFile>
            <privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile>
        </server>
    </openSSL>
</clickhouse>
```
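Before updating Uptrace, you can sanity-check the secure port with `clickhouse-client`. A quick sketch, assuming ClickHouse runs locally:

```bash
# Should print 1 if the TLS listener on port 9440 works.
clickhouse-client --secure --host localhost --port 9440 --query 'SELECT 1'
```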
And then use the port in the Uptrace config:
```yaml
ch_cluster:
  cluster: 'uptrace1'
  # Whether to use ClickHouse replication.
  # Cluster name is required when replication is enabled.
  replicated: false
  # Whether to use ClickHouse distributed tables.
  distributed: false

  shards:
    - replicas:
        - addr: localhost:9440
          user: default
          password:
          database: uptrace
          tls:
            # Only for self-signed certificates.
            insecure_skip_verify: true
```
See TLS for details.
## ClickHouse schema
The options described below allow you to change the ClickHouse schema generated by Uptrace. For the changes to take effect, you must reset the ClickHouse database:
```bash
uptrace ch reset
```
### Retention
You can configure data retention for spans and metrics like this:
```yaml
ch_schema:
  spans:
    # Delete spans data after 14 days.
    ttl_delete: 14 DAY
    storage_policy: 'default'

  metrics:
    # Delete metrics data after 30 days.
    ttl_delete: 30 DAY
    storage_policy: 'default'
```
That will cause Uptrace to add `TTL toDate(time) + INTERVAL 14 DAY DELETE` and `SETTINGS storage_policy = 'default'` when creating ClickHouse tables.
### Compression
You can configure the compression method used in ClickHouse tables like this:
```yaml
ch_schema:
  # Compression codec, for example, LZ4, ZSTD(1), or Default.
  compression: ZSTD(1)
```
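To see what a codec actually buys you, you can compare compressed and uncompressed sizes in ClickHouse's `system.parts` table, for example:

```bash
# Compare on-disk (compressed) vs uncompressed bytes per table.
clickhouse-client --query "
  SELECT
    table,
    formatReadableSize(sum(data_compressed_bytes)) AS compressed,
    formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed
  FROM system.parts
  WHERE database = 'uptrace' AND active
  GROUP BY table"
```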
### Replication
To start replicating ClickHouse tables, you need to:
- Configure the ClickHouse cluster to have at least 3 replicas.

  Don't turn off `internal_replication`. If you have `internal_replication = false` and a replicated table, you will get duplicates, because the distributed table inserts data into all replicas while the underlying replicated table also replicates the data.
  ```xml
  <clickhouse>
      <remote_servers>
          <uptrace1>
              <shard>
                  <internal_replication>true</internal_replication>
                  <replica>
                      <host>clickhouse-1</host>
                      <port>9000</port>
                  </replica>
                  <replica>
                      <host>clickhouse-2</host>
                      <port>9000</port>
                  </replica>
                  <replica>
                      <host>clickhouse-3</host>
                      <port>9000</port>
                  </replica>
              </shard>
          </uptrace1>
      </remote_servers>
  </clickhouse>
  ```
- Update the `uptrace.yml` config file using the cluster name from the previous step:
  ```yaml
  ch_cluster:
    cluster: 'uptrace1'
    # Whether to use ClickHouse replication.
    # Cluster name is required when replication is enabled.
    replicated: true
  ```
- Reset the ClickHouse database to apply the changes:

  ```bash
  uptrace ch reset
  ```
You can verify that replication is working as expected using `clickhouse-client`:
```sql
SELECT database, table, is_leader, replica_is_active
FROM system.replicas
```

```
┌─database─┬─table───────┬─is_leader─┬─replica_is_active────────────────────────┐
│ uptrace  │ spans_data  │         1 │ {'replica1':1,'replica2':1,'replica3':1} │
│ uptrace  │ spans_index │         1 │ {'replica1':1,'replica2':1,'replica3':1} │
└──────────┴─────────────┴───────────┴──────────────────────────────────────────┘
```
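You can also check that ClickHouse picked up the cluster definition from `remote_servers`, for example:

```bash
# Each replica from the cluster config should appear here.
clickhouse-client --query "
  SELECT cluster, shard_num, replica_num, host_name
  FROM system.clusters
  WHERE cluster = 'uptrace1'"
```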
## S3 storage
ClickHouse supports S3-compatible storage out of the box and allows you to move data to S3 using TTL statements on tables, for example:
```sql
CREATE TABLE test (...)
TTL toDate(time) + INTERVAL 30 DAY DELETE,
    toDate(time) + INTERVAL 10 DAY TO VOLUME 's3'
```
First, you need to create a ClickHouse storage policy by editing `config.xml`:
```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <default>
                <!-- <keep_free_space_bytes>2147483648</keep_free_space_bytes> -->
            </default>
            <s3>
                <type>s3</type>
                <endpoint>http://[BUCKET_NAME].s3.amazonaws.com/prefix/</endpoint>
                <access_key_id>FIXME</access_key_id>
                <secret_access_key>FIXME</secret_access_key>
            </s3>
            <s3_cache>
                <type>cache</type>
                <disk>s3</disk>
                <path>/mnt/ssd1/clickhouse/disks/s3_cache/</path>
                <max_size>50Gi</max_size>
            </s3_cache>
        </disks>
        <policies>
            <tiered>
                <move_factor>0.1</move_factor>
                <volumes>
                    <default>
                        <disk>default</disk>
                    </default>
                    <s3>
                        <disk>s3_cache</disk>
                        <prefer_not_to_merge>true</prefer_not_to_merge>
                        <perform_ttl_move_on_insert>0</perform_ttl_move_on_insert>
                    </s3>
                </volumes>
            </tiered>
        </policies>
    </storage_configuration>
</clickhouse>
```
Then, use the following commands to update the tables' TTL and start moving data to the S3 volume:
```sql
ALTER TABLE spans_index MODIFY SETTING storage_policy = 'tiered';
ALTER TABLE spans_index MODIFY TTL toDate(time) + INTERVAL 14 DAY DELETE,
                                   toDate(time) + INTERVAL 7 DAY TO VOLUME 's3';
```
You will need to repeat the commands above for every table.
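To watch data migrate to the S3 volume, you can group part sizes by disk, for example:

```bash
# Shows how much of each table lives on each disk ("default" vs "s3").
clickhouse-client --query "
  SELECT table, disk_name, formatReadableSize(sum(bytes_on_disk)) AS size
  FROM system.parts
  WHERE database = 'uptrace' AND active
  GROUP BY table, disk_name"
```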
## Managing users
On the first startup, Uptrace creates the default user with the login `admin@localhost.xxx` and password `admin`.
You can configure the default users in the `auth` section of the config:
```yaml
auth:
  # Disable auth using login and password.
  #disabled: true

  # The following users will be created on the first startup.
  users:
    - name: Admin
      email: admin@localhost.xxx
      password: admin
```
You can also connect Uptrace to Okta, Keycloak, Cloudflare, and Google.
## Sending emails
To send email notifications, you need to configure an SMTP mailer:
```yaml
##
## To receive email notifications, configure a mailer.
## https://uptrace.dev/features/alerting
##
mailer:
  smtp:
    # Whether to use this mailer for sending emails.
    enabled: true
    # SMTP server host.
    host: localhost
    # SMTP server port.
    port: 1025
    # Username for authentication.
    username: mailhog
    # Password for authentication.
    password: mailhog
    # Uncomment to disable opportunistic TLS.
    #tls: { disabled: true }
    # Emails will be sent from this address.
    from: 'uptrace@localhost'
```
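The sample config above matches MailHog's defaults, so for local testing you could run a throwaway MailHog instance. A sketch, assuming Docker is available:

```bash
# SMTP on port 1025; captured emails are viewable at http://localhost:8025.
docker run -d --name mailhog -p 1025:1025 -p 8025:8025 mailhog/mailhog
```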
Note that Gmail does not allow you to use your real password in `mailer.smtp.password`. Instead, you should generate an app password for Gmail:
- In Gmail, click on your avatar -> "Manage your Google Account".
- On the left, click on "Security".
- Scroll to "Signing in to Google" and click on "App passwords".
See Gmail documentation for details.
## Changing ports
By default, Uptrace listens on ports `14317` (OTLP/gRPC) and `14318` (OTLP/HTTP) so that it does not conflict with the corresponding OpenTelemetry Collector ports `4317` and `4318`.
You can change the ports in the config, for example:
```yaml
listen:
  # OTLP/gRPC API.
  grpc:
    addr: ':4317'
    # tls:
    #   cert_file: config/tls/uptrace.crt
    #   key_file: config/tls/uptrace.key

  # OTLP/HTTP API and Uptrace API with Vue UI.
  http:
    addr: ':4318'
    # tls:
    #   cert_file: config/tls/uptrace.crt
    #   key_file: config/tls/uptrace.key
```
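Clients then need to use the new ports too; with OpenTelemetry SDKs this is typically done via the standard exporter endpoint variable, for example:

```bash
# OTLP/HTTP exporters:
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
# Or, for OTLP/gRPC exporters:
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```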
You can also change the Uptrace domain for the Vue-powered UI and DSN/endpoints:
```yaml
site:
  # Overrides the public URL for the Vue-powered UI.
  addr: 'http://uptrace.mydomain.com'
```
Don't forget to restart Uptrace:
```bash
sudo systemctl restart uptrace
```
## TLS

### Let's Encrypt
Uptrace supports Let's Encrypt certificates using the `certmagic` library:
```yaml
##
## TLS certificate issuance and renewal using Let's Encrypt.
##
certmagic:
  # Use Let's Encrypt to obtain certificates.
  enabled: false
  # Use Let's Encrypt staging environment.
  staging_ca: false

  http_challenge_addr: ':80'
```
### TLS Server
First, generate a self-signed certificate, replacing `localhost` with your domain:
```bash
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout uptrace.key -out uptrace.crt -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost"
```
Then, add the certificate to the list of trusted certificates:
```bash
sudo cp uptrace.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
```
Finally, configure Uptrace to start using the certificate you just created:
```yaml
listen:
  # OTLP/gRPC API.
  grpc:
    addr: ':14317'
    tls:
      cert_file: config/tls/uptrace.crt
      key_file: config/tls/uptrace.key
      #ca_file: path/to/ca_file

  # OTLP/HTTP and Uptrace API with UI.
  http:
    addr: ':14318'
    tls:
      cert_file: config/tls/uptrace.crt
      key_file: config/tls/uptrace.key
      #ca_file: path/to/ca_file
```
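To verify that the HTTPS listener is serving your certificate, a quick check, assuming the certificate generated in the previous step:

```bash
# Should succeed once uptrace.crt is trusted or passed explicitly.
curl --cacert uptrace.crt https://localhost:14318

# Inspect the certificate the server actually presents.
openssl s_client -connect localhost:14318 -servername localhost </dev/null \
  | openssl x509 -noout -subject -dates
```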
You can also change the Uptrace domain for the Vue-powered UI:
```yaml
site:
  # Overrides the public URL for the Vue-powered UI.
  addr: 'https://uptrace.mydomain.com'
```
Don't forget to restart Uptrace:
```bash
sudo systemctl restart uptrace
```
### TLS Client
You can also use TLS when connecting to PostgreSQL, ClickHouse, and Kafka:
```yaml
ch:
  # Use the host/system certificate.
  tls: {}
```

```yaml
ch:
  # Use TLS, but don't verify the server's certificate chain and host name.
  tls:
    insecure_skip_verify: true
```

```yaml
ch:
  # Load the certificate from a file.
  tls:
    cert_file: path/to/uptrace.crt
    key_file: path/to/uptrace.key
    #ca_file: path/to/ca_file
```
To disable TLS:
```yaml
ch:
  tls: null
```
Instead of enabling `insecure_skip_verify`, you can also override the server name specified in the server's certificate:
```yaml
ch:
  addr: tm849a32za.us-central1.gcp.clickhouse.cloud:9440
  # Use TLS, but override the server name.
  # Required by ClickHouse Cloud.
  tls:
    server_name_override: 'tm849a32za.us-central1.gcp.clickhouse.cloud'
```
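If you are unsure which name to put in `server_name_override`, you can inspect the certificate the server presents and match the override against its subject, for example (using the example host above):

```bash
# Print the subject of the server certificate.
openssl s_client -connect tm849a32za.us-central1.gcp.clickhouse.cloud:9440 \
  -servername tm849a32za.us-central1.gcp.clickhouse.cloud </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```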
## Reverse proxy
If you are running Uptrace behind a proxy such as Nginx or HAProxy, you will need to configure the domain name so that Uptrace knows how to render links and redirects properly:
```yaml
site:
  addr: 'https://uptrace.mydomain.com'
```
You can also run Uptrace behind a subpath, for example, `http://mydomain.com/uptrace`:
```yaml
site:
  addr: 'https://mydomain.com/uptrace'
```
## Scaling
Most of the time, Uptrace performance is limited by ClickHouse performance. When scaling ClickHouse, prefer vertical scaling over horizontal scaling.
Here are some quotes from the ClickHouse blog:

> We commonly find successful deployments with ClickHouse deployed on servers with hundreds of cores, terabytes of RAM, and petabytes of disk space.

> Scaling vertically first has a number of benefits, principally cost efficiency, lower cost of ownership (with respect to operations), and better query performance due to the minimization of data on the network for operations such as JOINs.
With this in mind, Uptrace Community Edition was also designed for vertical scaling. It can support very large deployments with billions of requests per day and millions of time series.
If vertical scaling does not work in your case, you can consider purchasing the Premium edition, which also supports horizontal scaling.