What is OpenTelemetry Collector?
OpenTelemetry Collector is a vendor-agnostic proxy that sits between your application and a distributed tracing tool such as Uptrace or Jaeger. The Collector receives telemetry data, processes it, and then exports the data to tracing tools that can store it permanently.
OpenTelemetry Collector is written in Go and licensed under the Apache 2.0 license, which allows you to modify the source code and install custom extensions. That flexibility comes at the cost of running and maintaining your own OpenTelemetry Collector instances.
When to use OpenTelemetry Collector?
Most of the time, sending telemetry data directly to a backend is a great way to get started with OpenTelemetry. But you may want to deploy a collector alongside your services to get batching, retries, sensitive data filtering, and more.
The most prominent OpenTelemetry Collector feature is the ability to operate on whole traces instead of individual spans. To achieve that, the Collector buffers received spans and groups them by trace ID. That is the key requirement for implementing tail-based sampling.
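As a sketch, tail-based sampling is provided by the tail_sampling processor from the contrib distribution; the policy names below are illustrative, and the thresholds are assumptions you should tune for your workload:

```yaml
processors:
  tail_sampling:
    # Wait 10s after the first span of a trace before making a sampling decision.
    decision_wait: 10s
    policies:
      # Keep every trace that contains an error.
      - name: keep-errors
        type: status_code
        status_code: {status_codes: [ERROR]}
      # Keep traces slower than 500ms.
      - name: keep-slow
        type: latency
        latency: {threshold_ms: 500}
```

Like any processor, tail_sampling only takes effect once it is added to the traces pipeline in the service section.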
Installation
The OpenTelemetry Collector project distributes pre-compiled binaries for Linux, macOS, and Windows.
To install the otelcol-contrib binary with the associated systemd service, run the following commands, replacing 0.56.0 with the desired version and amd64 with the desired architecture:
DEB:

```shell
wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.56.0/otelcol-contrib_0.56.0_linux_amd64.deb
sudo dpkg -i otelcol-contrib_0.56.0_linux_amd64.deb
```

RPM:

```shell
wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.56.0/otelcol_0.56.0_linux_amd64.rpm
sudo rpm -ivh otelcol_0.56.0_linux_amd64.rpm
```
You can check the status of the installed service with:
```shell
sudo systemctl status otelcol-contrib
```
And check the logs with:
```shell
sudo journalctl -u otelcol-contrib -f
```
You can edit the config at /etc/otelcol-contrib/config.yaml and restart OpenTelemetry Collector:

```shell
sudo systemctl restart otelcol-contrib
```
Compiling from sources
You can also compile OpenTelemetry Collector locally:
```shell
git clone https://github.com/open-telemetry/opentelemetry-collector-contrib.git
cd opentelemetry-collector-contrib
make install-tools
make otelcontribcol
./bin/otelcontribcol_linux_amd64 --config ./examples/local/otel-config.yaml
```
By default, the config file is located at /etc/otelcol-contrib/config.yaml, for example:
Don't repeat a common mistake: configuring a receiver or an exporter without adding it to the service processing pipeline. Such receivers and exporters are ignored.
```yaml
# receivers configure how data gets into the Collector.
receivers:
  otlp:
    protocols:
      grpc:
      http:

# processors specify what happens with the received data.
processors:
  resourcedetection:
    detectors: [system]
  batch:
    send_batch_size: 10000
    timeout: 10s

# exporters configure how to send processed data to one or more backends.
exporters:
  otlp:
    endpoint: otlp.uptrace.dev:4317
    headers:
      uptrace-dsn: 'https://<token>@uptrace.dev/<project_id>'

# service pulls the configured receivers, processors, and exporters together into
# processing pipelines. Unused receivers/processors/exporters are ignored.
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```
Extensions
Extensions provide additional capabilities for OpenTelemetry Collector and do not require direct access to telemetry data. For example, the Health Check extension responds to health check requests.
```yaml
extensions:
  # Health Check extension responds to health check requests
  health_check:
  # PProf extension allows fetching Collector's performance profile
  pprof:
  # zPages extension enables in-process diagnostics
  zpages:
  # Memory Ballast extension configures memory ballast for the process
  memory_ballast:
    size_mib: 512
```
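Like receivers and exporters, extensions only take effect once they are listed in the service section, for example:

```yaml
service:
  # Enable the extensions configured above.
  extensions: [health_check, pprof, zpages, memory_ballast]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```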
Exporting metrics to Prometheus
See exporting OpenTelemetry Collector metrics to Prometheus.
Exporting data to Uptrace
```yaml
exporters:
  otlp:
    endpoint: otlp.uptrace.dev:4317
    headers:
      # Copy your project DSN here
      uptrace-dsn: 'https://<token>@uptrace.dev/<project_id>'
```
Host metrics
hostmetricsreceiver is a Collector plugin that gathers various metrics about the host system, for example, CPU, RAM, and disk metrics.
To start collecting host metrics, you need to install Collector on each system you want to monitor and add the following lines to the Collector config:
```yaml
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      disk:
      filesystem:
      load:
      memory:
      network:
      paging:
```
If you are using less common filesystems, you may want to configure the receiver more thoroughly, for example:
```yaml
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      disk:
      load:
      filesystem:
        include_fs_types:
          match_type: strict
          fs_types: [ext3, ext4]
      memory:
      network:
      paging:
```
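To actually ship the collected host metrics, the receiver also needs to be wired into a metrics pipeline; a minimal sketch, assuming an otlp exporter is already configured:

```yaml
service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      processors: [batch]
      exporters: [otlp]
```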
Also see Collector Configurator.