OpenTelemetry Rust distro for Uptrace
This document explains how to configure the OpenTelemetry Rust SDK to export spans, logs, and metrics to Uptrace using OTLP/gRPC. OpenTelemetry Rust provides comprehensive observability for Rust applications with excellent performance and zero-cost abstractions.
OTLP Exporter
Uptrace fully supports the OpenTelemetry Protocol (OTLP) over both gRPC and HTTP transports.
If you already have an OTLP exporter configured, you can continue using it with Uptrace by simply pointing it to the Uptrace OTLP endpoint.
Connecting to Uptrace
Choose an OTLP endpoint from the table below and pass your DSN via the `uptrace-dsn` header for authentication:
| Transport | Endpoint | Port |
|---|---|---|
| gRPC | https://api.uptrace.dev:4317 | 4317 |
| HTTP | https://api.uptrace.dev | 443 |
When using HTTP transport, you often need to specify the full URL for each signal type:
https://api.uptrace.dev/v1/traces
https://api.uptrace.dev/v1/logs
https://api.uptrace.dev/v1/metrics
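Most SDKs also honor the standard signal-specific endpoint variables, so with HTTP you can set each URL explicitly. A minimal sketch using the standard OTLP environment variables:
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://api.uptrace.dev/v1/traces"
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://api.uptrace.dev/v1/logs"
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="https://api.uptrace.dev/v1/metrics"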
Note: Most OpenTelemetry SDKs support both transports. Use HTTP unless you're already familiar with gRPC.
Recommended Settings
For performance and reliability, we recommend the following (see the Rust sketch after this list):
- Use `BatchSpanProcessor` and `BatchLogProcessor` to batch spans and logs, reducing the number of export requests.
- Enable `gzip` compression to reduce bandwidth usage.
- Prefer `delta` metrics temporality (Uptrace converts cumulative metrics automatically).
- Use Protobuf encoding instead of JSON (Protobuf is more efficient and widely supported).
- Use HTTP transport for simplicity and fewer configuration issues (unless you're already familiar with gRPC).
- Optionally, use the AWS X-Ray ID generator to produce trace IDs compatible with AWS X-Ray.
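In Rust, several of these recommendations map directly onto the OTLP exporter builder. A minimal sketch, assuming the `gzip-tonic` feature used in the examples below is enabled and the code runs inside a function returning `Result`:
use std::time::Duration;
use opentelemetry_otlp::{Compression, MetricExporter, WithExportConfig, WithTonicConfig};
use opentelemetry_sdk::metrics::Temporality;

// gzip compression plus delta temporality on the gRPC metric exporter
let exporter = MetricExporter::builder()
    .with_tonic()
    .with_endpoint("https://api.uptrace.dev:4317")
    .with_compression(Compression::Gzip)   // reduce bandwidth usage
    .with_timeout(Duration::from_secs(10))
    .with_temporality(Temporality::Delta)  // Uptrace-preferred temporality
    .build()?;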
Common Environment Variables
You can use environment variables to configure resource attributes and propagators:
| Variable | Description |
|---|---|
| OTEL_RESOURCE_ATTRIBUTES | Comma-separated resource attributes, e.g., `service.name=myservice,service.version=1.0.0`. |
| OTEL_SERVICE_NAME | Sets the `service.name` attribute (takes precedence over OTEL_RESOURCE_ATTRIBUTES). |
| OTEL_PROPAGATORS | Comma-separated list of context propagators (default: `tracecontext,baggage`). |
Most language SDKs allow configuring the OTLP exporter entirely via environment variables:
# Endpoint (choose HTTP or gRPC)
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.uptrace.dev" # HTTP
#export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.uptrace.dev:4317" # gRPC
# Pass DSN for authentication
export OTEL_EXPORTER_OTLP_HEADERS="uptrace-dsn=<FIXME>"
# Performance optimizations
export OTEL_EXPORTER_OTLP_COMPRESSION=gzip
export OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION=BASE2_EXPONENTIAL_BUCKET_HISTOGRAM
export OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=DELTA
Configure `BatchSpanProcessor` to balance throughput and payload size:
export OTEL_BSP_EXPORT_TIMEOUT=10000 # Max export timeout (ms)
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=10000 # Avoid >32MB payloads
export OTEL_BSP_MAX_QUEUE_SIZE=30000 # Adjust for available memory
export OTEL_BSP_MAX_CONCURRENT_EXPORTS=2 # Parallel exports
Exporting Traces
The following example demonstrates how to export OpenTelemetry traces to Uptrace. You can find the complete example here.
Dependencies
Add these dependencies to your `Cargo.toml`:
[dependencies]
tokio = { version = "1", features = ["full"] }
tonic = { version = "0.13.1", features = ["tls-native-roots", "gzip"] }
opentelemetry = "0.30.0"
opentelemetry_sdk = { version = "0.30.0", features = ["rt-tokio"] }
opentelemetry-otlp = { version = "0.30.0", features = ["grpc-tonic", "gzip-tonic", "tls-roots", "trace"] }
opentelemetry-resource-detectors = "0.9"
Implementation
Run the following code with `UPTRACE_DSN=<YOUR_DSN> cargo run`, passing the DSN in an environment variable:
use std::thread;
use std::time::Duration;
use tonic::metadata::MetadataMap;
use opentelemetry::trace::{TraceContextExt, Tracer};
use opentelemetry::{global, KeyValue};
use opentelemetry_otlp::{WithExportConfig, WithTonicConfig};
use opentelemetry_resource_detectors::{
HostResourceDetector, OsResourceDetector, ProcessResourceDetector,
};
use opentelemetry_sdk::Resource;
use opentelemetry_sdk::{
propagation::TraceContextPropagator,
trace::{
BatchConfigBuilder, BatchSpanProcessor, RandomIdGenerator, Sampler, SdkTracerProvider,
},
};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
// Read Uptrace DSN from environment (format: https://uptrace.dev/get#dsn)
let dsn = std::env::var("UPTRACE_DSN").expect("Error: UPTRACE_DSN not found");
println!("Using DSN: {}", dsn);
let provider = build_tracer_provider(dsn)?;
global::set_tracer_provider(provider.clone());
global::set_text_map_propagator(TraceContextPropagator::new());
let tracer = global::tracer("example");
tracer.in_span("root-span", |cx| {
thread::sleep(Duration::from_millis(5));
tracer.in_span("GET /posts/:id", |cx| {
thread::sleep(Duration::from_millis(10));
let span = cx.span();
span.set_attribute(KeyValue::new("http.method", "GET"));
span.set_attribute(KeyValue::new("http.route", "/posts/:id"));
span.set_attribute(KeyValue::new("http.url", "http://localhost:8080/posts/123"));
span.set_attribute(KeyValue::new("http.status_code", 200));
});
tracer.in_span("SELECT", |cx| {
thread::sleep(Duration::from_millis(20));
let span = cx.span();
span.set_attribute(KeyValue::new("db.system", "mysql"));
span.set_attribute(KeyValue::new(
"db.statement",
"SELECT * FROM posts LIMIT 100",
));
});
let span = cx.span();
println!(
"View trace: https://app.uptrace.dev/traces/{}",
span.span_context().trace_id().to_string()
);
});
// Flush and shutdown the provider to ensure all data is exported
provider.force_flush()?;
provider.shutdown()?;
Ok(())
}
fn build_tracer_provider(
dsn: String,
) -> Result<SdkTracerProvider, Box<dyn std::error::Error + Send + Sync + 'static>> {
// Configure gRPC metadata with Uptrace DSN
let mut metadata = MetadataMap::with_capacity(1);
metadata.insert("uptrace-dsn", dsn.parse().unwrap());
// Create OTLP span exporter
let exporter = opentelemetry_otlp::SpanExporter::builder()
.with_tonic()
.with_tls_config(tonic::transport::ClientTlsConfig::new().with_native_roots())
.with_endpoint("https://api.uptrace.dev:4317")
.with_metadata(metadata)
.with_timeout(Duration::from_secs(10))
.build()?;
let batch_config = BatchConfigBuilder::default()
.with_max_queue_size(4096)
.with_max_export_batch_size(1024)
.with_scheduled_delay(Duration::from_secs(5))
.build();
let batch = BatchSpanProcessor::builder(exporter)
.with_batch_config(batch_config)
.build();
// Build the tracer provider
let provider = SdkTracerProvider::builder()
.with_span_processor(batch)
.with_resource(build_resource())
.with_sampler(Sampler::AlwaysOn)
.with_id_generator(RandomIdGenerator::default())
.build();
Ok(provider)
}
fn build_resource() -> Resource {
Resource::builder()
.with_detector(Box::new(OsResourceDetector))
.with_detector(Box::new(HostResourceDetector::default()))
.with_detector(Box::new(ProcessResourceDetector))
.with_attributes([
KeyValue::new("service.version", "1.2.3"),
KeyValue::new("deployment.environment", "production"),
])
.build()
}
Exporting Logs
The following example shows how to export OpenTelemetry logs to Uptrace using the tracing ecosystem. You can find the complete example here.
Additional Dependencies
Add these additional dependencies for logging:
[dependencies]
tokio = { version = "1", features = ["full"] }
tonic = { version = "0.13.1", features = ["tls-native-roots", "gzip"] }
opentelemetry = "0.30.0"
opentelemetry_sdk = { version = "0.30.0", features = ["rt-tokio", "logs"] }
opentelemetry-otlp = { version = "0.30.0", features = ["grpc-tonic", "gzip-tonic", "tls-roots", "logs"] }
opentelemetry-resource-detectors = "0.9.0"
opentelemetry-appender-tracing = "0.30.1"
tracing = { version = ">=0.1.40", features = ["std"]}
tracing-subscriber = { version = "0.3", features = ["env-filter","registry", "std", "fmt"] }
Implementation
Run the following code with `UPTRACE_DSN=<YOUR_DSN> cargo run`, passing the DSN in an environment variable:
use tonic::metadata::MetadataMap;
use opentelemetry::KeyValue;
use opentelemetry_appender_tracing::layer;
use opentelemetry_otlp::{WithExportConfig, WithTonicConfig};
use opentelemetry_resource_detectors::{
HostResourceDetector, OsResourceDetector, ProcessResourceDetector,
};
use opentelemetry_sdk::logs::SdkLoggerProvider;
use opentelemetry_sdk::Resource;
use tracing::error;
use tracing_subscriber::{prelude::*, EnvFilter};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
// Read Uptrace DSN from environment (format: https://uptrace.dev/get#dsn)
let dsn = std::env::var("UPTRACE_DSN").expect("Error: UPTRACE_DSN not found");
println!("Using DSN: {}", dsn);
// Initialize the OpenTelemetry LoggerProvider
let provider = init_logger_provider(dsn)?;
let filter_otel = EnvFilter::new("info")
.add_directive("hyper=off".parse().unwrap())
.add_directive("tonic=off".parse().unwrap())
.add_directive("h2=off".parse().unwrap())
.add_directive("reqwest=off".parse().unwrap());
let otel_layer = layer::OpenTelemetryTracingBridge::new(&provider).with_filter(filter_otel);
// Create a tracing::Fmt layer to print logs to stdout
// Default filter is `info` level and above, with `debug` and above for OpenTelemetry crates
let filter_fmt = EnvFilter::new("info").add_directive("opentelemetry=debug".parse().unwrap());
let fmt_layer = tracing_subscriber::fmt::layer()
.with_thread_names(true)
.with_filter(filter_fmt);
tracing_subscriber::registry()
.with(otel_layer)
.with(fmt_layer)
.init();
// Emit a test log event (this will be exported to Uptrace)
error!(
name: "my-event-name",
target: "my-system",
event_id = 20,
user_name = "otel",
user_email = "otel@opentelemetry.io",
message = "This is an example message"
);
// Flush and shutdown the provider to ensure all data is exported
provider.force_flush()?;
provider.shutdown()?;
Ok(())
}
fn init_logger_provider(
dsn: String,
) -> Result<SdkLoggerProvider, Box<dyn std::error::Error + Send + Sync + 'static>> {
// Configure gRPC metadata with Uptrace DSN
let mut metadata = MetadataMap::with_capacity(1);
metadata.insert("uptrace-dsn", dsn.parse().unwrap());
// Configure the OTLP log exporter (gRPC + TLS)
let exporter = opentelemetry_otlp::LogExporter::builder()
.with_tonic()
.with_tls_config(tonic::transport::ClientTlsConfig::new().with_native_roots())
.with_endpoint("https://api.uptrace.dev:4317")
.with_metadata(metadata)
.build()?;
// Build the logger provider with resource attributes
let provider = SdkLoggerProvider::builder()
.with_resource(build_resource())
.with_batch_exporter(exporter)
.build();
Ok(provider)
}
fn build_resource() -> Resource {
Resource::builder()
.with_detector(Box::new(OsResourceDetector))
.with_detector(Box::new(HostResourceDetector::default()))
.with_detector(Box::new(ProcessResourceDetector))
.with_attributes([
KeyValue::new("service.version", "1.2.3"),
KeyValue::new("deployment.environment", "production"),
])
.build()
}
Exporting Metrics
The following example demonstrates how to export OpenTelemetry metrics to Uptrace. You can find the complete example here.
Additional Dependencies
Add these additional dependencies for metrics:
[dependencies]
tokio = { version = "1", features = ["full"] }
tonic = { version = "0.13", features = ["tls-native-roots", "gzip"] }
opentelemetry = { version = "0.30", features = ["metrics"] }
opentelemetry_sdk = { version = "0.30", features = ["rt-tokio", "metrics"] }
opentelemetry-otlp = { version = "0.30", features = ["grpc-tonic", "gzip-tonic", "tls-roots", "metrics"] }
opentelemetry-resource-detectors = "0.9"
Implementation
Run the following code with `UPTRACE_DSN=<YOUR_DSN> cargo run`, passing the DSN in an environment variable:
use std::time::Duration;
use tonic::metadata::MetadataMap;
use opentelemetry::{global, KeyValue};
use opentelemetry_otlp::{WithExportConfig, WithTonicConfig};
use opentelemetry_resource_detectors::{
HostResourceDetector, OsResourceDetector, ProcessResourceDetector,
};
use opentelemetry_sdk::metrics::{PeriodicReader, SdkMeterProvider, Temporality};
use opentelemetry_sdk::Resource;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
// Read Uptrace DSN from environment (format: https://uptrace.dev/get#dsn)
let dsn = std::env::var("UPTRACE_DSN").expect("Error: UPTRACE_DSN not found");
println!("Using DSN: {}", dsn);
// Initialize the OpenTelemetry MeterProvider
let provider = init_meter_provider(dsn)?;
global::set_meter_provider(provider.clone());
// Create a meter and a histogram instrument
let meter = global::meter("app_or_crate_name");
let histogram = meter.f64_histogram("ex.com.three").build();
// Record some sample metrics
for i in 1..100000 {
histogram.record(0.5 + (i as f64) * 0.01, &[]);
tokio::time::sleep(Duration::from_millis(100)).await;
}
// Flush and shutdown the provider to ensure all data is exported
provider.force_flush()?;
provider.shutdown()?;
Ok(())
}
fn init_meter_provider(
dsn: String,
) -> Result<SdkMeterProvider, Box<dyn std::error::Error + Send + Sync + 'static>> {
// Configure gRPC metadata with Uptrace DSN
let mut metadata = MetadataMap::with_capacity(1);
metadata.insert("uptrace-dsn", dsn.parse().unwrap());
// Create OTLP metric exporter
let exporter = opentelemetry_otlp::MetricExporter::builder()
.with_tonic()
.with_tls_config(tonic::transport::ClientTlsConfig::new().with_native_roots())
.with_endpoint("https://api.uptrace.dev:4317")
.with_metadata(metadata)
.with_temporality(Temporality::Delta)
.build()?;
// Create periodic reader for exporting metrics
let reader = PeriodicReader::builder(exporter)
.with_interval(Duration::from_secs(15))
.build();
// Build the MeterProvider with reader
let provider = opentelemetry_sdk::metrics::SdkMeterProvider::builder()
.with_reader(reader)
.with_resource(build_resource())
.build();
Ok(provider)
}
fn build_resource() -> Resource {
Resource::builder()
.with_detector(Box::new(OsResourceDetector))
.with_detector(Box::new(HostResourceDetector::default()))
.with_detector(Box::new(ProcessResourceDetector))
.with_attributes([
KeyValue::new("service.version", "1.2.3"),
KeyValue::new("deployment.environment", "production"),
])
.build()
}
Integration with Rust Tracing
tokio-rs/tracing is a framework for instrumenting Rust programs to collect structured, event-based diagnostic information. It is widely used by popular Rust frameworks and libraries.
You can integrate tokio-rs/tracing with OpenTelemetry using the tracing_opentelemetry crate:
Additional Dependencies
[dependencies]
tracing-opentelemetry = "0.31" # use the release that matches your opentelemetry version (0.31 pairs with 0.30)
Implementation
use opentelemetry::trace::TracerProvider as _;
use tracing::{error, span};
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::Registry;
// Configure OpenTelemetry (using the tracer provider from the traces example);
// keep a handle to the provider so it can be shut down when done
let provider = build_tracer_provider(dsn)?;
let tracer = provider.tracer("example");
// Create a tracing layer with the configured tracer
let telemetry = tracing_opentelemetry::layer().with_tracer(tracer);
// Use the tracing subscriber `Registry`, or any other subscriber
// that implements `LookupSpan`
let subscriber = Registry::default().with(telemetry);
// Trace executed code
tracing::subscriber::with_default(subscriber, || {
// Spans will be sent to the configured OpenTelemetry exporter
let root = span!(tracing::Level::TRACE, "app_start", work_units = 2);
let _enter = root.enter();
error!("This event will be logged in the root span.");
});
// Flush and shut down the provider when done
provider.shutdown()?;
Auto-Instrumentation
OpenTelemetry Rust supports automatic instrumentation for popular libraries through the `tracing` ecosystem:
Supported Libraries
| Library | Instrumentation | Installation |
|---|---|---|
| HTTP Clients | | |
| reqwest | Automatic spans for HTTP requests | tracing-opentelemetry |
| hyper | HTTP server/client spans | Built-in tracing support |
| Web Frameworks | | |
| axum | Request/response tracing | Built-in tracing support |
| actix-web | HTTP request spans | tracing-actix-web |
| warp | Request filtering traces | Built-in tracing support |
| Databases | | |
| sqlx | SQL query tracing | Built-in tracing support |
| diesel | Database operation spans | Manual instrumentation |
| tokio-postgres | PostgreSQL query traces | Built-in tracing support |
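For example, with the `tracing` subscriber wired to OpenTelemetry (see the setup below), annotating a function with `#[instrument]` is enough to get a span around each `reqwest` call. A minimal sketch, assuming the reqwest crate has been added as a dependency (`fetch_page` and the URL are illustrative):
use tracing::instrument;

// `#[instrument]` creates a span per call; tracing-opentelemetry forwards it
// to the configured OTLP exporter.
#[instrument]
async fn fetch_page(url: &str) -> Result<String, reqwest::Error> {
    let body = reqwest::get(url).await?.text().await?;
    Ok(body)
}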
Tracing Integration Setup
To use automatic instrumentation with `tracing`:
use opentelemetry::trace::TracerProvider as _;
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
    // Initialize the tracer provider (reusing build_tracer_provider from the traces example)
    let dsn = std::env::var("UPTRACE_DSN").expect("Error: UPTRACE_DSN not found");
    let provider = build_tracer_provider(dsn)?;
    let tracer = provider.tracer("app");
    // Create tracing layer
    let telemetry = tracing_opentelemetry::layer().with_tracer(tracer);
    // Initialize subscriber
    tracing_subscriber::registry()
        .with(telemetry)
        .with(tracing_subscriber::EnvFilter::from_default_env())
        .with(tracing_subscriber::fmt::layer())
        .init();
    // Your application code here
    run_app().await?;
    // Flush and shut down the provider (the old global shutdown_tracer_provider
    // helper has been removed from recent opentelemetry releases)
    provider.shutdown()?;
    Ok(())
}
Framework Integration
Axum Integration
use axum::{extract::Path, http::StatusCode, response::Json, routing::get, Router};
use serde_json::{json, Value};
use tracing::{info, instrument};
#[tokio::main]
async fn main() {
// Initialize tracing (`init_tracing` is an app-specific wrapper around the setup shown above)
init_tracing().await;
let app = Router::new()
.route("/users/:id", get(get_user))
.route("/health", get(health_check));
let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
axum::serve(listener, app).await.unwrap();
}
#[instrument(skip_all, fields(user_id = %user_id))]
async fn get_user(Path(user_id): Path<u32>) -> Result<Json<Value>, StatusCode> {
info!("Fetching user data");
// Simulate a database call (`fetch_user_from_db` is an app-specific helper, not shown)
let user = fetch_user_from_db(user_id).await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
Ok(Json(json!({ "id": user_id, "name": user.name })))
}
#[instrument]
async fn health_check() -> &'static str {
"OK"
}
Actix-Web Integration
use actix_web::{web, App, HttpResponse, HttpServer, Result};
use tracing::{info, instrument};
#[tokio::main]
async fn main() -> std::io::Result<()> {
// Initialize tracing (`init_tracing` is an app-specific setup helper, as above)
init_tracing().await;
HttpServer::new(|| {
App::new()
.wrap(tracing_actix_web::TracingLogger::default())
.route("/users/{id}", web::get().to(get_user))
})
.bind("127.0.0.1:8080")?
.run()
.await
}
#[instrument(skip_all)]
async fn get_user(path: web::Path<u32>) -> Result<HttpResponse> {
let user_id = path.into_inner();
info!(user_id, "Fetching user");
Ok(HttpResponse::Ok().json(serde_json::json!({
"id": user_id,
"name": "John Doe"
})))
}
Troubleshooting
Common Issues
Issue: "Channel is full" errors
OpenTelemetry trace error occurred. cannot send span to the batch span processor because the channel is full
Solution: Increase batch processor queue size:
export OTEL_BSP_MAX_QUEUE_SIZE=30000
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=10000
export OTEL_BSP_MAX_CONCURRENT_EXPORTS=2
Issue: High memory usage
Solution: Adjust export frequency and batch size:
export OTEL_BSP_SCHEDULE_DELAY=1000 # Export every 1 second
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=5000
Issue: Missing spans in distributed traces
Solution: Ensure proper context propagation:
use opentelemetry::trace::Tracer;
use opentelemetry_http::HeaderExtractor;
// Extract the parent context from incoming HTTP headers
// (`request` is the incoming http::Request; `tracer` as in the examples above)
let parent_cx = opentelemetry::global::get_text_map_propagator(|propagator| {
    propagator.extract(&HeaderExtractor(request.headers()))
});
// Start a span that continues the extracted trace
let span = tracer.start_with_context("operation", &parent_cx);
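On the outgoing side, the counterpart is injecting the current context into request headers. A minimal sketch using `HeaderInjector`, where `headers` is assumed to be the `http::HeaderMap` of an outgoing request:
use opentelemetry::Context;
use opentelemetry_http::HeaderInjector;

// Inject the current trace context into outgoing HTTP headers
let cx = Context::current();
opentelemetry::global::get_text_map_propagator(|propagator| {
    propagator.inject_context(&cx, &mut HeaderInjector(&mut headers))
});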
Debug Mode
Enable debug logging to troubleshoot issues. Recent opentelemetry releases removed the global error handler; internal errors and diagnostics are emitted through the `tracing` ecosystem instead, so raise the log level for the OpenTelemetry crates:
export RUST_LOG="opentelemetry=debug,opentelemetry_otlp=debug"
Then install a subscriber that prints logs (the fmt layer from the examples above also works):
// Print internal OpenTelemetry diagnostics to stdout
tracing_subscriber::fmt::init();
Health Checks
Implement health checks for OpenTelemetry:
use opentelemetry::trace::{TraceContextExt, Tracer};
use std::time::{Duration, Instant};
pub async fn check_telemetry_health() -> Result<(), String> {
let tracer = opentelemetry::global::tracer("health-check");
let start = Instant::now();
tracer.in_span("health-check", |cx| {
let span = cx.span();
span.set_attribute(opentelemetry::KeyValue::new("check.type", "telemetry"));
// Simulate some work
std::thread::sleep(Duration::from_millis(1));
});
let duration = start.elapsed();
if duration > Duration::from_secs(1) {
return Err("Telemetry is slow".to_string());
}
Ok(())
}
Performance Tuning
For high-throughput applications:
use std::time::Duration;
use opentelemetry_sdk::trace::{BatchConfigBuilder, Sampler};

let batch_config = BatchConfigBuilder::default()
    .with_max_queue_size(100_000)       // Large queue for high throughput
    .with_max_export_batch_size(16_384) // Larger batches per request
    .with_scheduled_delay(Duration::from_millis(2000)) // Export every 2 seconds
    .build();
// Export concurrency can be tuned with OTEL_BSP_MAX_CONCURRENT_EXPORTS (see above)

// Use head-based sampling in production
let sampler = Sampler::TraceIdRatioBased(0.1); // Sample 10% of traces
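In a distributed system, the ratio sampler is usually wrapped in a parent-based sampler so that sampling decisions made by upstream services are respected. A short sketch:
use opentelemetry_sdk::trace::Sampler;

// Respect the parent's sampling decision; sample 10% of root traces
let sampler = Sampler::ParentBased(Box::new(Sampler::TraceIdRatioBased(0.1)));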
What's Next?
Now that you have OpenTelemetry Rust set up, explore these advanced topics:
Core APIs
- Tracing API - Advanced span management, attributes, events, and context propagation
- Metrics API - Counters, histograms, gauges, and custom metrics
- Propagation - Distributed tracing across services and async boundaries
Advanced Topics
- Sampling - Production sampling strategies and custom samplers
- Resource Detectors - Automatic resource detection for containers and cloud environments