OpenTelemetry Logs for Rust

This document covers OpenTelemetry Logs for Rust, focusing on the tracing ecosystem, which is the standard approach to structured logging in Rust. To learn how to install and configure the OpenTelemetry Rust SDK, see Getting started with OpenTelemetry Rust.

Prerequisites

Make sure your exporter is configured before you start instrumenting code. Follow Getting started with OpenTelemetry Rust or set up Direct OTLP Configuration first.

If you are not familiar with logs terminology like structured logging or log-trace correlation, read the introduction to OpenTelemetry Logs first.

Overview

OpenTelemetry provides two approaches for collecting logs in Rust:

  1. Log bridges (recommended): Integrate with the tracing ecosystem to automatically capture logs and correlate them with traces.
  2. Logs API: Use the native OpenTelemetry Logs API directly for maximum control (sketched below).

Log bridges are the recommended approach because they allow you to use familiar logging APIs while automatically adding trace context (trace_id, span_id) to your logs.
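
For reference, approach 2 looks roughly like the sketch below. This is a minimal sketch assuming the opentelemetry 0.30 logs API and a provider (an SdkLoggerProvider) configured as shown later on this page; the scope name and attribute values are placeholders.

rust
use opentelemetry::logs::{LogRecord, Logger, LoggerProvider, Severity};

// Obtain a logger from a configured SdkLoggerProvider (see the examples below)
let logger = provider.logger("my-component");

// Build and emit a log record by hand
let mut record = logger.create_log_record();
record.set_severity_number(Severity::Info);
record.set_severity_text("INFO");
record.set_body("User logged in".into());
record.add_attribute("user_id", "12345");
logger.emit(record);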

Tracing integration

The tracing crate is Rust's de facto standard for structured logging and instrumentation. OpenTelemetry provides an official bridge via opentelemetry-appender-tracing.

Installation

Add these dependencies to your Cargo.toml:

toml
[dependencies]
tokio = { version = "1", features = ["full"] }
tonic = { version = "0.13.1", features = ["tls-native-roots", "gzip"] }
opentelemetry = "0.30.0"
opentelemetry_sdk = { version = "0.30.0", features = ["rt-tokio", "logs"] }
opentelemetry-otlp = { version = "0.30.0", features = ["grpc-tonic", "gzip-tonic", "tls-roots", "logs"] }
opentelemetry-resource-detectors = "0.9.0"
opentelemetry-appender-tracing = "0.30.1"
tracing = { version = ">=0.1.40", features = ["std"] }
tracing-subscriber = { version = "0.3", features = ["env-filter", "registry", "std", "fmt"] }

Basic configuration

rust
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use tracing_subscriber::{prelude::*, EnvFilter};

// Initialize the OpenTelemetry LoggerProvider
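// (init_logger_provider and dsn come from the complete example below)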
let provider = init_logger_provider(dsn)?;

// Create the OpenTelemetry tracing bridge layer
let filter_otel = EnvFilter::new("info")
    .add_directive("hyper=off".parse().unwrap())
    .add_directive("tonic=off".parse().unwrap());
let otel_layer = OpenTelemetryTracingBridge::new(&provider).with_filter(filter_otel);

// Create a fmt layer for stdout output
let fmt_layer = tracing_subscriber::fmt::layer()
    .with_thread_names(true);

// Combine layers
tracing_subscriber::registry()
    .with(otel_layer)
    .with(fmt_layer)
    .init();

Complete example

rust
use tonic::metadata::MetadataMap;

use opentelemetry::KeyValue;
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use opentelemetry_otlp::{WithExportConfig, WithTonicConfig};
use opentelemetry_resource_detectors::{
    HostResourceDetector, OsResourceDetector, ProcessResourceDetector,
};
use opentelemetry_sdk::logs::SdkLoggerProvider;
use opentelemetry_sdk::Resource;

use tracing::{error, info, warn};
use tracing_subscriber::{prelude::*, EnvFilter};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
    // Read Uptrace DSN from environment
    let dsn = std::env::var("UPTRACE_DSN").expect("Error: UPTRACE_DSN not found");
    println!("Using DSN: {}", dsn);

    // Initialize the OpenTelemetry LoggerProvider
    let provider = init_logger_provider(dsn)?;

    // Configure the OpenTelemetry tracing bridge
    let filter_otel = EnvFilter::new("info")
        .add_directive("hyper=off".parse().unwrap())
        .add_directive("tonic=off".parse().unwrap())
        .add_directive("h2=off".parse().unwrap())
        .add_directive("reqwest=off".parse().unwrap());
    let otel_layer = OpenTelemetryTracingBridge::new(&provider).with_filter(filter_otel);

    // Create a fmt layer for stdout output
    let filter_fmt = EnvFilter::new("info").add_directive("opentelemetry=debug".parse().unwrap());
    let fmt_layer = tracing_subscriber::fmt::layer()
        .with_thread_names(true)
        .with_filter(filter_fmt);

    // Initialize the tracing subscriber
    tracing_subscriber::registry()
        .with(otel_layer)
        .with(fmt_layer)
        .init();

    // Emit log events (these will be exported to Uptrace)
    info!(
        target: "my-system",
        user_id = "12345",
        action = "login",
        "User logged in successfully"
    );

    warn!(
        target: "my-system",
        endpoint = "/api/users",
        latency_ms = 250,
        "Slow API response detected"
    );

    error!(
        name: "my-event-name",
        target: "my-system",
        event_id = 20,
        user_name = "otel",
        user_email = "otel@opentelemetry.io",
        message = "This is an example error message"
    );

    // Flush and shutdown the provider
    provider.force_flush()?;
    provider.shutdown()?;

    Ok(())
}

fn init_logger_provider(
    dsn: String,
) -> Result<SdkLoggerProvider, Box<dyn std::error::Error + Send + Sync + 'static>> {
    let mut metadata = MetadataMap::with_capacity(1);
    metadata.insert("uptrace-dsn", dsn.parse().unwrap());

    let exporter = opentelemetry_otlp::LogExporter::builder()
        .with_tonic()
        .with_tls_config(tonic::transport::ClientTlsConfig::new().with_native_roots())
        .with_endpoint("https://api.uptrace.dev:4317")
        .with_metadata(metadata)
        .build()?;

    let provider = SdkLoggerProvider::builder()
        .with_resource(build_resource())
        .with_batch_exporter(exporter)
        .build();

    Ok(provider)
}

fn build_resource() -> Resource {
    Resource::builder()
        .with_detector(Box::new(OsResourceDetector))
        .with_detector(Box::new(HostResourceDetector::default()))
        .with_detector(Box::new(ProcessResourceDetector))
        .with_attributes([
            KeyValue::new("service.name", "my-rust-service"),
            KeyValue::new("service.version", "1.0.0"),
            KeyValue::new("deployment.environment", "production"),
        ])
        .build()
}

See the GitHub example for the complete code.

Log-trace correlation

When you emit a log within an active trace span, the OpenTelemetry bridge automatically includes:

  • trace_id: Links the log to the entire distributed trace
  • span_id: Links the log to the specific operation that emitted it
  • trace_flags: Indicates whether the trace is sampled

This enables bidirectional navigation between logs and traces in your observability backend.

Combining logs and traces

To correlate logs with traces, use the tracing-opentelemetry and opentelemetry-appender-tracing crates together. Add tracing-opentelemetry to your Cargo.toml, choosing the release that matches your opentelemetry version:

rust
use opentelemetry::global;
use opentelemetry::trace::TracerProvider as _;
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use tracing::{info, instrument};
use tracing_subscriber::{prelude::*, EnvFilter};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
    // Read the Uptrace DSN from the environment
    let dsn = std::env::var("UPTRACE_DSN").expect("Error: UPTRACE_DSN not found");

    // Initialize the tracer provider (init_tracer_provider is not shown; it builds an
    // SdkTracerProvider the same way init_logger_provider builds the logger provider)
    let tracer_provider = init_tracer_provider(dsn.clone())?;

    // Take an SDK tracer for the tracing-opentelemetry layer,
    // then register the provider globally
    let tracer = tracer_provider.tracer("my-app");
    global::set_tracer_provider(tracer_provider.clone());

    // Initialize the logging provider
    let logger_provider = init_logger_provider(dsn)?;

    // Create tracing layer for spans
    let telemetry_layer = tracing_opentelemetry::layer().with_tracer(tracer);

    // Create logging layer
    let otel_log_layer = OpenTelemetryTracingBridge::new(&logger_provider);

    // Combine all layers
    tracing_subscriber::registry()
        .with(telemetry_layer)
        .with(otel_log_layer)
        .with(tracing_subscriber::fmt::layer())
        .init();

    // Logs within spans are automatically correlated
    process_request().await;

    Ok(())
}

#[instrument]
async fn process_request() {
    info!("Processing request"); // Automatically includes trace_id and span_id

    fetch_data().await;

    info!("Request completed");
}

#[instrument]
async fn fetch_data() {
    info!("Fetching data from database");
    // Database operation...
}

Manual correlation

If you can't use log bridges, manually inject trace context:

rust
use opentelemetry::trace::TraceContextExt;
use tracing::info;

fn log_with_context(message: &str) {
    let current_context = opentelemetry::Context::current();
    let span = current_context.span();

    if span.span_context().is_valid() {
        let span_context = span.span_context();
        info!(
            trace_id = %span_context.trace_id(),
            span_id = %span_context.span_id(),
            "{}",
            message
        );
    } else {
        info!("{}", message);
    }
}

Filtering logs

Control which logs are exported using EnvFilter:

rust
use tracing_subscriber::EnvFilter;

// Only export info and above
let filter = EnvFilter::new("info");

// Exclude noisy crates
let filter = EnvFilter::new("info")
    .add_directive("hyper=off".parse().unwrap())
    .add_directive("tonic=off".parse().unwrap())
    .add_directive("h2=off".parse().unwrap())
    .add_directive("reqwest=off".parse().unwrap());

// Per-module filtering
let filter = EnvFilter::new("info")
    .add_directive("my_app=debug".parse().unwrap())
    .add_directive("my_app::db=trace".parse().unwrap());

Environment variable configuration

bash
# Set log level via environment variable
export RUST_LOG=info

# Complex filtering
export RUST_LOG="info,my_app=debug,hyper=off"

Best practices

Use structured fields

Use key-value pairs for structured logging to enable filtering:

rust
use tracing::info;

// Good: Structured fields
info!(
    user_id = "12345",
    action = "login",
    duration_ms = 45,
    "User authenticated successfully"
);

// Bad: Formatting into message string
info!("User 12345 authenticated via login in 45ms");

Use appropriate log levels

rust
use tracing::{trace, debug, info, warn, error};

// Trace: Very detailed diagnostic information
trace!(buffer_size = 1024, "Reading from buffer");

// Debug: Useful for debugging
debug!(query = "SELECT * FROM users", "Executing database query");

// Info: Normal operational messages
info!(user_id = "12345", "User logged in");

// Warn: Potentially problematic situations
warn!(retry_count = 3, "Retrying failed operation");

// Error: Error events
error!(error = ?err, "Failed to process request");

Avoid logging sensitive data

Never log passwords, tokens, or PII:

rust
// Bad: Logging sensitive data
info!(password = password, "User login attempt");

// Good: Redact sensitive fields
info!(user_id = user_id, "User login attempt");

// Acceptable: Debug-format the error, but review error types first since they may contain sensitive info
error!(error = ?err, "Authentication failed");

Use spans for context

Prefer spans over logs for tracking operations:

rust
use tracing::{info, instrument, Span};

// Good: Use spans to track operations
#[instrument(skip(password))]
async fn authenticate_user(username: &str, password: &str) -> Result<User, AuthError> {
    info!("Starting authentication");

    let user = fetch_user(username).await?;
    verify_password(&user, password)?;

    info!("Authentication successful");
    Ok(user)
}

// The span automatically captures timing, success/failure, and correlates all logs

Performance considerations

Batch processing

The OpenTelemetry SDK batches logs before export. with_batch_exporter already enables batching with sensible defaults; to tune the queue and batch sizes, configure the batch processor explicitly:

rust
use std::time::Duration;
use opentelemetry_sdk::logs::{BatchConfigBuilder, BatchLogProcessor, SdkLoggerProvider};

// Tune the batch settings (exporter is the OTLP LogExporter from the complete example above)
let batch_config = BatchConfigBuilder::default()
    .with_max_queue_size(10000)
    .with_max_export_batch_size(1000)
    .with_scheduled_delay(Duration::from_secs(5))
    .build();

// Attach the processor to the provider instead of using with_batch_exporter
let processor = BatchLogProcessor::builder(exporter).with_batch_config(batch_config).build();
let provider = SdkLoggerProvider::builder().with_log_processor(processor).build();

Sampling

For high-volume applications, consider reducing the volume of exported logs. A simple approach is a stricter level filter in release builds:

rust
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use tracing_subscriber::{filter::LevelFilter, Layer};

// Only export errors in release builds; keep debug logs during development
let filter = if cfg!(debug_assertions) {
    LevelFilter::DEBUG
} else {
    LevelFilter::ERROR
};

// Apply the filter to the OpenTelemetry layer (the Layer import provides with_filter)
let otel_layer = OpenTelemetryTracingBridge::new(&provider).with_filter(filter);

Async runtime integration

Ensure you're using the correct runtime features:

toml
# For the Tokio runtime
opentelemetry_sdk = { version = "0.30", features = ["rt-tokio", "logs"] }

What's next?