OpenTelemetry Logs for Python
This document explains how to collect logs with OpenTelemetry in Python, with a focus on the standard library's logging module.
Prerequisites
Make sure your exporter is configured before you start instrumenting code. Follow Getting started with OpenTelemetry Python or set up Direct OTLP Configuration first.
If you are not familiar with logs terminology like structured logging or log-trace correlation, read the introduction to OpenTelemetry Logs first.
Overview
OpenTelemetry provides two approaches for collecting logs in Python:
- Log bridges (recommended): Integrate with existing logging libraries to automatically capture logs and correlate them with traces.
- Logs API: Use the native OpenTelemetry Logs API directly for maximum control.
Log bridges are the recommended approach because they allow you to use familiar logging APIs while automatically adding trace context (trace_id, span_id) to your logs.
Python logging integration
The logging module is Python's built-in logging framework. To export its records, configure the OTLP log exporter and attach the OpenTelemetry logging handler as described in Direct OTLP Configuration, then continue below for usage patterns. A condensed version of that setup is sketched next.
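For reference, a minimal sketch of that setup looks roughly like this. Note that the logs signal still lives under underscore-prefixed (provisional) module paths in the Python SDK, so exact imports may shift between versions; the endpoint is a placeholder:

import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor

# Wire the SDK logger provider to an OTLP exporter
logger_provider = LoggerProvider()
set_logger_provider(logger_provider)
logger_provider.add_log_record_processor(
    BatchLogRecordProcessor(OTLPLogExporter(endpoint="localhost:4317"))
)

# Bridge the standard logging module into OpenTelemetry
handler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
logging.getLogger().addHandler(handler)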
Using uptrace-python
If you're using uptrace-python, logging is automatically configured when you call configure_opentelemetry():
import logging

import uptrace
from opentelemetry import trace

# Configure OpenTelemetry (includes logging setup)
uptrace.configure_opentelemetry(
    service_name="myservice",
    service_version="1.0.0",
)

# Create a logger - records are automatically sent to Uptrace
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Create a tracer
tracer = trace.get_tracer("myservice")

with tracer.start_as_current_span("my-operation") as span:
    # This log record includes trace_id and span_id automatically
    logger.info("Operation started", extra={"key": "value"})
Auto-instrumentation
You can enable logging auto-instrumentation using environment variables:
export OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
export OTEL_LOGS_EXPORTER=otlp
opentelemetry-instrument python your_app.py
This automatically captures logs from the Python logging module and exports them via OTLP.
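With that in place, an otherwise unmodified application that only uses the standard logging module has its records exported. For example (the file name matches the command above; the log fields are illustrative):

# your_app.py - no OpenTelemetry imports required
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

logger.info("Handled request", extra={"route": "/checkout"})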
Other logging libraries
structlog
structlog is a popular structured logging library for Python. You can integrate it with OpenTelemetry:
import structlog
from opentelemetry import trace

def add_trace_context(logger, method_name, event_dict):
    """Add trace context to structlog events."""
    ctx = trace.get_current_span().get_span_context()
    if ctx.is_valid:
        event_dict["trace_id"] = format(ctx.trace_id, "032x")
        event_dict["span_id"] = format(ctx.span_id, "016x")
    return event_dict

# Configure structlog with the trace context processor
structlog.configure(
    processors=[
        add_trace_context,
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ],
)

logger = structlog.get_logger()
tracer = trace.get_tracer("myservice")

# Events include trace context when emitted inside a span
with tracer.start_as_current_span("my-operation"):
    logger.info("Processing request", user_id="12345")
loguru
loguru is another popular logging library. Add trace context using a custom format:
from loguru import logger
from opentelemetry import trace

def trace_context_format(record):
    """Add trace context to loguru records."""
    ctx = trace.get_current_span().get_span_context()
    if ctx.is_valid:
        record["extra"]["trace_id"] = format(ctx.trace_id, "032x")
        record["extra"]["span_id"] = format(ctx.span_id, "016x")
    else:
        # Provide defaults so the format string never raises a KeyError
        record["extra"]["trace_id"] = "0" * 32
        record["extra"]["span_id"] = "0" * 16

# Configure loguru with trace context
logger.remove()  # drop the default stderr sink
logger = logger.patch(trace_context_format)
logger.add(
    sink=lambda msg: print(msg, end=""),
    format="{time} | {level} | trace_id={extra[trace_id]} | {message}",
)
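Emitted inside a span, records then carry the real trace ID (assuming a tracer obtained via trace.get_tracer):

tracer = trace.get_tracer("myservice")

with tracer.start_as_current_span("my-operation"):
    logger.info("Processing request")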
Log-trace correlation
When you emit a log within an active trace span, OpenTelemetry automatically includes:
- trace_id: Links log to the entire distributed trace
- span_id: Links log to the specific operation
- trace_flags: Indicates if the trace is sampled
This enables bidirectional navigation between logs and traces in your observability backend.
Manual correlation
If you can't use log bridges, manually inject trace context:
import logging
from opentelemetry import trace

def log_with_context(logger, level, msg, **kwargs):
    """Log with trace context."""
    ctx = trace.get_current_span().get_span_context()
    if ctx.is_valid:
        kwargs["trace_id"] = format(ctx.trace_id, "032x")
        kwargs["span_id"] = format(ctx.span_id, "016x")
    logger.log(level, msg, extra=kwargs)

# Usage
logger = logging.getLogger(__name__)
log_with_context(logger, logging.INFO, "User logged in", user_id="12345")
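Alternatively, the opentelemetry-instrumentation-logging package can inject these fields into every record, so a standard Formatter can reference them. A sketch, assuming that package is installed (the otelTraceID and otelSpanID record attributes come from its injection):

import logging
from opentelemetry.instrumentation.logging import LoggingInstrumentor

# Inject otelTraceID / otelSpanID into every LogRecord
LoggingInstrumentor().instrument()

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s trace_id=%(otelTraceID)s span_id=%(otelSpanID)s %(message)s"
))
logging.getLogger().addHandler(handler)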
Accessing trace context
Get trace context information for custom logging:
from opentelemetry import trace

def get_trace_context():
    """Get the current trace context as a dictionary."""
    ctx = trace.get_current_span().get_span_context()
    if not ctx.is_valid:
        return {}
    return {
        "trace_id": format(ctx.trace_id, "032x"),
        "span_id": format(ctx.span_id, "016x"),
        "trace_flags": format(ctx.trace_flags, "02x"),
        "is_remote": ctx.is_remote,
    }

# Use in your logging
tracer = trace.get_tracer("myservice")
with tracer.start_as_current_span("my-operation"):
    context = get_trace_context()
    print(f"Current trace: {context}")
Best practices
Use structured logging
Use the extra parameter to add structured fields:
# Good: structured fields enable filtering and analysis
logger.info("Database query executed", extra={
    "query_type": "SELECT",
    "table": "users",
    "duration_ms": 45,
    "rows_affected": 1,
})

# Avoid: unstructured messages are harder to analyze
logger.info("SELECT query on users took 45ms and returned 1 row")
Log within span context
Always log within an active span for automatic correlation:
# Good: logs are correlated with the trace
with tracer.start_as_current_span("process-order") as span:
    logger.info("Processing order", extra={"order_id": order_id})
    process_order(order_id)
    logger.info("Order processed successfully")

# Less useful: this log is not correlated with the trace
logger.info("Processing order", extra={"order_id": order_id})
with tracer.start_as_current_span("process-order"):
    process_order(order_id)
Use appropriate log levels
Choose log levels based on the information type:
logger.debug("Detailed debugging information") # Development only
logger.info("General operational events") # Normal operations
logger.warning("Unexpected but handled events") # Potential issues
logger.error("Errors that need attention") # Failures
logger.critical("System-level failures") # Critical issues
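In production you typically raise the threshold so debug-level records are neither printed nor exported; with the standard library that is a logger-level setting:

import logging

logger = logging.getLogger("myservice")
logger.setLevel(logging.INFO)

logger.debug("Not emitted")  # below the threshold, dropped
logger.info("Emitted")       # passes the threshold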
Avoid logging sensitive data
Never log passwords, tokens, or PII:
# Bad: Logging sensitive data
logger.info("User login", extra={"password": password, "token": token})
# Good: Redact sensitive fields
logger.info("User login", extra={"user_id": user_id, "ip_address": ip})