OpenTelemetry Python distro for Uptrace

This document explains how to configure the OpenTelemetry Python SDK to export spans, metrics, and logs to Uptrace using OTLP.

To learn about the OpenTelemetry API, see the OpenTelemetry Python Tracing API and OpenTelemetry Python Metrics API.

Uptrace Python

uptrace-python is a thin wrapper over OpenTelemetry Python that configures the OpenTelemetry SDK to export data to Uptrace. It does not add any new functionality and is provided only for your convenience.

To install uptrace-python:

pip install uptrace

Configuration

You can configure the Uptrace client using a DSN (Data Source Name) from the project settings page. Add the following code to your app's main file (manage.py for Django):

WARNING

Gunicorn and uWSGI servers require special care. See application servers for details.

import uptrace
from opentelemetry import trace

# copy your project DSN here or use UPTRACE_DSN env var
uptrace.configure_opentelemetry(
    dsn="https://FIXME@api.uptrace.dev?grpc=4317",
    service_name="myservice",
    service_version="1.0.0",
    deployment_environment="production",
)

tracer = trace.get_tracer("app_or_package_name", "1.0.0")

The following configuration options are supported.

| Option | Description |
| --- | --- |
| dsn | A data source that is used to connect to uptrace.dev. For example, https://<token>@api.uptrace.dev?grpc=4317. |
| service_name | service.name resource attribute. For example, myservice. |
| service_version | service.version resource attribute. For example, 1.0.0. |
| deployment_environment | deployment.environment resource attribute. For example, production. |
| resource_attributes | Any other resource attributes. |
| resource | Resource contains attributes representing an entity that produces telemetry. Resource attributes are copied to all spans and events. |
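
For example, extra resource attributes can be passed via resource_attributes. A minimal sketch, assuming the option accepts a dict of attribute key-value pairs (the attribute names below are illustrative, not required):

import uptrace

# A minimal sketch: pass additional resource attributes alongside the DSN.
# The resource_attributes keys below are illustrative examples.
uptrace.configure_opentelemetry(
    dsn="https://FIXME@api.uptrace.dev?grpc=4317",
    service_name="myservice",
    service_version="1.0.0",
    deployment_environment="production",
    resource_attributes={"host.name": "myhost", "service.namespace": "backend"},
)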

You can also use environment variables to configure the client:

| Env var | Description |
| --- | --- |
| UPTRACE_DSN | A data source that is used to connect to uptrace.dev. For example, https://<token>@uptrace.dev/<project_id>. |
| OTEL_RESOURCE_ATTRIBUTES | Key-value pairs to be used as resource attributes. For example, service.name=myservice,service.version=1.0.0. |
| OTEL_SERVICE_NAME | Sets the value of the service.name resource attribute, for example, myservice. Takes precedence over OTEL_RESOURCE_ATTRIBUTES. |
| OTEL_PROPAGATORS | Propagators to be used as a comma-separated list. The default is tracecontext,baggage. |

See the OpenTelemetry documentation for details.
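
For example, with UPTRACE_DSN exported in the environment, the dsn argument can be omitted. A minimal sketch:

import uptrace

# Assumes UPTRACE_DSN is already set in the environment,
# so the dsn argument can be omitted.
uptrace.configure_opentelemetry(
    service_name="myservice",
    service_version="1.0.0",
)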

Quickstart

Spend 5 minutes to install the OpenTelemetry distro, generate your first trace, and click the link in your terminal to view the trace.

  • Step 1. Install uptrace-python:

pip install uptrace

  • Step 2. Copy the following code to main.py:

#!/usr/bin/env python3

import logging

import uptrace
from opentelemetry import trace

# Configure OpenTelemetry with sensible defaults.
uptrace.configure_opentelemetry(
    # Set dsn or UPTRACE_DSN env var.
    dsn="https://FIXME@api.uptrace.dev?grpc=4317",
    service_name="myservice",
    service_version="1.0.0",
)

# Create a tracer. Usually, tracer is a global variable.
tracer = trace.get_tracer("app_or_package_name", "1.0.0")

# Create a root span (a trace) to measure some operation.
with tracer.start_as_current_span("main-operation") as main:
    with tracer.start_as_current_span("GET /posts/:id") as child1:
        child1.set_attribute("http.method", "GET")
        child1.set_attribute("http.route", "/posts/:id")
        child1.set_attribute("http.url", "http://localhost:8080/posts/123")
        child1.set_attribute("http.status_code", 200)
        child1.record_exception(ValueError("error1"))

    with tracer.start_as_current_span("SELECT") as child2:
        child2.set_attribute("db.system", "mysql")
        child2.set_attribute("db.statement", "SELECT * FROM posts LIMIT 100")

    logging.error("Jackdaws love my big sphinx of quartz.")

    print("trace:", uptrace.trace_url(main))

# Send buffered spans and free resources.
uptrace.shutdown()
  • Step 3. Run the code to get a link for the generated trace:
python3 main.py
trace: https://uptrace.dev/traces/<trace_id>
  • Step 4. Follow the link to view the trace:

[Screenshot: basic trace]

Already using OTLP exporter?

If you are already using OTLP exporter, you can continue to use it with Uptrace by changing some configuration options.

To maximize performance and efficiency, consider the following recommendations when configuring OpenTelemetry SDK.

| Recommendation | Signals | Significance |
| --- | --- | --- |
| Use BatchSpanProcessor to export multiple spans in a single request. | All | Essential |
| Enable gzip compression to compress the data before sending and reduce the traffic cost. | All | Essential |
| Prefer delta metrics temporality, because such metrics are smaller and Uptrace must convert cumulative metrics to delta anyway. | Metrics | Recommended |
| Prefer Protobuf encoding over JSON. | All | Recommended |
| Use the AWS X-Ray ID generator for OpenTelemetry. | Traces, Logs | Optional |

To configure OpenTelemetry to send data to Uptrace, use the provided endpoint and pass the DSN via the uptrace-dsn header:

| Transport | Endpoint | Port |
| --- | --- | --- |
| gRPC | https://otlp.uptrace.dev:4317 | 4317 |
| HTTPS | https://otlp.uptrace.dev | 443 |
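
For example, with the Python OTLP/HTTP exporter, a minimal sketch might look like this. It assumes the opentelemetry-exporter-otlp-proto-http package and the standard /v1/traces path on the HTTPS endpoint above:

import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

dsn = os.environ.get("UPTRACE_DSN")

exporter = OTLPSpanExporter(
    # Standard OTLP/HTTP traces path; assumed here, not listed on this page.
    endpoint="https://otlp.uptrace.dev/v1/traces",
    # Pass the Uptrace DSN in the uptrace-dsn header.
    headers={"uptrace-dsn": dsn},
)

tracer_provider = TracerProvider(
    resource=Resource(attributes={"service.name": "myservice"})
)
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(tracer_provider)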

Most languages allow you to configure the OTLP exporter using environment variables:

# Uncomment the appropriate protocol for your programming language.
# Only for OTLP/gRPC
#export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp.uptrace.dev:4317"
# Only for OTLP/HTTP
#export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp.uptrace.dev"

# Pass Uptrace DSN in gRPC/HTTP headers.
export OTEL_EXPORTER_OTLP_HEADERS="uptrace-dsn=https://FIXME@api.uptrace.dev?grpc=4317"

# Enable gzip compression.
export OTEL_EXPORTER_OTLP_COMPRESSION=gzip

# Enable exponential histograms.
export OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION=BASE2_EXPONENTIAL_BUCKET_HISTOGRAM

# Prefer delta temporality.
export OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=DELTA

When configuring BatchSpanProcessor, use the following settings:

# Maximum allowed time to export data in milliseconds.
export OTEL_BSP_EXPORT_TIMEOUT=10000

# Maximum batch size.
# Using larger batch sizes can be problematic,
# because Uptrace rejects requests larger than 20MB.
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=10000

# Maximum queue size.
# Increase queue size if you have lots of RAM, for example,
# `10000 * number_of_gigabytes`.
export OTEL_BSP_MAX_QUEUE_SIZE=30000

# Max concurrent exports.
# Setting this to the number of available CPUs might be a good idea.
export OTEL_BSP_MAX_CONCURRENT_EXPORTS=2
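
The queue and batch limits can also be passed to BatchSpanProcessor directly in Python. A minimal sketch (ConsoleSpanExporter is used only to keep the snippet self-contained; in practice, pass your OTLP exporter as shown in the next section):

from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# ConsoleSpanExporter keeps this sketch runnable on its own.
span_processor = BatchSpanProcessor(
    ConsoleSpanExporter(),
    max_queue_size=30000,          # mirrors OTEL_BSP_MAX_QUEUE_SIZE
    max_export_batch_size=10000,   # mirrors OTEL_BSP_MAX_EXPORT_BATCH_SIZE
    export_timeout_millis=10000,   # mirrors OTEL_BSP_EXPORT_TIMEOUT
)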

Exporting traces

Here is how you can export OpenTelemetry traces to Uptrace following the recommendations above:

#!/usr/bin/env python3

import os

import grpc
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter,
)
from opentelemetry.sdk.extension.aws.trace import AwsXRayIdGenerator

dsn = os.environ.get("UPTRACE_DSN")
print("using DSN:", dsn)

resource = Resource(
    attributes={"service.name": "myservice", "service.version": "1.0.0"}
)
tracer_provider = TracerProvider(
    resource=resource,
    id_generator=AwsXRayIdGenerator(),
)
trace.set_tracer_provider(tracer_provider)

exporter = OTLPSpanExporter(
    endpoint="otlp.uptrace.dev:4317",
    # Set the Uptrace dsn here or use UPTRACE_DSN env var.
    headers=(("uptrace-dsn", dsn),),
    timeout=5,
    compression=grpc.Compression.Gzip,
)

span_processor = BatchSpanProcessor(
    exporter,
    max_queue_size=1000,
    max_export_batch_size=1000,
)
tracer_provider.add_span_processor(span_processor)

tracer = trace.get_tracer("app_or_package_name", "1.0.0")

with tracer.start_as_current_span("main") as span:
    trace_id = span.get_span_context().trace_id
    print(f"trace id: {trace_id:0{32}x}")

# Send buffered spans.
trace.get_tracer_provider().shutdown()

Exporting metrics

Here is how you can export OpenTelemetry metrics to Uptrace following the recommendations above:

#!/usr/bin/env python3

import os
import time

import grpc
from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
    OTLPMetricExporter,
)
from opentelemetry.sdk import metrics as sdkmetrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    AggregationTemporality,
    PeriodicExportingMetricReader,
)
from opentelemetry.sdk.resources import Resource

dsn = os.environ.get("UPTRACE_DSN")
print("using DSN:", dsn)

temporality_delta = {
    sdkmetrics.Counter: AggregationTemporality.DELTA,
    sdkmetrics.UpDownCounter: AggregationTemporality.DELTA,
    sdkmetrics.Histogram: AggregationTemporality.DELTA,
    sdkmetrics.ObservableCounter: AggregationTemporality.DELTA,
    sdkmetrics.ObservableUpDownCounter: AggregationTemporality.DELTA,
    sdkmetrics.ObservableGauge: AggregationTemporality.DELTA,
}

exporter = OTLPMetricExporter(
    endpoint="otlp.uptrace.dev:4317",
    headers=(("uptrace-dsn", dsn),),
    timeout=5,
    compression=grpc.Compression.Gzip,
    preferred_temporality=temporality_delta,
)
reader = PeriodicExportingMetricReader(exporter)

resource = Resource(
    attributes={"service.name": "myservice", "service.version": "1.0.0"}
)
provider = MeterProvider(metric_readers=[reader], resource=resource)
metrics.set_meter_provider(provider)

meter = metrics.get_meter("github.com/uptrace/uptrace-python", "1.0.0")
counter = meter.create_counter("some.prefix.counter", description="TODO")

while True:
    counter.add(1)
    time.sleep(1)

Exporting logs

Here is how you can export OpenTelemetry logs to Uptrace following the recommendations above:

#!/usr/bin/env python3

import os
import logging

import grpc
from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (
    OTLPLogExporter,
)
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource

dsn = os.environ.get("UPTRACE_DSN")
print("using DSN:", dsn)

resource = Resource(
    attributes={"service.name": "myservice", "service.version": "1.0.0"}
)
logger_provider = LoggerProvider(resource=resource)
set_logger_provider(logger_provider)

exporter = OTLPLogExporter(
    endpoint="otlp.uptrace.dev:4317",
    headers=(("uptrace-dsn", dsn),),
    timeout=5,
    compression=grpc.Compression.Gzip,
)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))

handler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
logging.getLogger().addHandler(handler)

logger = logging.getLogger("myapp.area1")
logger.error("Hyderabad, we have a major problem.")

logger_provider.shutdown()

Logging level

By default, the OpenTelemetry logging handler uses the logging.NOTSET level, which defaults to the WARNING level. You can override the level when configuring OpenTelemetry:

import logging

uptrace.configure_opentelemetry(
    ...
    logging_level=logging.ERROR,
)

You can also specify the logging level when you create a logger:

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.ERROR)

Async functions

If you are using asynchronous Python functions, use the following helper to start spans:

from collections.abc import AsyncGenerator
from contextlib import asynccontextmanager
from typing import Any

from opentelemetry.trace import Tracer


@asynccontextmanager
async def start_as_current_span_async(
    *args: Any,
    tracer: Tracer,
    **kwargs: Any,
) -> AsyncGenerator[None, None]:
    """Start a new span and set it as the current span.

    Args:
        *args: Arguments to pass to the tracer.start_as_current_span method
        tracer: Tracer to use to start the span
        **kwargs: Keyword arguments to pass to the tracer.start_as_current_span method

    Yields:
        None
    """
    with tracer.start_as_current_span(*args, **kwargs):
        yield

You can use it like this:

from opentelemetry.trace import get_tracer

tracer = get_tracer(__name__)

@start_as_current_span_async(tracer=tracer, name='my_func')
async def my_func() -> ...:
    ...
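
The same helper also works as an async context manager inside an async function. A minimal sketch (fetch_post and the sleep are placeholders for real async work):

import asyncio

from opentelemetry.trace import get_tracer

tracer = get_tracer(__name__)

async def fetch_post(post_id: int) -> None:
    # Wrap the awaited work in a span using the helper defined above.
    async with start_as_current_span_async("fetch-post", tracer=tracer):
        await asyncio.sleep(0.1)  # placeholder for real async work

asyncio.run(fetch_post(123))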

That is a temporary solution until issues #3270 and #3271 are resolved.

Application servers

Because BatchSpanProcessor spawns a background thread to export spans to OpenTelemetry backends, it does not work well with application servers like Gunicorn and uWSGI that use the pre-forking model to serve requests. During the fork, the child process inherits the lock held by the parent process and a deadlock occurs.

To work around that issue, configure OpenTelemetry from the post-fork hooks provided by Gunicorn and uWSGI.

Gunicorn

With Gunicorn, use the post_fork hook:

import uptrace

def post_fork(server, worker):
    uptrace.configure_opentelemetry(...)

See flask-gunicorn as an example.

uvicorn

If you are using Gunicorn + uvicorn with async frameworks like FastAPI, use the same post_fork hook in the Gunicorn config file:

import uptrace

def post_fork(server, worker):
    uptrace.configure_opentelemetry(...)

workers = 4
worker_class = "uvicorn.workers.UvicornWorker"

uWSGI

With uWSGI, use the postfork decorator:

from uwsgidecorators import postfork
import uptrace

@postfork
def init_tracing():
    uptrace.configure_opentelemetry(...)

See flask-uwsgi as an example.

SSL errors

If you are getting SSL errors like this:

ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED

Try to use different root certificates as a workaround:

export GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=/etc/ssl/certs/ca-certificates.crt

What's next?

Next, instrument more operations to get a more detailed picture. Try to prioritize network calls, disk operations, database queries, errors, and logs.

You can also create your own instrumentations using the OpenTelemetry Python Tracing API.
