OpenTelemetry Django: Traces, Metrics and Database Monitoring
OpenTelemetry instruments Django applications to collect request traces, database query spans, and custom metrics, exporting them to any OTLP-compatible backend. This guide covers automatic and manual instrumentation, WSGI and ASGI setup, database monitoring, and Celery integration.
For a complete overview of Python instrumentation options, see the OpenTelemetry Python guide.
Quick start: For zero-code setup without modifying your application, see the Python zero-code instrumentation guide.
Installation
Install the required packages. Python 3.9 or later is required.
pip install \
  opentelemetry-api==1.41.0 \
  opentelemetry-sdk==1.41.0 \
  opentelemetry-instrumentation-django==0.62b0 \
  opentelemetry-exporter-otlp==1.41.0
Check pypi.org/project/opentelemetry-api for newer versions before pinning.
Database instrumentation packages
Install the package matching your database backend:
# PostgreSQL (psycopg2)
pip install opentelemetry-instrumentation-psycopg2==0.62b0
# PostgreSQL (psycopg3)
pip install opentelemetry-instrumentation-psycopg==0.62b0
# MySQL (mysqlclient / dbapi)
pip install opentelemetry-instrumentation-dbapi==0.62b0
# SQLite
pip install opentelemetry-instrumentation-sqlite3==0.62b0
opentelemetry-instrumentation-psycopg2 is for the psycopg2 driver; opentelemetry-instrumentation-psycopg (without the 2) is for psycopg3.
Manual instrumentation
Django instrumentation relies on DJANGO_SETTINGS_MODULE being set before DjangoInstrumentor is called. Initialise everything in manage.py so the environment variable is already present:
# manage.py
import os
import sys
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.django import DjangoInstrumentor
def main():
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

    resource = Resource(attributes={SERVICE_NAME: 'my-django-app'})
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(
        BatchSpanProcessor(
            OTLPSpanExporter(endpoint='http://localhost:4317')
        )
    )
    trace.set_tracer_provider(provider)

    DjangoInstrumentor().instrument()

    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)

if __name__ == '__main__':
    main()
Replace http://localhost:4317 with your collector or OTLP backend address.
DjangoInstrumentor configuration
DjangoInstrumentor accepts several parameters that control what gets traced:
from opentelemetry.instrumentation.django import DjangoInstrumentor
DjangoInstrumentor().instrument(
    is_sql_commentor_enabled=True,  # annotate SQL queries with view/route context
    request_hook=request_hook,      # called after the span is created, before middleware
    response_hook=response_hook,    # called before the span closes, after middleware
)
request_hook fires before Django middleware runs — only the bare HttpRequest is available at this point. response_hook fires after all middleware has completed, so attributes like request.user and request.site are available there:
def response_hook(span, request, response):
    if request.user.is_authenticated:
        span.set_attribute('user.id', str(request.user.id))
        span.set_attribute('user.email', request.user.email)
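A request_hook counterpart runs before any middleware, so it should only touch data already present on the bare HttpRequest, such as the WSGI environ. A minimal sketch; client_ip is a hypothetical helper, not part of the instrumentation API:

```python
def client_ip(meta):
    """Extract the client IP from a WSGI-style META dict (hypothetical helper)."""
    forwarded = meta.get('HTTP_X_FORWARDED_FOR', '')
    if forwarded:
        # The first entry is the original client; later ones are proxies
        return forwarded.split(',')[0].strip()
    return meta.get('REMOTE_ADDR', '')

def request_hook(span, request):
    # request.user is NOT available yet -- auth middleware has not run
    span.set_attribute('http.client_ip', client_ip(request.META))
```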
To exclude specific URLs from tracing, set OTEL_PYTHON_DJANGO_EXCLUDED_URLS:
export OTEL_PYTHON_DJANGO_EXCLUDED_URLS="health,/metrics,^/static"
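The value is a comma-separated list of regular expressions, each of which may match anywhere in the URL. A rough sketch of the matching semantics, assuming the upstream ExcludeList behavior (patterns joined and tested with re.search):

```python
import re

def build_exclude_matcher(excluded_urls: str):
    """Roughly how OTEL_PYTHON_DJANGO_EXCLUDED_URLS is interpreted:
    comma-separated regexes; a URL is excluded if any pattern matches."""
    patterns = [p.strip() for p in excluded_urls.split(',') if p.strip()]
    regex = re.compile('|'.join(patterns))
    return lambda url: bool(regex.search(url))

# Matches the example value above
is_excluded = build_exclude_matcher('health,/metrics,^/static')
```

Note that an unanchored pattern like health also excludes /healthz and /my-health-page; anchor with ^ or $ when you need an exact prefix or suffix.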
Auto-instrumentation
For zero-code setup, use opentelemetry-distro which installs the bootstrap tool and auto-configures common options:
pip install opentelemetry-distro==0.62b0
# Scans installed packages and installs matching instrumentation libraries
opentelemetry-bootstrap -a install
# Run with auto-instrumentation
opentelemetry-instrument python manage.py runserver --noreload
The --noreload flag prevents Django's auto-reloader from forking the process and causing DjangoInstrumentor to run twice, which produces duplicate spans.
Configure the exporter and service name via environment variables:
export OTEL_SERVICE_NAME=my-django-app
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
WSGI and ASGI instrumentation
For WSGI servers (Gunicorn, uWSGI), call DjangoInstrumentor().instrument() inside a post-fork hook so each worker initialises its own tracer provider. Calling it at module level before forking means child processes inherit a provider whose exporters may share file descriptors or connections with the parent — this causes lost spans and race conditions.
Gunicorn (gunicorn.conf.py):
from opentelemetry.instrumentation.django import DjangoInstrumentor
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
def post_fork(server, worker):
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)
    DjangoInstrumentor().instrument()
uWSGI (app.py or startup module):
from uwsgidecorators import postfork
from opentelemetry.instrumentation.django import DjangoInstrumentor
@postfork
def init_tracing():
    # same provider setup as above
    DjangoInstrumentor().instrument()
For ASGI servers (Channels, Daphne, Uvicorn), install the ASGI instrumentation in addition and run bootstrap to pick up the ASGI middleware:
pip install opentelemetry-instrumentation-asgi==0.62b0
opentelemetry-bootstrap -a install
DjangoInstrumentor().instrument() still handles the Django layer; opentelemetry-instrumentation-asgi handles the lower ASGI layer underneath.
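If you wire the ASGI layer manually rather than relying on bootstrap, the usual pattern is to wrap the Django ASGI application. A sketch, assuming a project named myproject (adjust module paths to your own):

```python
# myproject/asgi.py -- a sketch, not the only way to wire this
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

from django.core.asgi import get_asgi_application
from opentelemetry.instrumentation.asgi import OpenTelemetryMiddleware

# DjangoInstrumentor still produces the Django-layer spans; wrapping the
# application adds spans for the underlying ASGI receive/send cycle.
application = OpenTelemetryMiddleware(get_asgi_application())
```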
Database query monitoring
OpenTelemetry captures each query as a child span with attributes including db.statement (the SQL text), db.operation (SELECT, INSERT, etc.), db.name, and the span duration. These attributes let you identify slow queries directly in a trace view.
Initialise the database instrumentor early in your startup, before any database connections are made:
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor
from opentelemetry.instrumentation.django import DjangoInstrumentor
Psycopg2Instrumentor().instrument()
DjangoInstrumentor().instrument()
For Gunicorn or uWSGI deployments, call both instrumentors inside the post-fork hook.
Detecting N+1 queries: In your trace view, look for many repeated identical db.statement spans within a single request trace. If SELECT * FROM app_product WHERE id = ? appears 50 times inside one HTTP span, Django's ORM is issuing a query per related object instead of using select_related or prefetch_related.
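If your backend exposes span data programmatically, this check can be automated by grouping db.statement values within a trace. A sketch, assuming you already have the SQL strings for one request (the normalization is deliberately crude; real SQL fingerprinting is more involved):

```python
import re
from collections import Counter

def normalize(sql: str) -> str:
    """Collapse literal values so structurally identical queries group together."""
    sql = re.sub(r"'[^']*'", '?', sql)   # string literals
    sql = re.sub(r'\b\d+\b', '?', sql)   # numeric literals
    return re.sub(r'\s+', ' ', sql).strip()

def find_n_plus_one(db_statements, threshold=10):
    """Return normalized statements repeated >= threshold times in one trace."""
    counts = Counter(normalize(s) for s in db_statements)
    return {stmt: n for stmt, n in counts.items() if n >= threshold}

# Example: 50 point lookups inside one request trace
statements = ['SELECT * FROM app_product WHERE id = %d' % i for i in range(50)]
suspects = find_n_plus_one(statements)
```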
For SQL commentor configuration, see DjangoInstrumentor configuration.
Custom spans
OpenTelemetry automatically traces HTTP requests. Add start_as_current_span for specific operations that need their own span:
from opentelemetry import trace
tracer = trace.get_tracer(__name__)
def process_order(request, order_id):
    with tracer.start_as_current_span('order.process') as span:
        span.set_attribute('order.id', order_id)
        span.set_attribute('user.id', str(request.user.id))

        result = run_processing_pipeline(order_id)
        span.set_attribute('order.status', result.status)
        return result
For error capture, record the exception on the current span:
from django.http import JsonResponse
from opentelemetry import trace
from opentelemetry.trace import StatusCode

def my_view(request):
    span = trace.get_current_span()
    try:
        result = risky_operation()
        return JsonResponse({'result': result})
    except Exception as e:
        span.record_exception(e)
        span.set_status(StatusCode.ERROR, str(e))
        return JsonResponse({'error': 'failed'}, status=500)
Django metrics
OpenTelemetry collects metrics alongside traces using the same SDK. Configure a MeterProvider with an OTLP exporter in your application startup:
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint='http://localhost:4317')
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
Once configured, create instruments in your views or services:
meter = metrics.get_meter(__name__)

request_counter = meter.create_counter(
    name='django.requests.total',
    description='Total HTTP requests',
)

def my_view(request):
    # Prefer the route pattern over the raw path to keep label cardinality low
    route = request.resolver_match.route if request.resolver_match else request.path
    request_counter.add(1, {'http.method': request.method, 'http.route': route})
    ...
For full metrics documentation including histograms and gauges, see the OpenTelemetry Python guide.
Celery task tracing
OpenTelemetry propagates trace context automatically from a Django HTTP request into a Celery task, so the HTTP span and the task span appear in the same trace.
Install the Celery instrumentation:
pip install opentelemetry-instrumentation-celery==0.62b0
For Celery workers, initialise the instrumentor inside the worker_process_init signal. Calling CeleryInstrumentor().instrument() at module level causes BatchSpanProcessor to initialise before the process forks, which breaks the exporter thread in child workers:
# celery.py
from celery import Celery
from celery.signals import worker_process_init
from opentelemetry.instrumentation.celery import CeleryInstrumentor
app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

@worker_process_init.connect(weak=False)
def init_celery_tracing(*args, **kwargs):
    # Initialise your TracerProvider here, then:
    CeleryInstrumentor().instrument()
Context propagation works automatically — when a view calls task.delay(), the current trace ID is embedded in the task headers and restored in the worker process, linking the task span to the originating HTTP request.
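Under the hood this uses W3C trace context: a traceparent header of the form 00-{trace_id}-{span_id}-{flags}. A simplified sketch of the inject/extract round trip the propagator performs (the real propagator also handles tracestate and validation):

```python
def inject_traceparent(headers, trace_id: int, span_id: int, sampled: bool = True):
    """Embed W3C trace context into a headers dict, as done for Celery tasks."""
    headers['traceparent'] = '00-{:032x}-{:016x}-{:02x}'.format(
        trace_id, span_id, 1 if sampled else 0)
    return headers

def extract_traceparent(headers):
    """Parse the traceparent header back into (trace_id, span_id, sampled)."""
    version, trace_id, span_id, flags = headers['traceparent'].split('-')
    return int(trace_id, 16), int(span_id, 16), bool(int(flags, 16) & 1)

headers = inject_traceparent({}, trace_id=0xabc123, span_id=0x42)
trace_id, span_id, sampled = extract_traceparent(headers)
```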
Docker deployment
# docker-compose.yml
services:
  django:
    build: .
    ports:
      - '8000:8000'
    environment:
      - DJANGO_SETTINGS_MODULE=myproject.settings
      - OTEL_SERVICE_NAME=my-django-app
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://collector:4317
      - OTEL_EXPORTER_OTLP_PROTOCOL=grpc
    depends_on:
      - db
      - collector

  collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ['--config=/etc/otel-collector-config.yaml']
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - '4317:4317'
      - '4318:4318'

  db:
    image: postgres:16
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
For collector configuration patterns, see the OpenTelemetry Docker guide.
Production configuration
At high request volumes, sampling every trace is too expensive. Configure a ratio-based sampler via environment variables:
export OTEL_TRACES_SAMPLER=parentbased_traceidratio
export OTEL_TRACES_SAMPLER_ARG=0.1
Or in code when building the TracerProvider:
from opentelemetry.sdk.trace.sampling import ParentBasedTraceIdRatio
provider = TracerProvider(
    sampler=ParentBasedTraceIdRatio(0.1),
    resource=resource,
)
parentbased_traceidratio at 0.1 samples 10% of new root traces while respecting the sampling decision from upstream services so distributed traces stay coherent.
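The decision logic can be sketched in plain Python. This is an illustrative simplification of the SDK's sampler, not its exact implementation: the ratio maps to a threshold over the low 64 bits of the trace ID, so the decision is deterministic per trace, and any parent decision takes precedence:

```python
def trace_id_ratio_sampled(trace_id: int, ratio: float) -> bool:
    """Sketch of ratio sampling: compare the low 64 bits of the trace ID
    against a threshold derived from the ratio."""
    threshold = int(ratio * (1 << 64))
    return (trace_id & ((1 << 64) - 1)) < threshold

def parent_based_sample(parent_sampled, trace_id: int, ratio: float) -> bool:
    """Sketch of ParentBased: honour the upstream decision when a parent
    exists; apply the ratio only at the trace root (parent_sampled is None)."""
    if parent_sampled is not None:
        return parent_sampled
    return trace_id_ratio_sampled(trace_id, ratio)
```

Because the decision is a pure function of the trace ID, every service that sees the same trace makes the same call, which is what keeps distributed traces intact under sampling.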
Troubleshooting
No spans appearing
Verify DJANGO_SETTINGS_MODULE is set before DjangoInstrumentor().instrument() is called. Check the OTLP endpoint is reachable: curl -v http://your-collector:4317. Enable debug logging:
import logging
logging.getLogger('opentelemetry').setLevel(logging.DEBUG)
Duplicate spans in development
Always use --noreload with manage.py runserver. Without it, Django restarts the process after the initial import, causing DjangoInstrumentor to run twice.
Worker spans not appearing with Gunicorn or uWSGI
DjangoInstrumentor().instrument() must be called inside the post-fork hook, not at module level. See the WSGI and ASGI instrumentation section.
Duplicate SQL comments when using psycopg2
If both DjangoInstrumentor(is_sql_commentor_enabled=True) and Psycopg2Instrumentor(enable_commenter=True) are active simultaneously, SQL comments will be appended twice. Enable SQL commentor on only one of them.
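To make the failure mode concrete, here is a rough sketch of sqlcommenter-style annotation (the key names and escaping are illustrative, not the library's exact output). Running two commentors means this transformation is applied twice, producing two trailing comments:

```python
import urllib.parse

def add_sql_comment(sql: str, **meta) -> str:
    """Append key='value' pairs as a trailing SQL comment so request context
    (view, route) shows up in database logs. Illustrative sketch only."""
    comment = ','.join(
        "{}='{}'".format(k, urllib.parse.quote(str(v)))
        for k, v in sorted(meta.items()))
    return '{} /*{}*/'.format(sql, comment)

annotated = add_sql_comment(
    'SELECT * FROM app_product',
    controller='product_list', route='products/')
```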
What is Uptrace?
Uptrace is an open source APM for OpenTelemetry that supports distributed tracing, metrics, and logs. You can use it to monitor applications and troubleshoot issues.
Uptrace comes with an intuitive query builder, rich dashboards, alerting rules with notifications, and integrations for most languages and frameworks. It can process billions of spans on a single server at a fraction of the cost of hosted alternatives.
Try it via the cloud demo (no login required) or run it locally with Docker. Source code on GitHub.
FAQ
- Which Python version does OpenTelemetry Django support? Python 3.9 or later; recent releases dropped support for Python 3.7 and 3.8. Check pypi.org/project/opentelemetry-instrumentation-django for the current minimum version.
- How do I instrument Django with ASGI? Install opentelemetry-instrumentation-asgi==0.62b0, then run opentelemetry-bootstrap -a install. DjangoInstrumentor().instrument() still handles the Django layer; the ASGI package handles the lower runtime layer. See the WSGI and ASGI instrumentation section.
- Can I trace Celery tasks from Django requests? Yes. Install opentelemetry-instrumentation-celery==0.62b0 and initialise CeleryInstrumentor().instrument() inside worker_process_init. Trace context propagates automatically from the Django request span into the task span; no manual wiring is needed.
- How do I avoid double instrumentation in development? Use --noreload when running the development server: python manage.py runserver --noreload. Django's auto-reloader forks the process after the initial import, causing DjangoInstrumentor to run twice if initialised at module level.
What's next
- OpenTelemetry Python guide — full Python SDK documentation
- OpenTelemetry Celery guide — detailed Celery instrumentation
- OpenTelemetry PostgreSQL guide — PostgreSQL monitoring patterns
- OpenTelemetry Flask guide — Flask instrumentation
- OpenTelemetry FastAPI guide — FastAPI instrumentation