OpenTelemetry NGINX Instrumentation
The ngx_otel_module adds distributed tracing to NGINX and NGINX Plus. It automatically traces HTTP requests, propagates W3C trace context to upstream services, and exports spans via OTLP — with roughly 10-15% request-latency overhead in NGINX's own benchmarks, versus 50%+ for earlier third-party tracing modules.
Note on examples: This guide uses Uptrace as the OpenTelemetry backend. OpenTelemetry is vendor-neutral — replace the endpoint and headers with any OTLP-compatible backend (Jaeger, Grafana Tempo, etc.).
Quick Start
| Step | Action | Detail |
|---|---|---|
| 1. Install | Add the dynamic module | apt install nginx-module-otel |
| 2. Load | Add to nginx.conf | load_module modules/ngx_otel_module.so; |
| 3. Configure | Point at your collector | otel_exporter { endpoint localhost:4317; } |
| 4. Enable | Turn on per-location | otel_trace on; otel_trace_context propagate; |
Installation
The OpenTelemetry module is available for both NGINX Open Source and NGINX Plus as a dynamic module.
NGINX Open Source
For NGINX Open Source, build the module from source or use pre-built packages:
From Package Repository (Ubuntu/Debian):
# Add NGINX repository
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
| sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" \
| sudo tee /etc/apt/sources.list.d/nginx.list
# Install NGINX and OpenTelemetry module
sudo apt update
sudo apt install nginx nginx-module-otel
From Source:
# The module is built with CMake against a configured NGINX source tree
# of the same version you run (not via --add-dynamic-module).
# First, inside the NGINX source directory:
./configure --with-compat
# Then clone and build the module
git clone https://github.com/nginxinc/nginx-otel.git
cd nginx-otel
mkdir build && cd build
cmake -DNGX_OTEL_NGINX_BUILD_DIR=/path/to/nginx/objs ..
make
# Copy module to NGINX modules directory
sudo cp ngx_otel_module.so /etc/nginx/modules/
NGINX Plus
For NGINX Plus subscribers, install the pre-built module:
sudo apt install nginx-plus-module-otel
Verify Installation
Confirm the module file is in place. A dynamic module installed from packages does not appear in nginx -V output (which lists only compile-time options), so check the modules directory instead:
ls /etc/nginx/modules/ | grep otel
After adding the load_module directive (next section), run nginx -t to confirm the module loads cleanly.
Configuration
Loading the Module
Add the module to your main NGINX configuration (nginx.conf):
load_module modules/ngx_otel_module.so;
events {}
http {
    # OpenTelemetry configuration goes here
}
Basic Configuration
Configure the exporter and enable tracing:
load_module modules/ngx_otel_module.so;

http {
    # Configure OTel exporter
    otel_exporter {
        endpoint localhost:4317;
    }

    # Set service name
    otel_service_name "nginx-gateway";

    server {
        listen 80;
        server_name example.com;

        location / {
            # Enable tracing for this location
            otel_trace on;

            # Inject trace context into upstream requests
            otel_trace_context propagate;

            proxy_pass http://backend;
        }
    }
}
Exporter Configuration
The otel_exporter directive configures OTLP/gRPC export parameters:
http {
    otel_exporter {
        # OTLP/gRPC endpoint
        endpoint https://api.uptrace.dev:4317;

        # Custom headers for authentication
        header uptrace-dsn "https://<secret>@api.uptrace.dev?grpc=4317";

        # TLS configuration
        trusted_certificate /etc/nginx/ssl/ca-bundle.crt;

        # Export interval (default: 5s)
        interval 5s;

        # Batch size (default: 512 spans)
        batch_size 512;

        # Pending batches per worker (default: 4)
        batch_count 4;
    }
}
Trace Context Propagation
Control how NGINX handles W3C trace context headers:
server {
    location /api {
        otel_trace on;

        # extract   - use existing trace context from the request
        # inject    - add new context, overwriting existing headers
        # propagate - update existing context (extract + inject)
        # ignore    - skip header processing
        otel_trace_context propagate;

        proxy_pass http://backend;
    }
}
| Mode | Behavior |
|---|---|
| extract | Uses trace context from incoming request headers |
| inject | Creates new trace context, overwriting any existing headers |
| propagate | Combines extract and inject (recommended) |
| ignore | No trace context processing |
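In propagate mode, NGINX injects the W3C traceparent header into upstream requests. The header packs four lowercase-hex fields separated by dashes. A short Python sketch (illustrative only, not part of the module) of how an upstream service might decode it:

```python
import re

# W3C Trace Context traceparent: version-traceid-parentid-flags
TRACEPARENT = re.compile(
    r"^(?P<version>[0-9a-f]{2})-(?P<trace_id>[0-9a-f]{32})"
    r"-(?P<parent_id>[0-9a-f]{16})-(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(header: str) -> dict:
    """Decode a traceparent header into its fields."""
    m = TRACEPARENT.match(header)
    if m is None:
        raise ValueError(f"malformed traceparent: {header!r}")
    fields = m.groupdict()
    # Bit 0 of the flags byte is the 'sampled' flag
    fields["sampled"] = bool(int(fields["flags"], 16) & 0x01)
    return fields

ctx = parse_traceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
print(ctx["trace_id"], ctx["sampled"])
# → 0af7651916cd43dd8448eb211c80319c True
```

The sampled flag is what `$otel_parent_sampled` exposes on the NGINX side.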
Custom Span Names and Attributes
Customize span names and add attributes:
server {
    location /checkout {
        otel_trace on;

        # Custom span name
        otel_span_name "checkout_request";

        # Add custom attributes
        otel_span_attr "http.endpoint" "/checkout";
        otel_span_attr "user.region" "$geoip_country_code";
        otel_span_attr "cache.status" "$upstream_cache_status";

        proxy_pass http://checkout-service;
    }
}
Advanced Patterns
Dynamic Trace Sampling
Implement percentage-based sampling using NGINX variables:
http {
    otel_exporter {
        endpoint localhost:4317;
    }

    # Sample 10% of traces
    split_clients "$otel_trace_id" $trace_sampler {
        10% on;
        *   off;
    }

    server {
        location / {
            # Dynamic sampling based on variable
            otel_trace $trace_sampler;
            otel_trace_context propagate;
            proxy_pass http://backend;
        }
    }
}
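split_clients makes a deterministic decision by hashing the supplied variable into a bucket (NGINX uses MurmurHash2 internally). A hedged Python sketch of the same mechanics, using MD5 as a stand-in hash — the exact bucket boundaries differ from NGINX's, but the behavior (stable per-ID decisions that converge on the configured rate) is the same:

```python
import hashlib

def sample_decision(trace_id: str, percent: float) -> bool:
    """Deterministically map an ID into [0, 100) and compare to the rate."""
    # MD5 is a stand-in; nginx's split_clients uses MurmurHash2
    bucket = int.from_bytes(hashlib.md5(trace_id.encode()).digest()[:4], "big")
    return bucket / 0x100000000 * 100 < percent

# Same ID -> same decision on every request (stable sampling)
assert sample_decision("0af7651916cd43dd8448eb211c80319c", 10) == \
       sample_decision("0af7651916cd43dd8448eb211c80319c", 10)

# Over many IDs, the sampled share approaches the configured 10%
ids = [f"{i:032x}" for i in range(10_000)]
rate = sum(sample_decision(t, 10) for t in ids) / len(ids) * 100
print(f"sampled ~ {rate:.1f}%")
```

Because the decision is keyed on `$otel_trace_id`, all requests belonging to one trace get the same verdict, so sampled traces stay complete.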
Conditional Tracing
Enable tracing based on request properties:
http {
    # Trace only requests with specific header
    map $http_x_debug_trace $enable_tracing {
        default off;
        "true"  on;
        "1"     on;
    }

    server {
        location / {
            otel_trace $enable_tracing;
            otel_trace_context propagate;
            proxy_pass http://backend;
        }
    }
}
Multi-Environment Configuration
Configure different exporters per environment:
http {
    # Production exporter
    otel_exporter {
        endpoint https://prod-otel-collector:4317;
        # NGINX does not expand environment variables in its config;
        # render the token into this file at deploy time (e.g. with envsubst)
        header authorization "Bearer ${PROD_TOKEN}";
        trusted_certificate /etc/nginx/ssl/prod-ca.crt;
    }

    otel_service_name "nginx-prod";

    server {
        listen 443 ssl;
        server_name api.example.com;

        location / {
            otel_trace on;
            otel_trace_context propagate;

            # Add environment tag
            otel_span_attr "deployment.environment" "production";

            proxy_pass http://backend;
        }
    }
}
Embedded Variables
Access trace identifiers in NGINX configuration:
http {
    # log_format is only valid at http level, not inside location
    log_format trace '$remote_addr - $otel_trace_id [$time_local] '
                     '"$request" $status $body_bytes_sent';

    server {
        location / {
            otel_trace on;

            # Add trace ID to response headers
            add_header X-Trace-ID $otel_trace_id;

            # Log trace information using the format defined above
            access_log /var/log/nginx/access.log trace;

            proxy_pass http://backend;

            # Pass trace ID to upstream
            proxy_set_header X-Trace-ID $otel_trace_id;
            proxy_set_header X-Span-ID $otel_span_id;
        }
    }
}
Available variables:
| Variable | Description |
|---|---|
| $otel_trace_id | Current trace identifier |
| $otel_span_id | Current span identifier |
| $otel_parent_id | Parent span identifier |
| $otel_parent_sampled | Parent sampling decision (0 or 1) |
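With $otel_trace_id written into access logs via the trace log_format, any log line can be joined back to its distributed trace. A small Python sketch of extracting the trace ID from such a line (the sample line is fabricated for illustration):

```python
import re

# Matches the 'trace' log_format:
# '$remote_addr - $otel_trace_id [$time_local] "$request" $status $body_bytes_sent'
LOG_LINE = re.compile(
    r'^(?P<addr>\S+) - (?P<trace_id>[0-9a-f]{32}) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+)$'
)

line = ('203.0.113.7 - 0af7651916cd43dd8448eb211c80319c '
        '[10/Oct/2024:13:55:36 +0000] "GET /api/users HTTP/1.1" 200 612')

m = LOG_LINE.match(line)
print(m["trace_id"])
# → 0af7651916cd43dd8448eb211c80319c
```

The extracted ID can then be pasted into your tracing backend's search to pull up the full request trace.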
NGINX Ingress Controller
For Kubernetes environments, enable OpenTelemetry in the NGINX Ingress Controller:
ConfigMap Configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  enable-opentelemetry: "true"
  otlp-collector-host: "otel-collector.observability.svc.cluster.local"
  otlp-collector-port: "4317"
  otel-service-name: "nginx-ingress"
  otel-sampler: "AlwaysOn"
  otel-sampler-ratio: "1.0"
Helm Installation
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--set controller.config.enable-opentelemetry=true \
--set controller.config.otlp-collector-host=otel-collector.observability.svc \
--set controller.config.otlp-collector-port=4317 \
--set controller.config.otel-service-name=nginx-ingress
Per-Ingress Configuration
Configure tracing for specific Ingress resources:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-opentelemetry: "true"
    nginx.ingress.kubernetes.io/opentelemetry-operation-name: "api-gateway"
    nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span: "true"
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 8080
Docker Configuration
Dockerfile
# nginx:1.27-otel bundles ngx_otel_module.so — no extra install needed
FROM nginx:1.27-otel
# Copy custom configuration
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
The -otel image variants are published for recent official NGINX releases and include the pre-built ngx_otel_module.so; you still enable it with load_module in your nginx.conf. If you need a custom build, clone nginx-otel and compile the module against your NGINX version.
docker-compose.yml
services:
  nginx:
    build: .
    ports:
      - "80:80"
    # Note: ngx_otel_module reads its endpoint from nginx.conf, not from
    # OTEL_* environment variables — point otel_exporter at otel-collector:4317
    depends_on:
      - otel-collector
      - backend

  backend:
    image: your-backend-app:latest
    expose:
      - "8080"

  otel-collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "4317:4317"
      - "4318:4318"
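The compose file mounts otel-collector-config.yaml but the guide doesn't show its contents. A minimal sketch that accepts OTLP on 4317 (gRPC) and 4318 (HTTP) and prints received spans to the collector's stdout — in practice, replace the debug exporter with an otlp exporter pointing at your tracing backend:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  # Prints span summaries to stdout; swap for an otlp exporter in real use
  debug:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```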
Performance Considerations
The ngx_otel_module is optimized for production use with minimal overhead:
- Request latency overhead: 10-15% (vs 50%+ for other solutions)
- Memory overhead: ~2MB per worker process
- CPU overhead: Negligible for typical workloads
Optimization Tips
- Use appropriate sampling: Not all traffic needs tracing

  split_clients "$otel_trace_id" $sampler {
      1% on;   # Sample 1% in high-traffic scenarios
      *  off;
  }

- Adjust batch size: Larger batches reduce export frequency

  otel_exporter {
      endpoint localhost:4317;
      batch_size 1024;   # Increase from default 512
      interval 10s;      # Increase from default 5s
  }

- Selective location tracing: Only trace critical paths

  location /api {
      otel_trace on;   # Trace API calls
      proxy_pass http://api-backend;
  }

  location /static {
      # No tracing for static content
      root /var/www/static;
  }
What is Uptrace?
Uptrace is an OpenTelemetry APM that stores traces, metrics, and logs in a single platform. Try the cloud demo (no login required) or run it locally with Docker.
To send NGINX traces to Uptrace Cloud, configure the exporter with your project DSN:
otel_exporter {
    endpoint https://api.uptrace.dev:4317;
    header uptrace-dsn "https://<secret>@api.uptrace.dev?grpc=4317";
}

otel_service_name "nginx-gateway";
FAQ
Does OpenTelemetry work with NGINX Open Source and NGINX Plus? Yes, the ngx_otel_module supports both NGINX Open Source and NGINX Plus. The module is available as a pre-built package for NGINX Open Source and comes bundled with NGINX Plus subscriptions. Both versions provide the same OpenTelemetry capabilities including distributed tracing, context propagation, and OTLP export.
What's the performance impact of the NGINX OpenTelemetry module? The official ngx_otel_module introduces roughly 10-15% request-latency overhead in NGINX's benchmarks, compared to 50%+ for earlier third-party modules. The low overhead comes from running natively inside the NGINX worker process and batching spans before export, rather than routing through an external agent. For high-traffic scenarios, use sampling to reduce overhead further while maintaining observability coverage.
Can I use OpenTelemetry with NGINX Ingress Controller on Kubernetes? Yes, the NGINX Ingress Controller supports OpenTelemetry through ConfigMap settings or Helm values. Enable it cluster-wide or per-Ingress using annotations. The controller automatically instruments all HTTP/HTTPS traffic passing through the ingress, making it ideal for Kubernetes service mesh observability without sidecar proxies.
How does NGINX propagate trace context to upstream services? NGINX uses the otel_trace_context directive with W3C Trace Context standard. Set it to propagate mode to automatically inject traceparent and tracestate headers into upstream requests. This enables distributed tracing across your entire service architecture, even if upstream services use different instrumentation libraries.
Does the OpenTelemetry module support encrypted HTTPS traffic? Yes, the module captures traces for both HTTP and HTTPS traffic transparently. TLS/SSL termination at NGINX doesn't affect tracing - spans are created after decryption, so you get full visibility into request processing regardless of encryption. TLS configuration for the OTLP exporter is supported via the trusted_certificate directive.
Can I selectively trace specific routes or disable tracing for static content? Yes, use NGINX's location blocks to control tracing granularly. Enable otel_trace on for API endpoints while leaving it off for static content like /static/ or /assets/. You can also use NGINX variables and the map directive to implement complex conditional tracing based on request headers, paths, or other criteria.
What's Next?
With NGINX instrumented, the next step is tracing the backend services it proxies to — so you get end-to-end visibility from the edge to the database:
- Python backends: Django, Flask, FastAPI
- Go backends: OpenTelemetry Go, Gin, Echo
- Node.js backends: Express
- Kubernetes: OpenTelemetry Kubernetes for cluster-wide observability
- Collector: Route NGINX spans through the OpenTelemetry Collector for filtering, sampling, and multi-backend export