OpenTelemetry Context Propagation: W3C TraceContext & Troubleshooting Guide
Context propagation is the mechanism that enables distributed tracing by passing trace information (trace IDs, span IDs, and other metadata) across service boundaries. Without proper context propagation, traces fragment into disconnected spans, making it impossible to track requests through microservices architectures.
OpenTelemetry implements context propagation through standardized protocols, primarily W3C Trace Context, ensuring trace continuity across services regardless of programming language or framework.
How Context Propagation Works
When a request flows through a distributed system, each service needs to know which trace it belongs to. Context propagation solves this by:
- Serializing trace context into a standard format (headers)
- Injecting the serialized context into outgoing requests
- Extracting context from incoming requests
- Deserializing the context back into usable trace information
Without context propagation, each service would create an independent trace, losing the connection between related operations.
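As a minimal sketch of this inject/extract cycle (illustration only; real instrumentation performs the same round trip against HTTP headers rather than a bare map), the following Go snippet serializes the active context into a map and rebuilds it on the other side:
import (
    "context"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/propagation"
)

// roundTrip injects the active trace context into a plain map and
// extracts it back - the same cycle HTTP instrumentation performs
// against request headers.
func roundTrip(ctx context.Context) context.Context {
    carrier := propagation.MapCarrier{}
    otel.GetTextMapPropagator().Inject(ctx, carrier)
    // carrier now holds the serialized context, e.g. a traceparent entry
    return otel.GetTextMapPropagator().Extract(context.Background(), carrier)
}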
W3C Trace Context
W3C Trace Context is the recommended standard for propagating trace information across services. It defines two HTTP headers:
traceparent Header
The traceparent header carries the essential trace context information:
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
This header contains four fields separated by dashes:
| Field | Example | Description |
|---|---|---|
| version | 00 | Protocol version (currently always 00) |
| trace-id | 0af7651916cd43dd8448eb211c80319c | 128-bit trace identifier (32 hex characters) |
| parent-id | b7ad6b7169203331 | 64-bit span identifier (16 hex characters) |
| trace-flags | 01 | 8-bit flags (01 = sampled, 00 = not sampled) |
Breaking down the fields:
- Version: Future-proofs the protocol by indicating which version of the specification is being used
- Trace ID: Globally unique identifier shared by all spans in a single trace
- Parent ID: The span ID from the calling service; it becomes the parent span ID of the new span created by the receiving service
- Trace Flags: Indicates whether the trace is sampled (should be recorded) or not
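To make the layout concrete, here is a minimal parsing sketch (illustration only; the SDK's TraceContext propagator performs full validation for you):
import (
    "fmt"
    "strings"
)

// parseTraceparent splits a traceparent value into its four fields.
func parseTraceparent(h string) (version, traceID, parentID, flags string, err error) {
    parts := strings.Split(h, "-")
    if len(parts) != 4 || len(parts[1]) != 32 || len(parts[2]) != 16 {
        return "", "", "", "", fmt.Errorf("malformed traceparent: %q", h)
    }
    return parts[0], parts[1], parts[2], parts[3], nil
}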
tracestate Header
The tracestate header carries vendor-specific trace information:
tracestate: congo=t61rcWkgMzE,rojo=00f067aa0ba902b7
This header:
- Allows multiple vendors to add their own key-value pairs
- Separates entries with commas
- Requires keys to be lowercase
- Carries at most 32 entries
- Supports vendor-specific features such as additional sampling information
Example with both headers:
GET /api/users/123 HTTP/1.1
Host: api.example.com
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
tracestate: uptrace=t61rcWkgMzE,other=value123
Propagators
Propagators are responsible for serializing and deserializing context across process boundaries. OpenTelemetry supports multiple propagator formats for compatibility with different systems.
Built-in Propagators
TraceContext Propagator
The default W3C Trace Context propagator (recommended):
import (
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/propagation"
)
// Set W3C Trace Context as the global propagator
otel.SetTextMapPropagator(
propagation.TraceContext{},
)
Baggage Propagator
Propagates baggage key-value pairs across services:
import (
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/propagation"
)
// Add baggage propagation
otel.SetTextMapPropagator(
propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{},
propagation.Baggage{},
),
)
B3 Propagator
Legacy Zipkin B3 format for backward compatibility:
import (
"go.opentelemetry.io/contrib/propagators/b3"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/propagation"
)
// Use B3 propagation (legacy Zipkin format)
otel.SetTextMapPropagator(b3.New())
// Or combine with W3C for compatibility
otel.SetTextMapPropagator(
propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{},
b3.New(),
),
)
B3 Header Format:
X-B3-TraceId: 0af7651916cd43dd8448eb211c80319c
X-B3-SpanId: b7ad6b7169203331
X-B3-Sampled: 1
X-B3-ParentSpanId: 00f067aa0ba902b7
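B3 also defines a single-header encoding (b3: {trace-id}-{span-id}-{sampled}). If your Zipkin systems expect it, the contrib propagator can be configured to inject that form instead; a sketch, assuming the same go.opentelemetry.io/contrib/propagators/b3 package as above:
import (
    "go.opentelemetry.io/contrib/propagators/b3"
    "go.opentelemetry.io/otel"
)

// Inject the single "b3" header instead of the X-B3-* multi-header form.
otel.SetTextMapPropagator(
    b3.New(b3.WithInjectEncoding(b3.B3SingleHeader)),
)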
Choosing a Propagator
| Propagator | Use When |
|---|---|
| W3C Trace Context | Default choice for new systems |
| W3C + Baggage | Need to pass custom context data |
| B3 | Integrating with legacy Zipkin systems |
| Composite | Supporting multiple propagation formats |
Manual Context Propagation
While most instrumentation libraries handle propagation automatically, you may need manual propagation for:
- Custom protocols (WebSocket, gRPC streams)
- Message queues
- Unsupported frameworks
- Custom middleware
HTTP Client Injection
Inject trace context into outgoing HTTP requests:
import (
"context"
"net/http"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/propagation"
)
func makeRequest(ctx context.Context, url string) error {
// Create a span for this operation
ctx, span := tracer.Start(ctx, "http_request")
defer span.End()
// Create HTTP request
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return err
}
// Inject trace context into request headers
otel.GetTextMapPropagator().Inject(ctx, propagation.HeaderCarrier(req.Header))
// Make the request
resp, err := http.DefaultClient.Do(req)
if err != nil {
span.RecordError(err)
return err
}
defer resp.Body.Close()
return nil
}
HTTP Server Extraction
Extract trace context from incoming HTTP requests:
import (
"net/http"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
)
func handler(w http.ResponseWriter, r *http.Request) {
// Extract trace context from incoming request headers
ctx := otel.GetTextMapPropagator().Extract(r.Context(),
propagation.HeaderCarrier(r.Header))
// Create span with extracted context
ctx, span := tracer.Start(ctx, "handle_request",
trace.WithSpanKind(trace.SpanKindServer),
)
defer span.End()
// Process request with traced context
result := processRequest(ctx, r)
w.WriteHeader(http.StatusOK)
w.Write([]byte(result))
}
Message Queue Propagation
Propagate context through message queues:
import (
"context"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
)
// Producer: Inject context into message headers
func publishMessage(ctx context.Context, topic string, payload []byte) error {
ctx, span := tracer.Start(ctx, "publish_message",
trace.WithSpanKind(trace.SpanKindProducer),
)
defer span.End()
headers := make(map[string]string)
// Inject trace context into message headers
otel.GetTextMapPropagator().Inject(ctx, propagation.MapCarrier(headers))
// Publish message with headers
return kafka.Publish(topic, payload, headers)
}
// Consumer: Extract context from message headers
func consumeMessage(msg *kafka.Message) error {
// Extract trace context from message headers
ctx := otel.GetTextMapPropagator().Extract(context.Background(),
propagation.MapCarrier(msg.Headers))
ctx, span := tracer.Start(ctx, "consume_message",
trace.WithSpanKind(trace.SpanKindConsumer),
)
defer span.End()
return processMessage(ctx, msg.Payload)
}
Baggage
Baggage is a context propagation mechanism for distributing arbitrary key-value pairs alongside trace context. Unlike span attributes (which only exist within a single span), baggage propagates across service boundaries.
Use Cases
Baggage is useful for:
- User identification: Propagate user ID, tenant ID, or session ID
- Feature flags: Pass feature toggle states across services
- Request metadata: Carry custom request properties (API version, client type)
- Business context: Propagate order ID, transaction ID, or other domain identifiers in microservices monitoring
Working with Baggage
import (
"context"
"net/http"
"go.opentelemetry.io/otel/baggage"
)
// Set baggage values
func setUserContext(ctx context.Context, userID, tier string) context.Context {
member1, _ := baggage.NewMember("user.id", userID)
member2, _ := baggage.NewMember("user.tier", tier)
bag, _ := baggage.New(member1, member2)
return baggage.ContextWithBaggage(ctx, bag)
}
// Retrieve baggage values
func getUserTier(ctx context.Context) string {
bag := baggage.FromContext(ctx)
return bag.Member("user.tier").Value()
}
// Use in a handler
func handler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Set baggage
ctx = setUserContext(ctx, "user-123", "premium")
// Baggage automatically propagates to downstream services
callDownstreamService(ctx)
}
Baggage Best Practices
- Keep it small: Baggage is transmitted with every request
  - Limit to essential data only
  - Avoid large values or many keys
  - Consider network overhead
- Sensitive data: Never put secrets or PII in baggage
  - Baggage may be logged or exposed
  - Use encryption if necessary
  - Consider privacy regulations
- Naming conventions: Use namespaced keys such as user.id or request.client_type; avoid generic names like id or type
- Size limits: The W3C Baggage spec requires implementations to support at least 64 entries, 4096 bytes per entry, and 8192 bytes of total baggage; treat these as your upper bounds (see the sketch below)
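One way to act on the size guidance is to check the encoded baggage before adding more members; a hedged sketch (the 8192-byte budget mirrors the spec numbers above):
import (
    "context"

    "go.opentelemetry.io/otel/baggage"
)

// withinBaggageBudget reports whether the encoded baggage in ctx is
// still under the spec's 8192-byte total.
func withinBaggageBudget(ctx context.Context) bool {
    // Baggage.String() returns the header-encoded form,
    // e.g. "user.id=user-123,user.tier=premium".
    return len(baggage.FromContext(ctx).String()) <= 8192
}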
Troubleshooting Broken Traces
When traces don't connect properly across services in your observability setup, follow this systematic approach:
Symptom: Disconnected Spans
Problem: Spans appear in the tracing backend but aren't connected into a single trace.
Diagnosis:
- Verify headers are present:
# Check if traceparent header is being sent
curl -v http://your-service/endpoint 2>&1 | grep -i traceparent
# Or use this to inspect all headers
curl -v http://your-service/endpoint 2>&1 | grep -i "^> "
- Check propagator configuration:
// Add debug logging
import (
"fmt"
"go.opentelemetry.io/otel"
)
propagator := otel.GetTextMapPropagator()
fmt.Printf("Configured propagator: %T\n", propagator)
// Verify it's set to W3C Trace Context
// With propagation.TraceContext{} set, this prints: propagation.TraceContext
- Verify extraction/injection:
// Add logging to verify injection
func makeRequest(ctx context.Context, url string) {
req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
otel.GetTextMapPropagator().Inject(ctx, propagation.HeaderCarrier(req.Header))
// Log headers to verify injection
fmt.Printf("Outgoing headers: %v\n", req.Header)
// Should see: traceparent: [00-...]
}
// Add logging to verify extraction
func handler(w http.ResponseWriter, r *http.Request) {
// Log incoming headers
fmt.Printf("Incoming headers: %v\n", r.Header)
ctx := otel.GetTextMapPropagator().Extract(r.Context(),
propagation.HeaderCarrier(r.Header))
span := trace.SpanFromContext(ctx)
fmt.Printf("Extracted span context: %v\n", span.SpanContext())
}
Symptom: Missing traceparent Header
Common Causes:
- Propagator not configured globally:
// ❌ Bad: Propagator not set
// OpenTelemetry SDK initialized but propagator never configured
// ✅ Good: Set propagator globally
otel.SetTextMapPropagator(
propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{},
propagation.Baggage{},
),
)
- Custom HTTP client not instrumented:
// ❌ Bad: Using raw HTTP client
client := &http.Client{}
resp, err := client.Do(req)
// ✅ Good: Use instrumented client
import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
client := &http.Client{
Transport: otelhttp.NewTransport(http.DefaultTransport),
}
- Middleware order issues:
# ❌ Bad: Tracing middleware runs after request processing
app.middleware('http')(other_middleware)
app.middleware('http')(tracing_middleware)
# ✅ Good: Tracing middleware runs first
app.middleware('http')(tracing_middleware)
app.middleware('http')(other_middleware)
Symptom: Context Lost in Async Operations
Problem: Traces break when using goroutines, threads, or async operations.
Diagnosis and Fix:
// ❌ Bad: Context lost in goroutine
go func() {
// This creates a new trace, not a child span
ctx, span := tracer.Start(context.Background(), "async_work")
defer span.End()
doWork(ctx)
}()
// ✅ Good: Pass context explicitly
go func(ctx context.Context) {
// This creates a child span of the parent trace
ctx, span := tracer.Start(ctx, "async_work")
defer span.End()
doWork(ctx)
}(ctx)
Symptom: Traces Break at Service Boundaries
Problem: Traces work within services but break between services.
Diagnosis Checklist:
- Verify both services use compatible propagators:
- Both should use W3C Trace Context
- Or both should support the same legacy format (B3)
- Check HTTP client instrumentation:
# Enable debug logging to see if headers are sent
export OTEL_LOG_LEVEL=debug
# Look for lines like:
# "Injecting context into headers: traceparent=00-..."
- Verify header passthrough in proxies/gateways:
# Nginx: Ensure headers are passed through
proxy_pass_request_headers on;
proxy_set_header traceparent $http_traceparent;
proxy_set_header tracestate $http_tracestate;
For Kubernetes environments, see OpenTelemetry Kubernetes monitoring for proxy configuration.
- Check for header filtering:
// Some HTTP libraries filter "unsafe" headers
// Ensure traceparent and tracestate are allowed
// AWS API Gateway example - requires explicit configuration
// to forward traceparent headers
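Browser clients are a common instance of this: CORS preflight blocks custom headers unless the server allowlists them. A minimal sketch of a Go middleware that permits the two W3C headers (the middleware wiring is illustrative):
import "net/http"

// allowTraceHeaders allowlists the W3C trace headers so browser
// clients can send them across origins.
func allowTraceHeaders(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Access-Control-Allow-Headers", "traceparent, tracestate, content-type")
        if r.Method == http.MethodOptions {
            w.WriteHeader(http.StatusNoContent)
            return
        }
        next.ServeHTTP(w, r)
    })
}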
Symptom: Inconsistent Trace IDs
Problem: Same request shows different trace IDs in different services.
Root Causes:
- Multiple propagators creating conflicts:
// ❌ Bad: Different propagators in different services
// Service A uses W3C Trace Context
// Service B uses B3 propagator
// Result: Incompatible, creates new traces
// ✅ Good: Use composite propagator in both
otel.SetTextMapPropagator(
propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{}, // Primary format
b3.New(), // Backward compatibility
),
)
- Service creating new trace instead of continuing:
# ❌ Bad: Creating new trace
@app.route('/api/endpoint')
def handler():
    # This creates a new trace, ignoring incoming context
    with tracer.start_as_current_span("handler"):
        process()

# ✅ Good: Extract and use incoming context
# (extract comes from opentelemetry.propagate)
@app.route('/api/endpoint')
def handler():
    ctx = extract(request.headers)
    with tracer.start_as_current_span("handler", context=ctx):
        process()
Debug Tools
For comprehensive troubleshooting in polyglot microservices, use these debug tools:
OpenTelemetry Debug Logging:
Configure OpenTelemetry environment variables to enable detailed logging:
# Enable debug logging (see OpenTelemetry env vars guide for more options)
export OTEL_LOG_LEVEL=debug
# Python specific
export OTEL_PYTHON_LOG_LEVEL=debug
# Go: the Go SDK has no log-level env var; configure logging in code (see the sketch below)
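For Go, one common approach (a sketch, assuming the github.com/go-logr/stdr adapter) is to hand the SDK a logr logger with raised verbosity:
import (
    "log"
    "os"

    "github.com/go-logr/stdr"
    "go.opentelemetry.io/otel"
)

func initDebugLogging() {
    // Route OpenTelemetry SDK internal logs to stderr at high verbosity.
    stdr.SetVerbosity(8)
    otel.SetLogger(stdr.New(log.New(os.Stderr, "otel: ", log.LstdFlags)))
}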
HTTP Header Inspection Tools:
# tcpdump to inspect HTTP headers
sudo tcpdump -i any -A 'tcp port 80' | grep -A 10 traceparent
# mitmproxy for HTTPS inspection
mitmproxy --mode reverse:http://your-service:8080 --showhost
# curl with verbose output
curl -v -H "traceparent: 00-12345678901234567890123456789012-1234567890123456-01" \
http://your-service/endpoint
Best Practices
1. Always Use Composite Propagators
Support multiple formats for maximum compatibility across your distributed tracing tools:
otel.SetTextMapPropagator(
propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{}, // W3C standard
propagation.Baggage{}, // Baggage support
),
)
2. Initialize Propagators Early
Set up propagation before any HTTP clients or servers start:
func main() {
// Initialize OpenTelemetry first
initTracing()
// Then start your application
startServer()
}
3. Use Instrumentation Libraries
Prefer auto-instrumentation over manual propagation:
- HTTP: Use framework-specific OpenTelemetry middleware (Express, Flask, Gin)
- gRPC: Use OpenTelemetry gRPC interceptors
- Message queues: Use instrumented client libraries (Kafka, RabbitMQ)
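For plain net/http in Go, the otelhttp contrib package covers both sides of the connection; a minimal sketch (handler and route names are illustrative):
import (
    "log"
    "net/http"

    "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

// client injects trace context into every outgoing request.
var client = &http.Client{Transport: otelhttp.NewTransport(http.DefaultTransport)}

func handleUsers(w http.ResponseWriter, r *http.Request) {
    // Downstream calls made with client carry the traceparent header.
    w.Write([]byte("ok"))
}

func main() {
    // Wrap the handler so incoming traceparent headers are extracted
    // and a server span is started automatically.
    http.Handle("/api/users", otelhttp.NewHandler(http.HandlerFunc(handleUsers), "users"))
    log.Fatal(http.ListenAndServe(":8080", nil))
}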
4. Test Context Propagation
Add tests to verify propagation works:
import (
    "context"
    "net/http/httptest"
    "strings"
    "testing"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/propagation"
)

func TestContextPropagation(t *testing.T) {
    // Create a span
    ctx, span := tracer.Start(context.Background(), "test")
    defer span.End()
    // Create request and inject context
    req := httptest.NewRequest("GET", "/test", nil)
    otel.GetTextMapPropagator().Inject(ctx, propagation.HeaderCarrier(req.Header))
    // Verify traceparent header exists
    traceparent := req.Header.Get("traceparent")
    if traceparent == "" {
        t.Error("traceparent header not set")
    }
    // Verify trace ID matches
    if !strings.Contains(traceparent, span.SpanContext().TraceID().String()) {
        t.Error("trace ID mismatch")
    }
}
5. Monitor Propagation Health
Track metrics to detect propagation issues (a counter sketch follows this list):
- Percentage of traces with single span (broken propagation)
- Services reporting orphaned spans
- Mismatched trace ID counts between services
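One way to feed these signals (a hedged sketch using the OTel metrics API; the counter name is an assumption) is to count incoming requests that arrive without a valid remote parent:
import (
    "net/http"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/propagation"
    "go.opentelemetry.io/otel/trace"
)

var (
    meter      = otel.Meter("propagation-health")
    orphans, _ = meter.Int64Counter("requests.without_parent") // name is illustrative
)

// recordPropagationHealth counts requests whose headers carried no
// valid remote trace context - a rising counter points at broken
// propagation in upstream callers.
func recordPropagationHealth(r *http.Request) {
    ctx := otel.GetTextMapPropagator().Extract(r.Context(),
        propagation.HeaderCarrier(r.Header))
    if !trace.SpanContextFromContext(ctx).IsRemote() {
        orphans.Add(ctx, 1)
    }
}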
Next Steps
Context propagation is fundamental to distributed tracing. Once you have solid context propagation:
- Learn about OpenTelemetry sampling strategies
- Explore OpenTelemetry architecture
- Set up OpenTelemetry Collector for advanced routing
- Review OpenTelemetry APM tools for trace visualization
- Implement structured logging to correlate logs with traces
For framework-specific guides:
- Kubernetes monitoring with context propagation
- Docker tracing setup
- Spring Boot Monitoring setup