What is Distributed Tracing? Concepts & OpenTelemetry Implementation
Distributed tracing is an observability technique that tracks requests as they flow through distributed systems, providing visibility into how different services interact to fulfill user requests. It creates a complete view of a request's journey across microservices, APIs, and databases, recording timing, dependencies, and failures along the way.
With distributed tracing, you can analyze the timing of each operation, monitor logs and errors as they occur in real-time, and identify bottlenecks across your entire system. This technique is particularly valuable in microservices architectures where applications consist of multiple independent services working together.
How Distributed Tracing Works
Modern applications built on microservices or serverless architectures rely on multiple services interacting to fulfill a single user request. This complexity makes it challenging to identify performance bottlenecks, diagnose issues, and analyze overall system behavior.
Distributed tracing addresses these challenges by creating a trace—a representation of a single request's journey through various services and components. Each trace consists of interconnected spans, where each span represents an individual operation within a specific service or component.
When a request enters a service, the trace context propagates with the request through trace headers, allowing downstream services to participate in the same trace. As the request flows through the system, each service generates its own span and updates the trace context with information about the operation's duration, metadata, and relevant context.
Distributed tracing tools use the generated trace data to provide visibility into system behavior, identify performance issues, assist with debugging, and help ensure the reliability and scalability of distributed applications.
Span Kinds
OpenTelemetry defines five span kinds that describe how services interact within a trace:
| Span Kind | Type | When to Use | Common Examples |
|---|---|---|---|
| Server | Synchronous | Handling incoming requests | HTTP server, gRPC server, GraphQL resolvers |
| Client | Synchronous | Making outbound requests | HTTP client, database queries, Redis calls |
| Producer | Asynchronous | Publishing messages (ends when message accepted) | Kafka publish, RabbitMQ send, SQS enqueue |
| Consumer | Asynchronous | Processing messages (from receive to completion) | Kafka consume, background job processing |
| Internal | In-process | Operations within a service (no network calls) | Business logic, calculations, data transform |
Choosing the correct span kind ensures accurate visualization in trace waterfalls and helps backends understand service dependencies.
Getting Started with OpenTelemetry Tracing
The easiest way to get started is to choose an OpenTelemetry APM and follow its documentation. Many vendors offer pre-configured OpenTelemetry distributions that simplify the setup process.
Some vendors, such as Uptrace and SkyWalking, allow you to try their products without creating an account.
Uptrace is an open source APM for OpenTelemetry with an intuitive query builder, rich dashboards, automatic alerts, and integrations for most languages and frameworks. It helps developers and operators gain insight into the latency, errors, and dependencies of their distributed applications, identify performance bottlenecks, debug problems, and optimize overall system performance.
You can get started with Uptrace by downloading a DEB/RPM package or a pre-compiled Go binary.
Core Concepts
Spans
A span represents a unit of work in a trace, such as a remote procedure call (RPC), database query, or in-process function call. Each span contains:
- A span name (operation name)
- A parent span ID (except for root spans)
- A span kind
- Start and end timestamps
- A status indicating success or failure
- Key-value attributes describing the operation
- A timeline of events
- Links to other spans
- A span context that propagates trace ID and other data between services
A trace is a tree of spans showing the path of a request through an application. The root span is the first span in a trace.
Span Names
OpenTelemetry backends use span names and attributes to group similar spans together. To ensure proper grouping, use short, concise names. Keep the total number of unique span names below 1,000 to avoid creating excessive span groups that can degrade performance.
Good span names (short, distinctive, and groupable):
| Span name | Comment |
|---|---|
| `GET /projects/:id` | Route name with parameter placeholders |
| `select_project` | Function name without arguments |
| `SELECT * FROM projects WHERE id = ?` | Database query with placeholders |
Poor span names (contain variable parameters):
| Span name | Comment |
|---|---|
| `GET /projects/42` | Contains variable parameter 42 |
| `select_project(42)` | Contains variable argument 42 |
| `SELECT * FROM projects WHERE id = 42` | Contains variable value 42 |
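To keep names parameterized in practice, record the variable part as an attribute instead of embedding it in the name. A minimal Go sketch, reusing the `tracer`, `attribute`, and `semconv` helpers from the examples later in this guide (the `project.id` attribute key is illustrative):

```go
// Good: one span group per route; the concrete ID becomes an attribute.
ctx, span := tracer.Start(ctx, "GET /projects/:id",
	trace.WithSpanKind(trace.SpanKindServer),
	trace.WithAttributes(
		semconv.HTTPRoute("/projects/:id"),
		attribute.String("project.id", projectID), // illustrative key
	))
defer span.End()

// Bad: "GET /projects/"+projectID creates a new span group per project.
```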
Span Kind
Span kind describes the relationship between spans in a trace and helps systems understand how services interact. It must be one of the following values:
Server
Server spans represent synchronous request handling on the server side. The span covers the time from receiving a request to sending a response.
Common use cases:
- HTTP server request handlers
- gRPC server methods
- GraphQL resolvers
- WebSocket message handlers
Examples:
_, span := tracer.Start(ctx, "handle_request",
trace.WithSpanKind(trace.SpanKindServer),
trace.WithAttributes(
semconv.HTTPMethod("GET"),
semconv.HTTPRoute("/api/users/:id"),
))
defer span.End()
Client
Client spans represent synchronous outbound requests from the client side. The span covers the time from sending a request to receiving a response.
Common use cases:
- HTTP client requests
- gRPC client calls
- Database queries
- Cache operations (Redis, Memcached)
Examples:
_, span := tracer.Start(ctx, "database_query",
trace.WithSpanKind(trace.SpanKindClient),
trace.WithAttributes(
semconv.DBSystemPostgreSQL,
semconv.DBQueryText("SELECT * FROM users WHERE id = ?"),
))
defer span.End()
Producer
Producer spans represent asynchronous message creation and sending operations. The span ends when the message is accepted by the messaging system (not when it's consumed).
Common use cases:
- Publishing to Kafka topics
- Sending messages to RabbitMQ
- Publishing to AWS SQS/SNS
- Enqueueing background jobs
Examples:
_, span := tracer.Start(ctx, "publish_event",
trace.WithSpanKind(trace.SpanKindProducer),
trace.WithAttributes(
semconv.MessagingSystemKafka,
semconv.MessagingDestinationName("user.events"),
))
defer span.End()
Consumer
Consumer spans represent asynchronous message receipt and processing operations. The span covers the time from receiving a message to completing its processing.
Common use cases:
- Consuming from Kafka topics
- Processing messages from RabbitMQ
- Receiving from AWS SQS
- Background job processing
Examples:
_, span := tracer.Start(ctx, "process_message",
trace.WithSpanKind(trace.SpanKindConsumer),
trace.WithAttributes(
semconv.MessagingSystemKafka,
semconv.MessagingOperationProcess,
))
defer span.End()
Internal
Internal spans represent in-process operations that don't involve external services or network calls.
Common use cases:
- Application business logic
- Data transformation functions
- Internal calculations
- In-memory operations
Examples:
_, span := tracer.Start(ctx, "calculate_total",
trace.WithSpanKind(trace.SpanKindInternal),
trace.WithAttributes(
attribute.Int("item_count", len(items)),
))
defer span.End()
Span Kind in Traces: In a typical trace waterfall, you'll see client and server spans paired together (the client span calling a service creates a server span on that service), with internal spans showing work within each service, and producer/consumer spans showing asynchronous message flows.
Status Code
Status code indicates whether an operation succeeded or failed:
- `ok` – Success
- `error` – Failure
- `unset` – Default value, allowing backends to assign the status
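In the Go API, the status is set explicitly with `span.SetStatus` using the `codes` package; a minimal sketch:

```go
import "go.opentelemetry.io/otel/codes"

if err != nil {
	// Mark the span as failed and keep the error details.
	span.RecordError(err)
	span.SetStatus(codes.Error, err.Error())
} else {
	// Mark success explicitly; otherwise the status remains unset.
	span.SetStatus(codes.Ok, "")
}
```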
Attributes
Attributes provide contextual information about spans. For example, an HTTP endpoint might have attributes like `http.method = GET` and `http.route = /projects/:id`.
While you can name attributes freely, use semantic attribute conventions for common operations to ensure consistency across systems.
Events
Events are timestamped annotations that carry their own attributes but have no end time (and therefore no duration). They typically represent exceptions, errors, logs, and messages, though you can create custom events as well.
Context
Span context carries information about a span as it propagates through different components and services. It includes:
- Trace ID: Globally unique identifier for the entire trace (128-bit / 16 bytes, shared by all spans in the trace)
- Span ID: Unique identifier for a specific span within a trace (64-bit / 8 bytes)
- Trace flags: Properties such as sampling status (8-bit field, where `01` = sampled)
- Trace state: Optional vendor-specific or application-specific data
Context maintains continuity and correlation of spans within a distributed system, allowing services to associate their spans with the correct trace and providing end-to-end visibility.
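In Go, the propagated identifiers can be read back from the context, for example to stamp trace and span IDs onto log lines; a minimal sketch:

```go
import (
	"log"

	"go.opentelemetry.io/otel/trace"
)

// Read the span context carried by ctx (set by tracer.Start or by header extraction).
spanCtx := trace.SpanContextFromContext(ctx)
if spanCtx.IsValid() {
	log.Printf("trace_id=%s span_id=%s sampled=%t",
		spanCtx.TraceID(), spanCtx.SpanID(), spanCtx.IsSampled())
}
```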
Span Structure Example
Here's a complete JSON representation of a span showing all key fields:
{
"traceId": "5b8efff798038103d269b633813fc60c",
"spanId": "eee19b7ec3c1b174",
"parentSpanId": "eee19b7ec3c1b173",
"name": "GET /api/users/:id",
"kind": "SERVER",
"startTimeUnixNano": 1704067200000000000,
"endTimeUnixNano": 1704067200150000000,
"attributes": [
{
"key": "http.method",
"value": { "stringValue": "GET" }
},
{
"key": "http.route",
"value": { "stringValue": "/api/users/:id" }
},
{
"key": "http.status_code",
"value": { "intValue": 200 }
},
{
"key": "service.name",
"value": { "stringValue": "user-service" }
}
],
"events": [
{
"timeUnixNano": 1704067200050000000,
"name": "database.query.start",
"attributes": [
{
"key": "db.statement",
"value": { "stringValue": "SELECT * FROM users WHERE id = ?" }
}
]
},
{
"timeUnixNano": 1704067200100000000,
"name": "cache.lookup",
"attributes": [
{
"key": "cache.hit",
"value": { "boolValue": true }
}
]
}
],
"status": {
"code": "STATUS_CODE_OK"
},
"resource": {
"attributes": [
{
"key": "service.name",
"value": { "stringValue": "user-service" }
},
{
"key": "service.version",
"value": { "stringValue": "1.2.3" }
},
{
"key": "host.name",
"value": { "stringValue": "prod-server-01" }
}
]
}
}
This span shows:
- Duration: 150ms (from start to end time)
- Parent relationship: Connected to the parent span via `parentSpanId`
- Attributes: HTTP request details and service information
- Events: Two timestamped events during execution (database query and cache lookup)
- Status: Successful operation
- Resource: Service and host metadata
Context Propagation
Context propagation ensures that trace IDs, span IDs, and other metadata consistently propagate across services and components. OpenTelemetry handles both in-process and distributed propagation.
For a comprehensive guide on context propagation, including W3C TraceContext, propagators, baggage, and troubleshooting broken traces, see the OpenTelemetry Context Propagation guide.
In-Process Propagation
- Implicit: Automatic storage in thread-local variables (Java, Python, Ruby, Node.js)
- Explicit: Manual passing of context as function arguments (Go)
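In Go this explicit style means every traced function should accept a `context.Context` and pass it on; the active span can then be recovered from the context anywhere downstream. A minimal sketch, assuming the same `tracer` as in the other examples:

```go
func parentOperation(ctx context.Context) {
	ctx, span := tracer.Start(ctx, "parent_operation")
	defer span.End()

	doWork(ctx) // pass ctx so the child span is parented correctly
}

func doWork(ctx context.Context) {
	// The active span travels inside ctx and can be retrieved from it.
	trace.SpanFromContext(ctx).AddEvent("doing work")

	_, child := tracer.Start(ctx, "child_operation")
	defer child.End()
}
```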
Distributed Propagation
OpenTelemetry supports several protocols for serializing and passing context data:
- W3C Trace Context (recommended, enabled by default): Uses the `traceparent` header.
  Example: `traceparent=00-84b54e9330faae5350f0dd8673c98146-279fa73bc935cc05-01`
- B3 (Zipkin): Uses headers starting with `x-b3-` (see the sketch below for enabling it).
  Example: `X-B3-TraceId`
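If a peer system expects B3 headers, the global propagator can be swapped or extended; a sketch assuming the contrib B3 propagator package:

```go
import (
	"go.opentelemetry.io/contrib/propagators/b3"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

// Emit and accept W3C traceparent, baggage, and B3 headers.
otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
	propagation.TraceContext{},
	propagation.Baggage{},
	b3.New(),
))
```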
W3C Trace Context Format
The traceparent header contains four fields separated by dashes:
traceparent: 00-5b8efff798038103d269b633813fc60c-eee19b7ec3c1b174-01
- `00` – Version (current W3C standard)
- `5b8efff798038103d269b633813fc60c` – Trace ID (32 hex chars, 16 bytes)
- `eee19b7ec3c1b174` – Parent ID, the span ID of the caller (16 hex chars, 8 bytes)
- `01` – Trace flags (`01` = sampled, `00` = not sampled)
Example HTTP Request with Context:
GET /api/users/123 HTTP/1.1
Host: api.example.com
traceparent: 00-5b8efff798038103d269b633813fc60c-eee19b7ec3c1b174-01
tracestate: uptrace=t61rcWkgMzE
Manual Context Propagation
While instrumentation libraries handle propagation automatically, you may need to manually propagate context for custom protocols or unsupported frameworks.
HTTP Client Example:
import (
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/propagation"
)
// Create a new span
ctx, span := tracer.Start(ctx, "external_api_call")
defer span.End()
// Create HTTP request
req, _ := http.NewRequestWithContext(ctx, "GET", "https://api.example.com/data", nil)
// Inject trace context into request headers
otel.GetTextMapPropagator().Inject(ctx, propagation.HeaderCarrier(req.Header))
// Make the request
resp, err := http.DefaultClient.Do(req)
HTTP Server Example:
import (
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/propagation"
)
func handler(w http.ResponseWriter, r *http.Request) {
// Extract trace context from incoming request headers
ctx := otel.GetTextMapPropagator().Extract(r.Context(),
propagation.HeaderCarrier(r.Header))
// Create span with extracted context
ctx, span := tracer.Start(ctx, "handle_request")
defer span.End()
// Process request with traced context
processRequest(ctx, r)
}
Troubleshooting Context Propagation
Verify headers are present:
# Check whether the traceparent header is present (curl -v writes headers to stderr)
curl -v https://api.example.com/endpoint 2>&1 | grep -i traceparent
Common propagation issues:
- Missing propagator configuration: Ensure the propagator is set globally:

  ```go
  otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
      propagation.TraceContext{},
      propagation.Baggage{},
  ))
  ```

- Custom HTTP client not instrumented: Use an instrumented HTTP client or manually inject context
- Async operations losing context: Explicitly pass context to goroutines/threads:

  ```go
  // ✅ Good: pass context explicitly
  go func(ctx context.Context) {
      _, span := tracer.Start(ctx, "async_work")
      defer span.End()
      // work...
  }(ctx)

  // ❌ Bad: context lost
  go func() {
      _, span := tracer.Start(context.Background(), "async_work")
      defer span.End()
      // work...
  }()
  ```

- Middleware order: Ensure tracing middleware runs before other middleware that creates spans (see the sketch below)
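For HTTP services in Go, the simplest way to get header extraction and the server span created before anything else runs is to wrap the whole handler chain with the otelhttp contrib instrumentation; a sketch (the `usersHandler` name is a placeholder for your own handler):

```go
import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/users/", usersHandler) // placeholder handler

	// Context extraction and the SERVER span happen here,
	// before any other middleware or handler code executes.
	handler := otelhttp.NewHandler(mux, "http.server")

	http.ListenAndServe(":8080", handler)
}
```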
Baggage
Baggage propagates custom key-value pairs between services, similar to span context. It allows you to associate contextual information (such as user IDs or session IDs) with requests or transactions.
Baggage provides a standardized way to pass relevant data throughout the system, enabling better observability and analysis without relying on ad hoc mechanisms or manual instrumentation.
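A minimal Go sketch of writing and reading baggage with the `go.opentelemetry.io/otel/baggage` package (the `user.id` key is illustrative):

```go
import (
	"context"

	"go.opentelemetry.io/otel/baggage"
)

// Upstream service: attach the value before making the outgoing call.
// The configured propagator serializes it into the `baggage` header.
func withUserID(ctx context.Context, userID string) context.Context {
	member, _ := baggage.NewMember("user.id", userID)
	bag, _ := baggage.New(member)
	return baggage.ContextWithBaggage(ctx, bag)
}

// Downstream service: read the value from the extracted context.
func userIDFrom(ctx context.Context) string {
	return baggage.FromContext(ctx).Member("user.id").Value()
}
```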
Instrumentation
OpenTelemetry instrumentations are plugins for popular frameworks and libraries that use the OpenTelemetry API to record important operations such as HTTP requests, database queries, logs, and errors.
What to Instrument
Focus instrumentation efforts on operations that provide the most value:
- Network operations: HTTP requests, RPC calls
- Filesystem operations: Reading and writing files
- Database queries: Combined network and filesystem operations
- Errors and logs: Using structured logging
Manual Instrumentation
While automatic instrumentation covers common frameworks, manual instrumentation gives you fine-grained control over what gets traced. Here are comprehensive examples for creating and managing spans.
Creating Spans
import (
"context"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/trace"
)
// Get tracer (typically done once at startup)
var tracer = otel.Tracer("my-service")
func processOrder(ctx context.Context, orderID string) error {
// Create a span
ctx, span := tracer.Start(ctx, "process_order",
trace.WithSpanKind(trace.SpanKindInternal),
)
defer span.End()
// Add attributes
span.SetAttributes(
attribute.String("order.id", orderID),
attribute.String("customer.tier", "premium"),
)
// Do work
if err := validateOrder(ctx, orderID); err != nil {
// Record error
span.RecordError(err)
span.SetStatus(codes.Error, "order validation failed")
return err
}
// Record event
span.AddEvent("order_validated",
trace.WithAttributes(
attribute.String("validation.result", "success"),
))
span.SetStatus(codes.Ok, "order processed successfully")
return nil
}
Creating Nested Spans
Nested spans show parent-child relationships and help visualize the breakdown of operations.
func processOrder(ctx context.Context, orderID string) error {
ctx, span := tracer.Start(ctx, "process_order")
defer span.End()
// Child span 1: Validate
if err := validateOrder(ctx, orderID); err != nil {
return err
}
// Child span 2: Calculate
total, err := calculateTotal(ctx, orderID)
if err != nil {
return err
}
// Child span 3: Save
return saveOrder(ctx, orderID, total)
}
func validateOrder(ctx context.Context, orderID string) error {
ctx, span := tracer.Start(ctx, "validate_order")
defer span.End()
// Validation logic
return nil
}
func calculateTotal(ctx context.Context, orderID string) (float64, error) {
ctx, span := tracer.Start(ctx, "calculate_total")
defer span.End()
// Calculation logic
return 99.99, nil
}
func saveOrder(ctx context.Context, orderID string, total float64) error {
ctx, span := tracer.Start(ctx, "save_order",
trace.WithSpanKind(trace.SpanKindClient),
)
defer span.End()
span.SetAttributes(
attribute.Float64("order.total", total),
attribute.String("db.system", "postgresql"),
)
// Database save logic
return nil
}
The resulting trace will show:
process_order (200ms)
├── validate_order (50ms)
├── calculate_total (30ms)
└── save_order (120ms)
Adding Semantic Attributes
Use semantic conventions for consistent attribute naming:
import "go.opentelemetry.io/otel/semconv/v1.24.0"
// HTTP attributes
span.SetAttributes(
semconv.HTTPMethod("GET"),
semconv.HTTPRoute("/api/users/:id"),
semconv.HTTPStatusCode(200),
)
// Database attributes
span.SetAttributes(
semconv.DBSystemPostgreSQL,
semconv.DBNamespace("production"),
semconv.DBQueryText("SELECT * FROM users WHERE id = ?"),
)
// Messaging attributes
span.SetAttributes(
semconv.MessagingSystemKafka,
semconv.MessagingDestinationName("user.events"),
)
// RPC attributes
span.SetAttributes(
semconv.RPCSystemGRPC,
semconv.RPCService("UserService"),
semconv.RPCMethod("GetUser"),
)
Recording Events and Errors
Events capture point-in-time occurrences within a span:
// Record a simple event
span.AddEvent("cache_miss")
// Event with attributes
span.AddEvent("retry_attempt",
trace.WithAttributes(
attribute.Int("attempt.number", 3),
attribute.String("retry.reason", "connection_timeout"),
))
// Record an error
if err != nil {
span.RecordError(err,
trace.WithAttributes(
attribute.String("error.type", "ValidationError"),
))
span.SetStatus(codes.Error, err.Error())
}
Best Practices
Initialize Early
Initialize OpenTelemetry before importing libraries that require instrumentation to ensure accurate trace capture.
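A minimal initialization sketch with the OTLP/gRPC exporter and the SDK tracer provider; the service name shown here is a placeholder, and your backend's documentation will have the exact endpoint settings:

```go
import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.24.0"
)

func initTracing(ctx context.Context) (*sdktrace.TracerProvider, error) {
	// OTLP/gRPC exporter; honors OTEL_EXPORTER_OTLP_* env vars, defaults to localhost:4317.
	exporter, err := otlptracegrpc.New(ctx)
	if err != nil {
		return nil, err
	}

	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exporter), // asynchronous batch export
		sdktrace.WithResource(resource.NewWithAttributes(
			semconv.SchemaURL,
			semconv.ServiceName("my-service"), // placeholder
		)),
	)

	// Register globally before any instrumented library asks for a tracer;
	// call tp.Shutdown(ctx) on exit to flush remaining spans.
	otel.SetTracerProvider(tp)
	return tp, nil
}
```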
Balance Automatic and Manual Instrumentation
While automatic instrumentation provides a good starting point, manual instrumentation offers more control for specific scenarios.
Focus on Critical Components
Instrument components critical for performance, reliability, or user experience. Be selective to avoid unnecessary overhead.
Follow Semantic Conventions
Use standardized attribute names, span names, and tags as defined by the OpenTelemetry specification to ensure consistency and interoperability.
Implement Smart Sampling
Consider tail-based sampling to manage trace data volume while capturing critical traces.
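Head-based sampling can be configured on the SDK with a parent-based ratio sampler (tail-based sampling usually lives in the OpenTelemetry Collector instead); a sketch, reusing the `exporter` from the initialization example above:

```go
// Sample roughly 5% of new traces, but always honor the decision
// already recorded in an incoming trace context.
tp := sdktrace.NewTracerProvider(
	sdktrace.WithSampler(sdktrace.ParentBased(sdktrace.TraceIDRatioBased(0.05))),
	sdktrace.WithBatcher(exporter),
)
```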
Troubleshooting
Missing Spans
Problem: Expected spans don't appear in your tracing backend.
Common Causes:
- SDK not initialized before application startup
- Instrumentation libraries misconfigured
- Overly aggressive sampling
- Export endpoint unreachable
Solutions:
- Verify initialization order
- Check auto-instrumentation package installation
- Temporarily set sampling to 100% for debugging
- Test backend connectivity and credentials
- Enable debug logging
Broken Context Propagation
Problem: Spans appear disconnected or traces fragment across services.
Common Causes:
- Context not propagated between services
- Uninstrumented custom protocols
- Async operations breaking context
- Missing trace headers
Solutions:
- Verify HTTP client/server instrumentation
- Manually manage context for custom protocols
- Use explicit context management for async operations
- Confirm trace headers are present in requests
- Configure propagation for all communication protocols
Performance Overhead
Problem: Application performance degrades after enabling tracing.
Common Causes:
- Over-instrumentation
- Synchronous export blocking threads
- Large attributes or excessive events
- High sampling rates
Solutions:
- Use asynchronous batch exporters
- Implement appropriate sampling (1-5% for high-traffic applications)
- Remove unnecessary spans
- Limit attribute sizes
- Consider tail-based sampling
High Cardinality Issues
Problem: Too many unique span names or attribute values cause storage issues.
Common Causes:
- Variable data in span names
- Unlimited attribute values
- Auto-generated unique identifiers
Solutions:
- Use parameterized span names
- Normalize or bucket attribute values
- Follow semantic conventions for naming
Export Failures
Problem: Spans generate but don't reach the backend.
Common Causes:
- Network connectivity issues
- Authentication problems
- Backend unavailability
- Buffer overflow
Solutions:
- Monitor exporter metrics and logs
- Implement retry with exponential backoff
- Verify endpoints and authentication
- Adjust batch size and timeout settings (see the sketch after this list)
- Set up export failure alerts
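Batch size, queue size, and timeouts are tuned on the SDK's batch span processor; a sketch with illustrative values, again reusing `exporter` from the initialization example:

```go
import "time"

tp := sdktrace.NewTracerProvider(
	sdktrace.WithBatcher(exporter,
		sdktrace.WithMaxExportBatchSize(512),       // spans per export request
		sdktrace.WithMaxQueueSize(2048),            // buffered spans before dropping
		sdktrace.WithBatchTimeout(5*time.Second),   // max wait before flushing a batch
		sdktrace.WithExportTimeout(30*time.Second), // per-export deadline
	),
)
```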
Memory Issues
Problem: Memory leaks or high usage.
Common Causes:
- Spans not properly exported
- Data accumulation in buffers
- Long-running spans holding references
Solutions:
- Ensure proper span lifecycle management
- Configure appropriate export intervals
- Review attribute sizes
- Monitor buffer sizes
- Implement resource cleanup
Next Steps
Distributed tracing provides valuable insights for understanding end-to-end application behavior, identifying performance issues, and optimizing system resources.
Explore the OpenTelemetry tracing API for your programming language to begin instrumenting your own applications.