OpenTelemetry Golang gRPC monitoring [otelgrpc]

Vladimir Mihailenco
December 01, 2025
8 min read

OpenTelemetry gRPC instrumentation (otelgrpc) provides automatic tracing and metrics collection for gRPC clients and servers in Go, capturing RPC method details, status codes, and timing information without manual instrumentation.

Quick Setup

| Step | Action | Code/Command |
| --- | --- | --- |
| 1. Install | Install the otelgrpc package | go get go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc |
| 2. Client | Add a StatsHandler to the client | grpc.WithStatsHandler(otelgrpc.NewClientHandler()) |
| 3. Server | Add a StatsHandler to the server | grpc.StatsHandler(otelgrpc.NewServerHandler()) |
| 4. Verify | Check your backend for traces | Traces are collected automatically |

What's collected:

  • Traces: Full RPC call traces with method, service, and status code
  • Metrics: Request duration, message size, call counts
  • Context propagation: Automatic trace context across services via gRPC metadata

Complete Working Example

Below is a full example showing a gRPC server and client instrumented with otelgrpc. It assumes you have a protobuf service defined like this:

protobuf
// greeter.proto
syntax = "proto3";
package greeter;
option go_package = "example/greeter";

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

With generated Go code in greeter/, the instrumented server and client look like this:

go
package main

import (
    "context"
    "fmt"
    "log"
    "net"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
    "go.opentelemetry.io/otel/sdk/resource"
    sdktrace "go.opentelemetry.io/otel/sdk/trace"
    semconv "go.opentelemetry.io/otel/semconv/v1.26.0"

    pb "example/greeter"
)

// Server implementation
type greeterServer struct {
    pb.UnimplementedGreeterServer
}

func (s *greeterServer) SayHello(ctx context.Context, req *pb.HelloRequest) (*pb.HelloReply, error) {
    return &pb.HelloReply{Message: "Hello " + req.GetName()}, nil
}

func main() {
    ctx := context.Background()

    // Initialize the OTLP trace exporter.
    exporter, err := otlptracegrpc.New(ctx)
    if err != nil {
        log.Fatalf("failed to create exporter: %v", err)
    }

    // Create a TracerProvider with the exporter.
    tp := sdktrace.NewTracerProvider(
        sdktrace.WithBatcher(exporter),
        sdktrace.WithResource(resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceName("grpc-example"),
        )),
    )
    defer tp.Shutdown(ctx)
    otel.SetTracerProvider(tp)

    // Start the instrumented gRPC server in a goroutine.
    go func() {
        lis, err := net.Listen("tcp", ":9090")
        if err != nil {
            log.Fatalf("failed to listen: %v", err)
        }

        server := grpc.NewServer(
            grpc.StatsHandler(otelgrpc.NewServerHandler()),
        )
        pb.RegisterGreeterServer(server, &greeterServer{})

        log.Println("gRPC server listening on :9090")
        if err := server.Serve(lis); err != nil {
            log.Fatalf("failed to serve: %v", err)
        }
    }()

    // Give the server a moment to start.
    time.Sleep(100 * time.Millisecond)

    // Create an instrumented gRPC client.
    conn, err := grpc.NewClient("localhost:9090",
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
    )
    if err != nil {
        log.Fatalf("failed to connect: %v", err)
    }
    defer conn.Close()

    client := pb.NewGreeterClient(conn)

    resp, err := client.SayHello(context.Background(), &pb.HelloRequest{Name: "World"})
    if err != nil {
        log.Fatalf("SayHello failed: %v", err)
    }
    fmt.Println(resp.GetMessage())
}

This produces connected client and server spans for every SayHello call, with RPC method, service name, and status code attributes attached automatically.

What is gRPC?

gRPC is a high-performance, cross-platform Remote Procedure Call (RPC) framework originally developed by Google. It uses HTTP/2 for transport and Protocol Buffers for serialization, making it well-suited for microservice communication where low latency and strong typing are important.

gRPC Instrumentation

To install otelgrpc instrumentation:

shell
go get go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc

The otelgrpc package uses the gRPC StatsHandler interface, which provides more accurate telemetry than the older interceptor approach. StatsHandler has access to lower-level transport events and produces better timing data.

Usage

Instrumenting gRPC Client

go
import (
    "crypto/tls"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
    "google.golang.org/grpc/credentials/insecure"
    "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
)

// For insecure connections (development)
conn, err := grpc.NewClient(target,
    grpc.WithTransportCredentials(insecure.NewCredentials()),
    grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
)

// For TLS connections (production)
conn, err := grpc.NewClient(target,
    grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{})),
    grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
)

Note: grpc.WithInsecure() is deprecated. Use grpc.WithTransportCredentials(insecure.NewCredentials()) for insecure connections.

Instrumenting gRPC Server

go
import (
    "google.golang.org/grpc"
    "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
)

server := grpc.NewServer(
    grpc.StatsHandler(otelgrpc.NewServerHandler()),
)

Filtering Methods

Use WithFilter to exclude specific RPC methods from instrumentation. This is useful for health checks and other high-frequency, low-value calls that would otherwise generate noise:

go
import (
    "google.golang.org/grpc"
    "google.golang.org/grpc/stats"
    "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
)

serverHandler := otelgrpc.NewServerHandler(
    otelgrpc.WithFilter(func(info *stats.RPCTagInfo) bool {
        // Return false to exclude a method from tracing.
        return info.FullMethodName != "/grpc.health.v1.Health/Check"
    }),
)

server := grpc.NewServer(
    grpc.StatsHandler(serverHandler),
)

A single filter can exclude multiple methods. For example, exclude both health checks and server reflection:

go
serverHandler := otelgrpc.NewServerHandler(
    otelgrpc.WithFilter(func(info *stats.RPCTagInfo) bool {
        switch info.FullMethodName {
        case "/grpc.health.v1.Health/Check",
            "/grpc.health.v1.Health/Watch",
            "/grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo":
            return false
        }
        return true
    }),
)

Streaming RPCs

The otelgrpc StatsHandler instruments streaming RPCs automatically. Each stream gets its own span, and individual message send/receive events can be recorded within that span.

For server-streaming, client-streaming, or bidirectional-streaming RPCs, the instrumentation creates a span that covers the entire stream lifetime, from open to close:

go
server := grpc.NewServer(
    grpc.StatsHandler(otelgrpc.NewServerHandler(
        otelgrpc.WithMessageEvents(otelgrpc.ReceivedEvents, otelgrpc.SentEvents),
    )),
)

With message events enabled, each message sent or received on the stream is recorded as a span event with the message sequence number and size. This gives you visibility into streaming throughput and message patterns.
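To make this concrete, here is a sketch of a server-streaming handler whose sends show up as span events. It assumes the Greeter service from the earlier proto has been extended with a hypothetical method `rpc SayHelloStream (HelloRequest) returns (stream HelloReply);` — that method is not part of the original example:

go
// Hypothetical server-streaming method on the greeterServer from the
// complete example above. The span opened by otelgrpc covers the whole
// stream, from the first Send to the handler returning.
func (s *greeterServer) SayHelloStream(req *pb.HelloRequest, stream pb.Greeter_SayHelloStreamServer) error {
    for i := 1; i <= 3; i++ {
        // With SentEvents enabled on the server handler, each Send is
        // recorded as a "message" span event with its sequence number.
        reply := &pb.HelloReply{
            Message: fmt.Sprintf("Hello %s #%d", req.GetName(), i),
        }
        if err := stream.Send(reply); err != nil {
            return err
        }
    }
    // Returning nil closes the stream and ends the span.
    return nil
}

Client-streaming and bidirectional handlers behave the same way: the span spans the stream's lifetime, and Recv/Send calls become events when the corresponding event types are enabled.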

Recording Message Events

By default, otelgrpc does not record individual message send/receive events on spans. You can enable them with WithMessageEvents:

go
// Record message send/receive events on client spans
clientHandler := otelgrpc.NewClientHandler(
    otelgrpc.WithMessageEvents(otelgrpc.ReceivedEvents, otelgrpc.SentEvents),
)

// Record only received events on server spans
serverHandler := otelgrpc.NewServerHandler(
    otelgrpc.WithMessageEvents(otelgrpc.ReceivedEvents),
)

Each message event includes message.id (sequence number) and message.uncompressed_size attributes.

Performance note: Message events add overhead for high-throughput services because each message creates an additional span event. For services handling thousands of RPCs per second, consider enabling them selectively or only in staging environments.
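One way to enable message events selectively is to gate them behind an environment variable, so staging can turn them on without a code change. This is a sketch; the variable name OTEL_GRPC_MESSAGE_EVENTS is an assumption for illustration, not a standard OpenTelemetry setting:

go
package main

import (
    "os"

    "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
    "google.golang.org/grpc"
)

// newServer enables message events only when OTEL_GRPC_MESSAGE_EVENTS=true,
// keeping the per-message overhead out of production by default.
func newServer() *grpc.Server {
    var opts []otelgrpc.Option
    if os.Getenv("OTEL_GRPC_MESSAGE_EVENTS") == "true" {
        opts = append(opts,
            otelgrpc.WithMessageEvents(otelgrpc.ReceivedEvents, otelgrpc.SentEvents),
        )
    }
    return grpc.NewServer(
        grpc.StatsHandler(otelgrpc.NewServerHandler(opts...)),
    )
}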

Configuration Options

You can configure the otelgrpc handlers with various options:

go
import (
    "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
)

clientHandler := otelgrpc.NewClientHandler(
    otelgrpc.WithTracerProvider(tracerProvider),
    otelgrpc.WithMeterProvider(meterProvider),
    otelgrpc.WithMessageEvents(otelgrpc.ReceivedEvents),
)

conn, err := grpc.NewClient(target,
    grpc.WithTransportCredentials(insecure.NewCredentials()),
    grpc.WithStatsHandler(clientHandler),
)

Available options:

| Option | Description |
| --- | --- |
| WithTracerProvider() | Use a custom TracerProvider instead of the global one |
| WithMeterProvider() | Use a custom MeterProvider instead of the global one |
| WithMessageEvents() | Record sent/received message events on spans |
| WithMetricAttributes() | Add custom attributes to collected metrics |
| WithFilter() | Exclude specific methods from instrumentation |
| WithPropagators() | Specify custom propagators for context propagation |

Metadata and Baggage

gRPC metadata and OpenTelemetry baggage serve different purposes but both propagate key-value pairs across service boundaries.

gRPC metadata is transport-level. Use it for request-scoped values that your gRPC handlers need to access directly:

go
import "google.golang.org/grpc/metadata"

// Client side: attach metadata to outgoing call
md := metadata.Pairs(
    "request-id", "abc-123",
    "user-id", "user-456",
)
ctx := metadata.NewOutgoingContext(context.Background(), md)

resp, err := client.SayHello(ctx, &pb.HelloRequest{Name: "World"})

go
import "google.golang.org/grpc/metadata"

// Server side: extract metadata from incoming request
if md, ok := metadata.FromIncomingContext(ctx); ok {
    if vals := md.Get("request-id"); len(vals) > 0 {
        fmt.Println("request-id:", vals[0])
    }
}

OpenTelemetry baggage is propagated through the trace context and available to any instrumented service in the call chain:

go
import "go.opentelemetry.io/otel/baggage"

// Read baggage from context (propagated automatically by otelgrpc)
bag := baggage.FromContext(ctx)
val := bag.Member("tenant.id").Value()

Use gRPC metadata when the values are only needed by the immediate caller/callee. Use OpenTelemetry baggage when values need to propagate through an entire distributed trace.
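For the baggage side, here is a minimal sketch of attaching a tenant.id member on the client before making a call. It assumes the Baggage propagator is registered globally (as shown in the Troubleshooting section); the helper name and key are illustrative:

go
import (
    "context"

    "go.opentelemetry.io/otel/baggage"
)

// attachTenant adds a tenant.id baggage member to the context; otelgrpc
// then propagates it to every downstream service in the trace.
func attachTenant(ctx context.Context, tenantID string) (context.Context, error) {
    member, err := baggage.NewMember("tenant.id", tenantID)
    if err != nil {
        return ctx, err
    }
    bag, err := baggage.New(member)
    if err != nil {
        return ctx, err
    }
    return baggage.ContextWithBaggage(ctx, bag), nil
}

Pass the returned context to your gRPC call, and the downstream read shown above (bag.Member("tenant.id").Value()) will see the value.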

Error Handling

The otelgrpc instrumentation automatically maps gRPC status codes to OpenTelemetry span statuses:

  • codes.OK leaves the span status Unset (success)
  • On client spans, any non-OK code (e.g., NotFound, Internal, Unavailable) sets the span status to Error
  • On server spans, only codes indicating a server-side failure (such as Unknown, DeadlineExceeded, Unimplemented, Internal, Unavailable, DataLoss) set the status to Error; caller-side codes like NotFound or InvalidArgument leave it Unset, per the RPC semantic conventions

The gRPC status code is always recorded as the rpc.grpc.status_code attribute, so you can filter and group spans by status in your backend.

If you need to record additional error details within a handler, use the span API directly:

go
import (
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
    "go.opentelemetry.io/otel/trace"
    otelcodes "go.opentelemetry.io/otel/codes"
)

func (s *myServer) GetUser(ctx context.Context, req *pb.GetUserRequest) (*pb.User, error) {
    span := trace.SpanFromContext(ctx)

    user, err := s.db.FindUser(ctx, req.GetId())
    if err != nil {
        // Record the error on the span for extra detail.
        span.RecordError(err)
        span.SetStatus(otelcodes.Error, "database lookup failed")
        return nil, status.Errorf(codes.Internal, "failed to fetch user: %v", err)
    }
    if user == nil {
        return nil, status.Error(codes.NotFound, "user not found")
    }

    return user, nil
}

Collected Metrics

The otelgrpc instrumentation automatically collects the following metrics:

Metrics Summary

| Metric | Type | Description |
| --- | --- | --- |
| rpc.client.duration | Histogram | Duration of outbound RPC calls |
| rpc.client.request.size | Histogram | Size of outbound request messages |
| rpc.client.response.size | Histogram | Size of inbound response messages |
| rpc.client.requests_per_rpc | Histogram | Messages sent per RPC (streaming) |
| rpc.client.responses_per_rpc | Histogram | Messages received per RPC (streaming) |
| rpc.server.duration | Histogram | Duration of inbound RPC calls |
| rpc.server.request.size | Histogram | Size of inbound request messages |
| rpc.server.response.size | Histogram | Size of outbound response messages |
| rpc.server.requests_per_rpc | Histogram | Messages received per RPC (streaming) |
| rpc.server.responses_per_rpc | Histogram | Messages sent per RPC (streaming) |

Common Attributes

All metrics include these attributes from the OpenTelemetry RPC semantic conventions:

| Attribute | Description | Example |
| --- | --- | --- |
| rpc.system | RPC system identifier | grpc |
| rpc.service | Full name of the RPC service | greeter.Greeter |
| rpc.method | Name of the RPC method | SayHello |
| rpc.grpc.status_code | Numeric gRPC status code | 0 (OK) |

Troubleshooting

Deprecated grpc.WithInsecure() Error

Problem: Your tooling (go vet, staticcheck, or your editor) warns that grpc.WithInsecure is deprecated.

Solution: Replace with grpc.WithTransportCredentials(insecure.NewCredentials()):

go
// Old (deprecated)
conn, err := grpc.Dial(target, grpc.WithInsecure())

// New (correct)
import "google.golang.org/grpc/credentials/insecure"

conn, err := grpc.NewClient(target,
    grpc.WithTransportCredentials(insecure.NewCredentials()),
)

Using Deprecated Interceptors

Problem: Old code uses UnaryClientInterceptor() or UnaryServerInterceptor().

Solution: Migrate to StatsHandler-based instrumentation:

go
// Old (deprecated interceptors)
conn, err := grpc.Dial(target,
    grpc.WithUnaryInterceptor(otelgrpc.UnaryClientInterceptor()),
    grpc.WithStreamInterceptor(otelgrpc.StreamClientInterceptor()),
)

// New (StatsHandler)
conn, err := grpc.NewClient(target,
    grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
)

StatsHandler provides better performance and more accurate metrics compared to interceptors.

Missing Trace Context

Problem: Trace context not propagating between client and server.

Solution: Ensure both client and server use otelgrpc instrumentation. The instrumentation uses gRPC metadata to propagate trace context automatically, so no manual context injection is needed:

go
// Client side - automatically injects trace context
conn, err := grpc.NewClient(target,
    grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
)

// Server side - automatically extracts trace context
server := grpc.NewServer(
    grpc.StatsHandler(otelgrpc.NewServerHandler()),
)

If context still does not propagate, verify that you have a global TextMapPropagator set (otelgrpc uses it to inject/extract headers):

go
import "go.opentelemetry.io/otel"
import "go.opentelemetry.io/otel/propagation"

otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
    propagation.TraceContext{},
    propagation.Baggage{},
))

No Metrics Collected

Problem: Traces appear but metrics are missing.

Solution: Make sure you have a MeterProvider configured. Without one, otelgrpc uses a no-op provider that discards all metrics:

go
import (
    "go.opentelemetry.io/otel"
    sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

// Set up a MeterProvider with your exporter
meterProvider := sdkmetric.NewMeterProvider(
    sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exporter)),
)
otel.SetMeterProvider(meterProvider)
defer meterProvider.Shutdown(context.Background())

High Cardinality Warning

Problem: Metric cardinality is too high, causing memory or cost issues.

Solution: Avoid adding high-cardinality attributes (like user IDs or request IDs) to metrics via WithMetricAttributes. Stick to low-cardinality values such as service name, method, and status code. Use span attributes for high-cardinality data instead.
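A safe use of WithMetricAttributes looks like the sketch below: only stable, low-cardinality values are attached, so the number of metric series stays bounded. The attribute values here are assumptions for illustration:

go
import (
    "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
    "go.opentelemetry.io/otel/attribute"
)

// These attributes have a handful of possible values each, so they add
// only a small, fixed multiplier to metric cardinality.
serverHandler := otelgrpc.NewServerHandler(
    otelgrpc.WithMetricAttributes(
        attribute.String("deployment.environment", "production"),
        attribute.String("service.version", "1.4.2"),
    ),
)

By contrast, attaching something like a user ID here would create a separate time series per user, which is exactly the cardinality explosion to avoid.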

What is Uptrace?

Uptrace is an OpenTelemetry APM that supports distributed tracing, metrics, and logs. You can use it to monitor applications and troubleshoot issues.

Uptrace Overview

Uptrace can process billions of spans and metrics on a single server and allows you to monitor your applications at 10x lower cost.

In just a few minutes, you can try Uptrace by visiting the cloud demo (no login required) or running it locally with Docker. The source code is available on GitHub.

What's next?

With otelgrpc in place, your gRPC services have automatic tracing and metrics. Here are some next steps: