Monitor K8s with OpenTelemetry Collector

This guide shows you how to set up comprehensive Kubernetes monitoring using OpenTelemetry Collector for cluster observability and application performance monitoring.

With Kubernetes tracing and the practical example configurations below, you gain deep insight into your containerized applications, pod performance, and overall cluster health.

Why Monitor Kubernetes with OpenTelemetry

OpenTelemetry provides unified observability across your entire Kubernetes stack - from infrastructure metrics to application traces - through a single, vendor-neutral standard. Unlike proprietary monitoring solutions or tool sprawl (Prometheus for metrics, Jaeger for traces, ELK for logs), OpenTelemetry gives you:

Complete Observability in One Platform:

  • Collect metrics, traces, and logs using the same instrumentation
  • Correlate pod restarts with application errors and user requests
  • Track requests across microservices with distributed tracing
  • Export to any backend (Uptrace, Grafana, Datadog) without vendor lock-in

Kubernetes-Native Integration:

  • Automatic enrichment with K8s metadata (pod names, namespaces, labels)
  • Built-in receivers for cluster metrics (k8s_cluster) and pod metrics (kubeletstats)
  • Service account-based authentication following K8s security best practices
  • Deploy as DaemonSet or Deployment using standard Kubernetes patterns

Production-Ready Scalability:

  • Minimal resource overhead (100-200MB RAM per collector)
  • Efficient batching and sampling for high-volume clusters
  • Support for multi-cluster deployments with centralized observability
  • Auto-discovery of pods and services without manual configuration

Whether you're troubleshooting performance issues, monitoring microservices health, or ensuring SLA compliance, OpenTelemetry provides the visibility you need without locking you into a single vendor's ecosystem.

Prerequisites

Before setting up OpenTelemetry Kubernetes monitoring, ensure you have:

  • Running Kubernetes cluster (v1.24+)
  • kubectl access with cluster admin permissions
  • Helm 3.14+ installed

Verify your cluster is ready:

bash
kubectl cluster-info
kubectl get nodes

For production deployments, you have several options to run Uptrace, including self-hosting on Kubernetes. Learn about the available Uptrace editions to find the best fit for your needs.

What is OpenTelemetry Collector?

OpenTelemetry Collector is an agent that pulls telemetry data from the systems you want to monitor and exports it to an OpenTelemetry backend.

The OTel Collector provides powerful data processing capabilities, allowing you to perform aggregation, filtering, sampling, and enrichment of telemetry data. You can transform and reshape the data to fit your specific monitoring and analysis requirements before sending it to the backend systems.
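
For example, here is a minimal sketch of that kind of processing using the filter and probabilistic_sampler processors from the contrib distribution; the attribute name, path, and sampling percentage are illustrative:

yaml
processors:
  # Drop health-check spans before they reach the backend
  filter:
    error_mode: ignore
    traces:
      span:
        - 'attributes["url.path"] == "/healthz"'
  # Keep a 25% sample of the remaining traces
  probabilistic_sampler:
    sampling_percentage: 25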

Installing with Helm

The recommended way to deploy OpenTelemetry Collector in Kubernetes is using the official Helm chart. First, add the OpenTelemetry Helm repository:

bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

Install the collector as a DaemonSet for node-level metrics collection:

bash
helm install otel-collector open-telemetry/opentelemetry-collector \
  --set image.repository=otel/opentelemetry-collector-k8s \
  --set mode=daemonset

For cluster-level metrics, install as a Deployment:

bash
helm install otel-collector-cluster open-telemetry/opentelemetry-collector \
  --set image.repository=otel/opentelemetry-collector-k8s \
  --set mode=deployment \
  --set presets.clusterMetrics.enabled=true \
  --set presets.kubernetesEvents.enabled=true

You can customize the installation by creating a values.yaml file with your configuration and installing with:

bash
helm install otel-collector open-telemetry/opentelemetry-collector -f values.yaml
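
For example, a minimal values.yaml might look like the sketch below, which relies on the chart's built-in presets (adjust the options to your chart version):

yaml
mode: daemonset
image:
  repository: otel/opentelemetry-collector-k8s
presets:
  # Scrape pod and container metrics from each node's kubelet
  kubeletMetrics:
    enabled: true
  # Enrich telemetry with pod, namespace, and deployment metadata
  kubernetesAttributes:
    enabled: true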

See Uptrace Helm charts for production-ready examples.

Authentication & RBAC

OpenTelemetry Kubernetes monitoring requires proper authentication to access the Kubernetes API. The collector uses service accounts with specific RBAC permissions.

Create a service account for the collector:

yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opentelemetry-collector
  namespace: opentelemetry

This service account identifies the collector to the Kubernetes API. On its own it grants nothing; you also need to bind it to a ClusterRole that allows reading the resources the collector monitors.
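
Below is a minimal sketch of such a ClusterRole and binding, covering the resources used in this guide; the exact rules depend on which receivers you enable, so treat it as a starting point rather than a definitive policy:

yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: opentelemetry-collector
rules:
  - apiGroups: ['']
    resources: [pods, namespaces, nodes, events]
    verbs: [get, list, watch]
  - apiGroups: [apps]
    resources: [deployments, replicasets, statefulsets, daemonsets]
    verbs: [get, list, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opentelemetry-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: opentelemetry-collector
subjects:
  - kind: ServiceAccount
    name: opentelemetry-collector
    namespace: opentelemetry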

Monitor K8s Cluster Metrics

Configure the OpenTelemetry Kubernetes Cluster receiver (k8s_cluster) to collect cluster-level observability data. Edit /etc/otel-contrib-collector/config.yaml and fill in your Uptrace DSN:

yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  k8s_cluster:
    auth_type: serviceAccount
    collection_interval: 10s
    node_conditions_to_report: [Ready, MemoryPressure]
    allocatable_types_to_report: [cpu, memory]

exporters:
  otlp:
    endpoint: api.uptrace.dev:4317
    headers: { 'uptrace-dsn': '<FIXME>' }

processors:
  resourcedetection:
    detectors: [env, system, k8snode]
  cumulativetodelta:
  batch:
    timeout: 10s

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp, k8s_cluster]
      processors: [resourcedetection, cumulativetodelta, batch]
      exporters: [otlp]

The Kubernetes cluster receiver collects:

  • Node conditions - Ready status, memory/disk pressure
  • Pod states - Running, pending, failed pods
  • Resource allocation - CPU and memory limits/requests
  • Cluster events - Deployment updates, scaling events

Don't forget to bind RBAC permissions to the service account, as shown in the Authentication & RBAC section above.

See Helm example and official documentation for more details.

Kubernetes Application Monitoring

For comprehensive Kubernetes application monitoring, configure the Kubelet Stats receiver to collect pod and container metrics. The receiver scrapes each node's kubelet API, so first expose the node name to the collector container through the Downward API:

yaml
env:
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName

Configure the receiver to collect kubelet metrics:

yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  kubeletstats:
    auth_type: serviceAccount
    endpoint: 'https://${env:K8S_NODE_NAME}:10250'
    insecure_skip_verify: true
    collection_interval: 20s
    metric_groups: [pod, container, node]

exporters:
  otlp:
    endpoint: api.uptrace.dev:4317
    headers: { 'uptrace-dsn': '<FIXME>' }

processors:
  resourcedetection:
    detectors: [env, system, k8snode]
  cumulativetodelta:
  batch:
    timeout: 10s

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp, kubeletstats]
      processors: [resourcedetection, cumulativetodelta, batch]
      exporters: [otlp]

The Kubelet Stats receiver provides:

  • Container metrics - CPU usage, memory consumption, restart counts
  • Pod metrics - Network I/O, filesystem usage, volume stats
  • Node metrics - System-level performance data

Beyond the base ClusterRole shown earlier, the kubeletstats receiver needs its own read permission on node stats.
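
A sketch of the additional rule, based on the receiver's documented requirements (nodes/proxy is only needed for optional metrics such as request/limit utilization):

yaml
rules:
  - apiGroups: ['']
    resources: ['nodes/stats']
    verbs: ['get']
  # Only required when extra_metadata_labels or *_utilization metrics are enabled
  - apiGroups: ['']
    resources: ['nodes/proxy']
    verbs: ['get']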

Kubernetes Tracing Setup

Enable distributed tracing for applications running in Kubernetes by configuring OTLP receivers:

yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
        cors:
          allowed_origins: ["*"]

processors:
  k8sattributes:
    auth_type: serviceAccount
    passthrough: false
    extract:
      metadata:
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.deployment.name
        - k8s.namespace.name
        - k8s.node.name
        - k8s.pod.start_time

The k8sattributes processor enriches traces with Kubernetes metadata, enabling correlation between application performance and cluster state.
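
The processor only takes effect once it is wired into a pipeline. A minimal sketch, reusing the otlp exporter from the earlier examples:

yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, batch]
      exporters: [otlp]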

Kubernetes Example

This OpenTelemetry Kubernetes example demonstrates how to deploy the collector as both DaemonSet and Deployment for complete coverage:

yaml DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector-daemonset
spec:
  selector:
    matchLabels:
      app: opentelemetry-collector
  template:
    metadata:
      labels:
        app: opentelemetry-collector
    spec:
      serviceAccountName: opentelemetry-collector
      containers:
      - name: otel-collector
        image: otel/opentelemetry-collector-k8s:latest
        env:
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
yaml Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: opentelemetry-cluster-collector
  template:
    metadata:
      labels:
        app: opentelemetry-cluster-collector
    spec:
      serviceAccountName: opentelemetry-collector
      containers:
      - name: otel-collector
        image: otel/opentelemetry-collector-k8s:latest

This dual approach ensures comprehensive Kubernetes monitoring coverage:

  • DaemonSet collects node-level metrics and application traces
  • Deployment handles cluster-level metrics and events

💡 Automation Tip: For automated deployment and management of collectors in Kubernetes, consider using the OpenTelemetry Operator. It simplifies collector lifecycle management through Kubernetes-native CRDs and enables auto-instrumentation for your applications.
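
For illustration, here is a sketch of the Operator's OpenTelemetryCollector resource (the config body is abbreviated, and the debug exporter stands in for a real backend):

yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: daemonset
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]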

Troubleshooting

Common issues when setting up OpenTelemetry Kubernetes monitoring:

Check collector pods status:

bash
kubectl get pods -l app=opentelemetry-collector
kubectl logs -l app=opentelemetry-collector

Verify RBAC permissions:

bash
kubectl auth can-i get nodes --as=system:serviceaccount:opentelemetry:opentelemetry-collector
kubectl auth can-i list pods --as=system:serviceaccount:opentelemetry:opentelemetry-collector

Test API server connectivity:

bash
kubectl exec -it <collector-pod> -- wget -qO- --no-check-certificate https://kubernetes.default.svc/api/v1/nodes
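
Even a 401/403 response confirms the API server is reachable from the pod; a timeout points to a network policy or DNS problem.

Check the collector's own health endpoint (assuming the health_check extension is enabled; 13133 is its default port):

bash
kubectl port-forward <collector-pod> 13133:13133
curl http://localhost:13133/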

Common issues:

  • Missing RBAC permissions for service account
  • Network connectivity issues to kubelet API
  • Incorrect service account configuration

OpenTelemetry Backend

Uptrace is an OpenTelemetry backend that supports distributed tracing, metrics, and logs. You can use it to monitor applications and troubleshoot issues.

Uptrace Overview

Uptrace comes with an intuitive query builder, rich dashboards, alerting rules with notifications, and integrations for most languages and frameworks.

Uptrace can process billions of spans and metrics on a single server and allows you to monitor your applications at 10x lower cost.

In just a few minutes, you can try Uptrace by visiting the cloud demo (no login required) or running it locally with Docker. The source code is available on GitHub.

Available Metrics

When telemetry data reaches Uptrace, it automatically generates Kubernetes dashboards from collected data.

Key metrics include:

  • Node metrics - CPU usage, memory consumption, disk I/O, network throughput
  • Pod metrics - Container resource usage, restart counts, phase transitions
  • Cluster metrics - Node readiness, resource allocation, deployment status
  • Application metrics - Request latency, error rates, throughput, custom business metrics

All metrics are enriched with Kubernetes metadata like namespace, pod name, and deployment labels for precise filtering and correlation.

FAQ

How does OpenTelemetry Kubernetes monitoring compare to Prometheus?
OpenTelemetry provides unified observability (metrics, traces, logs) while Prometheus focuses on metrics only. OTel offers better application correlation and vendor flexibility.

Can I monitor multiple Kubernetes clusters?
Yes, deploy collectors in each cluster with unique cluster identifiers and send data to a central observability backend.
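
For instance, a sketch of tagging each cluster's telemetry with the resource processor (the attribute value is illustrative):

yaml
processors:
  resource:
    attributes:
      - key: k8s.cluster.name
        value: prod-us-east-1
        action: upsert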

What if I need alternatives to Kubernetes?
While this guide focuses on Kubernetes monitoring, you can explore Kubernetes alternatives and apply similar OpenTelemetry monitoring principles.

What's the resource overhead of OpenTelemetry collectors?
Typically 100-200MB memory and 0.1-0.2 CPU cores per collector pod, depending on traffic volume and configuration.

How do I enable auto-instrumentation for applications?
Use the OpenTelemetry Operator to inject instrumentation automatically via annotations on pods and deployments.

What's next?

Kubernetes cluster monitoring is now operational with OpenTelemetry collectors tracking pods, nodes, and services. For containerized application insights, see Docker instrumentation, or add infrastructure monitoring with PostgreSQL and Redis for complete stack visibility. Explore top APM tools for Kubernetes observability.