OpenTelemetry Kubernetes Events Receiver [k8seventsreceiver]

Alexandr Bandurchin
April 05, 2026
11 min read

📋 Part of the OpenTelemetry ecosystem: The Kubernetes Events Receiver is a component of the OpenTelemetry Collector that watches the Kubernetes API and converts cluster events into OpenTelemetry log records. New to OpenTelemetry? Start with What is OpenTelemetry?

When a pod crashes, an image fails to pull, or a node runs out of memory, Kubernetes creates an Event object. These events are the first signal that something is wrong — but by default they're only retained for one hour and disappear silently.

The OpenTelemetry Kubernetes Events Receiver captures every cluster event the moment it happens, converts it into a structured OpenTelemetry log record, and forwards it to your observability backend for long-term storage and alerting.

This guide covers everything from a basic setup to production-grade filtering, RBAC configuration, and integration with the rest of your Kubernetes observability stack.

What are Kubernetes Events?

Kubernetes Events are API objects that record noteworthy occurrences in the cluster — not to be confused with container stdout/stderr logs (those are handled by the Filelog Receiver).

bash
kubectl get events -n default --sort-by='.lastTimestamp'
text
LAST SEEN   TYPE      REASON              OBJECT              MESSAGE
2m          Warning   BackOff             pod/api-7d9f-xk2p   Back-off restarting failed container
5m          Normal    Pulled              pod/api-7d9f-xk2p   Successfully pulled image "app:1.2.3"
8m          Warning   FailedScheduling    pod/worker-abc      0/3 nodes available: insufficient memory
12m         Normal    Scheduled           pod/api-7d9f-xk2p   Successfully assigned default/api to node-2

Events answer questions like:

  • Why did this pod restart 47 times?
  • When exactly did the node become unschedulable?
  • Which image pull failed and why?
  • What happened in the cluster during this outage?

The problem: Kubernetes deletes events after 1 hour by default. The Events Receiver captures them continuously and ships them to long-term storage.

Events vs Container Logs

|           | Kubernetes Events                             | Container Logs                  |
| --------- | --------------------------------------------- | ------------------------------- |
| Source    | Kubernetes API                                | Container stdout/stderr         |
| Content   | Cluster lifecycle (scheduling, restarts, OOM) | Application output              |
| Retention | 1 hour by default                             | Until log rotation              |
| Receiver  | k8s_events                                    | filelog with container operator |
| Access    | Kubernetes API watch                          | File on node (/var/log/pods/)   |

For complete Kubernetes observability you need both — use this guide for cluster events and the Filelog Receiver guide for container logs.

How the Receiver Works

The receiver connects to the Kubernetes API server and opens a watch: a persistent HTTP stream that delivers event updates in real time. On startup it lists existing events, then watches for new ones. If the connection drops, it automatically reconnects and resumes from the last resource version, so no events are missed during brief disconnects.
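This list-then-watch loop can be sketched in Python against a fake in-memory store. FakeEventStore and collect are invented names for illustration only; the real receiver uses the Kubernetes client's watch API:

```python
class FakeEventStore:
    """In-memory stand-in for the API server's events endpoint.
    Hypothetical: the real receiver talks to the Kubernetes API."""

    def __init__(self):
        self.events = []  # each event: (resource_version, reason)

    def add(self, version, reason):
        self.events.append((version, reason))

    def list(self):
        # LIST returns a snapshot plus the latest resource version
        latest = max((v for v, _ in self.events), default=0)
        return list(self.events), latest

    def watch(self, since):
        # WATCH delivers only events newer than the given resource version
        return [(v, r) for v, r in self.events if v > since]


def collect(store, last_version=None):
    """List on the first call, then resume the watch from the saved version."""
    if last_version is None:
        received, last_version = store.list()
    else:
        received = store.watch(last_version)
        if received:
            last_version = max(v for v, _ in received)
    return received, last_version
```

Calling collect once lists every existing event and remembers the latest resource version; after new events appear, calling it again with the saved version returns only the new ones, which is the behavior that makes reconnects lossless.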

Quick Start

📚 Prerequisites: You need the OpenTelemetry Collector running inside the cluster with appropriate RBAC permissions. See RBAC Setup below.

💡 Backend Configuration: Examples in this guide use Uptrace as the observability backend. OpenTelemetry is vendor-neutral — you can send data to Grafana Cloud, Datadog, or any OTLP-compatible platform. See backend examples at the end of this guide.

Minimal configuration:

yaml
receivers:
  k8s_events:
    auth_type: serviceAccount   # Use pod's ServiceAccount (in-cluster)

processors:
  batch:
    timeout: 10s

exporters:
  otlp/uptrace:
    endpoint: api.uptrace.dev:4317
    headers:
      uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'

service:
  pipelines:
    logs:
      receivers: [k8s_events]
      processors: [batch]
      exporters: [otlp/uptrace]

This collects events from all namespaces. For namespace filtering, see Filtering Events.

RBAC Setup

The Collector needs read access to events and the objects they reference (pods, nodes, deployments, etc.) so it can resolve metadata. Create the required resources before deploying the Collector:

yaml
# 1. ServiceAccount for the Collector pod
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector
  namespace: monitoring

---

# 2. ClusterRole — events plus the objects they reference
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-events
rules:
  - apiGroups: ['']
    resources:
      - events
      - namespaces
      - namespaces/status
      - nodes
      - nodes/spec
      - pods
      - pods/status
      - replicationcontrollers
      - replicationcontrollers/status
      - resourcequotas
      - services
    verbs: ['get', 'list', 'watch']
  - apiGroups: ['apps']
    resources:
      - daemonsets
      - deployments
      - replicasets
      - statefulsets
    verbs: ['get', 'list', 'watch']
  - apiGroups: ['extensions']
    resources:
      - daemonsets
      - deployments
      - replicasets
    verbs: ['get', 'list', 'watch']
  - apiGroups: ['batch']
    resources:
      - jobs
      - cronjobs
    verbs: ['get', 'list', 'watch']
  - apiGroups: ['autoscaling']
    resources:
      - horizontalpodautoscalers
    verbs: ['get', 'list', 'watch']

---

# 3. Bind the role to the ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-events
subjects:
  - kind: ServiceAccount
    name: otel-collector
    namespace: monitoring
roleRef:
  kind: ClusterRole
  name: otel-collector-events
  apiGroup: rbac.authorization.k8s.io

Apply and verify:

bash
kubectl apply -f rbac.yaml

# Verify the ServiceAccount can list events
kubectl auth can-i list events --as=system:serviceaccount:monitoring:otel-collector
# Should return: yes

Reference the ServiceAccount in your Collector Deployment:

yaml
spec:
  template:
    spec:
      serviceAccountName: otel-collector
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:latest

Out-of-Cluster (kubeconfig)

For running the Collector outside the cluster (local development, separate VM):

yaml
receivers:
  k8s_events:
    auth_type: kubeConfig
    # Uses ~/.kube/config by default
    # Or specify: kube_config_path: /path/to/kubeconfig

Log Record Structure

Each Kubernetes event becomes an OpenTelemetry log record with these fields:

Log record attributes (per-event metadata):

| Attribute            | Example              | Description                       |
| -------------------- | -------------------- | --------------------------------- |
| k8s.event.reason     | BackOff              | Why the event was generated       |
| k8s.event.action     | pulling              | What action was being performed   |
| k8s.event.count      | 47                   | How many times this event occurred |
| k8s.event.start_time | 2026-04-05T10:00:00Z | When first observed               |
| k8s.event.name       | api-7d9f-xk2p.abc123 | Metadata name of the event object |
| k8s.event.uid        | a1b2c3...            | UID of the event object           |
| k8s.namespace.name   | production           | Namespace of the event            |

Resource attributes (shared across all events from the same involved object):

| Attribute                   | Example              | Description                        |
| --------------------------- | -------------------- | ---------------------------------- |
| k8s.object.kind             | Pod                  | Type of the involved object        |
| k8s.object.name             | api-7d9f-xk2p        | Name of the involved object        |
| k8s.object.uid              | b2c3d4...            | UID of the involved object         |
| k8s.object.fieldpath        | spec.containers{api} | Field path within the object       |
| k8s.object.api_version      | v1                   | API version of the involved object |
| k8s.object.resource_version | 12345                | Resource version                   |

Log record fields:

| Field    | Value                                           |
| -------- | ----------------------------------------------- |
| severity | INFO for Normal events, WARN for Warning events |
| body     | The event message text                          |
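The Normal/Warning mapping above can be sketched as a small lookup using OpenTelemetry's standard severity numbers (INFO = 9, WARN = 13). This is a sketch, not the receiver's actual code, and the INFO fallback for unrecognized types is this example's assumption:

```python
# OpenTelemetry severity numbers: INFO = 9, WARN = 13
SEVERITY_BY_EVENT_TYPE = {
    "Normal": ("INFO", 9),
    "Warning": ("WARN", 13),
}

def to_log_severity(event_type):
    """Map a Kubernetes event type to (severity_text, severity_number).
    Falling back to INFO for unknown types is an assumption of this sketch."""
    return SEVERITY_BY_EVENT_TYPE.get(event_type, ("INFO", 9))
```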

Common Event Reasons Reference

Understanding event reasons lets you build targeted filters and alerts:

| Reason           | Type    | Meaning                                       |
| ---------------- | ------- | --------------------------------------------- |
| Scheduled        | Normal  | Pod assigned to a node                        |
| Pulled           | Normal  | Container image pulled successfully           |
| Started          | Normal  | Container started                             |
| Created          | Normal  | Container created                             |
| BackOff          | Warning | Container restarting (CrashLoopBackOff)       |
| OOMKilling       | Warning | Container killed due to out-of-memory         |
| Failed           | Warning | Image pull failed                             |
| FailedScheduling | Warning | No node available for pod                     |
| Evicted          | Warning | Pod evicted (usually low disk/memory on node) |
| NodeNotReady     | Warning | Node transitioned to not-ready state          |
| Killing          | Normal  | Container being terminated                    |
| Unhealthy        | Warning | Liveness/readiness probe failed               |
| FailedMount      | Warning | Volume mount failed                           |

Filtering Events

By Namespace

Monitor only production and staging:

yaml
receivers:
  k8s_events:
    auth_type: serviceAccount
    namespaces: [production, staging]

By Event Type (Warning Only)

Use the filter processor with OTTL expressions to keep only Warning events and drop Normal noise:

yaml
processors:
  filter/warnings_only:
    error_mode: ignore
    log_conditions:
      - severity_number < SEVERITY_NUMBER_WARN

log_conditions drops any record matching the condition. SEVERITY_NUMBER_WARN = 13 — so this drops INFO (9–12) and below, keeping only WARN and ERROR.
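In plain Python terms, the condition behaves like this (is_dropped is a hypothetical helper mirroring the OTTL expression, not a Collector API):

```python
# OpenTelemetry severity numbers for reference
SEVERITY_NUMBER_INFO = 9
SEVERITY_NUMBER_WARN = 13
SEVERITY_NUMBER_ERROR = 17

def is_dropped(severity_number):
    """Mirror of the OTTL condition: a True result means the record
    matches and is removed by the filter processor."""
    return severity_number < SEVERITY_NUMBER_WARN
```

Here is_dropped(9) is True, so a Normal event is removed, while is_dropped(13) and is_dropped(17) are False, so Warning events and anything more severe pass through.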

Drop Noisy Reasons

Some events fire constantly and add noise. Drop them in the Collector before they reach your backend:

yaml
processors:
  filter/drop_noise:
    error_mode: ignore
    log_conditions:
      - IsMatch(attributes["k8s.event.reason"], "^(Pulled|Created|Started|Scheduled|Killing)$")
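You can sanity-check which reasons the pattern drops before shipping the config. For this simple anchored pattern, Python's re module behaves the same as the Go-style regex OTTL uses:

```python
import re

# Same pattern as the OTTL IsMatch condition above; ^ and $ anchor it
# to whole reason strings, so "OOMKilling" does NOT match "Killing".
DROP_PATTERN = re.compile(r"^(Pulled|Created|Started|Scheduled|Killing)$")

reasons = ["Pulled", "BackOff", "Started", "OOMKilling", "Killing"]
dropped = [r for r in reasons if DROP_PATTERN.match(r)]
# dropped == ["Pulled", "Started", "Killing"]; the Warning reasons survive
```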

Enrichment with Processors

Add Cluster Identifier

When running multiple clusters, tag events with the cluster name:

yaml
processors:
  resource:
    attributes:
      - action: insert
        key: k8s.cluster.name
        value: production-eu-west-1

service:
  pipelines:
    logs:
      receivers: [k8s_events]
      processors: [resource, batch]
      exporters: [otlp/uptrace]

Attribute Transformation

Rename attributes or extract values for easier querying:

yaml
processors:
  attributes/enrich:
    actions:
      - action: insert
        key: alert.team
        from_attribute: k8s.namespace.name  # Use namespace to route alerts

Deployment: Single Collector for Everything

The most common pattern is a single Deployment that collects both cluster events and ships them alongside metrics and traces:

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  namespace: monitoring
spec:
  replicas: 1    # See note on replicas below
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      serviceAccountName: otel-collector
      containers:
      - name: otel-collector
        image: otel/opentelemetry-collector-contrib:latest
        args: ["--config=/etc/otelcol-contrib/config.yaml"]
        volumeMounts:
        - name: config
          mountPath: /etc/otelcol-contrib
      volumes:
      - name: config
        configMap:
          name: otel-collector-config

⚠️ Replica count: Without leader election, run exactly one replica — multiple replicas each open a separate watch stream and produce duplicate events. For high availability, configure the k8s_leader_elector extension so only the elected replica watches events.

🔗 OpenTelemetry Operator: For automated Collector lifecycle management in Kubernetes, consider the OpenTelemetry Operator.

Full config combining events + metrics:

yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
  namespace: monitoring
data:
  config.yaml: |
    receivers:
      k8s_events:
        auth_type: serviceAccount
        namespaces: [production, staging, default]

      # Also collect cluster metrics
      k8s_cluster:
        auth_type: serviceAccount
        collection_interval: 30s

    processors:
      resource:
        attributes:
          - action: insert
            key: k8s.cluster.name
            value: my-cluster
      batch:
        timeout: 10s

    exporters:
      otlp/uptrace:
        endpoint: api.uptrace.dev:4317
        headers:
          uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'

    service:
      pipelines:
        logs:
          receivers: [k8s_events]
          processors: [resource, batch]
          exporters: [otlp/uptrace]
        metrics:
          receivers: [k8s_cluster]
          processors: [resource, batch]
          exporters: [otlp/uptrace]

Real-World Examples

Alert on CrashLoopBackOff

Forward only crash-related events to a high-priority pipeline:

yaml
receivers:
  k8s_events:
    auth_type: serviceAccount

processors:
  filter/crashes:
    error_mode: ignore
    log_conditions:
      - not IsMatch(attributes["k8s.event.reason"], "^(BackOff|OOMKilling|Evicted|Unhealthy)$")
  batch:
    timeout: 5s

exporters:
  otlp/uptrace:
    endpoint: api.uptrace.dev:4317
    headers:
      uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'

service:
  pipelines:
    logs/alerts:
      receivers: [k8s_events]
      processors: [filter/crashes, batch]
      exporters: [otlp/uptrace]

Separate Pipelines: Warnings vs Informational

yaml
processors:
  filter/warnings:
    error_mode: ignore
    log_conditions:
      - severity_number < SEVERITY_NUMBER_WARN   # drops INFO, keeps WARN+
  filter/normal:
    error_mode: ignore
    log_conditions:
      - severity_number >= SEVERITY_NUMBER_WARN  # drops WARN+, keeps INFO

service:
  pipelines:
    logs/warnings:
      receivers: [k8s_events]
      processors: [filter/warnings, resource, batch]
      exporters: [otlp/uptrace]
    logs/normal:
      receivers: [k8s_events]
      processors: [filter/normal, resource, batch]
      exporters: [otlp/uptrace]

Note: A single receiver can fan out to multiple pipelines. The k8s_events receiver only opens one watch connection regardless of how many pipelines consume it.

Combine Events with Pod Logs

Full-picture Kubernetes observability — events and container logs in one pipeline:

yaml
receivers:
  k8s_events:
    auth_type: serviceAccount

  filelog:
    include: [/var/log/pods/*/*/*.log]
    include_file_path: true
    operators:
      - type: container

processors:
  resource:
    attributes:
      - action: insert
        key: k8s.cluster.name
        value: my-cluster
  batch:
    timeout: 10s

exporters:
  otlp/uptrace:
    endpoint: api.uptrace.dev:4317
    headers:
      uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'

service:
  pipelines:
    logs:
      receivers: [k8s_events, filelog]
      processors: [resource, batch]
      exporters: [otlp/uptrace]

Troubleshooting

No Events Appearing

Check 1: RBAC permissions

bash
kubectl auth can-i list events \
  --as=system:serviceaccount:monitoring:otel-collector
# Must return: yes

kubectl auth can-i watch events \
  --as=system:serviceaccount:monitoring:otel-collector
# Must return: yes

# Also verify access to referenced objects (pods, nodes, etc.)
kubectl auth can-i list pods \
  --as=system:serviceaccount:monitoring:otel-collector
# Must return: yes

If no, re-apply the ClusterRoleBinding from RBAC Setup.

Check 2: Collector logs

bash
kubectl logs -n monitoring deployment/otel-collector | grep -i "event\|error\|k8s"

Check 3: Verify events exist in the cluster

bash
kubectl get events --all-namespaces --sort-by='.lastTimestamp' | tail -20

If there are no events, nothing is happening — trigger one:

bash
kubectl run test-pod --image=does-not-exist:latest
kubectl get events --field-selector involvedObject.name=test-pod

Check 4: Debug exporter

yaml
exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [k8s_events]
      exporters: [debug]

Duplicate Events

Cause: Multiple Collector replicas each watching events independently.

Fix: Either use a single replica, or enable leader election so only one replica watches at a time:

yaml
extensions:
  k8s_leader_elector:
    lease_name: otel-k8s-events
    lease_namespace: monitoring

receivers:
  k8s_events:
    auth_type: serviceAccount
    k8s_leader_elector: k8s_leader_elector  # Wire receiver to the extension

service:
  extensions: [k8s_leader_elector]

Events Disappearing After Restart

Cause: The receiver resumes from the last watched resource version, but if the cluster has been running long enough and the version is too old, the API server may reject it.

Behavior: The receiver automatically falls back to listing all current events and continues watching from there. A short gap is expected after restart — this is normal.

Connection Refused to API Server

Cause: The Collector is running outside the cluster but auth_type: serviceAccount is set.

Fix: Use kubeConfig for out-of-cluster deployments:

yaml
receivers:
  k8s_events:
    auth_type: kubeConfig

FAQ

  1. How is this different from tailing Kubernetes pod logs?
    Kubernetes Events record cluster-level lifecycle events (scheduling, restarts, OOM) generated by Kubernetes controllers — not your application's log output. Use the Filelog Receiver for container stdout/stderr and this receiver for cluster events. They complement each other.
  2. Does this receiver miss events if the Collector restarts?
    It minimizes data loss by resuming from the last resource version on reconnect. However, Kubernetes only retains events for 1 hour (default) — if the Collector is down longer than that, old events are gone. For critical environments, keep Collector downtime short: a Deployment restarts failed pods quickly, and the receiver re-lists current events on startup.
  3. Can I watch events from all namespaces without listing them?
    Yes — omitting namespaces entirely watches all namespaces:
    yaml
    receivers:
      k8s_events:
        auth_type: serviceAccount
        # No namespaces field = watch all namespaces
    
  4. Why are Warning events showing as WARN severity but Normal events as INFO?
    This is intentional — the receiver maps the Kubernetes event type field (Normal → INFO, Warning → WARN) to OpenTelemetry severity. This lets you filter by severity in your backend.
  5. Can I run this alongside other receivers in the same Collector?
    Yes, and it's recommended. A single Collector instance can run k8s_events, filelog, k8s_cluster, and others simultaneously. They share the batch processor and exporter.
  6. How do I change the 1-hour event retention in Kubernetes?
    Set --event-ttl on the API server (default: 1h0m0s). For example, --event-ttl=4h. This is a kube-apiserver flag — change it in your cluster's control plane configuration. Note: increasing it puts more load on etcd.
  7. What is k8s.event.count?
    Kubernetes deduplicates repeated events — instead of creating thousands of records for a CrashLoopBackOff, it increments the count field on a single event. The receiver exposes this as k8s.event.count. A count of 47 means that event fired 47 times.
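The count deduplication just described can be sketched as a keyed counter. This is an in-memory illustration only; the real aggregation happens in the Kubernetes event machinery before events reach the API:

```python
def record_event(events, obj, reason, message):
    """Repeated occurrences of the same (object, reason, message) bump a
    single entry's counter instead of creating a new event object.
    The counter is what the receiver exposes as k8s.event.count."""
    key = (obj, reason, message)
    events[key] = events.get(key, 0) + 1
    return events[key]
```

Recording the same BackOff 47 times leaves one entry with a count of 47, not 47 separate event objects.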

Backend Examples

Examples in this guide use Uptrace, but the Collector works with any OTLP-compatible backend:

Uptrace

yaml
exporters:
  otlp/uptrace:
    endpoint: api.uptrace.dev:4317
    headers:
      uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'

Grafana Cloud

yaml
exporters:
  otlp:
    endpoint: otlp-gateway.grafana.net:443
    headers:
      authorization: "Bearer YOUR_GRAFANA_TOKEN"

Jaeger

yaml
exporters:
  otlp:
    endpoint: jaeger-collector:4317
    tls:
      insecure: true

Datadog

yaml
exporters:
  otlp:
    endpoint: trace.agent.datadoghq.com:4317
    headers:
      dd-api-key: "YOUR_DATADOG_API_KEY"

Multiple Backends

yaml
service:
  pipelines:
    logs:
      receivers: [k8s_events]
      processors: [batch]
      exporters: [otlp/uptrace, otlp/datadog]

More backends: See the OpenTelemetry backends guide for a full list of compatible platforms.

What's next?

With the Kubernetes Events Receiver in place, you have full visibility into cluster lifecycle events alongside your application logs and metrics.

Next steps to complete your Kubernetes observability: