OpenTelemetry Kubernetes Events Receiver [k8seventsreceiver]
📋 Part of the OpenTelemetry ecosystem: The Kubernetes Events Receiver is a component of the OpenTelemetry Collector that watches the Kubernetes API and converts cluster events into OpenTelemetry log records. New to OpenTelemetry? Start with What is OpenTelemetry?
When a pod crashes, an image fails to pull, or a node runs out of memory, Kubernetes creates an Event object. These events are the first signal that something is wrong — but by default they're only retained for one hour and disappear silently.
The OpenTelemetry Kubernetes Events Receiver captures every cluster event the moment it happens, converts it into a structured OpenTelemetry log record, and forwards it to your observability backend for long-term storage and alerting.
This guide covers everything from a basic setup to production-grade filtering, RBAC configuration, and integration with the rest of your Kubernetes observability stack.
What are Kubernetes Events?
Kubernetes Events are API objects that record noteworthy occurrences in the cluster — not to be confused with container stdout/stderr logs (those are handled by the Filelog Receiver).
kubectl get events -n default --sort-by='.lastTimestamp'
LAST SEEN TYPE REASON OBJECT MESSAGE
2m Warning BackOff pod/api-7d9f-xk2p Back-off restarting failed container
5m Normal Pulled pod/api-7d9f-xk2p Successfully pulled image "app:1.2.3"
8m Warning FailedScheduling pod/worker-abc 0/3 nodes available: insufficient memory
12m Normal Scheduled pod/api-7d9f-xk2p Successfully assigned default/api to node-2
Events answer questions like:
- Why did this pod restart 47 times?
- When exactly did the node become unschedulable?
- Which image pull failed and why?
- What happened in the cluster during this outage?
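Before wiring up a pipeline, you can answer these questions ad hoc with kubectl field selectors (the pod name below is a placeholder):

```shell
# Events for one pod: why did it restart?
kubectl get events --field-selector involvedObject.name=api-7d9f-xk2p

# Only Warning events, cluster-wide, newest last
kubectl get events --all-namespaces --field-selector type=Warning \
  --sort-by='.lastTimestamp'
```

This works for spot checks, but only within the retention window described next.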
The problem: Kubernetes deletes events after 1 hour by default. The Events Receiver captures them continuously and ships them to long-term storage.
Events vs Container Logs
| | Kubernetes Events | Container Logs |
|---|---|---|
| Source | Kubernetes API | Container stdout/stderr |
| Content | Cluster lifecycle (scheduling, restarts, OOM) | Application output |
| Retention | 1 hour by default | Until log rotation |
| Receiver | k8s_events | filelog with container operator |
| Access | Kubernetes API watch | File on node (/var/log/pods/) |
For complete Kubernetes observability you need both — use this guide for cluster events and the Filelog Receiver guide for container logs.
How the Receiver Works
The receiver connects to the Kubernetes API server and opens a watch — a persistent HTTP stream that delivers event updates in real time:
On startup it lists existing events, then watches for new ones. If the connection drops, it automatically reconnects and resumes from the last resource version, so events are rarely missed even across brief disruptions.
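You can approximate the receiver's behavior from the command line; `kubectl get --watch` performs the same list-then-watch sequence against the API server:

```shell
# List current events, then block and stream new ones as they arrive
kubectl get events --all-namespaces --watch
```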
Quick Start
📚 Prerequisites: You need the OpenTelemetry Collector running inside the cluster with appropriate RBAC permissions. See RBAC Setup below.
💡 Backend Configuration: Examples in this guide use Uptrace as the observability backend. OpenTelemetry is vendor-neutral — you can send data to Grafana Cloud, Datadog, or any OTLP-compatible platform. See backend examples at the end of this guide.
Minimal configuration:
receivers:
k8s_events:
auth_type: serviceAccount # Use pod's ServiceAccount (in-cluster)
processors:
batch:
timeout: 10s
exporters:
otlp/uptrace:
endpoint: api.uptrace.dev:4317
headers:
uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'
service:
pipelines:
logs:
receivers: [k8s_events]
processors: [batch]
exporters: [otlp/uptrace]
This collects events from all namespaces. For namespace filtering, see Filtering Events.
RBAC Setup
The Collector needs read access to events and the objects they reference (pods, nodes, deployments, etc.) so it can resolve metadata. Create the required resources before deploying the Collector:
# 1. ServiceAccount for the Collector pod
apiVersion: v1
kind: ServiceAccount
metadata:
name: otel-collector
namespace: monitoring
---
# 2. ClusterRole — events plus the objects they reference
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: otel-collector-events
rules:
- apiGroups: ['']
resources:
- events
- namespaces
- namespaces/status
- nodes
- nodes/spec
- pods
- pods/status
- replicationcontrollers
- replicationcontrollers/status
- resourcequotas
- services
verbs: ['get', 'list', 'watch']
- apiGroups: ['apps']
resources:
- daemonsets
- deployments
- replicasets
- statefulsets
verbs: ['get', 'list', 'watch']
- apiGroups: ['extensions']
resources:
- daemonsets
- deployments
- replicasets
verbs: ['get', 'list', 'watch']
- apiGroups: ['batch']
resources:
- jobs
- cronjobs
verbs: ['get', 'list', 'watch']
- apiGroups: ['autoscaling']
resources:
- horizontalpodautoscalers
verbs: ['get', 'list', 'watch']
---
# 3. Bind the role to the ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: otel-collector-events
subjects:
- kind: ServiceAccount
name: otel-collector
namespace: monitoring
roleRef:
kind: ClusterRole
name: otel-collector-events
apiGroup: rbac.authorization.k8s.io
Apply and verify:
kubectl apply -f rbac.yaml
# Verify the ServiceAccount can list events
kubectl auth can-i list events --as=system:serviceaccount:monitoring:otel-collector
# Should return: yes
Reference the ServiceAccount in your Collector Deployment:
spec:
template:
spec:
serviceAccountName: otel-collector
containers:
- name: otel-collector
image: otel/opentelemetry-collector-contrib:latest
Out-of-Cluster (kubeconfig)
For running the Collector outside the cluster (local development, separate VM):
receivers:
k8s_events:
auth_type: kubeConfig
# Uses ~/.kube/config by default
# Or specify: kube_config_path: /path/to/kubeconfig
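With that config saved locally, a contrib build of the Collector can run against your current kube-context. The config file name here is a placeholder:

```shell
# Runs the Collector on your workstation, using ~/.kube/config for auth
otelcol-contrib --config ./k8s-events-local.yaml
```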
Log Record Structure
Each Kubernetes event becomes an OpenTelemetry log record with these fields:
Log record attributes (per-event metadata):
| Attribute | Example | Description |
|---|---|---|
| k8s.event.reason | BackOff | Why the event was generated |
| k8s.event.action | pulling | What action was being performed |
| k8s.event.count | 47 | How many times this event occurred |
| k8s.event.start_time | 2026-04-05T10:00:00Z | When first observed |
| k8s.event.name | api-7d9f-xk2p.abc123 | Metadata name of the event object |
| k8s.event.uid | a1b2c3... | UID of the event object |
| k8s.namespace.name | production | Namespace of the event |
Resource attributes (shared across all events from the same involved object):
| Attribute | Example | Description |
|---|---|---|
| k8s.object.kind | Pod | Type of the involved object |
| k8s.object.name | api-7d9f-xk2p | Name of the involved object |
| k8s.object.uid | b2c3d4... | UID of the involved object |
| k8s.object.fieldpath | spec.containers{api} | Field path within the object |
| k8s.object.api_version | v1 | API version of the involved object |
| k8s.object.resource_version | 12345 | Resource version of the involved object |
Log record fields:
| Field | Value |
|---|---|
| severity | INFO for Normal events, WARN for Warning events |
| body | The event message text |
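Putting the pieces together, a BackOff event might reach your backend roughly like this (an illustrative sketch; the exact rendering depends on your exporter and backend):

```yaml
# Resource attributes (shared by all events for this object)
resource:
  k8s.object.kind: Pod
  k8s.object.name: api-7d9f-xk2p
# Log record
severity: WARN
body: "Back-off restarting failed container"
attributes:
  k8s.event.reason: BackOff
  k8s.event.count: 47
  k8s.namespace.name: default
```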
Common Event Reasons Reference
Understanding event reasons lets you build targeted filters and alerts:
| Reason | Type | Meaning |
|---|---|---|
| Scheduled | Normal | Pod assigned to a node |
| Pulled | Normal | Container image pulled successfully |
| Started | Normal | Container started |
| Created | Normal | Container created |
| BackOff | Warning | Container restarting (CrashLoopBackOff) |
| OOMKilling | Warning | Container killed due to out-of-memory |
| Failed | Warning | Image pull failed |
| FailedScheduling | Warning | No node available for pod |
| Evicted | Warning | Pod evicted (usually low disk/memory on node) |
| NodeNotReady | Warning | Node transitioned to not-ready state |
| Killing | Normal | Container being terminated |
| Unhealthy | Warning | Liveness/readiness probe failed |
| FailedMount | Warning | Volume mount failed |
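The reason field is queryable directly, which is handy for checking what a filter regex would actually match in your cluster before deploying it:

```shell
# Count occurrences of each Warning reason currently in the cluster
kubectl get events --all-namespaces --field-selector type=Warning \
  -o custom-columns=REASON:.reason --no-headers | sort | uniq -c | sort -rn
```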
Filtering Events
By Namespace
Monitor only production and staging:
receivers:
k8s_events:
auth_type: serviceAccount
namespaces: [production, staging]
By Event Type (Warning Only)
Use the filter processor with OTTL expressions to keep only Warning events and drop Normal noise:
processors:
  filter/warnings_only:
    error_mode: ignore
    logs:
      log_record:
        - severity_number < SEVERITY_NUMBER_WARN
Each condition under logs: log_record: drops any record it matches. SEVERITY_NUMBER_WARN = 13, so this condition drops INFO (9–12) and below, keeping only WARN and ERROR.
Drop Noisy Reasons
Some events fire constantly and add noise. Drop them in the Collector before they reach your backend:
processors:
  filter/drop_noise:
    error_mode: ignore
    logs:
      log_record:
        - IsMatch(attributes["k8s.event.reason"], "^(Pulled|Created|Started|Scheduled|Killing)$")
Enrichment with Processors
Add Cluster Identifier
When running multiple clusters, tag events with the cluster name:
processors:
resource:
attributes:
- action: insert
key: k8s.cluster.name
value: production-eu-west-1
service:
pipelines:
logs:
receivers: [k8s_events]
processors: [resource, batch]
exporters: [otlp/uptrace]
Attribute Transformation
Rename attributes or extract values for easier querying:
processors:
attributes/enrich:
actions:
- action: insert
key: alert.team
from_attribute: k8s.namespace.name # Use namespace to route alerts
Deployment: Single Collector for Everything
The most common pattern is a single Deployment that collects both cluster events and ships them alongside metrics and traces:
apiVersion: apps/v1
kind: Deployment
metadata:
name: otel-collector
namespace: monitoring
spec:
replicas: 1 # See note on replicas below
selector:
matchLabels:
app: otel-collector
template:
metadata:
labels:
app: otel-collector
spec:
serviceAccountName: otel-collector
containers:
- name: otel-collector
image: otel/opentelemetry-collector-contrib:latest
args: ["--config=/etc/otelcol-contrib/config.yaml"]
volumeMounts:
- name: config
mountPath: /etc/otelcol-contrib
volumes:
- name: config
configMap:
name: otel-collector-config
⚠️ Replica count: Without leader election, run exactly one replica; multiple replicas each open a separate watch stream and produce duplicate events. For high availability, configure the k8s_leader_elector extension so only the elected replica watches events.
🔗 OpenTelemetry Operator: For automated Collector lifecycle management in Kubernetes, consider the OpenTelemetry Operator.
Full config combining events + metrics:
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-collector-config
namespace: monitoring
data:
config.yaml: |
receivers:
k8s_events:
auth_type: serviceAccount
namespaces: [production, staging, default]
# Also collect cluster metrics
k8s_cluster:
auth_type: serviceAccount
collection_interval: 30s
processors:
resource:
attributes:
- action: insert
key: k8s.cluster.name
value: my-cluster
batch:
timeout: 10s
exporters:
otlp/uptrace:
endpoint: api.uptrace.dev:4317
headers:
uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'
service:
pipelines:
logs:
receivers: [k8s_events]
processors: [resource, batch]
exporters: [otlp/uptrace]
metrics:
receivers: [k8s_cluster]
processors: [resource, batch]
exporters: [otlp/uptrace]
Real-World Examples
Alert on CrashLoopBackOff
Forward only crash-related events to a high-priority pipeline:
receivers:
k8s_events:
auth_type: serviceAccount
processors:
  filter/crashes:
    error_mode: ignore
    logs:
      log_record:
        - not IsMatch(attributes["k8s.event.reason"], "^(BackOff|OOMKilling|Evicted|Unhealthy)$")
batch:
timeout: 5s
exporters:
otlp/uptrace:
endpoint: api.uptrace.dev:4317
headers:
uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'
service:
pipelines:
logs/alerts:
receivers: [k8s_events]
processors: [filter/crashes, batch]
exporters: [otlp/uptrace]
Separate Pipelines: Warnings vs Informational
processors:
  filter/warnings:
    error_mode: ignore
    logs:
      log_record:
        - severity_number < SEVERITY_NUMBER_WARN # drops INFO, keeps WARN+
  filter/normal:
    error_mode: ignore
    logs:
      log_record:
        - severity_number >= SEVERITY_NUMBER_WARN # drops WARN+, keeps INFO
service:
pipelines:
logs/warnings:
receivers: [k8s_events]
processors: [filter/warnings, resource, batch]
exporters: [otlp/uptrace]
logs/normal:
receivers: [k8s_events]
processors: [filter/normal, resource, batch]
exporters: [otlp/uptrace]
Note: A single receiver can fan out to multiple pipelines. The k8s_events receiver opens only one watch connection regardless of how many pipelines consume it.
Combine Events with Pod Logs
Full-picture Kubernetes observability — events and container logs in one pipeline:
receivers:
k8s_events:
auth_type: serviceAccount
filelog:
include: [/var/log/pods/*/*/*.log]
include_file_path: true
operators:
- type: container
processors:
resource:
attributes:
- action: insert
key: k8s.cluster.name
value: my-cluster
batch:
timeout: 10s
exporters:
otlp/uptrace:
endpoint: api.uptrace.dev:4317
headers:
uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'
service:
pipelines:
logs:
receivers: [k8s_events, filelog]
processors: [resource, batch]
exporters: [otlp/uptrace]
Troubleshooting
No Events Appearing
Check 1: RBAC permissions
kubectl auth can-i list events \
--as=system:serviceaccount:monitoring:otel-collector
# Must return: yes
kubectl auth can-i watch events \
--as=system:serviceaccount:monitoring:otel-collector
# Must return: yes
# Also verify access to referenced objects (pods, nodes, etc.)
kubectl auth can-i list pods \
--as=system:serviceaccount:monitoring:otel-collector
# Must return: yes
If no, re-apply the ClusterRoleBinding from RBAC Setup.
Check 2: Collector logs
kubectl logs -n monitoring deployment/otel-collector | grep -i "event\|error\|k8s"
Check 3: Verify events exist in the cluster
kubectl get events --all-namespaces --sort-by='.lastTimestamp' | tail -20
If there are no events, nothing is happening — trigger one:
kubectl run test-pod --image=does-not-exist:latest
kubectl get events --field-selector involvedObject.name=test-pod
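If the receiver is working, that image pull failure should show up in your backend within seconds. Clean up the test pod afterwards:

```shell
# The bad image produces Warning events (reasons such as Failed and BackOff)
kubectl get events --field-selector type=Warning | grep test-pod

# Remove the test pod when done
kubectl delete pod test-pod
```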
Check 4: Debug exporter
exporters:
debug:
verbosity: detailed
service:
pipelines:
logs:
receivers: [k8s_events]
exporters: [debug]
Duplicate Events
Cause: Multiple Collector replicas each watching events independently.
Fix: Either use a single replica, or enable leader election so only one replica watches at a time:
extensions:
k8s_leader_elector:
lease_name: otel-k8s-events
lease_namespace: monitoring
receivers:
k8s_events:
auth_type: serviceAccount
k8s_leader_elector: k8s_leader_elector # Wire receiver to the extension
service:
extensions: [k8s_leader_elector]
Events Disappearing After Restart
Cause: The receiver resumes from the last watched resource version, but if that version has already been compacted away by the API server (which keeps only a limited watch history), the resume request is rejected.
Behavior: The receiver automatically falls back to listing all current events and continues watching from there. A short gap is expected after restart — this is normal.
Connection Refused to API Server
Cause: The Collector is running outside the cluster but auth_type: serviceAccount is set.
Fix: Use kubeConfig for out-of-cluster deployments:
receivers:
k8s_events:
auth_type: kubeConfig
FAQ
- How is this different from tailing Kubernetes pod logs?
  Kubernetes Events record cluster-level lifecycle events (scheduling, restarts, OOM) generated by Kubernetes controllers, not your application's log output. Use the Filelog Receiver for container stdout/stderr and this receiver for cluster events. They complement each other.
- Does this receiver miss events if the Collector restarts?
  It minimizes data loss by resuming from the last resource version on reconnect. However, Kubernetes only retains events for 1 hour by default, so if the Collector is down longer than that, old events are gone. For critical environments, keep Collector downtime short with fast restarts and health probes.
- Can I watch events from all namespaces without listing them?
  Yes. Omitting the namespaces field entirely watches all namespaces:

  receivers:
    k8s_events:
      auth_type: serviceAccount
      # No namespaces field = watch all namespaces
- Why are Warning events showing as WARN severity but Normal events as INFO?
  This is intentional: the receiver maps the Kubernetes event type field (Normal → INFO, Warning → WARN) to OpenTelemetry severity. This lets you filter by severity in your backend.
- Can I run this alongside other receivers in the same Collector?
  Yes, and it's recommended. A single Collector instance can run k8s_events, filelog, k8s_cluster, and others simultaneously. They share the batch processor and exporter.
- How do I change the 1-hour event retention in Kubernetes?
  Set --event-ttl on the API server (default: 1h0m0s). For example, --event-ttl=4h. This is a kube-apiserver flag; change it in your cluster's control plane configuration. Note: increasing it puts more load on etcd.
- What is k8s.event.count?
  Kubernetes deduplicates repeated events: instead of creating thousands of records for a CrashLoopBackOff, it increments the count field on a single event. The receiver exposes this as k8s.event.count. A count of 47 means that event fired 47 times.
Backend Examples
Examples in this guide use Uptrace, but the Collector works with any OTLP-compatible backend:
Uptrace
exporters:
otlp/uptrace:
endpoint: api.uptrace.dev:4317
headers:
uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'
Grafana Cloud
exporters:
otlp:
endpoint: otlp-gateway.grafana.net:443
headers:
authorization: "Bearer YOUR_GRAFANA_TOKEN"
Jaeger
exporters:
otlp:
endpoint: jaeger-collector:4317
tls:
insecure: true
Datadog
exporters:
otlp:
endpoint: trace.agent.datadoghq.com:4317
headers:
dd-api-key: "YOUR_DATADOG_API_KEY"
Multiple Backends
service:
pipelines:
logs:
receivers: [k8s_events]
processors: [batch]
exporters: [otlp/uptrace, otlp/datadog]
More backends: See the OpenTelemetry backends guide for a full list of compatible platforms.
What's next?
With the Kubernetes Events Receiver in place, you have full visibility into cluster lifecycle events alongside your application logs and metrics.
Next steps to complete your Kubernetes observability:
- Collect container logs with the Filelog Receiver
- Collect Linux system logs with the Syslog Receiver
- Add OpenTelemetry instrumentation to your apps for trace-correlated logs
- Monitor cluster metrics with the Kubernetes Monitoring guide
- Correlate events with distributed traces for full-stack debugging
- Manage Collectors at scale with the OpenTelemetry Operator
- See the official k8seventsreceiver docs for all configuration options