OpenTelemetry Syslog Receiver
📋 Part of the OpenTelemetry ecosystem: The Syslog Receiver is a component of the OpenTelemetry Collector that ingests syslog messages over the network and converts them into structured OpenTelemetry log records. New to OpenTelemetry? Start with What is OpenTelemetry?
Linux servers, network devices, and applications have used syslog as the standard logging protocol for decades. The OpenTelemetry Syslog Receiver bridges that legacy infrastructure with modern observability — receiving syslog messages over TCP or UDP and converting them into vendor-neutral OpenTelemetry log records.
This guide takes you from a basic syslog listener to a production-grade pipeline with rsyslog/syslog-ng forwarding, TLS encryption, and Kubernetes integration.
What is the Syslog Receiver?
The Syslog Receiver opens a TCP or UDP port, receives incoming syslog messages, parses their structured fields, and emits OpenTelemetry log records. It's part of the OpenTelemetry Collector Contrib distribution.
Why use it instead of tailing log files?
Unlike the Filelog Receiver (which reads files from disk), the Syslog Receiver is push-based: your systems actively forward logs to the Collector. This is ideal when:
- You can't mount or access log files directly (network devices, remote servers)
- You already have rsyslog or syslog-ng configured
- You want centralized collection from many hosts without deploying agents everywhere
Core capabilities:
- Listens for syslog messages on TCP or UDP
- Parses RFC 3164 (BSD) and RFC 5424 (IETF) protocols
- Automatically extracts timestamp, severity, facility, hostname, app name
- Supports TLS for encrypted transport
- Works with rsyslog, syslog-ng, and any RFC-compliant syslog sender
How it works: your systems push syslog messages over TCP or UDP, the receiver parses the protocol fields, and the resulting structured log records flow through the Collector pipeline to your backend.
Syslog Protocol Formats
The Syslog Receiver supports two protocol versions. Choose based on what your systems send.
RFC 3164 — BSD Syslog
The original syslog format, used by older Linux systems and most network equipment (routers, switches, firewalls):
<34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick on /dev/pts/8
Fields extracted automatically:
| Field | Example Value |
|---|---|
| priority | 34 (facility × 8 + severity) |
| timestamp | Oct 11 22:14:15 |
| hostname | mymachine |
| appname | su |
| message | 'su root' failed for lonvick... |
Limitation: RFC 3164 timestamps have no year and no timezone — the receiver uses the location setting to interpret them.
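The priority arithmetic is easy to verify yourself. A small sketch (these helpers are illustrative, not part of the Collector):

```python
# Decode a syslog PRI value into (facility, severity).
# PRI = facility * 8 + severity, so the inverse is divmod(pri, 8).
def decode_pri(pri: int) -> tuple[int, int]:
    facility, severity = divmod(pri, 8)
    return facility, severity

def encode_pri(facility: int, severity: int) -> int:
    return facility * 8 + severity

# <34> from the example above: facility 4 (security/auth), severity 2 (critical)
print(decode_pri(34))    # (4, 2)
print(encode_pri(4, 2))  # 34
```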
RFC 5424 — IETF Syslog
The modern format with structured data, timezone, and message ID:
<34>1 2026-04-05T14:30:00.000Z mymachine su 1234 ID47 [exampleSDID@32473 iut="3"] Auth failure
Fields extracted automatically:
| Field | Example Value |
|---|---|
| priority | 34 |
| version | 1 |
| timestamp | 2026-04-05T14:30:00.000Z (ISO 8601 with TZ) |
| hostname | mymachine |
| appname | su |
| proc_id | 1234 |
| msg_id | ID47 |
| structured_data | [exampleSDID@32473 iut="3"] |
| message | Auth failure |
RFC 5424 is preferred for new deployments — it's unambiguous and carries more context.
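To see how the fields fit together, here is a sketch that assembles the example message above from its parts. The helper name and argument order are hypothetical, chosen for illustration:

```python
from datetime import datetime, timezone

# Assemble an RFC 5424 line: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME
# PROCID MSGID STRUCTURED-DATA MSG (illustrative helper).
def build_rfc5424(pri, hostname, appname, procid, msgid, sd, msg, ts=None):
    if ts is None:
        # ISO 8601 with millisecond precision and explicit UTC marker
        ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return f"<{pri}>1 {ts} {hostname} {appname} {procid} {msgid} {sd} {msg}"

line = build_rfc5424(34, "mymachine", "su", 1234, "ID47",
                     '[exampleSDID@32473 iut="3"]', "Auth failure",
                     ts="2026-04-05T14:30:00.000Z")
print(line)
# <34>1 2026-04-05T14:30:00.000Z mymachine su 1234 ID47 [exampleSDID@32473 iut="3"] Auth failure
```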
Quick Start
📚 Prerequisites: This guide assumes you have the OpenTelemetry Collector installed. Need help? See our Collector installation guide.
💡 Backend Configuration: Examples in this guide use Uptrace as the observability backend. OpenTelemetry is vendor-neutral — you can send logs to Grafana Cloud, Datadog, Elasticsearch, or any OTLP-compatible platform. See backend examples at the end of this guide.
The minimal configuration listens on TCP port 54527 for RFC 3164 messages:
receivers:
syslog:
tcp:
listen_address: '0.0.0.0:54527'
protocol: rfc3164
location: UTC
exporters:
otlp/uptrace:
endpoint: api.uptrace.dev:4317
headers:
uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'
service:
pipelines:
logs:
receivers: [syslog]
exporters: [otlp/uptrace]
What happens:
- Collector opens port 54527 and waits for incoming syslog messages
- Each received message is parsed into structured fields
- Parsed log records are forwarded to your backend
Test it immediately with logger:
logger -n 127.0.0.1 -P 54527 --tcp "Test message from syslog"
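If logger isn't available, a few lines of Python do the same thing. A minimal sketch using the host and port from the config above (the hostname and app name in the frame are placeholders):

```python
import socket

# Send one RFC 3164-style message over TCP; a newline terminates the frame.
def send_syslog_tcp(message: str, host: str = "127.0.0.1", port: int = 54527) -> None:
    # PRI 13 = facility 1 (user) * 8 + severity 5 (notice)
    line = f"<13>Oct 11 22:14:15 myhost myapp: {message}\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode())

# With the Collector listening: send_syslog_tcp("Test message from Python")
```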
TCP vs UDP Transport
TCP (Recommended)
TCP guarantees delivery — if the Collector is temporarily unavailable, rsyslog/syslog-ng will buffer messages and retry:
receivers:
syslog:
tcp:
listen_address: '0.0.0.0:54527'
max_log_size: 1MiB # Max size per message
protocol: rfc5424
UDP
UDP is simpler and lower overhead, but messages are lost if the Collector is down. Use for high-volume, low-criticality logs:
receivers:
syslog:
udp:
listen_address: '0.0.0.0:514'
protocol: rfc3164
location: UTC
⚠️ Port 514: Binding to ports below 1024 requires root or CAP_NET_BIND_SERVICE. For non-root deployments, use a port ≥ 1024 (e.g. 54527) and have rsyslog forward there.
TCP + TLS
For secure transport between hosts or across networks:
receivers:
syslog:
tcp:
listen_address: '0.0.0.0:6514'
tls:
cert_file: /etc/otelcol/server.crt
key_file: /etc/otelcol/server.key
ca_file: /etc/otelcol/ca.crt # For mutual TLS (optional)
protocol: rfc5424
The standard port for syslog over TLS is 6514 (per RFC 5425). Most syslog clients support it natively.
Forwarding from rsyslog
rsyslog is the default syslog daemon on most Linux distributions (Debian, Ubuntu, RHEL, CentOS).
Basic TCP Forwarding
Add to the end of /etc/rsyslog.conf (or create /etc/rsyslog.d/otel.conf):
# Forward all logs to OpenTelemetry Collector over TCP
*.* action(type="omfwd"
target="127.0.0.1"
port="54527"
protocol="tcp"
action.resumeRetryCount="10"
queue.type="linkedList"
queue.size="10000")
Key options:
| Option | Description |
|---|---|
| target | Collector host (127.0.0.1 for local, IP/hostname for remote) |
| protocol | tcp or udp |
| action.resumeRetryCount | Retries if the Collector is unreachable |
| queue.type="linkedList" | In-memory buffer to avoid data loss |
| queue.size | Max buffered messages |
Restart rsyslog after changes:
sudo systemctl restart rsyslog
RFC 5424 Format with rsyslog
By default rsyslog sends RFC 3164. To send RFC 5424 (recommended):
# Use RFC 5424 format
template(name="RFC5424" type="string"
string="<%PRI%>1 %TIMESTAMP:::date-rfc3339% %HOSTNAME% %APP-NAME% %PROCID% %MSGID% %STRUCTURED-DATA% %MSG%\n")
*.* action(type="omfwd"
target="127.0.0.1"
port="54527"
protocol="tcp"
Template="RFC5424"
queue.type="linkedList"
queue.size="10000")
Update your Collector config to use protocol: rfc5424 when using this template.
TLS Forwarding with rsyslog
# TLS forwarding uses the gtls netstream driver (omfwd itself is built in);
# point rsyslog at the CA that signed the Collector's certificate
global(DefaultNetstreamDriverCAFile="/etc/ssl/certs/ca.crt")
*.* action(type="omfwd"
target="collector.internal"
port="6514"
protocol="tcp"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverAuthMode="x509/name"
StreamDriverPermittedPeers="collector.internal")
Forwarding from syslog-ng
syslog-ng is common on SUSE, Alpine, and systems where more flexibility is needed.
Basic TCP Forwarding
Add to /etc/syslog-ng/syslog-ng.conf:
destination d_otel {
tcp("127.0.0.1"
port(54527)
flags(no-multi-line)
);
};
log {
source(s_local);
destination(d_otel);
};
RFC 5424 Format with syslog-ng
Use syslog-ng's built-in syslog driver, which outputs RFC 5424 natively:
destination d_otel_rfc5424 {
syslog("127.0.0.1"
port(54527)
transport(tcp)
);
};
log {
source(s_local);
destination(d_otel_rfc5424);
};
The syslog() driver automatically formats messages as RFC 5424 with PRI, version, structured-data, and ISO 8601 timestamps — no custom template needed.
Restart after changes:
sudo systemctl restart syslog-ng
Operators and Enrichment
After the syslog receiver parses protocol fields, you can use operators to transform and enrich log records — the same operators available in the Filelog Receiver.
Move Message to Body
By default the syslog receiver stores the message text in attributes.message. Move it to the log record body:
receivers:
syslog:
tcp:
listen_address: '0.0.0.0:54527'
protocol: rfc3164
location: UTC
operators:
- type: move
from: attributes.message
to: body
Add Environment Metadata
Enrich all received logs with service and environment context:
operators:
- type: move
from: attributes.message
to: body
- type: add
field: resource["service.name"]
value: infrastructure
- type: add
field: attributes.environment
value: production
Filter by Facility
Forward only security-related logs (facility 4 = security/auth):
operators:
- type: filter
expr: 'attributes.facility != 4 and attributes.facility != 10'
Syslog facility numbers:
| Number | Facility |
|---|---|
| 0 | kernel |
| 1 | user-level |
| 3 | system daemons |
| 4 | security/auth |
| 7 | news |
| 10 | security/auth (alt) |
| 16-23 | local0–local7 (custom) |
Parse Structured Data (RFC 5424)
RFC 5424 STRUCTURED-DATA fields land in attributes.structured_data. Extract individual values:
operators:
- type: move
from: attributes.message
to: body
- type: regex_parser
parse_from: attributes.structured_data
regex: 'requestId="(?P<request_id>[^"]+)"'
if: 'attributes.structured_data != "-"'
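You can test the extraction regex offline before wiring it into the operator. Here it runs against a hypothetical structured-data value carrying a requestId:

```python
import re

# Same pattern as the regex_parser operator above.
pattern = re.compile(r'requestId="(?P<request_id>[^"]+)"')

sd = '[app@12345 requestId="abc-123" user="lonvick"]'
match = pattern.search(sd)
print(match.group("request_id"))  # abc-123

# The nilvalue "-" means no structured data; the operator's `if` guard skips it.
assert pattern.search("-") is None
```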
Kubernetes Syslog Collection
In Kubernetes, syslog receivers are useful for collecting node-level system logs (kernel, kubelet, systemd) rather than container stdout/stderr — those are better handled by the Filelog Receiver with the container operator.
DaemonSet with Syslog Receiver
Deploy on every node to collect system-level syslog:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: otel-collector-syslog
namespace: monitoring
spec:
selector:
matchLabels:
app: otel-collector-syslog
template:
metadata:
labels:
app: otel-collector-syslog
spec:
hostNetwork: true # Access host network to receive syslog
containers:
- name: otel-collector
image: otel/opentelemetry-collector-contrib:latest
ports:
- containerPort: 54527
hostPort: 54527
protocol: TCP
volumeMounts:
- name: config
mountPath: /etc/otelcol-contrib/
volumes:
- name: config
configMap:
name: otel-collector-syslog-config
Collector ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-collector-syslog-config
namespace: monitoring
data:
config.yaml: |
receivers:
syslog:
tcp:
listen_address: '0.0.0.0:54527'
protocol: rfc5424
operators:
- type: move
from: attributes.message
to: body
- type: add
field: resource["k8s.node.name"]
value: ${env:NODE_NAME}
processors:
batch:
timeout: 10s
exporters:
otlp/uptrace:
endpoint: api.uptrace.dev:4317
headers:
uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'
service:
pipelines:
logs:
receivers: [syslog]
processors: [batch]
exporters: [otlp/uptrace]
Configure each node's rsyslog to forward to the DaemonSet pod's hostPort:
*.* action(type="omfwd"
target="127.0.0.1"
port="54527"
protocol="tcp"
queue.type="linkedList"
queue.size="5000")
Real-World Examples
Collect Auth Logs Only
receivers:
syslog:
tcp:
listen_address: '0.0.0.0:54527'
protocol: rfc3164
location: UTC
operators:
- type: filter
expr: 'attributes.facility != 4 and attributes.facility != 10'
- type: move
from: attributes.message
to: body
Network Device Logs (Cisco / Juniper)
Network equipment typically sends RFC 3164 over UDP from port 514:
receivers:
syslog:
udp:
listen_address: '0.0.0.0:514'
protocol: rfc3164
location: UTC
operators:
- type: move
from: attributes.message
to: body
- type: add
field: attributes.source_type
value: network_device
Then in rsyslog, configure specific hosts to forward:
if $fromhost-ip startswith '10.0.1.' then {
action(type="omfwd" target="otel-collector" port="514" protocol="udp")
stop
}
High-Volume with Filtering and Batching
Production setup with memory protection and selective forwarding:
receivers:
syslog:
tcp:
listen_address: '0.0.0.0:54527'
protocol: rfc5424
operators:
- type: filter
# Drop noisy health-check messages
expr: 'body matches "(?i)health.?check|ping|heartbeat"'
- type: move
from: attributes.message
to: body
processors:
memory_limiter:
check_interval: 1s
limit_mib: 256
spike_limit_mib: 64
batch:
timeout: 5s
send_batch_size: 512
exporters:
otlp/uptrace:
endpoint: api.uptrace.dev:4317
headers:
uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'
service:
pipelines:
logs:
receivers: [syslog]
processors: [memory_limiter, batch]
exporters: [otlp/uptrace]
⚠️ Order matters: memory_limiter must be the first processor in the pipeline.
Troubleshooting
No Logs Appearing
Check 1: Port is open and listening
# Verify the port is bound
ss -tlnp | grep 54527
# or
netstat -tlnp | grep 54527
Check 2: Firewall
# Allow port (firewalld)
firewall-cmd --permanent --add-port=54527/tcp
firewall-cmd --reload
# Or iptables
iptables -A INPUT -p tcp --dport 54527 -j ACCEPT
Check 3: Test connectivity manually
# Send a test message
logger -n 127.0.0.1 -P 54527 --tcp "Test OTel syslog message"
# For UDP
logger -n 127.0.0.1 -P 514 "Test UDP message"
Check 4: Debug exporter
exporters:
debug:
verbosity: detailed
service:
pipelines:
logs:
receivers: [syslog]
exporters: [debug]
./otelcol --config=config.yaml 2>&1 | grep -i "log\|syslog\|error"
Parse Errors
Problem: failed to parse syslog message in Collector logs.
Cause: Protocol mismatch — your sender uses RFC 3164 but Collector expects RFC 5424, or vice versa.
Diagnose:
# Capture raw syslog messages to inspect format
nc -l 54528 | head -5
# Then point rsyslog temporarily at port 54528
Fix: Match protocol in Collector config to what your sender actually produces.
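A quick heuristic for classifying captured messages: RFC 5424 puts a version field ("1") and a space immediately after the PRI, while RFC 3164 goes straight to the timestamp. A rough sketch, not a full parser:

```python
import re

# Classify a raw syslog line by format (heuristic only).
def detect_protocol(line: str) -> str:
    if re.match(r"^<\d{1,3}>1 ", line):
        return "rfc5424"  # <PRI> followed by version "1" and a space
    if re.match(r"^<\d{1,3}>", line):
        return "rfc3164"  # <PRI> followed directly by the timestamp
    return "unknown"

print(detect_protocol("<34>1 2026-04-05T14:30:00Z host app - - - hi"))  # rfc5424
print(detect_protocol("<34>Oct 11 22:14:15 mymachine su: failed"))      # rfc3164
```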
rsyslog Not Forwarding
Verify rsyslog is sending:
# Check rsyslog for errors
journalctl -u rsyslog -f
# Test rsyslog config
rsyslogd -N1
Check the queue isn't blocked:
# Look for "action suspended" messages
grep "action suspended" /var/log/syslog
If Collector was down and the queue filled up, restart rsyslog after bringing Collector back up:
sudo systemctl restart rsyslog
High Memory Usage
Reduce batch size and add memory limiter:
processors:
memory_limiter:
check_interval: 1s
limit_mib: 128
batch:
timeout: 5s
send_batch_size: 256
Filter noisy senders in rsyslog before they reach the Collector:
# Drop debug messages before forwarding
if $syslogseverity >= 7 then stop
*.* action(type="omfwd" target="127.0.0.1" port="54527" protocol="tcp")
Performance Optimization
Batching
Reduces network round-trips to the backend:
processors:
batch:
timeout: 5s # Send at least every 5s
send_batch_size: 512 # Or when 512 records accumulate
service:
pipelines:
logs:
receivers: [syslog]
processors: [batch]
exporters: [otlp/uptrace]
Guidelines:
- Low latency requirements: timeout: 1-2s
- High volume: increase send_batch_size to 1024–2048
- The defaults (200ms timeout, 8192-record batches) are fine for moderate load
TCP vs UDP Performance
- UDP: lower overhead, no connection state, suitable for > 50k msg/sec
- TCP: connection overhead but retries on failure — better for reliability
For very high volume with TCP, increase the OS socket buffer:
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.rmem_default=8388608
FAQ
What's the difference between Syslog Receiver and Filelog Receiver?
- Syslog Receiver: push-based, receives messages over the network (TCP/UDP). Best for remote systems, network devices, and existing rsyslog/syslog-ng infrastructure.
- Filelog Receiver: pull-based, reads log files from disk. Best for local application logs, container logs, and any file-based log source.
Which protocol should I use — RFC 3164 or RFC 5424?
Use RFC 5424 for new deployments — it includes timezone in timestamps, structured data fields, and a message ID. Use RFC 3164 only when your sender doesn't support RFC 5424 (older network devices, legacy apps).
Can I receive from multiple sources simultaneously?
Yes. Define multiple named receivers:
receivers:
  syslog/servers:
    tcp:
      listen_address: '0.0.0.0:54527'
    protocol: rfc5424
  syslog/network:
    udp:
      listen_address: '0.0.0.0:514'
    protocol: rfc3164
    location: UTC
service:
  pipelines:
    logs:
      receivers: [syslog/servers, syslog/network]
      exporters: [otlp/uptrace]
Why is the timestamp wrong?
RFC 3164 has no timezone. Set location to match your sender's timezone:
receivers:
  syslog:
    udp:
      listen_address: '0.0.0.0:514'
    protocol: rfc3164
    location: America/New_York # IANA timezone name
Can I correlate syslog logs with traces?
Yes, if your app includes trace_id and span_id in syslog messages (e.g. in the structured-data field for RFC 5424). Use an operator to extract them. For native trace correlation, consider using a logging bridge instead.
How do I handle log rotation on the sender side?
Syslog receivers are not affected by log rotation — they receive messages over the network, not from files. Rotation on the sender side is transparent.
Is there a message size limit?
RFC 3164 recommends 1024 bytes; RFC 5424 allows up to 2048 bytes by default. The Collector's TCP receiver defaults to 1MiB. Set it explicitly:
receivers:
  syslog:
    tcp:
      listen_address: '0.0.0.0:54527'
      max_log_size: 1MiB
    protocol: rfc5424
Backend Examples
This guide uses Uptrace in examples, but OpenTelemetry works with any OTLP-compatible backend:
Uptrace
exporters:
otlp/uptrace:
endpoint: api.uptrace.dev:4317
headers:
uptrace-dsn: 'https://<secret>@api.uptrace.dev?grpc=4317'
Grafana Cloud
exporters:
otlp:
endpoint: otlp-gateway.grafana.net:443
headers:
authorization: "Bearer YOUR_GRAFANA_TOKEN"
Jaeger
exporters:
otlp:
endpoint: jaeger-collector:4317
tls:
insecure: true
Datadog
exporters:
otlp:
endpoint: trace.agent.datadoghq.com:4317
headers:
dd-api-key: "YOUR_DATADOG_API_KEY"
Multiple Backends
service:
pipelines:
logs:
receivers: [syslog]
processors: [batch]
exporters: [otlp/uptrace, otlp/datadog]
More backends: See the OpenTelemetry backends guide for a full list of compatible platforms.
What's next?
With the Syslog Receiver configured, your infrastructure logs flow into OpenTelemetry alongside application traces and metrics.
Next steps:
- Learn about OpenTelemetry Logs for the complete logging picture
- Collect file-based logs with the Filelog Receiver
- Follow structured logging best practices to get more value from log data
- Correlate logs with distributed traces for faster debugging
- Monitor your infrastructure metrics alongside logs
- See the official Syslog Receiver docs for all configuration options