OpenTelemetry RabbitMQ Monitoring Guide

Alexandr Bandurchin
February 07, 2026
6 min read

Monitoring RabbitMQ performance is essential to ensure reliable message delivery and identify potential bottlenecks in your messaging infrastructure. RabbitMQ metrics help you track queue depths, message rates, connection counts, and resource utilization.

This guide explains how to collect RabbitMQ metrics using the OpenTelemetry Collector and visualize them in your monitoring backend.

Quick Setup

| Step | Action | Details |
|------|--------|---------|
| 1. Enable | Enable the RabbitMQ management plugin | rabbitmq-plugins enable rabbitmq_management |
| 2. Configure | Add the rabbitmq receiver to the Collector config | Point it at http://localhost:15672 |
| 3. Restart | Restart the Collector | sudo systemctl restart otelcol-contrib |
| 4. Verify | Check for metrics in your backend | Look for rabbitmq.queue.messages metrics |

What is RabbitMQ?

RabbitMQ is an open-source message broker that implements the Advanced Message Queuing Protocol (AMQP). It acts as an intermediary for messaging, allowing applications to communicate by sending and receiving messages through queues, exchanges, and bindings.

Key features include:

  • Multiple messaging patterns: point-to-point, publish-subscribe, and request-reply
  • Message durability and persistence
  • Clustering, federation, and high availability
  • Support for quorum queues and streams for high-throughput workloads
  • Management HTTP API for monitoring and administration
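
To make queues and message counts concrete, here is a quick look at what the management API exposes. This is a sketch that assumes a local broker with the default guest/guest credentials and jq installed:

```shell
# List each queue with its current message count via the management API
curl -s -u guest:guest http://localhost:15672/api/queues |
  jq -r '.[] | "\(.name)\t\(.messages)"'
```

The same /api/queues endpoint is what the OpenTelemetry receiver described below scrapes on your behalf.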

What is OpenTelemetry Collector?

OpenTelemetry Collector is a vendor-neutral agent that collects telemetry data from the systems you want to monitor, either by scraping them or by receiving pushed data, and forwards it to your observability backend using the OpenTelemetry Protocol (OTLP).

You can use OpenTelemetry Collector to monitor host metrics, PostgreSQL, MySQL, Redis, Kafka, and more.

Prerequisites

Enable RabbitMQ Management Plugin

The OpenTelemetry RabbitMQ receiver requires the management plugin to be enabled, as it collects metrics through the management HTTP API:

shell
# Enable management plugin
sudo rabbitmq-plugins enable rabbitmq_management

# Verify the plugin is enabled
sudo rabbitmq-plugins list

The management interface will be available at http://localhost:15672 by default.

Verify Management API Access

Test that the management API is accessible before configuring the Collector:

shell
# Test API endpoint
curl -u guest:guest http://localhost:15672/api/overview

# Should return JSON with RabbitMQ overview information

Create a Monitoring User

For production environments, create a dedicated monitoring user instead of using the default guest account:

shell
# Create monitoring user
sudo rabbitmqctl add_user otel_monitor secure_password

# Set monitoring tag (required for management API access)
sudo rabbitmqctl set_user_tags otel_monitor monitoring

# Grant read-only permissions (the three patterns are configure, write, read)
sudo rabbitmqctl set_permissions -p / otel_monitor "" "" ".*"
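
Before wiring the account into the Collector, it is worth confirming it can actually read the API. A quick check, using the placeholder password from the step above:

```shell
# Should print the broker version if the monitoring tag and permissions are correct
curl -s -u otel_monitor:secure_password http://localhost:15672/api/overview |
  jq -r .rabbitmq_version
```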

OpenTelemetry RabbitMQ Receiver

To start monitoring RabbitMQ with OpenTelemetry Collector, configure the RabbitMQ receiver in /etc/otelcol-contrib/config.yaml using your Uptrace DSN:

yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
  rabbitmq:
    endpoint: http://localhost:15672
    username: guest
    password: guest
    collection_interval: 10s

exporters:
  otlp:
    endpoint: api.uptrace.dev:4317
    headers: { 'uptrace-dsn': '<FIXME>' }

processors:
  resourcedetection:
    detectors: [env, system]
  cumulativetodelta:
  batch:
    timeout: 10s

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp, rabbitmq]
      processors: [resourcedetection, cumulativetodelta, batch]  # batch should run last
      exporters: [otlp]
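
The metrics pipeline above relies on the cumulativetodelta processor, which turns monotonically increasing counter readings into per-interval deltas (the first reading only establishes a baseline). What it does can be sketched with plain shell arithmetic:

```shell
# Successive readings of a cumulative counter, e.g. messages published
prev=100                      # the first reading becomes the baseline
for reading in 250 400 410; do
  echo $((reading - prev))    # per-interval deltas: 150, 150, 10
  prev=$reading
done
```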

Don't forget to restart the service:

shell
sudo systemctl restart otelcol-contrib

You can also check OpenTelemetry Collector logs for any errors:

shell
sudo journalctl -u otelcol-contrib -f

Configuration Options

The RabbitMQ receiver provides several configuration options:

| Option | Default | Description |
|--------|---------|-------------|
| endpoint | http://localhost:15672 | RabbitMQ management API URL |
| username | (required) | Management API username |
| password | (required) | Management API password |
| collection_interval | 10s | How often to scrape metrics |
| timeout | 10s | HTTP request timeout |
| tls | (none) | TLS configuration for HTTPS endpoints |

Basic Configuration

yaml
receivers:
  rabbitmq:
    endpoint: http://localhost:15672
    username: otel_monitor
    password: secure_password
    collection_interval: 30s

Advanced Configuration with TLS

yaml
receivers:
  rabbitmq:
    endpoint: https://rabbitmq.example.com:15671
    username: otel_monitor
    password: secure_password
    collection_interval: 10s
    tls:
      insecure: false
      ca_file: /path/to/ca.crt
      cert_file: /path/to/client.crt
      key_file: /path/to/client.key
    timeout: 60s

Available Metrics

The RabbitMQ receiver collects metrics covering queues, exchanges, nodes, and connections:

Queue Metrics

| Metric | Description |
|--------|-------------|
| rabbitmq.queue.messages | Total messages in the queue |
| rabbitmq.queue.messages.ready | Messages ready for delivery |
| rabbitmq.queue.messages.unacknowledged | Messages waiting for acknowledgment |
| rabbitmq.queue.consumers | Number of consumers per queue |
| rabbitmq.queue.message.current | Current message count |

Exchange Metrics

| Metric | Description |
|--------|-------------|
| rabbitmq.exchange.messages.published | Messages published to the exchange |
| rabbitmq.exchange.messages.confirmed | Confirmed published messages |
| rabbitmq.exchange.messages.returned | Returned messages |

Node Metrics

| Metric | Description |
|--------|-------------|
| rabbitmq.node.memory.used | Memory usage by node |
| rabbitmq.node.disk.free | Available disk space |
| rabbitmq.node.fd.used | File descriptors in use |
| rabbitmq.node.sockets.used | Network sockets in use |
| rabbitmq.node.process.count | Running Erlang processes |

Connection Metrics

| Metric | Description |
|--------|-------------|
| rabbitmq.connection.count | Total active connections |
| rabbitmq.channel.count | Total active channels |

Docker Compose Example

Run RabbitMQ and the OpenTelemetry Collector together for quick setup:

yaml
services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: admin
      RABBITMQ_DEFAULT_PASS: admin
    networks:
      - monitoring

  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.145.0
    container_name: otel-collector
    command: ["--config=/etc/otelcol-contrib/config.yaml"]
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
    depends_on:
      - rabbitmq
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge

With the corresponding config.yaml:

yaml
receivers:
  rabbitmq:
    endpoint: http://rabbitmq:15672
    username: admin
    password: admin
    collection_interval: 10s

exporters:
  otlp:
    endpoint: api.uptrace.dev:4317
    headers: { 'uptrace-dsn': '<FIXME>' }

processors:
  batch:
    timeout: 10s

service:
  pipelines:
    metrics:
      receivers: [rabbitmq]
      processors: [batch]
      exporters: [otlp]

Dashboard Visualization

When telemetry data reaches Uptrace, it automatically generates a RabbitMQ dashboard from a pre-defined template. You can also create custom dashboards to visualize:

  • Queue Health: Message counts, consumer activity, and queue depths
  • Throughput Metrics: Message publish/consume rates and acknowledgment patterns
  • Resource Utilization: Memory usage, disk space, and connection counts
  • Node Status: Cluster health and individual node performance
  • Exchange Activity: Message routing and delivery statistics

For other backends, see OpenTelemetry backend comparison.

Alerting

Set up alerts for common RabbitMQ issues:

  • Queue depth growing: rabbitmq.queue.messages > 10000 for 5 minutes — consumers can't keep up
  • No consumers: rabbitmq.queue.consumers == 0 — messages accumulating with no processing
  • Unacknowledged messages: rabbitmq.queue.messages.unacknowledged > 1000 — consumers may be stuck
  • Memory pressure: rabbitmq.node.memory.used approaching configured limit — risk of flow control
  • Disk space low: rabbitmq.node.disk.free below threshold — risk of blocking publishers
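
These rules are expressed in your backend's alerting language; as an illustration of the underlying check, here is a sketch of a standalone queue-depth probe against the management API. The queue name (orders), vhost (%2F, i.e. "/"), credentials, and threshold are all placeholder examples:

```shell
#!/bin/sh
# Alert when a queue's depth exceeds a fixed threshold
THRESHOLD=10000
depth=$(curl -s -u otel_monitor:secure_password \
  "http://localhost:15672/api/queues/%2F/orders" | jq .messages)
if [ "$depth" -gt "$THRESHOLD" ]; then
  echo "ALERT: queue depth $depth exceeds $THRESHOLD"
fi
```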

Troubleshooting

Common Issues

| Problem | Likely Cause | Solution |
|---------|--------------|----------|
| Connection refused | Management plugin not enabled | Run rabbitmq-plugins enable rabbitmq_management |
| Authentication failed | Wrong credentials or missing permissions | Verify the user has the monitoring tag |
| Missing metrics | User lacks required tags | Run rabbitmqctl set_user_tags user monitoring |
| High CPU on Collector | Collection interval too aggressive | Increase collection_interval to 30s or 60s |
| Timeout errors | Large cluster or slow network | Increase the timeout value in the receiver config |

Debug Configuration

Enable debug logging to troubleshoot collection issues:

yaml
exporters:
  debug:
    verbosity: detailed

service:
  telemetry:
    logs:
      level: debug
  pipelines:
    metrics:
      receivers: [rabbitmq]
      processors: [batch]
      exporters: [debug, otlp]  # The debug exporter prints metrics to the Collector logs

Firewall Configuration

Ensure the Collector can access RabbitMQ management port:

shell
# Allow access to management port (15672)
sudo ufw allow 15672

# Test connectivity from Collector host
curl -u otel_monitor:password http://rabbitmq-server:15672/api/overview

Security Best Practices

For production environments:

shell
# Remove default guest user
sudo rabbitmqctl delete_user guest

# Create dedicated monitoring user with minimal permissions
sudo rabbitmqctl add_user otel_monitor $(openssl rand -base64 32)
sudo rabbitmqctl set_user_tags otel_monitor monitoring
sudo rabbitmqctl set_permissions -p / otel_monitor "" "" ".*"

Use TLS when the Collector and RabbitMQ are on different hosts:

yaml
receivers:
  rabbitmq:
    endpoint: https://rabbitmq.example.com:15671
    tls:
      insecure: false
      ca_file: /etc/ssl/certs/rabbitmq-ca.crt
      cert_file: /etc/ssl/certs/client.crt
      key_file: /etc/ssl/private/client.key

Performance Optimization

Collection Interval Tuning

Adjust collection frequency based on your monitoring needs:

yaml
receivers:
  rabbitmq:
    collection_interval: 30s  # Reduce frequency for high-traffic brokers
    timeout: 10s              # Increase timeout for slow responses

Metric Filtering

Filter unnecessary metrics to reduce overhead:

yaml
processors:
  filter/rabbitmq:
    metrics:
      exclude:
        match_type: regexp
        metric_names:
          - "rabbitmq\\.queue\\.messages\\..*"  # Exclude specific queue metrics if not needed

FAQ

What RabbitMQ versions are supported? The receiver works with RabbitMQ 3.8+ and 4.x. It requires the management plugin, which is available in all modern RabbitMQ versions.

Does the receiver support RabbitMQ clusters? Yes. Point the receiver at any node in the cluster and it will collect metrics for all nodes, queues, and exchanges across the cluster via the management API.

How is this different from Prometheus scraping? RabbitMQ also exposes a Prometheus endpoint. The OpenTelemetry receiver collects similar metrics but exports them in OTLP format, allowing you to use any OTLP-compatible backend and correlate with traces and logs.
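
For comparison, the native endpoint can also be scraped with the Collector's prometheus receiver. A minimal sketch, assuming the rabbitmq_prometheus plugin is enabled (it serves metrics on port 15692 by default):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: rabbitmq
          scrape_interval: 15s
          static_configs:
            - targets: ['localhost:15692']
```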

Can I monitor multiple RabbitMQ clusters? Yes. Define multiple receiver instances with different names:

yaml
receivers:
  rabbitmq/cluster1:
    endpoint: http://rabbitmq-cluster1:15672
    username: monitor
    password: pass1
  rabbitmq/cluster2:
    endpoint: http://rabbitmq-cluster2:15672
    username: monitor
    password: pass2

What's the monitoring tag vs administrator tag? The monitoring tag grants read-only access to the management API, which is sufficient for metrics collection. The administrator tag grants full access and should not be used for monitoring.

What's next?

RabbitMQ monitoring provides visibility into message queue performance, consumer lag, and broker health.

To further enhance your infrastructure observability, consider extending the same Collector setup to the rest of your stack, such as host metrics, PostgreSQL, MySQL, Redis, and Kafka.