OpenTelemetry RabbitMQ Monitoring Guide

Alexandr Bandurchin
August 14, 2025
4 min read

Monitoring RabbitMQ performance is essential to ensure reliable message delivery and identify potential bottlenecks in your messaging infrastructure. RabbitMQ metrics help you track queue depths, message rates, connection counts, and resource utilization.

To monitor RabbitMQ performance, you can use OpenTelemetry Collector to collect metrics and Uptrace to visualize them.

What is RabbitMQ?

RabbitMQ is an open-source message broker that implements the Advanced Message Queuing Protocol (AMQP). It acts as an intermediary for messaging, allowing applications to communicate by sending and receiving messages through queues, exchanges, and bindings.

RabbitMQ supports multiple messaging patterns including point-to-point, publish-subscribe, and request-reply. It provides features like message durability, clustering, federation, and high availability, making it suitable for enterprise-scale distributed systems.

What is OpenTelemetry Collector?

OpenTelemetry Collector is an agent that pulls telemetry data from systems you want to monitor and sends it to tracing tools using the OpenTelemetry protocol (OTLP).

You can use OpenTelemetry Collector to monitor host metrics, PostgreSQL, MySQL, Redis, and more.

OpenTelemetry RabbitMQ receiver

To start monitoring RabbitMQ with OpenTelemetry Collector, configure the RabbitMQ receiver in /etc/otelcol-contrib/config.yaml using your Uptrace DSN:

yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
  rabbitmq:
    endpoint: http://localhost:15672
    username: guest
    password: guest
    collection_interval: 10s

exporters:
  otlp:
    endpoint: api.uptrace.dev:4317
    headers: { 'uptrace-dsn': '<FIXME>' }

processors:
  resourcedetection:
    detectors: [env, system]
  cumulativetodelta:
  batch:
    timeout: 10s

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp, rabbitmq]
      processors: [resourcedetection, cumulativetodelta, batch]
      exporters: [otlp]
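
Before restarting, you can optionally check that the configuration parses correctly. Recent collector releases ship a validate subcommand; the path below assumes the default config location:

shell
otelcol-contrib validate --config /etc/otelcol-contrib/config.yaml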

Don't forget to restart the service:

shell
sudo systemctl restart otelcol-contrib

You can also check OpenTelemetry Collector logs for any errors:

shell
sudo journalctl -u otelcol-contrib -f

Configuration Options

The RabbitMQ receiver provides several configuration options:

Basic Configuration

yaml
receivers:
  rabbitmq:
    endpoint: http://localhost:15672
    username: monitoring_user
    password: secure_password
    collection_interval: 30s

Advanced Configuration

yaml
receivers:
  rabbitmq:
    endpoint: http://rabbitmq.example.com:15672
    username: monitoring_user
    password: secure_password
    collection_interval: 10s
    tls:
      insecure: false
      ca_file: /path/to/ca.crt
      cert_file: /path/to/client.crt
      key_file: /path/to/client.key
    timeout: 60s

Authentication Setup

For production environments, create a dedicated monitoring user in RabbitMQ:

shell
# Create monitoring user
sudo rabbitmqctl add_user monitoring_user secure_password

# Set monitoring tag (required for management API access)
sudo rabbitmqctl set_user_tags monitoring_user monitoring

# Grant permissions (optional, for more detailed metrics)
sudo rabbitmqctl set_permissions -p / monitoring_user ".*" ".*" ".*"
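
To confirm that the new user can reach the management API, you can query the overview endpoint with its credentials (using the example username and password from above):

shell
curl -u monitoring_user:secure_password http://localhost:15672/api/overview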

Prerequisites

Enable RabbitMQ Management Plugin

The OpenTelemetry RabbitMQ receiver requires the management plugin to be enabled:

shell
# Enable management plugin
sudo rabbitmq-plugins enable rabbitmq_management

# Verify the plugin is enabled
sudo rabbitmq-plugins list

The management interface will be available at http://localhost:15672 by default.

Verify Management API Access

Test that the management API is accessible:

shell
# Test API endpoint
curl -u guest:guest http://localhost:15672/api/overview

# Should return JSON with RabbitMQ overview information

Available Metrics

The RabbitMQ receiver collects comprehensive metrics covering various aspects of your message broker:

Queue Metrics

  • rabbitmq.queue.messages - Total messages in queue
  • rabbitmq.queue.messages.ready - Messages ready for delivery
  • rabbitmq.queue.messages.unacknowledged - Messages waiting for acknowledgment
  • rabbitmq.queue.consumers - Number of consumers per queue
  • rabbitmq.queue.message.current - Current message count

Exchange Metrics

  • rabbitmq.exchange.messages.published - Messages published to exchange
  • rabbitmq.exchange.messages.confirmed - Confirmed published messages
  • rabbitmq.exchange.messages.returned - Returned messages

Node Metrics

  • rabbitmq.node.memory.used - Memory usage by node
  • rabbitmq.node.disk.free - Available disk space
  • rabbitmq.node.fd.used - File descriptors in use
  • rabbitmq.node.sockets.used - Network sockets in use
  • rabbitmq.node.process.count - Running Erlang processes

Connection Metrics

  • rabbitmq.connection.count - Total active connections
  • rabbitmq.channel.count - Total active channels

OpenTelemetry Backend

Once the metrics are collected and exported, you can visualize them in a compatible backend. For example, you can use Uptrace to create dashboards from the metrics delivered by the OpenTelemetry Collector, or compare it with other backends. For APM capabilities, explore top APM tools for messaging infrastructure.

Dashboard Visualization

When telemetry data reaches Uptrace, it automatically generates a RabbitMQ dashboard from a pre-defined template showing:

  • Queue Health: Message counts, consumer activity, and queue depths
  • Throughput Metrics: Message publish/consume rates and acknowledgment patterns
  • Resource Utilization: Memory usage, disk space, and connection counts
  • Node Status: Cluster health and individual node performance
  • Exchange Activity: Message routing and delivery statistics

Troubleshooting

Common Issues

  1. Connection refused: Ensure RabbitMQ management plugin is enabled and running
  2. Authentication failed: Verify username/password and user permissions
  3. Missing metrics: Check that the monitoring user has appropriate tags and permissions
  4. High CPU usage: Adjust collection_interval to reduce scraping frequency
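
A few quick checks help narrow down the first three issues (the hostnames and credentials below are the examples used earlier in this guide):

shell
# Is the management plugin enabled?
sudo rabbitmq-plugins list | grep rabbitmq_management

# Do the monitoring credentials work against the management API?
curl -i -u monitoring_user:secure_password http://localhost:15672/api/overview

# Does the user exist and carry the monitoring tag?
sudo rabbitmqctl list_users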

Debug Configuration

Enable debug logging to troubleshoot collection issues:

yaml
service:
  telemetry:
    logs:
      level: debug
  pipelines:
    metrics:
      receivers: [rabbitmq]
      processors: [batch]
      exporters: [debug, otlp]  # Add the debug exporter for troubleshooting
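
The debug exporter referenced above replaces the older logging exporter in recent collector releases and must also be declared in the exporters section; a minimal sketch:

yaml
exporters:
  debug:
    verbosity: detailed  # print full metric details to the collector logs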

Firewall Configuration

Ensure the collector can access RabbitMQ management port:

shell
# Allow access to management port (15672)
sudo ufw allow 15672

# For remote monitoring, ensure network connectivity
telnet rabbitmq-server 15672

Security Considerations

Production Authentication

For production environments, avoid using default credentials:

shell
# Remove default guest user (recommended)
sudo rabbitmqctl delete_user guest

# Create dedicated monitoring user with minimal permissions
MONITOR_PASSWORD=$(openssl rand -base64 32)   # keep this value; the collector needs it
sudo rabbitmqctl add_user otel_monitor "$MONITOR_PASSWORD"
sudo rabbitmqctl set_user_tags otel_monitor monitoring
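
The generated password then has to be supplied to the collector. One option is to keep it out of the config file and reference an environment variable instead (the variable name here is only an example):

yaml
receivers:
  rabbitmq:
    endpoint: http://localhost:15672
    username: otel_monitor
    password: ${env:RABBITMQ_MONITOR_PASSWORD}  # export this variable where the collector runs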

TLS Configuration

Configure TLS for secure communication:

yaml
receivers:
  rabbitmq:
    endpoint: https://rabbitmq.example.com:15671
    tls:
      insecure: false
      ca_file: /etc/ssl/certs/rabbitmq-ca.crt
      cert_file: /etc/ssl/certs/client.crt
      key_file: /etc/ssl/private/client.key
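
With the client certificates in place, you can verify TLS access to the management API with curl before pointing the collector at it (the paths and host reuse the example values above):

shell
curl --cacert /etc/ssl/certs/rabbitmq-ca.crt \
     --cert /etc/ssl/certs/client.crt \
     --key /etc/ssl/private/client.key \
     -u monitoring_user:secure_password \
     https://rabbitmq.example.com:15671/api/overview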

Performance Optimization

Collection Interval Tuning

Adjust collection frequency based on your monitoring needs:

yaml
receivers:
  rabbitmq:
    collection_interval: 30s  # Reduce frequency for high-traffic brokers
    timeout: 10s              # Adjust the request timeout if the management API responds slowly

Metric Filtering

Filter unnecessary metrics to reduce overhead:

yaml
processors:
  filter/rabbitmq:
    metrics:
      exclude:
        match_type: regexp
        metric_names:
          - "rabbitmq\.queue\.messages\..*"  # Exclude specific queue metrics if not needed

What's next?

RabbitMQ monitoring provides visibility into message queue performance, consumer lag, and broker health. For alternative messaging systems, explore Kafka monitoring, or combine with Docker and Kubernetes instrumentation for complete infrastructure observability.