# AWS CloudWatch metrics and logs

AWS CloudWatch allows you to forward metrics and logs to third-party destinations using AWS Data Firehose. Uptrace provides compatible HTTP endpoints so you can monitor your AWS infrastructure with Uptrace, an open source APM tool that supports distributed tracing, metrics, and logs.

## Choosing an approach

Metrics:

| Approach | Pros | Cons |
| --- | --- | --- |
| Data Firehose | Simplest setup, fully managed by AWS | No AWS tags, only standard dimensions |
| YACE + Prometheus | Rich metadata (AWS tags as labels) | Requires Prometheus and an AWS-accessible host |

Logs:

| Approach | Pros | Cons |
| --- | --- | --- |
| Data Firehose | Push-based, fully managed by AWS | Requires a subscription filter per log group |
| OTel Collector | Pull-based, supports autodiscovery and filtering | Alpha stability, requires AWS credentials |

## Metrics via YACE + Prometheus

CloudWatch Metrics is an AWS monitoring service that lets you collect and track metrics in real time. If you want to interact with CloudWatch Metrics programmatically, see our guide to the CloudWatch Metrics API.

yet-another-cloudwatch-exporter (YACE) exports CloudWatch metrics as Prometheus metrics with AWS tags as labels. This gives you richer metadata than Data Firehose, which only provides access to standard dimensions such as InstanceId and InstanceType.
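To control which tags end up as labels, YACE's discovery config has an `exportedTagsOnMetrics` option. The sketch below is illustrative: the `Name` and `environment` tags are example values, and the exact label naming (e.g. `tag_Name`) can vary between YACE versions.

```yaml
apiVersion: v1alpha1
discovery:
  # Export the value of the EC2 "Name" tag as a label
  # on every exported AWS/EC2 metric.
  exportedTagsOnMetrics:
    AWS/EC2:
      - Name
  jobs:
    - type: AWS/EC2
      regions: [us-east-1]
      # Only discover instances carrying this tag (illustrative values).
      searchTags:
        - key: environment
          value: production
      metrics:
        - name: CPUUtilization
          statistics: [Average]
          period: 300
          length: 300
```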

1. First, install YACE by downloading a binary or using Docker/Kubernetes.

   Use the following IAM policy to grant all the permissions required by YACE:

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Action": [
           "tag:GetResources",
           "cloudwatch:GetMetricData",
           "cloudwatch:GetMetricStatistics",
           "cloudwatch:ListMetrics",
           "apigateway:GET",
           "aps:ListWorkspaces",
           "autoscaling:DescribeAutoScalingGroups",
           "dms:DescribeReplicationInstances",
           "dms:DescribeReplicationTasks",
           "ec2:DescribeTransitGatewayAttachments",
           "ec2:DescribeSpotFleetRequests",
           "shield:ListProtections",
           "storagegateway:ListGateways",
           "storagegateway:ListTagsForResource",
           "iam:ListAccountAliases"
         ],
         "Effect": "Allow",
         "Resource": "*"
       }
     ]
   }
   ```
2. Next, configure YACE using a YAML configuration file. To specify which configuration file to load, pass the `-config.file` flag on the command line.

   YACE supports automatic resource discovery via tags, but you can also use static and custom namespace jobs.

   Here is an example config file for EC2; you can find more on GitHub:

   ```yaml
   apiVersion: v1alpha1
   discovery:
     jobs:
       - type: AWS/EC2
         regions:
           - us-east-1
         period: 300
         length: 300
         metrics:
           - name: CPUUtilization
             statistics: [Average]
           - name: NetworkIn
             statistics: [Average, Sum]
           - name: NetworkOut
             statistics: [Average, Sum]
           - name: NetworkPacketsIn
             statistics: [Sum]
           - name: NetworkPacketsOut
             statistics: [Sum]
           - name: DiskReadBytes
             statistics: [Sum]
           - name: DiskWriteBytes
             statistics: [Sum]
           - name: DiskReadOps
             statistics: [Sum]
           - name: DiskWriteOps
             statistics: [Sum]
           - name: StatusCheckFailed
             statistics: [Sum]
           - name: StatusCheckFailed_Instance
             statistics: [Sum]
           - name: StatusCheckFailed_System
             statistics: [Sum]
   ```
3. Once you have YACE running, the Prometheus metrics should be available at http://localhost:5000/metrics.

   Now add a corresponding scrape job to your Prometheus configuration:

   ```yaml
   scrape_configs:
     - job_name: 'yet-another-cloudwatch-exporter'
       metrics_path: '/metrics'
       static_configs:
         - targets: ['localhost:5000']
   ```
4. The final step is to configure Prometheus to export data to Uptrace using remote write or the OpenTelemetry Collector. You can also use the Grafana integration to explore the collected Prometheus metrics and create the dashboards provided by YACE.
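For the remote write route, the Prometheus side can be sketched as follows. The endpoint URL here is an assumption; take the exact URL and the DSN from your Uptrace project settings.

```yaml
# Prometheus remote write to Uptrace (sketch; verify the URL
# in your Uptrace project settings).
remote_write:
  - url: 'https://api.uptrace.dev/api/v1/prometheus/write'
    headers:
      # Replace with the DSN of your Uptrace project.
      uptrace-dsn: '<FIXME>'
```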

## Metrics via Data Firehose

If you don't need AWS tags and only require standard dimensions, you can send CloudWatch metrics directly to Uptrace using AWS Data Firehose.

You can configure Data Firehose using a Terraform module or the AWS Console.

### Terraform module

Uptrace provides a Terraform module that configures AWS CloudWatch to send metrics to Uptrace. Refer to the module's readme for details.

### AWS Console

You can also configure CloudWatch manually using the AWS Console.

1. Create a new Data Firehose Delivery Stream with the following details:
   - Stream source: Direct PUT
   - Endpoint: https://api.uptrace.dev/api/v1/cloudwatch/metrics
   - API Key: the Uptrace DSN for your project
   - Content Encoding: GZIP
2. Create a new CloudWatch Metric Stream:
   1. Open the CloudWatch AWS console.
   2. Choose Metrics → Streams.
   3. Click the Create metric stream button.
   4. Choose the CloudWatch metric namespaces to include in the metric stream.
   5. Choose "Select an existing Firehose owned by your account", and select the Firehose Delivery Stream you created earlier.
   6. Set the Output Format to json.
   7. Optionally, specify a name for this metric stream under Metric Stream Name.
   8. Click the Create metric stream button.
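With the json output format, each record the metric stream delivers through Firehose looks roughly like the sketch below; the field values are illustrative, and the exact shape is defined by the CloudWatch metric stream JSON output format.

```json
{
  "metric_stream_name": "my-metric-stream",
  "account_id": "123456789012",
  "region": "us-east-1",
  "namespace": "AWS/EC2",
  "metric_name": "CPUUtilization",
  "dimensions": {
    "InstanceId": "i-0123456789abcdef0"
  },
  "timestamp": 1700000000000,
  "value": {
    "max": 9.8,
    "min": 0.3,
    "sum": 14.5,
    "count": 5.0
  },
  "unit": "Percent"
}
```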

## Logs via OpenTelemetry Collector

The OpenTelemetry Collector `awscloudwatch` receiver pulls CloudWatch logs via the AWS SDK, giving you autodiscovery of log groups and fine-grained filtering without managing subscription filters.

The receiver authenticates using standard AWS credentials (credentials file, IMDS on EC2, or environment variables such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).

Here is an example configuration that autodiscovers log groups with a /aws/eks/ prefix and forwards them to Uptrace:

```yaml
receivers:
  awscloudwatch:
    region: us-west-1
    logs:
      poll_interval: 1m
      groups:
        autodiscover:
          limit: 100
          prefix: /aws/eks/

processors:
  batch:
    send_batch_size: 10000
    timeout: 10s

exporters:
  otlp/uptrace:
    endpoint: api.uptrace.dev:4317
    headers:
      uptrace-dsn: '<FIXME>'

service:
  pipelines:
    logs:
      receivers: [awscloudwatch]
      processors: [batch]
      exporters: [otlp/uptrace]
```

You can also specify named log groups instead of autodiscovery:

```yaml
receivers:
  awscloudwatch:
    region: us-west-1
    logs:
      poll_interval: 5m
      groups:
        named:
          /aws/eks/dev-0/cluster:
```
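Named groups can also restrict which log streams are polled via a `names` (or `prefixes`) list, per the receiver's documentation. The stream name below is a placeholder:

```yaml
receivers:
  awscloudwatch:
    region: us-west-1
    logs:
      poll_interval: 5m
      groups:
        named:
          /aws/eks/dev-0/cluster:
            # Only poll streams with these names (placeholder value).
            names: [kube-apiserver]
```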

## Logs via Data Firehose

CloudWatch Logs is an AWS log management service that allows you to collect, monitor, and analyze log files from your applications and infrastructure.

You can forward CloudWatch logs to Uptrace using AWS Data Firehose. The flow is: CloudWatch log group → subscription filter → Data Firehose delivery stream → Uptrace HTTP endpoint.

### Terraform module

Uptrace provides a Terraform module that configures AWS CloudWatch to send logs to Uptrace. Refer to the module's readme for details.

### AWS Console

You can also configure CloudWatch manually using the AWS Console.

1. Create a new Data Firehose Delivery Stream with the following details:
   - Stream source: Direct PUT
   - Endpoint: https://api.uptrace.dev/api/v1/cloudwatch/logs
   - API Key: the Uptrace DSN for your project
   - Content Encoding: GZIP
2. Create a subscription filter on each CloudWatch log group you want to forward, using the Firehose Delivery Stream you created earlier as the destination.

## What's next?