AWS Integration - Redshift

Mackerel supports obtaining and monitoring Amazon Redshift metrics with AWS Integration. When integrating, billable targets are counted as 1 cluster = 1 micro host. In addition, depending on the number of metrics retrieved, you may be charged for exceeding the maximum number of metrics per micro host.

Please refer to the following page for AWS Integration configuration methods and a list of supported AWS services.
AWS Integration

Obtaining metrics

The metrics obtainable with AWS Integration’s Redshift support are as follows. For an explanation of each metric, refer to the AWS help page.

The maximum number of metrics obtainable is 24 + 2 × (number of queues) + 1 × (number of service classes) + 10 × (number of nodes).
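
As a rough illustration of this formula, the sketch below evaluates it for a hypothetical cluster with 2 WLM queues, 2 service classes, and 3 nodes; the numbers are arbitrary example inputs, not defaults.

```python
# Hypothetical example: evaluate the metric-count formula above.
def max_redshift_metrics(queues: int, service_classes: int, nodes: int) -> int:
    return 24 + 2 * queues + 1 * service_classes + 10 * nodes

# 24 + 2*2 + 1*2 + 10*3 = 60 metrics at most for this example cluster
print(max_redshift_metrics(queues=2, service_classes=2, nodes=3))  # -> 60
```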

Metrics per Cluster

The WLM_ID part of the metric name contains the workload management (WLM) queue ID (e.g. 1, Default).
The SERVICE_CLASS part of the metric name contains the workload management (WLM) service class ID (e.g. 6, 7).

| Graph name | Metric | Metric name in Mackerel | Unit | Statistics |
| --- | --- | --- | --- | --- |
| CPU | CPUUtilization | redshift.cpu.used | percentage | Average |
| Database Connections | DatabaseConnections | redshift.database_connections.used | float | Average |
| Cluster Status | HealthStatus, MaintenanceMode | redshift.cluster_status.health, redshift.cluster_status.maintenance | float | Average |
| Network Throughput | NetworkReceiveThroughput, NetworkTransmitThroughput | redshift.network_throughput.receive, redshift.network_throughput.transmit | bytes/sec | Average |
| Disk Space | PercentageDiskSpaceUsed | redshift.disk.used | percentage | Average |
| Total Table Count | TotalTableCount | redshift.total_table_count.count | float | Average |
| Query Runtime Breakdown | QueryRuntimeBreakdown | redshift.query_runtime_breakdown.planning, redshift.query_runtime_breakdown.waiting, redshift.query_runtime_breakdown.executing_read, redshift.query_runtime_breakdown.executing_insert, redshift.query_runtime_breakdown.executing_delete, redshift.query_runtime_breakdown.executing_update, redshift.query_runtime_breakdown.executing_ctas, redshift.query_runtime_breakdown.executing_unload, redshift.query_runtime_breakdown.executing_copy, redshift.query_runtime_breakdown.commit | float | Average |
| Query Throughput | QueriesCompletedPerSecond | redshift.query_throughput.short, redshift.query_throughput.medium, redshift.query_throughput.long | float | Average |
| Query Duration | QueryDuration | redshift.query_duration.short, redshift.query_duration.medium, redshift.query_duration.long | float | Average |
| WLM Query Throughput | WLMQueriesCompletedPerSecond | redshift.wlm_query_throughput.WLM_ID | float | Average |
| WLM Query Duration | WLMQueryDuration | redshift.wlm_query_duration.WLM_ID | float | Average |
| WLM Queue Length | WLMQueueLength | redshift.wlm_queue_length.SERVICE_CLASS | float | Average |
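
If you want to cross-check one of the cluster-level values above against CloudWatch directly, a minimal boto3 sketch such as the following can be used. This is not how Mackerel retrieves the data; it simply queries the AWS/Redshift namespace with the ClusterIdentifier dimension, and the cluster name and region are placeholders.

```python
# Minimal sketch for cross-checking a cluster-level metric against CloudWatch.
# "my-redshift-cluster" and the region are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/Redshift",
    MetricName="CPUUtilization",   # backs the "CPU" graph / redshift.cpu.used
    Dimensions=[{"Name": "ClusterIdentifier", "Value": "my-redshift-cluster"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                    # 5-minute datapoints
    Statistics=["Average"],        # matches the "Average" statistic in the table
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Unit"])
```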

Metrics per Node

Since a Redshift cluster can have multiple nodes, each per-node metric is grouped as follows. The NODE_ROLE part of the metric name contains the role of the node (e.g. leader, compute_0).

| Graph name | Metric | Metric name in Mackerel | Unit | Statistics |
| --- | --- | --- | --- | --- |
| CPU per Node | CPUUtilization | redshift.cpu_per_node.NODE_ROLE.used | percentage | Average |
| Network Throughput per Node | NetworkReceiveThroughput, NetworkTransmitThroughput | redshift.network_throughput_per_node.NODE_ROLE.receive, redshift.network_throughput_per_node.NODE_ROLE.transmit | bytes/sec | Average |
| Disk Space per Node | PercentageDiskSpaceUsed | redshift.disk_per_node.NODE_ROLE.used | percentage | Average |
| Disk IOPS | ReadIOPS, WriteIOPS | redshift.diskiops.NODE_ROLE.read, redshift.diskiops.NODE_ROLE.write | iops | Average |
| Disk Latency | ReadLatency, WriteLatency | redshift.latency.NODE_ROLE.read, redshift.latency.NODE_ROLE.write | float | Average |
| Disk Throughput | ReadThroughput, WriteThroughput | redshift.throughput.NODE_ROLE.read, redshift.throughput.NODE_ROLE.write | bytes/sec | Average |
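
To see which per-node series exist for a cluster, you can enumerate the CloudWatch metrics that carry a node-level dimension. The sketch below assumes the NodeID dimension that AWS publishes for per-node Redshift metrics (values such as Leader or Compute-0); the cluster name and region are again placeholders, and this is not Mackerel's implementation.

```python
# Minimal sketch listing the node-level series behind the per-node graphs above.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

pages = cloudwatch.get_paginator("list_metrics").paginate(
    Namespace="AWS/Redshift",
    MetricName="ReadIOPS",  # backs the "Disk IOPS" graph
    Dimensions=[{"Name": "ClusterIdentifier", "Value": "my-redshift-cluster"}],
)

for page in pages:
    for metric in page["Metrics"]:
        dims = {d["Name"]: d["Value"] for d in metric["Dimensions"]}
        # Each node appears with its own NodeID value (e.g. "Leader", "Compute-0").
        if "NodeID" in dims:
            print(dims["NodeID"])
```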

Notes

Among the graphs/metrics obtainable with AWS Integration listed above, metrics included in the following graphs are retrieved at a different interval.

  • 5-minute interval
    • Query Runtime Breakdown
    • Query Duration
    • Query Throughput
    • WLM Query Duration
    • WLM Query Throughput