How to scrape Prometheus Kubernetes cluster metrics with Prometheus
Estimated time to read: 2 minutes
In this tutorial you will learn how to set up a Prometheus server to scrape, via federation, the Kubernetes cluster metrics gathered by the Prometheus instance running in the control plane.
This is useful if you have a large environment spanning multiple clouds or on-prem and want to centralize the observability of your infrastructure, for example when using Datadog, SigNoz or another observability platform.
In this tutorial we will use a Prometheus server, but any service capable of scraping Prometheus metrics with basic auth will work, such as the Prometheus receiver of an OpenTelemetry Collector.
Prerequisites:
- A Kubernetes cluster
- A Prometheus server
Configuration Required
As we're using a Prometheus server, we'll use Prometheus YAML config as an example. The same config, with slight differences, applies when converting the configuration to an OTel Collector Prometheus scraper or some other form of Prometheus scraping.
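To illustrate those slight differences, the same scrape job can be embedded under the OpenTelemetry Collector's Prometheus receiver. This is a sketch: the `receivers.prometheus.config` wrapper is the Collector's own structure, and the credentials and target hostname are placeholders you must replace with your own.

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'federated-prometheus'
          scrape_interval: 30s
          honor_labels: true
          scheme: https
          metrics_path: '/federate'
          params:
            'match[]':
              - '{__name__=~".*"}'
          basic_auth:
            username: 'your-username'   # placeholder
            password: 'your-password'   # placeholder
          static_configs:
            - targets:
                - 'prometheus.example.com'  # placeholder target
```

The `config` block accepts standard Prometheus `scrape_configs`, so the rest of this tutorial carries over largely unchanged.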
We start by adding a scrape job. You can name it anything you want; in our example we'll call it 'federated-prometheus'. After this, add the usual configuration you require for intervals, timeouts and so on. Two useful options are `honor_labels: true` and `honor_timestamps: true`, which preserve the labels and timestamps of the original scrapes within the cluster.
scrape_configs:
  - job_name: 'federated-prometheus'
    # Scrape interval (adjust as needed)
    scrape_interval: 30s
    scrape_timeout: 30s
    # Preserve original labels from the federated server
    honor_labels: true
    honor_timestamps: true
Now make sure to use an https scheme and the '/federate' metrics path. This endpoint is exposed by the in-cluster Prometheus server specifically to be scraped by other servers. You can use params to select only the series you want; in our example we'll scrape all metrics.
    scheme: https
    # Federate endpoint path
    metrics_path: '/federate'
    # Query parameters for federation
    params:
      'match[]':
        - '{__name__=~".*"}'  # Match all metrics
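If you don't need every series, `match[]` accepts ordinary PromQL series selectors, and you can list several of them. A sketch of a narrower selection; the job name and metric prefix here are hypothetical examples, so adjust them to what your cluster actually exposes:

```yaml
    params:
      'match[]':
        - '{job="kubernetes-nodes"}'    # hypothetical job name
        - '{__name__=~"container_.*"}'  # e.g. container metrics only
```

Federating fewer series keeps the scrape payload and the load on the in-cluster server down.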
Finally, add the basic auth credentials (your Prometheus dashboard username and password) and the target (your in-cluster Prometheus server). These are obtained from your Managed Kubernetes cluster overview screen under 'Logging and monitoring'.
    basic_auth:
      username: 'your-username'
      password: 'your-password'
    # Target Prometheus server (hostname only, no scheme)
    static_configs:
      - targets:
          - 'p-...emk.fuga.cloud'
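If you'd rather keep the password out of the configuration file itself, Prometheus' `basic_auth` also accepts a `password_file` instead of an inline `password`. The file path below is a placeholder; point it at wherever you store the secret (for example, a mounted Kubernetes Secret):

```yaml
    basic_auth:
      username: 'your-username'
      # Placeholder path; the file should contain only the password
      password_file: '/etc/prometheus/federate-password'
```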
Your configuration is now complete, and once your Prometheus server is started or restarted it will scrape your cluster's metrics.
The full example configuration is given below.
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'federated-prometheus'
    # Scrape interval for federation (adjust as needed)
    scrape_interval: 30s
    scrape_timeout: 30s
    # Preserve original labels from the federated server
    honor_labels: true
    honor_timestamps: true
    # Use HTTPS
    scheme: https
    # Federate endpoint path
    metrics_path: '/federate'
    # Query parameters for federation
    params:
      'match[]':
        - '{__name__=~".*"}'  # Match all metrics
    # Basic authentication
    basic_auth:
      username: 'your-username'
      password: 'your-password'
    # Target Prometheus server (hostname only, no scheme)
    static_configs:
      - targets:
          - 'p-...emk.fuga.cloud'
