I'm facing challenges with my observability setup on a K3s cluster. We have a single node running multiple applications and a Vector agent configured to transform metrics and logs. I have node-exporter and kube-state-metrics exposing metrics to the Vector agent, but I'm stuck: node-exporter only provides node-level metrics, and kube-state-metrics only exposes object state (such as resource requests and limits), not actual CPU and memory usage at the pod level. I'm looking for ways to scrape pod-level metrics. My specific questions are: 1) Can we scrape metrics from the metrics server? If yes, how do we connect to its API? 2) Are there any exporters available that can expose pod-level CPU and memory usage?
2 Answers
First off, I don't think you can scrape metrics directly from the metrics server. It serves the Kubernetes Metrics API (metrics.k8s.io) for consumers like `kubectl top` and the Horizontal Pod Autoscaler, and it doesn't expose anything in the Prometheus exposition format a scraper would expect. For your second question, Prometheus is the usual tool for collecting those metrics, but I get that you might be locked into the Vector agent. Look at cAdvisor for pod-level metrics: it collects per-container CPU and memory usage, and it's already embedded in the kubelet, so there's nothing extra to deploy on K3s. Just make sure you wire it into your current setup properly.
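To make the pod-level part concrete, here's a minimal sketch of what you can pull out of cAdvisor's Prometheus-format output once you have it. It assumes the raw exposition text is already available as a string (from whatever scrape mechanism you end up using) and that the `prometheus_client` Python package is installed; both of those are assumptions on my side, not part of your setup.

```python
# Minimal sketch: extract per-pod CPU and memory usage from cAdvisor's
# Prometheus-format output. Assumes `raw` already holds the scraped text.
from prometheus_client.parser import text_string_to_metric_families

WANTED = {"container_cpu_usage_seconds_total", "container_memory_working_set_bytes"}

def pod_usage(raw: str) -> dict:
    """Return {(namespace, pod): {"cpu_seconds": ..., "memory_bytes": ...}}."""
    usage = {}
    for family in text_string_to_metric_families(raw):
        for sample in family.samples:
            if sample.name not in WANTED:
                continue
            labels = sample.labels
            # The kubelet emits a pod-level aggregate series with an empty
            # "container" label; keep only those to avoid double counting.
            if labels.get("container") or not labels.get("pod"):
                continue
            key = (labels.get("namespace", ""), labels["pod"])
            entry = usage.setdefault(key, {})
            if sample.name == "container_cpu_usage_seconds_total":
                entry["cpu_seconds"] = sample.value
            else:
                entry["memory_bytes"] = sample.value
    return usage
```

Note that the CPU metric is a cumulative counter in seconds, so to get a usage rate you'd diff two samples over time, which is what Prometheus's `rate()` normally does for you.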
Regarding the metrics server, it's really intended for internal Kubernetes consumers and typically doesn't allow direct scraping. For pod-level metrics, use the kubelet metrics endpoint instead: the kubelet serves cAdvisor data on its secure port at `:10250/metrics/cadvisor`, and the request has to be authorized, so send a bearer token from a service account that is allowed to read node metrics. If you're looking for alternatives aside from Prometheus, you might also explore other lightweight exporters that can work with your Vector setup.
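To expand on that, here's a minimal sketch of fetching that endpoint from inside a pod. It assumes the pod's service account is bound to a ClusterRole that allows `get` on node metrics (the usual `nodes/metrics` / `nodes/proxy` rules), that the node IP is injected via the downward API as `NODE_IP`, and that the `requests` package is available; those are all assumptions for illustration, not something your cluster necessarily has already.

```python
# Minimal sketch: call the kubelet's cAdvisor endpoint with the pod's
# service account token. The RBAC binding and the NODE_IP environment
# variable are assumptions, not part of the original setup.
import os
import requests

TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def fetch_cadvisor(node_ip: str) -> str:
    with open(TOKEN_PATH) as f:
        token = f.read().strip()
    resp = requests.get(
        f"https://{node_ip}:10250/metrics/cadvisor",
        headers={"Authorization": f"Bearer {token}"},
        # The kubelet's serving certificate often doesn't chain to the CA
        # mounted in the pod, so verification is disabled here purely for
        # illustration; point verify= at the right CA bundle in practice.
        verify=False,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    text = fetch_cadvisor(os.environ.get("NODE_IP", "127.0.0.1"))
    print(text[:500])  # first few lines of the Prometheus-format output
```

You can sanity-check the same request from a shell with `curl -k -H "Authorization: Bearer $TOKEN" https://<node-ip>:10250/metrics/cadvisor` before wiring anything into Vector.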
That's a great suggestion! Can you explain how to get the kubelet metrics endpoint URL?
Thanks for the tip on cAdvisor! Do you have any suggestions on how to configure it to work with the Vector agent?