Mastering Kubernetes Monitoring: Install Metrics Server via Helm the Right Way!
Complete step-by-step guide to install and troubleshoot Metrics Server on Kubernetes using Helm for EKS, GKE, Minikube & Kubeadm clusters
Difficulty: 🟡 Intermediate
Estimated Time: 15-25 minutes
Prerequisites: Basic Kubernetes knowledge, Helm experience, kubectl familiarity
What You'll Learn
This tutorial covers essential Kubernetes Metrics Server concepts and tools:
- Metrics Server Fundamentals - Understanding its role in Kubernetes monitoring
- Helm Installation - Step-by-step setup with proper flags and configuration
- Verification & Testing - Ensuring your installation works correctly
- Custom Configuration - Using values.yaml for advanced setups
- Troubleshooting - Common issues and solutions for different cluster types
- Production Best Practices - Security, performance, and monitoring considerations
Prerequisites
- Basic Kubernetes knowledge and cluster administration experience
- Helm package manager experience
- kubectl command-line tool familiarity
- Access to a Kubernetes cluster
Related Tutorials
- Kubernetes HPA Autoscaling - Set up autoscaling with Metrics Server
- OpenTelemetry Guide - Advanced observability setup
- Main Tutorials Hub - Step-by-step implementation guides
Introduction
Metrics Server is the heartbeat of Kubernetes performance monitoring. Whether you're tuning autoscaling or simply checking node usage, it's an essential component. However, Helm installation can get tricky if syntax or flags are off.
In this guide, you'll learn how to add the Helm repo, install Metrics Server correctly (with --kubelet-insecure-tls), verify the setup, and use a custom values.yaml for better flexibility.
Step-by-Step Installation
Step 1: Add the Metrics Server Helm Repository
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update
This adds and refreshes the repo so you can access the latest Metrics Server chart.
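Before installing, you can quickly confirm the chart is now visible in your local repo cache (the exact chart and app versions you see will differ):
# Confirm the chart is available locally
helm search repo metrics-server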
Step 2: Install Metrics Server via Helm
helm install metrics-server metrics-server/metrics-server \
--namespace kube-system \
--set "args={--kubelet-insecure-tls,--kubelet-preferred-address-types=InternalIP}"
Why This Matters
Many clusters (like Minikube or bare-metal) need --kubelet-insecure-tls or they'll fail TLS verification.
Key Flags Explained:
- --kubelet-insecure-tls: Skips TLS verification for kubelet connections
- --kubelet-preferred-address-types=InternalIP: Uses internal IPs for better connectivity
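Once the release is installed, a quick sanity check (just a habit worth having, not something the chart requires) is to confirm both flags actually landed on the rendered Deployment:
# Inspect the args passed to the metrics-server container
kubectl -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].args}'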
Step 3: Verify Your Installation
Check Pod Status
kubectl get pods -n kube-system -l "app.kubernetes.io/name=metrics-server"
Check Metrics API
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
Expected Output
You should see a JSON output with node stats:
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {},
  "items": [
    {
      "metadata": {
        "name": "kube1",
        "creationTimestamp": "2025-05-21T10:56:12Z",
        "labels": {
          "beta.kubernetes.io/arch": "amd64",
          "beta.kubernetes.io/os": "linux",
          "kubernetes.io/arch": "amd64",
          "kubernetes.io/hostname": "kube1",
          "kubernetes.io/os": "linux",
          "node-role.kubernetes.io/control-plane": "",
          "node.kubernetes.io/exclude-from-external-load-balancers": ""
        }
      },
      "timestamp": "2025-05-21T10:55:59Z",
      "window": "20.039s",
      "usage": {
        "cpu": "210243026n",
        "memory": "3123988Ki"
      }
    }
  ]
}
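The raw output is fairly verbose. If you have jq available (an optional convenience, not a requirement), you can reduce it to just the per-node usage figures:
# Summarize node CPU and memory usage
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" \
  | jq '.items[] | {node: .metadata.name, cpu: .usage.cpu, memory: .usage.memory}'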
Troubleshooting
If you don't see the expected output, run:
kubectl describe pod <metrics-server-pod-name> -n kube-system
Step 4: Use a Custom values.yaml (Optional but Recommended)
Create values.yaml
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
Install Using Values File
helm install metrics-server metrics-server/metrics-server \
--namespace kube-system -f values.yaml
Bonus: You can add options like nodeSelector, affinity, or tolerations here for advanced scheduling.
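If you adjust values.yaml later, you don't need to uninstall anything; re-applying the same file with helm upgrade --install updates the release in place (and works for the initial install too):
# Apply changes from values.yaml to the existing release
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system -f values.yaml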
Advanced Configuration Options
Complete values.yaml Example
# Basic configuration
args:
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP

# Resource limits
resources:
  limits:
    cpu: 100m
    memory: 200Mi
  requests:
    cpu: 100m
    memory: 200Mi

# Scheduling preferences
nodeSelector:
  kubernetes.io/os: linux

# Affinity rules
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
                - linux

# Tolerations
tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"

# Service configuration
service:
  type: ClusterIP
  port: 4443

# Security context
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
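Before applying a larger values file like this one, it can help to render the chart locally or do a dry run so indentation or schema mistakes surface before anything touches the cluster:
# Render manifests locally without installing
helm template metrics-server metrics-server/metrics-server \
  --namespace kube-system -f values.yaml

# Or simulate the install against the API server
helm install metrics-server metrics-server/metrics-server \
  --namespace kube-system -f values.yaml --dry-run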
Cluster-Specific Configurations
For EKS Clusters
args:
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-insecure-tls=false
- --requestheader-client-ca-file=/etc/ssl/certs/ca-bundle.crt
For GKE Clusters
args:
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-insecure-tls=false
For Minikube Clusters
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
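A convenient pattern is to keep one values file per cluster type and pass the matching one at install time; the file name below (values-eks.yaml) is just an example you would adapt:
# Install on EKS using its dedicated values file
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system -f values-eks.yaml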
Testing and Validation
Test Node Metrics
# Get node metrics
kubectl top nodes
# Get pod metrics
kubectl top pods --all-namespaces
# Get specific pod metrics
kubectl top pod <pod-name> -n <namespace>
Test HPA Integration
# Create a test deployment
kubectl create deployment nginx --image=nginx
# Create HPA
kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=10
# Check HPA status
kubectl get hpa
# Watch HPA scaling
kubectl get hpa -w
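For CPU-based scaling to actually trigger, the nginx Deployment needs a CPU request, a Service to receive traffic, and some load. The commands below are one simple way to set that up for a test; the load generator is just a throwaway busybox loop:
# Add a CPU request so the HPA can compute utilization
kubectl set resources deployment nginx --requests=cpu=100m

# Expose the deployment inside the cluster
kubectl expose deployment nginx --port=80

# Generate load from a temporary pod (exit to stop), then watch the HPA react
kubectl run load-generator --rm -it --image=busybox -- \
  /bin/sh -c "while true; do wget -q -O- http://nginx; done"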
Common Issues and Solutions
Issue: "unable to fetch metrics from API server"
Solution: Check if metrics-server pod is running and has proper permissions.
Issue: "no metrics available for pod"
Solution: Ensure pods have resource requests/limits defined.
Issue: "connection refused" errors
Solution: Verify the --kubelet-insecure-tls flag is set for self-signed certificates.
Issue: Metrics API returns 404
Solution: Check if metrics-server is properly installed and the API is enabled.
Troubleshooting Commands
Check Metrics Server Logs
# Get metrics-server pod name
kubectl get pods -n kube-system -l "app.kubernetes.io/name=metrics-server"
# Check logs
kubectl logs -n kube-system <metrics-server-pod-name>
# Follow logs in real-time
kubectl logs -n kube-system <metrics-server-pod-name> -f
Verify API Resources
# Check if metrics API is available
kubectl api-resources | grep metrics
# Check metrics API endpoints
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/" | jq
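It's also worth confirming that the APIService object fronting the metrics API is registered and reporting Available; a False status here is the usual explanation for 404s or "unable to fetch metrics" errors:
# Check the aggregated API registration
kubectl get apiservice v1beta1.metrics.k8s.io

# Inspect its status conditions if it's not Available
kubectl describe apiservice v1beta1.metrics.k8s.io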
Check Cluster Configuration
# Verify kubelet configuration
kubectl get nodes -o yaml | grep -A 5 kubelet
# Check cluster info
kubectl cluster-info
Production Best Practices
Security Considerations
- Use proper TLS certificates in production environments
- Implement RBAC for metrics access control (see the example below)
- Monitor metrics-server resource usage
- Regular updates to get security patches
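As a sketch of the RBAC point above, the ClusterRole below grants read-only access to the metrics API and binds it to a group; the group name monitoring-team is hypothetical and should be replaced with whatever identity your cluster actually uses:
# Read-only access to node and pod metrics
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics-reader
subjects:
  - kind: Group
    name: monitoring-team   # hypothetical group, replace with your own
    apiGroup: rbac.authorization.k8s.io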
Performance Optimization
- Resource limits to prevent resource exhaustion
- Node affinity for optimal placement
- Monitoring of metrics-server performance
- Scaling based on cluster size
Monitoring and Alerting
- Set up alerts for metrics-server failures (see the example rule below)
- Monitor API response times
- Track metrics collection success rates
- Log analysis for troubleshooting
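As one example of the alerting point above, if Prometheus is already scraping kube-state-metrics (an assumption, not something this guide sets up), a minimal rule can fire when metrics-server has no available replicas:
# Prometheus alerting rule (requires kube-state-metrics)
groups:
  - name: metrics-server
    rules:
      - alert: MetricsServerDown
        expr: kube_deployment_status_replicas_available{namespace="kube-system", deployment="metrics-server"} < 1
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "metrics-server has had no available replicas for 5 minutes"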
Alternative Installation Methods
Using kubectl (Direct Installation)
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
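The upstream components.yaml does not set --kubelet-insecure-tls, so on clusters with self-signed kubelet certificates you may still need to add it yourself, for example with a JSON patch:
# Append the flag to the metrics-server container args
kubectl patch deployment metrics-server -n kube-system --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'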
Using Kustomize
# Create kustomization.yaml
cat <<EOF > kustomization.yaml
resources:
- https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
patchesStrategicMerge:
- patch.yaml
EOF
# Create patch.yaml for custom args
cat <<EOF > patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: metrics-server
          args:
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalIP
EOF
# Apply
kubectl apply -k .
Conclusion
Installing Metrics Server might seem like a small step, but it's crucial for autoscaling and performance visibility.
Using the right Helm syntax or a clean values.yaml file can save you from cryptic errors and hours of debugging.
Key Takeaways
- Essential Component - Metrics Server is required for HPA and resource monitoring
- Proper Configuration - Use correct flags for your cluster type
- Verification - Always test the installation before proceeding
- Customization - Use values.yaml for production deployments
- Troubleshooting - Common issues have straightforward solutions
Next Steps
- Verify your installation with the testing commands provided
- Set up HPA to use the metrics from Metrics Server
- Configure monitoring with Prometheus and Grafana
- Implement best practices for production environments
- Explore advanced features like custom metrics and external metrics
Now your Kubernetes cluster is ready to report accurate metrics like a pro!
Tags: #Kubernetes #Helm #MetricsServer #DevOps #K8sMonitoring #Kubeadm #EKS #GKE #Minikube #ClusterMonitoring #KubernetesAutoscaling