Quick Start
This guide walks you through installing Kubeadapt in your Kubernetes cluster using Helm.
Prerequisites
Before installing Kubeadapt, ensure you have:
- Kubernetes Cluster - Version 1.24 or later
- Helm 3 - Version 3.0 or later
- kubectl - Configured to access your cluster
- Storage Class - Required for Prometheus data persistence. Because Kubeadapt needs at most 30 minutes of Prometheus data retention, you may use an ephemeral emptyDir volume instead of persistent storage.
Installation
Step 1: Add Helm Repository
```bash
helm repo add kubeadapt https://kubeadapt.github.io/kubeadapt-helm
helm repo update
```

Step 2: Create Namespace

```bash
kubectl create namespace kubeadapt
```

Step 3: Generate Agent Token
Before installation, generate an agent token from the Kubeadapt dashboard:
- Navigate to app.kubeadapt.io
- Go to Clusters → Add Cluster
- Choose the cloud provider or custom cluster type
- Specify the environment type: production-like or non-production-like
- If a Node Exporter is already running in your cluster, it can be reused; there is no need to install it again. You can optionally provide the namespace where it is installed.
The generated command will look similar to this:
```bash
helm install kubeadapt kubeadapt/kubeadapt \
  --namespace kubeadapt \
  --create-namespace \
  --set agent.enabled=true \
  --set agent.config.token="your-generated-token-here"
```

Note: The actual command is automatically generated with your unique token. Copy and use the command provided in the dashboard.
Step 4: Install Kubeadapt
Use the dashboard-generated command from Step 3 above, which includes your agent token.
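If you prefer a values file over inline `--set` flags, the options in the generated command map to a sketch like this (the token must still come from your dashboard):

```yaml
# values.yaml — mirrors the --set flags in the dashboard-generated command
agent:
  enabled: true
  config:
    token: "your-generated-token-here"  # replace with the token from the dashboard
```

Then install with `helm install kubeadapt kubeadapt/kubeadapt --namespace kubeadapt --create-namespace -f values.yaml`.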
Step 5: Verify Installation
Check all pods are running:
```bash
kubectl get pods -n kubeadapt
```

Expected output:

```
NAME                                       READY   STATUS    AGE
kubeadapt-kube-state-metrics-xxxxx         1/1     Running   2m
kubeadapt-opencost-xxxxx                   1/1     Running   2m
kubeadapt-prometheus-node-exporter-xxxxx   1/1     Running   2m
kubeadapt-prometheus-server-xxxxx          1/1     Running   2m
```

Check services:

```bash
kubectl get svc -n kubeadapt
```

Expected output:

```
NAME                                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
kubeadapt-prometheus-server          ClusterIP   10.0.0.100   <none>        80/TCP,9090/TCP   2m
kubeadapt-opencost                   ClusterIP   10.0.0.101   <none>        9003/TCP          2m
kubeadapt-kube-state-metrics         ClusterIP   10.0.0.102   <none>        8080/TCP          2m
kubeadapt-prometheus-node-exporter   ClusterIP   10.0.0.103   <none>        9100/TCP          2m
```

Configuration Options
Storage Configuration
Persistent Storage (any StorageClass):
```yaml
prometheus:
  server:
    persistentVolume:
      enabled: true
      size: 20Gi
      storageClass: your-storage-class  # gp2, gp3, standard, ssd, managed-premium, local-path, etc.
    retention: "30m"
```

Ephemeral Storage (emptyDir):

```yaml
prometheus:
  server:
    persistentVolume:
      enabled: false
    emptyDir:
      enabled: true
      sizeLimit: "20Gi"
    retention: "30m"
```

Resource Configuration
Adjust resources based on cluster size.
Small Clusters (~10 nodes, ~300 pods):
```yaml
agent:
  enabled: true
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 256Mi

prometheus:
  server:
    persistentVolume:
      enabled: true
      size: 10Gi         # With networkCost: ~10Gi (~10K-180K network time series)
      storageClass: gp2  # Adjust for your cloud provider
    retention: "30m"
    resources:
      requests:
        cpu: 500m      # With networkCost: ~575m (+15% scraping overhead)
        memory: 512Mi  # With networkCost: ~1-2Gi (network time series vary by topology)
      limits:
        cpu: 2000m   # With networkCost: ~2500m (burst capacity)
        memory: 2Gi  # With networkCost: ~3-4Gi (traffic bursts)

  kube-state-metrics:
    resources:
      requests:
        cpu: 10m
        memory: 55Mi
      limits:
        cpu: 100m
        memory: 128Mi

  prometheus-node-exporter:
    resources:
      requests:
        cpu: 100m
        memory: 30Mi
      limits:
        cpu: 200m
        memory: 50Mi

opencost:
  opencost:
    exporter:
      resources:
        requests:
          cpu: 10m
          memory: 55Mi
        limits:
          cpu: 999m
          memory: 1Gi
```

Medium Clusters (~100 nodes, ~3,000 pods):
```yaml
agent:
  enabled: true
  resources:
    requests:
      cpu: 200m
      memory: 256Mi
    limits:
      cpu: 1000m
      memory: 1Gi

prometheus:
  server:
    persistentVolume:
      enabled: true
      size: 20Gi  # With networkCost: ~30Gi (~100K realistic, ~18M theoretical network time series)
      storageClass: gp2
    retention: "30m"
    resources:
      requests:
        cpu: 1000m  # With networkCost: ~1150m (+15% scraping overhead)
        memory: 2Gi # With networkCost: ~3-6Gi (network topology dependent)
      limits:
        cpu: 4000m  # With networkCost: ~5000m (burst capacity)
        memory: 8Gi # With networkCost: ~12Gi (high-density service mesh)

  kube-state-metrics:
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 1Gi

  prometheus-node-exporter:
    resources:
      requests:
        cpu: 100m
        memory: 30Mi
      limits:
        cpu: 200m
        memory: 50Mi

opencost:
  opencost:
    exporter:
      resources:
        requests:
          cpu: 50m
          memory: 128Mi
        limits:
          cpu: 1500m
          memory: 2Gi
```

Large Clusters (~500 nodes, ~15,000 pods):
```yaml
agent:
  enabled: true
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: 3000m
      memory: 4Gi
  config:
    queryConcurrency: 20
    goMaxProcs: 6
    goMemLimit: "3600MiB"

prometheus:
  server:
    persistentVolume:
      enabled: true
      size: 50Gi  # With networkCost: ~100Gi
      storageClass: gp2
    retention: "30m"
    resources:
      requests:
        cpu: 2000m  # With networkCost: ~2500m (+15% scraping overhead)
        memory: 8Gi # With networkCost: ~12-24Gi (network topology dependent)
      limits:
        cpu: 8000m   # With networkCost: ~12000m (burst capacity)
        memory: 24Gi # With networkCost: ~32Gi (high-density service mesh)

  kube-state-metrics:
    resources:
      requests:
        cpu: 200m
        memory: 512Mi
      limits:
        cpu: 1000m
        memory: 2Gi

  prometheus-node-exporter:
    resources:
      requests:
        cpu: 100m
        memory: 30Mi
      limits:
        cpu: 200m
        memory: 50Mi

opencost:
  opencost:
    exporter:
      resources:
        requests:
          cpu: 200m
          memory: 512Mi
        limits:
          cpu: 2000m
          memory: 4Gi
```

Data Retention Configuration
Adjust based on your monitoring needs:
```yaml
prometheus:
  server:
    retention: "30m"

opencost:
  opencost:
    dataRetention:
      dailyResolutionDays: 15
```

Note: Metrics are continuously sent to Kubeadapt cloud for long-term storage and analysis. Local retention primarily serves as a buffer for the agent's 60-second collection cycle.
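If you want a longer local buffer (for example, to debug scraping locally), the same key accepts larger durations; a sketch, remembering that Prometheus storage and memory sizing should be scaled up to match:

```yaml
prometheus:
  server:
    retention: "2h"  # longer local buffer; increase persistentVolume.size accordingly
```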
Upgrading
Dashboard-Managed Upgrades (Recommended)
Kubeadapt Dashboard automatically tracks your cluster's Helm chart version and notifies you when upgrades are available.
How it works:
- Version Tracking: Dashboard monitors each cluster's current agent version via heartbeat
- Upgrade Detection: Uses semantic versioning (semVer) to detect available updates
- One-Click Command: Dashboard generates an upgrade command that preserves your custom values
To upgrade:
- Navigate to app.kubeadapt.io → Your Cluster → Connectivity tab
- If an update is available, you'll see an "Update Available" badge
- Click Copy Upgrade Command to get the optimized command
- Run the command in your terminal
Example upgrade command:
```bash
helm get values kubeadapt -n kubeadapt > values.yaml && \
helm upgrade kubeadapt kubeadapt/kubeadapt \
  --version 0.5.3 \
  --namespace kubeadapt \
  -f values.yaml
```

Why this approach:
- Preserves Configuration: Exports your current values before upgrading
- Version-Specific: Targets the exact version the Kubeadapt team has tested against your currently installed version
- Breaking Changes: Dashboard warns if upgrade contains breaking changes
Manual Upgrade
If you prefer manual upgrades:
```bash
# Update Helm repository
helm repo update

# Export current values
helm get values kubeadapt -n kubeadapt > values.yaml

# Upgrade to latest version
helm upgrade kubeadapt kubeadapt/kubeadapt \
  --namespace kubeadapt \
  -f values.yaml

# Or upgrade to a specific version
helm upgrade kubeadapt kubeadapt/kubeadapt \
  --version 0.5.3 \
  --namespace kubeadapt \
  -f values.yaml
```

Check Available Versions
```bash
helm search repo kubeadapt --versions
```

Uninstalling

Remove Kubeadapt:

```bash
helm uninstall kubeadapt --namespace kubeadapt
```

Delete namespace and all data:

```bash
kubectl delete namespace kubeadapt
```

Warning: This permanently deletes all Prometheus metrics and cost data.
Next Steps
After successful installation:
- Configure Cloud Integration - Set up AWS, GCP, or Azure for accurate billing costs
- Review Cost Attribution - Understand how costs are calculated
- Explore Features - Check out the Dashboard and Available Savings
- Implement Rightsizing - Follow the Rightsizing Guide to start optimizing workloads