# Quick Start
This guide walks you through installing Kubeadapt in your Kubernetes cluster using Helm.
## Prerequisites
Before installing Kubeadapt, ensure you have:
- Kubernetes Cluster - Version 1.24 or later
- Helm 3 - Version 3.0 or later
- kubectl - Configured to access your cluster
- Storage Class - Required for Prometheus data persistence. Because Kubeadapt needs at most a 30-minute Prometheus retention window, you may use an ephemeral emptyDir volume instead of persistent storage.
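The prerequisites above can be sanity-checked from the command line. A minimal sketch of the version comparison (the version values below are placeholders; in a real run, fill them from `kubectl version` and `helm version`):

```shell
# Sketch: compare version strings against the minimums above.
# Placeholder values; in practice fill them from:
#   kubectl version -o json            (server minor version)
#   helm version --template '{{.Version}}'
k8s_minor=28
helm_version="v3.14.0"

helm_major="${helm_version#v}"     # strip leading "v"
helm_major="${helm_major%%.*}"     # keep the major component

[ "$k8s_minor" -ge 24 ] && echo "Kubernetes OK (>= 1.24)"
[ "$helm_major" -ge 3 ] && echo "Helm OK (>= 3.0)"
```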
## Installation

### Step 1: Add Helm Repository
```bash
helm repo add kubeadapt https://kubeadapt.github.io/kubeadapt-helm
helm repo update
```
### Step 2: Create Namespace
```bash
kubectl create namespace kubeadapt
```
### Step 3: Generate Agent Token
Before installation, generate an agent token from the Kubeadapt dashboard:
- Navigate to app.kubeadapt.io
- Go to Clusters → Add Cluster
- Choose the cloud provider or custom cluster type
- Specify the environment type (production-like or non-production-like)
- If you already have Node Exporter running in your cluster, it can be reused; there is no need to install it again. Optionally, provide the namespace where it is installed.
The generated command will look similar to this:
```bash
helm install kubeadapt kubeadapt/kubeadapt \
  --namespace kubeadapt \
  --create-namespace \
  --set agent.enabled=true \
  --set agent.config.token="your-generated-token-here"
```
Note: The actual command will be automatically generated with your unique token. Copy and use the command provided in the dashboard.
### Step 4: Install Kubeadapt
Use the dashboard-generated command from Step 3 above, which includes your agent token.
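If you prefer managing settings in a values file rather than `--set` flags, the same options can be expressed like this (a sketch; the key paths simply mirror the `--set` flags in the generated command above, so verify them against the chart's values):

```yaml
# values.yaml (sketch; key paths taken from the --set flags above)
agent:
  enabled: true
  config:
    token: "your-generated-token-here"
```

Then install with `helm install kubeadapt kubeadapt/kubeadapt --namespace kubeadapt --create-namespace -f values.yaml`.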
### Step 5: Verify Installation
Check all pods are running:
```bash
kubectl get pods -n kubeadapt
```
Expected output:
```text
NAME                                       READY   STATUS    AGE
kubeadapt-kube-state-metrics-xxxxx         1/1     Running   2m
kubeadapt-opencost-xxxxx                   1/1     Running   2m
kubeadapt-prometheus-node-exporter-xxxxx   1/1     Running   2m
kubeadapt-prometheus-server-xxxxx          1/1     Running   2m
```
Check services:
```bash
kubectl get svc -n kubeadapt
```
Expected output:
```text
NAME                                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
kubeadapt-prometheus-server          ClusterIP   10.0.0.100   <none>        80/TCP,9090/TCP   2m
kubeadapt-opencost                   ClusterIP   10.0.0.101   <none>        9003/TCP          2m
kubeadapt-kube-state-metrics         ClusterIP   10.0.0.102   <none>        8080/TCP          2m
kubeadapt-prometheus-node-exporter   ClusterIP   10.0.0.103   <none>        9100/TCP          2m
```
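If you want to script this verification, one approach (a sketch; the sample text below stands in for a live `kubectl get pods -n kubeadapt --no-headers` call) is to flag any pod whose STATUS column is not `Running`:

```shell
# Sketch: flag pods that are not Running.
# Replace the sample below with: kubectl get pods -n kubeadapt --no-headers
pods_output='kubeadapt-prometheus-server-abcde    1/1   Running   2m
kubeadapt-opencost-fghij              0/1   Pending   2m'

# Column 3 is STATUS; exit non-zero if any pod is not Running
echo "$pods_output" | awk '$3 != "Running" { print "Not ready:", $1; bad = 1 } END { exit bad }' \
  || echo "Some pods are not Running yet"
```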
## Configuration Options

### Storage Configuration
Persistent Storage (any StorageClass):
```yaml
prometheus:
  server:
    persistentVolume:
      enabled: true
      size: 20Gi
      storageClass: your-storage-class  # gp2, gp3, standard, ssd, managed-premium, local-path, etc.
    retention: "30m"
```
Ephemeral Storage (emptyDir):
```yaml
prometheus:
  server:
    persistentVolume:
      enabled: false
    emptyDir:
      enabled: true
      sizeLimit: "20Gi"
    retention: "30m"
```
### Resource Configuration
Adjust resources based on cluster size.
Small Clusters (~10 nodes, ~300 pods):
```yaml
agent:
  enabled: true
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 256Mi

prometheus:
  server:
    persistentVolume:
      enabled: true
      size: 10Gi            # With networkCost: ~10Gi (~10K-180K network time series)
      storageClass: gp2     # Adjust for your cloud provider
    retention: "30m"
    resources:
      requests:
        cpu: 500m           # With networkCost: ~575m (+15% scraping overhead)
        memory: 512Mi       # With networkCost: ~1-2Gi (network time series vary by topology)
      limits:
        cpu: 2000m          # With networkCost: ~2500m (burst capacity)
        memory: 2Gi         # With networkCost: ~3-4Gi (traffic bursts)

  kube-state-metrics:
    resources:
      requests:
        cpu: 10m
        memory: 55Mi
      limits:
        cpu: 100m
        memory: 128Mi

  prometheus-node-exporter:
    resources:
      requests:
        cpu: 100m
        memory: 30Mi
      limits:
        cpu: 200m
        memory: 50Mi

opencost:
  opencost:
    exporter:
      resources:
        requests:
          cpu: 10m
          memory: 55Mi
        limits:
          cpu: 999m
          memory: 1Gi
```
Medium Clusters (~100 nodes, ~3,000 pods):
```yaml
agent:
  enabled: true
  resources:
    requests:
      cpu: 200m
      memory: 256Mi
    limits:
      cpu: 1000m
      memory: 512Mi

prometheus:
  server:
    persistentVolume:
      enabled: true
      size: 20Gi            # With networkCost: ~30Gi (~100K realistic, ~18M theoretical network time series)
      storageClass: gp2
    retention: "30m"
    resources:
      requests:
        cpu: 1000m          # With networkCost: ~1150m (+15% scraping overhead)
        memory: 2Gi         # With networkCost: ~3-6Gi (network topology dependent)
      limits:
        cpu: 4000m          # With networkCost: ~5000m (burst capacity)
        memory: 8Gi         # With networkCost: ~12Gi (high-density service mesh)

  kube-state-metrics:
    resources:
      requests:
        cpu: 50m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi

  prometheus-node-exporter:
    resources:
      requests:
        cpu: 100m
        memory: 30Mi
      limits:
        cpu: 200m
        memory: 50Mi

opencost:
  opencost:
    exporter:
      resources:
        requests:
          cpu: 50m
          memory: 128Mi
        limits:
          cpu: 1500m
          memory: 2Gi
```
Large Clusters (~500 nodes, ~15,000 pods):
```yaml
agent:
  enabled: true
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: 2000m
      memory: 1Gi

prometheus:
  server:
    persistentVolume:
      enabled: true
      size: 50Gi            # With networkCost: ~100Gi
      storageClass: gp2
    retention: "30m"
    resources:
      requests:
        cpu: 2000m          # With networkCost: ~2500m (+15% scraping overhead)
        memory: 8Gi         # With networkCost: ~12-24Gi (network topology dependent)
      limits:
        cpu: 8000m          # With networkCost: ~12000m (burst capacity)
        memory: 16Gi        # With networkCost: ~32Gi (high-density service mesh)

  kube-state-metrics:
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 300m
        memory: 512Mi

  prometheus-node-exporter:
    resources:
      requests:
        cpu: 100m
        memory: 30Mi
      limits:
        cpu: 200m
        memory: 50Mi

opencost:
  opencost:
    exporter:
      resources:
        requests:
          cpu: 100m
          memory: 256Mi
        limits:
          cpu: 2000m
          memory: 4Gi
```
### Data Retention Configuration
Adjust based on your monitoring needs:
```yaml
prometheus:
  server:
    retention: "30m"
opencost:
  opencost:
    dataRetention:
      dailyResolutionDays: 15
```
Note: Metrics are continuously sent to Kubeadapt cloud for long-term storage and analysis. Local retention primarily serves as a buffer for the agent's 60-second collection cycle.
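As a rough sanity check on that buffer (assuming the 60-second collection cycle and 30-minute retention stated above):

```shell
# Sketch: 60-second collection cycles covered by 30m of local retention
retention_seconds=$(( 30 * 60 ))
cycle_seconds=60
echo $(( retention_seconds / cycle_seconds ))   # 30 cycles of headroom
```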
## Upgrading

### Dashboard-Managed Upgrades (Recommended)
Kubeadapt Dashboard automatically tracks your cluster's Helm chart version and notifies you when upgrades are available.
How it works:
- Version Tracking: Dashboard monitors each cluster's current agent version via heartbeat
- Upgrade Detection: Uses semantic versioning (semVer) to detect available updates
- One-Click Command: Dashboard generates an upgrade command that preserves your custom values
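Version detection of this kind can be approximated locally with semantic-version ordering, for example via `sort -V` (a sketch; the version strings are placeholders, and the dashboard's actual mechanism may differ):

```shell
# Sketch: detect whether "latest" is newer than "current" using version sort.
current="0.5.1"    # placeholder: your installed chart version
latest="0.5.3"     # placeholder: newest version in the Helm repo

newest=$(printf '%s\n%s\n' "$current" "$latest" | sort -V | tail -n 1)
if [ "$newest" != "$current" ]; then
  echo "Update available: $latest"
fi
```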
To upgrade:
- Navigate to app.kubeadapt.io → Your Cluster → Connectivity tab
- If an update is available, you'll see an "Update Available" badge
- Click Copy Upgrade Command to get the optimized command
- Run the command in your terminal
Example upgrade command:
```bash
helm get values kubeadapt -n kubeadapt > values.yaml && \
helm upgrade kubeadapt kubeadapt/kubeadapt \
  --version 0.5.3 \
  --namespace kubeadapt \
  -f values.yaml
```
Why this approach:
- Preserves Configuration: Exports your current values before upgrading
- Version-Specific: Targets the exact version tested by the Kubeadapt team against your currently installed version
- Breaking Changes: Dashboard warns if upgrade contains breaking changes
### Manual Upgrade
If you prefer manual upgrades:
```bash
# Update Helm repository
helm repo update

# Export current values
helm get values kubeadapt -n kubeadapt > values.yaml

# Upgrade to latest version
helm upgrade kubeadapt kubeadapt/kubeadapt \
  --namespace kubeadapt \
  -f values.yaml

# Or upgrade to specific version
helm upgrade kubeadapt kubeadapt/kubeadapt \
  --version 0.5.3 \
  --namespace kubeadapt \
  -f values.yaml
```
### Check Available Versions

```bash
helm search repo kubeadapt --versions
```
## Uninstalling
Remove Kubeadapt:
```bash
helm uninstall kubeadapt --namespace kubeadapt
```
Delete namespace and all data:
```bash
kubectl delete namespace kubeadapt
```
Warning: This permanently deletes all Prometheus metrics and cost data.
## Next Steps
After successful installation:
- Configure Cloud Integration - Set up AWS, GCP, or Azure for accurate billing costs
- Review Cost Attribution - Understand how costs are calculated
- Explore Features - Check out the Dashboard and Available Savings
- Implement Rightsizing - Follow the Rightsizing Guide to start optimizing workloads