# GCP Integration
Learn how to integrate Kubeadapt with your Google Cloud Platform infrastructure for accurate cost tracking, preemptible instance pricing, and committed use discount visibility.
## Overview
Kubeadapt provides comprehensive GCP integration capabilities:
- Cloud Billing Integration - Connect to BigQuery billing export for accurate cloud costs
- Preemptible Instance Pricing - Real-time pricing for preemptible VMs
- Committed Use Discounts - Automatic tracking of CUD utilization and coverage
## Prerequisites
- GCP project with billing enabled
- Kubeadapt installed via Helm chart
- kubectl access to your cluster
- gcloud CLI configured (for setup steps)
- Billing account admin or Billing Account Costs Manager role
## Part 1: Cloud Billing Integration

### Step 1: Enable Billing Export to BigQuery
1. Navigate to Google Cloud Console → Billing → Billing Export
2. Click "Edit Settings" under BigQuery Export
3. Select or create a BigQuery dataset:
   - Project ID: my-billing-project
   - Dataset name: billing_export
   - Data location: choose a region (e.g., US)
4. Enable the export types:
   - ✓ Standard usage cost data - daily cost data
   - ✓ Detailed usage cost data - granular resource-level data
   - ✓ Pricing data - SKU pricing information (optional but recommended)
5. Click "Save"

Note: Wait up to 24 hours for data to start populating. The initial export can take several hours.
### Step 2: Verify BigQuery Tables
Check that billing data is being exported:
```bash
bq ls --project_id=my-billing-project billing_export
```
You should see tables with names like:
- gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX (standard)
- gcp_billing_export_resource_v1_XXXXXX_XXXXXX_XXXXXX (detailed)
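The export table names are derived from your billing account ID, with dashes replaced by underscores. A quick sketch of the mapping (the billing account ID below is a placeholder):

```bash
# Derive the export table names from a (hypothetical) billing account ID.
# GCP replaces the dashes in the billing account ID with underscores.
BILLING_ACCOUNT="012345-ABCDEF-678901"
STANDARD_TABLE="gcp_billing_export_v1_${BILLING_ACCOUNT//-/_}"
DETAILED_TABLE="gcp_billing_export_resource_v1_${BILLING_ACCOUNT//-/_}"
echo "$STANDARD_TABLE"
echo "$DETAILED_TABLE"
```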
Get the exact table name:
```bash
bq ls --project_id=my-billing-project --max_results=10 billing_export | grep gcp_billing_export
```
### Step 3: Generate GCP API Key for Pricing
Kubeadapt needs an API key to fetch GCP pricing data from the Cloud Billing API.
- Navigate to Google Cloud Console → APIs & Services → Credentials
- Click "Create Credentials" → "API Key"
- (Optional) Click "Restrict Key" to limit access to Cloud Billing API only
- Save the API key securely
You will add this key to your Helm values file later.
### Step 4: Create Service Account
Create a service account with BigQuery access:
```bash
export PROJECT_ID=$(gcloud config get-value project)
gcloud iam service-accounts create kubeadapt-bigquery \
  --display-name="Kubeadapt BigQuery Access" \
  --format json
```
Grant required permissions:
```bash
# Compute Viewer - get instance metadata
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:kubeadapt-bigquery@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/compute.viewer"

# BigQuery User - create and run queries
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:kubeadapt-bigquery@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/bigquery.user"

# BigQuery Data Viewer - read billing data
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:kubeadapt-bigquery@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataViewer"

# BigQuery Job User - run queries
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:kubeadapt-bigquery@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"
```
Create and download service account key:
```bash
gcloud iam service-accounts keys create kubeadapt-bigquery-key.json \
  --iam-account=kubeadapt-bigquery@$PROJECT_ID.iam.gserviceaccount.com
```
### Step 5: Configure Kubeadapt with BigQuery Integration
Create a cloud-integration.json file:
```json
{
  "gcp": {
    "bigQuery": [
      {
        "projectID": "my-billing-project",
        "dataset": "billing_export",
        "table": "gcp_billing_export_v1_018AIF_74KD1D_534A2",
        "authorizer": {
          "authorizerType": "GCPServiceAccountKey",
          "key": {
            "type": "service_account",
            "project_id": "my-billing-project",
            "private_key_id": "...",
            "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
            "client_email": "kubeadapt-bigquery@my-billing-project.iam.gserviceaccount.com",
            "client_id": "...",
            "auth_uri": "https://accounts.google.com/o/oauth2/auth",
            "token_uri": "https://oauth2.googleapis.com/token",
            "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
            "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/..."
          }
        }
      }
    ]
  }
}
```
Important:

- The dataset and table must be provided separately (not combined)
- The table is the full BigQuery table name (e.g., gcp_billing_export_v1_018AIF_74KD1D_534A2)
- The dataset is just the dataset name (e.g., billing_export)
- You can get the exact table name from the BigQuery console or by running:

```bash
bq ls --project_id=my-billing-project billing_export
```
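As a local sanity check, the sketch below writes a minimal (hypothetical, keys omitted) cloud-integration.json and verifies that the table field holds a bare table name without an embedded dataset prefix:

```bash
# Write a minimal sample config (hypothetical values, authorizer omitted)
# and check that "dataset" and "table" are separate fields.
cat > /tmp/cloud-integration.json <<'EOF'
{"gcp":{"bigQuery":[{"projectID":"my-billing-project","dataset":"billing_export","table":"gcp_billing_export_v1_018AIF_74KD1D_534A2"}]}}
EOF
dataset=$(grep -o '"dataset":"[^"]*"' /tmp/cloud-integration.json | cut -d'"' -f4)
table=$(grep -o '"table":"[^"]*"' /tmp/cloud-integration.json | cut -d'"' -f4)
case "$table" in
  *.*) echo "ERROR: table contains a dot; split dataset and table" ;;
  *)   echo "OK: dataset=$dataset table=$table" ;;
esac
```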
Create Kubernetes secret:
```bash
kubectl create secret generic cloud-integration \
  --from-file=cloud-integration.json \
  --namespace kubeadapt
```
Update your Helm values file with the API key and secret:
```yaml
# values.yaml
opencost:
  opencost:
    exporter:
      # GCP API key for pricing data
      cloudProviderApiKey: "YOUR_GCP_API_KEY_HERE"
      # Secret containing cloud-integration.json
      cloudIntegrationSecret: "cloud-integration"
    cloudCost:
      enabled: true
```
Apply the configuration:
```bash
helm upgrade kubeadapt kubeadapt/kubeadapt \
  --namespace kubeadapt \
  -f values.yaml
```
### Alternative: Using Workload Identity (GKE)
For GKE clusters, use Workload Identity instead of service account keys for better security:
1. Enable Workload Identity on your cluster:

   ```bash
   gcloud container clusters update my-cluster \
     --workload-pool=my-billing-project.svc.id.goog
   ```

2. Enable Workload Identity on node pools:

   ```bash
   gcloud container node-pools update default-pool \
     --cluster=my-cluster \
     --workload-metadata=GKE_METADATA
   ```

3. Create the service account binding:

   ```bash
   gcloud iam service-accounts add-iam-policy-binding \
     kubeadapt-bigquery@my-billing-project.iam.gserviceaccount.com \
     --role roles/iam.workloadIdentityUser \
     --member "serviceAccount:my-billing-project.svc.id.goog[kubeadapt/kubeadapt-cost-analyzer]"
   ```

4. Annotate the Kubernetes service account in your Helm values file:

   ```yaml
   # values.yaml
   opencost:
     serviceAccount:
       annotations:
         iam.gke.io/gcp-service-account: kubeadapt-bigquery@my-billing-project.iam.gserviceaccount.com
   ```

5. Use a simplified cloud-integration.json (no key needed):

   ```json
   {
     "gcp": {
       "bigQuery": [
         {
           "projectID": "my-billing-project",
           "dataset": "billing_export",
           "table": "gcp_billing_export_v1_018AIF_74KD1D_534A2",
           "authorizer": {
             "authorizerType": "GCPWorkloadIdentity"
           }
         }
       ]
     }
   }
   ```
## Part 2: Preemptible Instance Pricing
Preemptible VM prices are retrieved automatically from GCP's public pricing API. No additional configuration is needed beyond the basic BigQuery integration.
### How It Works
- Automatic Detection: Kubeadapt automatically detects preemptible nodes by checking instance metadata
- Real-time Pricing: Current preemptible prices are fetched from GCP Pricing API
- Historical Data: BigQuery export includes preemptible usage with actual costs
### Verification
Check that preemptible nodes are detected correctly:
```bash
kubectl logs -n kubeadapt deployment/kubeadapt-cost-analyzer | grep -i preemptible
```
In the Kubeadapt dashboard:
- Navigate to Node View
- Check Pricing Type column
- Preemptible nodes should show "Preemptible" label
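On GKE you can also cross-check against node labels: preemptible nodes carry the cloud.google.com/gke-preemptible=true label, so on a live cluster `kubectl get nodes -l cloud.google.com/gke-preemptible=true` lists them. The sketch below simulates that filter against sample label data:

```bash
# Simulated label check (on a real cluster, use:
#   kubectl get nodes -l cloud.google.com/gke-preemptible=true
# ). Here we filter sample "node label" lines with awk.
PREEMPTIBLE=$(printf '%s\n' \
  'node-a cloud.google.com/gke-preemptible=true' \
  'node-b <none>' \
  | awk '$2 ~ /gke-preemptible=true/ {print $1}')
echo "$PREEMPTIBLE"
```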
## Part 3: Committed Use Discounts (CUD)
Committed Use Discount data is automatically retrieved from your BigQuery billing export. CUDs are applied at billing time and reflected in the cost data.
### What Gets Tracked
With BigQuery billing export configured, Kubeadapt automatically shows:
- CUD Coverage - Percentage of usage covered by committed use discounts
- CUD Utilization - How much of your commitment is being used
- Effective Discount - Actual discount percentage applied
- Net Savings - Savings compared to on-demand pricing
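As a rough illustration with made-up numbers: utilization compares usage billed against the commitment to the commitment itself, while coverage compares that covered usage to total eligible usage:

```bash
# Toy CUD arithmetic (all numbers hypothetical).
COMMITTED=1000   # committed vCPU-hours in the period
COVERED=800      # usage hours billed against the commitment
TOTAL=2000       # total eligible usage hours
UTILIZATION=$(awk -v c=$COMMITTED -v u=$COVERED 'BEGIN{printf "%.0f", 100*u/c}')
COVERAGE=$(awk -v t=$TOTAL -v u=$COVERED 'BEGIN{printf "%.0f", 100*u/t}')
echo "CUD utilization: ${UTILIZATION}%  coverage: ${COVERAGE}%"
```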
### CUD Types Supported

- Resource-based CUDs - commitments to specific vCPU and memory amounts
- Spend-based CUDs - a committed dollar amount across eligible services
- VM Instance CUDs - specific instance type commitments
### Viewing CUD Data
CUD information appears automatically in:
- Dashboard - Cost breakdown shows On-Demand vs CUD vs Preemptible
- Node View - Nodes show effective pricing (CUD-discounted or on-demand)
- Cost Reports - Historical trends show CUD coverage over time
- Namespace View - Per-namespace costs reflect CUD savings
### Troubleshooting CUD Tracking
Issue: CUDs not appearing
- Verify BigQuery export includes both standard and detailed usage tables
- Check that export has been running for at least 24 hours
- Ensure service account has bigquery.dataViewer role
Issue: CUD amounts seem incorrect
- CUDs are applied at the billing account level; make sure you're querying the correct billing project
- Shared VPC setups may require cross-project BigQuery access
- CUD allocation across multiple projects follows GCP's billing rules
## Multi-Project Setup (Organization / Billing Account)
For organizations with multiple GCP projects under a single billing account, you need to configure Kubeadapt to access cost data across all projects.
### Architecture Overview
Unlike AWS (which uses `masterPayerARN`), GCP relies on a centralized BigQuery billing export:

```text
GCP Organization (Billing Account: 012345-ABCDEF-678901)
├── Billing Export Project: billing-project-111111
│   └── BigQuery Dataset: organization_billing_export
│       └── Contains costs from ALL projects
│
├── GKE Cluster 1 - Project: gke-prod-1a2b3c
│   └── Kubeadapt → Reads centralized billing export
│
├── GKE Cluster 2 - Project: gke-prod-4d5e6f
│   └── Kubeadapt → Reads centralized billing export
│
├── GKE Cluster 3 - Project: gke-staging-7g8h9i
│   └── Kubeadapt → Reads centralized billing export
│
├── GKE Cluster 4 - Project: gke-dev-0j1k2l
│   └── Kubeadapt → Reads centralized billing export
│
└── GKE Cluster 5 - Project: gke-dev-3m4n5o
    └── Kubeadapt → Reads centralized billing export
```
Key Point: All projects share the same billing account, so one billing export in BigQuery contains costs for all projects. Each Kubeadapt instance filters by its own `project.id`.

### Approach 1: Centralized Billing Export (Recommended)
This approach uses a single BigQuery dataset in a dedicated billing project that exports costs for all projects under the billing account.
#### Step 1: Create Centralized Billing Export Project
Create a dedicated project for billing data:
```bash
gcloud projects create billing-project-111111 \
  --name="Organization Billing Export" \
  --organization=YOUR_ORG_ID
```
Link it to your billing account:
```bash
gcloud billing projects link billing-project-111111 \
  --billing-account=012345-ABCDEF-678901
```
#### Step 2: Create BigQuery Dataset

```bash
bq mk --project_id=billing-project-111111 \
  --location=US \
  --dataset organization_billing_export
```
#### Step 3: Configure Billing Export at Organization Level

1. Navigate to Google Cloud Console → Billing → Billing Export
2. Select your billing account: 012345-ABCDEF-678901
3. Click Edit Settings under BigQuery Export
4. Configure:
   - Project ID: billing-project-111111
   - Dataset name: organization_billing_export
   - Data location: US
5. Enable:
   - ✓ Standard usage cost data
   - ✓ Detailed usage cost data
   - ✓ Pricing data
6. Click Save
This export will contain costs from all projects (gke-prod-1a2b3c, gke-prod-4d5e6f, gke-staging-7g8h9i, etc.) under the billing account.
#### Step 4: Create Service Account with Multi-Project Access
Create service account in billing project:
```bash
gcloud iam service-accounts create kubeadapt-org-billing \
  --project=billing-project-111111 \
  --display-name="Kubeadapt Organization Billing Access" \
  --format json
```
Grant permissions on billing project:
```bash
# Compute Viewer - get instance metadata
gcloud projects add-iam-policy-binding billing-project-111111 \
  --member="serviceAccount:kubeadapt-org-billing@billing-project-111111.iam.gserviceaccount.com" \
  --role="roles/compute.viewer"

# BigQuery User - create and run queries
gcloud projects add-iam-policy-binding billing-project-111111 \
  --member="serviceAccount:kubeadapt-org-billing@billing-project-111111.iam.gserviceaccount.com" \
  --role="roles/bigquery.user"

# BigQuery Data Viewer - read billing data
gcloud projects add-iam-policy-binding billing-project-111111 \
  --member="serviceAccount:kubeadapt-org-billing@billing-project-111111.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataViewer"

# BigQuery Job User - run queries
gcloud projects add-iam-policy-binding billing-project-111111 \
  --member="serviceAccount:kubeadapt-org-billing@billing-project-111111.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"
```
Grant Compute Viewer on all GKE projects (so Kubeadapt can fetch instance metadata):
```bash
# Repeat for each project
for PROJECT_ID in gke-prod-1a2b3c gke-prod-4d5e6f gke-staging-7g8h9i gke-dev-0j1k2l gke-dev-3m4n5o; do
  gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:kubeadapt-org-billing@billing-project-111111.iam.gserviceaccount.com" \
    --role="roles/compute.viewer"
done
```
Create service account key:
```bash
gcloud iam service-accounts keys create kubeadapt-org-billing-key.json \
  --iam-account=kubeadapt-org-billing@billing-project-111111.iam.gserviceaccount.com
```
#### Step 5: Configure cloud-integration.json
For all 5 GKE clusters, use the same configuration:
```json
{
  "gcp": {
    "bigQuery": [
      {
        "projectID": "billing-project-111111",
        "dataset": "organization_billing_export",
        "table": "gcp_billing_export_v1_012345_ABCDEF_678901",
        "authorizer": {
          "authorizerType": "GCPServiceAccountKey",
          "key": {
            "type": "service_account",
            "project_id": "billing-project-111111",
            "private_key_id": "...",
            "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
            "client_email": "kubeadapt-org-billing@billing-project-111111.iam.gserviceaccount.com",
            "client_id": "...",
            "auth_uri": "https://accounts.google.com/o/oauth2/auth",
            "token_uri": "https://oauth2.googleapis.com/token",
            "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
            "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/..."
          }
        }
      }
    ]
  }
}
```
Key Point: All clusters use:

- The same `projectID` (the billing project)
- The same `dataset` and `table` (the centralized billing export)
- The same service account credentials
#### Step 6: Deploy to Each Cluster
For each of the 5 GKE clusters:
```bash
# Set kubectl context to the cluster
gcloud container clusters get-credentials CLUSTER_NAME \
  --project=PROJECT_ID \
  --zone=ZONE

# Create secret
kubectl create secret generic cloud-integration \
  --from-file=cloud-integration.json \
  --namespace kubeadapt

# Deploy or upgrade Kubeadapt
helm upgrade kubeadapt kubeadapt/kubeadapt \
  --namespace kubeadapt \
  -f values.yaml
```
### How It Works (Centralized Approach)

1. Centralized Billing Export: Cost data from all 5 projects flows into a single BigQuery dataset in billing-project-111111
2. Kubeadapt in gke-prod-1a2b3c:
   - Reads from the centralized billing export
   - Queries cost data
   - Filters by `project.id = "gke-prod-1a2b3c"` (automatically)
3. Kubeadapt in gke-prod-4d5e6f:
   - Reads from the same centralized billing export
   - Queries cost data
   - Filters by `project.id = "gke-prod-4d5e6f"`
4. No Double-Counting: Each Kubeadapt instance only reports costs for its own project
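The per-project filtering boils down to a query of roughly this shape (a hypothetical sketch; the SQL Kubeadapt actually generates may differ). The snippet writes the query to a file so its filter clause can be inspected:

```bash
# Sketch of the per-project filter over the centralized billing export.
# The table reference and 7-day window are illustrative only.
CLUSTER_PROJECT="gke-prod-1a2b3c"
cat > /tmp/per_project.sql <<EOF
SELECT service.description, SUM(cost) AS total_cost
FROM \`billing-project-111111.organization_billing_export.gcp_billing_export_v1_012345_ABCDEF_678901\`
WHERE project.id = "${CLUSTER_PROJECT}"
  AND DATE(usage_start_time) >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY service.description
EOF
grep 'project.id' /tmp/per_project.sql
```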
### Approach 2: Per-Project Billing Export
If centralized billing export is not available, configure each project separately.
#### Architecture

```text
Project 1: gke-prod-1a2b3c (GKE Cluster 1)
  └── BigQuery: project1_billing → Kubeadapt reads project1 costs

Project 2: gke-prod-4d5e6f (GKE Cluster 2)
  └── BigQuery: project2_billing → Kubeadapt reads project2 costs

Project 3: gke-staging-7g8h9i (GKE Cluster 3)
  └── BigQuery: project3_billing → Kubeadapt reads project3 costs

...
```
#### Configuration for Project 1
Create billing export in each project:
```bash
# Create dataset in project 1
bq mk --project_id=gke-prod-1a2b3c \
  --location=US \
  --dataset project1_billing

# Enable billing export for project 1 only
# (Configure via Cloud Console → Billing → Billing Export)
```
cloud-integration.json for Project 1:
```json
{
  "gcp": {
    "bigQuery": [
      {
        "projectID": "gke-prod-1a2b3c",
        "dataset": "project1_billing",
        "table": "gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX",
        "authorizer": {
          "authorizerType": "GCPServiceAccountKey",
          "key": {
            "type": "service_account",
            "project_id": "gke-prod-1a2b3c",
            "private_key_id": "...",
            "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
            "client_email": "kubeadapt-sa@gke-prod-1a2b3c.iam.gserviceaccount.com",
            "client_id": "...",
            "auth_uri": "https://accounts.google.com/o/oauth2/auth",
            "token_uri": "https://oauth2.googleapis.com/token",
            "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
            "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/..."
          }
        }
      }
    ]
  }
}
```
Each project gets its own:
- BigQuery dataset with billing export
- Service account
- cloud-integration.json configuration
### Alternative: Using Workload Identity (GKE)
For better security, use Workload Identity instead of service account keys.
#### Enable Workload Identity on Cluster

```bash
gcloud container clusters update my-cluster \
  --workload-pool=gke-prod-1a2b3c.svc.id.goog \
  --zone=us-central1-a
```
#### Create Service Account Binding

```bash
# Create service account in billing project
gcloud iam service-accounts create kubeadapt-wi \
  --project=billing-project-111111 \
  --display-name="Kubeadapt Workload Identity" \
  --format json

# Grant required permissions
gcloud projects add-iam-policy-binding billing-project-111111 \
  --member="serviceAccount:kubeadapt-wi@billing-project-111111.iam.gserviceaccount.com" \
  --role="roles/compute.viewer"

gcloud projects add-iam-policy-binding billing-project-111111 \
  --member="serviceAccount:kubeadapt-wi@billing-project-111111.iam.gserviceaccount.com" \
  --role="roles/bigquery.user"

gcloud projects add-iam-policy-binding billing-project-111111 \
  --member="serviceAccount:kubeadapt-wi@billing-project-111111.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataViewer"

gcloud projects add-iam-policy-binding billing-project-111111 \
  --member="serviceAccount:kubeadapt-wi@billing-project-111111.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"

# Bind Workload Identity
gcloud iam service-accounts add-iam-policy-binding \
  kubeadapt-wi@billing-project-111111.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:gke-prod-1a2b3c.svc.id.goog[kubeadapt/kubeadapt-cost-analyzer]"
```
#### Configure Helm Values

```yaml
opencost:
  serviceAccount:
    annotations:
      iam.gke.io/gcp-service-account: kubeadapt-wi@billing-project-111111.iam.gserviceaccount.com
```
#### Simplified cloud-integration.json

```json
{
  "gcp": {
    "bigQuery": [
      {
        "projectID": "billing-project-111111",
        "dataset": "organization_billing_export",
        "table": "gcp_billing_export_v1_012345_ABCDEF_678901",
        "authorizer": {
          "authorizerType": "GCPWorkloadIdentity"
        }
      }
    ]
  }
}
```
No credentials needed - Workload Identity handles authentication automatically.
## Key Differences from AWS and Azure
| Aspect | AWS | Azure | GCP |
|---|---|---|---|
| Cross-Account Auth | `masterPayerARN` | Service Principal with multi-sub permissions | Service Account with org-level permissions |
| Billing Data | CUR in Management Account | Cost Export in Storage Account | BigQuery Export in Billing Project |
| Identity Method | IRSA (IAM Roles) | Workload Identity (Managed Identity) | Workload Identity (GKE) |
| Configuration | One field: `masterPayerARN` | Same Service Principal across all clusters | Same BigQuery config across all clusters |
| Filtering | `projectID` | `SubscriptionId` | `project.id` |
## Approach 1 vs Approach 2 Comparison
| Feature | Centralized (Approach 1) | Per-Project (Approach 2) |
|---|---|---|
| Setup Complexity | Low (one export) | High (N exports) |
| Cost | Lower (one dataset) | Higher (N datasets, more queries) |
| Management | Easier (single source) | Complex (multiple sources) |
| Security | Single service account | Per-project service accounts |
| CUD Visibility | Organization-wide | Per-project only |
| Recommended | ✅ Yes | Only if org billing unavailable |
## Validation
Test your configuration:
```bash
# Check if cloud costs are being retrieved
kubectl logs -n kubeadapt deployment/kubeadapt-cost-analyzer | grep -i "cloud cost"

# Verify secret is mounted correctly
kubectl describe pod -n kubeadapt -l app=cost-analyzer | grep cloud-integration

# Test BigQuery access from within the pod
kubectl exec -n kubeadapt deployment/kubeadapt-cost-analyzer -- \
  /bin/sh -c 'bq query --project_id=billing-project-111111 --use_legacy_sql=false \
  "SELECT project.id, SUM(cost) as total_cost FROM \`billing-project-111111.organization_billing_export.gcp_billing_export_v1_012345_ABCDEF_678901\` WHERE DATE(usage_start_time) = CURRENT_DATE() GROUP BY project.id LIMIT 10"'
```
Expected output: Should show costs grouped by project ID (gke-prod-1a2b3c, gke-prod-4d5e6f, etc.).
## Troubleshooting

### Common Issues
Issue: "BigQuery permission denied"
- Verify service account has roles/bigquery.dataViewer on the billing project
- Ensure roles/bigquery.jobUser is granted for running queries
- Check that the dataset and table names are correct
Issue: "No billing data found"

1. Wait 24 hours after enabling billing export
2. Verify BigQuery export is enabled in Billing settings
3. Run a test query to check data:

```bash
bq query --project_id=my-billing-project \
  'SELECT COUNT(*) FROM `billing_export.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX` LIMIT 10'
```

Issue: "Table not found"
- Get the exact table name from the BigQuery console
- In BigQuery, tables are referenced as dataset.table (e.g., billing_export.gcp_billing_export_v1_ABC123_DEF456_GHI789); in cloud-integration.json, set dataset and table as separate fields
- Ensure you're using the full table name, not just the dataset
Issue: "Preemptible costs missing"
- Preemptible usage appears in BigQuery export automatically
- Check the sku.description column for a "Preemptible" indicator
- Verify nodes are actually preemptible instances
Issue: "Cross-project access denied"
- For multi-project setups, grant service account viewer role on all projects:
```bash
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:kubeadapt-bigquery@BILLING_PROJECT.iam.gserviceaccount.com" \
  --role="roles/compute.viewer"
```
## Health Check

The log and secret checks above also apply here; additionally, verify the cost-analyzer health endpoint is responding:

```bash
kubectl exec -n kubeadapt deployment/kubeadapt-cost-analyzer -- \
  curl -s http://localhost:9003/healthz
```
## Testing BigQuery Access

Run a test query from within the pod:

```bash
kubectl exec -n kubeadapt deployment/kubeadapt-cost-analyzer -- \
  /bin/sh -c 'echo "SELECT COUNT(*) AS row_count FROM \`$BIGQUERY_DATASET\` LIMIT 1" > query.sql && \
  bq query --use_legacy_sql=false < query.sql'
```
## Support
For additional help:
- Review Cost Attribution Concepts
- Contact authors@kubeadapt.io
## Next Steps
- AWS Integration - Configure Amazon Web Services
- Azure Integration - Configure Microsoft Azure
- Dashboard Overview - Explore cost monitoring features
- Available Savings - Review optimization recommendations