The Kloudfuse stack is designed to run in a Kubernetes cluster on GCP, AWS, or Azure.

To install the Kloudfuse stack, you will need:

  • A machine with at least 8 vCPUs and 32 GB of memory.

  • A Kubernetes cluster (dedicated, or shared with your application).

  • Persistent volumes (these do not have to be created manually; the installation creates them, but the account used to install must have the necessary permissions for a smooth install).

  • Helm and kubectl.

The above configuration is enough to get started. However, for more advanced or production-grade customizations, please review the prerequisites for advanced use cases page.
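Before proceeding, you can sanity-check the client-side tooling with a short script. This is a sketch based only on the requirements stated in this guide (Helm 3.8.1 or above, and kubectl able to reach the target cluster); it checks your workstation's tools, not the cluster's capacity:

```shell
#!/bin/sh
# Preflight sketch for the Kloudfuse install prerequisites.

# Returns success if version $1 >= version $2, using sort -V.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Helm 3.8.1 or above is needed (per Step 4 below).
HELM_VER=$(helm version --template '{{.Version}}' 2>/dev/null | sed 's/^v//')
if version_ge "${HELM_VER:-0}" "3.8.1"; then
  echo "helm ${HELM_VER} OK"
else
  echo "helm 3.8.1 or above is needed (found: ${HELM_VER:-none})"
fi

# kubectl must be able to reach the target cluster.
if kubectl cluster-info >/dev/null 2>&1; then
  echo "kubectl can reach the cluster"
else
  echo "kubectl cannot reach a cluster; check your kubeconfig"
fi
```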


Kloudfuse follows standard Helm best practices. Please follow these steps to install the Kloudfuse stack.

  • Step 1. Log in to the Kloudfuse Helm registry using the “single-node-install-token.json” file provided in your email (see the helm registry login failure section below if the following command fails). If you do not have this token or have not received the email, please contact us.

cat single-node-install-token.json | helm registry login -u _json_key --password-stdin
  • Step 2. Create a namespace called “kfuse” in which to install Kloudfuse, and create a secret for Helm to pull the required Docker images.

    kubectl create ns kfuse
    kubectl config set-context --current --namespace=kfuse
    kubectl create secret docker-registry kfuse-image-pull-credentials \
        --namespace='kfuse' --docker-server '' --docker-username _json_key \
        --docker-email '' \
        --docker-password=''"$(cat single-node-install-token.json)"''
  • Step 3. Create a custom_values.yaml file.

    • Add an orgId (typically the company name). This is a required field.

    • Add a cloudProvider. This is a required field. Valid values are aws, gcp, or azure.

      global:
        cloudProvider: <aws, gcp, or azure>
    • Optionally, add a section to custom_values.yaml specific to your cloud provider.
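Putting the required fields together, a minimal custom_values.yaml might look like the sketch below. Note that the nesting of orgId under global is an assumption for illustration; confirm the exact key placement against the chart's default values (see Review default values used in the install below):

```yaml
# Sketch of a minimal custom_values.yaml.
# NOTE: the placement of orgId under `global` is an assumption;
# verify it against the chart's default values.
global:
  orgId: <your company name>            # required
  cloudProvider: <aws, gcp, or azure>   # required
```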

  • Step 4. To install the Kfuse chart, run the following command (note that Helm version 3.8.1 or above is required):

helm upgrade --install -n kfuse kfuse oci:// --version 2.2.3 -f custom_values.yaml


Upon completion of the above helm install, you will see instructions for accessing the Kloudfuse Platform UI.

To access the UI:

NOTE: It may take a few minutes for the LoadBalancer IP to become available. Watch the status with:

    kubectl get svc --namespace kfuse -w kfuse-ingress-nginx-controller

Then retrieve the service IP and print the URL:

    export SERVICE_IP=$(kubectl get svc --namespace kfuse kfuse-ingress-nginx-controller --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
    echo http://$SERVICE_IP

Kloudfuse UI and Grafana Access

The Kloudfuse Platform has its own UI for each of the telemetry streams, and more. You will also see a tab for Grafana if you wish to access your data using Grafana. The default credentials for both are:

username: admin
password: password

Please follow the steps here to configure Google Auth or change the default password.

Please see next steps to explore more.

Telemetry from Kloudfuse stack

The standard Kloudfuse install sends its telemetry data both to itself and to the Kloudfuse cloud. This is so that we can monitor the health of the installed Kloudfuse stack(s) across our customers and help resolve any issues quickly. If you wish to disable sending telemetry data to the Kloudfuse cloud, please contact us.


More Info

helm registry login failure

  • If the helm registry login command does not work, you can substitute ‘helm registry login’ with ‘docker login’:

cat single-node-install-token.json | docker login -u _json_key --password-stdin

Review default values used in the install

  • The default values of the Kfuse Helm chart are configured for a single-node cluster install (without deepstore). They can be viewed by running the command below. Any additional customization can be added to custom_values.yaml.

    helm show values oci:// --version 2.2.3
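One way to use this, sketched below, is to save the defaults to a file and diff them against your overrides (the registry URL after oci:// is elided in this guide, as in the install command above):

```shell
# Save the chart defaults locally, then compare with your overrides.
# The registry URL after oci:// is elided in this guide.
helm show values oci:// --version 2.2.3 > default_values.yaml
diff default_values.yaml custom_values.yaml
```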


To delete the Kloudfuse installation, run helm delete on the kfuse chart and/or delete the ‘kfuse’ namespace. Note that helm delete does not remove the data stored on persistent disks or in the deepstore buckets in GCP/AWS/Azure.

helm delete kfuse

Deletion of Persistent Disks

kubectl delete pvc -n kfuse -l

Deletion of DeepStore Data

GCP:

gsutil -m rm -r gs://<pinot-bucket-name>/controller

AWS:

aws s3 rm s3://<pinot-bucket-name>/controller --recursive
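This guide shows deepstore cleanup commands for GCP and AWS only. If your deepstore is on Azure, a comparable cleanup could use the Azure CLI's delete-batch command; the storage account and container names below are placeholders, and you should verify the deepstore path against your own configuration:

```shell
# Sketch only: placeholder account/container names; confirm the
# 'controller/' path matches your deepstore layout before running.
az storage blob delete-batch --account-name <storage-account-name> \
    --source <pinot-container-name> --pattern 'controller/*'
```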