Before performing an upgrade, validate that the upgrade won’t revert any customization on your cluster. The steps to run a validation are detailed here.
To check the currently installed Kfuse version, run the following command:
helm list
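If Kfuse is installed in its own namespace (the kfuse namespace is assumed here, matching the upgrade command below), a narrower query can be used:

helm list -n kfuse --filter kfuse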
The upgrade command is the same as the install command specified in Installation; it is copied here for reference. The latest released version is specified on the installation page.
helm upgrade --install -n kfuse kfuse oci://us-east1-docker.pkg.dev/mvp-demo-301906/kfuse-helm/kfuse --version <SPECIFY VERSION HERE> -f custom_values.yaml
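To preview what the upgrade would change without applying it, helm's built-in dry run can be used (this assumes helm 3.8 or later for OCI registry support):

helm upgrade --install -n kfuse kfuse oci://us-east1-docker.pkg.dev/mvp-demo-301906/kfuse-helm/kfuse --version <SPECIFY VERSION HERE> -f custom_values.yaml --dry-run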
Package upgrades to remove service vulnerabilities. Before running helm upgrade, you need to run a script related to the Kafka service. There will be some downtime between running the script and helm upgrade. You can find the script here.
Edit the custom_values.yaml file and move the block under kafka into a broker subsection, as follows:
kafka:
  broker:
    <<previous kafka block>>
Add these topics to the kafkaTopics section for record-replay:
# kafkaTopics -- kafka topics and configuration to create for Kfuse
kafkaTopics:
  - name: kf_commands
    partitions: 1
    replicationFactor: 1
  - name: kf_recorder_data
    partitions: 1
    replicationFactor: 1
Add a recorder section with the same affinity and tolerations values as the ingester. If the ingester has no affinity or tolerations set, don’t add a recorder section.
recorder:
  # affinity -- affinity settings.
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: ng_label
                operator: In
                values:
                  - amrut
  tolerations:
    - key: "ng_taint"
      operator: "Equal"
      value: "amrut"
      effect: "NoSchedule"
Now upgrade the stack with the upgrade command.
Note that Identity for Databases is introduced in Kfuse version 2.6.7. Database Identity only takes effect on newly ingested APM-related data. In addition, the timestamp granularity for APM/span data has been increased from millisecond to nanosecond to provide better accuracy in the Trace Flamegraph/Waterfall. For older APM data to be rendered accurately, follow the instructions in Converting old APM data to Kfuse 2.6.5 APM Service Identity format to convert old data to the new format.
SLO is re-enabled in 2.6.7 with enhanced features. The old slodbs table has to be dropped:
> ./kfuse-postgres.sh kfuse-configdb-0 kfuse slodb
slodb=# drop table slodbs;
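To confirm the table was dropped before exiting, you can list the remaining tables from the same psql session:

slodb=# \dt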
The kfuse-postgres.sh script is available in the customer repository under the scripts directory.
There are a few changes in the Pinot database which require the Pinot servers to be restarted post upgrade with the following command:
kubectl rollout restart sts -n kfuse pinot-server-offline pinot-server-realtime
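To wait until the restarted servers are back up:

kubectl rollout status sts -n kfuse pinot-server-offline
kubectl rollout status sts -n kfuse pinot-server-realtime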
Note that Service Identity for APM is introduced in Kfuse version 2.6.5. Service Identity only takes effect on newly ingested APM-related data. Accordingly, old APM data will not be rendered properly in the UI. If older APM data is needed, follow the instructions in Converting old APM data to Kfuse 2.6.5 APM Service Identity format to convert old data to the new format.
On Azure, the kfuse-ssd-offline storage class is changed to the StandardSSD_LRS disk type. The kfuse-ssd-offline storage class needs to be deleted prior to upgrade to allow the new version to update the disk type. Note that if the installation is not on Azure, this step can be skipped.
kubectl delete storageclass kfuse-ssd-offline
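After the upgrade recreates the storage class, the new disk type can be verified; with the Azure disk CSI driver the SKU typically appears under parameters (the exact parameter key depends on the provisioner in use):

kubectl get storageclass kfuse-ssd-offline -o yaml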
There are a few changes in the Pinot database which require the Pinot servers to be restarted post upgrade with the following command:
kubectl rollout restart sts -n kfuse pinot-server-offline pinot-server-realtime
A new kfuse-ssd-offline storage class has been introduced in Kfuse version 2.6. This storage class uses gp3 on AWS, pd-balanced on GCP, and Standard_LRS on Azure. This is now the default storage class for Pinot Offline Servers, which should give better disk IO performance.
If the custom values yaml is already set to use the specified disk type (e.g., kfuse-ssd-aws-gp3, or standard-rwo on GCP), then the remaining steps can be skipped.
If the custom values yaml does not explicitly set the pinot.server.offline.persistence.storageClass field, or it is set to a different storage class, ensure that the field is not set in the custom values yaml, and then run the following commands:
kubectl delete sts -n kfuse pinot-server-offline
kubectl delete pvc -l app.kubernetes.io/instance=kfuse -l component=server-offline -n kfuse
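To confirm the statefulset's PVCs are gone before running the upgrade (same label selectors as above):

kubectl get pvc -l app.kubernetes.io/instance=kfuse -l component=server-offline -n kfuse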
Note that the above commands correspond to the PVCs of the Pinot offline servers. After upgrading to Kfuse version 2.6, PVCs with the desired storage class will be created for the Pinot offline servers.
Based on observation and feedback, the persistent volumes for zookeeper pods fill up quite often. To remediate that, we have increased the default persistent volume size for all zookeeper pods to 32Gi. This requires changes in two places, as shown below:
kafka:
  # zookeeper - Configuration for Kafka's Zookeeper.
  zookeeper:
    persistence:
      size: 32Gi
...
pinot:
  # zookeeper - Configuration for Pinot's Zookeeper.
  zookeeper:
    persistence:
      size: 32Gi
Please resize the existing persistent volumes prior to the upgrade with the resize_pvc.sh script. Please reach out if you need assistance.
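For reference, a minimal sketch of what a single PVC expansion involves, assuming the underlying storage class has allowVolumeExpansion enabled; the PVC name below is hypothetical, so take the real names from kubectl get pvc -n kfuse:

# Hypothetical PVC name; requires a storage class with allowVolumeExpansion: true
kubectl patch pvc data-kfuse-zookeeper-0 -n kfuse -p '{"spec":{"resources":{"requests":{"storage":"32Gi"}}}}'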
There are a few changes in the Pinot database which require some of the services to be restarted post upgrade with the following commands:
kubectl rollout restart sts -n kfuse pinot-server-offline pinot-server-realtime pinot-controller pinot-broker logs-parser logs-query-service
kubectl rollout restart deployment -n kfuse logs-transformer trace-transformer trace-query-service
There are a few changes in the Pinot database which require some of the services to be restarted post upgrade with the following commands:
kubectl rollout restart sts -n kfuse pinot-server-offline pinot-server-realtime pinot-controller pinot-broker logs-parser logs-query-service
kubectl rollout restart deployment -n kfuse logs-transformer
There are a few changes in the Pinot database which require the pinot-* servers to be restarted post upgrade with the following command:
kubectl rollout restart sts -n kfuse pinot-server-offline pinot-server-realtime pinot-controller pinot-broker
The default value for the pinot zookeeper persistence (PVC) size is now 32Gi. If the existing Kfuse installation is using the old default (i.e., custom_values.yaml did not explicitly specify the persistence size for pinot zookeeper), explicitly set the pinot zookeeper persistence (PVC) value to 16Gi, since the chart cannot resize the existing PVCs in place. Add the following snippet under the pinot.zookeeper section:
persistence:
  size: 16Gi
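To check whether the installation is still on the old default, list the zookeeper PVC sizes (a simple name filter; this assumes the PVC names contain "zookeeper"):

kubectl get pvc -n kfuse | grep -i zookeeper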
The Kloudfuse-provided alerts organization has been updated for easier maintenance. Make sure to remove the old version:
# Connect to kfuse cluster and log in to catalog service pod
kubens kfuse
kubectl exec -it catalog-servicexxx -- bash

# Remove older folders.
python3 /catalog_service/catalog.py --remove_installed --list kloudfuse,kloudfuse_alerts,kubernetes_alerts --artifact_type alerts
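The catalog-servicexxx name above is a placeholder; to find the actual pod name:

kubectl get pods -n kfuse | grep catalog-service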
The Kfuse-provisioned dashboard has been cleaned up. Run the following command:
kubectl -n kfuse exec -it kfuse-configdb-0 -- bash -c "PGDATABASE=alertsdb PGPASSWORD=\$POSTGRES_PASSWORD psql -U postgres -c 'delete from dashboard_provisioning where name='\''hawkeye-outliers-resources'\'';';"
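To verify the entry was removed, the same connection pattern can list the remaining provisioned dashboards:

kubectl -n kfuse exec -it kfuse-configdb-0 -- bash -c "PGDATABASE=alertsdb PGPASSWORD=\$POSTGRES_PASSWORD psql -U postgres -c 'select name from dashboard_provisioning;'"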
Note: Kfuse services will go offline during this process. The Kloudfuse storage class configuration has been simplified with future releases/features in mind. This requires running the migrate_storage_class.sh script provided by the Kloudfuse team.
./migrate_storage_class.sh
After running the script, ensure that the PVCs’ storage class is kfuse-ssd, instead of kfuse-ssd-gcp or kfuse-ssd-aws.
kubectl get pvc -n kfuse
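To show just the storage class for each PVC:

kubectl get pvc -n kfuse -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName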
Old alerts have to be removed. The Kloudfuse alerts organization has changed with the introduction of additional alerts. The new version performs the organization automatically; however, the older alerts have to be removed manually. Navigate to the grafana tab and remove all alerts from the kloudfuse_alerts and kubernetes_alerts folders.
Older kubernetes-secret-related configuration needs to be removed from the custom values.yaml file; the kfuse-credentials secret can also be removed. Remove the following block:
auth:
  config:
    AUTH_TYPE: "google"
    AUTH_COOKIE_MAX_AGE_IN_SECONDS: 259200
  auth:
    existingAdminSecret: "kfuse-credentials"
    existingSecret: "kfuse-credentials"
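To remove the kfuse-credentials secret itself (assuming it was created in the kfuse namespace):

kubectl delete secret -n kfuse kfuse-credentials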
There is a schema change introduced in the traces table. Make sure to restart the Pinot servers after the upgrade completes.
kubectl rollout restart sts -n kfuse pinot-server-realtime
kubectl rollout restart sts -n kfuse pinot-server-offline
There is a schema change introduced in version 1.3.3. Make sure to restart the Pinot servers after the upgrade completes.
kubectl rollout restart sts -n kfuse pinot-server-realtime
kubectl rollout restart sts -n kfuse pinot-server-offline
Advanced monitoring is available as an optional component in 1.3 and later releases. To enable it:
The Knight agent must be installed. Please review the steps/settings here.
Additional agent settings are required. Please review the settings here.
Starting with Kfuse version 1.3.0, Kfuse has added retention support using the Pinot Minion framework. This feature requires changes to the existing Pinot Minion statefulset, which needs to be deleted prior to upgrade.
kubectl delete sts -n kfuse pinot-minion
Existing alerts have to be updated with the new, more efficient versions. Please follow these steps to refresh the alerts.
Go to Kloudfuse UI → Alerts → Alert Rules.
Using the filter panel on the left, expand the “Component” item and filter to include only the “Kloudfuse” and “Kubernetes” alerts. Remove each of these alerts from the filtered list. (Post upgrade, the new alerts will be installed automatically.)
Starting with Kfuse version 1.2.0, cloud-specific yamls (aws.yaml, gcp.yaml, azure.yaml) are no longer included in the chart. The custom_values.yaml needs to include these configurations. Refer to Configure Cloud-Specific Helm Values and https://kloudfuse.atlassian.net/wiki/spaces/EX/pages/793378845. With Kfuse version 1.2.0, there is no need to pull the chart prior to installation; helm upgrade can be run directly against the Kfuse helm chart registry.
A breaking change related to the number of postgresql servers installed as part of the Kfuse install was introduced after Kfuse version 1.1.0. Due to this, the stored alerts will be deleted if directly upgrading Kfuse. In order to retain the stored alerts, run the pre- and post-upgrade steps below. Note that until the post-upgrade steps are executed, alerts and dashboards from pre-upgrade will not show up.
Pre-upgrade, dump the existing alerts database:
kubectl exec -n kfuse alerts-postgresql-0 -- bash -c 'PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U postgres -F c alertsdb' > alertsdb.tar
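Before proceeding, sanity-check that the dump is non-empty; if pg_restore is available locally, the archive contents can also be listed:

ls -lh alertsdb.tar
pg_restore -l alertsdb.tar | head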
Post-upgrade, restore the alerts database into configdb and remove the old PVCs:
kubectl cp -n kfuse alertsdb.tar kfuse-configdb-0:/tmp/alertsdb.tar
kubectl exec -n kfuse kfuse-configdb-0 -- bash -c 'PGPASSWORD=$POSTGRES_PASSWORD pg_restore -U postgres -Fc --clean --if-exists -d alertsdb < /tmp/alertsdb.tar'
kubectl delete pvc -n kfuse data-alerts-postgresql-0
kubectl delete pvc -n kfuse data-beffe-postgresql-0
kubectl delete pvc -n kfuse data-fpdb-postgresql-0
The Kfuse StorageClass resources need to be deleted prior to upgrade.
To check the currently installed Kfuse version, run the following command:
helm list
To delete the storage classes:
kubectl delete storageclass kfuse-ssd-aws kfuse-ssd-aws-gp3 kfuse-ssd-gcp
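To confirm the storage classes were removed before upgrading:

kubectl get storageclass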
Otherwise, to upgrade, just follow the install instructions as-is.