...
```
helm upgrade --install -n kfuse kfuse oci://us-east1-docker.pkg.dev/mvp-demo-301906/kfuse-helm/kfuse --version <SPECIFY VERSION HERE> -f custom_values.yaml
```
Upgrading to Kfuse version 3.1.0
Pre-Upgrade
The labels and label selectors of some Kfuse components (Kubernetes resources) have been fixed to match the rest of the deployment. Because a Deployment's label selector cannot be changed in place, run this command once before upgrading to 3.1.0 or TOT.
```
kubectl delete deployments.apps catalog-service rulemanager advance-functions-service
```
Upgrading Kfuse from version 2.7.3 to 2.7.4
Pre-Upgrade
For RBAC, before upgrading to 2.7.4, check for a blank user row in the user management tab under the Admin tab. The login and email fields of that row will be empty, with a random id. Delete the row either directly through the UI, or by exec-ing into the configdb shell and running the following command. You can find the script to connect to rbacdb here
...
Note that Identity for Databases was introduced in Kfuse version 2.6.7. Database Identity only takes effect on newly ingested APM-related data. In addition, the timestamp granularity for APM/span data has been increased from millisecond to nanosecond to provide better accuracy in the Trace Flamegraph/Waterfall. For older APM data to be rendered accurately, follow the instructions in Converting old APM data to Kfuse 2.6.5 APM Service Identity format to convert the old data to the new format.
Pre-Upgrade
SLO is re-enabled in 2.6.7 with enhanced features.
...
Note that Service Identity for APM was introduced in Kfuse version 2.6.5. Service Identity only takes effect on newly ingested APM-related data; accordingly, old APM data will not render properly in the UI. If older APM data is needed, follow the instructions in Converting old APM data to Kfuse 2.6.5 APM Service Identity format to convert the old data to the new format.
Pre-Upgrade
On Azure, the kfuse-ssd-offline storage class is changed to the StandardSSD_LRS disk type. The kfuse-ssd-offline storage class needs to be deleted prior to the upgrade to allow the new version to update the disk type. Note that if the installation is not on Azure, this step can be skipped.

```
kubectl delete storageclass kfuse-ssd-offline
```
...
Upgrading to Kfuse version 2.6
Pre-Upgrade
A new kfuse-ssd-offline storage class has been introduced in Kfuse version 2.6. This storage class uses gp3 on AWS, pd-balanced on GCP, and Standard_LRS on Azure. It is now the default storage class for Pinot Offline Servers, which should give better disk IO performance.

If the custom values yaml is already set to use the specified disk type (e.g., kfuse-ssd-aws-gp3 on AWS, or standard-rwo on GCP), the remaining steps can be skipped.

If the custom values yaml does not explicitly set the pinot.server.offline.persistence.storageClass field, or sets it to a different storage class, ensure that the field is not set in the custom values yaml, then run the following commands:

```
kubectl delete sts -n kfuse pinot-server-offline
kubectl delete pvc -l app.kubernetes.io/instance=kfuse -l component=server-offline -n kfuse
```

Note that the above commands delete the StatefulSet and PVCs of the Pinot Offline Servers. After upgrading to Kfuse version 2.6, PVCs with the desired storage class will be created for the Pinot Offline Servers.
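For reference, if you do pin the storage class explicitly, the field sits in the custom values yaml as sketched below. The nesting is inferred from the pinot.server.offline.persistence.storageClass path mentioned above, and the class name shown is just an example:

```yaml
# Example only: explicitly pinning the Pinot Offline Server storage class.
# Leave this field unset to pick up the new kfuse-ssd-offline default.
pinot:
  server:
    offline:
      persistence:
        storageClass: kfuse-ssd-aws-gp3
```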
Upgrading to Kfuse version 2.5.3
Pre-Upgrade
Based on observation and feedback, the persistent volumes for the zookeeper pods fill up quite often. To remediate this, we have increased the default persistent volume size of all zookeeper pods to 32Gi. This requires changes in two places, as shown below.
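As an illustration only, the values-file side of this change might look like the following sketch; the exact key path is an assumption based on the pinot.zookeeper section used elsewhere in this guide, so defer to the actual snippets for this release:

```yaml
# Hypothetical sketch: raise the zookeeper PVC size to the new 32Gi default.
# The key path below is assumed, not taken from the release notes.
pinot:
  zookeeper:
    persistence:
      size: 32Gi
```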
...
Upgrading to Kfuse version 2.2.3
Pre-Upgrade
The default value for pinot zookeeper persistence (PVC) is now 32Gi. If the existing Kfuse installation is using the default value (i.e., custom_values.yaml did not explicitly specify the persistence size for pinot zookeeper), update the pinot zookeeper persistence (PVC) value to 16Gi. Add the following snippet under the pinot.zookeeper section:
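A sketch of what such a snippet might look like; the persistence.size key name is an assumption about the chart layout, so defer to the actual snippet for this release:

```yaml
# Pin the existing installation to the old 16Gi default so the upgrade does
# not attempt to change the pre-existing zookeeper PVCs.
pinot:
  zookeeper:
    persistence:
      size: 16Gi
```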
...
Upgrading from Kfuse version 1.3.4 or earlier
Pre-Upgrade
Note: Kfuse services will go offline during this process. The Kloudfuse storage class configuration has been simplified with future releases/features in mind. This requires running the migrate_storage_class.sh script provided by the Kloudfuse team.
```
./migrate_storage_class.sh
```
After running the script, ensure that the PVCs' storage class is kfuse-ssd, instead of kfuse-ssd-gcp or kfuse-ssd-aws:

```
kubectl get pvc -n kfuse
```
Old alerts have to be removed. The Kloudfuse alerts organization has changed with the introduction of additional alerts. The new version performs this organization automatically; however, the older alerts have to be removed manually. Remove all alerts by navigating to the Grafana tab and deleting all alerts from the kloudfuse_alerts and kubernetes_alerts folders.
...
Upgrading from Kfuse version 1.2.1 or earlier
Pre-Upgrade
/wiki/spaces/EX/pages/756056089 is available as an optional component in 1.3 and later releases. To enable it:
The Knight agent must be installed. Please review the steps/settings here.
Additional agent settings are required. Please review the settings here.
Starting with Kfuse version 1.3.0, Kfuse has added retention support using the Pinot Minion framework. This feature requires changes to the existing Pinot Minion statefulset, which needs to be deleted prior to the upgrade.
...
A breaking change related to the number of postgresql servers installed as part of the Kfuse install was introduced after Kfuse version 1.1.0. Because of this, stored alerts will be deleted if Kfuse is upgraded directly. To retain the stored alerts, run the following pre- and post-upgrade steps. Note that until the post-upgrade steps are executed, alerts and dashboards from before the upgrade will not show up.
Pre-Upgrade
```
kubectl exec -n kfuse alerts-postgresql-0 -- bash -c 'PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U postgres -F c alertsdb' > alertsdb.tar
```
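For orientation, the matching post-upgrade step is typically a pg_restore of this dump into the postgresql server that survives the consolidation. The sketch below is an assumption, not the verified procedure; in particular the target pod name is hypothetical, so prefer the official post-upgrade steps:

```shell
# Hypothetical sketch: restore the saved alerts dump after the upgrade.
# The pod name kfuse-configdb-0 is an assumption; substitute the postgresql
# pod that hosts alertsdb in your installation.
kubectl exec -i -n kfuse kfuse-configdb-0 -- \
  bash -c 'PGPASSWORD=$POSTGRES_PASSWORD pg_restore -U postgres -d alertsdb -F c' < alertsdb.tar
```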
...