Upgrade command
Before performing an upgrade, validate that the upgrade won’t revert any customization on your cluster. The steps to run a validation are detailed here.
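As a quick check (a minimal sketch; the linked validation steps are the authoritative procedure), you can diff the values currently deployed on the cluster against your local custom_values.yaml:
helm get values kfuse -n kfuse -o yaml > deployed_values.yaml
diff deployed_values.yaml custom_values.yaml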
To check the current installed Kfuse version, run the following command:
helm list -n kfuse
The upgrade command is the same as the install command specified in Installation, and is copied here for reference. The latest released version is specified on the installation page.
helm upgrade --install -n kfuse kfuse oci://us-east1-docker.pkg.dev/mvp-demo-301906/kfuse-helm/kfuse --version <SPECIFY VERSION HERE> -f custom_values.yaml
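For example, a hypothetical upgrade to version 2.7.4 (substitute the actual version from the installation page) would look like:
helm upgrade --install -n kfuse kfuse oci://us-east1-docker.pkg.dev/mvp-demo-301906/kfuse-helm/kfuse --version 2.7.4 -f custom_values.yaml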
[Upcoming] Upgrading Kfuse from version 2.7.3 to 2.7.4
Post-Upgrade
# In one terminal, port-forward to the trace-query-service (this command blocks)
kubectl port-forward --namespace kfuse deployments.apps/trace-query-service 8080:8080
# In a second terminal, refresh the APM store, then restart the Pinot offline servers
curl -X POST http://localhost:8080/v1/trace/query \
  -H "Content-Type: application/json" \
  -d '{ "query": "query { refreshServicesInApmStore(lookbackDays: 1) }" }'
kubectl rollout restart sts -n kfuse pinot-server-offline
Upgrading Kfuse from version 2.7.2 to 2.7.3
The 2.7.3 upgrade is a multi-step process (a values.yaml sketch follows these steps):
1. Set pinot.server.realtime.replicaCount to 0 in the values.yaml file. Keep note of the original value of this field, because step 4 restores it.
2. Run helm upgrade as usual.
3. Make sure all pods and jobs have finished successfully.
4. Set pinot.server.realtime.replicaCount back to its original value in the values.yaml file and run helm upgrade again. Alternatively, you could run:
kubectl scale sts -n kfuse pinot-server-realtime --replicas=N
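A minimal sketch of the relevant values.yaml section for step 1, assuming a hypothetical original replica count of 2 (yours may differ):
pinot:
  server:
    realtime:
      replicaCount: 0  # original value: 2; restore it in step 4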
Upgrading Kfuse from version 2.7.1 to 2.7.2
Because this release changes the RBAC implementation, you may see numeric IDs in the email field of the users. To populate Kloudfuse with correct emails, delete all users. Kloudfuse recreates individual users as they log in, with correct email values.
We advise that you create new groups after completing this step. You can then assign users to groups, policies to users and groups, and so on.
Post-Upgrade
Connect to rbacdb:
> ./kfuse-postgres.sh kfuse-configdb-0 kfuse rbacdb
Make a note of the user IDs with a NULL grafana_id that were created during the RBAC migration:
rbacdb=# select id from users where grafana_id is NULL;
Clean up the empty users created in the rbac db during the import of existing users:
rbacdb=# delete from users where grafana_id is NULL;
For each user ID in the output of step 2, delete the user from its groups:
rbacdb=# delete from group_members where user_id='<user-id>';
The kfuse-postgres.sh script is available in the customer repository under the scripts directory.
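As an optional check after the cleanup, confirm that no users with a NULL grafana_id remain:
rbacdb=# select count(*) from users where grafana_id is NULL;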
Upgrading Kfuse from version 2.7 to 2.7.1
There are no specific pre-upgrade or post-upgrade steps for the 2.7.1 release. Please follow the upgrade command section.
Upgrading Kfuse from version 2.6.7 to 2.6.8 or 2.7
This release includes package upgrades that remove service vulnerabilities. Before running helm upgrade, you need to run a script related to the Kafka service; expect some downtime between running the script and helm upgrade. You can find the script here.
Edit the custom_values.yaml file and move the block under kafka to the kafka.broker section, as follows:
kafka:
  broker:
    <<previous kafka block>>
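For illustration only (a hypothetical example; your actual kafka settings will differ), a values file that previously contained:
kafka:
  replicaCount: 3
  persistence:
    size: 100Gi
would become:
kafka:
  broker:
    replicaCount: 3
    persistence:
      size: 100Gi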
Add these topics to the kafkaTopics section for record-replay:
# kafkaTopics -- kafka topics and configuration to create for Kfuse
kafkaTopics:
  - name: kf_commands
    partitions: 1
    replicationFactor: 1
  - name: kf_recorder_data
    partitions: 1
    replicationFactor: 1
Add a recorder section with the same affinity and tolerations values as the ingester. If the ingester has neither, don’t add the recorder section:
recorder:
  # affinity -- affinity settings.
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: ng_label
                operator: In
                values:
                  - amrut
  tolerations:
    - key: "ng_taint"
      operator: "Equal"
      value: "amrut"
      effect: "NoSchedule"
If you use AWS enrichment, the config format in the values file has changed. Please see this page.
Now upgrade the stack with the upgrade command.
Upgrading to Kfuse version 2.6.7
Note that Identity for Databases is introduced in Kfuse version 2.6.7. Database Identity only takes effect on newly ingested APM-related data. In addition, the timestamp granularity for APM/span data has been increased from millisecond to nanosecond to provide better accuracy in the Trace Flamegraph/Waterfall. For older APM data to be rendered accurately, follow the instructions in Converting old APM data to Kfuse 2.6.5 APM Service Identity format to convert old data to the new format.
Pre-Upgrade
SLO is re-enabled in 2.6.7 with enhanced features. Drop the old SLO table before upgrading:
> ./kfuse-postgres.sh kfuse-configdb-0 kfuse slodb
slodb=# drop table slodbs;
The kfuse-postgres.sh script is available in the customer repository under the scripts directory.
Post-Upgrade
There are a few changes in the Pinot database that require the Pinot servers to be restarted after the upgrade, with the following command:
kubectl rollout restart sts -n kfuse pinot-server-offline pinot-server-realtime
Upgrading to Kfuse version 2.6.5
Note that Service Identity for APM is introduced in Kfuse version 2.6.5. Service Identity only takes effect on newly ingested APM-related data. Accordingly, old APM data will not render properly in the UI. If older APM data is needed, follow the instructions in Converting old APM data to Kfuse 2.6.5 APM Service Identity format to convert old data to the new format.
Pre-Upgrade
On Azure, the kfuse-ssd-offline storage class has changed to the StandardSSD_LRS disk type. The kfuse-ssd-offline storage class needs to be deleted prior to the upgrade to allow the new version to update the disk type. If the installation is not on Azure, skip this step.
kubectl delete storageclass kfuse-ssd-offline
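Before running helm upgrade, you can optionally confirm the storage class is gone; the following command should return a NotFound error:
kubectl get storageclass kfuse-ssd-offline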
Post-Upgrade
There are a few changes in the Pinot database that require the Pinot servers to be restarted after the upgrade, with the following command:
kubectl rollout restart sts -n kfuse pinot-server-offline pinot-server-realtime
Upgrading to Kfuse version 2.6
Pre-Upgrade
A new kfuse-ssd-offline storage class has been introduced in Kfuse version 2.6. This storage class uses gp3 on AWS, pd-balanced on GCP, and Standard_LRS on Azure. It is now the default storage class for Pinot offline servers, which should give better disk IO performance.
If the custom values yaml is already set to use the specified disk type (e.g., kfuse-ssd-aws-gp3, or standard-rwo on GCP), the remaining steps can be skipped.
If the custom values yaml does not explicitly set the pinot.server.offline.persistence.storageClass field, or it is set to a different storage class: ensure that the field is not set in the custom values yaml, then run the following commands:
kubectl delete sts -n kfuse pinot-server-offline
kubectl delete pvc -l app.kubernetes.io/instance=kfuse -l component=server-offline -n kfuse
Note that the above commands delete the PVCs of the Pinot offline servers. After the upgrade to Kfuse version 2.6, PVCs with the desired storage class are created for the Pinot offline servers.
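After the upgrade, one way to verify the storage class on the new PVCs (a sketch using the same label selectors as the delete commands above):
kubectl get pvc -n kfuse -l app.kubernetes.io/instance=kfuse -l component=server-offline -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName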
Upgrading to Kfuse version 2.5.3
Pre-Upgrade
Based on observation and feedback, the persistent volumes for the zookeeper pods fill up quite often. To remediate that, we have increased the default size for all zookeeper pods to 32Gi. This requires changes in two places, as shown below:
kafka:
  # zookeeper - Configuration for Kafka's Zookeeper.
  zookeeper:
    persistence:
      size: 32Gi
...
pinot:
  # zookeeper - Configuration for Pinot's Zookeeper.
  zookeeper:
    persistence:
      size: 32Gi
Apply this change prior to the upgrade with the resize_pvc.sh script (a helm upgrade alone does not resize existing PVCs, because StatefulSet volume claim templates are immutable). Please reach out if you need assistance.
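To check the current zookeeper PVC sizes before resizing (an optional check):
kubectl get pvc -n kfuse | grep zookeeper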
Post-Upgrade
There are a few changes in the Pinot database that require some of the services to be restarted after the upgrade, with the following commands:
kubectl rollout restart sts -n kfuse pinot-server-offline pinot-server-realtime pinot-controller pinot-broker logs-parser logs-query-service
kubectl rollout restart deployment -n kfuse logs-transformer trace-transformer trace-query-service
Upgrading to Kfuse version 2.5.0
Post-Upgrade
There are a few changes in the Pinot database that require some of the services to be restarted after the upgrade, with the following commands:
kubectl rollout restart sts -n kfuse pinot-server-offline pinot-server-realtime pinot-controller pinot-broker logs-parser logs-query-service
kubectl rollout restart deployment -n kfuse logs-transformer
Upgrading to Kfuse version 2.2.4
Post-Upgrade
There are a few changes in the Pinot database that require the pinot-* servers to be restarted after the upgrade, with the following command:
kubectl rollout restart sts -n kfuse pinot-server-offline pinot-server-realtime pinot-controller pinot-broker
Upgrading to Kfuse version 2.2.3
Pre-Upgrade
The default value for the pinot zookeeper persistence (PVC) size is now 32Gi. If the existing Kfuse installation is using the old default value (i.e., custom_values.yaml did not explicitly specify the persistence size for pinot zookeeper), pin the pinot zookeeper persistence (PVC) size to 16Gi so that the upgrade keeps the existing PVC size. Add the following snippet under the pinot.zookeeper section:
persistence:
  size: 16Gi
Upgrading from Kfuse version 2.1 or earlier
Post-Upgrade
The organization of Kloudfuse-provided alerts has been updated for easier maintenance. Make sure to remove the old version:
# Connect to kfuse cluster and log in to catalog service pod
kubens kfuse
kubectl exec -it catalog-servicexxx -- bash
# Remove older folders.
python3 /catalog_service/catalog.py --remove_installed --list kloudfuse,kloudfuse_alerts,kubernetes_alerts --artifact_type alerts
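The catalog-servicexxx placeholder above stands for the full catalog-service pod name; one way to look it up:
kubectl get pods -n kfuse | grep catalog-service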
Upgrading from Kfuse version 2.0.1 or earlier
Post-Upgrade
The Kfuse-provisioned dashboard has been cleaned up. Run the following command:
kubectl -n kfuse exec -it kfuse-configdb-0 -- bash -c "PGDATABASE=alertsdb PGPASSWORD=\$POSTGRES_PASSWORD psql -U postgres -c 'delete from dashboard_provisioning where name='\''hawkeye-outliers-resources'\'';'; "
Upgrading from Kfuse version 1.3.4 or earlier
Pre-Upgrade
Note: Kfuse services will go offline during this process. The Kloudfuse storage class configuration has been simplified with future releases/features in mind. This requires running the migrate_storage_class.sh script provided by the Kloudfuse team.
./migrate_storage_class.sh
After running the script, ensure that the storage class of each PVC is kfuse-ssd, instead of kfuse-ssd-gcp or kfuse-ssd-aws:
kubectl get pvc -n kfuse
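As an optional check, the following command should produce no output once all PVCs have migrated off the old storage classes:
kubectl get pvc -n kfuse | grep -E 'kfuse-ssd-(gcp|aws)'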
Old alerts have to be removed. The Kloudfuse alerts organization has changed with the introduction of additional alerts. The new version performs the organization automatically; however, the older alerts have to be removed.
Manually remove all alerts by navigating to the grafana tab and removing all alerts from the kloudfuse_alerts and kubernetes_alerts folders.
Post-Upgrade
The older kubernetes-secret-related configuration needs to be removed from the custom values.yaml file. The kfuse-credentials secret can also be removed. Remove the following block:
auth:
  config:
    AUTH_TYPE: "google"
    AUTH_COOKIE_MAX_AGE_IN_SECONDS: 259200
  auth:
    existingAdminSecret: "kfuse-credentials"
    existingSecret: "kfuse-credentials"
There is a schema change introduced in the traces table. Make sure to restart the Pinot servers after upgrade completes.
kubectl rollout restart sts -n kfuse pinot-server-realtime
kubectl rollout restart sts -n kfuse pinot-server-offline
Upgrading from Kfuse version 1.3.2 or earlier
Post-Upgrade
There is a schema change introduced in version 1.3.3. Make sure to restart the Pinot servers after upgrade completes.
kubectl rollout restart sts -n kfuse pinot-server-realtime
kubectl rollout restart sts -n kfuse pinot-server-offline
Upgrading from Kfuse version 1.2.1 or earlier
Pre-Upgrade
The feature documented at /wiki/spaces/EX/pages/756056089 is available as an optional component in the 1.3 and later releases. To enable it:
The Knight agent is required to be installed. Please review the steps/settings here.
Additional agent settings are required. Please review the settings here.
Starting with Kfuse version 1.3.0, Kfuse has added retention support using the Pinot Minion framework. This feature requires changes to the existing Pinot Minion statefulset. The statefulset needs to be deleted prior to the upgrade.
kubectl delete sts -n kfuse pinot-minion
Existing alerts have to be updated with the new, more efficient version. Please follow these steps to refresh the alerts:
Go to Kloudfuse UI → Alerts → Alert Rules
Using the filter panel on the left, expand the “Component” item and filter to include only the “Kloudfuse” and “Kubernetes” alerts. Remove each alert in the filtered list. (After the upgrade, the new alerts are installed automatically.)
Upgrading from Kfuse version 1.1.1 or earlier
Cloud-Specific configurations
Starting with Kfuse version 1.2.0, the cloud-specific yamls (aws.yaml, gcp.yaml, azure.yaml) are no longer included in the chart. The custom_values.yaml needs to include these configurations. Refer to Configure Cloud-Specific Helm Values and https://kloudfuse.atlassian.net/wiki/spaces/EX/pages/793378845. With Kfuse version 1.2.0, there is no need to pull the chart prior to installation; helm upgrade can be run directly against the Kfuse helm chart registry.
Database changes
A breaking change related to the number of postgresql servers installed as part of the Kfuse install was introduced after Kfuse version 1.1.0. Because of this, the stored alerts will be deleted if you upgrade Kfuse directly. To retain the stored alerts, run the pre- and post-upgrade steps below. Note that until the post-upgrade steps are executed, alerts and dashboards from before the upgrade will not show up.
Pre-Upgrade
kubectl exec -n kfuse alerts-postgresql-0 -- bash -c 'PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U postgres -F c alertsdb' > alertsdb.tar
Post-Upgrade
kubectl cp -n kfuse alertsdb.tar kfuse-configdb-0:/tmp/alertsdb.tar
kubectl exec -n kfuse kfuse-configdb-0 -- bash -c 'PGPASSWORD=$POSTGRES_PASSWORD pg_restore -U postgres -Fc --clean --if-exists -d alertsdb < /tmp/alertsdb.tar'
kubectl delete pvc -n kfuse data-alerts-postgresql-0
kubectl delete pvc -n kfuse data-beffe-postgresql-0
kubectl delete pvc -n kfuse data-fpdb-postgresql-0
Upgrading from Kfuse version 1.0.4 or earlier
The Kfuse StorageClass resources need to be deleted prior to upgrade.
To check the current installed Kfuse version, run the following command:
helm list -n kfuse
To delete the storage class:
kubectl delete storageclass kfuse-ssd-aws kfuse-ssd-aws-gp3 kfuse-ssd-gcp
Otherwise, to upgrade, follow the install instructions as-is.