Enable RUM on Customer's Kloudfuse installation
Step 1: Customer creates a Session Replay S3 Bucket / Azure Blob Storage Container / Google Cloud Storage Bucket
AWS S3
RUM session recordings can be stored in an S3 bucket. Have the customer create an S3 bucket for this purpose and share the following with you (a CLI sketch follows the list):
- S3 accessKey and secretKey (these will be used to create a secret on the customer's kfuse cluster)
- S3 bucketName and awsRegion where the bucket exists
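For reference, one way the customer can create these is with the AWS CLI. This is a sketch, assuming a dedicated IAM user for RUM uploads; the bucket, region, and user names are illustrative:

```sh
# Create the bucket (LocationConstraint is required outside us-east-1)
aws s3api create-bucket --bucket kfuse-rum-recordings --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2

# Create a dedicated IAM user and an access key pair for it;
# the output contains the accessKey and secretKey to share.
# Remember to also attach a policy granting this user access to the bucket.
aws iam create-user --user-name kfuse-rum
aws iam create-access-key --user-name kfuse-rum
```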
Azure Blob
RUM session recordings can be stored in an Azure Blob container. Have the customer create a Storage Account and a blob container and share the following with you (a CLI sketch follows the list):
- Container Name
- Connection String for the Storage Account
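The equivalent sketch with the Azure CLI; the resource group, storage account, and container names are illustrative:

```sh
# Create the Storage Account and the blob container
az storage account create --name kfuserum --resource-group kfuse-rg --location westus2
az storage container create --account-name kfuserum --name rum-recordings

# Print the connection string to share
az storage account show-connection-string --name kfuserum --resource-group kfuse-rg
```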
Google GCS
RUM session recordings can be stored in a GCS bucket. Have the customer create a new bucket. Next, create a service account in GCP IAM that has access to the bucket, and create a new key under this service account (which will create and download a credential file). Rename this local file to secretKey. Note that the credential file has to be named secretKey, because the kubectl command in Step 2 uses the file name as the secret key. The customer should share the following with you (a CLI sketch follows the list):
- GCS Bucket Name
- Credential file named secretKey
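A sketch of these GCP steps with the gcloud CLI; the bucket and service account names are illustrative:

```sh
# Create the bucket
gcloud storage buckets create gs://kfuse-rum-recordings

# Create a service account and grant it access to the bucket
gcloud iam service-accounts create kfuse-rum
gcloud storage buckets add-iam-policy-binding gs://kfuse-rum-recordings \
  --member=serviceAccount:kfuse-rum@<project-id>.iam.gserviceaccount.com \
  --role=roles/storage.objectAdmin

# Create a key; the output file must be named secretKey (see Step 2)
gcloud iam service-accounts keys create secretKey \
  --iam-account=kfuse-rum@<project-id>.iam.gserviceaccount.com
```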
Step 2: Create secret in the customer's kfuse namespace
The customer or the CS team needs to do the following on the cluster where Kloudfuse is running:
AWS S3
kubectl create secret generic kfuse-rum-s3 --from-literal=accessKey=<accessKey> --from-literal=secretKey='<secretKey>'
Azure Blob Storage
kubectl create secret generic kfuse-rum-azure --from-literal=connectionString=<connectionString>
Google GCS
kubectl create secret generic kfuse-rum-gcs --from-file=./secretKey
Step 3: Customize the customer's environment YAML file to enable RUM
CS team: incorporate changes similar to the PR that enables RUM on Playground and the PR that enables the RUM menu item in the UI. Below is a description of the changes within these PRs:
Add a global.rum section listing the RUM applications under global.rum.applications; each application needs a name and a UUID application id (a sketch follows). To generate a UUID you can run:
uuidgen | tr 'A-Z' 'a-z'
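A minimal sketch of what this section could look like. The exact schema should be copied from the referenced PR; the enabled flag and field names here are assumptions:

```yaml
global:
  rum:
    enabled: true                  # assumed flag name
    applications:
      - name: kf-frontend          # must match service in the SDK init (Step 4)
        id: <uuid-from-uuidgen>    # must match applicationId in the SDK init (Step 4)
```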
Ensure RUM-specific Kafka topics are listed in the customer YAML. You can use the configuration for the events stream as a reference for the number of replicas and partitions. We have not done a performance evaluation for RUM, so there is no firm guideline at the moment.
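Purely as an illustration of the shape, with a hypothetical key path and hypothetical topic names (mirror the existing events stream entries in the customer YAML for the actual values):

```yaml
kafka:
  topics:
    - name: kf_rum_topic      # hypothetical; take the real names from the RUM PR
      partitions: 3           # mirror the events stream settings
      replicationFactor: 2
```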
TLS and Ingress configuration. RUM requires a public ingest endpoint that frontend browser applications will post data to. Ensure that the customer has defined the tls section and has a public hostname. Also enable external ingress and set the external traffic policy (a sketch follows):
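A sketch of the relevant values, assuming the kfuse chart exposes TLS under global.tls and ships the standard ingress-nginx subchart; verify the key paths against the referenced PR:

```yaml
global:
  tls:
    enabled: true
    hostname: customer.kloudfuse.io   # public hostname the browser SDK will post to
ingress-nginx:
  controller:
    service:
      external:
        enabled: true                 # expose the ingress externally
      externalTrafficPolicy: Local
```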
Enable RUM under ingester, add parsing rules under log-parser to accept frontend logs (you can include these rules verbatim along with any existing rules the customer already has), and enable RUM under ui. A sketch of the ingester and ui toggles follows; the log-parser rules are omitted here because they should be copied verbatim from the referenced PR.
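The exact key paths below are assumptions, so copy them from the referenced PRs:

```yaml
ingester:
  config:
    rum:
      enabled: true   # assumed key path
ui:
  rum:
    enabled: true     # assumed key path
```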
That completes the cluster-specific changes required. Upgrade the cluster with the above changes and ensure all pods come up.
Step 4: Customer instruments their frontend application
The Frontend SDK setup is documented here. Below is the guideline for how the customer needs to supply the SDK initialization parameters (a full init sketch follows the list):
- applicationId: '<APPLICATION_ID>' - this needs to match the application id defined under the global.rum.applications section in the customer YAML
- clientToken: '<CLIENT_TOKEN>' - use the string dummy as the value (this is until we implement authenticated ingest for RUM and actually validate tokens)
- site: '<SITE>' - use an empty string
- proxy: '<KFUSE_RUM_ENDPOINT>' - this needs to be https://<customers-kfuse-hostname>/ddrumproxy, e.g. for the playground cluster this value will be https://playground.kloudfuse.io/ddrumproxy
- service: '<APPLICATION_NAME>' - match the application name (should not contain any whitespace, e.g. kf-frontend)
- env: 'production' - whatever the customer wants (e.g. test, staging, or production)
- version: '1.0.0' - if the customer has a way to identify the version of their frontend application, that version string should go here; otherwise use a default like 1.0.0
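Putting the parameters together, a sketch of the initialization, assuming the frontend SDK in question is the Datadog Browser RUM SDK (which the /ddrumproxy endpoint suggests); substitute the placeholders per the guideline above:

```js
import { datadogRum } from '@datadog/browser-rum'

datadogRum.init({
  applicationId: '<APPLICATION_ID>', // UUID from global.rum.applications
  clientToken: 'dummy',              // placeholder until authenticated RUM ingest lands
  site: '',                          // empty string per the guideline
  proxy: 'https://<customers-kfuse-hostname>/ddrumproxy',
  service: 'kf-frontend',            // application name, no whitespace
  env: 'production',
  version: '1.0.0',
})
```

Note that older releases of that SDK named this option proxyUrl instead of proxy; match whatever the linked SDK setup documentation shows.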