Enable RUM on Customer's Kloudfuse installation

Step 1: Customer creates Session Replay S3 Bucket / Azure Blob Storage Container / Google Cloud Storage

AWS S3

RUM session recordings can be stored in an S3 bucket. Have the customer create an S3 bucket for this purpose and share with you the following:

  1. S3 accessKey and secretKey (these will be used to create a secret on the customer's kfuse cluster)

  2. S3 bucketName and awsRegion where the bucket exists
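
If the customer uses the AWS CLI, the bucket and credentials can be created along these lines (bucket name, region, and IAM user name are placeholders; the IAM user additionally needs a policy granting it read/write access to the bucket):

```shell
# Create the bucket (name and region are placeholders)
aws s3api create-bucket --bucket kfuse-rum-recordings --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2

# Create a dedicated IAM user and an access key for it; the returned
# AccessKeyId and SecretAccessKey are the accessKey and secretKey to share
aws iam create-user --user-name kfuse-rum
aws iam create-access-key --user-name kfuse-rum
```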

Azure Blob

RUM session recordings can be stored in an Azure Blob container. Have the customer create a Storage Account and a blob container and share with you the following:

  1. Container Name

  2. Connection String for Storage Account
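
With the Azure CLI, a possible sequence looks like this (resource group, storage account, and container names are placeholders):

```shell
# Storage account and blob container (names are placeholders)
az storage account create --name kfuserum --resource-group kfuse-rg --location westus2
az storage container create --name rum-recordings --account-name kfuserum

# Connection string to share along with the container name
az storage account show-connection-string --name kfuserum --resource-group kfuse-rg
```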

Google GCS

RUM session recordings can be stored in a GCS bucket. Have the customer create a new bucket, then create a service account in GCP IAM that has access to the bucket, and create a new key under this service account (which downloads a credential file). Rename the downloaded file to secretKey; the credential file must be named exactly secretKey. Have the customer share the following:

  1. GCS Bucket Name

  2. Credential file named secretKey
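
With a recent gcloud CLI, a possible sequence looks like this (bucket, project, and service account names are placeholders):

```shell
# Bucket and service account (names and project are placeholders)
gcloud storage buckets create gs://kfuse-rum-recordings --location=us-west1
gcloud iam service-accounts create kfuse-rum --project=my-project
gcloud storage buckets add-iam-policy-binding gs://kfuse-rum-recordings \
  --member="serviceAccount:kfuse-rum@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"

# The downloaded key file must be named secretKey
gcloud iam service-accounts keys create secretKey \
  --iam-account=kfuse-rum@my-project.iam.gserviceaccount.com
```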

Step 2: Create secret in the customer's kfuse namespace

The customer or the CS team needs to run the following on the cluster where Kloudfuse is running:

AWS S3

kubectl create secret generic kfuse-rum-s3 --from-literal=accessKey=<accessKey> --from-literal=secretKey='<secretKey>'

Azure Blob Storage

kubectl create secret generic kfuse-rum-azure --from-literal=connectionString=<connectionString>

Google GCS

kubectl create secret generic kfuse-rum-gcs --from-file=./secretKey
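
The commands above do not specify a namespace; if the customer's kfuse namespace is not the current default, add -n <namespace> to each command. After creation, the secret can be sanity-checked with kubectl, e.g. for the S3 case:

```shell
# Shows the secret's key names and byte sizes without printing the values
kubectl describe secret kfuse-rum-s3
```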

Step 3: Customize the customer's environment YAML file to enable RUM

CS team: incorporate changes similar to the PR that enables RUM on Playground and the PR that enables the RUM menu item on the UI. Below is a description of the changes within these PRs:

  • Add a global.rum section

  • To generate a UUID for the application id above, run uuidgen | tr 'A-Z' 'a-z' on the command line

  • Ensure RUM-specific Kafka topics are listed in the customer YAML. You can use the configuration for the events stream as a reference for the number of replicas and partitions. We have not done a performance evaluation for RUM, so there is no firm guideline at the moment.

  • TLS and Ingress configuration. RUM requires a public ingest endpoint that frontend browser applications post data to. Ensure that the customer has defined the tls section and has a public hostname, and that external ingress and the external traffic policy are enabled:

  • Enable RUM under ingester:

  • Add parsing rules under log-parser to accept frontend logs. You can include these rules verbatim along with any existing rules the customer already has

  • Enable RUM under ui:

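A hypothetical sketch of the global.rum section is below. The key names are illustrative only, not the chart's actual schema; copy the real structure from the referenced PRs:

```yaml
# Illustrative only -- key names are assumptions, follow the referenced PRs
global:
  rum:
    enabled: true
    applications:
      - name: kf-frontend   # no whitespace in the name
        # generate with: uuidgen | tr 'A-Z' 'a-z'
        id: 6b9e1e8a-0000-0000-0000-000000000000
```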
That completes the cluster-specific changes. Upgrade the cluster with the above changes and verify that all pods come up.

Step 4: Customer instruments their frontend application

The Frontend SDK setup is documented here. Below are guidelines for how the customer should supply the SDK initialization parameters:

  • applicationId: '<APPLICATION_ID>' - this must match an application id defined in the customer YAML under global.rum.applications

  • clientToken: '<CLIENT_TOKEN>' - use the string dummy as the value (this applies until we implement authenticated ingest for RUM and actually validate the token)

  • site: '<SITE>' - use an empty string

  • proxy: '<KFUSE_RUM_ENDPOINT>' - this must be https://<customers-kfuse-hostname>/ddrumproxy, e.g. for the playground cluster this value is https://playground.kloudfuse.io/ddrumproxy

  • service: '<APPLICATION_NAME>' - match the application name (it should not contain any whitespace, e.g. kf-frontend)

  • env: 'production' - whatever the customer wants (e.g. test, production, staging, etc.)

  • version: '1.0.0' - if the customer has a way to identify the version of their frontend application, that version string goes here; otherwise use a default like 1.0.0

 
