
Step 1: Customer creates Session Replay S3 Bucket / Azure Blob Storage Container / Google Cloud Storage

AWS S3

RUM session recordings can be stored in an S3 bucket. Have the customer create an S3 bucket for this purpose and share the following with you:

  1. S3 accessKey and secretKey (these will be used to create a secret on the customer's kfuse cluster)

  2. S3 bucketName and awsRegion where the bucket exists
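If the customer wants a starting point, the bucket and keys can be created with the AWS CLI roughly as follows. The bucket name, region, and IAM user name here are illustrative, not prescribed, and the customer should attach an IAM policy scoped to this bucket:

```shell
# Create the bucket (for us-east-1, omit --create-bucket-configuration)
aws s3api create-bucket \
  --bucket rum-session-replay-example \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2

# Create a dedicated IAM user and generate the accessKey/secretKey pair
# to share (attach a policy granting access to this bucket separately)
aws iam create-user --user-name kfuse-rum-replay
aws iam create-access-key --user-name kfuse-rum-replay
```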

Azure Blob

RUM session recordings can be stored in an Azure Blob container. Have the customer create a Storage Account and a blob container and share with you the following:

  1. Container Name

  2. Connection String for Storage Account
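As a sketch, the storage account, container, and connection string can be produced with the Azure CLI along these lines. The resource group, account, and container names are placeholders:

```shell
# Create the storage account (names are illustrative)
az storage account create \
  --name kfuserumreplay \
  --resource-group kfuse-rg \
  --location westus2 \
  --sku Standard_LRS

# Create the blob container for session replay data
az storage container create \
  --account-name kfuserumreplay \
  --name rum-session-replay

# Print the connection string to share for the secret in Step 2
az storage account show-connection-string \
  --name kfuserumreplay \
  --resource-group kfuse-rg \
  --output tsv
```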

Google GCS

RUM session recordings can be stored in a GCS bucket. Have the customer create a new bucket, then create a service account in GCP IAM that has access to that bucket. Create a new key under this service account (which creates and downloads a credential file), and rename the downloaded file to secretKey; note that the credential file must be named secretKey. Have the customer share the following with you:

  1. GCS Bucket Name

  2. Credential file named secretKey
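A sketch of the above using the gcloud/gsutil CLIs; the project, bucket, and service account names are placeholders to adapt to the customer's environment:

```shell
# Create the bucket (name and location are illustrative)
gsutil mb -l us-west2 gs://rum-session-replay-example

# Create a service account for session replay storage
gcloud iam service-accounts create kfuse-rum-replay \
  --project my-gcp-project

# Grant the service account access to the bucket
gsutil iam ch \
  serviceAccount:kfuse-rum-replay@my-gcp-project.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://rum-session-replay-example

# Create a key; the output file must be named secretKey
gcloud iam service-accounts keys create secretKey \
  --iam-account kfuse-rum-replay@my-gcp-project.iam.gserviceaccount.com
```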

Step 2: Create secret in the customer's kfuse namespace

The customer or CS team needs to do the following on the cluster where kloudfuse is running:

AWS S3

kubectl create secret generic kfuse-rum-s3 --from-literal=accessKey=<accessKey> --from-literal=secretKey='<secretKey>'

Azure Blob Storage

kubectl create secret generic kfuse-rum-azure --from-literal=connectionString=<connectionString>

Google GCS

kubectl create secret generic kfuse-rum-gcs --from-file=./secretKey
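Afterwards you can sanity-check that the secret landed in the right namespace and carries the expected keys. The namespace kfuse below is an assumption; substitute the customer's actual namespace and the secret name for their storage backend:

```shell
# Lists the secret's keys (accessKey/secretKey, connectionString, or
# secretKey depending on the backend) without printing their values
kubectl describe secret kfuse-rum-s3 -n kfuse
```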

Step 3: Customize the customer's environment yaml file to enable RUM

CS team - incorporate changes similar to the PR that enables RUM on Playground and the PR that enables the RUM menu item in the UI. Below is a verbal description of the changes within these PRs:

  • Add a global.rum section. Only one sessionReplayStorage block should remain in the final yaml; the example below shows all three storage variants for reference:

  •   rum:
        enabled: true
        
        # This is a list of names and UUIDs. The customer can generate any
        # UUID. The same UUID will need to be referenced in the Kfuse frontend SDK
        # initialization call.
        applications:
          - name: kf-frontend
            id: 944f6a58-dbc2-45ad-bf93-def505aaff62
        
        # only if customer uses AWS S3
        sessionReplayStorage:
          type: s3
          useSecret: true
          
          # Below secret name references the secret created earlier
          secretName: "kfuse-rum-s3"
          
          # Below references the bucket name and region that customer has
          # created for session replay storage
          s3:
            region: us-west-2
            bucket: rum-session-replay-playground
            
        # only if customer uses Azure Blob
        sessionReplayStorage:
          type: azure
          useSecret: true
          
          # Below secret name references the secret created earlier
          secretName: "kfuse-rum-azure"
          
          # Below references the container that the customer has
          # created for session replay storage
          azure:
            container: rum-session-replay-playground
            
        # only if customer uses Google GCS
        sessionReplayStorage:
          type: gcs
          useSecret: true
          secretName: kfuse-rum-gcs
          gcs:
            bucket: rum-session-replay-playground
  • To generate a UUID you can run the following on the command line: uuidgen | tr 'A-Z' 'a-z'

  • Ensure the RUM-specific Kafka topics are listed in the customer YAML. You can use the configuration for the events stream as a reference for the number of replicas and partitions. We have not done a performance evaluation for RUM, so there is no firm guideline at the moment.

    - name: kf_rum_session_replay_topic
      partitions: 3
      replicationFactor: 2
    - name: kf_rum_views_topic
      partitions: 3
      replicationFactor: 2
    - name: kf_rum_actions_topic
      partitions: 3
      replicationFactor: 2
    - name: kf_rum_resources_topic
      partitions: 3
      replicationFactor: 2
    - name: kf_rum_longtasks_topic
      partitions: 3
      replicationFactor: 2
    - name: kf_rum_errors_topic
      partitions: 3
      replicationFactor: 2
  • TLS and ingress configuration. RUM requires a public ingest endpoint that frontend browser applications post data to. Ensure that the customer has defined the tls section with a public hostname, and that external ingress and the external traffic policy are set:

tls:
  enabled: true
  host: playground.kloudfuse.io
  email: admin@kloudfuse.com
  clusterIssuer: playground-letsencrypt-prod
  
ingress:
  controller:
    service:
      external:
        enabled: true
      externalTrafficPolicy: Local
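Once the ingress changes are applied and DNS resolves, you can check from outside the cluster that the RUM ingest path is reachable. The hostname below is the playground example from the tls section; use the customer's hostname:

```shell
# Prints the HTTP status code; anything other than a connection error
# or timeout indicates the endpoint is publicly reachable
curl -sk -o /dev/null -w '%{http_code}\n' https://playground.kloudfuse.io/ddrumproxy
```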
  • Enable RUM under ingester:

  • ingester:
      config:
        rum:
          enabled: true
          # you can ignore the datadog section below from the PR since 
          # this is the default
          datadog:
            proxyToDatadogEnabled: false
  • Add parsing rules under log-parser to accept frontend logs. You can include these rules verbatim alongside any existing rules the customer already has.

  • Enable RUM under ui:

ui:
  config:
    rum:
      enabled: true

That completes the cluster-specific changes. Upgrade the cluster with the above changes and ensure all pods come up.
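The upgrade itself is typically a helm upgrade with the customer's updated values file; a sketch, where the release name, chart reference, and namespace are assumptions that must match the customer's install:

```shell
# Apply the updated environment yaml (release/chart/namespace are
# placeholders for the customer's actual install)
helm upgrade kfuse <kfuse-chart> \
  -n kfuse \
  -f customer_values.yaml

# Watch the rollout until all pods are Running and Ready
kubectl get pods -n kfuse -w
```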

Step 4: Customer instruments their frontend application

The Frontend SDK setup is documented here. Below are guidelines for how the customer should supply the SDK initialization parameters:

  • applicationId: '<APPLICATION_ID>' - this must match the application id defined under the global.rum.applications section in the customer yaml

  • clientToken: '<CLIENT_TOKEN>' - use the string dummy as the value (this is a placeholder until we implement authenticated ingest for RUM and actually validate the token)

  • site: '<SITE>' - use empty string

  • proxy: '<KFUSE_RUM_ENDPOINT>' - this needs to be https://<customers-kfuse-hostname>/ddrumproxy e.g. for the playground cluster this value will be https://playground.kloudfuse.io/ddrumproxy

  • service: '<APPLICATION_NAME>' - must match the application name, which should not contain any whitespace (e.g. kf-frontend)

  • env: 'production' - any environment name the customer wants (e.g. test, production, staging)

  • version: '1.0.0' - if the customer has a way to identify the version of their frontend application then that version string should come here. Otherwise just use a default like 1.0.0

  • # recommend a small number if the site has many users
    sessionSampleRate: 100
    
    # recommend true to get the session capture and replay feature
    enableSessionRecording: true
    
    # recommend true to get frontend logs into the customer's kfuse cluster
    enableLogCollection: true
