
Assumptions

  • If using an existing dd-agent version (and just pointing it to Kloudfuse instead of Datadog HQ)

    • The dd-agent version needs to be 7.41 or higher.

    • Validate using the following steps:

      • Check the chart version with helm list - it should be 3.1.10 or higher.

        Code Block
        helm list -n <namespace-where-agent-is-installed>
      • Check the image version of the agent with describe pod on the dd-agent pod. In the example output below, the agent is 7.36.0, which is below the required 7.41 and would need to be upgraded.

        Code Block
        kubectl describe pod -n <namespace-where-agent-is-installed> | grep Image
            Image:         gcr.io/datadoghq/agent:7.36.0
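
If the chart or image version is older than required, upgrade the agent in place. A minimal sketch, assuming the agent was installed via Helm under the release name kfuse-agent (as used in the install command later on this page):

Code Block
helm repo update
helm upgrade kfuse-agent datadog/datadog -n <namespace-where-agent-is-installed> --version 3.6.7 --reuse-values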

Install DataDog Agent

Agent install setup scenarios

Kloudfuse stack & target both in same VPC and in same K8S cluster (Default)

This is the default scenario. Just use the provided values file for the agent and install.

Kloudfuse stack & target both in same VPC, but in different K8S cluster

Search for _url in the provided values file. Wherever it appears, comment out the Default Scenario lines and uncomment the Scenario 1 lines. For the IP address values needed, see the following steps.

  • Get ingress internal IP

    Code Block
    kubectl get svc -n kfuse | grep kfuse-ingress-nginx-controller-internal
    kfuse-ingress-nginx-controller-internal   LoadBalancer   10.53.250.80    10.53.232.3   80:32716/TCP,443:30767/TCP   125m
  • Replace all settings with the _url suffix. Use the EXTERNAL-IP of the internal load balancer (10.53.232.3 in the example output above) rather than the cluster IP, since the cluster IP is not reachable from a different cluster. For example:

    Code Block
    dd_url: http://10.53.232.3/ingester
    logs_dd_url: "10.53.232.3:80"
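
To sanity-check reachability from the target cluster before installing the agent, you can curl the ingress endpoint from a throwaway pod. A minimal sketch (the exact response body doesn't matter; what matters is that the endpoint is reachable):

Code Block
kubectl run kf-conn-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv http://10.53.232.3/ingester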

Kloudfuse stack hosted in a different VPC (hosted at “customer.kloudfuse.io”)

Search for _url in the values.yaml file provided in the first scenario. Wherever it appears, comment out the Default Scenario lines and uncomment the Scenario 2 lines. Replace customer.kloudfuse.io with your custom DNS name entry.

  • Replace all settings with the _url suffix, for example:

    Code Block
    dd_url: http://customer.kloudfuse.io/ingester
    logs_dd_url: "customer.kloudfuse.io:443"
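
Before installing, it's worth confirming that the DNS name resolves and that the host is reachable from the target cluster. A minimal sketch, assuming curl and nslookup are available (an error body from /ingester is fine; only connectivity matters):

Code Block
nslookup customer.kloudfuse.io
curl -sv https://customer.kloudfuse.io/ingester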

Collect high cardinality tags

The Datadog Agent is installed with cardinality set to the ‘orchestrator’ level, allowing granular (pod- and container-level) tagging of metrics. The default setting in the Datadog Agent is ‘low’, allowing tagging only at the host level.
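
The provided values file should already set this; as a hedged sketch, orchestrator-level cardinality is typically expressed in the Datadog Helm values along these lines:

Code Block
datadog:
  # cardinality for check-collected metrics
  env:
    - name: DD_CHECKS_TAG_CARDINALITY
      value: "orchestrator"
  # cardinality for DogStatsD-collected metrics
  dogstatsd:
    tagCardinality: orchestrator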

Install command

If you haven’t already, add the Datadog Helm repo:

...
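
The elided step above is the standard Datadog Helm repo setup:

Code Block
helm repo add datadog https://helm.datadoghq.com
helm repo update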

Code Block
# Create separate namespace for the agent to be installed (if required)
kubectl create namespace kfuse-agent
helm upgrade --install kfuse-agent -f dd-values-kfuse.yaml datadog/datadog -n kfuse-agent --version 3.6.7
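
After the install completes, verify that the agent pods come up:

Code Block
kubectl get pods -n kfuse-agent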

Adding custom tags

Custom tags can be added to the agent so that every metric collected by the agent carries the tag. To add custom tags, follow these steps.

...
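
The agent-side step is elided above; a minimal sketch, assuming tags are added through the standard datadog.tags list (key:value strings) in the agent values file:

Code Block
datadog:
  tags:
    - "custom_tag_name:<value>"

The custom tag must also be added to the ingester’s hostTagIncludes list in the Kloudfuse values file, as shown below: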

Code Block
ingester:
  config:
    hostTagIncludes:
    - kf
    - kfuse
    - kube_cluster_name
    - kubernetes.io/hostname
    - node.kubernetes.io/instance-type
    - org_id
    - project
    - topology.kubernetes.io/region
    - topology.kubernetes.io/zone
    ...
    - custom_tag_name

Enabling Pods to be detected by Prometheus Autodiscovery

In addition to prometheusScrape being enabled in the Datadog values YAML, the pods need to have the following annotations. Note that if the application pods are deployed using Helm, the Helm values typically support a podAnnotations section (see the sketch after the annotations below).

Code Block
  prometheus.io/path: <specify prometheus endpoint path, e.g., /metrics>
  prometheus.io/port: <specify prometheus endpoint port, e.g., "9090">
  prometheus.io/scrape: "true"
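
For Helm-deployed applications, the same annotations can be set once in the chart’s podAnnotations values. A minimal sketch:

Code Block
podAnnotations:
  prometheus.io/path: /metrics
  prometheus.io/port: "9090"
  prometheus.io/scrape: "true"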



Metadata for metrics collected using the openmetrics check

If using the above configuration, the openmetrics check is enabled in the agent. See here for more details on what the openmetrics check does. The Kloudfuse agent (installed using the provided values file) employs a custom check (kf_openmetrics) to collect metadata (the “Description” and “Type” of metrics) for metrics collected using the openmetrics check, which by default doesn’t collect any metadata. To enable metadata collection for these metrics, the sources must be annotated; this enables agent auto-discovery for the pods and executes the custom check.

Kubernetes environment

To enable metrics metadata to be collected from a Kubernetes pod (as shown in the example here), add the following annotations (note that this can also be done in Helm, for each Deployment/StatefulSet):

Code Block
apiVersion: v1
kind: Pod
# (...)
metadata:
  name: '<POD_NAME>'
  annotations:
    ad.datadoghq.com/<CONTAINER_IDENTIFIER>.check_names: '["kf_openmetrics"]'
    ad.datadoghq.com/<CONTAINER_IDENTIFIER>.init_configs: '[{}]'
    ad.datadoghq.com/<CONTAINER_IDENTIFIER>.instances: '[{"openmetrics_endpoint": "http://%%host%%:%%port%%/metrics"}]'
    # (...)
spec:
  containers:
    - name: '<CONTAINER_IDENTIFIER>'
# (...)
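
For instance, for a container named web exposing metrics on its declared port (the pod and container names here are hypothetical, for illustration only):

Code Block
apiVersion: v1
kind: Pod
metadata:
  name: 'my-app'
  annotations:
    ad.datadoghq.com/web.check_names: '["kf_openmetrics"]'
    ad.datadoghq.com/web.init_configs: '[{}]'
    ad.datadoghq.com/web.instances: '[{"openmetrics_endpoint": "http://%%host%%:%%port%%/metrics"}]'
spec:
  containers:
    - name: 'web'
      image: my-app:latest
      ports:
        - containerPort: 9090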

Advanced monitoring using Knight

Enable kubernetes_state_metrics

Advanced monitoring (Kfuse 1.3 or higher) currently depends on the kubernetes_state_metrics (KSM) check, which is not enabled by default in the newer version of the agent (2.0). Ensure that the agent continues to capture these metrics through KSM by adding/updating the dd-agent values file as follows:

Code Block
datadog:
  kubeStateMetricsEnabled: true
  kubeStateMetricsCore:
    enabled: true
    ignoreLegacyKSMCheck: false
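
After updating the values file, roll the change out to the agent release and confirm the check is running. A minimal sketch, assuming the release name and namespace from the install command above:

Code Block
helm upgrade kfuse-agent -f dd-values-kfuse.yaml datadog/datadog -n kfuse-agent --version 3.6.7
kubectl exec -n kfuse-agent <dd-agent-pod> -- agent status | grep -A 3 kubernetes_state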

Enable Knight-based monitoring in kfuse

Add knightEnabled in the custom-values.yaml and then upgrade the cluster (a sketch follows below).

...
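
A minimal sketch of the elided step; the exact placement of knightEnabled within custom-values.yaml, and the release/chart names below, are assumptions to confirm against your Kloudfuse installation:

Code Block
# custom-values.yaml (assumed top-level placement)
knightEnabled: true

Code Block
helm upgrade kfuse <kfuse-chart> -n kfuse -f custom-values.yaml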