In most cases, ingestion into the Kloudfuse data plane from the various telemetry agents and collectors is already encrypted in transit, because the agents communicate over HTTPS (TLS), which provides transport-layer encryption. Details of the TLS handshake between clients and the server are widely documented online. This page explains how to add authentication to ingestion, in addition to the confidentiality that HTTPS provides.
Kloudfuse supports many telemetry agents. To enable authentication for ingestion, follow the three steps below; in step 2, complete only the sections relevant to the agents you are using.
Step 1. Generate AUTH_TOKEN
Generate an auth token (referred to as AUTH_TOKEN) and store the value in a safe location; you will need it later in more than one place.
AUTH_TOKEN=$(cat /dev/urandom | env LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
Base64-encode the AUTH_TOKEN (the result is referred to as AUTH_TOKEN_ENCODED):
AUTH_TOKEN_ENCODED=$(echo -n "$AUTH_TOKEN" | base64)
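Putting step 1 together, here is a minimal shell sketch that generates the token, encodes it, and sanity-checks that the encoded value decodes back to the original. It assumes a Linux shell with GNU coreutils (on macOS, older base64 versions use -D instead of -d to decode):

```shell
# Generate a random 32-character alphanumeric token (step 1).
AUTH_TOKEN=$(cat /dev/urandom | env LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)

# Base64-encode it for the Kubernetes secret created in step 3.
AUTH_TOKEN_ENCODED=$(printf '%s' "$AUTH_TOKEN" | base64)

# Sanity check: decoding the encoded value must return the original token.
DECODED=$(printf '%s' "$AUTH_TOKEN_ENCODED" | base64 -d)
[ "$DECODED" = "$AUTH_TOKEN" ] && echo "round-trip OK"
```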
Step 2. Configure Telemetry agents/sources
Follow instructions for the corresponding sources below.
Prometheus Remote Write
Update the Prometheus remote write configuration as shown below:
prometheus.yml:
  remote_write:
    - url: https://<customer>.kloudfuse.io/ingester/write
      authorization:
        credentials: <AUTH_TOKEN>
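With the authorization block above, Prometheus attaches the credentials to every remote-write request as a standard bearer token (Bearer is the default type for the authorization setting). As a rough illustration, here is what the equivalent hand-built request header looks like in Python; the URL and token value are placeholders, not a real Kloudfuse endpoint:

```python
import urllib.request

auth_token = "example-token"  # placeholder for the AUTH_TOKEN from step 1

# Build a request shaped like a remote-write call, carrying the bearer token.
req = urllib.request.Request(
    "https://customer.kloudfuse.io/ingester/write",  # hypothetical endpoint
    data=b"...",  # remote write actually sends a snappy-compressed protobuf body
    headers={"Authorization": f"Bearer {auth_token}"},
    method="POST",
)
print(req.get_header("Authorization"))  # -> Bearer example-token
```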
Fluent Bit
Update or add a Headers field in the HTTP output plugin section of the Fluent Bit configuration file, with <AUTH_TOKEN> replaced by the token generated in step 1, as shown below:
[OUTPUT]
    Name    http
    Match   <match_pattern>
    Host    <kfuse_ingress_ip>
    Port    443
    TLS     on
    URI     /ingester/v1/fluent_bit
    Headers Kf-Api-Key <AUTH_TOKEN>
Fluentd
Update or add a headers field in the Fluentd http output plugin configuration, using Kf-Api-Key as the header name and the AUTH_TOKEN as its value:
<match *> # Match everything
  @type http
  endpoint http://<KFUSE_INGESTER_IP>:80/ingester/v1/fluentd
  headers {"Kf-Api-Key" : "<AUTH_TOKEN>"}
  ...
</match>
Filebeat
Update or add the api_key field within the output section of the Filebeat configuration:
output.elasticsearch:
  hosts: ["http://<ingress-ip>:80/ingester/api/v1/filebeat"]
  api_key: "<AUTH_TOKEN>"
OTLP Collector for Metrics/Logs/Traces
Update or add the following headers section in the exporters section:
exporters:
  otlphttp:
    endpoint: https://<ingress-address>/ingester/otlp/metrics
    traces_endpoint: https://<ingress-address>/ingester/otlp/traces
    headers:
      kf-api-key: <AUTH_TOKEN>
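For context, the exporter settings above sit inside a complete OpenTelemetry Collector configuration. The sketch below shows one possible minimal layout, assuming an OTLP receiver and metrics/traces pipelines; only the otlphttp exporter settings come from this guide, the rest is a plausible surrounding configuration:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlphttp:
    endpoint: https://<ingress-address>/ingester/otlp/metrics
    traces_endpoint: https://<ingress-address>/ingester/otlp/traces
    headers:
      kf-api-key: <AUTH_TOKEN>

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
```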
DD/Kfuse agent
Update or add the AUTH_TOKEN as the apiKey in the dd-agent configuration file, as shown below:
datadog:
  apiKey: <AUTH_TOKEN>
  ...
AWS CloudWatch metrics & Logs (Kinesis)
When configuring the Kinesis Firehose data stream that sends logs and metrics from CloudWatch, use the AUTH_TOKEN value generated in step 1 as the access token. If the Firehose data stream is already set up, update it to use the AUTH_TOKEN value as the access token.
AWS Eventbridge Events
When configuring EventBridge to ingest into Kloudfuse, use AUTH_TOKEN as the value for Kf_Api_Key.
Step 3: Configure kfuse
Use the base64-encoded value of the AUTH_TOKEN (AUTH_TOKEN_ENCODED) to create a Kubernetes secret named kfuse-auth-ingest:
apiVersion: v1
kind: Secret
metadata:
  name: kfuse-auth-ingest
type: Opaque
data:
  authToken: <AUTH_TOKEN_ENCODED>
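Alternatively, the same secret can be created directly with kubectl. Note that kubectl create secret base64-encodes the literal value itself, so pass the raw AUTH_TOKEN here, not AUTH_TOKEN_ENCODED (this sketch assumes the secret belongs in the namespace of your current kubectl context):

```shell
kubectl create secret generic kfuse-auth-ingest \
  --from-literal=authToken="$AUTH_TOKEN"
```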
Update the custom-values.yaml file to include the following in the ingester config section:
ingester:
  config:
    authConfig:
      enabled: true
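After updating custom-values.yaml, apply the change to the running deployment. Assuming Kfuse is installed via Helm, an upgrade with the updated values file would look like the following; the release and chart names are placeholders, use the ones from your installation:

```shell
helm upgrade <release-name> <kfuse-chart> -f custom-values.yaml
```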