Kloudfuse requires two extension layers to be added to the Lambda function.
Datadog Extension Layer
Kloudfuse is tested with version 33.
Layer ARN: arn:aws:lambda:us-west-2:464622532012:layer:Datadog-Extension:33

LambdaInsightsExtension
Kloudfuse is tested with version 21.
Layer ARN: arn:aws:lambda:us-west-2:580247275435:layer:LambdaInsightsExtension:21
The layers can be added from the AWS Lambda console of the function.
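Alternatively, the layers can be attached from the AWS CLI with `aws lambda update-function-configuration`. This is a sketch: `<FUNCTION_NAME>` is a placeholder, and note that `--layers` replaces the function's entire layer list, so include any layers already attached.

```shell
# Attach both extension layers to the function.
# <FUNCTION_NAME> is a placeholder for your Lambda function's name.
# Note: --layers overwrites the existing layer list, so list every layer
# the function should keep.
aws lambda update-function-configuration \
  --function-name <FUNCTION_NAME> \
  --layers \
    arn:aws:lambda:us-west-2:464622532012:layer:Datadog-Extension:33 \
    arn:aws:lambda:us-west-2:580247275435:layer:LambdaInsightsExtension:21
```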
Add the following environment variables in the Lambda configuration:

| Environment Variable | Value |
| --- | --- |
| DD_API_KEY | If authenticated ingest is enabled, provide the configured auth token. Otherwise, provide any string. |
| DD_APM_DD_URL | |
| DD_DD_URL | |
| DD_LOGS_CONFIG_LOGS_DD_URL | |
| DD_LOGS_CONFIG_LOGS_NO_SSL | false |
| DD_LOGS_CONFIG_USE_V2_API | false |
| DD_TRACE_ENABLED | true |
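The variables with fixed values can also be applied from the CLI. This is a sketch: `<FUNCTION_NAME>` and `<TOKEN_OR_ANY_STRING>` are placeholders, and the `DD_*_URL` variables (whose values depend on your Kloudfuse deployment) must be added to the same `Variables={...}` map. Note that `--environment` replaces all existing environment variables on the function.

```shell
# Set the environment variables from the table above in one call.
# --environment overwrites the function's existing variables, so include
# any variables the function already uses, plus the DD_*_URL values for
# your Kloudfuse deployment.
aws lambda update-function-configuration \
  --function-name <FUNCTION_NAME> \
  --environment 'Variables={DD_API_KEY=<TOKEN_OR_ANY_STRING>,DD_LOGS_CONFIG_LOGS_NO_SSL=false,DD_LOGS_CONFIG_USE_V2_API=false,DD_TRACE_ENABLED=true}'
```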
Lambda-related metrics are pushed to CloudWatch by default. Refer to the CloudWatch integration to push CloudWatch metrics to Kloudfuse.
Lambda-related events can be integrated with Kloudfuse using EventBridge. Refer to the EventBridge integration to push AWS events to Kloudfuse.
CloudTrail must be configured to send Lambda events to EventBridge.
Create a new Trail from the AWS CloudTrail console. In the Choose log events step, select Data events and Lambda as the Data event type.
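The equivalent of these console steps can be sketched with the CLI using `aws cloudtrail put-event-selectors`. `<TRAIL_NAME>` is a placeholder for the trail created above; the `"arn:aws:lambda"` value selects data events for all Lambda functions in the account.

```shell
# Enable Lambda data events on the trail.
# <TRAIL_NAME> is a placeholder; "arn:aws:lambda" matches all Lambda functions.
aws cloudtrail put-event-selectors \
  --trail-name <TRAIL_NAME> \
  --event-selectors '[{"ReadWriteType":"All","IncludeManagementEvents":true,"DataResources":[{"Type":"AWS::Lambda::Function","Values":["arn:aws:lambda"]}]}]'
```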
In custom_values.yaml, enable Lambda enrichment:

```yaml
ingester:
  config:
    awsScrapeLambdaConfigs: true
```
Since Kfuse needs to scrape Lambda configuration from AWS, it requires a policy with the following permissions:

```json
"Action": [
    "lambda:GetPolicy",
    "lambda:List*",
    "lambda:ListTags"
]
```
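Wrapped into a complete policy document, the permissions above might look like the following sketch; the `Sid` and the `"Resource": "*"` scope are illustrative, and you can restrict `Resource` to specific function ARNs if your security policy requires it.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "KfuseLambdaScraper",
            "Effect": "Allow",
            "Action": [
                "lambda:GetPolicy",
                "lambda:List*",
                "lambda:ListTags"
            ],
            "Resource": "*"
        }
    ]
}
```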
Please make sure the permissions are mapped to the correct node pool used by the EKS cluster where Kloudfuse is hosted.
Follow the instructions on the AWS page to create an IAM policy: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html
You can retrieve the AWS credentials required for the next step here.
Create a Kubernetes secret named “aws-access-key” with keys “accessKey” and “secretKey” in the kfuse namespace:

```shell
kubectl create secret generic aws-access-key \
  --namespace kfuse \
  --from-literal=accessKey=<AWS_ACCESS_KEY_ID> \
  --from-literal=secretKey=<AWS_SECRET_ACCESS_KEY>
```
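To confirm the secret was created with both keys, you can describe it; `kubectl describe` prints the key names and sizes without revealing the values.

```shell
# Expect accessKey and secretKey listed under Data, each with a byte count.
kubectl describe secret aws-access-key -n kfuse
```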
Specify the secretName in the custom_values.yaml:

```yaml
ingester:
  config:
    awsScraper:
      secretName: aws-access-key
```
By default, Kfuse attempts to scrape from all regions. This can be customized by adding the following configuration to the custom_values.yaml:

```yaml
ingester:
  config:
    awsScraper:
      secretName: aws-access-key
      regions:
        - <add region>
```
Do a helm upgrade for the changes to take effect:

```shell
helm upgrade --create-namespace --install kfuse . -f <custom_values.yaml>
```
With this option, Kfuse can be configured to scrape multiple AWS accounts.
Add the scraper role ARNs (created with the permissions above) to the awsRoleArns list in your custom_values.yaml:

```yaml
ingester:
  config:
    awsRoleArns:
      - role: <ADD ROLE ARN HERE>
```
By default, Kfuse attempts to scrape from all regions. This can be customized by adding the following configuration to the custom_values.yaml:

```yaml
ingester:
  config:
    awsRoleArns:
      - role: <ADD ROLE ARN HERE>
        regions:
          - <add region>
```
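For example, a multi-account setup might list one entry per scraper role. This sketch assumes the regions list is scoped per role; the role ARN placeholders and region names are illustrative.

```yaml
ingester:
  config:
    awsRoleArns:
      - role: <ROLE ARN FOR ACCOUNT A>
        regions:
          - us-west-2
      - role: <ROLE ARN FOR ACCOUNT B>
        regions:
          - us-east-1
```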
If needed, modify the trust relationship of the scraper role to add the node group (Node IAM Role ARN) on which Kloudfuse is running as the Principal on the account.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT-NUMBER:role/eksctl-XXXXX-nodegroup-ng-XXXXXX-NodeInstanceRole-XXXXXXXXXX"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```
The node group (Node IAM Role ARN) on which Kloudfuse is running also needs the following permissions policy to assume the role.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "<REPLACE SCRAPER ROLE ARN HERE>"
        }
    ]
}
```
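The assume-role policy above can be attached to the node role as an inline policy with `aws iam put-role-policy`. This is a sketch: the role name, policy name, and file name are placeholders, and `assume-role-policy.json` is assumed to contain the JSON document shown above.

```shell
# Attach the assume-role policy as an inline policy on the node instance role.
# <NODE_INSTANCE_ROLE_NAME> is the name (not ARN) of the node IAM role;
# assume-role-policy.json holds the policy document from above.
aws iam put-role-policy \
  --role-name <NODE_INSTANCE_ROLE_NAME> \
  --policy-name kfuse-assume-scraper-role \
  --policy-document file://assume-role-policy.json
```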
Do a helm upgrade for the changes to take effect:

```shell
helm upgrade --create-namespace --install kfuse . -f <custom_values.yaml>
```