Collection of AWS CloudWatch Logs and Metrics

AWS requires an external HTTPS endpoint. The default Kloudfuse installation enables an ingress service with an external IP, but without HTTPS support.

Refer to the Kloudfuse HTTPS setup documentation for enabling HTTPS.

CloudWatch Metrics

Setup AWS Kinesis Firehose

  • Go to the Kinesis Firehose AWS console and create a new delivery stream.

    • Select Direct PUT as the source

    • Select HTTP Endpoint as the destination

    • In the destination settings, use the externally facing endpoint of the Kfuse cluster and provide the following URL: https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/metrics

    • Optionally use the “access token key” if needed.

    • In the Content encoding section, select GZIP

    • Provide an existing S3 bucket or create a new one for storing Kinesis records as a backup. The default of only backing up failed data should suffice.

    • Change the name of the stream if necessary.

Setup AWS CloudWatch Metrics Stream

  • Go to the Cloudwatch AWS console

  • On the Metrics section on the left side of the console, select Streams and create a metric stream

    • Select the metric namespaces to send to the stream (default is all metrics)

    • In the configuration section, choose Select an existing Firehose owned by your account and pick the previously created Kinesis Firehose.

    • Under Change output format, make sure to select JSON. Kfuse currently supports only the JSON format.

    • Change the name of the stream if necessary.
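Once the stream is running, Firehose delivers records whose payloads are base64-encoded and (with the GZIP content encoding selected above) compressed; each record carries newline-delimited JSON metric objects. The following Python sketch illustrates decoding one such record; the sample values are made up, and the field layout follows the documented CloudWatch Metric Streams JSON output format:

```python
import base64
import gzip
import json

def decode_firehose_record(data_b64: bytes) -> list:
    """Decode one base64-encoded, GZIP-compressed Firehose record into
    the newline-delimited JSON metric objects that Metric Streams emit."""
    raw = gzip.decompress(base64.b64decode(data_b64))
    return [json.loads(line) for line in raw.splitlines() if line.strip()]

# Sample record shaped like the CloudWatch Metric Streams JSON output
# (values are made up for illustration).
sample = {
    "metric_stream_name": "kfuse-metric-stream",
    "account_id": "123456789012",
    "region": "us-east-1",
    "namespace": "AWS/EC2",
    "metric_name": "CPUUtilization",
    "dimensions": {"InstanceId": "i-0123456789abcdef0"},
    "timestamp": 1700000000000,
    "value": {"max": 9.1, "min": 1.2, "sum": 14.5, "count": 3.0},
    "unit": "Percent",
}
payload = base64.b64encode(gzip.compress(json.dumps(sample).encode()))
print(decode_firehose_record(payload)[0]["metric_name"])  # CPUUtilization
```

Each JSON object carries the namespace, metric name, dimensions, and a statistics set (min/max/sum/count) rather than raw datapoints.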

Enable AutoScaling Group Metrics

  • Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/, and choose Auto Scaling Groups from the navigation pane.

  • Select the check box next to your Auto Scaling group.

  • A split pane opens up at the bottom of the page.

  • On the Monitoring tab, under Auto Scaling, select the Enable check box for Auto Scaling group metrics collection, located at the top of the page.

Enable Collection of Request Metrics in S3

Enable Enrichment of AWS Metrics

The metrics sent by AWS CloudWatch to the Kinesis Firehose only include minimal labels. Kloudfuse supports attaching more labels (and also user-defined custom tags from the AWS console) to the ingested metrics. This is done by scraping AWS.

To enable enrichment of AWS metrics, follow these steps:

  1. Add the following configuration in the global section of the custom values.yaml:

     global:
       enrichmentEnabled:
         - aws

  2. Since Kfuse needs to scrape the additional labels from AWS to attach to the metrics, Kfuse requires a policy with the following permissions:

"Action": [
    "acm:ListCertificates",
    "acm:ListTagsForCertificate",
    "autoscaling:DescribeAutoScalingGroups",
    "cloudwatch:ListMetrics",
    "cloudwatch:GetMetricStatistics",
    "ec2:DescribeInstances",
    "ec2:DescribeInstanceStatus",
    "ec2:DescribeSecurityGroups",
    "ec2:DescribeNatGateways",
    "ec2:DescribeVolumes",
    "elasticache:DescribeCacheClusters",
    "elasticache:ListTagsForResource",
    "elasticfilesystem:DescribeFileSystems",
    "elasticfilesystem:DescribeBackupPolicy",
    "elasticloadbalancing:DescribeTags",
    "elasticloadbalancing:DescribeLoadBalancers",
    "es:ListDomainNames",
    "es:DescribeDomains",
    "es:ListTags",
    "firehose:DescribeDeliveryStream",
    "firehose:ListDeliveryStreams",
    "firehose:ListTagsForDeliveryStream",
    "lambda:GetPolicy",
    "lambda:List*",
    "lambda:ListTags",
    "mq:ListBrokers",
    "mq:DescribeBroker",
    "rds:DescribeDBInstances",
    "rds:ListTagsForResource",
    "rds:DescribeEvents",
    "redshift:DescribeClusters",
    "redshift:DescribeTags",
    "route53:ListHealthChecks",
    "route53:ListTagsForResource",
    "s3:ListAllMyBuckets",
    "s3:GetBucketTagging",
    "sns:ListTagsForResource",
    "sns:ListTopics",
    "sqs:ListQueues",
    "sqs:ListQueueTags",
    "dynamodb:ListTables",
    "dynamodb:DescribeTable",
    "dynamodb:ListTagsOfResource",
    "apigateway:GET",
    "glue:ListJobs",
    "glue:GetTags",
    "athena:ListWorkGroups",
    "athena:ListTagsForResource",
    "wafv2:ListWebACLs",
    "wafv2:ListRuleGroups",
    "wafv2:ListTagsForResource",
    "ecs:ListClusters",
    "ecs:ListContainerInstances",
    "ecs:ListServices",
    "ecs:DescribeContainerInstances",
    "ecs:DescribeServices",
    "ecs:ListTagsForResource",
    "events:ListRules",
    "events:ListTagsForResource",
    "events:ListEventBuses",
    "kafka:ListTagsForResource",
    "kafka:ListClustersV2"
]

Make sure these permissions are mapped to the correct node pool used by the EKS cluster where Kloudfuse is hosted.

Step 3.1: Create an IAM scraper role with a policy that allows scraping of AWS labels.

Follow the instructions on the AWS page to create an IAM policy: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html

Step 3.2: Use one of the following options for Kfuse to consume the policy created above.

Option 1: Add your AWS credentials as a secret and use the secret in the ingester config.

You can retrieve the AWS access key ID and secret access key required for the next step from the AWS IAM console.

Create a Kubernetes secret named “aws-access-key” with keys “accessKey” and “secretKey” in the kfuse namespace:

kubectl create secret generic aws-access-key --from-literal=accessKey=<AWS_ACCESS_KEY_ID> --from-literal=secretKey=<AWS_SECRET_ACCESS_KEY>

Specify the secretName in the custom values.yaml.
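A minimal sketch of the values.yaml change follows. The nesting under `ingester` is an assumption based on the ingester config mentioned above; consult the Kloudfuse values reference for the exact key path in your chart version.

```yaml
ingester:
  config:
    secretName: aws-access-key   # the secret created above in the kfuse namespace
```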

By default, Kfuse attempts to scrape all regions and all AWS namespaces. This can be customized in the custom values.yaml.
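As an illustration only, a region/namespace restriction might look like the following. The key names here are hypothetical placeholders, not the chart's actual schema; check the Kloudfuse values reference for the real keys.

```yaml
ingester:
  config:
    awsScraper:
      regions:            # hypothetical key: limit scraping to these regions
        - us-east-1
        - us-west-2
      namespaces:         # hypothetical key: limit scraping to these namespaces
        - AWS/EC2
        - AWS/RDS
```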

Do a helm upgrade for the changes to take effect.

Option 2: Add Role ARNs in the ingester config.

With this option, Kfuse can be configured to scrape multiple AWS accounts.

Add the scraper Role ARNs (created with the permissions above) to the awsRoleArns list in your custom values.yaml.
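A sketch of the awsRoleArns entry follows; the nesting under `ingester` is an assumption based on the ingester config mentioned above, and the account IDs and role names are placeholders.

```yaml
ingester:
  config:
    awsRoleArns:
      - arn:aws:iam::111111111111:role/kfuse-scraper   # placeholder account/role
      - arn:aws:iam::222222222222:role/kfuse-scraper   # one ARN per AWS account
```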

By default, Kfuse attempts to scrape all regions and all AWS namespaces. This can be customized in the custom values.yaml.

If needed, modify the trust relationship of the scraper role ARN to add the node-group (Node IAM Role ARN) on which Kloudfuse is running as the Principal for the account.

The node-group (Node IAM Role ARN) on which Kloudfuse is running also needs to have a corresponding trust relationship.
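As a sketch of the standard cross-account AssumeRole pattern (the account ID and role name are placeholders), the scraper role's trust policy would allow the node-group role to assume it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACCOUNT_ID>:role/<NODE_IAM_ROLE_NAME>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```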

 

Do a helm upgrade for the changes to take effect.

CloudWatch Logs

Setup AWS Kinesis Firehose

Note that a different Firehose is needed for logs.

  • Go to the Kinesis Firehose AWS console and create a new delivery stream.

    • Select Direct PUT as the source

    • Select HTTP Endpoint as the destination

    • In the destination settings, use the externally facing endpoint of the Kfuse cluster and provide the following URL: https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/logs

    • Optionally use the “access token key” if needed

    • In the Content encoding section, select GZIP

    • Provide an existing S3 bucket or create a new one for storing Kinesis records as a backup. The default of only backing up failed data should suffice.

Create IAM Role to allow CloudWatch Logs to write to Kinesis Firehose

  • Go to the IAM AWS console

  • Go to Roles and select Create Role

    • Select Custom Trust Policy and add the following (replace the region and AWS account ID accordingly):
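A sketch based on the standard AWS trust policy for letting CloudWatch Logs assume a role; the region and account ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "logs.<region>.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "aws:SourceArn": "arn:aws:logs:<region>:<account-id>:*"
        }
      }
    }
  ]
}
```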

       

    • Click Next to go to the Add permissions page and select Create policy (this opens a new window).

      • Select JSON and add the following (note: this allows all Firehose streams in the same account; adjust accordingly if granting permission to only a specific Firehose):
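A sketch based on the standard AWS permissions policy for writing to Firehose; the region and account ID are placeholders, and the `*` resource covers all streams in the account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:<region>:<account-id>:*"
    }
  ]
}
```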

         

    • Go back to the roles page and select the created permissions policy. Click “Create Role”.

    • Name and create the new role.

Setup CloudWatch logs subscriptions

  • Go to the Cloudwatch AWS console

  • On the Logs section on the left side of the console, select Log Groups

    • Go to the Log group that will be sent to the Kinesis Firehose.

    • Go to Actions > Subscription filters > Create Kinesis Firehose subscription filter

      • In the Kinesis Firehose delivery stream section, select the previously created Kinesis Firehose for Logs.

      • In the Grant Permission section, select the previously created role.

      • Provide a Subscription filter pattern (or leave it blank if streaming everything)

      • Provide Subscription filter name (required step, can be anything)

      • Select Start Streaming
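On the ingest side, each Firehose record produced by a logs subscription carries a GZIP-compressed JSON envelope in the documented CloudWatch Logs subscription filter format. A hedged Python sketch (field values are made up for illustration):

```python
import base64
import gzip
import json

def decode_log_subscription_record(data_b64: bytes) -> list:
    """Decode one Firehose record carrying a CloudWatch Logs subscription
    envelope and return the contained log messages."""
    envelope = json.loads(gzip.decompress(base64.b64decode(data_b64)))
    if envelope.get("messageType") != "DATA_MESSAGE":
        return []  # CONTROL_MESSAGE records carry no log events
    return [event["message"] for event in envelope["logEvents"]]

# Envelope shaped like a CloudWatch Logs subscription filter delivery
# (values are made up for illustration).
envelope = {
    "messageType": "DATA_MESSAGE",
    "owner": "123456789012",
    "logGroup": "/aws/lambda/example",
    "logStream": "2024/01/01/[$LATEST]0123456789abcdef",
    "subscriptionFilters": ["kfuse-subscription-filter"],
    "logEvents": [
        {"id": "1", "timestamp": 1700000000000, "message": "START request"},
        {"id": "2", "timestamp": 1700000000100, "message": "END request"},
    ],
}
payload = base64.b64encode(gzip.compress(json.dumps(envelope).encode()))
print(decode_log_subscription_record(payload))  # ['START request', 'END request']
```

The envelope identifies the source log group and stream, which is how individual log events can be attributed after ingestion.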

 

CloudTrail Events as Logs

  • CloudTrail events can be delivered to a CloudWatch log group and ingested by Kloudfuse. Refer to the AWS documentation on sending CloudTrail events to CloudWatch Logs, then add a CloudWatch Logs subscription for that log group as described above.

Supported AWS Components

| Component              | Dashboards  | Alerts |
|------------------------|-------------|--------|
| Autoscaling Group      | Yes         |        |
| AmazonMQ (ActiveMQ)    | Yes         |        |
| AmazonMQ (RabbitMQ)    | Coming soon |        |
| ApplicationELB         | Yes         |        |
| ACM                    | Yes         |        |
| EBS                    | Yes         |        |
| EC2                    | Yes         | Yes    |
| EFS                    | Yes         |        |
| ElastiCache (Memcache) | Yes         |        |
| ElastiCache (Redis)    | Coming soon |        |
| ELB                    | Yes         |        |
| Firehose               | Yes         |        |
| Lambda                 | Yes         | Yes    |
| NetworkELB             | Yes         |        |
| RDS                    | Yes         | Yes    |
| Redshift               | Yes         |        |
| S3                     | Yes         |        |
| SNS                    | Yes         |        |
| SQS                    | Yes         | Yes    |
| OpenSearch             | Yes         |        |
| DynamoDB               | Yes         |        |
| API Gateway            | Yes         |        |
| Glue                   | Yes         |        |
| Athena                 | Yes         |        |
| ECS                    | Yes         |        |
| EventBridge            | Yes         |        |
| Kafka                  | Yes         |        |

AWS Namespaces

| Component           | Namespace              |
|---------------------|------------------------|
| AmazonMQ (ActiveMQ) | AWS/AmazonMQ           |
| ApplicationELB      | AWS/ApplicationELB     |
| ACM                 | AWS/CertificateManager |
| Route 53            | AWS/Route53            |
| EBS                 | AWS/EBS                |
| EC2                 | AWS/EC2                |
| EFS                 | AWS/EFS                |
| ElastiCache         | AWS/ElastiCache        |
| ELB                 | AWS/ELB                |
| Firehose            | AWS/Firehose           |
| Lambda              | AWS/Lambda             |
| NetworkELB          | AWS/NetworkELB         |
| RDS                 | AWS/RDS                |
| Redshift            | AWS/Redshift           |
| S3                  | AWS/S3                 |
| SNS                 | AWS/SNS                |
| SQS                 | AWS/SQS                |
| OpenSearch          | AWS/ES                 |
| DynamoDB            | AWS/DynamoDB           |
| API Gateway         | AWS/ApiGateway         |
| Glue                | AWS/Glue               |
| Athena              | AWS/Athena             |
| ECS                 | AWS/ECS                |
| EventBridge         | AWS/Events             |
| Kafka               | AWS/Kafka              |