Collection of AWS CloudWatch Logs and Metrics
- 1 CloudWatch Metrics
- 1.1 Setup AWS Kinesis Firehose
- 1.2 Setup AWS CloudWatch Metrics Stream
- 1.3 Enable AutoScaling Group Metrics
- 1.4 Enable Collection of Request Metrics in S3
- 1.5 Enable Enrichment of AWS Metrics
- 1.5.1 Step 1: Modify the values.yaml
- 1.5.2 Step 2: Create an IAM scraper role in the AWS account where the services are running
- 1.5.3 Step 3: Use one of the following options for Kfuse to consume the role created above
- 1.5.4 Step 4: Modify the node-group IAM role on which Kloudfuse is running
- 1.5.5 Step 5: Helm upgrade
- 2 CloudWatch Logs
- 3 CloudTrail events as logs
- 4 Supported AWS Components
- 5 AWS Namespaces
AWS requires an external endpoint with HTTPS. The default installation of Kloudfuse enables an ingress service with an external IP, but without HTTPS support. Refer to HTTPS/TLS Setup on Kloudfuse Stack for setting up HTTPS.
CloudWatch Metrics
Setup AWS Kinesis Firehose
In the account that emits the metrics, in the Kinesis Firehose AWS console, create a new delivery stream.
You should not use the same Firehose for logs and metrics.
- Select Direct PUT as the source.
- Select HTTP Endpoint as the destination.
- In the destination settings, use the external facing endpoint of the Kfuse cluster and provide the following URL:
https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/metrics
- Optionally, use the “access token key” if needed.
- In the Content encoding section, select GZIP.
- Provide an existing S3 bucket or create a new one for storing Kinesis records as a backup. The default of only backing up failed data should suffice.
- Change the name of the stream if necessary.
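If you prefer to script this step, the following AWS CLI sketch creates an equivalent delivery stream; the stream name, backup bucket, and backup IAM role ARN are placeholders, and the endpoint URL is the one shown above.

# Sketch: Direct PUT delivery stream with an HTTP endpoint destination (Kfuse).
aws firehose create-delivery-stream \
  --delivery-stream-name kfuse-cloudwatch-metrics \
  --delivery-stream-type DirectPut \
  --http-endpoint-destination-configuration '{
    "EndpointConfiguration": {
      "Name": "kfuse-metrics",
      "Url": "https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/metrics"
    },
    "RequestConfiguration": { "ContentEncoding": "GZIP" },
    "S3BackupMode": "FailedDataOnly",
    "S3Configuration": {
      "RoleARN": "arn:aws:iam::<account-id>:role/<firehose-backup-role>",
      "BucketARN": "arn:aws:s3:::<backup-bucket>"
    }
  }'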
Setup AWS CloudWatch Metrics Stream
In the account that emits the metrics, in the CloudWatch AWS console, go to the Metrics section on the left side of the console, select Streams, and create a metric stream.
- Select the metric namespaces to send to the stream (the default is all metrics).
- In the configuration section, choose Select an existing Firehose owned by your account and select the previously created Kinesis Firehose.
- Under Change Output Format, make sure to select JSON for the output format. Kfuse currently only supports the JSON format.
- Change the name of the stream if necessary.
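The same stream can be created with the AWS CLI; the stream name, Firehose ARN, and the IAM role that allows CloudWatch to write to the Firehose are placeholders.

# Sketch: create the metric stream that feeds the Kfuse Firehose.
aws cloudwatch put-metric-stream \
  --name kfuse-metric-stream \
  --firehose-arn arn:aws:firehose:<region>:<account-id>:deliverystream/<kfuse-metrics-firehose> \
  --role-arn arn:aws:iam::<account-id>:role/<metric-stream-to-firehose-role> \
  --output-format json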
Enable AutoScaling Group Metrics
In the account that emits the metrics,
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/, and choose Auto Scaling Groups from the navigation pane.
Select the check box next to your Auto Scaling group.
A split pane opens up at the bottom of the page.
On the Monitoring tab, under Auto Scaling, select the Enable check box for Auto Scaling group metrics collection.
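Group metrics can also be enabled with the AWS CLI; the group name is a placeholder.

# Sketch: enable collection of all Auto Scaling group metrics at 1-minute granularity.
aws autoscaling enable-metrics-collection \
  --auto-scaling-group-name <my-asg> \
  --granularity "1Minute"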
Enable Collection of Request Metrics in S3
In the account that emits the metrics, follow the instructions to enable the collection of request metrics for S3: Creating a CloudWatch metrics configuration for all the objects in your bucket - Amazon Simple Storage Service.
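As a scripted alternative, the following AWS CLI sketch enables request metrics for all objects in a bucket; the bucket name is a placeholder and "EntireBucket" is the conventional configuration ID used in the AWS documentation.

# Sketch: enable S3 request metrics for the whole bucket.
aws s3api put-bucket-metrics-configuration \
  --bucket <my-bucket> \
  --id EntireBucket \
  --metrics-configuration '{"Id": "EntireBucket"}'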
Enable Enrichment of AWS Metrics
The metrics sent by AWS CloudWatch to the Kinesis Firehose only include minimal labels. Kloudfuse supports attaching more labels (and also user-defined custom tags from the AWS console) to the ingested metrics. This is done by scraping AWS.
To enable enrichment of AWS metrics, follow these steps:
Step 1: Modify the values.yaml
Add the following configuration in the global section of the custom values.yaml:
global:
  enrichmentEnabled:
    - aws
Step 2: Create an IAM scraper role in the AWS account where the services are running
In the account that runs the services whose metrics need to be captured, create an IAM scraper role with the following policy attached so that Kloudfuse can scrape the additional labels from AWS. Refer to Define custom IAM permissions with customer managed policies - AWS Identity and Access Management for assistance.
"Action": [
"acm:ListCertificates",
"acm:ListTagsForCertificate",
"apigateway:GET",
"athena:ListWorkGroups",
"athena:ListTagsForResource",
"autoscaling:DescribeAutoScalingGroups",
"cloudwatch:ListMetrics",
"cloudwatch:GetMetricStatistics",
"dynamodb:ListTables",
"dynamodb:DescribeTable",
"dynamodb:ListTagsOfResource",
"ec2:DescribeInstances",
"ec2:DescribeInstanceStatus",
"ec2:DescribeSecurityGroups",
"ec2:DescribeNatGateways",
"ec2:DescribeVolumes",
"ecs:ListClusters",
"ecs:ListContainerInstances",
"ecs:ListServices",
"ecs:DescribeContainerInstances",
"ecs:DescribeServices",
"ecs:ListTagsForResource",
"elasticache:DescribeCacheClusters",
"elasticache:ListTagsForResource",
"elasticfilesystem:DescribeFileSystems",
"elasticfilesystem:DescribeBackupPolicy",
"elasticloadbalancing:DescribeTags",
"elasticloadbalancing:DescribeLoadBalancers",
"es:ListDomainNames",
"es:DescribeDomains",
"es:ListTags",
"events:ListRules",
"events:ListTagsForResource",
"events:ListEventBuses",
"firehose:DescribeDeliveryStream",
"firehose:ListDeliveryStreams",
"firehose:ListTagsForDeliveryStream",
"glue:ListJobs",
"glue:GetTags",
"kafka:ListTagsForResource",
"kafka:ListClustersV2",
"kinesis:ListStreams",
"kinesis:ListTagsForStream",
"kinesis:DescribeStream",
"lambda:GetPolicy",
"lambda:List*",
"lambda:ListTags",
"logs:DescribeLogGroups",
"logs:ListTagsLogGroup"
"mq:ListBrokers",
"mq:DescribeBroker",
"rds:DescribeDBInstances",
"rds:ListTagsForResource",
"rds:DescribeEvents",
"redshift:DescribeClusters",
"redshift:DescribeTags",
"route53:ListHealthChecks",
"route53:ListTagsForResource",
"s3:ListAllMyBuckets",
"s3:GetBucketTagging",
"sns:ListTagsForResource",
"sns:ListTopics",
"sqs:ListQueues",
"sqs:ListQueueTags",
"wafv2:ListWebACLs",
"wafv2:ListRuleGroups",
"wafv2:ListTagsForResource"
      ]
    }
  ]
}
Modify the trust relationship of the scraper role to add the node-group role (Node IAM Role ARN) on which Kloudfuse is running as the Principal for the account.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Statement1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNT-NUMBER:role/eksctl-XXXXX-nodegroup-ng-XXXXXX-NodeInstanceRole-XXXXXXXXXX"
},
"Action": "sts:AssumeRole"
}
]
}
Make sure the permissions are mapped to the correct node pool used by the EKS cluster where Kloudfuse is hosted.
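If you create the role from the command line instead of the console, the following sketch shows equivalent calls; the role name, policy name, and file names are placeholders, and the files contain the trust relationship and permissions shown above.

# Sketch: create the scraper role with the trust relationship above,
# then attach the scraper permissions as an inline policy.
aws iam create-role \
  --role-name kfuse-aws-scraper \
  --assume-role-policy-document file://scraper-trust-policy.json

aws iam put-role-policy \
  --role-name kfuse-aws-scraper \
  --policy-name kfuse-aws-scraper-policy \
  --policy-document file://scraper-permissions.json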
Step 3: Use one of the following options for Kfuse to consume the role created above
Option 1: Add your AWS credentials as a secret and use the secret in the ingester config.
- You can retrieve the AWS credentials required for the next step here.
- Create a Kubernetes secret named “aws-access-key” with the keys “accessKey” and “secretKey” in the kfuse namespace (a sketch follows this list).
- Specify the secretName in the custom values.yaml.
- Kfuse by default attempts to scrape all regions and all AWS namespaces. This can be customized by adding the corresponding configuration in the custom values.yaml (also sketched below).
- Do a helm upgrade for the changes to take effect.
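A minimal sketch of this option follows. The secret name, keys, and namespace come from the steps above; the credential values are placeholders.

# Create the secret that holds the AWS credentials.
kubectl create secret generic aws-access-key \
  --namespace kfuse \
  --from-literal=accessKey=<AWS_ACCESS_KEY_ID> \
  --from-literal=secretKey=<AWS_SECRET_ACCESS_KEY>

The values.yaml snippet below is only illustrative: secretName is referenced above, but its exact location in the chart and the names of the region/namespace filter keys are assumptions, so verify them against your chart version.

# Hypothetical values.yaml sketch: reference the secret and narrow the scrape scope.
# Key paths other than secretName are assumptions, not authoritative.
ingester:
  config:
    awsScraper:
      secretName: aws-access-key
      regions:
        - us-east-1
      namespaces:
        - AWS/EC2
        - AWS/RDS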
Option 2: Add Role ARNs in the ingester config.
With this option, Kfuse can be configured to scrape multiple AWS accounts.
- Add the scraper Role ARNs (created with the permissions above) to the awsRoleArns list in your custom values.yaml (a sketch follows this list).
- Kfuse by default attempts to scrape all regions and all AWS namespaces. This can be customized by adding the corresponding configuration in the custom values.yaml, as in the Option 1 sketch above.
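A hypothetical sketch of the awsRoleArns entry; the key path is an assumption and the ARNs are placeholders. The region/namespace filters from the Option 1 sketch can be combined with this list.

# Hypothetical values.yaml sketch: scrape roles in one or more AWS accounts.
ingester:
  config:
    awsScraper:
      awsRoleArns:
        - arn:aws:iam::<account-id-1>:role/kfuse-aws-scraper
        - arn:aws:iam::<account-id-2>:role/kfuse-aws-scraper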
Step 4: Modify the node-group IAM role on which Kloudfuse is running
The node-group role (Node IAM Role ARN) on which Kloudfuse is running also needs a permissions policy that allows it to assume the scraper role.
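A minimal sketch of such a policy, assuming the scraper role name used above; the account number is a placeholder.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<account-id>:role/kfuse-aws-scraper"
    }
  ]
}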
Step 5: Helm upgrade
Do a helm upgrade for the changes to take effect.
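For example (the release name and chart reference are placeholders for your installation):

# Sketch: re-apply the chart with the updated custom values.
helm upgrade <release-name> <kfuse-chart> \
  --namespace kfuse \
  --values custom_values.yaml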
CloudWatch Logs
Setup AWS Kinesis Firehose
In the account that emits the logs, in the Kinesis Firehose AWS console, create another Firehose delivery stream for logs.
- Select Direct PUT as the source.
- Select HTTP Endpoint as the destination.
- In the destination settings, use the external facing endpoint of the Kfuse cluster and provide the following URL:
https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/logs
- Optionally, use the “access token key” if needed.
- In the Content encoding section, select GZIP.
- Provide an existing S3 bucket or create a new one for storing Kinesis records as a backup. The default of only backing up failed data should suffice.
Create IAM Role to allow CloudWatch Logs to write to Kinesis Firehose
In the account that emits the logs, in the IAM AWS Console, under Roles, select Create Role.
- Select Custom Trust Policy and add the trust policy that allows CloudWatch Logs to assume the role (replace the region and AWS account accordingly; a sketch follows this list).
- Click Next to go to the Add Permissions page and select Create Policy (this opens a new window).
- Select JSON and add the permissions policy (the sketch after this list allows all Firehose streams in the same account; adjust accordingly if granting permission to only a specific Firehose).
- Go back to the Roles page and select the created permissions policy. Click Create Role.
- Name and create the new role.
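A sketch of the trust policy, following the standard pattern for CloudWatch Logs subscription filters; the region and account number are placeholders.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "logs.<region>.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "aws:SourceArn": "arn:aws:logs:<region>:<account-id>:*"
        }
      }
    }
  ]
}

And a sketch of the permissions policy that allows writing to any Firehose stream in the same account:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:<region>:<account-id>:*"
    }
  ]
}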
Setup CloudWatch logs subscriptions
In the account that emits the logs, in the Cloudwatch AWS console, perform these steps:
- Navigate to the Logs section on the left side of the console, and select Log Groups.
- Go to the log group that will be sent to the Kinesis Firehose.
- Go to Actions → Subscription Filters → Create Kinesis Firehose subscription filter.
- In the Kinesis Firehose delivery stream section, select the previously created Kinesis Firehose for logs.
- In the Grant Permission section, select the previously created role.
- Provide a Subscription filter pattern (or leave it blank if streaming everything).
- Provide a Subscription filter name (required; it can be anything).
- Select Start Streaming.
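The same subscription can be created with the AWS CLI; the log group, delivery stream, and role names are placeholders, and an empty filter pattern streams everything.

# Sketch: subscribe a log group to the Kfuse logs Firehose.
aws logs put-subscription-filter \
  --log-group-name <my-log-group> \
  --filter-name kfuse-logs \
  --filter-pattern "" \
  --destination-arn arn:aws:firehose:<region>:<account-id>:deliverystream/<kfuse-logs-firehose> \
  --role-arn arn:aws:iam::<account-id>:role/<cloudwatch-logs-to-firehose-role>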
Enable Enrichment of AWS Logs
In the account where you installed Kloudfuse to capture the logs, perform these steps.
Similar to CloudWatch metrics, CloudWatch logs sent to the Kinesis Firehose only include minimal labels. Kloudfuse supports attaching user-defined custom tags of log groups to the ingested logs. This is done by scraping AWS.
To enable log enrichment, follow the same steps as enrichment for metrics, and add the log-group permissions to the scraper policy (see below).
Also, specify the AWS/Logs namespace in the ingester config.
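Assuming the log-group actions already listed in the scraper policy above are the ones required here, the relevant permissions would be:

"logs:DescribeLogGroups",
"logs:ListTagsLogGroup"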
CloudTrail events as logs
CloudTrail events can also be ingested by Kloudfuse as logs. Refer to Sending events to CloudWatch Logs - AWS CloudTrail, then add a CloudWatch Logs subscription for that log group as described above.
Supported AWS Components
Component | Dashboards | Alerts |
---|---|---|
Autoscaling Group | Yes | |
AmazonMQ (ActiveMQ) | Yes | |
AmazonMQ (RabbitMQ) | Coming soon | |
ApplicationELB | Yes | |
ACM | Yes | |
EBS | Yes | |
EC2 | Yes | Yes |
EFS | Yes | |
ElastiCache (Memcache) | Yes | |
ElastiCache (Redis) | Coming soon | |
ELB | Yes | |
Firehose | Yes | |
Lambda | Yes | Yes |
NetworkELB | Yes | |
RDS | Yes | Yes |
Redshift | Yes | |
S3 | Yes | |
SNS | Yes | |
SQS | Yes | Yes |
OpenSearch | Yes | |
DynamoDB | Yes | |
API Gateway | Yes | |
Glue | Yes | |
Athena | Yes | |
ECS | Yes | |
EventBridge | Yes | |
Kafka | Yes | |
Log Groups | NA | |
AWS Namespaces
Component | Namespace |
---|---|
AmazonMQ (ActiveMQ) | AWS/AmazonMQ |
ApplicationELB | AWS/ApplicationELB |
ACM | AWS/CertificateManager |
Route 53 | AWS/Route53 |
EBS | AWS/EBS |
EC2 | AWS/EC2 |
EFS | AWS/EFS |
ElastiCache | AWS/ElastiCache |
ELB | AWS/ELB |
Firehose | AWS/Firehose |
Lambda | AWS/Lambda |
NetworkELB | AWS/NetworkELB |
RDS | AWS/RDS |
Redshift | AWS/Redshift |
S3 | AWS/S3 |
SNS | AWS/SNS |
SQS | AWS/SQS |
OpenSearch | AWS/ES |
DynamoDB | AWS/DynamoDB |
API Gateway | AWS/ApiGateway |
Glue | AWS/Glue |
Athena | AWS/Athena |
ECS | AWS/ECS |
EventBridge | AWS/Events |
Kafka | AWS/Kafka |
WAF | AWS/WAFV2 |
Log Groups | AWS/Logs |