AWS requires an external HTTPS endpoint. The default installation of Kloudfuse enables an ingress service with an external IP, but without HTTPS support.
Refer to HTTPS/TLS Setup on Kloudfuse Stack for instructions on setting up HTTPS.
CloudWatch Metrics
Setup AWS Kinesis Firehose
In the account that emits the metrics, in the Kinesis Firehose AWS console, create a new delivery stream.
You should not use the same Firehose for logs and metrics.
Select Direct PUT as the source.
Select HTTP Endpoint as the destination.
In the destination settings, use the external-facing endpoint of the Kfuse cluster and provide the following URL:
https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/metrics
Optionally use the “access token key” if needed.
In the Content encoding section, select GZIP.
Provide an existing S3 bucket or create a new one for storing Kinesis records as a backup. The default of only backing up failed data should suffice.
Change the name of the stream if necessary.
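The console steps above can also be scripted with the AWS CLI. The following is a minimal sketch, assuming an S3 backup bucket and an IAM role for the S3 backup already exist; the stream name, role name, and bucket name are placeholders.
# create a Direct PUT delivery stream that posts GZIP-encoded records to the Kfuse metrics endpoint
aws firehose create-delivery-stream \
  --delivery-stream-name kfuse-cloudwatch-metrics \
  --delivery-stream-type DirectPut \
  --http-endpoint-destination-configuration '{
    "EndpointConfiguration": {
      "Url": "https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/metrics",
      "Name": "kfuse-metrics"
    },
    "RequestConfiguration": {"ContentEncoding": "GZIP"},
    "S3BackupMode": "FailedDataOnly",
    "S3Configuration": {
      "RoleARN": "arn:aws:iam::<aws account number>:role/<firehose-backup-role>",
      "BucketARN": "arn:aws:s3:::<backup bucket name>"
    }
  }'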
Setup AWS CloudWatch Metrics Stream
In the account that emits the metrics, in the CloudWatch AWS console, go to the Metrics section on the left side of the console, select Streams, and create a metric stream.
Select the metric namespaces to send to the stream (the default is all metrics).
In the configuration section, choose Select an existing Firehose owned by your account and select the previously created Kinesis Firehose.
Under Change Output Format, make sure to select JSON for the output format. Kfuse currently only supports the JSON format.
Change the name of the stream if necessary.
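Equivalently, the metric stream can be created with the AWS CLI. This is a hedged sketch; the stream name, the Firehose ARN, and the IAM role that allows CloudWatch to write to the Firehose are placeholders you must supply.
# create a metric stream that writes all namespaces to the Kinesis Firehose in JSON format
aws cloudwatch put-metric-stream \
  --name kfuse-metric-stream \
  --firehose-arn arn:aws:firehose:<region>:<aws account number>:deliverystream/<firehose name> \
  --role-arn arn:aws:iam::<aws account number>:role/<metric-stream-role> \
  --output-format json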
Enable AutoScaling Group Metrics
In the account that emits the metrics:
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/, and choose Auto Scaling Groups from the navigation pane.
Select the check box next to your Auto Scaling group.
A split pane opens up at the bottom of the page.
On the Monitoring tab, under Auto Scaling, select the Enable check box for Auto Scaling group metrics collection at the top of the page.
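The same can be done with the AWS CLI; a minimal sketch (the group name is a placeholder, and omitting --metrics enables all group metrics):
# enable Auto Scaling group metrics collection at one-minute granularity
aws autoscaling enable-metrics-collection \
  --auto-scaling-group-name <your Auto Scaling group name> \
  --granularity "1Minute"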
Enable Collection of Request Metrics in S3
In the account that emits the metrics, follow the instructions to enable the collection of request metrics for S3: https://docs.aws.amazon.com/AmazonS3/latest/userguide/configure-request-metrics-bucket.html.
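Alternatively, request metrics can be enabled from the AWS CLI; a minimal sketch that creates a bucket-wide metrics configuration (the bucket name is a placeholder):
# enable S3 request metrics for the entire bucket
aws s3api put-bucket-metrics-configuration \
  --bucket <your bucket name> \
  --id EntireBucket \
  --metrics-configuration '{"Id": "EntireBucket"}'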
Enable Enrichment of AWS Metrics
The metrics sent by AWS CloudWatch to the Kinesis Firehose only include minimal labels. Kloudfuse supports attaching more labels (and also user-defined custom tags from the AWS console) to the ingested metrics. This is done by scraping AWS.
To enable enrichment of AWS metrics, follow these steps:
Add the following configuration in the global section of the custom values.yaml:
global:
  enrichmentEnabled:
    - aws
In the account that runs the services whose metrics need to be captured, create an IAM role with the following policy attached to it, so that Kloudfuse can scrape the additional labels from AWS.
"Action": [ "acm:ListCertificates", "acm:ListTagsForCertificate", "apigateway:GET", "athena:ListWorkGroups", "athena:ListTagsForResource", "autoscaling:DescribeAutoScalingGroups", "cloudwatch:ListMetrics", "cloudwatch:GetMetricStatistics", "dynamodb:ListTables", "dynamodb:DescribeTable", "dynamodb:ListTagsOfResource", "ec2:DescribeInstances", "ec2:DescribeInstanceStatus", "ec2:DescribeSecurityGroups", "ec2:DescribeNatGateways", "ec2:DescribeVolumes", "ecs:ListClusters", "ecs:ListContainerInstances", "ecs:ListServices", "ecs:DescribeContainerInstances", "ecs:DescribeServices", "ecs:ListTagsForResource", "elasticache:DescribeCacheClusters", "elasticache:ListTagsForResource", "elasticfilesystem:DescribeFileSystems", "elasticfilesystem:DescribeBackupPolicy", "elasticloadbalancing:DescribeTags", "elasticloadbalancing:DescribeLoadBalancers", "es:ListDomainNames", "es:DescribeDomains", "es:ListTags", "events:ListRules", "events:ListTagsForResource", "events:ListEventBuses", "firehose:DescribeDeliveryStream", "firehose:ListDeliveryStreams", "firehose:ListTagsForDeliveryStream", "glue:ListJobs", "glue:GetTags", "kafka:ListTagsForResource", "kafka:ListClustersV2", "kinesis:ListStreams", "kinesis:ListTagsForStream", "kinesis:DescribeStream", "lambda:GetPolicy", "lambda:List*", "lambda:ListTags", "logs:DescribeLogGroups", "logs:ListTagsLogGroup" "mq:ListBrokers", "mq:DescribeBroker", "rds:DescribeDBInstances", "rds:ListTagsForResource", "rds:DescribeEvents", "redshift:DescribeClusters", "redshift:DescribeTags", "route53:ListHealthChecks", "route53:ListTagsForResource", "s3:ListAllMyBuckets", "s3:GetBucketTagging", "sns:ListTagsForResource", "sns:ListTopics", "sqs:ListQueues", "sqs:ListQueueTags", "wafv2:ListWebACLs", "wafv2:ListRuleGroups", "wafv2:ListTagsForResource" ]
Make sure the permissions are mapped to the correct node pool used by the EKS cluster where Kloudfuse is hosted.
Step 3.1: Create an IAM scraper role with a policy that allows scraping of AWS labels.
Follow the instructions on the AWS page to create an IAM policy: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html
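If you prefer the CLI, here is a minimal sketch, assuming the Action list above has been saved, together with an Effect and Resource, as kfuse-scraper-policy.json; the policy name is a placeholder.
# create the scraper policy from a local policy document
aws iam create-policy \
  --policy-name kfuse-aws-scraper-policy \
  --policy-document file://kfuse-scraper-policy.json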
Step 3.2: Use one of the following options for Kfuse to consume the policy created above.
Option 1: Add your AWS credentials as a secret and use the secret in the ingester config.
You can retrieve the AWS credentials (access key ID and secret access key) required for the next step from the AWS IAM console.
Create a Kubernetes secret named “aws-access-key” with keys “accessKey” and “secretKey” in the kfuse namespace.
kubectl create secret generic aws-access-key --from-literal=accessKey=<AWS_ACCESS_KEY_ID> --from-literal=secretKey=<AWS_SECRET_ACCESS_KEY>
Specify the secretName in the custom values.yaml.
ingester:
  config:
    awsScraper:
      secretName: aws-access-key
Kfuse by default attempts to scrape from all regions and all AWS namespaces. This can be customized by adding the following configuration in the custom values.yaml:
ingester:
  config:
    awsScraper:
      secretName: aws-access-key
      namespaces:
        - <add namespace>
      regions:
        - <add region>
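For example, to restrict scraping to the AWS/EC2 and AWS/RDS namespaces in us-east-1 (example values only; use the namespaces listed in the table at the end of this page and your own regions):
ingester:
  config:
    awsScraper:
      secretName: aws-access-key
      namespaces:
        - AWS/EC2
        - AWS/RDS
      regions:
        - us-east-1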
Do a helm upgrade for the changes to take effect:
helm upgrade --create-namespace --install kfuse . -f <custom_values.yaml>
Option 2: Add Role ARNs in the ingester config.
With this option, Kfuse can be configured to scrape multiple AWS accounts.
Add the scraper Role ARNs (created with the permissions above) in the awsRoleArns list to your custom values.yaml
ingester:
  config:
    awsRoleArns:
      - role: <ADD ROLE ARN HERE>
Kfuse by default attempts to scrape from all regions and all AWS namespaces. This can be customized by adding the following configuration in the custom values.yaml:
ingester:
  config:
    awsRoleArns:
      - role: <ADD ROLE ARN HERE>
        namespaces:
          - <add namespace>
        regions:
          - <add region>
If needed, modify the Trust Relationship of the scraper role to add the node group (Node IAM Role ARN) on which Kloudfuse is running as the Principal on the account.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::ACCOUNT-NUMBER:role/eksctl-XXXXX-nodegroup-ng-XXXXXX-NodeInstanceRole-XXXXXXXXXX" }, "Action": "sts:AssumeRole" } ] }
The node group (Node IAM Role ARN) on which Kloudfuse is running also needs the following permissions policy to assume the role.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": <REPLACE SCRAPER ROLE ARN HERE> } ] }
Do a helm upgrade for the changes to take effect:
helm upgrade --create-namespace --install kfuse . -f <custom_values.yaml>
CloudWatch Logs
Setup AWS Kinesis Firehose
In the account that emits the logs, in the Kinesis Firehose AWS console, create another Firehose delivery stream for logs.
You should not use the same Firehose for logs and metrics.
Select Direct PUT as the source.
Select HTTP Endpoint as the destination.
In the destination settings, use the external-facing endpoint of the Kfuse cluster and provide the following URL:
https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/logs
Optionally use the “access token key” if needed.
In the Content encoding section, select GZIP.
Provide an existing S3 bucket or create a new one for storing Kinesis records as a backup. The default of only backing up failed data should suffice.
Create IAM Role to allow CloudWatch Logs to write to Kinesis Firehose
In the account that emits the logs, in the IAM AWS Console, under Roles, select Create Role.
Select Custom Trust Policy and add the following (replace the region and AWS account number accordingly):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.<region>.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "aws:SourceArn": "arn:aws:logs:<region>:<aws account number>:*"
        }
      }
    }
  ]
}
Click Next to go to the Add Permissions page and select Create Policy (this opens a new window).
Select JSON and add the following (note that this allows all Firehose streams in the same account; adjust accordingly if only adding permission to a specific Firehose):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:*"
      ],
      "Resource": [
        "arn:aws:firehose:<region>:<aws account number>:*"
      ]
    }
  ]
}
Go back to the role creation page and select the newly created permissions policy. Click Create Role.
Name and create the new role.
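The same role can be created with the AWS CLI; a minimal sketch, assuming the trust policy and permissions policy above have been saved as cwl-firehose-trust.json and cwl-firehose-permissions.json (the role and policy names are placeholders):
# create the role that CloudWatch Logs assumes, then attach the Firehose permissions inline
aws iam create-role \
  --role-name CWLtoKinesisFirehoseRole \
  --assume-role-policy-document file://cwl-firehose-trust.json

aws iam put-role-policy \
  --role-name CWLtoKinesisFirehoseRole \
  --policy-name cwl-to-firehose \
  --policy-document file://cwl-firehose-permissions.json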
Setup CloudWatch logs subscriptions
In the account that emits the logs, in the CloudWatch AWS console, perform these steps:
Navigate to the Logs section on the left side of the console, and select Log Groups.
Go to the Log group that will be sent to the Kinesis Firehose.
Go to Actions → Subscription Filters → Create Kinesis Firehose subscription filter.
In the Kinesis Firehose delivery stream section, select the previously created Kinesis Firehose for logs.
In the Grant Permission section, select the previously created role.
Provide a Subscription filter pattern (or leave it blank if streaming everything).
Provide a Subscription filter name (required; it can be anything).
Select Start Streaming.
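The subscription filter can also be created with the AWS CLI; a hedged sketch (the log group name, filter name, Firehose ARN, and role ARN are placeholders, and an empty filter pattern streams everything):
# subscribe a log group to the logs Firehose using the role created above
aws logs put-subscription-filter \
  --log-group-name <your log group> \
  --filter-name kfuse-logs \
  --filter-pattern "" \
  --destination-arn arn:aws:firehose:<region>:<aws account number>:deliverystream/<logs firehose name> \
  --role-arn arn:aws:iam::<aws account number>:role/<role created above>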
Enable Enrichment of AWS Logs
In the account where you installed Kloudfuse to capture the logs, perform these steps.
Similar to CloudWatch metrics, CloudWatch logs sent to the Kinesis Firehose only include minimal labels. Kloudfuse supports attaching user-defined custom tags of log groups to the ingested logs. This is done by scraping AWS.
To enable log enrichment, follow the same steps as enrichment for metrics, and add the following permissions:
"Action": [ "logs:DescribeLogGroups", "logs:ListTagsLogGroup" ]
Also, specify the AWS/Logs namespace in the ingester config.
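For example, if you are already restricting namespaces, add AWS/Logs to the list in the ingester config (shown here for the secret-based option; the same applies to the awsRoleArns option):
ingester:
  config:
    awsScraper:
      secretName: aws-access-key
      namespaces:
        - AWS/Logs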
CloudTrail events as logs
CloudTrail events can be sent to CloudWatch Logs and ingested by Kloudfuse as logs. Refer to https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html to deliver CloudTrail events to a CloudWatch log group, and then add a CloudWatch Logs subscription for that log group as described above.
Supported AWS Components
Component | Dashboards | Alerts |
---|---|---|
Autoscaling Group | Yes | |
AmazonMQ (ActiveMQ) | Yes | |
AmazonMQ (RabbitMQ) | Coming soon | |
ApplicationELB | Yes | |
ACM | Yes | |
EBS | Yes | |
EC2 | Yes | Yes |
EFS | Yes | |
ElastiCache (Memcached) | Yes | |
ElastiCache (Redis) | Coming soon | |
ELB | Yes | |
Firehose | Yes | |
Lambda | Yes | Yes |
NetworkELB | Yes | |
RDS | Yes | Yes |
Redshift | Yes | |
S3 | Yes | |
SNS | Yes | |
SQS | Yes | Yes |
OpenSearch | Yes | |
DynamoDB | Yes | |
API Gateway | Yes | |
Glue | Yes | |
Athena | Yes | |
ECS | Yes | |
EventBridge | Yes | |
Kafka | Yes | |
Log Groups | NA | |
AWS Namespaces
Component | Namespace |
---|---|
AmazonMQ (ActiveMQ) | AWS/AmazonMQ |
ApplicationELB | AWS/ApplicationELB |
ACM | AWS/CertificateManager |
Route 53 | AWS/Route53 |
EBS | AWS/EBS |
EC2 | AWS/EC2 |
EFS | AWS/EFS |
ElastiCache | AWS/ElastiCache |
ELB | AWS/ELB |
Firehose | AWS/Firehose |
Lambda | AWS/Lambda |
NetworkELB | AWS/NetworkELB |
RDS | AWS/RDS |
Redshift | AWS/Redshift |
S3 | AWS/S3 |
SNS | AWS/SNS |
SQS | AWS/SQS |
OpenSearch | AWS/ES |
DynamoDB | AWS/DynamoDB |
API Gateway | AWS/ApiGateway |
Glue | AWS/Glue |
Athena | AWS/Athena |
ECS | AWS/ECS |
EventBridge | AWS/Events |
Kafka | AWS/Kafka |
WAF | AWS/WAFV2 |
Log Groups | AWS/Logs |