
AWS requires an external endpoint with HTTPS. The default installation of Kloudfuse enables an ingress service with an external IP, but without HTTPS support.

Refer to HTTPS/TLS Setup on Kloudfuse Stack for setting up HTTPS.

CloudWatch Metrics

Setup AWS Kinesis Firehose

In the account that emits the metrics, in the Kinesis Firehose AWS console, create a new delivery stream.

You should not use the same Firehose for logs and metrics.

  1. Select Direct PUT as the source

  2. Select HTTP Endpoint as the destination

  3. In the destination settings, set the HTTP endpoint URL to the external-facing endpoint of the Kfuse cluster: https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/metrics

  4. Optionally use the “access token key” if needed.

  5. In the Content encoding section, select GZIP

  6. Provide an existing S3 bucket or create a new one for storing Kinesis records as a backup. The default of only backing up failed data should suffice.

  7. Change the name of the stream if necessary. (A CLI sketch of these steps follows below.)
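
If you prefer to script this setup, the console steps above map to a single aws firehose create-delivery-stream call. The following is a minimal sketch under assumed placeholder names (kfuse-metrics-stream, the backup bucket, and a Firehose delivery IAM role); adapt them to your environment.

# Minimal sketch of the console steps above; all names and ARNs are placeholders.
cat > http-endpoint-config.json <<'EOF'
{
  "EndpointConfiguration": {
    "Url": "https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/metrics",
    "Name": "kfuse-metrics"
  },
  "RequestConfiguration": { "ContentEncoding": "GZIP" },
  "S3BackupMode": "FailedDataOnly",
  "S3Configuration": {
    "RoleARN": "arn:aws:iam::<aws-account-id>:role/<firehose-delivery-role>",
    "BucketARN": "arn:aws:s3:::<backup-bucket-name>"
  }
}
EOF

aws firehose create-delivery-stream \
  --delivery-stream-name kfuse-metrics-stream \
  --delivery-stream-type DirectPut \
  --http-endpoint-destination-configuration file://http-endpoint-config.json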

Setup AWS CloudWatch Metrics Stream

In the account that emits the metrics, in the CloudWatch AWS console, go to the Metrics section on the left side of the console, select Streams, and create a metric stream.

  1. Select the metric namespaces to send to the stream (default is all metrics)

  2. In the configuration section, choose Select an existing Firehose owned by your account, and select the previously created Kinesis Firehose.

  3. Under Change output format, make sure to select JSON. Kfuse currently only supports the JSON output format.

  4. Change the name of the stream if necessary. (A CLI sketch of these steps follows below.)
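
These console steps can also be scripted with aws cloudwatch put-metric-stream. The sketch below uses placeholder names and ARNs; the role passed in --role-arn must allow firehose:PutRecord and firehose:PutRecordBatch on the delivery stream.

# Sketch: create a metric stream that writes to the Firehose created above.
aws cloudwatch put-metric-stream \
  --name kfuse-metric-stream \
  --firehose-arn arn:aws:firehose:<region>:<aws-account-id>:deliverystream/kfuse-metrics-stream \
  --role-arn arn:aws:iam::<aws-account-id>:role/<metric-stream-role> \
  --output-format json
# Optionally restrict namespaces instead of streaming all metrics, for example:
# --include-filters Namespace=AWS/EC2 Namespace=AWS/RDS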

Enable AutoScaling Group Metrics

In the account that emits the metrics,

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/, and choose Auto Scaling Groups from the navigation pane.

  2. Select the check box next to your Auto Scaling group.

  3. A split pane opens up at the bottom of the page.

  4. On the Monitoring tab, select the Auto Scaling group metrics collection, Enable check box located at the top of the page under Auto Scaling. (A CLI sketch follows below.)
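
The same setting can be enabled from the CLI. This is a sketch with a placeholder group name; omitting --metrics enables all group metrics.

# Enable Auto Scaling group metrics collection at one-minute granularity.
aws autoscaling enable-metrics-collection \
  --auto-scaling-group-name <your-auto-scaling-group> \
  --granularity "1Minute"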

Enable Collection of Request Metrics in S3

In the account where you installed Kloudfuse to capture the metrics, follow the instructions to enable the collection of request metrics for S3: https://docs.aws.amazon.com/AmazonS3/latest/userguide/configure-request-metrics-bucket.html
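
As a CLI alternative to the console instructions above, a filter-less metrics configuration enables request metrics for the whole bucket. The bucket name is a placeholder, and EntireBucket is simply a conventional configuration ID.

# Enable S3 request metrics for the entire bucket.
aws s3api put-bucket-metrics-configuration \
  --bucket <your-bucket-name> \
  --id EntireBucket \
  --metrics-configuration '{"Id": "EntireBucket"}'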

Enable Enrichment of AWS Metrics

In the account where you installed Kloudfuse to capture the metrics, perform these steps.

The metrics sent by AWS CloudWatch to the Kinesis Firehose only include minimal labels. Kloudfuse supports attaching more labels (and also user-defined custom tags from the AWS console) to the ingested metrics. This is done by scraping AWS.

To enable enrichment of AWS metrics, follow these steps:

  1. Add the following configuration in the global section of the custom values.yaml:

global:
  enrichmentEnabled:
    - aws
  2. Since Kfuse needs to scrape the additional labels from AWS to attach to the metrics, Kfuse requires a policy with the following permissions:

			"Action": [
				"acm:ListCertificates",
				"acm:ListTagsForCertificate",
				"apigateway:GET",
				"athena:ListWorkGroups",
				"athena:ListTagsForResource",
				"autoscaling:DescribeAutoScalingGroups",
				"cloudwatch:ListMetrics",
				"cloudwatch:GetMetricStatistics",
				"dynamodb:ListTables",
				"dynamodb:DescribeTable",
				"dynamodb:ListTagsOfResource",
				"ec2:DescribeInstances",
				"ec2:DescribeInstanceStatus",
				"ec2:DescribeSecurityGroups",
				"ec2:DescribeNatGateways",
				"ec2:DescribeVolumes",
				"ecs:ListClusters",
				"ecs:ListContainerInstances",
				"ecs:ListServices",
				"ecs:DescribeContainerInstances",
				"ecs:DescribeServices",
				"ecs:ListTagsForResource",
				"elasticache:DescribeCacheClusters",
				"elasticache:ListTagsForResource",
				"elasticfilesystem:DescribeFileSystems",
				"elasticfilesystem:DescribeBackupPolicy",
				"elasticloadbalancing:DescribeTags",
				"elasticloadbalancing:DescribeLoadBalancers",
				"es:ListDomainNames",
				"es:DescribeDomains",
				"es:ListTags",
				"events:ListRules",
				"events:ListTagsForResource",
				"events:ListEventBuses",
				"firehose:DescribeDeliveryStream",
				"firehose:ListDeliveryStreams",
				"firehose:ListTagsForDeliveryStream",
				"glue:ListJobs",
				"glue:GetTags",
				"kafka:ListTagsForResource",
				"kafka:ListClustersV2",
				"kinesis:ListStreams",
				"kinesis:ListTagsForStream",
				"kinesis:DescribeStream",
				"lambda:GetPolicy",
				"lambda:List*",
				"lambda:ListTags",
				"mq:ListBrokers",
				"mq:DescribeBroker",
				"rds:DescribeDBInstances",
				"rds:ListTagsForResource",
				"rds:DescribeEvents",
				"redshift:DescribeClusters",
				"redshift:DescribeTags",
				"route53:ListHealthChecks",
				"route53:ListTagsForResource",
				"s3:ListAllMyBuckets",
				"s3:GetBucketTagging",
				"sns:ListTagsForResource",
				"sns:ListTopics",
				"sqs:ListQueues",
				"sqs:ListQueueTags",
				"wafv2:ListWebACLs",
				"wafv2:ListRuleGroups",
				"wafv2:ListTagsForResource"
			]

Make sure the permissions are mapped to the correct node pool used by the EKS cluster where Kloudfuse is hosted.

Step 3.1: Create an IAM scraper role with a policy that allows scraping of AWS labels.

Follow the instructions on the AWS page to create an IAM policy: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html
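
As a CLI alternative, the policy can be created with aws iam create-policy. The sketch below abbreviates the Action array; replace it with the full permission list shown above. The policy name and the wildcard Resource are assumptions you may want to scope down.

# Sketch only: paste the full Action list from above into the Action array.
cat > kfuse-scraper-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:ListMetrics",
        "ec2:DescribeInstances",
        "s3:GetBucketTagging"
      ],
      "Resource": "*"
    }
  ]
}
EOF

aws iam create-policy \
  --policy-name kfuse-aws-scraper \
  --policy-document file://kfuse-scraper-policy.json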

Step 3.2: Use one of the following options for Kfuse to consume the policy created above.

Option 1: Add your AWS credentials as a secret and use the secret in the ingester config.

You can retrieve the AWS credentials required for the next step from your AWS account.

Create a Kubernetes secret named “aws-access-key” with keys “accessKey” and “secretKey” in the kfuse namespace:

kubectl create secret generic aws-access-key --from-literal=accessKey=<AWS_ACCESS_KEY_ID> --from-literal=secretKey=<AWS_SECRET_ACCESS_KEY>

Specify the secretName in the custom values.yaml.

ingester:
  config:
    awsScraper:
      secretName: aws-access-key

By default, Kfuse attempts to scrape all regions and all AWS namespaces. This can be customized by adding the following configuration in the custom values.yaml:

ingester:
  config:
    awsScraper:
      secretName: aws-access-key
      namespaces:
        - <add namespace>
      regions:
        - <add region>

Do a helm upgrade for the changes to take effect:

helm upgrade --create-namespace --install kfuse . -f <custom_values.yaml>
Option 2: Add Role ARNs in the ingester config.

With this option, Kfuse can be configured to scrape multiple AWS accounts.

Add the scraper role ARNs (created with the permissions above) to the awsRoleArns list in your custom values.yaml:

ingester:
  config:
    awsRoleArns:
      - role: <ADD ROLE ARN HERE>

By default, Kfuse attempts to scrape all regions and all AWS namespaces. This can be customized by adding the following configuration in the custom values.yaml:

ingester:
  config:
    awsRoleArns:
      - role: <ADD ROLE ARN HERE>
        namespaces:
          - <add namespace>
        regions:
          - <add region>

If needed, modify the Trust Relationship of the scraper role ARN to add the node group (Node IAM Role ARN) on which Kloudfuse is running as the Principal on the account.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT-NUMBER:role/eksctl-XXXXX-nodegroup-ng-XXXXXX-NodeInstanceRole-XXXXXXXXXX"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
The node group (Node IAM Role ARN) on which Kloudfuse is running also needs the following permissions policy to assume the role. (A CLI sketch for applying both policies follows below.)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": <REPLACE SCRAPER ROLE ARN HERE>
        } 
    ]
}
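
The two policy documents above can also be applied from the CLI. This is a sketch; the role names and the local file names (scraper-trust-policy.json, node-assume-role-policy.json) are placeholders.

# 1. Set the trust relationship on the scraper role so the node role can assume it.
aws iam update-assume-role-policy \
  --role-name <kfuse-scraper-role> \
  --policy-document file://scraper-trust-policy.json

# 2. Allow the EKS node instance role to call sts:AssumeRole on the scraper role.
aws iam put-role-policy \
  --role-name <node-instance-role> \
  --policy-name kfuse-assume-scraper-role \
  --policy-document file://node-assume-role-policy.json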

Do a helm upgrade for the changes to take effect:

helm upgrade --create-namespace --install kfuse . -f <custom_values.yaml>

CloudWatch Logs

Setup AWS Kinesis Firehose

In the account that emits the logs, in the Kinesis Firehose AWS console, create another Firehose delivery stream for logs.

You should not use the same Firehose for logs and metrics.

  1. Select Direct PUT as the source

  2. Select HTTP Endpoint as the destination

  3. In the destination settings, set the HTTP endpoint URL to the external-facing endpoint of the Kfuse cluster: https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/logs

  4. Optionally use the “access token key” if needed

  5. In the Content encoding section, select GZIP

  6. Provide an existing S3 bucket or create a new one for storing Kinesis records as a backup. The default of only backing up failed data should suffice. (A CLI sketch follows below.)
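
This mirrors the Firehose created for metrics; only the endpoint URL changes. A minimal CLI sketch, reusing the configuration pattern from the metrics section with placeholder names:

aws firehose create-delivery-stream \
  --delivery-stream-name kfuse-logs-stream \
  --delivery-stream-type DirectPut \
  --http-endpoint-destination-configuration file://http-endpoint-logs-config.json
# In http-endpoint-logs-config.json, set EndpointConfiguration.Url to
# https://<external facing endpoint of Kfuse cluster>/ingester/kinesis/logs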

Create IAM Role to allow CloudWatch Logs to write to Kinesis Firehose

In the account that emits the logs, in the IAM AWS Console, under Roles, select Create Role

  1. Select Custom Trust Policy and add the following (replace the region and AWS account accordingly):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "logs.<region>.amazonaws.com"
                },
                "Action": "sts:AssumeRole",
                "Condition": {
                    "StringLike": {
                        "aws:SourceArn": "arn:aws:logs:<region>:<aws account number>:*"
                    }
                }
            }
        ]
    }

  2. Click Next to go to Add Permissions page and select Create Policy (This will open a new window).

  3. Select JSON and add the following. Note that this allows all Firehose streams in the same account; adjust accordingly if granting permission only to a specific Firehose:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "firehose:*"
                ],
                "Resource": [
                    "arn:aws:firehose:<region>:<aws account number>:*"
                ]
            }
        ]
    }

  4. Go back to the roles page and select the created permissions policy. Click “Create Role”.

  5. Name and create the new role. (A CLI sketch of these steps follows below.)
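
The same role can be created from the CLI using the two policy documents above. This is a sketch; the role name, policy name, and local file names are placeholders.

# Create the role with the custom trust policy, then attach the Firehose permissions.
aws iam create-role \
  --role-name CWLtoKinesisFirehoseRole \
  --assume-role-policy-document file://cwl-to-firehose-trust.json

aws iam put-role-policy \
  --role-name CWLtoKinesisFirehoseRole \
  --policy-name Permissions-Policy-For-Firehose \
  --policy-document file://cwl-to-firehose-permissions.json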

Setup CloudWatch logs subscriptions

In the account that emits the logs, in the CloudWatch AWS console, perform these steps:

  1. Navigate to the Logs section on the left side of the console, and select Log Groups.

  2. Go to the Log group that will be sent to the Kinesis Firehose.

  3. Go to Actions > Subscription filters > Create Kinesis Firehose subscription filter.

  4. In the Kinesis Firehose delivery stream section, select the previously created Kinesis Firehose for Logs.

  5. In the Grant Permission section, select the previously created role.

  6. Provide a Subscription filter pattern (or leave it blank if streaming everything)

  7. Provide a Subscription filter name (required; it can be anything).

  8. Select Start Streaming. (A CLI sketch of these steps follows below.)
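
The subscription filter can also be created from the CLI. This is a sketch with placeholder names and ARNs; an empty filter pattern streams everything in the log group.

aws logs put-subscription-filter \
  --log-group-name <your-log-group> \
  --filter-name kfuse-logs-subscription \
  --filter-pattern "" \
  --destination-arn arn:aws:firehose:<region>:<aws-account-id>:deliverystream/kfuse-logs-stream \
  --role-arn arn:aws:iam::<aws-account-id>:role/<role-created-above>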

Enable Enrichment of AWS Logs

In the account where you installed Kloudfuse to capture the logs, perform these steps.

Similar to CloudWatch metrics, CloudWatch logs sent to the Kinesis Firehose only include minimal labels. Kloudfuse supports attaching user-defined custom tags of log groups to the ingested logs. This is done by scraping AWS.

To enable log enrichment, follow the same steps as enrichment for metrics, and add the following permissions:

			"Action": [
				"logs:DescribeLogGroups",
				"logs:ListTagsLogGroup"
			]

Also, specify the AWS/Logs namespace in the ingester config.
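
For example, if you restrict the scraped namespaces in the ingester configuration (as shown in the metrics enrichment section), make sure AWS/Logs is included in that list. A minimal sketch, assuming the secret-based scraper configuration from Option 1:

ingester:
  config:
    awsScraper:
      secretName: aws-access-key
      namespaces:
        - AWS/Logs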

CloudTrail Events as Logs

Supported AWS Components

Component               | Dashboards  | Alerts
Autoscaling Group       | Yes         |
AmazonMQ (ActiveMQ)     | Yes         |
AmazonMQ (RabbitMQ)     | Coming soon |
ApplicationELB          | Yes         |
ACM                     | Yes         |
EBS                     | Yes         |
EC2                     | Yes         | Yes
EFS                     | Yes         |
ElastiCache (Memcache)  | Yes         |
ElastiCache (Redis)     | Coming soon |
ELB                     | Yes         |
Firehose                | Yes         |
Lambda                  | Yes         | Yes
NetworkELB              | Yes         |
RDS                     | Yes         | Yes
Redshift                | Yes         |
S3                      | Yes         |
SNS                     | Yes         |
SQS                     | Yes         | Yes
OpenSearch              | Yes         |
DynamoDB                | Yes         |
API Gateway             | Yes         |
Glue                    | Yes         |
Athena                  | Yes         |
ECS                     | Yes         |
EventBridge             | Yes         |
Kafka                   | Yes         |
Log Groups              | NA          |

AWS Namespaces

Component            | Namespace
AmazonMQ (ActiveMQ)  | AWS/AmazonMQ
ApplicationELB       | AWS/ApplicationELB
ACM                  | AWS/CertificateManager
Route 53             | AWS/Route53
EBS                  | AWS/EBS
EC2                  | AWS/EC2
EFS                  | AWS/EFS
ElastiCache          | AWS/ElastiCache
ELB                  | AWS/ELB
Firehose             | AWS/Firehose
Lambda               | AWS/Lambda
NetworkELB           | AWS/NetworkELB
RDS                  | AWS/RDS
Redshift             | AWS/Redshift
S3                   | AWS/S3
SNS                  | AWS/SNS
SQS                  | AWS/SQS
OpenSearch           | AWS/ES
DynamoDB             | AWS/DynamoDB
API Gateway          | AWS/ApiGateway
Glue                 | AWS/Glue
Athena               | AWS/Athena
ECS                  | AWS/ECS
EventBridge          | AWS/Events
Kafka                | AWS/Kafka
WAF                  | AWS/WAFV2
Log Groups           | AWS/Logs
