Kfuse Profiler Server (Continuous Profiling)
What is Continuous Profiling?
Continuous Profiling is a powerful addition to our observability platform. While traditional monitoring methods—metrics, logs, and tracing—provide valuable insights, they often leave gaps when it comes to understanding application performance at a granular level. Continuous Profiling fills this void by offering in-depth, line-level insights into your application’s code, allowing developers to see precisely how resources are utilized.
This low-overhead feature gathers profiles from production systems and stores them in a database for later analysis. This helps provide a comprehensive view of the application and its behavior in production, including CPU usage, memory allocation, and disk I/O, ensuring that every line of code operates efficiently.
Key Benefits of Continuous Profiling:
Granular Insights: Continuous Profiling offers a detailed view of application performance that goes beyond traditional observability tools, providing line-level insights into resource utilization.
In-Depth Code Analysis: With a comprehensive understanding of code performance and system interactions, developers can easily identify how specific code segments use resources, facilitating thorough analysis and optimization.
Read more on our blog post.
Configuration setup:
Enable kfuse-profiling in the custom-values.yaml file:
global:
  kfuse-profiling:
    enabled: true
By default, profiling data is stored in a PersistentVolumeClaim (PVC) with a size of 50 GB.
Long-Term Retention
To retain profiling data for a longer duration, additional configuration is required. Depending on the storage provider, configure one of the following options in the custom-values.yaml file:
For AWS S3 Storage:
Add the necessary AWS S3 configuration to store profiles.
For GCP Bucket:
Include the required GCP Bucket configuration to store profile data.
Note: Profiles are stored in Parquet format in AWS S3 or GCP GCS.
pyroscope:
  pyroscope:
    # Support for storage in S3 and GCS for saving profile data.
    # Additional configuration is needed depending on where the storage is hosted (AWS S3 or GCP GCS).
    # Choose the appropriate configuration based on your storage provider.
    #
    # AWS S3 configuration instructions:
    # 1. Set 'backend' to 's3'.
    # 2. Configure the following S3-specific settings:
    #    - bucket_name: name of your S3 bucket
    #    - region: AWS region where your bucket is located
    #    - endpoint: S3 endpoint for your region
    #    - access_key_id: your AWS access key ID
    #    - secret_access_key: your AWS secret access key
    #    - insecure: set to true if using HTTP instead of HTTPS (not recommended for production)
    # Example AWS S3 configuration:
    config: |
      storage:
        backend: s3
        s3:
          bucket_name: your-bucket-name
          region: us-west-2
          endpoint: s3.us-west-2.amazonaws.com
          access_key_id: YOUR_ACCESS_KEY_ID
          secret_access_key: YOUR_SECRET_ACCESS_KEY
          insecure: false

    # GCP GCS configuration instructions:
    # 1. Set 'backend' to 'gcs'.
    # 2. Configure the following GCS-specific settings:
    #    - bucket_name: name of your GCS bucket
    #    - service_account: JSON key for your GCP service account
    # Prerequisites for GCP GCS:
    #    - Create a GCP service account with access to the GCS bucket.
    #    - Download the JSON key file for the service account.
    # Example GCP GCS configuration:
    config: |
      storage:
        backend: gcs
        gcs:
          bucket_name: your-gcs-bucket-name
          service_account: |
            {
              "type": "service_account",
              "project_id": "your-project-id",
              "private_key_id": "your-private-key-id",
              "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
              "client_email": "your-service-account-email@your-project-id.iam.gserviceaccount.com",
              "client_id": "your-client-id",
              "auth_uri": "https://accounts.google.com/o/oauth2/auth",
              "token_uri": "https://oauth2.googleapis.com/token",
              "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
              "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-service-account-email%40your-project-id.iam.gserviceaccount.com",
              "universe_domain": "googleapis.com"
            }
Set up the Kfuse Profiler agent to scrape profiling data
Prerequisites:
1. Ensure your Golang application exposes pprof endpoints.
2. In pull mode, the collector, Alloy, periodically retrieves profiles from Golang applications, specifically targeting the /debug/pprof/* endpoints.
3. If your Go code is not set up to generate profiles, you need to set up Golang profiling.
4. Alloy then queries the pprof endpoints of your Golang application, collects the profiles, and forwards them to the Kfuse Profiler server.
To set up scraping of data:
Enable alloy in your custom-values.yaml file:
pyroscope:
  alloy:
    enabled: true
Configure the Alloy scraper:
Configure these two blocks in the Alloy configuration file:
1. pyroscope.write
2. pyroscope.scrape
1. Add the pyroscope.write block
The pyroscope.write block is used to define the endpoint where profiling data will be sent.
Replace "write_job_name" with a unique name for the write job.
Update the url with the appropriate endpoint for your Pyroscope server.
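As a sketch, a pyroscope.write block in Alloy configuration syntax looks like the following; both the job name and the url are placeholders to replace with your own values:

```alloy
pyroscope.write "write_job_name" {
  endpoint {
    // Placeholder: replace with the endpoint of your Kfuse Pyroscope server.
    url = "http://pyroscope:4040"
  }
}
```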
2. Add the pyroscope.scrape block
The pyroscope.scrape block is used to define the scraping configuration for profiling data.
Replace "scrape_job_name" with a unique name for the scrape job.
Update the targets field with the appropriate service address and name.
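A minimal pyroscope.scrape block might look like this; the target address and service name are placeholder assumptions for an application exposing pprof on port 6060, and the forward_to reference assumes a write job named "write_job_name":

```alloy
pyroscope.scrape "scrape_job_name" {
  // Placeholder target: the pprof address of your Go service plus a service_name label.
  targets = [
    {"__address__" = "my-service.my-namespace.svc.cluster.local:6060", "service_name" = "my-service"},
  ]
  // Connects this scrape job to the write job.
  forward_to = [pyroscope.write.write_job_name.receiver]
}
```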
Configuration Details
pyroscope.write: Defines where profiling data should be written. The url specifies the endpoint where profiles are sent.
pyroscope.scrape: Specifies the targets to scrape profiling data from.
The forward_to field connects the scrape job to the write job.
The profiling_config block enables or disables specific profiles:
profile.process_cpu: Enables CPU profiling.
profile.godeltaprof_memory: Enables delta memory profiling.
profile.memory: Disabled to avoid redundancy with godeltaprof_memory.
profile.godeltaprof_mutex: Enables delta mutex profiling.
profile.mutex: Disabled to avoid redundancy with godeltaprof_mutex.
profile.godeltaprof_block: Enables delta block profiling.
profile.block: Disabled to avoid redundancy with godeltaprof_block.
profile.goroutine: Enables goroutine profiling.
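The profile settings above can be sketched as a profiling_config block inside pyroscope.scrape; the job names and target address are placeholder assumptions:

```alloy
pyroscope.scrape "scrape_job_name" {
  targets    = [{"__address__" = "my-service:6060", "service_name" = "my-service"}]
  forward_to = [pyroscope.write.write_job_name.receiver]

  profiling_config {
    profile.process_cpu { enabled = true }
    profile.godeltaprof_memory { enabled = true }
    profile.memory { enabled = false } // redundant with godeltaprof_memory
    profile.godeltaprof_mutex { enabled = true }
    profile.mutex { enabled = false } // redundant with godeltaprof_mutex
    profile.godeltaprof_block { enabled = true }
    profile.block { enabled = false } // redundant with godeltaprof_block
    profile.goroutine { enabled = true }
  }
}
```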
Save the Configuration
After adding the above blocks to the Alloy configuration file, save the changes and install Alloy.