Logs Archive and Hydration
You may have to save transactional information for compliance, legal, or other regulatory requirements. In addition to processing logs for observability and analytics, Kloudfuse introduced a supplementary mechanism for archiving pre-processed logs (with identified filters, facets, and so on) into longer-term storage, and a separate mechanism to hydrate these logs to examine them for the relevant data.
The benefits of this approach extend beyond basic regulatory compliance:
You store important historical data in a cost-effective compressed format in a location that you own and control.
When uncompressed, the logs are human-readable and highly searchable because of the high level of indexing through labels and other data attributes.
You can configure the archival instructions in a manner that categorizes data consumption by internal cost center.
We currently support log archive and hydration for AWS S3.
Contact us at support@kloudfuse.com to enable this feature in your Kloudfuse cluster.
Archiving
Based on your own set of archival rules and configurations, you can add an archive section to the deployments.yaml file to specify the logs you need to write to your own archive storage.
archive:
  enabled: true
  prefix: "<Example_Cluster/Example_Folder>" # Optional; can specify as ""
  useSecret: true # Security; four methods, see note
  createSecret: true
  secretName: "<Example_Secret>"
  type: s3 # AWS storage
  s3:
    region: <Example_Region> # Such as us-west-2
    bucket: <Example_Bucket_Name> # You MUST create the bucket in your archival storage location
    accessKey: <Example_Access_Key>
    secretKey: <Example_Secret_Key>
  rules: |-
    - archive: # Define first archive
        args:
          archiveName: a1
          doNotIndex: false # Both archive and index
        conditions:
          - matcher: "#source"
            value: "s1"
            op: "=="
          - matcher: "@label"
            value: "l1"
            op: "=="
    - archive: # Define next archive
        args:
          archiveName: a2
          doNotIndex: false # Both archive and index
        conditions:
          - matcher: "#source"
            value: "s1"
            op: "=="
Note: Archive rules apply in order, and a log line must match all conditions of a rule to map to its archive. In the preceding example, a log line from source "s1" maps to archive a1 if it also contains label l1. If it does not contain label l1, it maps to archive a2.
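The first-match routing described in the note can be sketched in Python. This is illustrative only, not Kloudfuse code; the rule shape and the "==" operator follow the example rules above, and only that operator is modeled.

```python
# Illustrative sketch of first-match archive routing (not Kloudfuse code).
# A log line maps to the first archive whose conditions ALL match.

def matches(condition, log):
    # Only the "==" operator from the example rules is modeled here.
    if condition["op"] != "==":
        raise ValueError(f"unsupported operator: {condition['op']}")
    return log.get(condition["matcher"]) == condition["value"]

def route(rules, log):
    for rule in rules:
        archive = rule["archive"]
        if all(matches(c, log) for c in archive["conditions"]):
            return archive["args"]["archiveName"]
    return None  # no archive matched; the line is only indexed

# The two example archives from the rules block above:
rules = [
    {"archive": {"args": {"archiveName": "a1"},
                 "conditions": [{"matcher": "#source", "value": "s1", "op": "=="},
                                {"matcher": "@label", "value": "l1", "op": "=="}]}},
    {"archive": {"args": {"archiveName": "a2"},
                 "conditions": [{"matcher": "#source", "value": "s1", "op": "=="}]}},
]

print(route(rules, {"#source": "s1", "@label": "l1"}))  # a1
print(route(rules, {"#source": "s1"}))                  # a2
```

A line from any other source matches neither rule, so it is indexed but not archived.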
Archive prerequisites
You must grant access for Kloudfuse to write archives into the specified storage. There are four security approaches:
createSecret=true, useSecret=true
Helm creates the Kubernetes secret based on the provider's accessKey and secretKey. The deployments.yaml file gets the values from the secret.

createSecret=false, useSecret=true
The customer creates the Kubernetes secret. The deployments.yaml file works automatically because it picks up the environment variables from the secret.

createSecret=false, useSecret=false
This approach assumes that the customer already configured the node IAM role with permission to access the S3 bucket, so there is no need to set environment variables.

Service account
The customer creates a service account with access permission to S3, and maps it to serviceAccountName. There is no need to set environment variables; the pod inherits the permissions of the service account.
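For the createSecret=false, useSecret=true case, the secret might be created ahead of time with a manifest like the following. This is a sketch: the secret name must match secretName in deployments.yaml, and the key names shown mirror the accessKey/secretKey fields of the s3 section as an assumption, since the exact keys the chart expects are not documented here.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <Example_Secret>           # must match secretName in deployments.yaml
type: Opaque
stringData:
  accessKey: <Example_Access_Key>  # assumed key names, mirroring the s3 section
  secretKey: <Example_Secret_Key>
```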
Archive writes
Kloudfuse writes the logs to the specified archive location after ingestion, applying facets, labels, and so on.
Hydration
Whenever you need to examine a record, you can access it directly in your archival storage because of our simple storage and compression rubric: by date (in yyyymmdd format), and then by hour. When you decompress and open a log file, you can see all the facets, labels, and other tags that Kloudfuse attributed to the log.
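The date-then-hour layout described above can be sketched as follows. This is illustrative: only the yyyymmdd date and hour grouping comes from the text, and any object naming beyond those components is an assumption.

```python
from datetime import datetime, timezone

def archive_prefix(base_prefix: str, ts: datetime) -> str:
    # Archives are grouped by date (yyyymmdd), and then by hour.
    return f"{base_prefix}/{ts.strftime('%Y%m%d')}/{ts.strftime('%H')}"

ts = datetime(2024, 5, 1, 13, 30, tzinfo=timezone.utc)
print(archive_prefix("Example_Cluster/Example_Folder", ts))
# Example_Cluster/Example_Folder/20240501/13
```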
Additionally, you can hydrate the archived logs into Kloudfuse. We run them through the metadata analysis, labeling, and so on. For older logs, this gives us the opportunity to apply the current (newer) set of grammar and rules, making them compatible (and comparable) with current logs.
To use the logs hydration UI, see Logs hydration.