...

The detail view can be accessed by clicking the log event detail icon that appears when you hover over a log line, or with click+enter from the log list view. From the log event detail view, the user can filter using the fingerprint, log facets, and labels associated with the fingerprint.

...

...

As a result of fingerprinting incoming log lines, tokens like numbers, IP addresses, durations, sizes, and UUIDs are auto-detected. These tokens are automatically assigned log facet names such as _number_0, _ip_address_0, and _duration_0. These auto-assigned log facet names can be renamed to something more readable from the log event detail view. The renaming is scoped to a fingerprint.
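As an illustrative sketch only (not the actual Kloudfuse implementation), token auto-detection can be thought of as regex-based replacement, with each match assigned a positional facet name; the patterns and placeholder names below are hypothetical:

```python
import re

# Hypothetical patterns; a real detector covers more token types
# (duration, size, UUID) with more robust expressions.
PATTERNS = [
    ("ip_address", re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")),
    ("number", re.compile(r"\b\d+(?:\.\d+)?\b")),
]

def fingerprint(line):
    """Replace detected tokens with placeholders and collect auto-named facets."""
    facets = {}
    counts = {}
    for kind, pattern in PATTERNS:
        def repl(m):
            idx = counts.get(kind, 0)
            counts[kind] = idx + 1
            facets[f"_{kind}_{idx}"] = m.group(0)
            return f"<{kind}>"
        line = pattern.sub(repl, line)
    return line, facets

fp, facets = fingerprint("client 10.0.0.7 responded in 42 ms")
# fp     -> "client <ip_address> responded in <number> ms"
# facets -> {"_ip_address_0": "10.0.0.7", "_number_0": "42"}
```

Log lines that reduce to the same placeholder string share a fingerprint, which is why renaming a facet can be scoped to that fingerprint.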

...

Fingerprints View

The Fingerprints view provides a concise view of the unique logs ingested by the stack. This bird’s-eye view is very helpful when looking at the huge number of logs typically emitted by production systems. For example, by filtering the logs by severity level ERROR, the user can easily see the different kinds of error events observed in the system without hunting for them one by one. Fingerprints can be searched just like log events. Moreover, the logs can be filtered to include or exclude only those with the selected fingerprint. This serves as a smarter grep: you can quickly filter logs without finding unique strings to grep or grep -v by.

...

Log Analytics

Logs contain a lot of valuable information. Quite often developers instrument the application to log various application metrics within the log line. Though convenient at development time, this makes it harder to analyze the system in production, as the logs must now be indexed properly to extract such metrics. The Kloudfuse platform makes metric generation from log lines fairly easy thanks to its unique fingerprinting technology. The Kloudfuse stack can auto-extract metric facets and highlight them under each source category. The “Log Analytics” view allows users to select the numeric facets to chart along with the aggregates to apply. The log lines containing the metric of interest can be filtered as in log search (or the filtering can be skipped). Range Aggregate applies time aggregation so that events from multiple log lines can be aggregated across everything, or across common dimensions selected using the “Grouping options”. These dimensions can include facets extracted from the log line or environment tags like pod_name, service name, etc. The chart is generated dynamically from the log lines and can be used for ad-hoc analysis during troubleshooting. To save this metric, refer to the next section.

...

  1. Log facet selector

    1. selector for log facet or count based metric to chart

  2. Facet normalization function

    1. function used to normalize the log facet to a numerical value

      1. number - parse the log facet as a double value

      2. count - normalize to 1 if the selected facet exists

      3. duration - normalize a duration string to seconds. Valid time units are ns, us (or "µs"), ms, s, m, h. example: 1h30m

      4. bytes - normalize a size string to bytes. Valid size units are KB, MB, GB, TB, PB, KiB, MiB, GiB, TiB, PiB. example: 10MB

  3. Range (time) aggregate: aggregate discrete points in time to produce one value per time-series per time-step. The aggregates are applied to log events that satisfy the log filters

    1. Count based log metrics

      1. rate : rate of log events at every time-step, i.e. count/time-step_seconds

      2. count_over_time : count of log events at every time-step

    2. Log Facet based log metrics

      1. rate_counter : rate of monotonically increasing counter

      2. sum_over_time

      3. avg_over_time

      4. max_over_time

      5. min_over_time

      6. first_over_time

      7. last_over_time

      8. stdvar_over_time

      9. stddev_over_time

      10. quantile_over_time

  4. Range aggregate grouping

    1. labels that define the time-series. log events are grouped by the labels and each group becomes a time-series

    2. default grouping behavior is to group everything into one time series (except for rate and rate_counter which do not support grouping)

  5. Vector (space) aggregate: Reduce the number of time series by aggregating across time-series at a given time step

    1. sum

    2. avg

    3. min

    4. max

    5. stddev

    6. stdvar

    7. count

    8. topk

    9. bottomk

  6. Vector aggregate grouping

    1. labels that define the final time-series to collapse into. Must be a subset of the range aggregate grouping

    2. default grouping behavior is to group everything into one time series

  7. Generate chart button to chart the log-derived metric

  8. Visualization type

  9. Save metric icon
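The duration and bytes normalization functions described above can be sketched as follows; this is an illustrative approximation, not the platform’s implementation, and the function names are hypothetical:

```python
import re

# Unit tables matching the valid units listed above.
DURATION_UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3,
                  "s": 1.0, "m": 60.0, "h": 3600.0}
SIZE_UNITS = {"KB": 1e3, "MB": 1e6, "GB": 1e9, "TB": 1e12, "PB": 1e15,
              "KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40, "PiB": 2**50}

def duration_to_seconds(value):
    """Normalize a duration string such as '1h30m' to seconds."""
    total = 0.0
    for num, unit in re.findall(r"(\d+(?:\.\d+)?)(ns|us|µs|ms|s|m|h)", value):
        total += float(num) * DURATION_UNITS[unit]
    return total

def size_to_bytes(value):
    """Normalize a size string such as '10MB' to bytes."""
    num, unit = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([KMGTP]i?B)", value).groups()
    return float(num) * SIZE_UNITS[unit]

duration_to_seconds("1h30m")  # 5400.0
size_to_bytes("10MB")         # 10000000.0
```

The `number` function corresponds to a plain double parse, and `count` simply yields 1 whenever the facet is present.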

Log Analytics exploration workflow

  • Add any log filters as described in the Log Search View to filter down logs for charting

  • Count based log metrics

    • Choose count_log_events from the log facet selector

    • Choose number as the normalization function

    • Choose rate or count_over_time as the Range/time aggregation function

    • Click on Generate chart to chart the count based metric

...

  • Log facet metrics

    • Choose the log facet to chart from the log facet selector

    • Choose number/bytes/duration as the normalization function to normalize the facet value. Choose count to count the number of times the log facet appears in the time-step

    • Choose one of the Log facet based range aggregation functions

    • Click on Generate chart

...
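The count based workflow above boils down to bucketing log events into time-steps and counting. A minimal sketch, with hypothetical timestamps in seconds and no grouping labels (the default of one time series), assuming integer-second timestamps:

```python
from collections import Counter

def count_over_time(timestamps, start, end, step):
    """Count log events per time-step; rate is count / step seconds."""
    buckets = Counter((ts - start) // step for ts in timestamps
                      if start <= ts < end)
    counts = [buckets.get(i, 0) for i in range((end - start) // step)]
    rates = [c / step for c in counts]
    return counts, rates

# Ten events at t = 0..9 s, charted over [0, 10) with a 5 s time-step.
counts, rates = count_over_time(list(range(10)), start=0, end=10, step=5)
# counts -> [5, 5]; rates -> [1.0, 1.0]
```

With grouping options selected, the same bucketing would be applied per unique label combination, producing one such series per group.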

Save Metrics

Explored metrics can also be saved, for longer retention or further analysis, using the “Save Metrics” button. The user can enter a unique name for the metric along with the dimensions to be saved for the metric series. By default, the UI selects the dimensions that were used during metric exploration. The saved metric is pushed to the built-in metric storage.

Saved metrics are listed in the “Metrics” view at the bottom of the page. Any saved metric that is no longer required can be deleted from this list. The user can explore a saved metric using standard kfuse metric exploration (by clicking on the icon) or through the Grafana metric explorer. Support for exporting metrics to an external metric system will be coming in the future.

...

Log Source Integration

The Kloudfuse stack can ingest from a variety of agents and cloud services. The following lists the various sources and how to configure them.

...

can be parsed with the following tokenizer:
'%{sourceIp} - - [%{timestamp}] "%{requestMethod} %{uri} %{_}" %{responseCode} %{contentLength}'
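To illustrate how such a %{name} tokenizer pattern extracts fields (a sketch only; the actual parsing is done inside the Kloudfuse stack, and the sample log line below is hypothetical), the pattern can be compiled into a named-group regex:

```python
import re

def tokenizer_to_regex(pattern):
    """Convert a %{name} tokenizer pattern into a named-group regex.
    %{_} is treated as a throwaway (unnamed) match."""
    out, pos = "", 0
    for m in re.finditer(r"%\{(\w+)\}", pattern):
        out += re.escape(pattern[pos:m.start()])       # literal text between tokens
        name = m.group(1)
        out += r".*?" if name == "_" else rf"(?P<{name}>.*?)"
        pos = m.end()
    return re.compile(out + re.escape(pattern[pos:]) + "$")

rx = tokenizer_to_regex(
    '%{sourceIp} - - [%{timestamp}] "%{requestMethod} %{uri} %{_}" '
    '%{responseCode} %{contentLength}')
m = rx.match(
    '10.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326')
# m.group("sourceIp") -> "10.0.0.1", m.group("responseCode") -> "200"
```

Each %{name} becomes a captured field (a log facet), while %{_} matches and discards text such as the protocol version.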

...