Overview

The Kloudfuse platform ships with a catalog service that provides the following functionality to ease onboarding and day-to-day use:

  • It offers ready-to-use artifacts (dashboards and alerts) for many commonly used software components (Postgres, Redis, Mongo, Kafka, Kubernetes, Apache Pinot, and more), providing a “works out of the box” experience. These artifacts can be deployed directly into your environment through the catalog service. Learn more here.

  • It also simplifies migration of artifacts from other source platforms to the Kloudfuse platform. Learn more here.

Explore/Install through the Kloudfuse artifacts catalog

A UI for consuming (listing, exploring, and installing) these artifacts is planned for an upcoming version. Until then, you can perform the same operations through a CLI by following the instructions below. The CLI ships as part of the catalog-service, which runs as a service within the installed Kloudfuse stack. The steps are as follows:

Logging into the catalog service

# make sure you are connected to the kfuse k8s cluster and in the kfuse namespace
kubectx <your-kfuse-cluster>
kubens kfuse
kubectl exec -it catalog-service-<...> -- bash
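
The catalog-service pod name carries a generated suffix (shown as catalog-service-<...> above). One way to look up the full name, using the same kubectl tooling, is:

# list pods in the kfuse namespace and pick out the catalog-service pod
kubectl get pods -n kfuse | grep catalog-service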

Usage

# Make sure you understand the usage before running any commands.
cd /catalog_service
python3 catalog.py -h
usage: catalog.py [-h] [--artifact_type {dashboards,alerts,contactpoints}] [--explore] [--list [LIST]] [--detailed] [--install] [--download_installed] [--grafana_address GRAFANA_ADDRESS] [--grafana_username GRAFANA_USERNAME]
                  [--grafana_password GRAFANA_PASSWORD] [-c CONFIG] [--no_upload]

catalog service

options:
  -h, --help            show this help message and exit
  --artifact_type {dashboards,alerts,contactpoints}
                        which artifact type you want to work with
  --explore             explore the catalog, making selections, and finally installing chosen artifacts to grafana
  --list [LIST]         list all the components for which artifacts are available. if list of comma separated components are provided, then those are used
  --detailed            show detailed list (only honored with --list option)
  --install             directly installs artifacts of specified type as per --list option without exploring/prompting
  --download_installed  download already installed artifacts from provided grafana instance or from the grafana in the cluster context
  --grafana_address GRAFANA_ADDRESS
                        grafana server address. optional, default is http://kfuse-grafana:80
  --grafana_username GRAFANA_USERNAME
                        grafana username. optional, if not provided, it is taken from environment
  --grafana_password GRAFANA_PASSWORD
                        grafana user password. taken from the environment
  -c CONFIG, --config CONFIG
                        Config file containing settings for install if selectivity is needed
  --no_upload           just process artifacts but do not upload  

List artifacts (optionally, of a given type)

# To list all artifacts in the catalog
python3 catalog.py
{'kafka': {'dashboards': {'count': 1}},
 'kubernetes': {'alerts': {'count': 1}, 'dashboards': {'count': 5}},
 'sock-shop': {'dashboards': {'count': 1}}}
 
# To list only alerts in the catalog
python3 catalog.py --artifact_type=alerts
{'kafka': {}, 'kubernetes': {'alerts': {'count': 1}}, 'sock-shop': {}}
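
Per the usage above, --list also accepts a comma-separated set of components, and --detailed expands the listing (it is only honored with --list); a sketch restricted to kafka and kubernetes:

# To list kafka and kubernetes artifacts with a detailed breakdown
python3 catalog.py --list kafka,kubernetes --detailed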

Explore artifacts (optionally, of a given type)

# The catalog lets you explore artifacts, make "selections" of what you want to
# install, and then install them once you confirm the selection.
# Providing --artifact_type as an argument confines the exploration to that
# artifact type.
# Follow the help text and prompts to make selections. The following example uses
# the "dashboards" artifact type, selects the "kafka" and "kubernetes" components,
# and then chooses "Select All" to install all dashboards for those components.

python3 catalog.py --explore --artifact_type=dashboards
? Select component:

Note: use up or down arrow to navigate
    : use <space> to select or unselect
    : multiple choices can be selected
    : press <enter> to go to next step

❯ Select All
  kafka
  kubernetes
  sock-shop
  
['kafka', 'kubernetes']
2023-02-24T01:40:11.377805Z listing artifacts for          component=['kafka', 'kubernetes']
? Select artifacts:

Note: use up or down arrow to navigate
    : use <space> to select or unselect
    : multiple choices can be selected
    : press <enter> to go to next step

❯ Select All
  kafka: dashboards: kafka.json
  kubernetes: dashboards: dd-Kubernetes-O_grafana.json
  kubernetes: dashboards: dd-KubernetesAPIServerO_grafana.json
  kubernetes: dashboards: dd-KubernetesClusterOverviewDa_grafana.json
  kubernetes: dashboards: dd-KubernetesJobsandCronJobsO_grafana.json
  kubernetes: dashboards: dd-KubernetesNodesO_grafana.json  
  
   ['Select All']

Following artifacts will be downloaded and installed:

 selection=['kafka: dashboards: kafka.json',
 'kubernetes: dashboards: dd-Kubernetes-O_grafana.json',
 'kubernetes: dashboards: dd-KubernetesAPIServerO_grafana.json',
 'kubernetes: dashboards: dd-KubernetesClusterOverviewDa_grafana.json',
 'kubernetes: dashboards: dd-KubernetesJobsandCronJobsO_grafana.json',
 'kubernetes: dashboards: dd-KubernetesNodesO_grafana.json',
 'sock-shop: dashboards: sock-shop-perf.json']
?

Proceed? (Y/n)

Install kubernetes artifacts

# Install all kubernetes dashboard artifacts.
# NOTE THAT --install DOES NOT ASK FOR ANY CONFIRMATION. Use --explore for that.

python3 catalog.py --install --artifact_type=dashboards --list kubernetes
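
To preview what an install would process without uploading anything to Grafana, the --no_upload flag from the usage above can be added; a sketch:

# Process the kubernetes dashboard artifacts but skip the upload (dry run)
python3 catalog.py --install --artifact_type=dashboards --list kubernetes --no_upload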

Install all artifacts

# Install all available artifacts, optionally of a given type. If --artifact_type
# is not supplied, all artifact types are installed.
# NOTE THAT --install DOES NOT ASK FOR ANY CONFIRMATION. Use --explore for that.

python3 catalog.py --install --artifact_type=dashboards
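
If --artifact_type is omitted as well, the comment above indicates that all artifact types are installed; a sketch of the fully unrestricted install:

# Install everything in the catalog: all artifact types, all components.
# NOTE THAT --install DOES NOT ASK FOR ANY CONFIRMATION.
python3 catalog.py --install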

Migrating artifacts from other platforms

Migrating from Grafana

Download from the source Grafana instance

  • Connect to the kfuse cluster and log into the catalog-service pod:

    # make sure you are connected to the kfuse k8s cluster and in the kfuse namespace
    kubectx <your-kfuse-cluster>
    kubens kfuse
    kubectl exec -it catalog-service-<...> -- bash
  • Download existing artifacts (alerts in the following example) from the source Grafana instance using the catalog script (check usage with -h for more options), specifying the arguments appropriately. Be sure that the catalog-service pod can reach the source Grafana address (a quick reachability check is sketched after the command below):

    # Review usage with -h
    cd /catalog_service
    python3 catalog.py --download_installed --artifact_type=alerts --grafana_address=SOURCE_GRAFANA_ADDRESS --grafana_username="SOURCE_GRAFANA_USERNAME" --grafana_password="SOURCE_GRAFANA_PASSWD"
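
Note: before downloading, you can optionally verify that the catalog-service pod can reach the source Grafana address. A minimal check, assuming curl is available inside the pod (an assumption; it may not be installed), is:

# print the HTTP status code returned by the source grafana address
# (a connection error or timeout means the pod cannot reach it)
curl -s -o /dev/null -w "%{http_code}\n" SOURCE_GRAFANA_ADDRESS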

Upload to Kloudfuse

  • The step above downloads the artifacts to the /tmp/<artifact_type>_downloads directory. Use the following command to explore and install these artifacts into the Grafana embedded in the kfuse cluster.

    python3 catalog.py --explore --artifact_type=alerts --use_dir=/tmp/alerts_downloads --grafana_address=https://<target-cluster-addr> --grafana_username="TARGET_GRAFANA_USERNAME" --grafana_password="TARGET_GRAFANA_PASSWD"

Note: If these commands can’t be executed from the pod because <target-cluster-addr> is not reachable from it, you can use the following instructions to extract the catalog tool to a location from which <target-cluster-addr> can be reached:

# Requires python 3.10

mkdir /tmp/catalog_service/
cd /tmp/

# Get the catalog-service pod id for the following command. 
kubectl cp catalog-service-<...>:/catalog_service ./catalog_service/
pip3 install -r ./catalog_service/requirements.txt

# migrate alerts (use -h for help)
python3 ./catalog_service/catalog.py --download_installed --grafana_address https://<source-kfs-cluster>.kloudfuse.io --artifact_type alerts
python3 ./catalog_service/catalog.py --grafana_address https://<target-kfs-cluster>.kloudfuse.io --artifact_type alerts --explore --use_dir /tmp/alerts_downloads

# migrate dashboards (use -h for help)

python3 ./catalog_service/catalog.py --download_installed --grafana_address https://<source-kfs-cluster>.kloudfuse.io --artifact_type dashboards
python3 ./catalog_service/catalog.py --grafana_address https://<target-kfs-cluster>.kloudfuse.io --artifact_type dashboards --explore --use_dir /tmp/dashboards_downloads
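
The same pattern should extend to the remaining artifact type from the usage output, contactpoints, assuming downloads land in /tmp/contactpoints_downloads per the /tmp/<artifact_type>_downloads convention noted above; a sketch, not verified end-to-end:

# migrate contact points (use -h for help)
python3 ./catalog_service/catalog.py --download_installed --grafana_address https://<source-kfs-cluster>.kloudfuse.io --artifact_type contactpoints
python3 ./catalog_service/catalog.py --grafana_address https://<target-kfs-cluster>.kloudfuse.io --artifact_type contactpoints --explore --use_dir /tmp/contactpoints_downloads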

Manually uploading artifacts (dashboards/alerts) to Kloudfuse

Download from existing

Ensure that the alerts/dashboards you want to upload to Kloudfuse have been downloaded in JSON format and are available in the catalog-service pod. You can connect to the catalog-service pod using the instructions above.

Upload to Kloudfuse

  • Create the following directory structure inside your <<download_directory>>:

    • For alerts: mkdir -p <<download_directory>>/assets/upload/alerts

    • For dashboards: mkdir -p <<download_directory>>/assets/upload/dashboards

  • Copy the alerts and/or dashboards into the respective directories, then use the following command to explore and install these artifacts into the Grafana embedded in the kfuse cluster (a complete end-to-end sketch for dashboards follows the command below).

    python3 catalog.py --explore --artifact_type=alerts --use_dir=<<download_directory>>
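
Putting the steps together, a minimal end-to-end sketch for dashboards, assuming a hypothetical <<download_directory>> of /tmp/manual_upload and dashboard JSON files already present on the pod (the source path below is illustrative):

# create the expected directory layout
mkdir -p /tmp/manual_upload/assets/upload/dashboards

# copy the dashboard JSON files into it (illustrative source path)
cp /path/to/your/dashboards/*.json /tmp/manual_upload/assets/upload/dashboards/

# explore and install them into the kfuse embedded grafana
cd /catalog_service
python3 catalog.py --explore --artifact_type=dashboards --use_dir=/tmp/manual_upload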

Caveats

  • Artifacts installed from the catalog should be considered “staged” for you to tweak as needed. They are not meant to be the final home for managing artifacts and their life-cycle (as code), since each organization has its own way of managing these. Ideally, move them into other folders as desired and apply your own life-cycle management workflows from there.

  • If artifacts with the same name are already installed in the same folder, the catalog will NOT overwrite them and skips them by default. To update installed artifacts, first delete the existing ones. A sample config.yaml file is also provided that can be used to control how artifacts are merged (alerts only, for now).
