...

Code Block
# To list all artifacts in the catalog
python3 catalog.py --list
{'kafka': {'dashboards': {'count': 1}},
 'kubernetes': {'alerts': {'count': 1}, 'dashboards': {'count': 5}},
 'sock-shop': {'dashboards': {'count': 1}}}
 
# To list only alerts in the catalog
python3 catalog.py --list --artifact_type=alerts
{'kafka': {}, 'kubernetes': {'alerts': {'count': 1}}, 'sock-shop': {}}
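
# Similarly, to list only dashboards in the catalog (output omitted here):
python3 catalog.py --list --artifact_type=dashboards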

...

Code Block
# The catalog lets you explore artifacts and make "selections" along the way to
# what you want to install, and installs them once you confirm the selection.
# Providing --artifact_type as an argument confines the exploration to a given
# artifact type.
# Follow the help and prompts to make selections. The following example uses the
# "dashboards" artifact type, selects the "kafka" and "kubernetes" components,
# and then chooses "Select All" to install all dashboards for those components.

python3 catalog.py --explore --artifact_type=dashboards
? Select component:

Note: use up or down arrow to navigate
    : use <space> to select or unselect
    : multiple choices can be selected
    : press <enter> to go to next step

❯ Select All
  kafka
  kubernetes
  sock-shop
  
['kafka', 'kubernetes']
2023-02-24T01:40:11.377805Z listing artifacts for          component=['kafka', 'kubernetes']
? Select artifacts:

Note: use up or down arrow to navigate
    : use <space> to select or unselect
    : multiple choices can be selected
    : press <enter> to go to next step

❯ Select All
  kafka: dashboards: kafka.json
  kubernetes: dashboards: dd-Kubernetes-O_grafana.json
  kubernetes: dashboards: dd-KubernetesAPIServerO_grafana.json
  kubernetes: dashboards: dd-KubernetesClusterOverviewDa_grafana.json
  kubernetes: dashboards: dd-KubernetesJobsandCronJobsO_grafana.json
  kubernetes: dashboards: dd-KubernetesNodesO_grafana.json  
  
   ['Select All']

Following artifacts will be downloaded and installed:

 selection=['kafka: dashboards: kafka.json',
 'kubernetes: dashboards: dd-Kubernetes-O_grafana.json',
 'kubernetes: dashboards: dd-KubernetesAPIServerO_grafana.json',
 'kubernetes: dashboards: dd-KubernetesClusterOverviewDa_grafana.json',
 'kubernetes: dashboards: dd-KubernetesJobsandCronJobsO_grafana.json',
 'kubernetes: dashboards: dd-KubernetesNodesO_grafana.json',
 'sock-shop: dashboards: sock-shop-perf.json']
?

Proceed? (Y/n)

Install kubernetes artifacts

Code Block
# Install all kubernetes dashboard artifacts.
# NOTE THAT --install DOES NOT ASK FOR ANY CONFIRMATION. Use --explore for that.

python3 catalog.py --install --artifact_type=dashboards --list kubernetes

Install all artifacts

Code Block
# Install all available artifacts, optionally of a given type. If --artifact_type
# is not supplied, then all artifact types will be installed.
# NOTE THAT --install DOES NOT ASK FOR ANY CONFIRMATION. Use --explore for that.

python3 catalog.py --install --artifact_type=dashboards
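
# To install all artifacts of every type, omit --artifact_type (per the note above):
python3 catalog.py --install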

...

  • The above step downloads the artifacts to the /tmp/<artifact_type>_downloads directory. Use the following command to explore and install these artifacts into the Kloudfuse embedded Grafana.

Note/Limitation: If uploading a directory, make sure your alerts are in a directory structure like /tmp/alerts_downloads/assets/<any_folder_name>/alerts/ and then use the command below; otherwise you may not see any alerts to select from. A sketch of this layout follows the command.

Code Block
python3 catalog.py --explore --artifact_type=alerts --use_dir=/tmp/alerts_downloads --grafana_address=https://<target-cluster-addr> --grafana_username="SOURCE_GRAFANA_USERNAME" --grafana_password="SOURCE_GRAFANA_PASSWD"
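
For reference, a sketch of the directory layout described in the note above, created before running the command; the folder name my_alerts and the file name are placeholders, not values from this guide:

Code Block
# Example layout only; adjust the folder and file names to your environment.
mkdir -p /tmp/alerts_downloads/assets/my_alerts/alerts
cp my-alert-rules.json /tmp/alerts_downloads/assets/my_alerts/alerts/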

Note: If these commands cannot be executed from the pod because <target-cluster-addr> is not reachable from it, you can use the following instructions to extract the catalog tool used above to a location from which <target-cluster-addr> can be reached:

Code Block
# Requires python 3.10

mkdir /tmp/catalog_service/
cd /tmp/

# Get the catalog-service pod id for the following command. 
kubectl cp catalog-service-<...>:/catalog_service ./catalog_service/
pip3 install -r ./catalog_service/requirements.txt

# migrate alerts (use -h for help)
python3 ./catalog_service/catalog.py --download_installed --grafana_address https://<source-kfs-cluster>.kloudfuse.io --artifact_type alerts
python3 ./catalog_service/catalog.py --grafana_address https://<target-kfs-cluster>.kloudfuse.io --artifact_type alerts --explore --use_dir /tmp/alerts_downloads

# migrate dashboards (use -h for help)

python3 ./catalog_service/catalog.py --download_installed --grafana_address https://<source-kfs-cluster>.kloudfuse.io --artifact_type dashboards
python3 ./catalog_service/catalog.py --grafana_address https://<target-kfs-cluster>.kloudfuse.io --artifact_type dashboards --explore --use_dir /tmp/dashboards_downloads

Manually uploading artifacts (dashboards/alerts) to Kloudfuse

Download from existing

Ensure the alerts/dashboards you want to upload to Kloudfuse have been downloaded in JSON format and are available in the catalog-service pod. You can connect to the catalog-service pod using the instructions above.
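
If the JSON files are on your workstation, one way to get them into the pod is kubectl cp; the pod name and destination path below are placeholders, not values from this guide:

Code Block
# Copy a locally exported dashboard JSON file into the catalog-service pod.
# Replace catalog-service-<...> with the actual pod name from kubectl get pods.
kubectl cp ./my-dashboard.json catalog-service-<...>:/tmp/my-dashboard.json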

Upload to Kloudfuse

  • Create a directory structure as follows in your <<download_directory>>

    • For alerts mkdir -p <<download_directory>>/assets/upload/alerts

    • For dashboards mkdir -p <<download_directory>>/assets/upload/dashboards

  • Copy the alerts and/or dashboards to the respective directories. Use the following command to explore and install these artifacts into the Kloudfuse embedded Grafana (a combined end-to-end sketch follows this list).

    Code Block
    python3 catalog.py --explore --artifact_type=alerts --use_dir=<<download_directory>>
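
Putting the steps above together, a minimal end-to-end sketch for dashboards; the paths and file names are examples only:

Code Block
# Example only: stage locally downloaded dashboard JSON files and install them.
mkdir -p /tmp/my_uploads/assets/upload/dashboards
cp ./my-dashboard.json /tmp/my_uploads/assets/upload/dashboards/
python3 catalog.py --explore --artifact_type=dashboards --use_dir=/tmp/my_uploads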

Caveats

  • The artifacts installed using the catalog are considered “staged” for the user to tweak as needed. Ideally, this should not be treated as the final destination for managing artifacts and their life-cycle (as code), since each organization has its own way of managing these. Users should move the artifacts to other folders as desired and then apply their own life-cycle management workflows from there.

  • If artifacts with the same name in the same folder are already installed (found), the catalog will NOT overwrite them and will skip them by default. If users need to update their installed artifacts, they should first delete the existing artifacts. A sample config.yaml file is also provided that can be used to control how artifacts are merged (applies to alerts only for now).

...