Veeam Kasten Tools

The k10tools binary provides commands that help validate whether a cluster is set up correctly before installing Veeam Kasten and help debug Veeam Kasten's microservices. The latest version of k10tools can be found here.

Binaries are available for the following operating systems and architectures:

Operating System |x86_64 (amd64) |Arm (arm64/v8) |Power (ppc64le)
Linux            |Yes            |Yes            |Yes
MacOS            |Yes            |Yes            |No
Windows          |Yes            |Yes            |No

Authentication Service

The k10tools debug auth sub-command can be used to debug Veeam Kasten's Authentication service when it is set up with Active Directory or OpenShift-based authentication. Provide the -d openshift flag for OpenShift-based authentication; in this mode the command verifies the connection to the OpenShift OAuth server, validates the OpenShift Service Account token, and searches the Service Account for error events.

./k10tools debug auth

Dex:
  OIDC Provider URL: https://api.test
  Release name: k10
  Dex well known URL:https://api.test/k10/dex/.well-known/openid-configuration
  Trying to connect to Dex without TLS (insecureSkipVerify=false)
  Connection succeeded  -  OK

./k10tools debug auth -d openshift

Verify OpenShift OAuth Server Connection:
  Openshift URL - https://api.test:6443/.well-known/oauth-authorization-server
  Trying to connect to Openshift without TLS (insecureSkipVerify=false)
  Connection failed, testing other options
  Trying to connect to Openshift with TLS but verification disabled (insecureSkipVerify=true)
  Connection succeeded  -  OK

Verify OpenShift Service Account Token:
  Initiating token verification
  Fetched ConfigMap - k10-dex
  Service Account for OpenShift authentication - k10-dex-sa
  Service account fetched
  Secret - k10-dex-sa-token-7fwm7 retrieved
  Token retrieved from Service Account secrets
  Token retrieved from ConfigMap
  Token matched  -  OK

Get Service Account Error Events:
  Searching for events with error in Service Account - k10-dex-sa
  Found event/s in service account with error
  {"type":"Warning","from":"service-account-oauth-client-getter","reason":"NoSAOAuthRedirectURIs","object":"ServiceAccount/k10-dex-sa","message":"system:serviceaccount:kasten-io:k10-dex-sa has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>","timestamp":"2021-04-08 05:06:06 +0000 UTC"} ({"message":"service account event error","function":"kasten.io/k10/kio/tools/k10primer/k10debugger.(*OpenshiftDebugger).getServiceAccountErrEvents","linenumber":224})  -  Error

Catalog Service

The k10tools debug catalog size sub-command can be used to obtain the size of K10's catalog and the disk usage of the volume where the catalog is stored.

% ./k10tools debug catalog size

 Catalog Size:
   total 380K
 -rw------- 1 kio kio 512K Jan 26 23:57 model-store.db
 Catalog Volume Disk Usage:
   Filesystem                                                                Size  Used Avail Use% Mounted on
 /dev/disk/by-id/scsi-0DO_Volume_pvc-4acee649-5c24-4a79-955f-9d8fdfb10ac7   20G   45M   19G   1% /mnt/k10state

Backup Actions

The k10tools debug backupactions sub-command can be used to list the BackupActions created in the cluster. Use the -o json flag to obtain more detailed information in JSON format.

% ./k10tools debug backupactions

Name                            Namespace     CreationTimestamp                           PolicyName      PolicyNamespace
scheduled-6wbzw                 default               2021-01-29 07:57:08 +0000 UTC     default-backup        kasten-io
scheduled-5thsg                 default               2021-01-29 05:37:03 +0000 UTC     default-backup        kasten-io
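
For more detail, the same listing can be requested in JSON format using the documented -o json flag (output omitted here):

% ./k10tools debug backupactions -o json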

Kubernetes Nodes

The k10tools debug node sub-command can be used to obtain information about the Kubernetes nodes. Use the -o json flag to obtain more detailed information in JSON format.

% ./k10tools debug node

  Name                 |OS Image
  onkar-1-pool-1-3d1cf |Debian GNU/Linux 10 (buster)
  onkar-1-pool-1-3d1cq |Debian GNU/Linux 10 (buster)
  onkar-1-pool-1-3d1cy |Debian GNU/Linux 10 (buster)
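
Similarly, a JSON-formatted node listing can be requested with the -o json flag (output omitted here):

% ./k10tools debug node -o json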

Application Information

The k10tools debug applications sub-command can be used to obtain information about the applications running in a given namespace. Use the -o json flag to obtain more detailed information in JSON format (note: JSON output is currently supported only for PVCs). Use the -n flag to specify the namespace, e.g. -n kasten-io. If no namespace is provided, application information is fetched from the default namespace.

% ./k10tools debug applications

  Fetching information from namespace - kasten-io | resource - ingresses
  Name        |Hosts |Address        |Ports |Age |
  k10-ingress |*     |138.68.228.199 |80    |36d |

  Fetching information from namespace - kasten-io | resource - daemonsets
  Resources not found

  PVC Information -
  Name                |Volume                                     |Capacity
  catalog-pv-claim    |pvc-4fc67966-aee7-493c-b2fd-c6251933875c   |20Gi
  jobs-pv-claim       |pvc-cdda0458-6b63-48a6-8e7f-c1b947600c9f   |20Gi
  logging-pv-claim    |pvc-36a92c5b-d018-4ce8-ba79-970d15554387   |20Gi
  metering-pv-claim   |pvc-8c0c6477-216d-4227-a6af-9725ce2a3dc1   |2Gi
  prometheus-server   |pvc-1b14f51c-5abf-45f5-8bd9-1a58d86d58ef   |8Gi
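
As an illustration of the flags described above, the following invocation targets the kasten-io namespace and requests JSON output for the PVC information (output omitted here):

% ./k10tools debug applications -n kasten-io -o json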

Veeam Kasten Primer for Pre-Flight Checks

The k10tools primer sub-command can be used to run pre-flight checks before installing Veeam Kasten. Refer to the section about Pre-Flight Checks for more details.

The code block below shows an example of the output when executed on a Kubernetes cluster deployed in DigitalOcean.

% ./k10tools primer

Kubernetes Version Check:
  Valid kubernetes version (v1.17.13)  -  OK

RBAC Check:
  Kubernetes RBAC is enabled  -  OK

Aggregated Layer Check:
  The Kubernetes Aggregated Layer is enabled  -  OK

CSI Capabilities Check:
  Using CSI GroupVersion snapshot.storage.k8s.io/v1alpha1  -  OK

Validating Provisioners:
kube-rook-ceph.rbd.csi.ceph.com:
  Is a CSI Provisioner  -  OK
  Storage Classes:
    rook-ceph-block
      Valid Storage Class  -  OK
  Volume Snapshot Classes:
    csi-rbdplugin-snapclass
      Has k10.kasten.io/is-snapshot-class annotation set to true  -  OK
      Has deletionPolicy 'Retain'  -  OK

dobs.csi.digitalocean.com:
  Is a CSI Provisioner  -  OK
  Storage Classes:
    do-block-storage
      Valid Storage Class  -  OK
  Volume Snapshot Classes:
    do-block-storage
      Has k10.kasten.io/is-snapshot-class annotation set to true  -  OK
      Missing deletionPolicy, using default

Validate Generic Volume Snapshot:
  Pod Created successfully  -  OK
  GVS Backup command executed successfully  -  OK
  Pod deleted successfully  -  OK

Veeam Kasten Primer for Upgrades

The k10tools primer upgrade sub-command can be used to find the recommended upgrade path for your Veeam Kasten version and to check that there is adequate space to perform the upgrade. It only provides commands for Helm deployments. See Upgrading Veeam Kasten for additional details. This tool requires internet access to http://gcr.io.

% ./k10tools primer upgrade
Catalog Volume Disk Usage:
  Filesystem      Size  Used Avail Use% Mounted on
/dev/sdf         20G  1.3G   19G   7% /mnt/k10state

Current K10 Version: 4.5.5
Latest K10 Version: 4.5.6
Helm Install: true

* To upgrade successfully you must have at least 50% free in catalog storage

Recommended upgrade path:
  helm repo update && \
    helm get values k10 --output yaml --namespace=kasten-io > k10_val.yaml && \
    helm upgrade k10 kasten/k10 --namespace=kasten-io -f k10_val.yaml --version=4.5.6

Veeam Kasten Primer for Storage Connectivity Checks

Note

Run the k10tools primer storage connect --help command to see all supported sub-commands.

The k10tools primer storage connect command family can be used to check the accessibility of a given storage provider.

Currently the following storage providers are supported for this group of checks:

  • Azure

  • Google Cloud Storage (GCS)

  • Portworx (PWX)

  • S3 Compatible Storage

  • Veeam Backup Server (VBR)

  • vSphere

Each sub-command corresponds to a particular storage provider and accepts a configuration file with the parameters required to make the connection. The configuration file format can be viewed by running the config-yaml sub-command as follows (the example is for GCS):

% ./k10tools primer storage connect gcs config-yaml
# The geography in which Google Cloud Storage buckets are located
region: <gcs_region> # Example: us-central1
# Google Cloud Platform project ID
project_id: <gcs_project_id>
# Google Cloud Platform service key
service_key: <gcs_service_key>
# Maximum number of buckets to collect during checking connectivity to Google Cloud Storage.
list_buckets_limit: 10 # Default is 0
# Google Cloud Storage operations with required parameters to check (Optional).
# Use the same parameters to run actions against the same objects.
operations:
  - action: PutObject
    container_name: <gcs_bucket_name> # Container name
    object_name: <gcs_object_name> # Object name
    content_string: <object_content> # Object content string
  - action: ListObjects
    container_name: <gcs_bucket_name> # Container name
    limit: 100 # Maximum number of items to collect (Optional). Default is 0
  - action: DeleteObject
    container_name: <gcs_bucket_name> # Container name
    object_name: <gcs_object_name> # Object name

The output below is an example of running the GCS connectivity checker:

% ./k10tools primer storage connect gcs -f ./gcs_check.yaml
Using "./gcs_check.yaml " file content as config source
Connecting to Google Cloud Storage (region: us-west1)
-> Connect to Google Cloud Storage
-> List Google Cloud Storage containers
-> Put Google Cloud Storage object
-> List Google Cloud Storage objects
-> Delete Google Cloud Storage object
Google Cloud Storage Connection Checker:
  Connected to Google Cloud Storage with provided credentials  -  OK
  Listed Google Cloud Storage containers: [testbucket20221123 55-demo 66-demo 77-demo]  -  OK
  Added Google Cloud Storage object testblob20221123 to container testbucket20221123  -  OK
  Listed Google Cloud Storage container testbucket20221123 objects: [testblob20221123]  -  OK
  Deleted Google Cloud Storage object testblob20221123 from container testbucket20221123  -  OK

Veeam Kasten Primer for Storage Integration Checks

Note

Run the k10tools primer storage check --help command to see all supported sub-commands.

CSI Capabilities Check

The k10tools primer storage check csi sub-command can be used to check whether a specified CSI storage class is able to carry out snapshot and restore operations, and to report configuration issues if it is not. It creates a temporary application to test this.

The command accepts a configuration file in the following format:

% cat ./csi_check.yaml
storage_class: standard-rwo # specifies the storage class
run_as_user: 1000           # specifies the user the pod runs as

The output below is an example of running the CSI checker:

% ./k10tools primer storage check csi -f ./csi_check.yaml
Using "./csi_check.yaml" file content as config source
Starting CSI Checker. Could take up to 5 minutes
Creating application
  -> Created pod (kubestr-csi-original-podr2rkz) and pvc (kubestr-csi-original-pvc2fx6s)
Taking a snapshot
  -> Created snapshot (kubestr-snapshot-20220608113008)
Restoring application
  -> Restored pod (kubestr-csi-cloned-podhgx57) and pvc (kubestr-csi-cloned-pvccfh8w)
Cleaning up resources
CSI Snapshot Walkthrough:
  Using annotated VolumeSnapshotClass (my-snapshotclass)
  Successfully tested snapshot restore functionality.  -  OK

Direct Cloud Provider Integration Checks

The k10tools primer storage check sub-command family checks the snapshot/restore capabilities of supported cloud storage providers through direct invocations of their native storage APIs.

Currently, the following cloud providers are supported:

  • Amazon Elastic Block Store (AWS EBS)

  • Azure Persistent Disk

  • Google Compute Engine Persistent Disk (GCE PD)

To run a desired check, append the awsebs, azure, or gcepd suffix to the k10tools primer storage check command. Each of these sub-commands accepts parameters passed via a configuration file and creates a test application that performs snapshot/restore via vendor-specific storage APIs. The configuration file format for each sub-command can be viewed by executing k10tools primer storage check <awsebs|azure|gcepd> config-yaml.

Example configuration file format for the GCE PD checker:

% ./k10tools primer storage check gcepd config-yaml
# GCP Project ID
project_id: <gcp_project_id>
# GCP Service Key
service_key: <gcp_service_key>
# Size of a GCE PD volume (in mebibytes) to be created during the test
volume_size: 100

The output below is an example of running the GCE PD provider check:

% ./k10tools primer storage check gcepd -f ./gcepd_check.yaml
Using "./gcepd_check.yaml" file content as config source
Checking Backup/Restore capabilities of GCE PD storage provider
-> Setup Provider
-> Create Namespace
-> Create Affinity Pod
-> Create Volume
-> Create Test Pod
-> Write Data
-> Create Snapshot
-> Delete Test Pod
-> Delete Volume
-> Restore Volume
-> Restore Test Pod
-> Verify Data
-> Delete Test Pod
-> Delete Affinity Pod
-> Delete Namespace
-> Delete Snapshot
-> Delete Volume
GCE PD Backup/Restore Checker:
  Created storage provider  -  OK
  Created namespace 'primer-test-ns-8q9cl'  -  OK
  Created affinity pod 'primer-affinity-pod-9ctmj'  -  OK
  Created volume 'vol-2d7d9b2a-7701-11ed-8664-6a5ef5ff8566'  -  OK
  Created test pod 'primer-test-pod-v6nc8'  -  OK
  Wrote data '2022-12-08 18:04:25.144008 +0400 +04 m=+30.117055584' to pod 'primer-test-pod-v6nc8'  -  OK
  Created snapshot 'snap-39be78a0-7701-11ed-8664-6a5ef5ff8566' for volume 'vol-2d7d9b2a-7701-11ed-8664-6a5ef5ff8566'  -  OK
  Deleted test pod 'primer-test-pod-v6nc8'  -  OK
  Deleted volume 'vol-2d7d9b2a-7701-11ed-8664-6a5ef5ff8566'  -  OK
  Restored volume 'vol-65f2d4c0-7701-11ed-8664-6a5ef5ff8566' from snapshot 'snap-39be78a0-7701-11ed-8664-6a5ef5ff8566'  -  OK
  Created test pod 'primer-test-pod-k7knx'  -  OK
  Verified restored data  -  OK
  Deleted test pod 'primer-test-pod-k7knx'  -  OK
  Deleted affinity pod 'primer-affinity-pod-9ctmj'  -  OK
  Deleted namespace 'primer-test-ns-8q9cl'  -  OK
  Deleted snapshot 'snap-39be78a0-7701-11ed-8664-6a5ef5ff8566'  -  OK
  Deleted volume 'vol-65f2d4c0-7701-11ed-8664-6a5ef5ff8566'  -  OK

vSphere First Class Disk Integration Check

Due to the limited functionality provided by the vSphere CSI driver, Veeam Kasten has to combine volume provisioning through the CSI interface with direct calls to the vSphere API to snapshot and restore volumes.

The k10tools primer storage check vsphere sub-command provisions a First Class Disk (FCD) volume using a CSI storage class and performs snapshot/restore via vSphere API.

The command accepts a configuration file in the following format (the format can also be viewed by running the config-yaml sub-command):

% cat ./vsphere_check.yaml
endpoint: test.endpoint.local     # The vSphere endpoint
username: *****                   # The vSphere username
password: *****                   # The vSphere password
storage_class: test-storage-class # vSphere CSI provisioner storage class name
volume_size: 100                  # Size of a vSphere volume (in mebibytes) to be created during the test

The output below is an example of running the vSphere CSI checker:

% ./k10tools primer storage check vsphere -f ./vsphere_check.yaml
Using "./vsphere_check.yaml" file content as config source
-> Setup Provider
-> Create Namespace
-> Create Volume
-> Create Test Pod
-> Write Data
-> Create Snapshot
-> Delete Test Pod
-> Delete Volume
   - Delete PVC 'primer-test-vsphere-pvc-b825l'
-> Restore Volume
   - Restore vSphere FCD
   - Restore PV
   - Restore PVC
-> Restore Test Pod
-> Verify Data
-> Delete Test Pod
-> Delete Snapshot
-> Delete Volume
   - Delete PVC 'primer-test-vsphere-pvc-9blfz'
-> Delete Namespace
VSphere backup/restore checker:
  Created storage provider  -  OK
  Created namespace 'primer-test-ns-fwgfl'  -  OK
  Created PVC 'primer-test-vsphere-pvc-b825l'  -  OK
  Created test pod 'primer-test-pod-2frfw'  -  OK
  Wrote data '2022-12-08 18:36:21.252404 +0400 +04 m=+29.712849501' to pod 'primer-test-pod-2frfw'  -  OK
  Created snapshot '50cf961e-3e87-4cc0-8031-a09b6c6b6a2e:127c586c-5251-4a3d-976c-d728cd370926' for FCD '50cf961e-3e87-4cc0-8031-a09b6c6b6a2e' (PV 'pvc-ab60f6a8-d1b6-4861-9c95-b7404e1c1ea5')  -  OK
  Deleted test pod 'primer-test-pod-2frfw'  -  OK
  Deleted PVC 'primer-test-vsphere-pvc-b825l', PV 'pvc-ab60f6a8-d1b6-4861-9c95-b7404e1c1ea5' and FCD '50cf961e-3e87-4cc0-8031-a09b6c6b6a2e'  -  OK
  Restored FCD '253cebc3-80cb-470c-90a8-e9e80a4f2188', PV 'primer-test-vsphere-pv-lqvmw' and PVC 'primer-test-vsphere-pvc-9blfz' from snapshot '50cf961e-3e87-4cc0-8031-a09b6c6b6a2e:127c586c-5251-4a3d-976c-d728cd370926'  -  OK
  Created test pod 'primer-test-pod-5lk26'  -  OK
  Verified restored data  -  OK
  Deleted test pod 'primer-test-pod-5lk26'  -  OK
  Deleted snapshot '50cf961e-3e87-4cc0-8031-a09b6c6b6a2e:127c586c-5251-4a3d-976c-d728cd370926'  -  OK
  Deleted PVC 'primer-test-vsphere-pvc-9blfz', PV 'primer-test-vsphere-pv-lqvmw' and FCD '253cebc3-80cb-470c-90a8-e9e80a4f2188'  -  OK
  Deleted namespace 'primer-test-ns-fwgfl'  -  OK

Veeam Kasten Primer Block Mount Check

The k10tools primer storage check blockmount sub-command is provided to test whether the PersistentVolumes provisioned by a StorageClass can be supported in block mode by Veeam Kasten. If a StorageClass passes this test, see Block Mode Exports for how to indicate this fact to Veeam Kasten.

The checker performs two tests:

  1. The kubestr block mount test is used to verify that the StorageClass volumes can be used with Block VolumeMounts.

  2. If the first test succeeds, a second test is run to verify that Veeam Kasten can restore block data to such volumes. This step is performed only if Veeam Kasten does not use provisioner-specific direct network APIs to restore data to a block volume during import.

Both tests independently allocate and release the Kubernetes resources they need, and it may take a few minutes for them to complete.

The checker can be invoked by the k10primer.sh script in a manner similar to that described in the Pre-flight Checks section:

% curl https://docs.kasten.io/tools/k10_primer.sh | bash /dev/stdin blockmount -s ${STORAGE_CLASS_NAME}

Alternatively, for more control over the invocation of the checker, use a local copy of the k10tools program to obtain a YAML configuration file as follows:

% ./k10tools primer storage check blockmount config-yaml
# Storage class name (string)
storage_class: <storage class being tested>
# PVC size (Kubernetes Quantity string format)
pvc_size: 1Gi
# The user identifier for pods. (int64)
run_as_user: 1000
# Cleanup only (bool)
cleanup_only: false
# Mount test timeout seconds (uint32)
mount_test_timeout_seconds: 60
# Import test timeout seconds (uint32)
import_test_timeout_seconds: 300
# Disable the invocation of the kubestr blockmount test (bool)
disable_mount_test: false
# Disable the invocation of the import validation test (bool)
disable_import_test: false

The YAML output should be saved to a file and edited to set the desired StorageClass. Only the storage_class property is required; other properties will default to the values displayed in the output if not explicitly set.
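
For example, a minimal configuration file that relies on the defaults for everything except the StorageClass (here assuming the standard-rwo class that appears in the sample output further below) could look like:

% cat ./blockmount.yaml
storage_class: standard-rwo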

Then run the checker as follows:

% ./k10tools primer storage check blockmount -f ./blockmount.yaml

The test emits multiple messages as it progresses. On success, you will see a summary message like this at the end:

Block mount checker:
StorageClass standard-rwo supports Block volume mode
StorageClass standard-rwo is supported by K10 in Block volume mode

On failure, the summary message would look like this:

Block mount checker:
StorageClass efs-sc does not support Block volume mode: had issues creating Pod: 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.  -  Error

The checker may produce spurious errors if the StorageClass specifies the Immediate VolumeBindingMode and the PersistentVolumes provisioned by the test have different node affinities. In such a case, use a variant of the StorageClass that specifies the WaitForFirstConsumer VolumeBindingMode instead, as sketched below.
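
The sketch below shows what such a variant might look like; the name, provisioner, and parameters are placeholders and should be copied from the original StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <original-class-name>-wffc
provisioner: <original-provisioner>   # copy from the original StorageClass
parameters: {}                        # copy any provisioner parameters as well
volumeBindingMode: WaitForFirstConsumer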

Use the -h flag to get all command usage options.

Veeam Kasten Primer for Authentication Service Checks

Note

Run the k10tools primer auth check --help command to see all supported sub-commands.

The k10tools primer auth check sub-command family performs basic sanity checks for third-party authentication services. Currently, it supports checkers for Active Directory/LDAP and OIDC.

Each service-specific command accepts the required parameters via a configuration file, the format of which can be viewed by running the config-yaml sub-command (the example is for the OIDC checker):

% ./k10tools primer auth check oidc config-yaml
# OIDC provider URL
provider_url: <provider_url> # Example: https://accounts.google.com

The output below is an example of running the OIDC checker:

% ./k10tools primer auth check oidc -f ./oidc_check.yaml
Using "./oidc_check.yaml" file content as config source
Checking the OIDC provider: https://accounts.google.com
OIDC Provider Checker:
  Successfully connected to the OIDC provider  -  OK

Generic Volume Snapshot Capabilities Check

The k10tools primer gvs-cluster-check command can be used to check whether the cluster is compatible with Veeam Kasten Generic Volume Snapshots. Generic backup commands are executed in a pod running the kanister-tools image and checked for the appropriate output.

Use the -n flag to specify the namespace. By default, the kasten-io namespace is used.

Use the -s flag to specify a storage class for the checks to run against. By default, no storage class is used and the checks run using temporary storage from the node the pod runs on.

Use the --service-account flag to specify the service account to be used by the pods during the GVS checks. By default, the default service account is used. An example invocation combining these flags is shown below.
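
The storage class and service account names below are placeholders:

% ./k10tools primer gvs-cluster-check -n kasten-io -s <storage-class-name> --service-account <service-account-name>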

Note

By default, the k10tools command uses the publicly available kanister-tools image at gcr.io/kasten-images/kanister-tools:<K10 version>. Because this image is not available in air-gapped environments, override the default image by setting the KANISTER_TOOLS environment variable to a kanister-tools image that is available in the air-gapped environment's local registry.

Example:

export KANISTER_TOOLS=<your local registry>/<your local repository name>/kanister-tools:k10-<K10 version>

% ./k10tools primer gvs-cluster-check
  Validate Generic Volume Snapshot:
    Pod Created successfully  -  OK
    GVS Backup command executed successfully  -  OK
    Pod deleted successfully  -  OK

Veeam Kasten Generic Storage Backup Sidecar Injection

The k10tools k10genericbackup command can be used to make Kubernetes workloads compatible with K10 Generic Storage Backup by injecting a Kanister sidecar and setting the forcegenericbackup=true annotation on the workloads.

Note

By default, the k10tools command uses the publicly available kanister-tools image at gcr.io/kasten-images/kanister-tools:<K10 version>. Because this image is not available in air-gapped environments, override the default image by setting the KANISTER_TOOLS environment variable to a kanister-tools image that is available in the air-gapped environment's local registry.

Example:

export KANISTER_TOOLS=<your local registry>/<your local repository name>/kanister-tools:k10-<K10 version>

## Usage ##
% ./k10tools k10genericbackup --help

k10genericbackup makes Kubernetes workloads compatible for K10 Generic Storage Backup by
injecting a Kanister sidecar and setting the forcegenericbackup=true annotation on the workloads.
To know more about K10 Generic Storage Backup, visit https://docs.kasten.io/latest/install/generic.html

Usage:
  k10tools k10genericbackup [command]

Available Commands:
  inject      Inject Kanister sidecar to workloads to enable K10 Generic Storage Backup
  uninject    Uninject Kanister sidecar from workloads to disable K10 Generic Storage Backup

Flags:
      --all-namespaces         resources in all the namespaces
  -h, --help                   help for k10genericbackup
      --k10-namespace string   namespace where K10 services are deployed (default "kasten-io")
  -n, --namespace string       namespace (default "default")

Global Flags:
  -o, --output string   Options(json)

Use "k10tools k10genericbackup [command] --help" for more information about a command.


## Example: Inject a Kanister sidecar to all the workloads in postgres namespace ##
% ./k10tools k10genericbackup inject all -n postgres

Inject deployment:

Inject statefulset:
  Injecting sidecar to statefulset postgres/mysql
  Updating statefulset postgres/mysql
  Waiting for statefulset postgres/mysql to be ready
  Sidecar injection successful on statefulset postgres/mysql!  -  OK
  Injecting sidecar to statefulset postgres/postgres-postgresql
  Updating statefulset postgres/postgres-postgresql
  Waiting for statefulset postgres/postgres-postgresql to be ready
  Sidecar injection successful on statefulset postgres/postgres-postgresql!  -  OK

Inject deploymentconfig:
  Skipping. Env is not compatible for Kanister sidecar injection
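
To remove the sidecars again, the uninject sub-command listed in the help output can be used; assuming it accepts the same all target as inject, the invocation would be (output omitted here):

% ./k10tools k10genericbackup uninject all -n postgres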

CA Certificate Check

The k10tools debug ca-certificate command can be used to check whether the CA certificate is installed properly in Veeam Kasten. The -n flag can be used to specify the namespace; it defaults to kasten-io. See the installation documentation for more information on the installation process.

% ./k10tools debug ca-certificate
  CA Certificate Checker:
    Fetching configmap which contains CA Certificate information : custom-ca-bundle-store
    Certificate exists in configmap  -  OK
    Found container : aggregatedapis-svc to extract certificate
    Certificate exists in container at /etc/ssl/certs/custom-ca-bundle.pem
    Certificates matched successfully  -  OK
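
If Veeam Kasten is installed in a non-default namespace, pass it explicitly with the -n flag described above; the namespace below is a placeholder:

% ./k10tools debug ca-certificate -n <veeam-kasten-namespace>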

Installation of Veeam Kasten in OpenShift clusters

The k10tools openshift prepare-install command can be used to prepare an OpenShift cluster for the installation of Veeam Kasten. It extracts a CA certificate from the cluster, installs it in the namespace where Veeam Kasten will be installed, and generates the helm command to be used for installing Veeam Kasten. The -n flag can be used to specify the namespace where Veeam Kasten will be installed; the default namespace is kasten-io. The --recreate-resources flag recreates resources that may have been created by a previous execution of this command. Set the --insecure-ca flag to true if the certificate issuing authority is not trusted. A combined example using these flags is shown after the output below.

% ./k10tools openshift prepare-install
Openshift Prepare Install:
  Certificate found in Namespace 'openshift-ingress-operator' in secret 'router-ca'  -  OK
  Checking if namespace 'kasten-io' exists
  Namespace 'kasten-io' exists  -  OK
  Created configmap 'custom-ca-bundle-store' with custom certificate in it  -  OK
  Searching for Apps Base Domain Name in Ingress Controller
  Found Apps Base Domain 'apps.test.aws.kasten.io'  -  OK
  Created Service Account 'k10-dex-sa' successfully  -  OK

Please use below helm command to start K10 installation
--------------------------------------------------------------------
 helm repo add kasten https://charts.kasten.io/
 helm install k10 kasten/k10 --namespace=kasten-io \
 --set scc.create=true \
 --set route.enabled=true \
 --set route.tls.enabled=true \
 --set auth.openshift.enabled=true \
 --set auth.openshift.serviceAccount=k10-dex-sa \
 --set auth.openshift.clientSecret=<your key will be here automatically>\
 --set auth.openshift.dashboardURL=https://k10-route-kasten-io.apps.test.aws.kasten.io/k10/ \
 --set auth.openshift.openshiftURL=https://api.test.aws.kasten.io:6443 \
 --set auth.openshift.insecureCA=false \
 --set cacertconfigmap.name=custom-ca-bundle-store
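
A combined invocation using the optional flags described above might look like the following; the exact boolean flag syntax is an assumption:

% ./k10tools openshift prepare-install -n kasten-io --recreate-resources --insecure-ca=true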

Extracting OpenShift CA Certificates

The k10tools openshift extract-certificates command is used to extract CA certificates from OpenShift clusters to the Veeam Kasten namespace. The following flags can be used to configure the command:

  • --ca-cert-configmap-name. The name of the Kubernetes ConfigMap that contains all certificates required for Veeam Kasten. If no name is provided, the default name custom-ca-bundle-store will be used.

    • If a ConfigMap with the given name does not exist, the command generates a new ConfigMap.

    • If a ConfigMap with the given name exists, the command merges the newly extracted certificates with the existing certificates in the ConfigMap without creating duplicates.

  • --k10-namespace or -n. The Kubernetes namespace where Veeam Kasten is expected to be installed. The default value is kasten-io.

  • --release-name. The K10 Release Name. The default value is k10.

% ./k10tools openshift extract-certificates
% kubectl get configmap custom-ca-bundle-store -n kasten-io
NAME                     DATA   AGE
custom-ca-bundle-store   1      46s
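
The defaults can also be set explicitly; the ConfigMap name, namespace, and release name below simply restate the default values:

% ./k10tools openshift extract-certificates --ca-cert-configmap-name custom-ca-bundle-store -n kasten-io --release-name k10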

Listing vSphere snapshots created by Veeam Kasten

Veeam Kasten integrates directly with vSphere clusters. Snapshots created by Veeam Kasten can be listed using k10tools.

export VSPHERE_ENDPOINT=<url of ESXi or vCenter instance to connect to>
export VSPHERE_USERNAME=<vSphere username>
export VSPHERE_PASSWORD=<vSphere password>

k10tools provider-snapshots list -t FCD

Note

Only snapshots created with Veeam Kasten version 5.0.7 or later will be listed by the current version of the tool. Earlier snapshots might be listed if they were created using a vSphere infrastructure profile with the tagging option enabled (since deprecated). To list earlier snapshots, k10tools v6.5.0 should be used with an additional environment variable:

# category name can be found from the vSphere infrastructure profile, in the form of "k10:<UUID>"

export VSPHERE_SNAPSHOT_TAGGING_CATEGORY=$(kubectl -n kasten-io get profiles $(kubectl -n kasten-io get profiles -o=jsonpath='{.items[?(@.spec.infra.type=="VSphere")].metadata.name}') -o jsonpath='{.spec.infra.vsphere.categoryName}')