Storage Integration

K10 supports direct integration with public cloud storage vendors, native Ceph support, and CSI-based integration. While most integrations are transparent, the sections below document the configuration needed for the exceptions.

Direct Provider Support

K10 supports seamless and direct storage integration with a number of storage providers. The following storage providers are automatically discovered and configured within K10:

  • Amazon Elastic Block Store (EBS)

  • Amazon Elastic File System (EFS)

  • Azure Managed Disks

  • Google Persistent Disk

  • IBM Cloud Block Storage

Container Storage Interface (CSI) Support

Apart from direct storage provider integration, K10 also supports invoking volume snapshot operations via the Container Storage Interface (CSI). For this to work correctly, please ensure the following requirements are met; a quick way to verify them is sketched after the list.

CSI Requirements

  • Kubernetes v1.14.0 or higher

  • The VolumeSnapshotDataSource feature gate has been enabled in the Kubernetes cluster

  • A CSI driver that has Volume Snapshot support. Please look at the list of CSI drivers to confirm snapshot support.
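
As a quick check, the following commands can verify the first two requirements (a sketch; the CRD names assume the standard external-snapshotter installation that ships with snapshot-capable CSI drivers):

# Confirm the server version is v1.14.0 or higher
$ kubectl version --short

# Confirm the VolumeSnapshot CRDs are installed; their absence usually
# means the external-snapshotter (and thus snapshot support) is missing
$ kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io \
    volumesnapshots.snapshot.storage.k8s.io \
    volumesnapshotcontents.snapshot.storage.k8s.io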

CSI Snapshot Configuration

For each CSI driver, ensure that a VolumeSnapshotClass has been added with the K10 annotation (k10.kasten.io/is-snapshot-class: "true") and that its deletionPolicy is set to Retain.

Setting the deletionPolicy to Retain is required to ensure that snapshot cleanup remains under K10's control and to allow for fast recovery from accidental application deletion. In particular, CSI snapshots consist of a namespaced VolumeSnapshot object and a non-namespaced VolumeSnapshotContent object. If the deletionPolicy is not set to Retain and the namespace is accidentally deleted, the cleanup of the namespaced VolumeSnapshot object will cascade to the VolumeSnapshotContent object and therefore to the underlying storage snapshot.
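
To audit the policy on existing classes, a listing like the following can help (a sketch using kubectl's custom-columns output):

# List each VolumeSnapshotClass with its current deletionPolicy
$ kubectl get volumesnapshotclass \
    -o custom-columns=NAME:.metadata.name,DELETIONPOLICY:.deletionPolicy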

Note that while this setting is recommended, it is not always sufficient. For example, some storage systems will force snapshot deletion if the associated volume is deleted (snapshot lifecycle is not independent of the volume). Similarly, it might be possible to force-delete snapshots through the storage array's native management interface. Enabling backups together with volume snapshots is therefore always recommended for safety.

VolumeSnapshotClass Configuration

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  annotations:
    # Marks this class as the one K10 uses for the driver named below
    k10.kasten.io/is-snapshot-class: "true"
  name: k10-snapshot-class
# The CSI driver this class applies to (GCE Persistent Disk in this example)
snapshotter: pd.csi.storage.gke.io
# Retain keeps snapshot cleanup under K10's control
deletionPolicy: Retain

Given the configuration requirements, the above code illustrates a correctly configured VolumeSnapshotClass for K10. If your VolumeSnapshotClass does not match this template, please follow the instructions below to modify it. If the existing VolumeSnapshotClass cannot be modified, a new one can be created with the required annotation and policy setting.
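
For example, the template above can be saved to a file and applied to create such a class (the file name here is illustrative):

$ kubectl apply -f k10-snapshot-class.yaml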

  1. The deletion policy on the VolumeSnapshotClass can be edited via the following command:

    $ kubectl patch volumesnapshotclass ${VSC_NAME} \
        -p '{"deletionPolicy":"Retain"}' --type=merge
    
  2. Whenever K10 detects volumes that were provisioned via a CSI driver, it will look for a VolumeSnapshotClass with the K10 annotation for that driver and use it to create snapshots. You can annotate an existing VolumeSnapshotClass using:

    $ kubectl annotate volumesnapshotclass ${VSC_NAME} \
        k10.kasten.io/is-snapshot-class=true
    

    Verify that only one VolumeSnapshotClass per storage provisioner has the K10 annotation. Currently, if no VolumeSnapshotClass, or more than one, carries the K10 annotation for a given provisioner, snapshot operations will fail.

    # List the VolumeSnapshotClasses with the K10 annotation
    $ kubectl get volumesnapshotclass -o json | \
        jq -r '.items[] | select(.metadata.annotations["k10.kasten.io/is-snapshot-class"]=="true") | .metadata.name'
    k10-snapshot-class
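
    If more than one class might carry the annotation, a grouping query along these lines (a sketch) flags duplicates per provisioner:

    # Flag provisioners that have more than one annotated class
    $ kubectl get volumesnapshotclass -o json | \
        jq -r '[.items[] | select(.metadata.annotations["k10.kasten.io/is-snapshot-class"]=="true")] | group_by(.snapshotter)[] | select(length > 1) | .[].metadata.name'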
    

Migration Requirements

If application migration across clusters is needed, ensure that the VolumeSnapshotClass names match on both clusters. Because the VolumeSnapshotClass is also used when restoring volumes, an identical name is required on the destination cluster.
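
A quick comparison can confirm this before migrating (a sketch; the kubectl context names source and dest are placeholders for your own):

# Compare VolumeSnapshotClass names across the two clusters
$ diff <(kubectl --context=source get volumesnapshotclass -o name | sort) \
       <(kubectl --context=dest get volumesnapshotclass -o name | sort)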

CSI Snapshotter Minimum Requirements

Finally, ensure that the csi-snapshotter container for every CSI driver you have installed is at least version v1.2.2. If your CSI driver ships with an older version that has known bugs, it might be possible to transparently upgrade it in place as shown below.

# For example, if you installed the GCP Persistent Disk CSI driver
# in namespace ${DRIVER_NS} with a statefulset (or deployment)
# name ${DRIVER_NAME}, you can check the snapshotter version as below:
$ kubectl get statefulset ${DRIVER_NAME} --namespace=${DRIVER_NS} \
    -o jsonpath='{range .spec.template.spec.containers[*]}{.image}{"\n"}{end}'
gcr.io/gke-release/csi-provisioner:v1.0.1-gke.0
gcr.io/gke-release/csi-attacher:v1.0.1-gke.0
quay.io/k8scsi/csi-snapshotter:v1.0.1
gcr.io/dyzz-csi-staging/csi/gce-pd-driver:latest

# Snapshotter version is old (v1.0.1), update it to the required version.
$ kubectl set image statefulset/${DRIVER_NAME} csi-snapshotter=quay.io/k8scsi/csi-snapshotter:v1.2.2 \
  --namespace=${DRIVER_NS}
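
Afterwards, re-check the image to confirm the rollout picked up the new version (a sketch; the jsonpath filter uses the csi-snapshotter container name from the command above):

$ kubectl get statefulset ${DRIVER_NAME} --namespace=${DRIVER_NS} \
    -o jsonpath='{.spec.template.spec.containers[?(@.name=="csi-snapshotter")].image}'
quay.io/k8scsi/csi-snapshotter:v1.2.2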

Pure Storage Support

For integrating K10 with Pure Storage, please follow Pure Storage's instructions on deploying the Pure Service Orchestrator (PSO) and the VolumeSnapshotClass.

Once the above two steps are completed, follow the instructions for K10 CSI integration. In particular, the Pure VolumeSnapshotClass needs to be edited using the following commands.

$ kubectl annotate volumesnapshotclass pure-snapshotclass \
    k10.kasten.io/is-snapshot-class=true
$ kubectl patch volumesnapshotclass pure-snapshotclass \
    -p '{"deletionPolicy":"Retain"}' --type=merge

NetApp Trident Support

For integrating K10 with NetApp Trident, please follow NetApp's instructions on deploying Trident as a CSI provider and then follow the instructions above.
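
For example, if the Trident VolumeSnapshotClass were named csi-snapclass (a hypothetical name; substitute the one from your Trident deployment), the same annotation and patch would apply:

$ kubectl annotate volumesnapshotclass csi-snapclass \
    k10.kasten.io/is-snapshot-class=true
$ kubectl patch volumesnapshotclass csi-snapclass \
    -p '{"deletionPolicy":"Retain"}' --type=merge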

Ceph RBD (Non-CSI) Support

Note

Non-CSI support for Ceph will be deprecated in an upcoming release in favor of direct CSI integration.

Apart from integration with Ceph's CSI driver, K10 also has native support for Ceph to protect persistent volumes provisioned using the Ceph RBD provisioner.

To enable this, K10 must be installed or upgraded with additional tools enabled. The currently running K10 version must be specified in the upgrade command so that K10 is not unintentionally upgraded to a newer release. You can obtain the current version from the footer of the dashboard or by running helm list.
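
For example, assuming K10 is installed in the kasten-io namespace:

# Find the currently deployed chart version to pass as --version below
$ helm list --namespace=kasten-io

With the version in hand, run the upgrade with the additional tools enabled: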

$ helm upgrade k10 kasten/k10 --namespace=kasten-io \
    --set toolsImage.enabled=true \
    --set toolsImage.image="kasten-k10.jfrog.io/kasten-images/github-kastenhq-ceph-tools:0.0.2" \
    --version=<current version> \
    --reuse-values