As of March 5, 2024, "Azure Active Directory" has been renamed to "Microsoft Entra ID." Throughout this documentation, references to "Azure Active Directory" are being updated to use both the new and old names. Both names will be used for a while, after which the documentation will use only the new name.
Storage Integration
Veeam Kasten supports direct integration with public cloud storage vendors as well as CSI integration. While most integrations are transparent, the below sections document the configuration needed for the exceptions.
Direct Provider Integration
Veeam Kasten supports seamless and direct storage integration with a number of storage providers. The following storage providers are either automatically discovered and configured within Veeam Kasten or can be configured for direct integration:
- Amazon Elastic Block Store (EBS)
- Azure Managed Disks
- Google Persistent Disk
- Ceph
- Cinder-based providers on OpenStack
- vSphere Cloud Native Storage (CNS)
- Portworx
- Veeam Backup (snapshot data export only)
Container Storage Interface (CSI)
Apart from direct storage provider integration, Veeam Kasten also supports invoking volume snapshot operations via the Container Storage Interface (CSI). To ensure that this works correctly, please verify that the following requirements are met.
CSI Requirements
- Kubernetes v1.14.0 or higher
- The VolumeSnapshotDataSource feature has been enabled in the Kubernetes cluster
- A CSI driver that has Volume Snapshot support. Please look at the list of CSI drivers to confirm snapshot support.
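Before running the pre-flight checks below, a quick way to confirm snapshot support is to look for the snapshot.storage.k8s.io API group on the cluster. The helper below is an illustrative sketch (not a Veeam Kasten tool) that inspects `kubectl api-resources` output:

```shell
# Illustrative helper: reads `kubectl api-resources` output on stdin and
# reports whether the VolumeSnapshot API group is present.
has_snapshot_api() {
  if grep -q 'snapshot\.storage\.k8s\.io'; then
    echo "snapshot API present"
  else
    echo "snapshot API missing"
  fi
}

# Usage against a live cluster:
#   kubectl api-resources | has_snapshot_api
```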
Pre-Flight Checks
Assuming that the default kubectl context is pointed to a cluster with CSI enabled, CSI pre-flight checks can be run by deploying the primer tool with a specified StorageClass. This tool runs in a pod in the cluster and performs the following operations:
- Creates a sample application with a persistent volume and writes some data to it
- Takes a snapshot of the persistent volume
- Creates a new volume from the persistent volume snapshot
- Validates the data in the new persistent volume
First, run the following command to derive the list of provisioners along with their StorageClasses and VolumeSnapshotClasses.
curl -s https://docs.kasten.io/downloads/7.5.10/tools/k10_primer.sh | bash
Then, run the following command with a valid StorageClass to deploy the pre-check tool:
curl -s https://docs.kasten.io/downloads/7.5.10/tools/k10_primer.sh | bash /dev/stdin csi -s ${STORAGE_CLASS}
CSI Snapshot Configuration
For each CSI driver, ensure that a VolumeSnapshotClass has been added with the Veeam Kasten annotation (k10.kasten.io/is-snapshot-class: "true").
Note that CSI snapshots are not durable. In particular, CSI snapshots have a namespaced VolumeSnapshot object and a non-namespaced VolumeSnapshotContent object. With the default (and recommended) deletionPolicy, if there is a deletion of a volume or the namespace containing the volume, the cleanup of the namespaced VolumeSnapshot object will lead to the cascading delete of the VolumeSnapshotContent object and therefore the underlying storage snapshot.
Setting deletionPolicy to Retain isn't sufficient either, as some storage systems will force snapshot deletion if the associated volume is deleted (the snapshot lifecycle is not independent of the volume). Similarly, it might be possible to force-delete snapshots through the storage array's native management interface. Enabling backups together with volume snapshots is therefore required for a durable backup.
Veeam Kasten creates a clone of the original VolumeSnapshotClass with the DeletionPolicy set to 'Retain'. When restoring a CSI VolumeSnapshot, an independent replica is created using this cloned class to avoid any accidental deletions of the underlying VolumeSnapshotContent.
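The effect of this cloning can be illustrated with a small sketch. This is not Veeam Kasten's actual implementation, merely a demonstration of the idea of forcing a manifest's deletionPolicy to Retain:

```shell
# Illustrative sketch only: rewrite a VolumeSnapshotClass manifest so its
# deletionPolicy becomes Retain (reads YAML on stdin, writes YAML to stdout).
force_retain() {
  sed 's/^deletionPolicy:.*/deletionPolicy: Retain/'
}

# Conceptual usage (the re-apply step and clone naming are hypothetical;
# Veeam Kasten performs the equivalent internally under its own clone name):
#   kubectl get volumesnapshotclass ${VSC_NAME} -o yaml | force_retain
```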
VolumeSnapshotClass Configuration
Alpha CSI Snapshot API:

apiVersion: snapshot.storage.k8s.io/v1alpha1
snapshotter: hostpath.csi.k8s.io
kind: VolumeSnapshotClass
metadata:
  annotations:
    k10.kasten.io/is-snapshot-class: "true"
  name: csi-hostpath-snapclass

Beta CSI Snapshot API:

apiVersion: snapshot.storage.k8s.io/v1beta1
driver: hostpath.csi.k8s.io
kind: VolumeSnapshotClass
metadata:
  annotations:
    k10.kasten.io/is-snapshot-class: "true"
  name: csi-hostpath-snapclass
Given the configuration requirements, the above code illustrates a correctly configured VolumeSnapshotClass for Veeam Kasten. If the VolumeSnapshotClass does not match the above template, please follow the instructions below to modify it. If the existing VolumeSnapshotClass cannot be modified, a new one can be created with the required annotation.
Whenever Veeam Kasten detects volumes that were provisioned via a CSI driver, it will look for a VolumeSnapshotClass with the Veeam Kasten annotation for the identified CSI driver and use it to create snapshots. You can easily annotate an existing VolumeSnapshotClass using:

$ kubectl annotate volumesnapshotclass ${VSC_NAME} \
    k10.kasten.io/is-snapshot-class=true

Verify that only one VolumeSnapshotClass per storage provisioner has the Veeam Kasten annotation. Currently, if no VolumeSnapshotClass or more than one has the Veeam Kasten annotation, snapshot operations will fail.
# List the VolumeSnapshotClasses with Veeam Kasten annotation
$ kubectl get volumesnapshotclass -o json | \
jq '.items[] | select (.metadata.annotations["k10.kasten.io/is-snapshot-class"]=="true") | .metadata.name'
k10-snapshot-class
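The uniqueness requirement can also be checked mechanically. The sketch below is an illustration, not a Veeam Kasten tool: it reads rows of "NAME DRIVER ANNOTATED" on stdin (for example, produced by a kubectl custom-columns query) and flags drivers with more than one annotated class:

```shell
# Illustrative check: expects "NAME DRIVER ANNOTATED" rows on stdin and
# reports drivers that have more than one annotated VolumeSnapshotClass.
# Note: a driver with no annotated class simply does not appear here,
# even though that case also causes snapshot operations to fail.
validate_snapshot_classes() {
  awk '$3 == "true" { seen[$2]++ }
       END {
         bad = 0
         for (d in seen)
           if (seen[d] > 1) { print "driver " d " has " seen[d] " annotated classes"; bad = 1 }
         if (!bad) print "ok"
         exit bad
       }'
}
```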
StorageClass Configuration
As an alternative to the above method, a StorageClass can be annotated with k10.kasten.io/volume-snapshot-class: "VSC_NAME". All volumes created with this StorageClass will be snapshotted using the specified VolumeSnapshotClass:
$ kubectl annotate storageclass ${SC_NAME} \
k10.kasten.io/volume-snapshot-class=${VSC_NAME}
Migration Requirements
If application migration across clusters is needed, ensure that the VolumeSnapshotClass names match between both clusters. As the VolumeSnapshotClass is also used for restoring volumes, an identical name is required.
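The name sets can be compared before migrating. The helper below is an illustrative sketch; the context names and file names in the usage comment are placeholders:

```shell
# Illustrative helper: compare two files of VolumeSnapshotClass names and
# report whether they match (order-insensitive).
compare_vsc_names() {
  a=$(sort "$1")
  b=$(sort "$2")
  if [ "$a" = "$b" ]; then
    echo "VolumeSnapshotClass names match"
  else
    echo "mismatch: align VolumeSnapshotClass names before migrating"
  fi
}

# Usage (context names are placeholders):
#   kubectl --context=source get volumesnapshotclass -o name > source.txt
#   kubectl --context=target get volumesnapshotclass -o name > target.txt
#   compare_vsc_names source.txt target.txt
```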
CSI Snapshotter Minimum Requirements
Finally, ensure that the csi-snapshotter container for all CSI drivers
you might have installed has a minimum version of v1.2.2. If your CSI
driver ships with an older version that has known bugs, it might be
possible to transparently upgrade in place using the following code.
# For example, if you installed the GCP Persistent Disk CSI driver
# in namespace ${DRIVER_NS} with a statefulset (or deployment)
# name ${DRIVER_NAME}, you can check the snapshotter version as below:
$ kubectl get statefulset ${DRIVER_NAME} --namespace=${DRIVER_NS} \
-o jsonpath='{range .spec.template.spec.containers[*]}{.image}{"\n"}{end}'
gcr.io/gke-release/csi-provisioner:v1.0.1-gke.0
gcr.io/gke-release/csi-attacher:v1.0.1-gke.0
quay.io/k8scsi/csi-snapshotter:v1.0.1
gcr.io/dyzz-csi-staging/csi/gce-pd-driver:latest
# Snapshotter version is old (v1.0.1), update it to the required version.
$ kubectl set image statefulset/${DRIVER_NAME} csi-snapshotter=quay.io/k8scsi/csi-snapshotter:v1.2.2 \
--namespace=${DRIVER_NS}
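The version comparison can be scripted. The helper below is an illustrative sketch that checks an image tag against the v1.2.2 minimum; it assumes tags of the form vX.Y.Z, optionally with a suffix such as -gke.0:

```shell
# Illustrative helper: return success (0) when the csi-snapshotter image tag
# is older than the required v1.2.2 minimum.
snapshotter_needs_upgrade() {
  tag="${1##*:}"                    # keep only the tag after the last ':'
  tag="${tag#v}"                    # drop the leading 'v'
  tag="${tag%%-*}"                  # drop suffixes such as '-gke.0'
  [ "$tag" = "1.2.2" ] && return 1  # exactly at the minimum: no upgrade
  # sort -V orders versions; if the oldest of {tag, 1.2.2} is the tag,
  # the tag is below the minimum and an upgrade is needed.
  [ "$(printf '%s\n' "$tag" 1.2.2 | sort -V | head -n 1)" = "$tag" ]
}

snapshotter_needs_upgrade quay.io/k8scsi/csi-snapshotter:v1.0.1 && echo "upgrade needed"
```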
AWS Storage
Veeam Kasten supports Amazon Web Services (AWS) storage integration, including Amazon Elastic Block Store (EBS) and Amazon Elastic File System (EFS).
Amazon Elastic Block Store (EBS) Integration
Veeam Kasten currently supports backup and restore of EBS CSI volumes as well as native (in-tree) volumes. In order to work with the in-tree provisioner, or to migrate snapshots within AWS, Veeam Kasten requires an Infrastructure Profile. Please refer to AWS Infrastructure Profile on how to create one. Block Mode Exports of EBS volumes use the AWS EBS Direct API.
Amazon Elastic File System (EFS) Integration
Veeam Kasten currently supports backup and restore of statically provisioned EFS CSI volumes. Since statically provisioned volumes use the entire file system, Veeam Kasten can use AWS APIs to take backups.
While the EFS CSI driver has begun supporting dynamic provisioning, it
does not create new EFS volumes. Instead, it creates and uses access
points within existing EFS volumes. The current AWS APIs do not support
backups of individual access points.
However, Veeam Kasten can take backups of these dynamically provisioned EFS volumes using the [Shareable Volume Backup and Restore](./shareable-volume.md) mechanism.
For all other operations, EFS requires an Infrastructure Profile. Please refer to AWS Infrastructure Profile on how to create one.
AWS Infrastructure Profile
To enable Veeam Kasten to take snapshots and restore volumes from AWS,
an Infrastructure Profile must be created from the Infrastructure page
of the Profiles menu in the navigation sidebar.


Using the AWS IAM Service Account credentials that Veeam Kasten was installed with is also possible via the Authenticate with AWS IAM Role checkbox. An additional AWS IAM Role can be provided if the user requires Veeam Kasten to assume a different role. The provided credentials are verified for both EBS and EFS.
Currently, Veeam Kasten also supports the legacy mode of providing AWS credentials via Helm. In this case, an AWS Infrastructure Profile will be created automatically with the values provided through Helm, and can be seen on the Dashboard. This profile can later be replaced or updated manually if necessary, such as when the credentials change.
In future releases, providing AWS credentials via Helm will be deprecated.
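For reference, the legacy Helm-based credentials take roughly the following shape. The key names below are an assumption based on the Veeam Kasten Helm chart; verify them against the chart's current values file before use:

```yaml
# values.yaml fragment (key names assumed; check the chart before use)
secrets:
  awsAccessKeyId: "<AWS_ACCESS_KEY_ID>"
  awsSecretAccessKey: "<AWS_SECRET_ACCESS_KEY>"
```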
Azure Managed Disks
Veeam Kasten supports backups and restores for both CSI volumes and
in-tree volumes within Azure Managed Disks. To work with the Azure
in-tree provisioner, Veeam Kasten requires the creation of an
Infrastructure Profile from the Infrastructure page of the Profiles
menu in the navigation sidebar.
Veeam Kasten can perform block mode exports with changed block tracking (CBT)
for volumes provisioned using the disk.csi.azure.com CSI driver. This
capability is automatically utilized when the following conditions are met:
- Veeam Kasten includes a valid Azure Infrastructure Profile
- Either the Azure Disk storage class or individual PVC enables Block Mode Exports
- The Azure Disk volume snapshot class enables incremental snapshots, as shown in the example below:
$ kubectl get volumesnapshotclass csi-azuredisk-vsc -o yaml
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Delete
driver: disk.csi.azure.com
kind: VolumeSnapshotClass
metadata:
  annotations:
    k10.kasten.io/is-snapshot-class: "true"
    snapshot.storage.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2024-10-28T14:48:50Z"
  generation: 1
  name: csi-azuredisk-vsc
  resourceVersion: "2502"
  uid: 9ebec324-0f09-42fa-aace-39440b3184b6
parameters:
  incremental: "true" # available values: "true", "false" ("true" by default for Azure Public Cloud, and "false" by default for Azure Stack Cloud)
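Whether a given class meets the last condition can be checked mechanically. The helper below is an illustrative sketch that scans a VolumeSnapshotClass manifest for the incremental parameter:

```shell
# Illustrative check: reads VolumeSnapshotClass YAML on stdin and reports
# whether incremental snapshots are explicitly enabled.
check_incremental() {
  if grep -Eq '^[[:space:]]*incremental:[[:space:]]*"true"'; then
    echo "incremental snapshots enabled"
  else
    echo "incremental snapshots not enabled"
  fi
}

# Usage against a live cluster:
#   kubectl get volumesnapshotclass csi-azuredisk-vsc -o yaml | check_incremental
```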
Service Principal
Veeam Kasten supports authentication with Microsoft Entra ID (formerly
Azure Active Directory) with Azure Client Secret credentials, as well as
Azure Managed Identity.
To authenticate with Azure Client Secret credentials, Veeam Kasten
requires Tenant ID, Client ID, and Client Secret.


Managed Identities
If Use Azure TenantID, Secret and ClientID to authenticate is chosen, users opt out of using Managed Identity and must provide their own Tenant ID, Client ID, and Client Secret.
To use Managed Identity with a custom Client ID, users can choose Custom Client ID and provide their own; otherwise, the default Managed Identity will be used.

To authenticate with Azure Managed Identity, clusters must have Azure Managed Identity enabled.
Federated Identity
To authenticate with Azure Federated Identity (also known as workload identity), clusters must have Azure Federated Credentials set up. This can only be done via Helm. More information can be found here.

Note that Federated Identity is currently only supported on OpenShift clusters with version 4.14 and later. If you are using Federated Identity, the Infrastructure Profile cannot be edited or deleted from the dashboard once created; use helm upgrade to edit or delete it.
Other Configuration
In addition to authentication credentials, Veeam Kasten also requires
Subscription ID and Resource Group. For information on how to
retrieve the required data, please refer to Installing Veeam Kasten on
Azure.
Additionally, information for Azure Stack such as
Storage Environment Name, Resource Manager Endpoint, AD Endpoint,
and AD Resource can also be specified. These fields are not mandatory,
and default values will be used if they are not provided by the user.
| Field | Default Value |
|---|---|
| Storage Environment Name | AzurePublicCloud |
| Resource Manager Endpoint | https://management.azure.com/ |
| AD Endpoint | https://login.microsoftonline.com/ |
| AD Resource | https://management.azure.com/ |
Veeam Kasten also supports the legacy method of providing Azure credentials via Helm. In this case, an Azure Infrastructure Profile will be created automatically with the values provided through Helm, and can be seen on the Dashboard. This profile can later be replaced or updated manually if necessary, such as when the credentials change.
In future releases, providing Azure credentials via Helm will be deprecated.
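For reference, the legacy Helm-based Azure credentials take roughly the following shape. The key names below are an assumption based on the Veeam Kasten Helm chart; verify them against the chart's current values file before use:

```yaml
# values.yaml fragment (key names assumed; check the chart before use)
secrets:
  azureTenantId: "<TENANT_ID>"
  azureClientId: "<CLIENT_ID>"
  azureClientSecret: "<CLIENT_SECRET>"
```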
Pure Storage
For integrating Veeam Kasten with Pure Storage, please follow Pure Storage's instructions on deploying the Pure Storage Orchestrator and the VolumeSnapshotClass.
Once the above two steps are completed, follow the instructions for Veeam Kasten CSI integration. In particular, the Pure VolumeSnapshotClass needs to be edited using the following command.
$ kubectl annotate volumesnapshotclass pure-snapshotclass \
k10.kasten.io/is-snapshot-class=true
NetApp Trident
For integrating Veeam Kasten with NetApp Trident, please follow NetApp's instructions on deploying Trident as a CSI provider and then follow the CSI integration instructions above.
Google Persistent Disk
Veeam Kasten supports Google Persistent Disk (GPD) storage integration
with both CSI and native (in-tree) drivers. In order to use GPD native
driver, an Infrastructure Profile must be created from the
Infrastructure page of the Profiles menu in the navigation sidebar.
The GCP Project ID and GCP Service Key fields are required. The GCP Service Key field takes the complete contents of the service account JSON file that is generated when creating a new service account.


Currently, Veeam Kasten also supports the legacy mode of providing Google credentials via Helm. In this case, a Google Infrastructure Profile will be created automatically with the values provided through Helm, and can be seen on the Dashboard. This profile can later be replaced or updated manually if necessary, such as when the credentials change.
In future releases, providing Google credentials via Helm will be deprecated.
Ceph
Veeam Kasten supports Ceph RBD and Ceph FS snapshots and backups via their CSI drivers.
CSI Integration
If you are using Rook to install Ceph, Veeam Kasten only supports Rook v1.3.0 and above. Previous versions had bugs that prevented restore from snapshots.
Veeam Kasten supports integration with Ceph (RBD and FS) via its CSI interface by following the instructions for CSI integration. In particular, the Ceph VolumeSnapshotClass needs to be edited using the following command.
$ kubectl annotate volumesnapshotclass csi-snapclass \
k10.kasten.io/is-snapshot-class=true
Ceph CSI RBD volume snapshots can be exported in block mode with the appropriate annotation on their StorageClass. The Ceph Rados Block Device API can enable direct access to data blocks through the network and provide information on the allocated blocks in a snapshot, which can reduce the size and duration of a backup. However, note that Changed Block Tracking is not supported for Ceph CSI RBD snapshots. The output of the Veeam Kasten Primer Block Mount Check command indicates whether the API will be used:
...
Block mount checker:
StorageClass ocs-storagecluster-ceph-rbd is annotated with 'k10.kasten.io/sc-supports-block-mode-exports=true'
StorageClass ocs-storagecluster-ceph-rbd is supported by K10 in Block volume mode via vendor APIs (Ceph Rados Block Device)
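To opt a StorageClass into block mode exports, the annotation shown in the primer output above is applied to the StorageClass; the StorageClass name below is an example:

```shell
# Annotate the Ceph RBD StorageClass to allow block mode exports
kubectl annotate storageclass ocs-storagecluster-ceph-rbd \
    k10.kasten.io/sc-supports-block-mode-exports=true
```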
