Location Configuration
K10 can usually invoke protection operations such as snapshots within a cluster without requiring additional credentials. While this might be sufficient when K10 is running in some (but not all) of the major public clouds and actions are limited to a single cluster, it is not sufficient for essential operations such as performing real backups, enabling cross-cluster and cross-cloud application migration, and enabling DR of the K10 system itself.
To enable these actions that span the lifetime of any one cluster, K10 needs to be configured with access to external object storage or external NFS file storage. This is accomplished via the creation of Location Profiles. Location Profile creation can be accessed from the Location page of the Profiles menu in the navigation sidebar or via the CRD-based Profiles API.
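For reference, the following is a minimal sketch of what an S3 location profile might look like when created through the CRD-based Profiles API. It assumes a pre-existing Secret (here named k10-s3-secret) holding the AWS access key ID and secret access key; the profile name, Secret name, bucket, and region are placeholders, and the exact field names should be verified against the Profiles API reference for your K10 version.

apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: s3-location-profile       # placeholder name
  namespace: kasten-io            # K10 install namespace
spec:
  type: Location
  locationSpec:
    type: ObjectStore
    credential:
      secretType: AwsAccessKey    # credentials are read from the referenced Secret
      secret:
        apiVersion: v1
        kind: Secret
        name: k10-s3-secret       # pre-existing Secret with the access key and secret key
        namespace: kasten-io
    objectStore:
      objectStoreType: S3
      name: my-k10-bucket         # bucket name (placeholder)
      region: us-east-1           # the bucket must be in this region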
Location Profiles
Location profiles are used to create backups from snapshots, move
applications and their data across clusters and potentially across
different clouds, and to subsequently import these backups or exports
into another cluster. To create a location profile, click New Profile on the profiles page.
Object Storage Location
Support is available for the object storage providers described below.
K10 creates Kopia repositories in object store locations. K10 uses Kopia as a data mover, which provides deduplication, encryption, and compression of data at rest. K10 performs periodic maintenance on these repositories to recover released storage.
Amazon S3 or S3 Compatible Storage
Enter the access key and secret, select the region, and enter the bucket name. The bucket must be in the region specified. If the bucket has object locking enabled, set the Enable Immutable Backups toggle (see Immutable Backups for details).
If the bucket is using S3 Intelligent-Tiering, only the Standard-IA, One Zone-IA, and Glacier Instant Retrieval storage classes are supported by K10.
An IAM role may be specified for an Amazon S3 location profile by selecting the Execute Operations Using an AWS IAM Role button.
If an S3-compatible object storage system is used that is not hosted by one of the supported cloud providers, an S3 endpoint URL will need to be specified and optionally, SSL verification might need to be disabled. Disabling SSL verification is only recommended for test setups.
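As a sketch of how an S3-compatible endpoint might be expressed through the CRD-based Profiles API, the profile could carry the endpoint URL in its objectStore section. The endpoint value below is a placeholder, and the skipSSLVerify field name is an assumption that should be checked against the Profiles API reference; it should remain false outside of test setups.

apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: s3-compatible-profile     # placeholder name
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    type: ObjectStore
    credential:
      secretType: AwsAccessKey
      secret:
        apiVersion: v1
        kind: Secret
        name: k10-s3-secret       # pre-existing Secret with the access key and secret key
        namespace: kasten-io
    objectStore:
      objectStoreType: S3
      name: my-k10-bucket                   # bucket name (placeholder)
      endpoint: https://objects.example.com # S3 endpoint URL of the compatible store (placeholder)
      skipSSLVerify: false                  # assumed field; set to true only for test setups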
When Validate and Save is selected, the config profile will be created and a profile similar to the following will appear:
The minimum supported version for NetApp ONTAP S3 is 9.12.1.
Azure Storage
To use an Azure storage location, you are required to pick an Azure Storage Account, a Cloud Environment, and a Container. The Container must be created beforehand.
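As an illustration only, the storage account credentials referenced by an Azure location profile could be provided as a Kubernetes Secret similar to the sketch below. The Secret name is a placeholder, and the key names (azure_storage_account_id, azure_storage_key, azure_storage_environment) are assumptions that should be verified against the Profiles API documentation for your K10 version.

apiVersion: v1
kind: Secret
metadata:
  name: k10-azure-secret                         # placeholder name
  namespace: kasten-io
type: Opaque
stringData:
  azure_storage_account_id: mystorageaccount     # assumed key; the Azure Storage Account name
  azure_storage_key: <storage-account-key>       # assumed key; the account access key
  azure_storage_environment: AzureCloud          # assumed key; the Cloud Environment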
Google Cloud Storage
In addition to authenticating with Google Service Account credentials, K10 also supports authentication with Google Workload Identity Federation with Kubernetes as the Identity Provider.
In order to use Google Workload Identity Federation, some additional Helm settings are necessary. Please refer to Installing K10 with Google Workload Identity Federation for details on how to install K10 with these settings.
Enter the project identifier and the appropriate credentials, i.e., the service key for the Google Service Account or the credential configuration file for Google Workload Identity Federation. Credentials should be in JSON or PKCS12 format. Then, select the region and enter a bucket name. The bucket must be in the specified location.
Note
When using Google Workload Identity Federation with Kubernetes as the Identity Provider, ensure that the credential configuration file is configured with the format type (--credential-source-type) set to Text, and specify the OIDC ID token path (--credential-source-file) as /var/run/secrets/kasten.io/serviceaccount/GWIF/token.
NFS File Storage Location
Requirements:
An NFS server reachable from the nodes where K10 is installed
An exported NFS share, mountable on all the nodes where K10 is installed
A Persistent Volume defining the exported NFS share similar to the example below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /
    server: 172.17.0.2
A corresponding Persistent Volume Claim with the same storage class name in the K10 namespace (default kasten-io):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: kasten-io
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
Once the above requirements are met, an NFS FileStore location profile can be created on the profiles page using the PVC created above.
When Validate and Save is selected, the config profile will be created and a profile similar to the following will appear:
By default, K10 will use the root user to access the NFS FileStore location profile. To use a different user, the Supplemental Group and Path fields can be set. The Path field must refer to a directory located within the PVC specified in the Claim Name field. The group specified in the Supplemental Group field must have read, write, and execute access to this directory.
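For reference, a sketch of how such an NFS location profile might look when defined through the CRD-based Profiles API is shown below. The claim name matches the PVC from the example above, the path is a placeholder, and the field names should be verified against the Profiles API reference for your K10 version.

apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: nfs-location-profile      # placeholder name
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    type: FileStore
    fileStore:
      claimName: test-pvc         # PVC bound to the exported NFS share
      path: k10-backups           # assumed optional subdirectory within the PVC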
Veeam Repository Location
A Veeam Repository can be used for exported vSphere CSI provisioned volume snapshot data when using a supported vSphere cluster.
To create such a location profile, click New Profile on the profiles page and choose the Veeam provider type in the dialog, which results in a form similar to the following:
Provide the DNS name or the IP address of the Veeam backup server in the Veeam Backup Server field.
The Veeam Backup Server API Port field is pre-configured with the installation default value and may be changed if necessary.
Specify the name of a backup repository on this server in the Backup Repository field.
If you have an immutable Veeam Repository, follow the instructions to set it up in K10.
Warning
Please be aware that using more than one unique VBR host across location profiles is not supported and may cause synchronization issues between K10 and VBR. You can create as many location profiles as you need, but they should all use the same server. If you try to save a host that is not used in other profiles, a warning will be shown; it is possible to proceed, but this option should only be used for temporary product reconfiguration.
Provide access credentials in the Username and Password fields.
Note
Make sure that Access Permissions are granted for this account or its group on the Veeam backup repositories where you want to keep backups exported by K10 policies. Check the Veeam User Guide for details.
When you click Save Profile, the dialog will validate the input data. Communication with the server uses SSL and requires that the server's certificate be trusted by K10. If such trust is not established but you trust your environment, you may check the Skip certificate chain and hostname verification option to disable certificate validation.
Location Settings for Migration
If the location profile is used for exporting an application for cross-cluster migration, it will be used to store application restore point metadata and, when moving across infrastructure providers, bulk data as well. Similarly, location profiles are also used for importing applications into a cluster that is different from the source cluster where the application was captured.
Note
In the case of an NFS File Storage Location, the exported NFS share must be reachable from the destination cluster and mounted on all the nodes where K10 is installed.
Immutable Backups
The frequency of ransomware attacks on enterprise customers is increasing. Backups are essential for recovering from these attacks, acting as a first line of defense for recovering critical data. Attackers are now targeting backups as well, to make it more difficult, if not impossible, for their victims to recover.
K10 can leverage object-locking and immutability features available in many object store providers to ensure its exported backups are protected from tampering. When exporting to a locked bucket, the restore point data cannot be deleted or modified within a set period, even with administrator privileges. If an attacker obtains privileged object store credentials and attempts to disrupt the backups stored there, K10 can restore the protected application by reading back the original, immutable and unadulterated restore point.
Immutable backups are supported for AWS S3 and other S3-compatible object stores, as well as for Azure.
More information is available in the full Immutable Backups Workflow documentation.
Warning
The generic storage and shareable volume backup and restore workflows are not compatible with the protections afforded by immutable backups. A location profile enabled for immutable backups can still be used for these workflows, but the protection period is ignored and the profile is treated as a non-immutability-enabled location. Please note that using an object-locking bucket for such use cases can amplify storage usage without any additional benefit. Please contact support for any inquiries.
S3 Locked Bucket Setup
To prepare K10 to export immutable backups, a bucket must be prepared in advance.
The bucket must be created on AWS S3 or an S3 compatible object store.
The bucket must be created with object locking enabled. Note: On some S3-compatible implementations, the object locking property of a bucket can only be enabled/configured at bucket creation time.
A sample MinIO Client (mc) script that will set up an immutable-backup-eligible locked bucket in AWS S3:
# Set up the following variables:
BUCKET_NAME=<choose a unique bucket name>
REGION=<pick the region for the bucket>
AWS_ACCESS_KEY_ID=<access key ID>
AWS_SECRET_ACCESS_KEY=<secret access key>
# Alias the s3 account credentials
mc alias set s3 https://s3.amazonaws.com ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
# Make the bucket with locking enabled
mc mb --region=${REGION} s3/${BUCKET_NAME} --with-lock
For more information on setting up object locking, see Using S3 Object Lock.
Profile Creation
Once a bucket has been prepared with each of these requirements met, a profile can be created from the K10 dashboard to point to it.
Follow the steps for setting up a profile as normal. Enter a profile name, object store credentials, region, and bucket name.
It is recommended that the credentials provided to access the locked bucket are granted minimal permissions only:
List objects
List object versions
Determine if bucket exists
Get object lock configuration
Get bucket versioning state
Get/Put object retention
Get/Put/Remove object
Get object metadata
See Using K10 with AWS S3 for a list of required permissions.
After selecting the checkbox labeled "Enable Immutable Backups", a new button labeled "Validate Bucket <bucket-name>" will appear. Click the Validate Bucket button to initiate a series of checks against the bucket, verifying that the bucket can be reached and meets all of the requirements denoted above. All conditions must be met for the check to succeed and for the profile form to be submitted.
If the provided bucket meets all of the conditions, a Protection Period slider will appear. The protection period is a user-selectable time period that K10 will use when maintaining an ongoing immutable retention period for each exported restore point. A longer protection period means a longer window in which to detect, and safely recover from, an attack; backup data remains immutable and unadulterated for longer. The trade-off is increased storage costs, as potentially stale data cannot be removed until the object's immutable retention expires.
K10 limits the maximum protection period that can be selected to 90 days. A safety buffer is added to the desired protection period. This is to ensure K10 can always find and maintain ongoing protection of any new objects written to the bucket before their retention lapses. The minimum protection period is 1 day.
Push the "Save Profile" button. The profile will be submitted to K10 and will appear in the list of Location Profiles. The card will reflect the object immutability status of the referenced bucket, as well as the selected protection period. This profile can now be selected as an export destination, and any restore points exported there will be immutable and protected for the selected protection period.
Protecting applications with Immutable Backups
Selecting the locked bucket profile as the Export Location Profile in the Backups procedure will render all application data immutable for the duration of the protection period. Additionally, to ensure K10 can restore that application data, K10 should also be protected with an immutable locked-bucket Disaster Recovery (DR) profile.
In a situation where the cluster and/or object store has been corrupted, attacked, or otherwise tampered with, K10 might be just as susceptible to being compromised as any other application. Protecting both (apps and K10) with immutable locked-bucket profiles will ensure the data is intact, and that K10 knows how to restore it. Therefore, if one or more locked bucket location profiles are being used to back up and protect vital applications, it is highly recommended that a locked bucket profile should also be used with K10 DR.
When setting up a location profile for K10 DR, one should choose a protection period that is AT LEAST as long as the longest protection period in use for application backups. For example, if one application is being backed up using a profile with a 1-week protection period and another using a 1-year protection period, the protection period for the K10 DR backup profile should be at least 1 year to ensure the latter application can always be recovered by K10 in the required 1-year time window.
See Restore K10 Backup for instructions on how to restore K10 to a point-in-time.
Azure Immutability Setup
To set up immutability in Azure, follow a process similar to S3, but take into account the following requirements:
Ensure that the container exists and is reachable with the credentials provided in the profile form.
Enable versioning on the related storage account.
Ensure support for version-level immutability on the container or related storage account.
Since K10 ignores retention policies, it is not necessary to set one on the container. As an alternative, choose the desired protection period, and the files will initially be protected for that period plus a safety buffer to ensure protection compliance.