Advanced Install Options
FREE Veeam Kasten Edition and Licensing
By default, Veeam Kasten comes with an embedded free edition license. The free edition license allows you to use the software on a cluster with at most 50 worker nodes for the first 30 days, and then 5 nodes after the 30-day period. To continue using the free license, you may need to update regularly to stay within the 6-month support window. You can remove the node restriction of the free license by upgrading to Enterprise Edition and obtaining the appropriate license from the Kasten team.
Using a Custom License During Install
To install a license that removes the node restriction, please add the following to any of the helm install commands:
--set license=<license-text>
or, to install a license from a file:
--set-file license=<path-to-license-file>
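Equivalently, the license can be kept in a values file; a minimal sketch (the placeholder stands for the actual license text, as above):

```yaml
# values.yaml (sketch): the top-level license key,
# equivalent to --set license=<license-text>
license: "<license-text>"
```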
Changing Licenses
To add a new license to Veeam Kasten, a secret needs to be created in the Veeam Kasten namespace (default is kasten-io) with the requirement that the license text be set in a field named license. To do this from the command line, run:
$ kubectl create secret generic <license-secret-name> \
--namespace kasten-io \
--from-literal=license="<license-text>"
or, to add a license from a file:
$ kubectl create secret generic <license-secret-name> \
--namespace kasten-io \
--from-file=license="<path-to-license-file>"
Note
The resulting license secret will look like:
apiVersion: v1
data:
  license: Y3Vz...
kind: Secret
metadata:
  creationTimestamp: "2020-04-14T23:50:05Z"
  labels:
    app: k10
    app.kubernetes.io/instance: k10
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: k10
    helm.sh/chart: k10-7.5.1
    heritage: Helm
    release: k10
  name: k10-custom-license
  namespace: kasten-io
type: Opaque
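As with any Kubernetes Secret, the value in the data.license field is base64-encoded. A quick local sketch of the encoding that kubectl create secret performs (the sample string is hypothetical, not a real license):

```shell
# Encode a hypothetical license string the way kubectl stores it in the Secret
printf 'sample-license-text' | base64
# → c2FtcGxlLWxpY2Vuc2UtdGV4dA==

# Decode it back to verify the round trip
printf 'c2FtcGxlLWxpY2Vuc2UtdGV4dA==' | base64 -d
# → sample-license-text
```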
Similarly, an old license can be removed by deleting the secret that contains it.
$ kubectl delete secret <license-secret-name> \
--namespace kasten-io
Add Licenses via Dashboard
It is possible to add a license via the Licenses page of the Settings menu in the navigation sidebar. The license can be pasted directly into the text field or loaded from a .lic file.
License Grace Period
If the license status of the cluster becomes invalid (e.g., the licensed node limit is exceeded), the ability to perform manual actions or create new policies will be disabled, but your previously scheduled policies will continue to run for 50 days.
By default, Veeam Kasten provides a grace period of 50 days to ensure that applications remain protected while a new license is obtained or the cluster is brought back into compliance by reducing the number of nodes. Veeam Kasten will stop the creation of any new jobs (scheduled or manual) after the grace period expires.
If the cluster's license status frequently swaps between valid and invalid states, the amount of time the cluster license spends in an invalid status will be subtracted from subsequent grace periods.
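As an illustration of that accounting (the numbers are hypothetical, not taken from the product):

```shell
# Hypothetical grace-period accounting sketch: time the license already
# spent in an invalid state is subtracted from the next grace period.
GRACE_DAYS=50
PRIOR_INVALID_DAYS=12   # hypothetical time already spent out of compliance
REMAINING=$((GRACE_DAYS - PRIOR_INVALID_DAYS))
echo "$REMAINING"       # → 38 days of grace remaining
```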
You can see node usage from the last two months via the Licenses page of the Settings menu in the navigation sidebar. Usage is tracked starting from installations of version 4.5.8 or later. From version 5.0.11 onward, the same information is available through Prometheus.
Manually Creating or Using an Existing Service Account
Note
For more information regarding ServiceAccount restrictions with Kasten, please refer to this documentation.
The following instructions can be used to create a new Service Account that grants Veeam Kasten the required permissions to Kubernetes resources, and then to use that Service Account as part of the install process. The instructions assume that you will be installing Veeam Kasten in the kasten-io namespace.
# Create the kasten-io namespace if you have not done so already.
$ kubectl create namespace kasten-io
# Create a ServiceAccount named k10-sa for Veeam Kasten
$ kubectl --namespace kasten-io create sa k10-sa
# Create a cluster role binding for k10-sa
$ kubectl create clusterrolebinding k10-sa-rb \
--clusterrole cluster-admin \
--serviceaccount=kasten-io:k10-sa
Following the SA creation, you can install Veeam Kasten using:
$ helm install k10 kasten/k10 --namespace=kasten-io \
--set rbac.create=false \
--set serviceAccount.create=false \
--set serviceAccount.name=k10-sa
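The same settings can be kept in a values file instead of --set flags; a sketch mirroring the command above:

```yaml
# values.yaml (sketch): use the pre-created k10-sa ServiceAccount
rbac:
  create: false
serviceAccount:
  create: false
  name: k10-sa
```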
Pinning Veeam Kasten to Specific Nodes
While not generally recommended, there might be situations (e.g., test environments, nodes reserved for infrastructure tools, or clusters without autoscaling enabled) where Veeam Kasten needs to be pinned to a subset of nodes in your cluster. You can do this easily on an existing deployment by using a combination of NodeSelectors and Taints and Tolerations.
The process to modify a deployment to accomplish this is demonstrated in the following example. The example assumes that the nodes you want to restrict Veeam Kasten to have the label selector-key: selector-value and a taint set to taint-key=taint-value:NoSchedule.
$ cat << EOF > patch.yaml
spec:
  template:
    spec:
      nodeSelector:
        selector-key: selector-value
      tolerations:
      - key: "taint-key"
        operator: "Equal"
        value: "taint-value"
        effect: "NoSchedule"
EOF
$ kubectl get deployment --namespace kasten-io | awk 'FNR == 1 {next} {print $1}' \
| xargs -I DEP kubectl patch deployments DEP --namespace kasten-io --patch "$(cat patch.yaml)"
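The awk 'FNR == 1 {next} {print $1}' filter in the pipeline above simply drops the header row of the kubectl get output and keeps the first column (the deployment names). A local sketch with made-up deployment names:

```shell
# Simulated `kubectl get deployment` output (names are hypothetical)
printf 'NAME          READY\ncatalog-svc   1/1\ngateway       1/1\n' \
  | awk 'FNR == 1 {next} {print $1}'
# → catalog-svc
# → gateway
```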
Running Veeam Kasten Containers as a Specific User
Veeam Kasten service containers run with UID and fsGroup 1000 by default. If the storage class Veeam Kasten is configured to use for its own services requires the containers to run as a specific user, then the user can be modified.
This is often needed when using shared storage, such as NFS, where permissions on the target storage require a specific user.
To run as a specific user (e.g., root (0)), add the following to the Helm install command:
--set services.securityContext.runAsUser=0 \
--set services.securityContext.fsGroup=0 \
--set prometheus.server.securityContext.runAsUser=0 \
--set prometheus.server.securityContext.runAsGroup=0 \
--set prometheus.server.securityContext.runAsNonRoot=false \
--set prometheus.server.securityContext.fsGroup=0
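The same flags expressed as a values file fragment (mirroring the --set options above):

```yaml
services:
  securityContext:
    runAsUser: 0
    fsGroup: 0
prometheus:
  server:
    securityContext:
      runAsUser: 0
      runAsGroup: 0
      runAsNonRoot: false
      fsGroup: 0
```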
Other SecurityContext settings for the Veeam Kasten service containers can be specified using the --set services.securityContext.<setting name> and --set prometheus.server.securityContext.<setting name> options.
Configuring Prometheus
Prometheus is an open-source system monitoring and alerting toolkit bundled with Veeam Kasten.
When passing a value from the command line, the value key has to be prefixed with the prometheus. string:
--set prometheus.server.persistentVolume.storageClass=default-sc
When passing values in a YAML file, all Prometheus settings should be under the prometheus key:
# values.yaml
# global values - apply to both Veeam Kasten and prometheus
global:
  persistence:
    storageClass: default-sc
# Veeam Kasten specific settings
auth:
  basicAuth:
    enabled: true
# prometheus specific settings
prometheus:
  server:
    persistentVolume:
      storageClass: another-sc
Note
To modify the bundled Prometheus configuration, only use the Helm values listed in the Complete List of Veeam Kasten Helm Options. Any undocumented configurations may affect the functionality of Veeam Kasten. Additionally, Veeam Kasten does not support disabling the Prometheus service; doing so may lead to unsupported scenarios, potential monitoring and logging issues, and overall functionality disruptions. It is recommended to keep these services enabled to ensure proper functionality and prevent unexpected behavior.
Complete List of Veeam Kasten Helm Options
The following table lists the configurable parameters of the K10 chart and their default values.
Parameter | Description | Default
---|---|---
|
Whether to enable accept EULA before installation |
|
|
Company name. Required field if EULA is accepted |
|
|
Contact email. Required field if EULA is accepted |
|
|
License string obtained from Kasten |
|
|
Whether to enable RBAC with a specific cluster role and binding for K10 |
|
|
Whether to create a SecurityContextConstraints for K10 ServiceAccounts |
|
|
Sets the SecurityContextConstraints priority |
|
|
Whether the dashboardbff Pods may use the node network |
|
|
Whether the executor Pods may use the node network |
|
|
Whether the aggregatedapis Pods may use the node network |
|
|
Specifies whether a ServiceAccount should be created |
|
|
The name of the ServiceAccount to use. If not set, a name is derived using the release and chart names. |
|
|
Specifies whether the K10 dashboard should be exposed via ingress |
|
|
Optional name of the Ingress object for the K10 dashboard. If not set, the name is formed using the release name. |
|
|
Cluster ingress controller class: |
|
|
FQDN (e.g., |
|
|
URL path for K10 Dashboard (e.g., |
|
|
Specifies the path type for the ingress resource |
|
|
Additional Ingress object annotations |
|
|
Configures a TLS use for |
|
|
Optional TLS secret name |
|
|
Configures the default backend backed by a service for the K10 dashboard Ingress (mutually exclusive setting with |
|
|
The name of a service referenced by the default backend (required if the service-backed default backend is used). |
|
|
The port name of a service referenced by the default backend (mutually exclusive setting with port |
|
|
The port number of a service referenced by the default backend (mutually exclusive setting with port |
|
|
Configures the default backend backed by a resource for the K10 dashboard Ingress (mutually exclusive setting with |
|
|
Optional API group of a resource backing the default backend. |
|
|
The type of a resource being referenced by the default backend (required if the resource default backend is used). |
|
|
The name of a resource being referenced by the default backend (required if the resource default backend is used). |
|
|
Default global size of volumes for K10 persistent services |
|
|
Size of a volume for catalog service |
|
|
Size of a volume for jobs service |
|
|
Size of a volume for logging service |
|
|
Size of a volume for metering service |
|
|
Specified StorageClassName will be used for PVCs |
|
|
Configures custom labels to be set to all Kasten Pods |
|
|
Configures custom annotations to be set to all Kasten Pods |
|
|
Specify the helm repository for offline (airgapped) installation |
|
|
Provide secret which contains docker config for private repository. Use |
|
|
Provide external prometheus host name |
|
|
Provide external prometheus port number |
|
|
Provide Base URL of external prometheus |
|
|
Enable |
|
|
Enable Google Workload Identity Federation for K10 |
|
|
Identity Provider type for Google Workload Identity Federation for K10 |
|
|
Audience for whom the ID Token from Identity Provider is intended |
|
|
AWS access key ID (required for AWS deployment) |
|
|
AWS access key secret |
|
|
ARN of the AWS IAM role assumed by K10 to perform any AWS operation. |
|
|
The secret that contains AWS access key ID, AWS access key secret and AWS IAM role for AWS |
|
|
Non-default base64 encoded GCP Service Account key |
|
|
Sets Google Project ID other than the one used in the GCP Service Account |
|
|
Azure tenant ID (required for Azure deployment) |
|
|
Azure Service App ID |
|
|
Azure Service APP secret |
|
|
The secret that contains ClientID, ClientSecret and TenantID for Azure |
|
|
Resource Group name that was created for the Kubernetes cluster |
|
|
Subscription ID in your Azure tenant |
|
|
Resource management endpoint for the Azure Stack instance |
|
|
Azure Active Directory login endpoint |
|
|
Azure Active Directory resource ID to obtain AD tokens |
|
|
Microsoft Entra ID login endpoint |
|
|
Microsoft Entra ID resource ID to obtain AD tokens |
|
|
Azure Cloud Environment ID |
|
|
vSphere endpoint for login |
|
|
vSphere username for login |
|
|
vSphere password for login |
|
|
The secret that contains vSphere username, vSphere password and vSphere endpoint |
|
|
Set base64 encoded docker config to use for image pull operations. Alternative to the |
|
|
Use |
|
|
Name of the ConfigMap that contains a certificate for a trusted root certificate authority |
|
|
Cluster name for better logs visibility |
|
|
Sets AWS_REGION for metering service |
|
|
Control license reporting (set to |
|
|
Sets metric report collection period (in seconds) |
|
|
Sets metric report push period (in seconds) |
|
|
Sets K10 promotion ID from marketing campaigns |
|
|
Sets AWS cloud metering license mode |
|
|
Sets AWS managed license mode |
|
|
Sets Red Hat cloud metering license mode |
|
|
Sets AWS managed license config secret |
|
|
Configures an external gateway for K10 API services |
|
|
Standard annotations for the services |
|
|
Domain name for the K10 API services |
|
|
Supported gateway type: |
|
|
ARN for the AWS ACM SSL certificate used in the K10 API server |
|
|
Configures basic authentication for the K10 dashboard |
|
|
A username and password pair separated by a colon character |
|
|
Name of an existing Secret that contains a file generated with htpasswd |
|
|
A list of groups whose members are granted admin level access to K10's dashboard |
|
|
A list of users who are granted admin level access to K10's dashboard |
|
|
Configures token based authentication for the K10 dashboard |
|
|
Configures Open ID Connect based authentication for the K10 dashboard |
|
|
URL for the OIDC Provider |
|
|
URL to the K10 gateway service |
|
|
Space separated OIDC scopes required for userinfo. Example: "profile email" |
|
|
The type of prompt to be used during authentication (none, consent, login or select_account) |
|
|
Client ID given by the OIDC provider for K10 |
|
|
Client secret given by the OIDC provider for K10 |
|
|
The secret that contains the Client ID and Client secret given by the OIDC provider for K10 |
|
|
The claim to be used as the username |
|
|
Prefix that has to be used with the username obtained from the username claim |
|
|
Name of a custom OpenID Connect claim for specifying user groups |
|
|
All groups will be prefixed with this value to prevent conflicts |
|
|
Maximum OIDC session duration |
|
|
Enable OIDC Refresh Token support |
|
|
Enables access to the K10 dashboard by authenticating with the OpenShift OAuth server |
|
|
Name of the service account that represents an OAuth client |
|
|
The token corresponding to the service account |
|
|
The secret that contains the token corresponding to the service account |
|
|
The URL used for accessing K10's dashboard |
|
|
The URL for accessing OpenShift's API server |
|
|
To turn off SSL verification of connections to OpenShift |
|
|
Set this to true to use the CA certificate corresponding to the Service Account |
|
|
Set this to false to disable the OCP CA certificates automatic extraction to the K10 namespace |
|
|
Configures Active Directory/LDAP based authentication for the K10 dashboard |
|
|
To force a restart of the authentication service Pod (useful when updating authentication config) |
|
|
The URL used for accessing K10's dashboard |
|
|
Host and optional port of the AD/LDAP server in the form |
|
|
Required if the AD/LDAP host is not using TLS |
|
|
To turn off SSL verification of connections to the AD/LDAP host |
|
|
When set to true, ldap:// is used to connect to the server followed by creation of a TLS session. When set to false, ldaps:// is used. |
|
|
The Distinguished Name(username) used for connecting to the AD/LDAP host |
|
|
The password corresponding to the |
|
|
The name of the secret that contains the password corresponding to the |
|
|
The base Distinguished Name to start the AD/LDAP search from |
|
|
Optional filter to apply when searching the directory |
|
|
Attribute used for comparing user entries when searching the directory |
|
|
AD/LDAP attribute in a user's entry that should map to the user ID field in a token |
|
|
AD/LDAP attribute in a user's entry that should map to the email field in a token |
|
|
AD/LDAP attribute in a user's entry that should map to the name field in a token |
|
|
AD/LDAP attribute in a user's entry that should map to the preferred_username field in a token |
|
|
The base Distinguished Name to start the AD/LDAP group search from |
|
|
Optional filter to apply when searching the directory for groups |
|
|
The AD/LDAP attribute that represents a group's name in the directory |
|
|
List of field pairs that are used to match a user to a group. |
|
|
Attribute in the user's entry that must match with the |
|
|
Attribute in the group's entry that must match with the |
|
|
A list of groups whose members are allowed access to K10's dashboard |
|
|
Custom security context for K10 service containers |
|
|
User ID K10 service containers run as |
|
|
Group ID K10 service containers run as |
|
|
FSGroup that owns K10 service container volumes |
|
|
Whether to enable writing K10 audit event logs to stdout (standard output) |
|
|
Directory path for saving audit logs in a cloud object store |
|
|
Whether to enable sending K10 audit event logs to AWS S3 |
|
|
Enables injection of sidecar container required to perform Generic Volume Backup into workload Pods |
|
|
Set of labels to select namespaces in which sidecar injection is enabled for workloads |
|
|
Set of labels to filter workload objects in which the sidecar is injected |
|
|
Port number on which the mutating webhook server accepts request |
|
|
Resource requests and limits for gateway Pod |
|
|
Specifies the gateway services external port |
|
|
Specifies resource requests and limits for generic backup sidecar and all temporary Kasten worker Pods. Superseded by ActionPodSpec |
|
|
Choose whether to enable the multi-cluster system components and capabilities |
|
|
Choose whether to setup cluster as a multi-cluster primary |
|
|
Primary cluster name |
|
|
Primary cluster dashboard URL |
|
|
(optional) Set Prometheus image registry. |
|
|
(optional) Set Prometheus image repository. |
|
|
(optional) Whether to create Prometheus RBAC configuration. Warning - this action will allow prometheus to scrape Pods in all k8s namespaces |
|
|
DEPRECATED: (optional) Enable Prometheus |
|
|
DEPRECATED: (optional) Set true to create ServiceAccount for |
|
|
DEPRECATED: (optional) Enable Prometheus |
|
|
DEPRECATED: (optional) Enable Prometheus |
|
|
DEPRECATED: (optional) Set true to create ServiceAccount for |
|
|
DEPRECATED: (optional) Enable Prometheus |
|
|
DEPRECATED: (optional) Set true to create ServiceAccount for |
|
|
DEPRECATED: (optional) Enable Prometheus ScrapeCAdvisor |
|
|
(optional) If false, K10's Prometheus server will not be created, reducing the dashboard's functionality. |
|
|
(optional) Set security context |
|
|
(optional) Enable security context |
|
|
(optional) Set security context |
|
|
(optional) Set security context |
|
|
(optional) K10 Prometheus data retention |
|
|
DEPRECATED: (optional) The number of Prometheus server Pods that can be created above the desired amount of Pods during an update |
|
|
DEPRECATED: (optional) The number of Prometheus server Pods that can be unavailable during the upgrade process |
|
|
DEPRECATED: (optional) Change default deployment strategy for Prometheus server |
|
|
DEPRECATED: (optional) If true, K10 Prometheus server will create a Persistent Volume Claim |
|
|
(optional) K10 Prometheus server data Persistent Volume size |
|
|
(optional) StorageClassName used to create Prometheus PVC. Setting this option overwrites global StorageClass value |
|
|
DEPRECATED: (optional) Prometheus configmap name to override default generated name |
|
|
(optional) Prometheus deployment name to override default generated name |
|
|
(optional) K10 Prometheus external url path at which the server can be accessed |
|
|
(optional) K10 Prometheus prefix slug at which the server can be accessed |
|
|
DEPRECATED: (optional) Set true to create ServiceAccount for Prometheus server service |
|
|
Overwriting the default K10 container resource requests and limits |
varies depending on the container |
|
Specifies whether the K10 dashboard should be exposed via route |
|
|
FQDN (e.g., |
|
|
URL path for K10 Dashboard (e.g., |
|
|
Additional Route object annotations |
|
|
Additional Route object labels |
|
|
Configures a TLS use for |
|
|
Specifies behavior for insecure scheme traffic |
|
|
Specifies the TLS termination of the route |
|
|
Specifies the number of executor-svc Pods used to process Kasten jobs |
3 |
|
Specifies the number of threads per executor-svc Pod used to process Kasten jobs |
8 |
|
Per action limit of concurrent manifest data snapshots, based on workload (ex. Namespace, Deployment, StatefulSet, VirtualMachine) |
5 |
|
Cluster-wide limit of concurrent CSI VolumeSnapshot creation requests |
|
|
Cluster-wide limit of concurrent non-CSI snapshot creation requests |
|
|
Per action limit of concurrent volume export operations |
|
|
Cluster-wide limit of concurrent volume export operations |
|
|
Cluster-wide limit of concurrent Generic Volume Backup operations |
|
|
Cluster-wide limit of concurrent ImageStream container image backup (i.e. copy from) and restore (i.e. copy to) operations |
|
|
Per action limit of concurrent manifest data restores, based on workload (ex. Namespace, Deployment, StatefulSet, VirtualMachine) |
3 |
|
Per action limit of concurrent CSI volume provisioning requests when restoring from VolumeSnapshots |
3 |
|
Per action limit of concurrent volume restore operations from an exported backup |
3 |
|
Cluster-wide limit of concurrent volume restore operations from exported backups |
|
|
Specifies the domain name of the cluster |
|
|
Specifies the timeout (in minutes) for Blueprint backup actions |
|
|
Specifies the timeout (in minutes) for Blueprint restore actions |
|
|
Specifies the timeout (in minutes) for Blueprint delete actions |
|
|
Specifies the timeout (in minutes) for Blueprint backupPrehook and backupPosthook actions |
|
|
Specifies the timeout (in minutes) for temporary worker Pods used to validate backup repository existence |
|
|
Specifies the timeout (in minutes) for temporary worker Pods used to collect repository statistics |
|
|
Specifies the timeout (in minutes) for temporary worker Pods used for shareable volume restore operations |
|
|
Specifies the timeout (in minutes) for all other temporary worker Pods used during Veeam Kasten operations |
|
|
Specifies the timeout (in minutes) for completing execution of any child job, after which the parent job will be canceled. If no value is set, a default of 10 hours will be used |
|
|
Duration of a session token generated by AWS for an IAM role. The minimum value is 15 minutes and the maximum value is the maximum duration setting for that IAM role. For documentation about how to view and edit the maximum session duration for an IAM role see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session. The value accepts a number along with a single character |
|
|
Specifies the AWS EFS backup vault name |
|
|
Specifies the timeout for VMWare operations |
|
|
Specifies the AWS CMK key ID for encrypting K10 Primary Key |
|
|
Sets garbage collection period (in seconds) |
|
|
Sets maximum actions to keep |
|
|
Enables action collectors |
|
|
Defines the time duration within which the VMs must be unfrozen while backing them up. To know more about format go doc can be followed |
|
|
Specifies a list of applications to be excluded from the dashboard & compliance considerations. Format should be a YAML array |
|
|
Enables a sidecar container for temporary worker Pods used to push Pod performance metrics to Prometheus |
|
|
Specifies the period after which metrics for an individual worker Pod are removed from Prometheus |
|
|
Specifies the frequency for pushing metrics into Prometheus |
|
|
Specifies resource requests and limits for the temporary worker Pod metric sidecar |
|
|
Forces any Pod created by a Blueprint to run as root user |
|
|
Specifies the default priority class name for all K10 deployments and ephemeral Pods |
|
|
Overrides the default priority class name for the specified deployment |
|
|
Set the percentage increase for the ephemeral Persistent Volume Claim's storage request, e.g. PVC size = (file raw size) * (1 + |
|
|
Specifies how many files can be uploaded in parallel to the data store |
|
|
Specifies how many files can be downloaded in parallel from the data store |
|
|
Enables K10 Quick Disaster Recovery |
|
|
Specifies whether K10 should be run in the FIPS mode of operation |
|
|
Specifies whether K10 should use |
|
|
Max CPU which might be setup in |
|
|
Max memory which might be setup in |
|
|
The name of |
|
|
The namespace of |
|
Helm Configuration for Parallel Upload to the Storage Repository
Veeam Kasten provides an option to manage parallelism for file mode uploads to the storage repository through a configurable Helm parameter, datastore.parallelUploads. To upload N files in parallel to the storage repository, set this flag to N. This flag can be adjusted when dealing with larger PVCs to improve performance. By default, the value is set to 8.
Note
This parameter should not be modified unless instructed by the support team.
Helm Configuration for Parallel Download from the Storage Repository
Veeam Kasten provides an option to manage parallelism for file mode downloads from the storage repository through a configurable Helm parameter, datastore.parallelDownloads. To download N files in parallel from the storage repository, set this flag to N. This flag can be adjusted when dealing with larger PVCs to improve performance. By default, the value is set to 8.
Note
This parameter should not be modified unless instructed by the support team.
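Both knobs expressed as a values file fragment (the values shown are the documented defaults):

```yaml
datastore:
  parallelUploads: 8
  parallelDownloads: 8
```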
Setting Custom Labels and Annotations on Veeam Kasten Pods
Veeam Kasten provides the ability to apply labels and annotations to all of its pods. This applies to both core pods and all temporary worker pods created as a result of Veeam Kasten operations. Labels and annotations are applied using the global.podLabels and global.podAnnotations Helm flags, respectively.
For example, if using a values.yaml file:
global:
  podLabels:
    app.kubernetes.io/component: "database"
    topology.kubernetes.io/region: "us-east-1"
  podAnnotations:
    config.kubernetes.io/local-config: "true"
    kubernetes.io/description: "Description"
Alternatively, the Helm parameters can be configured using the --set
flag:
--set global.podLabels.labelKey1=value1 --set global.podLabels.labelKey2=value2 \
--set global.podAnnotations.annotationKey1="Example annotation" --set global.podAnnotations.annotationKey2=value2
Note
Labels and annotations passed using these Helm parameters (global.podLabels and global.podAnnotations) apply to the Prometheus pod as well, if it is managed by Veeam Kasten. However, labels and annotations set in the Prometheus sub-chart take priority over the global pod labels and annotations.