Generic Storage Backup and Restore¶
Applications can often be deployed using non-shared storage (e.g., local SSDs) or on systems where K10 does not currently support the underlying storage provider. To protect data in these scenarios, K10 with Kanister gives you the ability, with extremely minor application modifications, to backup, restore, and migrate this application data in an efficient and transparent manner.
While a complete example is provided below, the only changes needed are the addition of a sidecar to your application deployment that can mount the application data volume and an annotation that requests generic backup.
The sidecar can be added either by leveraging K10's sidecar injection feature or by manually patching the resource as described below.
Enable Kanister Sidecar Injection¶
K10 implements a Mutating Webhook Server which mutates workload
objects by injecting a Kanister sidecar into the workload when the
workload is created. The Mutating Webhook Server also adds the
k10.kasten.io/forcegenericbackup annotation to the targeted
workloads to enforce generic backup. By default, the sidecar injection
feature is disabled. To enable this feature, the following options
need to be used when installing K10 via the Helm chart:
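--set injectKanisterSidecar.enabled=true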
Once enabled, Kanister sidecar injection will be enabled for all
workloads in all namespaces. To perform sidecar injections on
workloads only in specific namespaces, the namespaceSelector
labels can be set using the following option:
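--set-string injectKanisterSidecar.namespaceSelector.matchLabels.<label-key>=<label-value>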
Once the namespaceSelector labels are set, the Kanister sidecar will be
injected only into workloads created in namespaces whose labels match
the namespaceSelector labels.
Similarly, to inject the sidecar only for specific workloads, the
objectSelector option can be set as shown below:
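--set-string injectKanisterSidecar.objectSelector.matchLabels.<label-key>=<label-value>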
For example, to inject sidecars into workloads that match the label
component: db and are in namespaces that are labeled with
k10/injectKanisterSidecar: true, the following options should be
added to the K10 Helm install command:
--set injectKanisterSidecar.enabled=true \
--set-string injectKanisterSidecar.objectSelector.matchLabels.component=db \
--set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true
The labels set with objectSelector and namespaceSelector are
mutually inclusive. This means that if both options are set, a
sidecar is injected only into workloads that have labels matching
the objectSelector labels AND are created in a namespace with
labels that match the namespaceSelector labels. Similarly, if
multiple labels are specified for either namespaceSelector or
objectSelector, they all need to match for sidecar injection to occur.
For the sidecar to choose a security context that can read data from the volume, K10 performs the following checks in order:
1. If the primary container has a SecurityContext set, it will be used in the sidecar. If there are multiple primary containers, the list of containers will be iterated over and the first one that has a SecurityContext set will be used.
2. If the workload PodSpec has a SecurityContext set, the sidecar does not need an explicit specification and will automatically use the context from the PodSpec.
3. If neither of the above criteria is met, by default, no SecurityContext will be set.
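For illustration, a minimal sketch of check 1 (the user and group IDs are hypothetical): a primary container that sets its own SecurityContext, which the injected sidecar would then reuse:

containers:
- name: demo-container
  image: alpine:3.7
  securityContext:     # reused by the injected kanister-sidecar (check 1)
    runAsUser: 1000
    runAsGroup: 1000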
Update the resource manifest¶
Alternatively, the Kanister sidecar can be added by updating the
resource manifest manually. An example, where
/data is used as a sample mount path, can be seen in the below
specification. Note that the sidecar must be named
kanister-sidecar and the sidecar image version should be pinned to
the latest Kanister release.
- name: kanister-sidecar
  image: kanisterio/kanister-tools:0.31.0
  command: ["bash", "-c"]
  args:
  - "tail -f /dev/null"
  volumeMounts:
  - name: data
    mountPath: /data
Once the above change is made, K10 will be able to automatically extract data and, using its data engine, efficiently deduplicate data and transfer it into an object store.
If your pod uses multiple volumes, you simply need to mount them all within this sidecar container. There is no naming requirement on the mount paths as long as they are unique.
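For example, a sketch of the sidecar's volumeMounts for a pod with two volumes (the volume names data and logs and the mount paths are illustrative):

volumeMounts:
- name: data           # first application volume
  mountPath: /data
- name: logs           # second application volume; any unique path works
  mountPath: /mnt/logs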
Generic Backup Annotation¶
Generic backups can be requested by adding the
k10.kasten.io/forcegenericbackup annotation to the workload as shown in the example below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  labels:
    app: demo
  annotations:
    k10.kasten.io/forcegenericbackup: "true"
The following is a kubectl example to add the annotation to a running deployment:
# Add annotation to force generic backups
$ kubectl annotate deployment <deployment-name> \
    k10.kasten.io/forcegenericbackup="true" --namespace=<namespace-name>
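To confirm the annotation was applied, the deployment's annotations can be inspected with standard kubectl:

# Verify the annotation is present
$ kubectl get deployment <deployment-name> --namespace=<namespace-name> \
    -o jsonpath='{.metadata.annotations}'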
Even when snapshot support from the storage provider is available,
generic backups can be enforced by adding the
k10.kasten.io/forcegenericbackup annotation to the workload as described above.
Finally, note that the Kanister sidecar and Location profile must both be present for generic backups to work.
The section below provides a complete end-to-end example of how to extend your application to support generic backup and restore. A dummy application is used, but it should be straightforward to extend this example.
Make sure you have installed K10 with Kanister sidecar injection enabled.
injectKanisterSidecar can be enabled by passing the following flags while
installing the K10 Helm chart (the namespaceSelector labels are optional):
...
--set injectKanisterSidecar.enabled=true \
--set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true # Optional
Deploy the application¶
The following specification contains a complete example of how to
exercise generic backup and restore functionality. It consists of an
application Deployment that uses a PersistentVolumeClaim (mounted at
/data) for storing data.
Saving the below specification as a file (e.g., deployment.yaml) is
recommended for reuse later.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
  labels:
    app: demo
    pvc: demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo-container
        image: alpine:3.7
        resources:
          requests:
            memory: 256Mi
            cpu: 100m
        command: ["tail"]
        args: ["-f", "/dev/null"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: demo-pvc
Create a namespace:
$ kubectl create namespace <namespace>
If injectKanisterSidecar.namespaceSelector labels were set while installing K10, add matching labels to the namespace:
$ kubectl label namespace <namespace> k10/injectKanisterSidecar=true
Deploy the above application as follows:
# Deploying in a specific namespace $ kubectl apply --namespace=<namespace> -f deployment.yaml
Check status of deployed application:
List pods in the namespace. The demo-app pod should be running with two containers.
# List pods
$ kubectl get pods --namespace=<namespace> | grep demo-app
# demo-app-56667f58dc-pbqqb   2/2   Running   0   24s
Describe the pod and verify that the kanister-sidecar container has been injected with the same volume mounts:
volumeMounts:
- name: data
  mountPath: /data
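Standard kubectl describe can be used for this inspection (pod name taken from the listing above):

# Inspect the pod's containers and their volume mounts
$ kubectl describe pod <pod> --namespace=<namespace>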
Create a Location Profile¶
If you haven't done so already, create a Location profile with the appropriate Location and Credentials information from the K10 settings page. Instructions for creating location profiles can be found here.
The easiest way to insert data into the demo application is to simply copy it in:
# Get pods for the demo application from its namespace
$ kubectl get pods --namespace=<namespace> | grep demo-app

# Copy required data manually into the pod
$ kubectl cp <file-name> <namespace>/<pod>:/data/

# Verify if the data was copied successfully
$ kubectl exec --namespace=<namespace> <pod> -- ls -l /data
Backup the application data either by creating a Policy or running a Manual Backup from K10. This assumes that the application is running on a system where K10 does not support the provisioned disks (e.g., local storage). Make sure to specify the location profile in the advanced settings for the policy. This is required to perform Kanister operations.
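Policies are typically created through the K10 dashboard; for reference, a hedged sketch of a declarative Policy resource follows. The schema is assumed from the K10 API (verify against your release's API documentation), and the policy name, profile name, and application namespace are placeholders:

apiVersion: config.kio.kasten.io/v1alpha1   # assumed K10 Policy API version
kind: Policy
metadata:
  name: demo-backup-policy
  namespace: kasten-io
spec:
  frequency: '@daily'
  retention:
    daily: 7
  actions:
  - action: backup
    backupParameters:
      profile:                  # Location profile required for Kanister operations
        name: <profile-name>
        namespace: kasten-io
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: <namespace>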
To destroy the data manually, run the following command:
# Using kubectl
$ kubectl exec --namespace=<namespace> <pod> -- rm -rf /data/<file-name>
Alternatively, the application and the PVC can be deleted and recreated.
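For example, using the specification file saved earlier (deleting the PVC discards its data):

# Delete the application and its PVC
$ kubectl delete --namespace=<namespace> -f deployment.yaml
# Recreate them from the same specification
$ kubectl apply --namespace=<namespace> -f deployment.yaml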
Restore the data using K10 by selecting the appropriate restore point.
After restore, you should verify that the data is intact. One way to verify this is to use an MD5 checksum tool.
# MD5 on the original file copied
$ md5 <file-name>

# Copy the restored data back to the local environment
$ kubectl get pods --namespace=<namespace> | grep demo-app
$ kubectl cp <namespace>/<pod>:/data/<filename> <new-filename>

# MD5 on the new file
$ md5 <new-filename>
The MD5 checksums should match.