
Logical Elasticsearch Backup

If it hasn't been done already, add the elastic Helm repository to your local Helm configuration:

# Add elastic helm repo
$ helm repo add elastic https://helm.elastic.co
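
If the repository was added previously, refresh the local chart index so the latest chart version is available:

# Refresh the local Helm chart cache
$ helm repo update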

Install the Elasticsearch chart from the elastic Helm repository:

$ kubectl create namespace elasticsearch
$ helm install --namespace elasticsearch elasticsearch elastic/elasticsearch \
--set antiAffinity=soft
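
Before continuing, it is worth confirming that the Elasticsearch pods come up and reach the Ready state. A quick check might look like this:

# Watch the pods in the elasticsearch namespace until they are Running and Ready
$ kubectl get pods --namespace elasticsearch --watch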

Once Elasticsearch is installed, create an index and insert some documents into the Elasticsearch cluster by following the commands mentioned here.
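
For example, a minimal smoke test might create an index and insert a single document using curl. The customers index and the document body below are illustrative only; the commands assume the chart defaults of a pod named elasticsearch-master-0 and a built-in elastic user whose generated password is stored in the elasticsearch-master-credentials Secret:

# Read the generated password of the built-in "elastic" user
$ ELASTIC_PASSWORD=$(kubectl get secret elasticsearch-master-credentials \
    --namespace elasticsearch -o jsonpath='{.data.password}' | base64 --decode)

# Insert a sample document; the index is created implicitly on first write
$ kubectl exec --namespace elasticsearch elasticsearch-master-0 -- \
    curl -sk -u "elastic:${ELASTIC_PASSWORD}" -X POST \
    "https://localhost:9200/customers/_doc?pretty" \
    -H 'Content-Type: application/json' -d '{"name": "John Doe"}'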

Next, create a file named elasticsearch-blueprint.yaml with the following contents:

apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: elasticsearch-blueprint
actions:
  backup:
    outputArtifacts:
      esBackup:
        # Capture the kopia snapshot information for subsequent actions
        # The information includes the kopia snapshot ID which is essential for restore and delete to succeed
        # `kopiaOutput` is the name provided to kando using the `--output-name` flag
        kopiaSnapshot: "{{ .Phases.backupToStore.Output.kopiaOutput }}"
    phases:
      - func: MultiContainerRun
        name: backupToStore
        objects:
          esMasterCredSecret:
            kind: Secret
            name: "{{ index .Object.metadata.labels.app }}-credentials"
            namespace: "{{ .StatefulSet.Namespace }}"
        args:
          namespace: "{{ .StatefulSet.Namespace }}"
          sharedVolumeMedium: Memory

          # The init container creates a named pipe on the shared volume so the dump
          # can be streamed from the background container to the output container
          initImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
          initCommand: ["bash", "-o", "errexit", "-o", "pipefail", "-c", "mkfifo /tmp/data; chmod 666 /tmp/data"]

          # The background container dumps the cluster contents into the named pipe
          backgroundImage: elasticdump/elasticsearch-dump:latest
          backgroundCommand:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              host_name="{{ .Object.spec.serviceName }}.{{ .StatefulSet.Namespace }}.svc.cluster.local"
              master_username="{{ index .Phases.backupToStore.Secrets.esMasterCredSecret.Data "username" | toString }}"
              master_password="{{ index .Phases.backupToStore.Secrets.esMasterCredSecret.Data "password" | toString }}"
              NODE_TLS_REJECT_UNAUTHORIZED=0 elasticdump --bulk=true --input=https://${master_username}:${master_password}@${host_name}:9200 --output=$ > /tmp/data

          # The output container compresses the stream and pushes it to the location profile
          outputImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
          outputCommand:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path='backup.gz'
              cat /tmp/data | gzip -c | kando location push --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --output-name "kopiaOutput" -
  restore:
    inputArtifactNames:
      # The kopia snapshot info created in the backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `esBackup.KopiaSnapshot`
      - esBackup
    phases:
      - func: MultiContainerRun
        name: restoreFromObjectStore
        objects:
          esMasterCredSecret:
            kind: Secret
            name: "{{ index .Object.metadata.labels.app }}-credentials"
            namespace: "{{ .StatefulSet.Namespace }}"
        args:
          namespace: "{{ .StatefulSet.Namespace }}"
          initImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
          initCommand: ["bash", "-o", "errexit", "-o", "pipefail", "-c", "mkfifo /tmp/data; chmod 666 /tmp/data"]

          # The background container pulls the backup from the location profile and
          # decompresses it into the named pipe
          backgroundImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
          backgroundCommand:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path='backup.gz'
              kopia_snap='{{ .ArtifactsIn.esBackup.KopiaSnapshot }}'
              kando location pull --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}" - | gunzip -c > /tmp/data

          # The output container replays the dump against the Elasticsearch cluster
          outputImage: elasticdump/elasticsearch-dump:latest
          outputCommand:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              host_name="{{ .Object.spec.serviceName }}.{{ .StatefulSet.Namespace }}.svc.cluster.local"
              master_username="{{ index .Phases.restoreFromObjectStore.Secrets.esMasterCredSecret.Data "username" | toString }}"
              master_password="{{ index .Phases.restoreFromObjectStore.Secrets.esMasterCredSecret.Data "password" | toString }}"
              cat /tmp/data | NODE_TLS_REJECT_UNAUTHORIZED=0 elasticdump --bulk=true --input=$ --output=https://${master_username}:${master_password}@${host_name}:9200
  delete:
    inputArtifactNames:
      # The kopia snapshot info created in the backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `esBackup.KopiaSnapshot`
      - esBackup
    phases:
      - func: KubeTask
        name: deleteFromStore
        args:
          namespace: "{{ .Namespace.Name }}"
          image: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path='backup.gz'
              kopia_snap='{{ .ArtifactsIn.esBackup.KopiaSnapshot }}'
              kando location delete --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}"

Then apply the file using:

$ kubectl --namespace=kasten-io create -f elasticsearch-blueprint.yaml
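
To verify that the Blueprint resource was created:

$ kubectl get blueprints.cr.kanister.io --namespace kasten-io
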
Note

The Elasticsearch backup example above serves as a template for logical backup blueprints. It may need to be adapted to specific production environments and setups, so carefully review and modify the blueprint as needed before deploying it for production use.

Alternatively, use the Blueprints page on Veeam Kasten Dashboard to create the Blueprint resource.

Once the Blueprint is created, the StatefulSet must be annotated to instruct Veeam Kasten to use the Blueprint when performing backup operations on this Elasticsearch instance. The following example annotates the Elasticsearch StatefulSet with the elasticsearch-blueprint.

$ kubectl annotate statefulset elasticsearch-master kanister.kasten.io/blueprint='elasticsearch-blueprint' \
--namespace=elasticsearch
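
To confirm the annotation took effect, you can read it back:

# Print the blueprint annotation on the StatefulSet
$ kubectl get statefulset elasticsearch-master --namespace=elasticsearch \
    --output=jsonpath='{.metadata.annotations.kanister\.kasten\.io/blueprint}'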

Finally, use Veeam Kasten to back up and restore the application.
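
After a restore completes, a simple sanity check is to query the document count of the index created earlier (reusing the illustrative customers index and the ELASTIC_PASSWORD variable from above):

# Count documents in the restored index
$ kubectl exec --namespace elasticsearch elasticsearch-master-0 -- \
    curl -sk -u "elastic:${ELASTIC_PASSWORD}" "https://localhost:9200/customers/_count?pretty"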