
Logical MongoDB Backup on OpenShift clusters

To demonstrate data protection for MongoDB deployed with OpenShift, install MongoDB using the OpenShift ephemeral database template, as shown below:

$ oc create namespace mongodb-logical
$ oc new-app https://raw.githubusercontent.com/openshift/origin/master/examples/db-templates/mongodb-ephemeral-template.json \
--namespace mongodb-logical
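
Before creating the Blueprint, you can optionally confirm that the MongoDB pod created by the template is up and running (the pod name will vary):

$ oc get pods --namespace mongodb-logical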

Next, create a file named mongo-dep-config-blueprint.yaml with the following contents:

apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mongodb-blueprint
actions:
  backup:
    outputArtifacts:
      mongoBackup:
        # Capture the kopia snapshot information for subsequent actions
        # The information includes the kopia snapshot ID which is essential for restore and delete to succeed
        # `kopiaOutput` is the name provided to kando using `--output-name` flag
        kopiaSnapshot: "{{ .Phases.mongoDump.Output.kopiaOutput }}"
    phases:
    - func: MultiContainerRun
      name: mongoDump
      objects:
        mongosecret:
          kind: Secret
          name: "{{ .DeploymentConfig.Name }}"
          namespace: "{{ .DeploymentConfig.Namespace }}"
      args:
        namespace: "{{ .DeploymentConfig.Namespace }}"
        sharedVolumeMedium: Memory

        initImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
        initCommand: ["bash", "-o", "errexit", "-o", "pipefail", "-c", "mkfifo /tmp/data; chmod 666 /tmp/data"]

        backgroundImage: bitnami/mongodb:7.0-debian-12
        backgroundCommand:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          host="{{ .DeploymentConfig.Name }}.{{ .DeploymentConfig.Namespace }}.svc.cluster.local"
          dbPassword='{{ index .Phases.mongoDump.Secrets.mongosecret.Data "database-admin-password" | toString }}'
          dump_cmd="mongodump --gzip --archive --host ${host} -u admin -p ${dbPassword}"
          echo $dump_cmd
          ${dump_cmd} > /tmp/data

        outputImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
        outputCommand:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          backup_file_path='rs_backup.gz'
          cat /tmp/data | kando location push --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --output-name "kopiaOutput" -
  restore:
    inputArtifactNames:
    # The kopia snapshot info created in backup phase can be used here
    # Use the `--kopia-snapshot` flag in kando to pass in `mongoBackup.KopiaSnapshot`
    - mongoBackup
    phases:
    - func: MultiContainerRun
      name: pullFromStore
      objects:
        mongosecret:
          kind: Secret
          name: "{{ .DeploymentConfig.Name }}"
          namespace: "{{ .DeploymentConfig.Namespace }}"
      args:
        namespace: "{{ .DeploymentConfig.Namespace }}"
        sharedVolumeMedium: Memory

        initImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
        initCommand: ["bash", "-o", "errexit", "-o", "pipefail", "-c", "mkfifo /tmp/data; chmod 666 /tmp/data"]

        backgroundImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
        backgroundCommand:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          backup_file_path='rs_backup.gz'
          kopia_snap='{{ .ArtifactsIn.mongoBackup.KopiaSnapshot }}'
          kando location pull --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}" - > /tmp/data

        outputImage: bitnami/mongodb:7.0-debian-12
        outputCommand:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          host="{{ .DeploymentConfig.Name }}.{{ .DeploymentConfig.Namespace }}.svc.cluster.local"
          dbPassword='{{ index .Phases.pullFromStore.Secrets.mongosecret.Data "database-admin-password" | toString }}'
          restore_cmd="mongorestore --gzip --archive --drop --host ${host} -u admin -p ${dbPassword}"
          cat /tmp/data | ${restore_cmd}
  delete:
    inputArtifactNames:
    # The kopia snapshot info created in backup phase can be used here
    # Use the `--kopia-snapshot` flag in kando to pass in `mongoBackup.KopiaSnapshot`
    - mongoBackup
    phases:
    - func: KubeTask
      name: deleteFromStore
      args:
        namespace: "{{ .Namespace.Name }}"
        image: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
        command:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          backup_file_path='rs_backup.gz'
          kopia_snap='{{ .ArtifactsIn.mongoBackup.KopiaSnapshot }}'
          kando location delete --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}"

Then apply the file using:

$ oc apply -f mongo-dep-config-blueprint.yaml --namespace kasten-io
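
The Blueprint is a custom resource in the cr.kanister.io API group, so its creation can be verified with an optional check:

$ oc get blueprints.cr.kanister.io --namespace kasten-io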
Note

The example above serves as a blueprint for logical MongoDB backups on OpenShift clusters. It may need to be adapted for specific production environments and setups on OpenShift, so it is highly recommended to carefully review and modify the blueprint as needed before deploying it for production use.

Alternatively, use the Blueprints page on the Veeam Kasten Dashboard to create the Blueprint resource.

Note

If the MongoDB chart is installed with an existing secret by setting the parameter --set auth.existingSecret=<mongo-secret-name>, the secret name in the blueprint mongo-dep-config-blueprint.yaml needs to be modified in the following places (see the example after this list):

actions.backup.phases[0].objects.mongosecret.name: <mongo-secret-name>
actions.restore.phases[0].objects.mongosecret.name: <mongo-secret-name>
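
For example, the objects section of the backup and restore phases would then reference the existing secret directly instead of the DeploymentConfig name:

objects:
  mongosecret:
    kind: Secret
    # name of the existing secret passed via auth.existingSecret
    name: <mongo-secret-name>
    namespace: "{{ .DeploymentConfig.Namespace }}"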

Once the Blueprint is created, annotate the DeploymentConfig with the following annotation to instruct Veeam Kasten to use this Blueprint when performing data management operations on the MongoDB instance.

$ oc annotate deploymentconfig mongodb kanister.kasten.io/blueprint='mongodb-blueprint' \
--namespace=mongodb-logical
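
To confirm the annotation was applied, the DeploymentConfig metadata can be inspected (an optional check):

$ oc get deploymentconfig mongodb --namespace=mongodb-logical \
    --output=jsonpath='{.metadata.annotations}'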

Finally, use Veeam Kasten to back up and restore the application.