Logical PostgreSQL Backup on OpenShift Clusters
To demonstrate data protection for PostgreSQL provided and deployed with OpenShift, install PostgreSQL according to the documentation provided here.
$ oc create namespace postgres
$ oc new-app https://raw.githubusercontent.com/openshift/origin/release-4.11/examples/db-templates/postgresql-ephemeral-template.json \
--namespace postgres -e POSTGRESQL_ADMIN_PASSWORD=secretpassword
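The template creates a DeploymentConfig named postgresql. Before proceeding, it can be worth confirming that the rollout finished and the database pod is running. This is an optional sanity check; the label selector below assumes the template's default name=postgresql label:

```shell
# Wait for the PostgreSQL DeploymentConfig rollout to complete
oc rollout status dc/postgresql --namespace postgres

# Confirm the database pod is running (label assumes the template default)
oc get pods --namespace postgres -l name=postgresql
```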
The secret created during the PostgreSQL installation does not contain the admin password specified above. The Blueprint, however, needs this password to connect to the PostgreSQL instance and perform data management operations. To address this, create a secret that stores the admin password under the key postgresql_admin_password:
$ oc create secret generic postgresql-postgres --namespace postgres \
--from-literal=postgresql_admin_password=secretpassword
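The secret name postgresql-postgres follows the &lt;DeploymentConfig name&gt;-&lt;namespace&gt; pattern that the Blueprint templates below rely on. To verify the secret holds the expected key, the stored value can be decoded back out:

```shell
# Decode the stored password; should print "secretpassword"
oc get secret postgresql-postgres --namespace postgres \
  -o jsonpath='{.data.postgresql_admin_password}' | base64 -d
```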
Next, create a file named postgres-dep-config-blueprint.yaml with the following contents:
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: postgres-bp
actions:
  backup:
    kind: DeploymentConfig
    outputArtifacts:
      pgBackup:
        # Capture the kopia snapshot information for subsequent actions
        # The information includes the kopia snapshot ID which is essential for restore and delete to succeed
        # `kopiaOutput` is the name provided to kando using `--output-name` flag
        kopiaSnapshot: "{{ .Phases.pgDump.Output.kopiaOutput }}"
    phases:
      - func: MultiContainerRun
        name: pgDump
        objects:
          pgSecret:
            kind: Secret
            name: '{{ .DeploymentConfig.Name }}-{{ .DeploymentConfig.Namespace }}'
            namespace: '{{ .DeploymentConfig.Namespace }}'
        args:
          namespace: '{{ .DeploymentConfig.Namespace }}'
          sharedVolumeMedium: Memory
          initImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
          initCommand: ["bash", "-o", "errexit", "-o", "pipefail", "-c", "mkfifo /tmp/data; chmod 666 /tmp/data"]
          backgroundImage: postgres:13-bullseye
          backgroundCommand:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              export PGHOST='{{ .DeploymentConfig.Name }}.{{ .DeploymentConfig.Namespace }}.svc.cluster.local'
              export PGUSER='postgres'
              export PGPASSWORD='{{ index .Phases.pgDump.Secrets.pgSecret.Data "postgresql_admin_password" | toString }}'
              pg_dumpall --clean -U $PGUSER > /tmp/data
          outputImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
          outputCommand:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="backup.sql"
              cat /tmp/data | kando location push --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --output-name "kopiaOutput" -
  restore:
    kind: DeploymentConfig
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `pgBackup.KopiaSnapshot`
      - pgBackup
    phases:
      - func: MultiContainerRun
        name: pgRestore
        objects:
          pgSecret:
            kind: Secret
            name: '{{ .DeploymentConfig.Name }}-{{ .DeploymentConfig.Namespace }}'
            namespace: '{{ .DeploymentConfig.Namespace }}'
        args:
          namespace: '{{ .DeploymentConfig.Namespace }}'
          sharedVolumeMedium: Memory
          initImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
          initCommand: ["bash", "-o", "errexit", "-o", "pipefail", "-c", "mkfifo /tmp/data; chmod 666 /tmp/data"]
          backgroundImage: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
          backgroundCommand:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="backup.sql"
              kopia_snap='{{ .ArtifactsIn.pgBackup.KopiaSnapshot }}'
              kando location pull --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}" - > /tmp/data
          outputImage: postgres:13-bullseye
          outputCommand:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              export PGHOST='{{ .DeploymentConfig.Name }}.{{ .DeploymentConfig.Namespace }}.svc.cluster.local'
              export PGUSER='postgres'
              export PGPASSWORD='{{ index .Phases.pgRestore.Secrets.pgSecret.Data "postgresql_admin_password" | toString }}'
              cat /tmp/data | psql -q -U "${PGUSER}"
  delete:
    inputArtifactNames:
      # The kopia snapshot info created in backup phase can be used here
      # Use the `--kopia-snapshot` flag in kando to pass in `pgBackup.KopiaSnapshot`
      - pgBackup
    phases:
      - func: KubeTask
        name: deleteDump
        args:
          image: '{{if index .Options "kanisterImage" }} {{- .Options.kanisterImage -}} {{else -}} ghcr.io/kanisterio/kanister-tools:0.113.0 {{- end}}'
          namespace: "{{ .Namespace.Name }}"
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              backup_file_path="backup.sql"
              kopia_snap='{{ .ArtifactsIn.pgBackup.KopiaSnapshot }}'
              kando location delete --profile '{{ toJson .Profile }}' --path "${backup_file_path}" --kopia-snapshot "${kopia_snap}"
Then apply the file:
$ oc apply -f postgres-dep-config-blueprint.yaml --namespace kasten-io
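If the apply succeeded, the Blueprint should be listed in the kasten-io namespace:

```shell
# List Blueprints known to Veeam Kasten; postgres-bp should appear
oc get blueprints.cr.kanister.io --namespace kasten-io
```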
For PostgreSQL versions 14.x or older, Kanister tools version 0.85.0 is required; apply the matching Blueprint instead:
$ oc --namespace kasten-io apply -f \
https://raw.githubusercontent.com/kanisterio/kanister/0.85.0/examples/postgresql-deploymentconfig/blueprint-v2/postgres-dep-config-blueprint.yaml
The PostgreSQL example above serves as a Blueprint template for logical backups on OpenShift clusters. It may need to be adapted to specific production environments and setups, so carefully review and modify the Blueprint as needed before deploying it for production use.
Alternatively, use the Blueprints page on the Veeam Kasten Dashboard to create the Blueprint resource.
Once the Blueprint is created, annotate the DeploymentConfig as shown below to instruct Veeam Kasten to use this Blueprint when performing data management operations on the PostgreSQL instance.
$ oc --namespace postgres annotate deploymentconfig/postgresql \
kanister.kasten.io/blueprint=postgres-bp
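The annotation can be verified afterwards (note the escaped dots in the jsonpath key):

```shell
# Should print "postgres-bp"
oc get deploymentconfig postgresql --namespace postgres \
  -o jsonpath='{.metadata.annotations.kanister\.kasten\.io/blueprint}'
```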
Finally, use Veeam Kasten to backup and restore the application.
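One way to validate the workflow end to end is a small smoke test: write sample data before taking the backup, then confirm the data reappears after a restore. This is a hypothetical check, not part of the official procedure; the table name smoke_test is arbitrary, and the commands assume the admin password chosen earlier:

```shell
# Before the backup: create a throwaway table with one row
oc exec --namespace postgres dc/postgresql -- \
  env PGPASSWORD=secretpassword psql -U postgres \
  -c "CREATE TABLE IF NOT EXISTS smoke_test (id int); INSERT INTO smoke_test VALUES (1);"

# After a restore: the row should still be present
oc exec --namespace postgres dc/postgresql -- \
  env PGPASSWORD=secretpassword psql -U postgres \
  -c "SELECT count(*) FROM smoke_test;"
```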