dsbackup utility
This page provides instructions for backing up and restoring CDM data using the dsbackup utility.
Back up using the dsbackup utility
Before you can back up CDM data using the dsbackup utility, you must set up a cloud storage container in Google Cloud Storage, Amazon S3, or Azure Blob Storage and configure a Kubernetes secret with the container’s credentials in your CDM deployment. Then, you schedule backups by running the ds-backup.sh script.
Set up cloud storage
Cloud storage setup varies depending on your cloud provider. Expand one of the following sections for provider-specific setup instructions:
Google Cloud
Set up a Google Cloud Storage (GCS) bucket for the DS data backup and configure the forgeops deployment with the credentials for the bucket:
- Create a Google Cloud service account with sufficient privileges to write objects in a GCS bucket. For example, Storage Object Creator.
- Add a key to the service account, and then download the JSON file containing the new key.
- Configure a multi-region GCS bucket for storing DS backups:
  - Create a new bucket, or identify an existing bucket to use.
  - Note the bucket’s Link for gsutil value.
  - Grant permissions on the bucket to the service account you created earlier.
- Make sure your current Kubernetes context references the CDM cluster and the namespace in which the DS pod is running.
- Create the cloud-storage-credentials secret that contains credentials to manage backup on cloud storage. The DS pods use these when performing backups. For my-sa-credential.json, specify the JSON file containing the service account’s key:
  $ kubectl create secret generic cloud-storage-credentials \
      --from-file=GOOGLE_CREDENTIALS_JSON=/path/to/my-sa-credential.json
- Restart the pods that perform backups so that DS can obtain the credentials needed to write to the backup location:
  $ kubectl delete pods ds-cts-2
  $ kubectl delete pods ds-idrepo-2
After the pods have restarted, you can schedule backups.
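The service account and bucket setup above can be sketched with the gcloud and gsutil CLIs. This is a minimal sketch, not the only way to do it; the project ID, service account name, and bucket name are placeholders you must replace:

```shell
# Assumption: gcloud and gsutil are installed and authenticated, and
# my-backup-project, ds-backup-sa, and my-backup-bucket are placeholders.

# Create a service account for DS backups.
gcloud iam service-accounts create ds-backup-sa \
    --project my-backup-project \
    --display-name "DS backup writer"

# Download a JSON key for the service account.
gcloud iam service-accounts keys create /path/to/my-sa-credential.json \
    --iam-account ds-backup-sa@my-backup-project.iam.gserviceaccount.com

# Create a multi-region bucket and grant the service account write access.
gsutil mb -l us gs://my-backup-bucket
gsutil iam ch \
    serviceAccount:ds-backup-sa@my-backup-project.iam.gserviceaccount.com:roles/storage.objectCreator \
    gs://my-backup-bucket
```

The Storage Object Creator role (roles/storage.objectCreator) grants only object writes, which is the minimum the backup task needs.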
AWS
Set up an S3 bucket for the DS data backup and configure the forgeops deployment with the credentials for the bucket:
- Create or identify an existing S3 bucket for storing the DS data backup and note the S3 link of the bucket.
- Make sure your current Kubernetes context references the CDM cluster and the namespace in which the DS pod is running.
- Create the cloud-storage-credentials secret that contains credentials to manage backup on cloud storage. The DS pods use these when performing backups:
  $ kubectl create secret generic cloud-storage-credentials \
      --from-literal=AWS_ACCESS_KEY_ID=my-access-key \
      --from-literal=AWS_SECRET_ACCESS_KEY=my-secret-access-key \
      --from-literal=AWS_REGION=my-region
- Restart the pods that perform backups so that DS can obtain the credentials needed to write to the backup location:
  $ kubectl delete pods ds-cts-2
  $ kubectl delete pods ds-idrepo-2
After the pods have restarted, you can schedule backups.
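If you need to create the bucket and credentials from scratch, the steps above can be sketched with the AWS CLI. The bucket, region, and user names are placeholders, and the broad managed policy is shown only for brevity:

```shell
# Assumption: the AWS CLI is installed and configured; my-backup-bucket,
# my-region, and ds-backup-user are placeholders.

# Create the S3 bucket for DS backups.
aws s3 mb s3://my-backup-bucket --region my-region

# Create an IAM user for backups and generate an access key for it.
aws iam create-user --user-name ds-backup-user
aws iam create-access-key --user-name ds-backup-user

# Grant the user write access to S3. AmazonS3FullAccess is shown for
# brevity; a narrower bucket-scoped policy is preferable in production.
aws iam attach-user-policy --user-name ds-backup-user \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```

The access key ID and secret returned by create-access-key are the values you supply to the cloud-storage-credentials secret.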
Azure
Set up an Azure Blob Storage container for the DS data backup and configure the forgeops deployment with the credentials for the container:
- Create or identify an existing Azure Blob Storage container for the DS data backup. For more information on how to create and use Azure Blob Storage, refer to Quickstart: Create, download, and list blobs with Azure CLI.
- Log in to Azure Container Registry:
  $ az acr login --name my-acr-name
- Get the full Azure Container Registry ID:
  $ ACR_ID=$(az acr show --name my-acr-name --query id | tr -d '"')
  With the full registry ID, you can connect to a container registry even if you are logged in to a different Azure subscription.
- Add permissions to connect your AKS cluster to the container registry:
  $ az aks update --name my-aks-cluster-name --resource-group my-cluster-resource-group --attach-acr $ACR_ID
- Make sure your current Kubernetes context references the CDM cluster and the namespace in which the DS pod is running.
- Create the cloud-storage-credentials secret that contains credentials to manage backup on cloud storage. The DS pods use these when performing backups:
  - Get the name and access key of the Azure storage account for your storage container.
  - Create the cloud-storage-credentials secret:
    $ kubectl create secret generic cloud-storage-credentials \
        --from-literal=AZURE_STORAGE_ACCOUNT_NAME=my-storage-account-name \
        --from-literal=AZURE_ACCOUNT_KEY=my-storage-account-access-key
- Restart the pods that perform backups so that DS can obtain the credentials needed to write to the backup location:
  $ kubectl delete pods ds-cts-2
  $ kubectl delete pods ds-idrepo-2
After the pods have restarted, you can schedule backups.
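Looking up the storage account name and access key for the secret can be sketched with the Azure CLI. The account and resource group names here are placeholders:

```shell
# Assumption: the Azure CLI is installed and logged in;
# my-storage-account-name and my-resource-group are placeholders.

# Confirm the storage account name.
az storage account show --name my-storage-account-name \
    --resource-group my-resource-group --query name --output tsv

# Retrieve the first access key for the storage account.
az storage account keys list --account-name my-storage-account-name \
    --resource-group my-resource-group --query "[0].value" --output tsv
```

Use the returned key as the AZURE_ACCOUNT_KEY value when you create the cloud-storage-credentials secret.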
Schedule backups
- Make sure you’ve set up cloud storage for your cloud provider platform.
- Make sure your current Kubernetes context references the CDM cluster and the namespace in which the DS pod is running.
- Make sure you’ve backed up and saved the shared master key and TLS key for the CDM deployment.
- Set variable values in the /path/to/forgeops/bin/ds-backup.sh script:
HOSTS (default: ds-idrepo-2)
  The ds-idrepo or ds-cts replica or replicas to back up. Specify a comma-separated list to back up more than one replica. For example, to back up the ds-idrepo-2 and ds-cts-2 replicas, specify ds-idrepo-2,ds-cts-2.
BACKUP_SCHEDULE_IDREPO (default: on the hour and half hour)
  How often to run backups of the ds-idrepo directory. Specify using cron job format.
BACKUP_DIRECTORY_IDREPO (default: n/a)
  Where the ds-idrepo directory is backed up. Specify:
  - gs://bucket/path to back up to Google Cloud Storage
  - s3://bucket/path to back up to Amazon S3
  - az://container/path to back up to Azure Blob Storage
BACKUP_SCHEDULE_CTS (default: on the hour and half hour)
  How often to run backups of the ds-cts directory. Specify using cron job format.
BACKUP_DIRECTORY_CTS (default: n/a)
  Where the ds-cts directory is backed up. Specify:
  - gs://bucket/path to back up to Google Cloud Storage
  - s3://bucket/path to back up to Amazon S3
  - az://container/path to back up to Azure Blob Storage
- Run the ds-backup.sh create command to schedule backups:
  $ /path/to/forgeops/bin/ds-backup.sh create
  The first backup is a full backup; all subsequent backups are incremental from the previous backup.
  By default, the ds-backup.sh create command configures:
  - The backup task name to be recurringBackupTask
  - The backup tasks to back up all DS backends
  If you want to change either of these defaults, configure variable values in the ds-backup.sh script.
  To cancel a backup schedule, run the ds-backup.sh cancel command.
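For example, the variables described above might be set as follows in ds-backup.sh. These values are illustrative only; the cron expression 0,30 * * * * expresses the default on-the-hour-and-half-hour schedule, and the bucket paths are placeholders:

```shell
# Illustrative ds-backup.sh variable values; adjust hosts, schedules,
# and bucket paths to your deployment.
HOSTS="ds-idrepo-2,ds-cts-2"            # back up both replicas
BACKUP_SCHEDULE_IDREPO="0,30 * * * *"   # on the hour and half hour
BACKUP_DIRECTORY_IDREPO="gs://my-backup-bucket/idrepo"
BACKUP_SCHEDULE_CTS="0,30 * * * *"
BACKUP_DIRECTORY_CTS="gs://my-backup-bucket/cts"
```

Use the s3:// or az:// scheme in the directory variables if you back up to Amazon S3 or Azure Blob Storage instead.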
Restore
This section covers three options to restore data from dsbackup backups:
New CDM using DS backup
Creating new instances from previously backed up DS data is useful when a system disaster occurs or when directory services are lost. In this case, the latest available backup may be older than the replication purge delay. This procedure can also be used to create a test environment using data from a production deployment.
To create new DS instances with data from a previous backup:
- Make sure your current Kubernetes context references the new CDM cluster. Also make sure that the namespace of your Kubernetes context contains the DS pods into which you plan to load data from backup.
- Create Kubernetes secrets containing your cloud storage credentials:
  On Google Cloud:
  $ kubectl create secret generic cloud-storage-credentials \
      --from-file=GOOGLE_CREDENTIALS_JSON=/path/to/my-sa-credential.json
  In this example, for my-sa-credential.json, specify the path and file name of the JSON file containing the Google service account key.
  On AWS:
  $ kubectl create secret generic cloud-storage-credentials \
      --from-literal=AWS_ACCESS_KEY_ID=my-access-key \
      --from-literal=AWS_SECRET_ACCESS_KEY=my-secret-access-key
  On Azure:
  $ kubectl create secret generic cloud-storage-credentials \
      --from-literal=AZURE_STORAGE_ACCOUNT_NAME=my-storage-account-name \
      --from-literal=AZURE_ACCOUNT_KEY=my-storage-account-access-key
- Configure the backup bucket location and enable the automatic restore capability:
  - Change to the directory where your custom base overlay is located, for example:
    $ cd forgeops/kustomize/overlay/small
  - Edit the base.yaml file and set the following parameters:
    - Set the AUTORESTORE_FROM_DSBACKUP parameter to "true". For example:
      AUTORESTORE_FROM_DSBACKUP: "true"
    - Set the DISASTER_RECOVERY_ID parameter to identify that it’s a restored environment. For example:
      DISASTER_RECOVERY_ID: "custom-id"
    - Set the DSBACKUP_DIRECTORY parameter to the location of the backup bucket. For example:
      On Google Cloud: DSBACKUP_DIRECTORY="gs://my-backup-bucket"
      On AWS: DSBACKUP_DIRECTORY="s3://my-backup-bucket"
      On Azure: DSBACKUP_DIRECTORY="az://my-backup-bucket"
When the platform is deployed, new DS pods are created, and the data is automatically restored from the most recent backup available in the cloud storage bucket you specified.
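Taken together, the three settings above might look like this in base.yaml. This is a sketch only; the recovery ID and bucket URI are placeholders, and the exact surrounding structure of your base.yaml may differ:

```yaml
# Sketch of the relevant base.yaml settings; values are placeholders.
AUTORESTORE_FROM_DSBACKUP: "true"
DISASTER_RECOVERY_ID: "custom-id"
DSBACKUP_DIRECTORY: "gs://my-backup-bucket"
```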
To verify that the data has been restored:
- Use the IDM UI or platform UI.
- Review the logs for the DS pods' init container. For example:
  $ kubectl logs --container init ds-idrepo-0
Restore all DS directories from local backup
To restore all the DS directories in your CDM deployment from locally stored backup:
- Delete all the PVCs attached to DS pods using the kubectl delete pvc command.
  Because PVCs might not get deleted immediately when the pods to which they’re attached are running, stop the DS pods.
  Using separate terminal windows, stop every DS pod using the kubectl delete pod command. This deletes the pods and their attached PVCs.
  Kubernetes automatically restarts the DS pods after you delete them. The automatic restore feature of CDM recreates the PVCs as the pods restart by retrieving backup data from cloud storage and restoring the DS directories from the latest backup.
- After the DS pods come up, restart IDM pods to reconnect IDM to the restored PVCs:
  - List all the pods in the CDM namespace.
  - Delete all the pods running IDM.
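The delete-and-restore sequence above can be sketched as follows. The PVC and pod names assume a three-replica CDM deployment, and the IDM label selector is an assumption; verify both against your own cluster first:

```shell
# Delete the DS PVCs (names assumed; list yours with 'kubectl get pvc').
kubectl delete pvc data-ds-idrepo-0 data-ds-idrepo-1 data-ds-idrepo-2
kubectl delete pvc data-ds-cts-0 data-ds-cts-1 data-ds-cts-2

# Stop the DS pods so the PVC deletions can complete. Kubernetes restarts
# the pods, and automatic restore recreates the PVCs from cloud storage.
kubectl delete pod ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
kubectl delete pod ds-cts-0 ds-cts-1 ds-cts-2

# After the DS pods are ready, restart the IDM pods.
kubectl get pods
kubectl delete pod --selector app=idm   # selector assumed; verify your labels
```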
Restore one DS directory
In a CDM deployment with automatic restore enabled, you can recover a failed DS pod if the latest backup is within the replication purge delay:
- Delete the PVC attached to the failed DS pod using the kubectl delete pvc command.
  Because the PVC might not get deleted immediately if the attached pod is running, stop the failed DS pod.
  In another terminal window, stop the failed DS pod using the kubectl delete pod command. This deletes the pod and its attached PVC.
  Kubernetes automatically restarts the DS pod after you delete it. The automatic restore feature of CDM recreates the PVC as the pod restarts by retrieving backup data from cloud storage and restoring the DS directory from the latest backup.
- If the DS instance you restored was the ds-idrepo instance, restart IDM pods to reconnect IDM to the restored PVC:
  - List all the pods in the CDM namespace.
  - Delete all the pods running IDM.
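For a single failed replica, the procedure above might look like this. The pod and PVC names and the IDM label selector are assumptions for illustration:

```shell
# Example for a failed ds-idrepo-1 pod; names are assumptions.
kubectl delete pvc data-ds-idrepo-1   # PVC name assumed; check 'kubectl get pvc'
kubectl delete pod ds-idrepo-1        # run in another terminal if the PVC deletion hangs

# Watch the pod restart and restore from the latest backup.
kubectl get pods --watch

# Because ds-idrepo was restored, restart the IDM pods.
kubectl delete pod --selector app=idm  # selector assumed; verify your labels
```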
For information about manually restoring DS where the latest available backup is older than the replication purge delay, refer to the Restore section in the DS documentation.
Best practices for restoring directories
- Use a backup newer than the last replication purge.
- When you restore a DS replica using backups older than the purge delay, that replica can no longer participate in replication. Reinitialize the replica to restore the replication topology.
- If the available backups are older than the purge delay, initialize the DS replica from an up-to-date master instance. For more information on how to initialize a replica, refer to Manual Initialization in the DS documentation.