Local SQL server backup and restore procedures
1. Backing up
In this section you will find the process for backing up the MSSQL DB that is running inside a docker container for the Lakehouse Optimizer.
1. Stop the containers
First and foremost you’ll need to stop the app and the database:
$ docker compose down
You should see a message similar to
[+] Running 3/3
⠿ Container lakehouse Removed 1.1s
⠿ Container bplm Removed 0.4s
⠿ Network centos_bplm_net Removed
This tells you that the Lakehouse Optimizer and the DB are down.
2. Locate the MSSQL data
Now you will need to identify the path on the host system where your MSSQL data resides. Do this by using the following approach:
$ sudo -i
# docker volume ls
DRIVER VOLUME NAME
local centos_mssql_data
local centos_spring_session
# docker volume inspect centos_mssql_data
[
{
"CreatedAt": "2023-03-15T07:27:39Z",
"Driver": "local",
"Labels": {
"com.docker.compose.project": "centos",
"com.docker.compose.version": "2.16.0",
"com.docker.compose.volume": "mssql_data"
},
"Mountpoint": "/var/lib/docker/volumes/centos_mssql_data/_data",
"Name": "centos_mssql_data",
"Options": null,
"Scope": "local"
}
]
# cd /var/lib/docker/volumes/centos_mssql_data/_data
# ls -la
total 8
drwxrwx---. 7 root root 73 Apr 5 12:36 .
drwx-----x. 3 root root 19 Mar 15 07:27 ..
drwxr-xr-x. 5 10001 root 74 Mar 15 07:27 .system
drwxr-xr-x. 2 10001 root 24 Apr 5 12:36 backup
drwxr-xr-x. 2 10001 root 4096 Mar 15 07:27 data
drwxr-xr-x. 2 10001 root 4096 Apr 6 10:10 log
drwxr-xr-x. 2 10001 root 25 Mar 15 07:27 secrets
3. Archive the data
At this point, you can create a backup archive of this entire directory using the tar command.
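The original script is not shown here, so the exact flags are an assumption; a minimal sketch, run from the privileged session, could be:

```shell
# Run as root (we are still in the privileged session).
# Archive the entire _data directory into the centos home folder.
cd /var/lib/docker/volumes/centos_mssql_data/_data
tar -czf ~centos/mssql-backup.tgz .
```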
Note: in my script above I have saved the tgz file in the centos home folder, as ~centos/mssql-backup.tgz.
Now you need to change the owner of the file so that your user (in my case, centos) can access the file.
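For example, assuming the centos user and the path used above:

```shell
# Run as root; the archive was created by root, so hand it to your user.
chown centos:centos ~centos/mssql-backup.tgz
```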
4. Start the containers
You can now leave the privileged session using the exit command and restart the Lakehouse containers.
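A sketch, assuming the compose project lives in the directory where you ran docker compose down earlier:

```shell
exit                   # leave the privileged (root) session
docker compose up -d   # start the DB and the Lakehouse Optimizer in the background
```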
The output should look similar to this:
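Illustrative only (container names taken from the shutdown output earlier; order and timings will differ):

```
[+] Running 3/3
 ⠿ Network centos_bplm_net  Created
 ⠿ Container bplm           Started
 ⠿ Container lakehouse      Started
```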
You can check that the containers are up and running with the following command:
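Typically:

```shell
docker ps   # both containers should show an "Up ..." status
```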
5. Use scp to download the data
You can use scp to copy the tgz backup to your local machine and from there upload it to S3 or any other cloud storage solution you use.
For example, in my case:
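A sketch with a placeholder hostname (replace <server> with your server's address):

```shell
# Run on your local machine.
scp centos@<server>:~/mssql-backup.tgz .
```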
This will copy the archive to my local computer from which point I can upload it to S3.
2. Restore
In this section you will find the process for using the backup created in the first section to restore the MSSQL DB that’s running inside a docker container for the Lakehouse Optimizer.
Assumptions
I’m assuming that you have the mssql-backup.tgz file on your server under the centos user home folder. If not, you need to use scp to copy it from your local machine, so on your local machine you run this:
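A sketch, assuming the same centos user and a placeholder hostname:

```shell
# Run on your local machine; <server> is a placeholder for your server's address.
scp ./mssql-backup.tgz centos@<server>:~/
```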
1. Stop the containers
First and foremost you’ll need to stop the app and the database:
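As in the backup section:

```shell
docker compose down
```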
You should see the same "Removed" messages as in the backup section, telling you that the Lakehouse Optimizer and the DB are down.
2. Locate the MSSQL data
Now you will need to identify the path on the host system where your MSSQL data resides. Do this by using the following approach:
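The commands mirror step 2 of the backup section:

```shell
sudo -i
docker volume inspect centos_mssql_data   # note the "Mountpoint" value
cd /var/lib/docker/volumes/centos_mssql_data/_data
```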
3. Clean the _data dir
You can either remove all of the data or, for extra precaution, move it into another directory of your choosing.
3.1 Deleting the data
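A sketch; the recursive delete removes dotfiles such as .system too, so confirm your working directory first:

```shell
# DANGER: deletes everything in the current directory.
# Make sure pwd prints the _data mountpoint before running this.
pwd
rm -rf ./* ./.[!.]*
```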
3.2 Moving the data
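One possible layout, creating a bak directory next to _data inside the volume folder (this is the bak folder referred to in step 6):

```shell
# Run from inside the _data directory.
mkdir ../bak
mv ./* ./.[!.]* ../bak/
```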
4. Restoring the data
With the destination directory empty, it is now time to bring in the backup data.
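Assuming the archive was created from inside _data (as in the backup section), the restore could look like:

```shell
# Run as root, from anywhere; -C switches into the target directory first.
tar -xzf ~centos/mssql-backup.tgz -C /var/lib/docker/volumes/centos_mssql_data/_data
```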
As you can see I used the tar command to extract the contents of mssql-backup.tgz into the _data directory (using the -C flag).
Now you can check that the data was correctly restored using the following approach:
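For instance:

```shell
ls -la /var/lib/docker/volumes/centos_mssql_data/_data
# Expect the same .system, backup, data, log and secrets entries
# as in the listing from the backup section.
```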
5. Starting the containers
You can now exit the privileged bash session and start the containers.
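As before:

```shell
exit
docker compose up -d
```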
Give it 2 minutes, during which time you can check that no restarts happen using docker ps. After 2 minutes without restarts you can check the app logs to see that the app started.
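A sketch, using the container name from the earlier output (bplm here; yours may differ):

```shell
docker ps            # the STATUS column should keep reading "Up ..." without resets
docker logs -f bplm  # watch the application logs for a clean startup
```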
6. Clean up of the bak folder
If in step 3 you chose to create the bak folder, it is now safe to delete it.
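Assuming bak was created next to _data as in step 3.2:

```shell
rm -rf /var/lib/docker/volumes/centos_mssql_data/bak
```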