Backup Azure Database for MySQL to a Blob Storage
Published Aug 14 2019 10:32 AM
Microsoft
Customers often want to back up Azure Database for MySQL to Blob Storage. The mysqldump utility can't write its output file directly to Blob Storage, so in this post I will explain how this can be done using Azure Files.

 

1- Navigate to your Azure Database for MySQL server in the portal and run Azure Cloud Shell (Bash). If you run it for the first time, it will ask you to create a storage account and will mount an Azure file share in it.

2- Type df in the Cloud Shell and collect the Azure Files path (see the example after step 3)

3- Change directory to the cloud drive using the cd command
     in the example below I used cd /usr/bashar/clouddrive
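Steps 2 and 3 together look roughly like the following sketch; the storage account, file share name, and mount path are illustrative and will differ on your account:

# step 2: find the Azure Files mount that backs clouddrive
df -h | grep clouddrive
# //cs47e4f0dddd931x4619xbf7.file.core.windows.net/...   6.0G   ...   /usr/bashar/clouddrive

# step 3: change into that directory
cd /usr/bashar/clouddrive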

4- Now that you are in that directory, run the mysqldump command to extract the backup dump

[Screenshot: mysqldump.png — running mysqldump in Cloud Shell]
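The command in the screenshot is along the lines of the following; the server name, admin user, and database name are placeholders you would replace with your own:

mysqldump -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p testdb > testdb_backup.sql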

5- The backup file is now ready. In this example it is stored under the file share "cs47e4f0dddd931x4619xbf7"; navigate to that file share in the Azure portal, as in the following screenshot.

[Screenshot: file system.jpg — the file share shown in the Azure portal]
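If you prefer the CLI to the portal, you can also list the contents of that file share directly from Cloud Shell; the storage account name below is a placeholder, and you may need to supply the account key as well:

az storage file list --account-name mycloudshellstorage --share-name cs47e4f0dddd931x4619xbf7 --output table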

6- Download the backup file if needed, or move it to Blob Storage using the AzCopy utility (see the example after the screenshot)

[Screenshot: MySQLbackup.jpg — the backup file listed in the file share]
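For step 6, the AzCopy command would look roughly like this; the storage account, container, and SAS token are placeholders:

azcopy copy './testdb_backup.sql' 'https://mystorageaccount.blob.core.windows.net/mysqlbackups/testdb_backup.sql?<SAS-token>'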

 

Note: this technique leverages the Cloud Shell storage. If you are interested in extracting the dump to another Blob Storage account, please check the steps here: https://techcommunity.microsoft.com/t5/azure-database-for-mysql/steps-to-automate-backups-of-your-az...

11 Comments
Copper Contributor

Backup Azure Database for MySQL

 

--- back up a single database:
mysqldump -Fc -v -h XXXXXXX.mysql.database.azure.com -u XXXXXXX -p -d databaseName1 > databaseName1_backup.sql

 

--- back up multiple databases:

mysqldump -Fc -v -h XXXXXXX.mysql.database.azure.com -u XXXXXXX -p  --databases db1 db2 db3  > databases-backup.sql


--- back up all databases:


mysqldump -Fc -v -h XXXXXXX.mysql.database.azure.com -u XXXXXXX -p --all-databases > all_databases_backup.sql


If you see the error "mysqldump: 1044 Access denied when using LOCK TABLES" or
"Access denied for user ''@'%' to database 'mysql' when using LOCK TABLES"

 

A quick workaround is to pass the --single-transaction option to mysqldump:


mysqldump -Fc -v -h XXXXXXX.mysql.database.azure.com -u XXXXXXX -p --single-transaction --all-databases > all_databases_backup.sql

 

 

ref:

https://docs.microsoft.com/en-us/azure/mysql/

 

Copper Contributor

How can we restore it to our new PaaS database?

Microsoft

@Ops_Biren, thanks for your comment. Restoring from Azure Blob Storage should be similar:

 

1- Navigate to your Azure Database for MySQL server in the portal and run Azure Cloud Shell (Bash). If you run it for the first time, it will ask you to create a storage account and will mount an Azure file share in it.

2- Type df in the Cloud Shell and collect the Azure Files path

3- Change directory to the cloud drive using the cd command
     in the example below I used cd /usr/bashar/clouddrive

4- Now that you are in that directory, run the mysql command to restore the dump,
example: mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p testdb < testdb_backup.sql
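Note that if the target database does not exist yet on the new server, create it first before loading the dump; reusing the example names above, a quick way to do that is:

mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -e "CREATE DATABASE testdb;"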

ref: https://docs.microsoft.com/en-us/azure/mysql/concepts-migrate-dump-restore#restore-your-mysql-databa...


Copper Contributor

Thank you for the useful article. It looks like this method leverages the cloud storage that is created when you run code from Azure Cloud Shell. I'd like to run this remotely from something like Azure Automation on a regular schedule. Is there a way to use this same method without leveraging Azure Cloud Shell storage?

Copper Contributor

@Bashar-MSFT, please have the image corrected and, @hmikhan, your comment updated.

 

For mysqldump, the -d option is not used to specify the database name. -d (--no-data) means no row information; it is used to take a dump of the database without any data.

 

-B (--databases) is the option that should be used.
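For example, the single-database command above would then look like this (server name and credentials are placeholders, as before):

mysqldump -Fc -v -h XXXXXXX.mysql.database.azure.com -u XXXXXXX -p -B databaseName1 > databaseName1_backup.sql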

Microsoft

Hello @Bashar-MSFT ,

May I know what -Fc and -v are for?

Microsoft
@amritav Please use the command mysqldump --help to learn about all the options.
 
 

 -F, --flush-logs Flush logs file in server before starting dump. Note that if you dump many databases at once (using the option --databases= or --all-databases), the logs will be flushed for each database dumped. The exception is when using --lock-all-tables or --master-data: in this case the logs will be flushed only once, corresponding to the moment all tables are locked. So if you want your dump and the log flush to happen at the same exact moment you should use --lock-all-tables or --master-data with --flush-logs.

 

-c, --complete-insert: Use complete insert statements.

-v, --verbose Print info about the various stages.

Microsoft

@dhruv-dat Thanks, I fixed the screenshot. Much appreciate your feedback.

Microsoft

@Joel Hazelton  Sorry for the delayed response, please take a look at this blog post: 

https://techcommunity.microsoft.com/t5/azure-database-for-mysql/steps-to-automate-backups-of-your-az...

 
 
Copper Contributor

Thank you for the information. I would also suggest zipping the file for a smaller download.
For this post, it could be:

zip -q test_backup test_backup.sql
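Another option, if you want to avoid writing the uncompressed .sql file at all, is to compress while dumping; the server, user, and database names here are placeholders:

mysqldump -h XXXXXXX.mysql.database.azure.com -u XXXXXXX -p testdb | gzip > testdb_backup.sql.gz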
Copper Contributor

Thanks for this info. It was very helpful. I am dumping a database that is 30,813.1MB in size. On my first try it resulted in a 1,023.27MB file, which seems way too small. Upon inspecting and restoring the file on my local instance of MySQL, I see that the dump stopped midway through and didn't export all of the tables. On my second try (with -v, to see what might have happened), I got verbose output but I see no errors. I also see mention of one of the tables that was not included in the resulting .sql file.

Also, the Cloud Shell times out before the dump is complete. How can I stop that from happening? I tried pressing backspace every now and then and I think it's preventing the timeout, but successive runs are not taking more than 20 minutes now.
The file share that was created has a quota of 6GiB. If my dump file exceeds that, what will happen?

Edit: I had Azure Storage Explorer open with the file share selected when doing the dump previously. I closed it, wondering if it was doing something that was interrupting the mysqldump. The MySQL dump took much longer the next time, over an hour, and pressing backspace every now and then seemed to delay the Cloud Shell timeout, but it finally timed out on me anyway after running for a total of 1 hour, 37 minutes. I will check Azure Storage Explorer for the results tomorrow morning.

 

Edit2: The resulting file is still too small: 1,023.19MiB. It's like there is a 1GB limit. The backup does not contain anywhere near all of the data that it should.
