Thanks for this info; it was very helpful. I am dumping a database that is 30,813.1MB in size. My first try produced a 1,023.27MB file, which seems way too small. After restoring the file on my local MySQL instance and inspecting it, I can see that the dump stopped midway through and didn't export all of the tables. On my second try (with -v, to see what might be happening), I got verbose output but no errors, and the verbose output even mentions one of the tables that is missing from the resulting .sql file.
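For reference, this is roughly the invocation I'm running (server, user, and database names are placeholders), with an exit-status check added to see whether mysqldump itself reports a failure:

```bash
# Placeholder server/user/db names; output goes to the CloudShell file share mount
mysqldump -v -h myserver.mysql.database.azure.com -u myadmin@myserver -p mydatabase \
  > ~/clouddrive/mydatabase-dump.sql
echo "mysqldump exited with $?"   # non-zero would indicate the dump was cut short
```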
Also, CloudShell times out before the dump completes. How can I stop that from happening? I tried pressing backspace every now and then, and I think it's preventing the timeout, but successive runs are not taking more than 20 minutes now anyway.
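To avoid babysitting the terminal, I tried automating the keystrokes with a background loop like the one below. I don't know whether CloudShell's idle timeout actually counts terminal output as activity, so this is just a sketch of what I attempted:

```bash
# Hypothetical keep-alive: print a dot every 60s so the session sees activity,
# instead of me pressing backspace by hand
while true; do printf '.'; sleep 60; done &
KEEPALIVE_PID=$!

# Same placeholder mysqldump command as above
mysqldump -v -h myserver.mysql.database.azure.com -u myadmin@myserver -p mydatabase \
  > ~/clouddrive/mydatabase-dump.sql

kill "$KEEPALIVE_PID"
```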
The file share that was created has a quota of 6GiB. If my dump file exceeds that, what will happen?
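If the quota turns out to be the ceiling, I assume I could raise it before retrying with something like this (share and storage account names are placeholders):

```bash
# Raise the file share quota from 6 GiB to 50 GiB (names are placeholders)
az storage share update \
  --name cs-myfileshare \
  --account-name mystorageaccount \
  --quota 50
```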
Edit: I previously had Azure Storage Explorer open with the file share selected while doing the dump. I closed it, wondering whether it was somehow interrupting mysqldump. The next dump took much longer: over an hour, with the occasional backspace press holding off the CloudShell timeout. CloudShell finally timed out on me anyway after the dump had run for a total of 1 hour, 37 minutes. I will check Azure Storage Explorer for the results tomorrow morning.
Edit2: The resulting file is still too small: 1,023.19MiB. It's as if there is a 1GB limit. The backup contains nowhere near all of the data that it should.
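To quantify how much is missing, I'm comparing the number of tables in the dump against what the server reports (credentials and database name are placeholders again):

```bash
# Tables that actually made it into the dump file
grep -c '^CREATE TABLE' ~/clouddrive/mydatabase-dump.sql

# Tables the server says the database contains
mysql -h myserver.mysql.database.azure.com -u myadmin@myserver -p -N -e \
  "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'mydatabase';"
```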