Error: Azure Backup fails every day and I cannot decipher the error message:


<Data Name="StopInfo"><?xml version="1.0"?> <CBJob><JobId>9b92fbdb-b0c2-4604-b476-f9cb03f87250</JobId><JobType>Backup</JobType><JobStatus><JobState>Aborted</JobState><StartFileTime>131387904084290314</StartFileTime><EndFileTime>131387904159914728</EndFileTime><FailedFileLog></FailedFileLog><ErrorInfo><ErrorCode>100075</ErrorCode><DetailedErrorCode>-2137454110</DetailedErrorCode><ErrorParamList><CBErrorParam><Name>DLS_ERROR_CODE_NAME</Name><Value>100075</Value></CBErrorParam></ErrorParamList></ErrorInfo><DatasourceStatus><CBDatasourceStatus><JobState>Aborted</JobState><LastCompletedJobState>SnapshotVolumes</LastCompletedJobState><ErrorInfo><ErrorCode>100075</ErrorCode><DetailedErrorCode>-2137454110</DetailedErrorCode><ErrorParamList><CBErrorParam><Name>DLS_ERROR_CODE_NAME</Name><Value>100075</Value></CBErrorParam></ErrorParamList></ErrorInfo><Datasource><DataSourceId>900860664349374303</DataSourceId><DataSourceName>D:\</DataSourceName></Datasource><ByteProgress><Total>0</Total><Changed>0</Changed><Progress>0</Progress><Failed>0</Failed></ByteProgress><FileProgress><CurrentFile></CurrentFile><Total>0</Total><Changed>0</Changed><Progress>0</Progress><Failed>0</Failed></FileProgress><PartialIcRun>false</PartialIcRun></CBDatasourceStatus><CBDatasourceStatus><JobState>Aborted</JobState><LastCompletedJobState>SnapshotVolumes</LastCompletedJobState><ErrorInfo><ErrorCode>100075</ErrorCode><DetailedErrorCode>-2137454110</DetailedErrorCode><ErrorParamList><CBErrorParam><Name>DLS_ERROR_CODE_NAME</Name><Value>100075</Value></CBErrorParam></ErrorParamList></ErrorInfo><Datasource><DataSourceId>900860663008004051</DataSourceId><DataSourceName>F:\</DataSourceName></Datasource><ByteProgress><Total>0</Total><Changed>0</Changed><Progress>0</Progress><Failed>0</Failed></ByteProgress><FileProgress><CurrentFile></CurrentFile><Total>0</Total><Changed>0</Changed><Progress>0</Progress><Failed>0</Failed></FileProgress><PartialIcRun>false</PartialIcRun></CBDatasourceStatus><CBDatasourceStatus>
<JobState>Aborted</JobState><LastCompletedJobState>SnapshotVolumes</LastCompletedJobState><ErrorInfo><ErrorCode>100075</ErrorCode><DetailedErrorCode>-2137454110</DetailedErrorCode><ErrorParamList><CBErrorParam><Name>DLS_ERROR_CODE_NAME</Name><Value>100075</Value></CBErrorParam></ErrorParamList></ErrorInfo><Datasource><DataSourceId>900860664848728336</DataSourceId><DataSourceName>G:\</DataSourceName></Datasource><ByteProgress><Total>0</Total><Changed>0</Changed><Progress>0</Progress><Failed>0</Failed></ByteProgress><FileProgress><CurrentFile></CurrentFile><Total>0</Total><Changed>0</Changed><Progress>0</Progress><Failed>0</Failed></FileProgress><PartialIcRun>false</PartialIcRun></CBDatasourceStatus><CBDatasourceStatus><JobState>Aborted</JobState><LastCompletedJobState>SnapshotVolumes</LastCompletedJobState><ErrorInfo><ErrorCode>100075</ErrorCode><DetailedErrorCode>-2137454110</DetailedErrorCode><ErrorParamList><CBErrorParam><Name>DLS_ERROR_CODE_NAME</Name><Value>100075</Value></CBErrorParam></ErrorParamList></ErrorInfo><Datasource><DataSourceId>900860663771441046</DataSourceId><DataSourceName>H:\</DataSourceName></Datasource><ByteProgress><Total>0</Total><Changed>0</Changed><Progress>0</Progress><Failed>0</Failed></ByteProgress><FileProgress><CurrentFile></CurrentFile><Total>0</Total><Changed>0</Changed><Progress>0</Progress><Failed>0</Failed></FileProgress><PartialIcRun>false</PartialIcRun></CBDatasourceStatus><CBDatasourceStatus><JobState>Aborted</JobState><LastCompletedJobState>SnapshotVolumes</LastCompletedJobState><ErrorInfo><ErrorCode>100075</ErrorCode><DetailedErrorCode>-2137454110</DetailedErrorCode><ErrorParamList><CBErrorParam><Name>DLS_ERROR_CODE_NAME</Name><Value>100075</Value></CBErrorParam></ErrorParamList></ErrorInfo><Datasource><DataSourceId>900860664462944607</DataSourceId><DataSourceName>I:\</DataSourceName></Datasource><ByteProgress><Total>0</Total><Changed>0</Changed><Progress>0</Progress><Failed>0</Failed></ByteProgress><FileProgress><CurrentFile
></CurrentFile><Total>0</Total><Changed>0</Changed><Progress>0</Progress><Failed>0</Failed></FileProgress><PartialIcRun>false</PartialIcRun></CBDatasourceStatus></DatasourceStatus></JobStatus><IsRetried>false</IsRetried></CBJob></Data>
</EventData>
</Event>

3 Replies

Hello Graham,

 

Going through the error message associated with code 100075, it seems the backup failed because the snapshot operation for one or more of the volumes selected for backup could not be initiated.
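If it helps to decode the raw payload, here is a minimal Python sketch (standard library only). The sample below is trimmed from the event data in the question to one volume; it pulls out the per-volume error code and converts the signed `DetailedErrorCode` into the HRESULT-style hex form that error lookups usually expect:

```python
import xml.etree.ElementTree as ET

# A trimmed, illustrative sample of the <CBJob> payload from the event above.
payload = """<CBJob><JobId>9b92fbdb-b0c2-4604-b476-f9cb03f87250</JobId>
<JobType>Backup</JobType><JobStatus><JobState>Aborted</JobState>
<DatasourceStatus><CBDatasourceStatus><JobState>Aborted</JobState>
<ErrorInfo><ErrorCode>100075</ErrorCode>
<DetailedErrorCode>-2137454110</DetailedErrorCode></ErrorInfo>
<Datasource><DataSourceName>D:\\</DataSourceName></Datasource>
</CBDatasourceStatus></DatasourceStatus></JobStatus></CBJob>"""

root = ET.fromstring(payload)
volumes = []
for ds in root.iter("CBDatasourceStatus"):
    name = ds.findtext("Datasource/DataSourceName")
    code = ds.findtext("ErrorInfo/ErrorCode")
    # The event stores the detailed code as a signed 32-bit int; mask it
    # to get the unsigned hex value you can search on.
    detail = int(ds.findtext("ErrorInfo/DetailedErrorCode")) & 0xFFFFFFFF
    volumes.append((name, code, f"0x{detail:08X}"))

print(volumes)  # → [('D:\\', '100075', '0x809909E2')]
```

Pointed at the full payload instead of this sample, it lists every failed volume (D:\, F:\, G:\, H:\, I:\) with the same codes.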

Can you please ensure that the volume is not on a VHD that is contained in another VHD? Also, ensure that files located on a VHD are not protected at the same time as files on the volume containing that VHD.

If a volume is on a VHD that is contained in another VHD, that volume is nested too deeply to participate in the VSS operation. Possible reasons for this error include the following:

Trying to create a shadow copy of a volume that resides on a VHD that is contained in another VHD.

Trying to create a shadow copy of a VHD volume when the volume that contains the VHD is also in the same shadow copy set.

Let us know if you are still facing any issues.

 

Thanks,

Maanas Saran

Graham  -

 

This error translates to: VSS failed to add one or more volumes to the snapshot set when taking the snapshot during pre-backup.

 

I have seen this when someone was backing up several volumes and one of them was later removed; most often it involves a volume that had been presented as a mount point and was then removed.


Standard troubleshooting is a good start here:

1. Check your scratch location to make sure it has plenty of free space.

2. Make sure that you are on the latest agent version http://aka.ms/azurebackup_agent

3. At the time of the failure, look in the System and Application event logs for any errors or warnings. Pay special attention to VSS events.
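For step 1, a quick cross-check helper (a minimal sketch: pass your own scratch folder, and note the default threshold here is illustrative, not an official Azure Backup requirement):

```python
import shutil

def scratch_space_ok(path, min_free_gib=10.0):
    """Return (free_gib, ok) for the folder at `path`.

    `min_free_gib` is an illustrative threshold, not an official figure --
    size it to your backup working set.
    """
    free_gib = shutil.disk_usage(path).free / 1024**3
    return free_gib, free_gib >= min_free_gib

# e.g. on a default install the agent's cache folder is typically
# scratch_space_ok(r"C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch")
```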

 

I hope this helps point you in a good direction.

 

Chris

Open an elevated command prompt and run... "vssadmin list writers"

You are looking for each writer to be - State: [1] Stable

You will typically see State 5, 7, or 11 on writers with issues.

Anything other than a [1] on the relevant writers will block the process. A server reboot does not usually fix the issue, though it sometimes can.

The easiest fix is to restart the particular service associated with that writer. There are lists on the web showing which service is tied to which writer.

One that is hard to track down is the NTDS writer being in an errored state. Restart the COM services for that one.

Keep checking until all writers (or as many as possible) are in a stable state, then run your backup.
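If you have a lot of writers to eyeball, a rough sketch like this can flag the unhealthy ones (the sample text mirrors the usual `vssadmin list writers` output format; it is illustrative, not captured from this server):

```python
import re

# Illustrative sample shaped like typical `vssadmin list writers` output.
sample = """Writer name: 'System Writer'
   Writer Id: {e8132975-6f93-4464-a53e-1050253ae220}
   State: [1] Stable
   Last error: No error
Writer name: 'NTDS'
   Writer Id: {b2014c9e-8711-4c5c-a5a9-3cf384484757}
   State: [11] Failed
   Last error: Non-retryable error
"""

def unstable_writers(text):
    """Return (writer_name, state) pairs for writers not in '[1] Stable'."""
    bad = []
    for block in re.split(r"(?=Writer name:)", text):
        name = re.search(r"Writer name: '([^']+)'", block)
        state = re.search(r"State: \[(\d+)\] (\w+)", block)
        if name and state and state.group(1) != "1":
            bad.append((name.group(1), f"[{state.group(1)}] {state.group(2)}"))
    return bad

print(unstable_writers(sample))  # → [('NTDS', '[11] Failed')]
```

Pipe the real output to a file (`vssadmin list writers > writers.txt`) and feed that in; anything it returns is a writer whose service needs a restart before the backup will go through.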

Hope that helps.