
B2D duplication: best practice

Created: 23 Nov 2012 | 8 comments

Hi all,

I have two B2D destinations for my backups. The first B2D folder is on a SAN, in a disk pool separate from the data disk pool, so I can run all my backups in a short time window inside the SAN. All of these backups are then duplicated to the second B2D, which is on a NAS exposed as an SMB share.

Almost everything is fine, except that some of the duplication jobs always fail with errors.

So my question is:

Can I create a single duplication job that duplicates all the backup sets stored on the first storage to the second storage?

Or is it better to create the backup job with a duplicate stage that runs right after the backup finishes?

What I don't understand is that once a backup job fails, the duplicate job never works again, because it keeps looking for the backup sets that the failed backup job never completed.

8 Comments

pkh:

You should add a stage after your backup that duplicates it to the secondary storage. Some users have reported problems when they delay this duplication stage, so run the duplication immediately after the backup.

CraigV:


Normally you would put in a duplicate job that runs after your primary job.

What is the error you are receiving? If it is complaining about duplicating to the SMB share, why not consider iSCSI LUNs presented from the NAS to your backup server?



RobRodrigues:

Hi CraigV and pkh. At the moment I do have a first backup to storage presented over iSCSI, and it runs during the night. The morning after, I run the duplication to the second storage, which is an SMB share. I can change that to iSCSI; would it improve my duplication jobs? I'm also testing running the duplication right after the principal backup job ends, to see if it is more reliable.

The error it complains with is always about not finding the source sets:

SRVSBS01 EXCHANGE BACKUP-Duplicate -- The job failed with the following error: The requested source duplicate backup sets catalog record could not be found.  Perhaps the media containing the source backup sets was previously deleted.

pkh:

As I said earlier, a couple of other users found that their duplicate problems went away after they changed the duplicate stage to run immediately after the backup stage. What you are getting is corruption of the backup history for the duplicate stage.

CraigV:

If you're using iSCSI you don't have to authenticate against a share, so that alone should improve things.

Your error above is referred to in the TN below:



ClarkL:


Have you tried changing the catalog storage setting from "on media" to "on server"? With 2012 the default is to store catalogs on the media rather than on the server's C: volume. That might help.

What is the retention period on your backups to disk? In Backup Exec 2012, data lifecycle management clears out old backups once their retention period has expired. Older versions would wait for a backup that needed to overwrite the files; 2012 actively seeks out expired backups and removes them. Are your backups perhaps being removed before the duplicate runs?
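The timing problem described above can be sketched in a few lines. This is an illustrative model only, not Backup Exec code: the function name and the rule "a set expires its retention period after the backup finishes" are simplifying assumptions to show why a duplicate that runs after expiry finds no source sets.

```python
from datetime import datetime, timedelta

def duplicate_can_run(backup_finished: datetime,
                      duplicate_starts: datetime,
                      retention_days: int) -> bool:
    """Hypothetical check: is the source backup set still on disk
    (i.e. inside its retention window) when the duplicate starts?"""
    expiry = backup_finished + timedelta(days=retention_days)
    return duplicate_starts < expiry

backup_done = datetime(2012, 11, 23, 2, 0)  # backup finishes at 02:00

# Duplicating immediately is always inside the retention window:
assert duplicate_can_run(backup_done, backup_done, retention_days=3)

# A duplicate delayed past the 3-day window has nothing left to copy,
# which surfaces as a "source backup sets ... could not be found" error:
delayed = backup_done + timedelta(days=4)
assert not duplicate_can_run(backup_done, delayed, retention_days=3)
```

In other words, with a short retention period on the primary B2D, any delay between backup and duplicate risks the source sets being groomed away first.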

RobRodrigues:

Hi folks,

So far I have found that running the duplication right after the backup job is a five-star solution.

I believe that my 3-day retention policy for the SAN backup wasn't long enough, even with the daily duplication jobs.

I'm also going to change the NAS connection to iSCSI instead of a CIFS share, but one question:

In BE 2012, if I delete the storage, all the backup jobs and history go down the drain.

Can I just add another storage disk that points at the new iSCSI connection, and then change the storage destination in all the jobs?

pkh:

BE 2012 only allows one storage device per volume, so you would have to delete your old storage before you can define any new storage on it.