
Grandfather - Father - Son

Created: 21 Jan 2013 • Updated: 22 Jan 2013 | 8 comments
This issue has been solved. See solution.

Can anyone show an example of setting up a B2D2T job using the Grandfather-Father-Son methodology in Backup Exec 2012?  It's not as straightforward as I would have hoped, the examples in the user manual aren't very good, and I would like to check that I'm on the right track.

Comments (8)

Backup_Exec:

Hi,

The following thread, and Nick's comment in it, should help with your requirements:

https://www-secure.symantec.com/connect/forums/be2012-template-rules

Thanks

Sameer

Don't forget to give a "Thumbs Up" or mark as "Solution" if someone's advice has helped you.

Jools:

Unfortunately that doesn't help me.  I have 3 backup jobs - daily, weekly and monthly (running at different times) - which back up to disk.  I've then added 3 separate stages which are supposed to run after the different backup jobs, with each stage going to the tape drive but with a different media set.  The staging jobs use the rule 'duplicate data immediately after the source task completes', with the appropriate source job selected.

Is this the correct / best-practice method?  The user guide only shows a G-F-S strategy with backup jobs, not with staging afterwards.  The reason I'm asking is that if a backup job fails for any reason - say the daily job - the staging job stalls and never runs again, i.e. the duplicate to tape for the next daily backup job doesn't run, which means a lot of extra manual checking to ensure that the B2T is still working for all servers.  I also see that the default option for the duplicate stage is 'according to schedule', which I'm loath to use, as I cannot guarantee that all the backup jobs will have completed before the schedule comes around.

Kiran Bandi:

I have 3 backup jobs - daily, weekly and monthly (running at different times)

By scheduling them to run at different times you might face issues like the daily job getting triggered on a day when the weekly is running, or the daily and weekly both starting while the monthly full is also running.  Schedule them to run at exactly the same time with different frequencies, so that BE can prioritize your backups and automatically skip the more frequent backups (e.g. the daily) when they collide with the less frequent ones (e.g. the monthly full).
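To make the skip-on-collision idea concrete, here is a minimal Python sketch of the selection logic Kiran describes - which single GFS job runs on a given day when all three share a start time.  This is illustrative only, not Backup Exec's actual scheduler; the calendar rules assumed here (monthly full on the first Friday, weekly full on other Fridays, daily runs Monday to Thursday) are one common GFS rotation, not something mandated by BE:

```python
from datetime import date

def job_for(day: date) -> str:
    """Pick which GFS job runs on a given day when all three jobs share
    one start time: the least frequent eligible job wins and the more
    frequent ones are skipped rather than colliding with it."""
    if day.weekday() == 4:                 # Friday
        # First Friday of the month -> monthly full ("grandfather"),
        # so the weekly job is skipped instead of running alongside it.
        return "monthly" if day.day <= 7 else "weekly"
    if day.weekday() < 4:                  # Monday to Thursday
        return "daily"
    return "none"                          # weekend: nothing scheduled
```

For example, `job_for(date(2013, 1, 4))` (the first Friday of January 2013) returns `"monthly"`, so no weekly job fires that day - the collision is resolved by frequency rather than by staggered start times.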

....the duplicate/stage job depends entirely upon successful completion of the source job.

i.e. the duplicate to tape for the next daily backup job doesn't run

Even if next daily is completing successfully?

Also I see that the default option for the duplicate staging is 'according to schedule'

Duplicating immediately after the source job completes is fine.  Scheduling by time is a bit more difficult and, as you said, not guaranteed.

Jools:

Sorry, I wasn't very clear - the different jobs are running at the same time, just with the appropriate daily, weekly or monthly date rules, so the logic for one running instead of the other works fine.  (And I do like the improved logic compared to previous BE versions: if the time window of a job overlaps the next job, the next job is blocked - this used to cause all kinds of problems previously.)

And yes, I have a horrible situation here, where after raising a support call I've been told that when using disk pools the jobs do not automatically use the next free resource, so when a disk runs out of space the jobs fail.  The duplicate then never runs, and when the next scheduled time for the backup job comes around, the backup job runs but the duplicate stage does not - it is still stuck with the 'old' scheduled date/time.  This is when I use the staging rule 'duplicate data immediately after the source task completes'.

Colin Weaver:

If you have the initial stages to disk correctly configured for GFS, and set the duplicate jobs to run immediately after the original, then in effect GFS applies to the duplicates as well.  The only extra overhead is configuring appropriate media sets for your retention on tape, versus setting the retention period directly in the job for the original disk-based jobs.
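The tape-side retention Colin describes lives on the media set (its overwrite-protection period) rather than in the duplicate job itself.  A minimal sketch of that idea, with illustrative set names and protection periods that are assumptions for this example, not Backup Exec defaults:

```python
# One media set per GFS tier; retention is the set's
# overwrite-protection period, not a per-job setting.
MEDIA_SETS = {
    "daily":   {"overwrite_protection_days": 7},    # sons
    "weekly":  {"overwrite_protection_days": 31},   # fathers
    "monthly": {"overwrite_protection_days": 365},  # grandfathers
}

def tape_reusable(job_type: str, age_days: int) -> bool:
    """A tape may be overwritten only once the protection period
    of the media set it belongs to has expired."""
    return age_days > MEDIA_SETS[job_type]["overwrite_protection_days"]
```

So a daily ("son") tape written 8 days ago is back in the overwrite pool, while a monthly ("grandfather") tape is protected for the full year - each duplicate stage just needs to target the media set for its tier.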

Oh, and the disk-spanning issue is acknowledged as a defect (as long as GRT sets are not involved*), and when we fix it we are looking at adding an enhancement so that it does not just choose pool storage in order, but picks the device in the pool with the most free space.  No timescale for either the fix or the enhancement currently.

*If GRT is involved then we can never span between storage devices, as the sets have to be directly mounted for the restore operation.

Jools:

Not using GRT.  I'm extremely unhappy with this disk-spanning problem; after spending a lot of money upgrading to BE 2012, it is having a major impact on our system.  Anyway, I'm still looking for an answer to the issue of the tape staging jobs not continuing after a failed backup job, as discussed in my previous post - why doesn't the logic re-schedule the staging in line with the schedule of the next job?  Perhaps this needs to be tested by the developers in order to identify and fix the problem in BE 2012?

Colin Weaver:

If the initial backup job fails, why would you want a duplicate of what might be invalid/corrupt/incomplete data?  As such, that bit might be by design.

Jools:

No, that's not what I want to do - please see my posts.  I want the next scheduled backup job - the one that succeeds - to stage and duplicate off to tape.  That isn't occurring: when a job fails, the staging doesn't occur and it stalls, and at the next backup cycle the staging still doesn't happen.  The staging should occur after a successful job, regardless of the previous job's success or failure.  (Unless it's an incremental backup, where your logic is correct - but I'm performing differential and/or full backups.)
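The behaviour Jools is asking for can be sketched as a per-run trigger rather than a one-shot link to a single source instance.  This is a hypothetical sketch of the *desired* logic, not how BE 2012's duplicate stages actually behave (the thread shows they stall):

```python
def on_source_job_finished(status: str, run_duplicate) -> str:
    """Re-arm the duplicate stage on every source run: duplicate only
    after a successful run, skip failed runs, and stay armed so the
    next successful run still gets staged off to tape."""
    if status == "success":
        run_duplicate()          # stage the B2D set off to tape
        return "duplicated"
    # Failed/cancelled run: skip this cycle (don't duplicate possibly
    # incomplete data), but do not permanently stall the stage.
    return "skipped"
```

With this logic a failed daily run is simply skipped, and the following successful daily run still triggers its duplicate - which is exactly the "regardless of the previous job's success or failure" behaviour requested above.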

SOLUTION