
Unix file system backups with NetBackup

Created: 06 Mar 2013 • Updated: 12 Nov 2013 | 12 comments
This issue has been solved. See solution.

Hello Backup Gurus:

We currently use NetBackup 7.1.0.4 with an AIX master and several AIX and Windows media servers.

On our Unix file system backups, we have ALL_LOCAL_DRIVES selected.  This creates several hundred individual jobs for some of our clients.

I wanted to reach out to this community to see how others are performing their Unix file system backups. We are concerned that if we manually enter /opt, /etc, and so forth, our Unix or application teams may forget to notify us of additional file systems that need to be added, which would make protecting our data a management nightmare.

Any suggestions or simply sharing what you are currently doing would be greatly appreciated!

Thanks!


Comments (12)

Nagalla:

It looks like you have the multistreaming option enabled in the policy attributes?

If you feel these jobs are filling up the job queue and you don't have enough resources, just disable multistreaming,

and only enable it for long-running clients, if you feel that you have enough resources to handle the number of jobs.

ALL_LOCAL_DRIVES is the recommended backup selection, but also make sure you have a proper exclude list in place to keep unnecessary files out of the backup.

This is all about the tuning of your master server.
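For context, the exclude list Nagalla mentions is a plain text file on the Unix client, one path or pattern per line, normally at /usr/openv/netbackup/exclude_list. A minimal sketch (the entries are illustrative examples only, written to a scratch path rather than the real location):

```shell
# Sketch only: write an example exclude list to a scratch path.
# On a real client the file lives at /usr/openv/netbackup/exclude_list.
cat > /tmp/exclude_list.example <<'EOF'
/tmp
/var/tmp
core
*.tmp
EOF
cat /tmp/exclude_list.example
```

Entries support wildcards, so patterns like *.tmp apply across the selection; what belongs in the list depends entirely on your environment.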

 

Marianne:

"several hundred individual jobs for some of our clients"??

Do these clients have several hundred filesystems each? Honestly never seen this in my life...
I have seen several hundred streams/jobs when wildcards were used in Backup Selection with multistreaming enabled (e.g. /filesys/*).

My 2c:

Leave multistreaming enabled; just limit the number of jobs per policy (the default is unlimited).

Another option is to limit Max Jobs per Client. This is a global setting in the Master server's Global Properties.

Supporting Storage Foundation and VCS on Unix and Windows as well as NetBackup on Unix and Windows
Handy NBU Links

wr:

I'm guessing 'client' in this case is 'customer' and not 'machine'.   I could see where, with multistreaming enabled, dozens of hosts would produce hundreds of jobs.

Will Restore -- where there is a Will there is a way

bigdog_40:

Thank you all for your comments.

It is actually a single client, but it backs up /opt, /opt/oracle, /opt/admin, /opt/emc, /opt/java, /opt/Tivoli, /opt/corefiles, etc. all as individual backup jobs.

The backups are completing relatively fast, but I think the number of jobs is impacting overall backup performance and may be leading to some issues that we are having with replication.

wr:

Interesting. I have not seen this behaviour with the ALL_LOCAL_DRIVES directive.

 

On the client, try running bpmount and observe how many filesystems are listed.


Marianne:

Does 'df -h' on the client list all of them as separate filesystems?

If so, you have 3 choices:

1) Micro-manage by manually specifying filesystems and grouping them into the required number of streams, e.g.:

NEW_STREAM
/
/var
/opt
/usr
NEW_STREAM
/opt/oracle
/opt/admin
NEW_STREAM
/opt/emc
/opt/java
NEW_STREAM
/opt/Tivoli
/opt/corefiles
etc

2) Allow multistreaming and limit the number of concurrent jobs as per my previous post.

3) Disable multistreaming as per Nagalla's suggestion.
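If you go the micro-management route in option 1, the NEW_STREAM grouping doesn't have to be typed by hand. A sketch that chunks a mount list into streams of four (the mount list here is a hypothetical sample; on a real client it would come from df or bpmount output):

```shell
# Hypothetical mount list; replace with the client's real filesystems.
mounts="/ /var /opt /usr /opt/oracle /opt/admin /opt/emc /opt/java"

selection=""
i=0
for m in $mounts; do
  # Start a new stream every 4 filesystems.
  [ $((i % 4)) -eq 0 ] && selection="${selection}NEW_STREAM
"
  selection="${selection}${m}
"
  i=$((i + 1))
done
printf '%s' "$selection"
```

The printed list can then be pasted into the policy's Backup Selections; adjust the chunk size of 4 to balance stream sizes against drive count.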

 


bigdog_40:

Marianne,

Unfortunately, it does show all of them as individual file systems. 

The odd thing is that we have 'limit jobs per policy' set, but as soon as the child jobs finish, new jobs start. This continues until there are literally 300-400 (often more) individual jobs per server. If I lower the number of jobs per policy significantly, will it run fewer total jobs, or will it simply do fewer at a time and extend my backup window?

bigdog_40:

To test, I limited the number of jobs per policy to 20, and submitted a full backup.

The parent kicked off and immediately generated 348 different jobs (1 for each data stream). Only 20 are active at a time, while the others remain queued. However, there are still 349 total jobs (the 348 children plus the parent).

Maybe the maximum jobs per client would work better for me?

Any suggestions?

Marianne:

Are there 348 different filesystems on the client? One client in policy or multiple clients?

Are you sure that ALL_LOCAL_DRIVES is in the Backup Selection? Or maybe a path with a wildcard?

The total number of jobs that get generated depends on the policy config and the actual filesystems on the client. We can give the best advice if we can see what you see. Please post the policy config:
bppllist <policy-name> -U 
as well as 'bpmount' output on client.

Limit jobs per policy or per client will merely limit the number of concurrently active jobs.


SOLUTION
watsons:

Let's also presume that you did not check "Follow NFS" and/or "Cross mount points"... otherwise many external filesystems may get included.

Stumpr2:

OK, I am skeptical. There should be a line for every mount in /etc/mtab. Please show the results of:

wc -l /etc/mtab
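For what it's worth, the shape of that check looks like this (the sample file below is hypothetical; on the real client you would run wc -l against /etc/mtab itself):

```shell
# Hypothetical mtab sample; a real client would have one line per mounted filesystem.
cat > /tmp/mtab.sample <<'EOF'
/dev/hd4 / jfs2 rw 0 0
/dev/hd2 /usr jfs2 rw 0 0
/dev/hd9var /var jfs2 rw 0 0
/dev/hd10opt /opt jfs2 rw 0 0
EOF

# One line per mount, so the line count approximates the number of filesystems.
wc -l < /tmp/mtab.sample
```

If that count is anywhere near 348, the job count makes sense; if it is much lower, something other than ALL_LOCAL_DRIVES is generating the streams.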

VERITAS ain't it the truth?

Andrew Madsen:

How many client computers are in your policy?

The above comments are not to be construed as an official stance of the company I work for; hell half the time they are not even an official stance for me.