
Decrease backup time for a large folder

Created: 14 Feb 2011 • Updated: 21 Feb 2014 | 12 comments
This issue has been solved. See solution.

Hi guys,

I'm here again with a problem I need help with.

First, I want to thank everyone who helped me decommission my phantom media server.

Here is my new problem.

We have NetBackup Enterprise 7.0 and a media server connected to a VTL over Fibre Channel, which is supposed to guarantee the best backup times.

We also have a policy to back up all Windows profiles for the whole company. It is a big folder, more than 2 TB, that I have to back up at once.

NetBackup manages it very well, but it takes too long: a day or more.

Right now that is not acceptable for us, and I have to find a solution.

Does anyone have an idea for decreasing this backup time?

12 Comments

Marianne:

Break the Backup Selection up into multiple streams, e.g.:
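The example list was lost from the page; a hypothetical sketch of a multi-stream Backup Selection (folder names are invented for illustration) would be:

NEW_STREAM
/profiles/folderA
NEW_STREAM
/profiles/folderB
NEW_STREAM
/profiles/folderC
NEW_STREAM
/profiles/folderD

Each NEW_STREAM directive starts a separate backup stream, so the four folders above can be backed up simultaneously.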


Remember to select 'Allow multiple data streams' in the policy attributes, and increase Max Jobs per Client in the Master's Global Attributes (a minimum of 4 to allow the simultaneous backups in the example above).
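For reference, the same global attribute can also be raised from the command line on the master server (the path assumes a default Unix install):

/usr/openv/netbackup/bin/admincmd/bpconfig -mj 4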

If MPX is not enabled in schedule and STU config, the above will need one drive per stream. Depending on throughput, you might want to enable MPX as well.

Supporting Storage Foundation and VCS on Unix and Windows as well as NetBackup on Unix and Windows
Handy NBU Links

Raco:

Thanks Marianne, but that does not help me.

The number and names of this folder's subfolders change dynamically.

New subfolders are created and deleted every day, so I cannot use specific folder names.

I have been thinking about using wildcards, but as far as I know wildcards work on Windows clients only, and we are using an NDMP client because all this data is in the SAN.

If no other solution is possible, I'll have to think about moving to a Windows client, but I hope to find another solution.

Any other ideas, pleaseeeeeeeeee?

Marianne:

"using NDMP client because all this data is in the SAN"? I don't understand. NDMP is NAS, not SAN.


Andy Welburn:

We had a large volume backed up via NDMP that was taking just too long. We could not break it down as there were too many sub-folders; as you say, NDMP does not allow wildcards, and there was no possibility of redefining the structure of the volume to facilitate a more "granular" backup. Nor was there anything we could do resource-wise to improve performance.

We had to go the route you are already considering and Marianne is suggesting: back up via an alternate path (in our case an NFS mount on our Solaris master) so that we could use multiple data streams AND wildcards. If you go this route, for reference, our backup selections looked like this:
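The original selection list was lost from the page; a hypothetical reconstruction (the mount point is invented for illustration), splitting the volume alphabetically into streams, would be:

NEW_STREAM
/nfs/nas_vol/[a-iA-I]*
NEW_STREAM
/nfs/nas_vol/[j-rJ-R]*
NEW_STREAM
/nfs/nas_vol/[s-zS-Z0-9]*

Note that the 'Follow NFS' policy attribute must be enabled for a Standard policy to back up NFS-mounted file systems.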


Taken individually, each stream would appear to take longer, but since all streams were being written at the same time, the actual time taken to back up was reduced compared with the single-stream NDMP backup.

Raco:

I'm sorry, you are right.

We have this information on volumes in the NAS.

Searching, I found that I can build an exclude list by using the SET keyword.

I'm studying this topic now, but any suggestions from you would be great.

Raco:

Thanks, Andy.

I will study your suggestion and comment later.

Thanks guys.

Raco:

Well guys,

This is what I did.

I added this path to the NDMP policy's Backup Selections
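The path was lost from the page; judging from the error log below, the selection was:

/vol/VOL_M1_PERFILES_W2K3/[a-dA-D]*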


and ran a manual backup as a test.

I got a status code 99 error:

14/02/2011 17:37:57 - begin writing
14/02/2011 17:37:59 - Error ndmpagent(pid=19944) ndmp_data_start_backup failed, status = 9 (NDMP_ILLEGAL_ARGS_ERR)       
14/02/2011 17:37:59 - Error ndmpagent(pid=19944) NDMP backup failed, path = /vol/VOL_M1_PERFILES_W2K3/[a-dA-D]*       
14/02/2011 17:38:00 - Error bptm(pid=19816) none of the NDMP backups for client adcentral00 completed successfully   
14/02/2011 17:38:00 - end writing; write time: 00:00:03
NDMP backup failure(99)

I keep looking for a solution to this status 99 error.

Marianne:

I think you might have misunderstood Andy's post... He changed his config from an NDMP policy to a Standard policy backing up NFS-mounted NAS volumes.

Extract from NBU NDMP Admin Guide:

The following Backup Selections capabilities are NOT supported for an NDMP policy:
■ Wildcards in pathnames. For example, /home/* is an invalid entry.


Riaan.Badenhorst:


Are you using remote NDMP, or are the VTL drives attached/zoned to the NDMP host? Since you have a VTL, I would hope it's the latter; if not, change it.

What speed are you getting on the job?


Riaan Badenhorst

It's easy :)

Stumpr2:

I have a client for which I set up 10 policies, Policy_0 through Policy_9, to back up the files. I used the telephone keypad as a model to break up the wildcards: each telephone key has both numbers and letters, and I set up the file lists to follow the keypad.
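The original file lists were lost from the page; a hypothetical sketch following the keypad layout (the base path is invented for illustration) would be:

Policy_2:
/data/[2abcABC]*
Policy_3:
/data/[3defDEF]*
Policy_4:
/data/[4ghiGHI]*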





and so on.....

I needed to break them up into separate policies so that I could split up the full backups throughout the week.

Otherwise the 2.5 TB of data would take too long to back up at one time.

VERITAS ain't it the truth?

Raco:

Ok guys,

I still want to keep using the NDMP policy type.

Then I read that it is possible to exclude files and directories from a backup by using SET EXCLUDE.

I included this statement in my policy for a particular extension, as follows.
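The statement was lost from the page; a hypothetical sketch of a Backup Selections list using SET (the .tmp extension is a placeholder, and the volume path is taken from the error log earlier in the thread):

SET EXCLUDE=*.tmp
/vol/VOL_M1_PERFILES_W2K3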



At first it took too long to start writing, and the speed was too low: about 550 KB/sec instead of the regular 23,000 KB/sec we usually get.

Does anyone know the repercussions of using SET EXCLUDE on NDMP backup performance?