
How to determine the best NetBackup fragment size

Created: 10 Sep 2010 • Updated: 11 Dec 2012 | 3 comments
This issue has been solved. See solution.

Hi everybody,

I use NBU 6.5, in some cases writing directly to tape (LTO3) and in other cases backing up to disk first (staging) and then destaging, i.e. writing the data from disk to tape.

My backups use several storage units, and each storage unit has a field called "fragment size". I don't really know what it means, but I think it cuts a big backup (e.g. a 50GB database) into smaller blocks and determines the size of those blocks. If an error occurs during the backup, only one block is affected and only that block has to be replayed, so the smaller the fragment size, the better.

On the other hand I have tapes, and tapes prefer writing big blocks: it is better for backup speed, and finding one block is easier and faster during a restore (the tape library has fewer points to check). But if the one file you need to restore sits at the end of a block, the restore can still take quite a while.

So for this first part, I don't know if I'm right, partially right, wrong, or totally wrong ^^

I have many kinds of data (Oracle databases, MSSQL, files, GED, ...).

I'm not in charge of the backups, but I have to learn about them and analyse them.

Our fragment size configuration is 524,288 MB for the staging disks and 2,048 MB for LTO.

So what is the right choice for these fragment sizes? What do they depend on? How do I determine best practice?
What is your experience with this?

Thanks.

 

Comments (3)

Nicolai:

I got this info from a Symantec Health Check :

Impact: Using too small a fragment size leads to more "files" on the tapes, and may impact performance when
streaming at high speed. Although multiplexing will lead to more fragments at each stream join/exit, it is
recommended to use large fragment sizes.
 
Recommendation: for LTO2 use 2-5GB, for LTO3 use 20GB, and for LTO4 use 50GB.
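As a rough sanity check of what the 20GB recommendation means in practice, here is the back-of-the-envelope fragment count for one tape (assuming the 400GB native LTO3 capacity mentioned later in this thread):

```shell
# Assumption: LTO3 native (uncompressed) capacity of 400GB.
CAPACITY_GB=400
FRAGMENT_GB=20
echo "$((CAPACITY_GB / FRAGMENT_GB)) fragments per full tape"   # prints "20 fragments per full tape"
```

That keeps the number of "files" on each tape low, which is the point of the Health Check advice above.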

We used lower values before and changed our settings to these recommendations.
 

Assumption is the mother of all mess ups.

If this post answered your question, please mark it as a solution.

SOLUTION
programmeheure:

Thanks for your answer.

 

Do you know where your recommendations come from?

I can't find a match with any of the LTO3 numbers:

LTO3 writes at 80 MB/s, and we can write 400 GB uncompressed or 800 GB compressed on one tape.

 

I've also heard that Windows 2003 Server (like my media server) fragments blocks over 64KB???? But I don't have any more information and don't know if I have to take care of that too....

AAlmroth:

The extract that Nicolai provided pretty much nails down the point on fragment sizes. There is no exact formula for fragment sizes; the values are based on experience, and those listed are generally good for most environments.

On the other hand, a lot of current tape technologies support fast block locate, which more or less removes the need for fragments, as NBU can tell the drive to fast forward and locate the exact blocks required for a restore. Using fragments is still a common approach, but it is not really required anymore. In the old days, you could get a restore request where the files needed were in the middle or at the end of an image on tape, and the drive could only fast forward to a file mark, not to a block, meaning NBU would have to read through x GB before actually reaching the data. Hence the use of fragments: we could fast forward to the tape file containing the data and minimize reading of non-relevant data.

Fragment size is not equal to block size!

So, what about block sizes then? In Windows 2003 SP1, Microsoft broke the tape.sys driver and a lot of NBU tuning failed; SP2 fixed it again. You can still encounter drivers that cannot support block sizes larger than 64KB. For LTO3, a block size of 256KB seems to perform best. For LTO4/5 it depends a bit on the vendor and driver, but most seem able to do 512KB, and some 1MB. The larger the buffers, the more throughput, as we need fewer I/O interrupts. Fewer I/O interrupts lead to fewer CPU context switches, and so on...

To tune NBU, create the file SIZE_DATA_BUFFERS in <install path>\Veritas\NetBackup\db\config and on the first line enter the size in bytes (256KB is 262144 bytes). On Linux/UNIX the directory is /usr/openv/netbackup/db/config.
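A minimal sketch of that step, assuming the 256KB block size suggested for LTO3. On a real media server you would write straight into /usr/openv/netbackup/db/config (UNIX) or the Windows path above; here a local demo directory stands in for it so the commands are safe to try anywhere:

```shell
# Sketch: create the SIZE_DATA_BUFFERS touch file with the block size in bytes.
# CONFIG_DIR is a local demo directory; substitute the real NBU config path.
CONFIG_DIR=./nbu-demo-config
mkdir -p "$CONFIG_DIR"
printf '262144\n' > "$CONFIG_DIR/SIZE_DATA_BUFFERS"   # 256KB = 256 * 1024 bytes
cat "$CONFIG_DIR/SIZE_DATA_BUFFERS"                   # prints "262144"
```

The file contains nothing but that one number; bptm reads it at backup start.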

You can combine the above with the number of data buffers: NUMBER_DATA_BUFFERS. Each bptm tuple (2 processes for non-MPX backups, 1-n:1 for MPX backups) shares the buffer queue. So if you use 32 buffers of 256KB each and run 100 concurrent jobs, you would need at least ~820MB of shared memory, plus memory for the processes themselves. On media servers without enough memory you could "over-tune" and end up with problems and failed backups. So use care when tuning.
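The shared-memory arithmetic above can be checked quickly; the queue size is buffers-per-job times buffer size times concurrent jobs (the ~820MB figure in the paragraph corresponds to 800 MiB, before per-process overhead):

```shell
# Shared memory for the bptm buffer queues:
# NUMBER_DATA_BUFFERS x SIZE_DATA_BUFFERS per job, times concurrent jobs.
BUFFERS=32
BUF_SIZE=262144                          # SIZE_DATA_BUFFERS: 256KB in bytes
JOBS=100
TOTAL_BYTES=$((BUFFERS * BUF_SIZE * JOBS))
echo "$((TOTAL_BYTES / 1024 / 1024)) MiB of shared memory"   # prints "800 MiB of shared memory"
```

Doubling either the buffer count or the buffer size doubles this figure, which is why tuning both at once on a memory-constrained media server is risky.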

/A