
bpbkar waited xxxx times for empty buffer, delayed xxxx times

Created: 10 Mar 2013 • Updated: 02 Apr 2013 | 11 comments
unixlife
This issue has been solved. See solution.

Hello,

I am getting the detailed report below, and the network KB per second is down to 3762 from the original 42399. Any ideas?

03/09/2013 03:00:00 - granted resource  DAILY1
03/09/2013 03:00:00 - granted resource  HP.ULTRIUM4-SCSI.000
03/09/2013 03:00:00 - granted resource  ajva5000-hcart-robot-tld-0
03/09/2013 03:00:00 - estimated 494449958 kbytes needed
03/09/2013 03:00:00 - Info nbjm (pid=960) started backup job for client ajva5000, policy m5000, schedule Daily-Differential-Inc on storage unit ajva5000-hcart-robot-tld-0
03/09/2013 03:00:01 - Info bpbrm (pid=27395) ajva5000 is the host to backup data from
03/09/2013 03:00:01 - Info bpbrm (pid=27395) reading file list from client
03/09/2013 03:00:01 - Info bpbrm (pid=27395) starting bpbkar on client
03/09/2013 03:00:01 - Info bpbkar (pid=27399) Backup started
03/09/2013 03:00:01 - Info bpbrm (pid=27395) bptm pid: 27400
03/09/2013 03:00:01 - Info bptm (pid=27400) start
03/09/2013 03:00:01 - started process bpbrm (pid=27395)
03/09/2013 03:00:01 - connecting
03/09/2013 03:00:01 - connected; connect time: 0:00:00
03/09/2013 03:00:02 - Info bptm (pid=27400) using 65536 data buffer size
03/09/2013 03:00:02 - Info bptm (pid=27400) using 30 data buffers
03/09/2013 03:00:02 - Info bptm (pid=27400) start backup
03/09/2013 03:00:02 - Info bptm (pid=27400) Waiting for mount of media id DAILY1 (copy 1) on server ajva5000.
03/09/2013 03:00:02 - mounting DAILY1
03/09/2013 03:00:56 - Info bptm (pid=27400) media id DAILY1 mounted on drive index 0, drivepath /dev/rmt/0cbn, drivename HP.ULTRIUM4-SCSI.000, copy 1
03/09/2013 03:00:56 - mounted DAILY1; mount time: 0:00:54
03/09/2013 03:00:57 - positioning DAILY1 to file 25
03/09/2013 03:02:39 - positioned DAILY1; position time: 0:01:42
03/09/2013 03:02:39 - begin writing
03/09/2013 12:13:40 - Info bpbkar (pid=27399) bpbkar waited 1219344 times for empty buffer, delayed 1219737 times
03/09/2013 12:13:40 - Info bptm (pid=27400) waited for full buffer 447 times, delayed 16121 times
03/09/2013 12:13:49 - Info bptm (pid=27400) EXITING with status 0 <----------
03/09/2013 12:13:49 - Info bpbrm (pid=27395) validating image for client ajva5000
03/09/2013 12:13:50 - Info bpbkar (pid=27399) done. status: 0: the requested operation was successfully completed
03/09/2013 12:13:50 - end writing; write time: 9:11:11
the requested operation was successfully completed  (0)

Prior to this, I got an error message in /var/adm/messages:

Mar  8 23:01:15 ajva5000 scsi: [ID 107833 kern.warning] WARNING: /pci@2,600000/pci@0/scsi@8 (mpt1):
Mar  8 23:01:15 ajva5000        Connected command timeout for Target 4.
Mar  8 23:01:15 ajva5000 scsi: [ID 107833 kern.warning] WARNING: /pci@2,600000/pci@0/scsi@8 (mpt1):
Mar  8 23:01:15 ajva5000        Target 4 reverting to async. mode
Mar  8 23:01:15 ajva5000 mpt: [ID 675377 kern.warning] WARNING: ID[SUNWpd.mpt.sync_wide_backoff.6013]

11 Comments

Nagalla

Info bptm (pid=27400) waited for full buffer 447 times, delayed 16121 times

It indicates that bptm is not receiving the data from the source fast enough... it is waiting for the buffer to fill.

Please refer to the tech notes below.

http://www.symantec.com/business/support/index?pag...

http://www.symantec.com/business/support/index?pag...

http://eval.veritas.com/downloads/van/4019.pdf

mph999

 

 
I don't agree with this:

"it indicates bptm is not reciving the data from source fast enough...  its waiting to get the full buffer"

The delays caused by 'waiting for full' are much smaller than those caused by 'waiting for empty':
 
03/09/2013 12:13:40 - Info bpbkar (pid=27399) bpbkar waited 1219344 times for empty buffer, delayed 1219737 times
03/09/2013 12:13:40 - Info bptm (pid=27400) waited for full buffer 447 times, delayed 16121 times
 
Each delay is 15 milliseconds, so:

16121 delays = approx. 4 minutes
1219737 delays = approx. 304 minutes (approx. 5 hrs)

So the backup was delayed for about 4 minutes by the 'waiting for full' issue (usually network or client disk read speed), but for about 5 hours by something affecting the speed at which the media server can write to the drive ('waiting for empty').
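The arithmetic above can be sketched in shell, assuming the 15 ms per-delay figure from the tech note (GlenG points out further down the thread that the actual child/parent delays are configurable):

```shell
# Sketch of the delay arithmetic, assuming ~15 ms per delay (TECH145578).
delay_minutes() {
    # delay count x 15 ms, converted to whole minutes
    echo $(( $1 * 15 / 1000 / 60 ))
}

delay_minutes 16121     # bptm 'waited for full'    -> prints 4
delay_minutes 1219737   # bpbkar 'waited for empty' -> prints 304
```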

 


Normally, with slow backups, it is the 'waiting for full' count that has a high delayed number. 16121 I don't really consider a big issue, certainly not compared to the number of 'waiting for empty' delays, which is 1219737.

So, if (and you do not have this) you were waiting for a full buffer many, many times (think 100,000+), the client simply isn't getting the data to the media server quickly; usually that is client disk read speed or a network issue.

That does not appear to be the case here: the data gets to the media server fine, but the media server is not able to get the data to tape quickly.

Normally, this would come down to tuning settings, but you have a 65536 buffer size and 30 buffers. I would imagine this should give reasonable performance; perhaps not the best available, but reasonable.

Certainly, if this was previously working well, and no changes have been made to NBU, then the tuning is not the cause.

For reference (when this issue is resolved, try these settings to see if you get better performance):

Set /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS to 262144

Set /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS to 64

(If this improves performance, try again with NUMBER_DATA_BUFFERS set to 128; if performance decreases, revert.)
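As a sketch, those two settings are just single-line touch files. The helper function below is hypothetical, added only so the target directory can be varied; on a real media server the directory is /usr/openv/netbackup/db/config and root access is required:

```shell
# Hypothetical helper: write the NetBackup buffer tuning touch files.
# On a real media server, dir would be /usr/openv/netbackup/db/config.
set_nbu_buffers() {
    dir="$1"; size="$2"; num="$3"
    mkdir -p "$dir"
    echo "$size" > "$dir/SIZE_DATA_BUFFERS"
    echo "$num"  > "$dir/NUMBER_DATA_BUFFERS"
}

# e.g. (as root on the media server):
# set_nbu_buffers /usr/openv/netbackup/db/config 262144 64
```

bptm reads these files when a job starts, so the new values take effect on the next backup rather than any job already running.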

So, back to the issue. If the tuning is not the cause, you are most likely looking either at a SAN issue (including HBA, switch, cables, GBICs, etc.) or a drive issue.

Despite popular opinion, apart from the tuning settings mentioned above, there is no way for NBU to affect the write speed of the drives. (Well, there is if you want to include multiplexing, but that is misleading, as it only affects the 'speed' of filling the memory buffers, not the speed of getting the data out of the buffers to the drive.)

In fact, NBU does not even write to the drives itself; the data is passed from the memory buffer to the OS, which writes it to the drive.

Hence I am pleased you posted:

Mar  8 23:01:15 ajva5000 scsi: [ID 107833 kern.warning] WARNING: /pci@2,600000/pci@0/scsi@8 (mpt1):
Mar  8 23:01:15 ajva5000        Connected command timeout for Target 4.
Mar  8 23:01:15 ajva5000 scsi: [ID 107833 kern.warning] WARNING: /pci@2,600000/pci@0/scsi@8 (mpt1):
Mar  8 23:01:15 ajva5000        Target 4 reverting to async. mode
Mar  8 23:01:15 ajva5000 mpt: [ID 675377 kern.warning] WARNING: ID[SUNWpd.mpt.sync_wide_backoff.6013]

I've never seen this before, but if that path is the tape drive, that could well be the answer.  I would contact Oracle first to find out exactly what these messages mean, and what could be causing them.

Martin

Edited to add:

Put "Target 4 reverting to async. mode" into Google and the first few posts suggest potential hardware errors or driver issues.

Eg. https://forums.oracle.com/forums/thread.jspa?messageID=8196394

This is the point at which you have to know if ANY changes have been made recently, especially if any drivers have been updated.

Although I have no evidence (because I haven't looked), there is a possibility that an OS patch could cause an issue like this. It is only a suggestion; I do not know if it is true, but it at least has to be considered. Of course, if no OS patches were installed prior to the issue, it can be discounted.

 

Regards,  Martin
 
Setting Logs in NetBackup:
http://www.symantec.com/docs/TECH75805
 
Nagalla

I do agree with you.

Somehow I overlooked the bpbkar delay time and only saw the bptm delay time; my bad.

mph999

he he ... no problem, so did I the first time I read it through - much more common to see it the other way round, waiting on full.

Keep up the good work,

M

 

 
GlenG

mph999,

 

Are the delays always 15 milliseconds each ("Each delay is 15 milliseconds")?

A "grep 'io_init: child' /opt/openv/netbackup/logs/bptm/*" on my master/media server shows:

/opt/openv/netbackup/logs/bptm/030913_00001.log:00:00:11.740 [18079] <2> io_init: child delay = 10, parent delay = 500 (milliseconds)
/opt/openv/netbackup/logs/bptm/030913_00001.log:00:00:12.390 [18080] <2> io_init: child delay = 10, parent delay = 500 (milliseconds)
/opt/openv/netbackup/logs/bptm/030913_00001.log:00:00:12.686 [18094] <2> io_init: child delay = 10, parent delay = 500 (milliseconds)
 

(Yes, I increased the parent delay, but I believe the child is using the default - NBU 7.0.1, Solaris 10 x86.)

 

Thanks,

GlenG

revaroo

Glen, you're not the original poster? Are you experiencing the same issue?

GlenG

revaroo,

Sorry, I missed your post.

I do not see an issue.

GlenG

mph999

Interesting - I didn't know that was changeable, or if I did, I'd forgotten.

As far as I am aware, as per this TN, 15 ms is the value you use to calculate the delay...

http://www.symantec.com/docs/TECH145578

M

 

 
Marianne

So, you want to know if the slow transfer rate is a result of the error you see in messages?

No idea... what exactly is mpt1?

If this is the multipathed disk being backed up at that point in time, you will need to use OS tools to troubleshoot.
Use something like dd or tar to read the disk and send the output to /dev/null.
See if the error messages appear again in /var/adm/messages.
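A minimal sketch of that read test - the helper name and the device path in the comment are placeholders, not from the original post; dd prints its throughput summary on stderr:

```shell
# Sketch: sequentially read a disk or file and discard the data, so only
# read speed is measured. dd's final summary line shows the throughput.
read_test() {
    dd if="$1" of=/dev/null bs=1024k 2>&1 | tail -1
}

# e.g. on the affected host (device path is a placeholder):
# read_test /dev/rdsk/c0t4d0s2
```

If the raw device reads at full speed while the backup is slow, the bottleneck is further along the data path (HBA, SAN, or the tape drive itself).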

Supporting Storage Foundation and VCS on Unix and Windows as well as NetBackup on Unix and Windows
Handy NBU Links

unixlife

Tried with ufsdump; it's also very slow. Updated the library (SL24) and LTO4 drive firmware - no use... It seems the drive in the library is defective. Waiting for a new drive.

GlenG

unixlife,

 

We have an SL48 with 2 HP LTO4s (4Gb FC) and have replaced the drives several times. I recommend running HP's LTT LTO Drive Assessment test. We found that slow drives fail this test even though they otherwise appear to be working. If (when) a drive fails that test, Oracle will replace it.

 

Take a look at Oracle's MOS document "How to Run Diagnostics using LTT on HP LTO Tape Drive:ATR:1388614.1:2 [ID 1388614.1]"

 

have a good week,

GlenG

SOLUTION