How to benchmark the performance of the bpbkar32 process on a Windows client

Article:TECH17541  |  Created: 2001-01-18  |  Updated: 2013-09-09  |  Article URL http://www.symantec.com/docs/TECH17541
Article Type
Technical Solution


Issue



How to benchmark the performance of the bpbkar32 process on a Windows client


Solution



Determining a performance benchmark for a Windows client can be an effective troubleshooting tool when total backup throughput is lower than expected. This is especially true when performance issues are seen only with certain clients. The benchmark is established by measuring the performance of the NetBackup bpbkar32 process on the client specifically, then comparing it against the performance of a normal backup. Bpbkar32 is the process on a Windows NetBackup client that reads data from the client's disk and sends it to the media server. The test in this document uses bpbkar32 to pass data to a local (null) device on the client itself.

If a performance issue can be reproduced simply by having the client's bpbkar32 process the data locally, then other factors such as network connectivity and media server / tape performance can be eliminated.

Bpbkar32.exe is located in <install_path>\veritas\netbackup\bin\.

Syntax for this test:
bpbkar32 -nocont [file_path_to_test] 1> nul 2> nul

The following example tells bpbkar32 to back up the F: drive, redirecting its output streams to the null device.
bpbkar32 -nocont F:\ 1> nul 2> nul

Before initiating the test, ensure that the <install_path>\veritas\netbackup\logs\bpbkar directory exists; this log is necessary in order to view the results. Also make sure that a backup is not already running. After launching the command, it is possible to see whether the test is still active by checking for a resident bpbkar32 process on the client machine that is using CPU. Examining the growth of the bpbkar log file is another way to confirm that the test is still running.

When the test is complete, the performance results appear at the end of the bpbkar log file. The value for bytes per second (bps) is the key figure to analyze. Look for entries similar to the following:

tar_base::backup_finish: TAR - backup:                          15114 files
tar_base::backup_finish: TAR - backup:          file data:  995460990 bytes  13 gigabytes
tar_base::backup_finish: TAR - backup:         image data: 1033197568 bytes  13 gigabytes
tar_base::backup_finish: TAR - backup:       elapsed time:        649 secs     23099898 bps

In this case, a throughput of 23,099,898 bps is recorded. Dividing this number by 1024 yields 22,558.49 kilobytes per second; dividing by 1024 once more gives 22.03 megabytes per second.
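The conversion above can be scripted to avoid arithmetic mistakes. The following is a minimal Python sketch that extracts the bps value from the sample log line shown above and converts it to KB/s and MB/s (the regular expression and variable names are illustrative, not part of NetBackup):

```python
import re

# Sample "backup_finish" line taken from the bpbkar log excerpt above.
log_line = ("tar_base::backup_finish: TAR - backup:       "
            "elapsed time:        649 secs     23099898 bps")

# Extract the bytes-per-second figure reported by bpbkar32.
match = re.search(r"(\d+)\s+bps", log_line)
bps = int(match.group(1))

kb_per_sec = bps / 1024          # kilobytes per second
mb_per_sec = kb_per_sec / 1024   # megabytes per second

print(f"{bps} bps = {kb_per_sec:.2f} KB/s = {mb_per_sec:.2f} MB/s")
# → 23099898 bps = 22558.49 KB/s = 22.03 MB/s
```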

It is recommended to run these tests three times for each disk resource, then average the results to obtain a representative performance figure. It is also recommended that the dataset be at least 10 gigabytes in size to ensure consistent results. Run this bpbkar test against the same data that is protected during a normal backup so that the results can be compared properly. Instead of a drive letter, a file path can also be given for bpbkar to test.
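The three-run averaging described above can be sketched as follows. The bps figures here are hypothetical example values, not measurements from any real system:

```python
# Hypothetical throughput results (bps) from three bpbkar32 null-device runs.
runs_bps = [23099898, 22800000, 23500000]

# Average the three runs, then convert bytes/sec to megabytes/sec.
avg_bps = sum(runs_bps) / len(runs_bps)
avg_mb_per_sec = avg_bps / 1024 / 1024

print(f"Average throughput: {avg_mb_per_sec:.2f} MB/s")
# → Average throughput: 22.06 MB/s
```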

The performance seen with this type of bpbkar test is normally higher than that of an actual backup, because data is processed locally and no tape is involved. If this is indeed the case, any performance issues encountered during normal backups are likely related to other factors, such as:

 
1. Network connectivity to the media or master server
2. General server performance of the media or master server
3. Performance issues when writing to the backup device (tape or disk) on the media server

The bpbkar null test may also run as slowly as the normal backup. If so, the performance issue is localized to the client itself. Below are reasons a bpbkar null test can run slower than expected; they can also be causes of slow backups in general:

 
1. The Client Job Tracker is enabled.
2. Virus scanning is enabled.
3. One or more of the client's disks are heavily fragmented and require defragmentation.
4. Other applications on the client are consuming the client's resources.
5. The volume(s) in question contain a very high number of files, such as more than 1 million.


Legacy ID



242918



