
Looking for Detailed Explanation of Job Log Fields

Created: 16 Jan 2013 | 6 comments

Can anyone point me to an article or document that clearly indicates the meaning behind each field produced in a job log from Backup Exec 2010 R3?

I'm going through the logs and curious about the following excerpt from my differential backup:

Backed up 0 files in 15876 directories.
Processed 3,619,748 bytes in  1 minute and  21 seconds.
This backup set may not contain any data.

Given this is a differential backup, I read it as: "There were 0 changes detected but we processed 3,619,748 bytes to come to that conclusion"

Am I accurate in this assumption?

What does the "processed" number of bytes really mean?

In the case of this job, the Full backup it relies on is around 64 GB.  I'm just curious where it gets that "processed" number from.

Here's another example:

Backed up 23 files in 14 directories.
Processed 4,328,791,807 bytes in  4 minutes and  23 seconds.

I interpret this as: "We backed up 23 files in 14 directories, backing up a total of 4,328,791,807 bytes in the process."

Am I accurate in that assumption as well?

I am trying to properly figure out how much data we are backing up in a given day, and I need to know whether I'm using the appropriate data for my calculations.
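For tallying a day's total, the "Processed ... bytes" lines can be pulled out of saved job-log text with a short script. A minimal sketch, assuming only the log layout shown in the excerpts above (the regex and sample text are not from any Symantec documentation):

```python
# Hypothetical helper: sum the "Processed N bytes" figures found in
# Backup Exec job-log text. The line format is assumed from the
# excerpts quoted in this thread.
import re

PROCESSED_RE = re.compile(r"Processed ([\d,]+) bytes")

def total_processed_bytes(log_text: str) -> int:
    """Sum every 'Processed N bytes' figure found in the log text."""
    return sum(int(m.group(1).replace(",", ""))
               for m in PROCESSED_RE.finditer(log_text))

log = """Backed up 23 files in 14 directories.
Processed 4,328,791,807 bytes in  4 minutes and  23 seconds.
Backed up 0 files in 15876 directories.
Processed 3,619,748 bytes in  1 minute and  21 seconds.
"""
print(total_processed_bytes(log))  # 4332411555
```

Whether that total is the right figure for capacity planning is exactly the question at issue below.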

Thanks for any help!


King_Julien:

From your original post:

Backed up 0 files in 15876 directories.
Processed 3,619,748 bytes in  1 minute and  21 seconds.
This backup set may not contain any data.

 

You can also compare the number of bytes against the byte count in the Job Monitor.

 

Regards.

 


pkh:

Looking at your job log will not tell you the final story.  If you are using tape and hardware compression, you can only know how much you are putting on the tape by looking at the tape itself, because the job log does not report this final figure.

To know how much data you are putting onto your media, the easiest way is to examine your media.  For tape, note the amount of space consumed on the tape before and after your job.  The difference is the amount of data written.  With disk media, you should not be appending to them, so all you need to do is to sum up the space occupied by the .bkf and .img files that are used by the job.
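For disk media, that .bkf/.img tally can be scripted. A minimal sketch of the idea (the folder path is a placeholder; point it at your own backup-to-disk folder):

```python
# A minimal sketch of the disk-media tally: sum the space occupied by
# the .bkf and .img files under a backup-to-disk folder. The path in
# the usage comment below is a placeholder, not a real location.
from pathlib import Path

def media_bytes(folder: str) -> int:
    """Total size, in bytes, of all .bkf and .img files under the folder."""
    return sum(p.stat().st_size
               for p in Path(folder).rglob("*")
               if p.suffix.lower() in {".bkf", ".img"})

# Usage (path is hypothetical):
# media_bytes(r"D:\BackupToDisk")
```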

JWNEG-IT:

I'm not sure if I'm shocked or sadly disappointed to hear that, pkh.  First, I'm shocked that there is no mechanism within Backup Exec to tell you exactly how much data you're backing up with a given job.  The reason I'm trying to pull this information is to figure out what our storage savings will be by switching from a full/differential system to a deduplication system.  I need hard numbers to put into a report for my bosses to sign off on.

I'm disappointed in myself that I didn't find this out sooner, because I have been entering those "processed" numbers into a spreadsheet thinking that I was identifying how large each of our backups was by server.  I would then take the difference between two differentials to come up with a rough estimate of how much savings deduplication might yield.
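That spreadsheet arithmetic amounts to differencing consecutive differential totals. A hypothetical illustration (the byte figures below are made up, not real job totals, and hardware compression would skew them):

```python
# Hypothetical illustration of the spreadsheet approach: the growth
# between consecutive differential "processed" totals approximates the
# new data added each day. All figures below are invented examples.
diff_totals = [3_600_000, 5_100_000, 9_400_000]  # e.g. Mon, Tue, Wed

daily_growth = [later - earlier
                for earlier, later in zip(diff_totals, diff_totals[1:])]
print(daily_growth)  # [1500000, 4300000]
```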

Ultimately, I can't figure out how much is getting backed up from each server without manually checking the server data being backed up and then checking the size of all the .bkf and .img files associated with the job.  (We have 5 backup jobs, backing up a total of 9 servers.)  Ugh.  I'm digging through the Reports in Backup Exec, and they are somewhat convoluted in their design.  Maybe if I fumble around in there, they can help.  I find it very hard to believe there isn't a more elegant way of finding out how much actual data is being backed up from day to day.

While I'm terribly disappointed with the response, I am thankful to have received one.  I suppose this will simplify my end report, because it would be too hard to pull specific details.  I would appreciate it if anyone has insight into the Reporting feature in Backup Exec 2010.

So does anyone know exactly what "processed" means?

pkh:

What I am saying is that you can get from the job log how much data is actually sent to the tape, but you would not know how many bytes are actually written onto the tape from the job log if hardware compression is used.  If you are using software compression, then the data is there in the job log.

If you want the figures per server, then you have to do some digging through the job logs.  Checking the rate of expansion is an easier way to get a gross number.

JWNEG-IT:

Okay, I think I get it.  So the "processed" number is the actual bytes written to the media (tape or drive).  Yes, we have hardware compression running, so I guess if I'm documenting that processed number, I am figuring out how much data is being directly written to disk.  This ultimately tells me how much space we need to account for at the bare minimum.

So by that logic, you may have answered my question.

If a differential job says it "backed up 0 files in 15876 directories" and "processed 3,619,748 bytes", that translates to:

"We wrote 3,619,748 bytes to the media but no NEW data"

and in the other example:

Backed up 23 files in 14 directories.
Processed 4,328,791,807 bytes in  4 minutes and  23 seconds.

that translates to:

"We backed up 23 files in 14 directories, writing a TOTAL of 4,328,791,807 bytes to the media; NEW and previous data combined"

This would make sense for differentials, because that number is always going up throughout the week.

Please correct me if I'm wrong...thanks!

pkh:

It is more like: we scanned x directories and found no files to back up.  Remember, you are dealing with a differential backup, and not all the files need to be backed up.