
Exchange 2010 "Out of page buffers"

Created: 08 Aug 2012 • Updated: 20 Aug 2015 | 31 comments
This issue has been solved.

We have been running Backup Exec 2012 with all patches as of today (8/8/2012) to back up our Exchange 2010 (also fully up to date) for a few months.  Full backups always complete successfully, then Incrementals will run for 2-3 days just fine.  After a few days, Incrementals start to fail with the error "V-79-57344-759 - Unable to complete the operation. The following error was returned when opening the Exchange Database file: '-1014 The database is out of page buffers. '".  After this happens, the Incrementals fail with the error "The last backup job was not run by Backup Exec" until the weekly Full backup runs.  This happens every week.

The Backup Exec temp files are not being scanned by the antivirus, and I can't find anything on the Exchange server that runs at these times that would use extra resources during the backup windows.  Circular logging is not enabled.  I tried changing the registry entries mentioned in the MS KB article that discusses Exchange running out of page buffers (I don't remember the KB# off the top of my head).

I've also tried modifying the backup windows (changing the day the Full runs, changing the times, etc.).  It still happens within a few days.  Is there anything that could be causing this, or settings I can change to avoid it going forward?

31 Comments

David Palmerston:

Have you seen this technote?  It sounds like your issue.

Are you running BE2012 on a 32-bit server?

How about rebooting the BE2012 server between incremental 2 and 3?

dhill82:

Sorry, I should have mentioned that.  I saw that technote, and both servers are x64.  They are Windows Server 2008 R2 VMs.  I've thought about setting up a job to reboot the server, but was hoping there was a better answer.  I'd likely have to reboot the Exchange server, though, since none of the other nightly backups I run has this problem.

David Palmerston:

I'd try rebooting just the BE media server first; then you'd know whether it's that particular machine.

Two other things:

1) How much physical RAM is dedicated to the BE media server?  You can try temporarily increasing it to the maximum your machine allows and see if it makes a difference.

2) Also, what size have you set for the pagefile on the BE media server?  Rule of thumb: 2x RAM + 12 MB at minimum.  My BE media server's pagefile is set to 4x RAM...
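The sizing arithmetic behind that rule of thumb is easy to sanity-check.  Here's a quick sketch (Python purely for the arithmetic; the 32 GB figure is just an example size, not anyone's actual server):

```python
# Pagefile sizing rules of thumb from this thread:
#   minimum      = 2x RAM + 12 MB
#   generous     = 4x RAM

def pagefile_min_mb(ram_gb):
    """Minimum pagefile size in MB (2x RAM + 12 MB)."""
    return ram_gb * 1024 * 2 + 12

def pagefile_generous_mb(ram_gb):
    """A more generous setting in MB (4x RAM)."""
    return ram_gb * 1024 * 4

ram_gb = 32  # example: a media server with 32 GB of RAM
print(pagefile_min_mb(ram_gb))       # 65548 MB, i.e. a bit over 64 GB
print(pagefile_generous_mb(ram_gb))  # 131072 MB, i.e. 128 GB
```

So on a 32 GB box the bare minimum works out to roughly 64 GB of pagefile, and the 4x setting to 128 GB.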

dhill82:

I'll try increasing the page file on the BE server and see if that helps.  That server has 32 GB of RAM (it's a physical server, not a VM; Exchange is the VM... not sure why I said they both were yesterday... long day, I guess). Right now I have the page file set to range between 32 and 48 GB.  I increased it once, but probably should have made it larger.

dhill82:

Just an update to keep this thread alive.  I increased the page file like I mentioned in my last comment, but the job still failed after a few days.  I increased it again, to a max of 80 GB, rebooted the servers, and so far so good, but the last full backup was only two days ago.  If it runs fine until Monday I may be ok, since it usually fails around 2-3 days after the full backup.

dhill82:

Haven't posted in a while, but I'm still having problems with this, even after increasing the page file to 80 GB.  Not really sure where to go from here, besides setting up the server to reboot every few days.

CraigV:

...both are VMs? Why not vMotion your media server to another host and keep the affinity on that one to see if the issue happens again?

Are you sure you have enough disk space on the data stores and host resources available?


dhill82:

Sorry, I mistyped when I said they were both VMs in my earlier post.  Only Exchange is a VM; Backup Exec is a physical server.  Resources shouldn't be an issue for my VM hosts, as there are only a few servers on each one.  During backup windows the resource usage spikes, but only to around 30-40%.  If anything, I probably need to add more memory to the Exchange VM, but I don't know whether that would help with this.

It seems odd that this happens after a couple of days of incrementals while a full backup runs fine.  If this were a memory/resource issue, wouldn't the full backup trigger it?  Another thing I don't think I've mentioned is that we are running deduplication on this as well. Could that cause an issue?

IS@ESPC:

We are having exactly the same situation described by dhill82.  Our environment is also very similar.  Our BE server is a physical Dell server with 32GB memory and our Exchange 2010 server is a VM (ESX 4.1) also running with 32GB memory.  We run full dedup backups with tape duplication on Friday nights and incremental dedups on Sunday - Thursday.  Typically, on Wednesday mornings I will get the same error dhill82 described.  And, as already described, we can run a successful Full backup without restarting any servers but an incremental fails.

We did not have this problem until about a month ago when we switched from using the VM agent with disk/tape to the Windows agent using dedup storage.  Prior to the change in agents, the process was bullet-proof.  We made the change due to a "Best Practices" recommendation from Symantec to not use the VM agent when using dedup storage. Since that change, we've been chasing a number of issues, including this one.

One test I am running this week is to see what happens when I do not use client-side deduplication on the Exchange Server.  The problem is that I won't know if this works for about a week or so.  I'll post the results next week.

dhill82:

IS@ESPC - you may also want to run LiveUpdate on your BE server.  I checked on Monday and there was a hotfix available that says it's supposed to address this issue.  After I installed the hotfix and updated the BE agent on the Exchange server, it ran a couple more days before getting the error.  I was able to reboot the Exchange server on Tuesday (the BE agent update requires a reboot), so I'm hoping the issue is now totally resolved.  If anything, it's worth a shot.

IS@ESPC:

I did check for the update and there was nothing available for our installation - but thank you for the info...

An update... The change to "server-side" dedup did not appear to work. However, it wasn't a perfect test because we had two issues with the Exchange backup this week.

What I am going to do this week is move the Exchange backup from dedup storage to disk storage. Given that we didn't have this situation prior to moving to dedup, I want to prove that it works in exactly the same environment with disk storage. If it does, I'll move it back to dedup and see what happens.

I'll check back by the end of the week to update our status.

IS@ESPC:

Changing the storage media from dedup to disk worked like a charm.  We had no issues this week (first time in months).

I'm moving back to dedup this week but changing the schedule so that the Exchange backup runs on its own.  I currently allow four concurrent jobs on the dedup storage media and only one on the disk media.  I'm wondering if the Exchange job exposes a resource issue that doesn't show up otherwise.

IS@ESPC:

Not surprisingly, the issue is back...  The full backup on Friday night ran fine as did the incrementals on Sunday and Monday.  The Tuesday night incremental however failed with the "Out of page buffers" error again.  Needless to say, this is frustrating.

My next test is to run the full backups using dedup and the incrementals using disk storage.

dhill82:

I've been meaning to update this post.  I still have the issue, even after the latest patches.  I haven't tried moving off of dedup, but I have a feeling that if I do I'll get the same results as IS@ESPC.

GarfieldMaximus:

Hi,

I have the exact same problem as you guys. But I'm wondering... would it be possible to solve this without changing the storage type? (We also have dedup disks right now.) Maybe someone from Symantec could help us? I don't know if they read the forum themselves.

Thanks for the topic; it has been very interesting so far.

IS@ESPC:

The change to a combination of Dedup (full) and Disk (incremental) did not solve the issue. I had my typical Wednesday issue this morning. For now, I'll move back to all disk.

I will however call Symantec today and see what they can do. So far, they have not had a solution for the issue. I hope we now have enough information to help them resolve the problem.

I'll post how the discussion went.

auth1299:

I'm having the exact same issue in a similar environment. I have Exchange 2010 version 14.02.0318.001 running on Windows 2008 R2 SP1 64-bit as a virtual machine on XenServer.

My Backup Exec 2010 is running on Windows 2008 R2 SP1 64-bit on a standalone Dell server with 8 GB of RAM, and the virtual disk is set to automatic.

This issue also started around September. Maybe a Windows patch broke something. I'm getting the exact same error as mentioned above. Sometimes the backup runs fine, but most of the time I get the error:

Backup- \\\Microsoft Information Store\company
V-79-57344-759 - Unable to complete the operation for the selected resource using the specified options.  The following error was returned when opening the Exchange Database file:  '-1014 The database is out of page buffers. '

auth1299:

I'm backing up using (FULL - Using archive bit (reset archive bit)) every backup. My Exchange DB is about 60 GB.

IS@ESPC:

Sorry about the delay in posting. I am working with Symantec Tech Support. They started debug and have collected two days' worth of logs. A couple of interesting items to report... When we ran the Troubleshooting tool, it found that the ASR Writer was in a failed state. After a bit more research, we found that this seems to occur only when the Symantec backup process begins. On servers where I do not run BE2012, I do not have this issue. The only way to get the ASR Writer into a 'stable' state is to restart the server.

If you want to run the troubleshooting tool, go to "Backup Exec button" --> Technical Support --> Run the support tool..."

Also, changing the Exchange Server backup to NOT use GRT seems to have worked. It's already Thursday and the Exchange job has not failed. This is not a good long-term solution, but it's better (for me) to not have the job fail.

The Tech Support person did have me change my default snapshot technology option on the backup job (Advanced File Open) from "Automatic" to "Microsoft VSS..." then pick "System - ". I'll post the results next week.

Finally, I am also beginning to have an issue where BE2012 periodically reports that one of my Windows Agents is not the right version. When I look at the versions, everything is fine... I have to reinstall the agent to resolve it. Is that something any of you are also dealing with?

dhill82:

Thanks for the update.  Nothing I've done has worked, and setting up off-host backup failed miserably.  Not sure what else to do, so hopefully Symantec can get this straightened out.

IS@ESPC:

Symantec is escalating the issue on Monday.  In the meantime, I was successful this week by simply turning off the GRT option.  This won't work for me long-term, but at least I have a backup of the Exchange Server in case something goes wrong.

IS@ESPC:

Update:  Symantec Tier 3 support has suggested the following (which I will try for a week).

  • Separate the System and Information Store jobs into two jobs
  • Use “Differential” for the Information Store
  • Use “Incremental” for the System
  • Do NOT allow the “Differential” and “Incremental” to run at the same time
  • Do NOT turn on Snapshotting (Advanced File Open) for the Information Store job
  • Turn on GRT for both the System and Information Store jobs

I’ll give this a shot and let you all know how it works.  Obviously, we won’t know much until the end of next week.

Larry Hyman:

Having the exact same problem. Fulls work great, and the incrementals seem unable to handle the load. I also have a case open with Symantec; my tech should talk to your tech...     Case 420-132-318. Thx, Larry

dhill82:

I just made the changes that IS@ESPC received from Symantec as well.  I set my full backup to run tonight, so we'll see what happens in a few days.  I also added 4 GB of RAM to my Exchange VM.  It probably won't make a difference with the backup, but it's possible.

RSpicer:

Just checking in to see if the problem still exists for dhill82 and all. 

I have the exact same symptoms, but with Exchange 2007.  Breaking the File backup out into its own job and adding the VFF timeout entry (up to the max of 600) to the registry on the Backup Exec server seemed to help, but not cure, the problem.  I no longer have to restart the services when the job fails after adding the VFF entry.

Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VirtFile
Value: VFF Extended Timeout
The maximum value is 600 seconds.
Start at 240 seconds; if the error still occurs, try 480, then 600.

Restart all the Backup Exec services for the changes to take effect.
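For reference, the same change can be made from an elevated command prompt on the BE server. This is just a sketch, not taken from the technote itself; I'm assuming the value is a REG_DWORD holding the timeout in seconds, which matches how it's described above, so verify the value type against the technote before applying it.

```shell
:: Sketch: create the VFF Extended Timeout value on the Backup Exec server.
:: Assumes a REG_DWORD in seconds (start at 240, per the advice above).
reg add "HKLM\SYSTEM\CurrentControlSet\Services\VirtFile" ^
    /v "VFF Extended Timeout" /t REG_DWORD /d 240 /f
```

Then restart all the Backup Exec services for it to take effect.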

mbsweep16:

Just got this error on my Exchange incremental today for the first time. Similar setup: Exchange 2010 on Hyper-V, BE 2012 on a 3600 R2 appliance (installed 2 months ago). The Hyper-V agent backup with GRT runs on Friday night, then an Exchange agent Full on Saturday night and incrementals Sunday-Thursday nights. Hadn't had any issues until last night's backup. I read the TECH articles referenced earlier and will try the fixes IS@ESPC mentions.

We've had sporadic problems with different backups ever since the install. I have an open case for SharePoint failures. I've been nothing but frustrated with BE and Symantec for the last 2 months. We've been using BE for years and have never had the issues we've experienced with 2012 and this appliance. I don't know who to blame. Did Symantec release a product that wasn't ready for prime time, or have M$ and others made changes that make it difficult to back up their products? I'd love to be able to go even one day without worrying about backups. I'll get down off my soapbox now...

IS@ESPC:

Update - but not necessarily good news.

For the past week I did the following with the indicated results:

  • Separate the System and Information Store jobs into two jobs - GOOD
  • Use “Differential” for the Information Store - Didn't make a difference, and actually seemed to cause more issues when GRT was turned on.
  • Use “Incremental” for the System - GOOD
  • Do NOT allow the “Differential” and “Incremental” to run at the same time - MAKES SENSE
  • Do NOT turn on Snapshotting (Advanced File Open) for the Information Store job - UNNECESSARY - In reality, BE uses Microsoft VSS automatically on Exchange backups.
  • Turn on GRT for both the System and Information Store jobs - BAD - on the Infostore backups, using GRT created different issues.

The Symantec tech I talked to recommended that we simply stop backing up the Infostore using dedup. While not necessarily documented, using dedup for Exchange isn't an efficient way to go, due to the overhead required.

So, what I am now doing, and it appears to work fine, is the following:

  • System and Infostore jobs are separated
  • System - Full with incrementals using GRT and DEDUP storage
  • Infostore - Full with incrementals using GRT and DISK storage

The obvious downside is the need to use a lot more disk. I like to keep four weeks of fulls and two weeks of incrementals. I won't be able to do that with this technique. So, I've also added a "Duplicate to Tape" step on the Infostore so that I can have access to older copies of the Infostore.

Here are some of the salient points of my discussion with Symantec Technical Support:

  • Because of the dynamic nature of the Exchange database, dedup storage is not the recommended storage media for Exchange databases. Even though my results were pretty good using dedup, the BE resources needed to determine what must be written to dedup are apparently substantial. And the further away you get from the full backup, the greater the resource requirement becomes - which may be why the incremental jobs die after a few days.
  • One possible solution (which I am not going to try) is to run full backups every night using GRT and Dedup. I may try this at some point, but I'm going to give the Disk backup a shot first.
  • Using Disk instead of Dedup will provide for much faster recovery time for an individual mailbox or mailbox items.
  • Be sure that your Exchange database maintenance jobs do not run at the same time the backup is running.  This is documented in the Best Practices Guide.

Finally, just in case you haven't seen this link, it's worth checking out. It doesn't really help this situation (unless the documentation has been updated), but there are some good things to consider.

I will update this post next weekend after I've had a chance to let a full week go by using the solution outlined above.

dhill82:

I've had good luck so far with the changes that IS@ESPC mentioned on the 17th.  I had one backup fail, but the others ran fine afterwards.  Before, when one failed, they would continue to fail until the next Full backup.  I have noticed that since the 25th my logs haven't been getting deleted as they should be, but that's probably an unrelated issue.  SQL logs on other servers are deleting normally; it's just Exchange that has the problem.

IS@ESPC:

I should have mentioned that the differential will not delete log files - by design.  Since a differential contains everything since the last full, it needs to keep all the logs in case of a restore.

Glad to hear the 'differential' is working for you.
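To make the full/differential/incremental distinction concrete, here's a toy model in Python (my own illustration of the general concepts, not Backup Exec internals):

```python
# Toy model of which transaction logs each backup type captures.
# Each "log" is tagged with the day it was written.

logs = []            # transaction logs currently on disk
last_full_day = 0    # when the last full backup ran
last_backup_day = 0  # when the last backup of any kind ran

def write_logs(day):
    """Simulate the server writing a transaction log on a given day."""
    logs.append(day)

def full_backup(day):
    """Full: capture everything; the logs can then be truncated."""
    global last_full_day, last_backup_day
    captured = list(logs)
    logs.clear()  # a successful full truncates the logs
    last_full_day = last_backup_day = day
    return captured

def incremental_backup(day):
    """Incremental: capture changes since the LAST backup, truncating them."""
    global last_backup_day
    captured = [d for d in logs if d > last_backup_day]
    for d in captured:
        logs.remove(d)  # incrementals also truncate what they captured
    last_backup_day = day
    return captured

def differential_backup(day):
    """Differential: capture everything since the last FULL; no truncation."""
    global last_backup_day
    captured = [d for d in logs if d > last_full_day]
    last_backup_day = day
    return captured  # note: logs stay on disk and keep accumulating
```

Run a full on Friday and differentials each night after: every differential returns all logs written since Friday, and the log list on disk never shrinks until the next full - which is exactly the behavior described above.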

dhill82:

I didn't think of that.  Of course, now that I said something yesterday, last night's backup failed with the error "V-79-57344-759 - Unable to complete the operation for the following reason: VFF Open Failure. This can be caused by low memory or disk resources."  Ugh...

IS@ESPC:

Well, after a week of using the following configuration, I have had no problems.  See previous post from 10/28 for more details on why this works.

  • System and Infostore jobs are separated
  • System - Full with incrementals using GRT and DEDUP storage
  • Infostore - Full with incrementals using GRT and DISK storage