Backup Exec 12.5 intermittently hangs during backups on Snapshot Processing
Two jobs are running, scheduled 3 hours apart: one goes to tape, the other to an external USB drive. Currently the tape job is stuck on Snapshot Processing, while the USB job is stuck at 201,021,799,273 bytes. Running BE 12.5 rev 2213 with all updates installed, on Windows Server 2003 SP2. A reboot of the server usually fixes it; I restarted the BE services with ServicesMgr.exe and the same thing happens. No media alerts to remove a tape (as per this thread). The tape drive is an HP LTO Ultrium-2, driver file version 220.127.116.11.
I made sure to push down all of the latest updates to the agent on the respective servers. After a reboot the jobs ran OK twice. Both jobs seem to get stuck at the same time, but this time for a different reason (last time the jobs stopped at the exact same byte count). I scoured the Windows event logs for anything related to VSS, but nothing pops up. Here are some warnings/errors, hoping for some clues:
1) The shadow copies of volume D: were deleted because the shadow copy storage could not grow in time. Consider reducing the IO load on the system or choose a shadow copy storage volume that is not being shadow copied.
2) Scope, 10.0.225.0, is 85 percent full with only 14 IP addresses remaining.
3) Could not scan 1 files inside D:\Symantec Endpoint\Symantec_Endpoint_Protection_SBE_12.1_RU1_MP1_Part1_Installation_Software_EN\SEPM\Packages\SAVLegacy32.dat due to extraction errors encountered by the Decomposer Engines. Application has encountered an error. For more information, please go to: http://www.symantec.com/techsupp/servlet/ProductMessages?product=SAVCORP&version=12.1.2015.2015.sepsb&language=english&module=1000&error=0014&build=symantec_ent
4) The DNS server has encountered numerous run-time events. To determine the initial cause of these run-time events, examine the DNS server event log entries that precede this event. To prevent the DNS server from filling the event log too quickly, subsequent events with Event IDs higher than 3000 will be suppressed until events are no longer being generated at a high rate.
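Warning #1 above looks like the most likely suspect: if the shadow copy storage area on D: can't grow fast enough under backup I/O load, Windows deletes the snapshot out from under Backup Exec. One thing to try is checking the current shadow storage association and pre-allocating a larger fixed size so it doesn't have to grow mid-job. A sketch of the commands (the 20GB figure is just an assumption; size it for your volume and churn rate):

```
rem Show where shadow storage for each volume lives and its current limits
vssadmin list shadowstorage

rem Pre-allocate a larger maximum for D:'s shadow storage on D: itself
vssadmin resize shadowstorage /For=D: /On=D: /MaxSize=20GB
```

The event text also suggests hosting the shadow storage on a volume that is not itself being shadow copied (e.g. `/On=E:` if you have a spare spindle), which takes the snapshot diff-area writes off the same disk the backup is hammering.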
There was a suggestion in another thread that the two jobs might be hitting the same file and locking up. Anything else?
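One more thing worth checking the next time a job hangs on Snapshot Processing, before rebooting: the state of the VSS writers themselves. If a writer is wedged, restarting just that writer's service (or the Volume Shadow Copy service) can clear the hang without a full server reboot.

```
rem Show each VSS writer and its state/last error
vssadmin list writers
```

Healthy writers report `State: [1] Stable` with `Last error: No error`; a writer stuck in `Timed out`, `Failed`, or a non-stable state while both jobs are hung would point at VSS rather than Backup Exec itself.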