
DFS and DFSR: Backing up your distributed file system with Backup Exec

Created: 28 April 2009 • Updated: 29 April 2009 | 20 comments


Scott Meltzer

The following guide is intended to provide you with procedures for backing up your DFS and DFSR File Servers:

- Which DFS File Server Should You Back Up? -
You'll want to compare your file servers that participate in DFS or DFSR and select the one that has the following attributes:

  1. Fastest network link to the Backup Exec server (Gigabit vs 10/100)
  2. Fastest disk hardware (SCSI vs IDE)
  3. Fastest Processor(s)

- Installing the Remote Agent for Windows Servers (RAWS) -

  1. From within Symantec Backup Exec, Select the Tools menu, then choose "Install Agents and Media Servers on Other Servers"
  2. Choose Next in the Installation window
  3. Select Windows Remote Agents from the list, and select Add.
  4. Enter the file server's Computer Name, and the domain name it belongs to.   Click OK
  5. Next, you'll need to provide credentials that will allow you to run a program installation on the file server.  Enter them and choose OK.
  6. Select the features that you'd like to install on the file server, in this case, the "Remote Agent for Windows Systems", then click Finish.
  7. The Remote Installation Status window will indicate that the installation has completed successfully.

- Backing up your DFS and DFSR Folders -

  1. From within Backup Exec, select New Job from the Backup Tasks left-hand menu.
  2. Now in the Backup Job Properties screen, Under Selections, expand "Windows Servers" from the Favorite Resources tree.  Now that you've installed the Remote Agent on your file server, it will be listed in this tree.
  3. Expand your file server's tree, then expand the "Shadow Copy Components" tree, followed by the "User Data" tree.
  4. Next, you'll see the "Distributed File System Replication" tree, expand that, followed by the "DfsrReplicatedFolders" tree.
  5. Finally, you'll see all of your existing replicated folders; select the checkbox next to any of them to mark them for backup.
  6. Once you've made all of your selections, you may run or schedule your backup job as usual.

Comments

MitchR

This is the "by the book" process for backing up DFS shares - however - you will see some very significant performance issues doing this.

For more info, see:


Mark this post as the solution, and have good luck for 7 years.
Forget to mark this as the solution, and tomorrow your server will crash.

me79

I registered on backupexecfaq today, and in both Internet Explorer and Firefox I get no content after logging in. Has the FAQ site been closed?

Scott Meltzer

Excellent point Mitch,  

If you do experience significant slow-downs using the Shadow-Copy Components to back up DFS shares, you can try backing up the shares directly using the methods described in the articles listed above.

superalf

I've actually been researching this issue a lot lately, in an effort to figure out the best/most practical way of backing up DFS replicated data.
A big disadvantage of backing up DFS data via the Shadow Copy Components, if you are running Backup Exec 10 or 11, is that you CANNOT redirect data during restores. In v. 12 and above you can, according to the tech docs.

If you back up DFS user data through the shares, as Mitch pointed out, you can redirect. The only drawback to this method is that you lose the security attributes for files and folders. You are also bypassing the remote backup agent, which would probably result in a reduction in performance (compared to backing up files normally via the agent).

A third option is to stop the DFSR service on the remote server via the Pre/Post job configuration page. Basically, you issue a pre-command of “NET STOP DFSR” followed by a “NET START DFSR” post-command when the job finishes. This will allow you to back up the replicated data directly via a normal volume-level backup, resulting in quicker backups, retention of security information, and the ability to redirect restores. The downside is that stopping the DFSR service results in the loss of DFSR data used for calculating which files to replicate and to which sites; basically, it will slow down the replication process. A Microsoft tech support agent recommended against this option.
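For reference, the pre/post commands described above would be entered in the job's Pre/Post Commands page along these lines (a sketch only; "DFSR" is the service name on Server 2008 and later, and note the Microsoft caveat above about stopping the service):

```shell
REM Pre-command: stop the DFS Replication service before the backup starts
NET STOP DFSR

REM ... the volume-level backup of the replicated folders runs in between ...

REM Post-command: restart the service once the job finishes
NET START DFSR
```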

We’re currently investigating a fourth option, where we disable communication between the DFSR replicating hosts. In theory this should allow us to back up the data at the volume level without the problems of the other methods, but we have not had a chance to test this out yet.

For more details see: http://seer.entsupport.symantec.com/docs/304981.htm, http://seer.entsupport.symantec.com/docs/296168.htm, and http://seer.entsupport.symantec.com/docs/309537.htm

robnicholson

This really is a bit of a show stopper for using Backup Exec with DFS, which is a technology we're using for disaster recovery.

I find it rather ironic that each year we seem to move to new technologies that are making products like Backup Exec redundant. Our very expensive Quantum tape loader is pretty much redundant, as tape just doesn't work very well. We replicate our data to our USA site for our main DR. We want to use Backup Exec as the "Oops, I've just deleted a file" tool, but it doesn't play very well with DFS for the aforementioned reasons.

Seriously, our expensive BE support contract is up for renewal and I'm going to put that on hold whilst we look around for alternatives.

Cheers, Rob.

robnicholson

Anyway, why does DFS have to use shadow copy? Why isn't there a plain-vanilla file copy? I can use Robocopy to suck the data off way faster than BE...

Cheers, Rob.

me79

I guess this is because DFSR is something where files enter and leave the system through the DFSR staging area while replicating with other members. There is also a special DFSR database keeping track of things, so only VSS can guarantee that you are backing up a consistent state of the files.

FischerRoman

Yesterday I found a relatively new article from Symantec dealing with the problem of not being able to:

a) Backup DFSR Data by selecting the Shadow Copy Components

AND at the same time

b) Backup a Cluster via its Cluster Resource Name

Here's the Article: http://www.symantec.com/business/support/index?page=content&id=TECH164777


a) DFSR data should be backed up via the Shadow Copy Components.

b) Clustered servers should be backed up via their Cluster Resource Name. 

Conclusion: Since the clustered resource name does not show the Shadow Copy Components, both of these best practices cannot be followed.


There is an alternate way to back up the DFSR data, and that is using the share name. The DFS shares will show under the cluster name. Selecting the shares will allow the data to be backed up; however, the security information on the folders will not be obtained. This should not be considered a disaster recovery solution. For disaster recovery, run an occasional backup of the Shadow Copy Components on the active node in the cluster.

So the funny thing is, the "solution" is to back up the DFSR data using the share names and lose every ACL, and "from time to time" run some REAL backup, which isn't the supported way, because you may be trying to back up the wrong node.

I try to keep my anger down, but I have to admit that Symantec seems unable to provide working, supported, best-practice solutions for products which are more than 8 years old (DFSR).

What's your preferred method of backing up a DFSR Cluster?

robnicholson

> I try to keep my anger down, but I have to admit that Symantec seems not to be able to provide working, supported and best practice solutions for Products which are more than 8 years old (DFSR).

I learnt to keep my anger down, as BE wasn't doing my blood pressure much good either! That said, after many hours of battling I've managed to get our BE environment stable...


robnicholson

Only found this one after we wasted £££ buying the ADBO license, only to discover that true image/synthetic backups don't work with DFSR-based folders, which hold 99% of our data.

Once again, half-a-solution from Symantec :-(

Does anyone know if BE 2012 is any better in all of this? We're on the support contract, so we can install it. But from what I saw, it was mainly a user interface refresh and the underlying key technologies were unchanged, e.g. the dedupe engine had all the same performance problems.


aloy_si_baik

Configuring BE for a cluster is such a pain in the a**!!!

I have a cluster alias named JKTFS110, which is an alias for 3 file sharing servers: JKTFS03, 07 and 11. All of the servers have RAWS installed on each physical server. I use a Dell R310 as the media server and a Dell PowerVault 124T as the backup device. The media server backs up through JKTFS110, and this is where the problem starts. The backup job always completes with an exception alert, and *.pst files are always corrupted (for random users):

Remote Agent not detected on JKTFS110.luminaryprima.com.
Click an exception below to locate it in the job log
V-79-57344-3844 - The media server was unable to connect to the Remote Agent on machine JKTFS110.luminaryprima.com.

The media server will use the local agent to try to complete the operation.

Backup- JKTFS110.luminaryprima.comV-79-57344-3844 - The media server was unable to connect to the Remote Agent on machine JKTFS110.luminaryprima.com.

The media server will use the local agent to try to complete the operation.

V-79-57344-65277 - AOFO: Initialization failure on: "\\JKTFS110.luminaryprima.com\DFSData". Advanced Open File Option used: No.

Remote Agent not detected on JKTFS110.luminaryprima.com.

Backup- \\JKTFS110.luminaryprima.com\DFSDataWARNING: "\\JKTFS110.luminaryprima.com\DFSData\Profiles\Data\arya sadewa\My Documents\Outlook\Personal Folders.pst" is a corrupt file. This file cannot verify.

Verify- \\JKTFS110.luminaryprima.com\DFSData WARNING: "Personal Folders.pst" is a corrupt file. This file cannot verify."

My local (Indonesia) Symantec pre-sales guy was no help when I asked him about this... it seems he has never had any hands-on experience, from the way he answered me.

I posted in this forum about a week ago, and this Carlos guy told me to use DFS backup, but after I read this thread it seems there is no hope in BE... I think I'll suggest my customer use another backup software.

FischerRoman


I just re-read your post and realized I had overlooked "cluster"...
Backing up a DFS-R cluster with Backup Exec has to be done with a workaround:
- back up the active node.

As I posted some weeks ago, there's an article from Symantec describing the problem:

So please disregard the rest of my post below regarding connection problems between the server and RAWS! ;-)



There are two problems with your PST files:

  1. opening PST files located on a network share with Outlook is not supported by Microsoft:
    --> http://support.microsoft.com/kb/297019/EN-US
    --> http://blogs.technet.com/b/askperf/archive/2007/01/21/network-stored-pst-files-don-t-do-it.aspx
  2. replicating PST-files with DFS-R can lead to data corruption and replication problems
    --> http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_050
    - PST files stay open with R/W access as long as Outlook is running - the file gets modified just by starting Outlook, and then has to be replicated!
    - Imagine 50 users opening Outlook in the morning, each with a 2 GB PST file - that's 100 GB of data to be replicated (first staged, then replicated via RDC, so WAN traffic will be kind of low, but you will see high I/O utilization on the servers running DFS-R - 100 GB just from opening and closing Outlook without changing anything inside the PSTs!).
    - Imagine a user who starts Outlook (the PST file is modified), closes it and opens it again - the PST file will be replicated once (after closing Outlook), stay open (and locked) while Outlook is running, and be replicated again after the user closes Outlook at the end of the day when they log off...

So opening PST files from a file share is a problem in itself, but replicating them will get you in trouble.

Even though your errors seem to come from a communication problem between the Backup Exec server and the RAWS remote agent (check your firewall logs between the sites, and try configuring your agent communication to a fixed port and allowing it through the firewalls), backing up PST files that are open from a file share means backing up possibly corrupted data.
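As a sketch of the firewall side: the Backup Exec remote agent normally listens on TCP 10000 (verify the port in your own environment, as it is configurable), so a rule like the following on the file server would let the media server through the Windows firewall. The rule name is made up:

```shell
REM Allow inbound Backup Exec remote agent traffic (default port TCP 10000)
netsh advfirewall firewall add rule name="Backup Exec RAWS" dir=in action=allow protocol=TCP localport=10000
```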

That may be the reason your PST-files are corrupted - maybe Backup Exec shouldn't be blamed here!

If you get your Remote Agents working (or the communication between Server and RAWS) and your PST-files are still corrupt after backing them up, think about getting rid of PST-files and you'll be fine.

I know that's not easy, but with Microsoft behind your back (e.g. the lack of support for PST files opened from file shares) it should be easier to get it done.

One of my customers had PST files opened from network shares, and with the implementation of DFS-R we excluded all *.PST files from replication... Users had to stop using PST files, or move them to the local disk and manually back them up to an external drive or as a ZIP file on the network share.

Many users stopped actively using PST files, and with Exchange 2010 they should be gone forever (thank god!!!).

aloy_si_baik

Hi Roman,

Thanks man... your post enlightened me and my grumpy customer LOL

FischerRoman

You're always welcome! ;-)

So - did you solve the problem in the end?


Robert M

I would like to contribute a solution to the issue(s) presented above - currently running 2010 R3 on 2008 R2 SP1.

I am running 2 x 2008 R2 VMs as HA DFSR cluster servers. The servers both connect to a physical LUN on an EqualLogic device.

My investigations and research have shown that ADBO out of the box will work with clustered disks as long as the servers are not in a DFSR cluster - I also run 2 x 2008 R2 VMs in an HA cluster for data that is not replicated with DFSR. ADBO works great in this situation by selecting the Cluster Resource Name as the entry point for accessing the drive letter.

So I was out to find out why clustered DFSR disks won't work the same way. It is well documented that the only way to back up DFSR data is via the Shadow Copy Components, but this method yielded rubbish transfer rates 9.5 times out of 10, and the restores could not be redirected.
I tried backing up with standard MS VSS via the cluster resource share; while it worked, I saw AOFO warnings (V-79-10000-11219) about an initialization failure. The other problem with this is that NTFS file permissions are not restored when the data is.

After trawling the forums and error codes on symantec.com and looking at all the people who have a similar problem, I stumbled across this post - http://www.symantec.com/connect/forums/job-succesfull-no-files-are-backupped - as I had also experienced the same thing while trialing different options. The second-to-last post, by Gerard 2, led me to http://www.symantec.com/business/support/index?page=content&id=TECH92375.

I thought, what the hell, I'll try it out. After inserting the Active File Exclusion key into the registry and recycling the BE services, ADBO for clustered DFSR disks worked.

The EqualLogic hardware VSS provider snapped the volume; it got transported to the BE media server, attached, backed up, and returned to the DFSR cluster host to merge back in. I could not believe my eyes - nor the backup logs...

Now to test a restore: I selected my backup set from above, chose a redirected location, and hit Run Now. The files were restored to the alternative location with NTFS permissions intact.

Can anyone on the forum confirm my findings? It appears as though this could be a real, workable solution for those banging their heads against the wall with Backup Exec, ADBO, clustered disks, and DFS & DFSR.

I will be testing further to see if the registry key impacts SQL and Exchange, as the technote indicated, although we use agents for those systems rather than targeting .edb, .stm & .log files directly, if at all.

Colin Weaver

DFSR should be backed up in the officially stated ways (via Shadow Copy, or perhaps via share-level backups). Disabling AFE might work, but it will not be officially supported, and you will also have the added overhead that other things you do not need to back up as flat files will get backed up (or will report errors against skipped files that cannot be backed up, causing annoyance).

As such, if you continue with this configuration, make sure you thoroughly test backups and restores, as we may not be able to help if something is either not backed up or not restorable.

rablack

Hi, I have been following this issue for some time and am just as disappointed that DFSR and ADBO do not work together, and that DFSR in general is such a problem to back up.

I would be interested to know if anyone at Symantec has considered what has just occurred to me:

find out why the solution above works, and then separate it from the other problems it causes with SQL and the backing up of unnecessary files. If it is a specific file or set of files, then this could surely be resolved easily. Then your customers have DFSR and ADBO - or, put another way, your customers have disaster recovery and fast backup. Which is what they want.


Robert M

@ Richard.
  The only problem, as I understand it, is that if you try to target the actual SQL/Exchange or other protected files that are "always" in use, BEWS will try to back them up (as opposed to skipping them) and generate file-in-use exceptions. IMO, if you target these protected files directly through the file system rather than with the agents, you are asking for trouble anyway.

@ Colin.

Thanks for the update. Could you please provide the "official" Symantec procedure for when the DFSR is in a 2008 R2 cluster with 2 or more possible hosts? There are no Shadow Copy Components, as the cluster name is the entry point. We cannot pick a specific host to back up, as the host could be different each week due to normal scheduled maintenance and moving resources from one host to another.

Picking up at the share level with a single DFSR server is workable, and I have done that in the past.

With a DFSR cluster, all you see is:

Job ended: Tuesday, 16 October 2012 at 9:11:51 AM
Completed status: Failed
Final error: 0xe00084af - The directory or file was not found, or could not be accessed.
Final error category: Job Errors

For additional information regarding this error refer to link V-79-57344-33967

Click an error below to locate it in the job log

Backup- MEL-DFS
Directory  was not found, or could not be accessed.

None of the files or subdirectories contained within will be backed up.

V-79-57344-33967 - The directory or file was not found, or could not be accessed.

The other thing is speed with ADBO: I see pretty much twice as much throughput with ADBO as with a normal on-host backup - 2.5 TB @ 3200 MB/min compared to 1.5 TB @ 1700 MB/min, both to Gen1 LTO4 @ 3 Gbps SAS.

I second Richard above: Symantec really needs to look at this issue, as I don't see any reason why ADBO cannot be used with DFSR to pick up a flat file system.

FischerRoman

Hi Robert,


Until yesterday I was using EMC Replication Manager (or its CLI/scripts) to create hardware snapshots of the volumes holding DFSR data. This worked fine, but it was an ugly workaround that required one snapshot per volume to be permanently mounted on the Backup Exec media server, which tied up a large number of EMC snapshot LUNs (our EMC consultant was unable to work around this requirement) and also impacted storage performance (we still use the EMC CX4-120).

With an ongoing storage expansion and the move to a new concept using 24 smaller LUNs (800 GB) instead of the current 4 LUNs (2 TB), distributed across two two-node clusters in an active/active scenario for load balancing and better scalability, the use of EMC Replication Manager would not be possible for much longer.

So I enabled Backup Exec ADBO (the free 60-day license - it will be bought in the next couple of days so it doesn't run out) and also installed the EMC VSS Provider on all cluster nodes and the Backup Exec media server itself.

First tests showed a perfectly working Snapshot-Backup of non-DFSR-data on the snapped Volumes (including transport to the Media Server and a clean removal of the Snapshot from the storage after finishing the backup).

After setting the registry-key to disable the "Active File Exclusion" Feature in Backup Exec everything was backed up - including ALL DFS-R data.

This is the key set on the Media Server:

[HKEY_LOCAL_MACHINE\SOFTWARE\Symantec\Backup Exec For Windows\Backup Exec\Engine\Misc]
"Exclude Active Files"=dword:00000000
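If you prefer, the same value can be set from an elevated command prompt instead of importing a .reg file (same key and DWORD as the .reg fragment above; recycle the Backup Exec services afterwards, as described earlier in the thread):

```shell
REM Disable Active File Exclusion (set the DWORD back to 1 to re-enable it)
reg add "HKLM\SOFTWARE\Symantec\Backup Exec For Windows\Backup Exec\Engine\Misc" /v "Exclude Active Files" /t REG_DWORD /d 0 /f
```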

The only drawback of this solution is the manual work in a disaster recovery case where complete volumes are lost, including the DFSR configuration etc...
But that's no problem, as I have scripted and documented everything regarding DFSR - so even if the whole storage bursts into flames, it is possible to get everything working again using the last full snapshot backup, treating it like a pre-seeded, newly created DFS replicated folder which will then have all newer changes replicated from its counterparts.
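A pre-seeding step like the one described might be sketched with robocopy (the server and path names here are hypothetical; /B requires backup privileges, and the DfsrPrivate folder must not be copied into the new replica):

```shell
REM Pre-seed the restored data onto the replacement member before re-creating the replicated folder
robocopy D:\DFSData \\NEWMEMBER\D$\DFSData /E /B /COPYALL /R:3 /W:5 /XD DfsrPrivate /LOG:C:\Temp\preseed.log
```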

I wouldn't restore any DFSR database or system configuration at all - even when using the (hellishly slow) Microsoft DFS-R snapshot provider for backups.

I believe it's better to have everything under control; I don't trust Backup Exec when it comes to the restore of lost replicated folders...

robnicholson

> I have been following this issue for some time and am just as disappointed that DFSR and ADBO do not work together, and that DFSR in general is such a problem to backup.

It is a real shame, and AFAIK it's still the case several years after the problem was reported, despite a major new version (*cough*). The use of DFS & DFSR will only increase, IMO, as more companies look to replication to sister sites for disaster recovery purposes rather than relying completely on old-style backup processes to protect their business (as it's documented that many still went out of business).

This thread has made me stop and think, though. It's nothing really to do with backup, but the problem with DFS & DFSR is file locking if you make both ends of the DFS link live at once, e.g. UK people edit their local copy at the same time as USA people edit theirs. There is no distributed file locking in DFS.

So we're looking at PeerLock, which comes with PeerSync, as a replacement for DFS & DFSR. I wonder whether Backup Exec fares any better with that technology?

Cheers, Rob.
