
Seeding Backup Exec server from remote Linux clients

Created: 07 Apr 2013 • Updated: 13 Apr 2013 | 8 comments
KeirL:
This issue has been solved.


I'm looking into a solution whereby I can seed a central Backup Exec server from about 10 remote sites. Each site has just a single Linux server with about 500GB of data, and the WAN links are 10Mb/s.
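For context, a quick back-of-the-envelope calculation (decimal units, ignoring protocol overhead) shows why shipping the initial 500GB on a USB drive is attractive compared with pushing it over a 10Mb/s link:

```python
GB_TO_MEGABITS = 8 * 1000  # megabits per gigabyte (decimal units, no overhead)
link_mbps = 10             # WAN link speed from the post
full_gb = 500              # initial data per site
daily_change_gb = 5        # estimated daily change rate per site

full_days = full_gb * GB_TO_MEGABITS / link_mbps / 86400
incr_hours = daily_change_gb * GB_TO_MEGABITS / link_mbps / 3600

print(f"Initial full backup: ~{full_days:.1f} days")    # ~4.6 days
print(f"Daily incremental:   ~{incr_hours:.1f} hours")  # ~1.1 hours
```

So the nightly 5GB incrementals fit comfortably in an overnight window, but the initial 500GB full would saturate the link for the better part of a week per site.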

I can't dedupe at source, so 'collecting data via USB' isn't really an option (and, reading other posts, I don't think this would work anyway). So I was thinking about creating a VM of the central Backup Exec server using VMware Workstation and a 1TB USB drive, and dispatching this to the sites in turn (or perhaps getting several USB drives and doing multiple sites at once).

Whilst this is going on I would build the 'actual' Backup Exec server at the central site, and once I'd seeded/imported the backup from the USB drive, continue with incrementals over the WAN.

I'd be interested in people's thoughts on this approach, experiences if already tried, and/or comments on how to improve it, and even comments suggesting this is a stupid idea........  :o)

kind regards 


Comments (8)

pkh:

What you need to do is:

1) Do a full backup at the remote site.

2) Bring that backup to the main site and duplicate it to the dedup folder. This seeds the dedup folder.

3) Do a full backup across the WAN link.

4) Continue with your incremental backups.

You cannot skip the full backup because BE 2012 will not allow it.

You would also need to do occasional full backups. You cannot do incremental backups forever, because a restore would require your last full backup plus ALL the incremental backups taken since that full. Also, since all your incremental backups are required for a restore, they can never be expired, so your dedup folder will eventually fill up and your jobs will fail.
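The point above can be illustrated with hypothetical numbers (a 500GB full and 5GB daily incrementals, as in this thread): the restore chain, and the space the storage must hold, grows without bound until the next full backup.

```python
def restore_chain(days_since_full):
    """Backup sets needed for a restore N days after the last full
    (sizes are hypothetical, taken from the figures in this thread)."""
    chain = ["full (500 GB)"]
    chain += [f"incr day {d} (5 GB)" for d in range(1, days_since_full + 1)]
    return chain

# After 30 days of incrementals, a single restore needs 31 backup sets,
# and none of them can be expired, so the store keeps growing.
chain = restore_chain(30)
print(len(chain), "backup sets needed for a restore")          # 31
print("space that cannot be reclaimed:", 500 + 5 * 30, "GB")   # 650 GB
```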

You might want to consider setting up media servers with dedup folders at your remote locations and then using optimised duplication to duplicate backup sets to the main site. This way only the changed data blocks are sent over the WAN link, thus minimising bandwidth requirements. You would also need the CASO option.

KeirL:

Thanks very much for the feedback.

So when you say "1) do a full backup at the remote site" - this is the bit I'm most interested in, in terms of how to achieve it. This is a new solution and I don't have Backup Exec at any of my remote sites. I assume you mean do a 'Backup Exec' full backup, hence my thoughts on having a Backup Exec media server as a VMware Workstation VM (with a Backup Exec eval licence) on a 1TB USB drive to do this. I would ship this to site and remotely configure it to create the initial backup, by installing the Backup Exec client on my Linux server and backing up to the virtual Backup Exec media server on the USB drive. There are no technical skills at the remote sites, so I just want them to plug in the USB drive and leave the rest to me.

I also wasn't planning to use dedupe at all, but was going to use synthetics - so I could create the full backups at the central site. This is low-change-rate and relatively non-critical data, and I was looking for a low-cost solution that would allow me to remove the backup infrastructure that currently exists at each site; I wasn't keen to introduce an extra media server at each site. I'm confident that once all the data is in place at the central site and I can run synthetic backups it will be a good solution (only about 5GB of changed data per day per site). My challenge is how to get that initial baseline backup completed.



pkh:

You cannot do synthetic backups with Linux servers. Synthetics are only supported for Windows servers.

Also, whether you use dedup or not, the baseline cannot be done on a media server.

When you seed a dedup folder, you are not creating a baseline. You are just populating it so that similar data blocks do not need to be stored later. Notice that my procedure above has 2 full backups: one at the remote site and another (the baseline) at the main site.
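As a conceptual toy model only (not how Backup Exec actually implements it), seeding a dedup store works roughly like this: the shipped copy populates the block store, so the later full backup over the WAN only adds blocks the store has not already seen.

```python
import hashlib

store = {}  # fingerprint -> block: the dedup store's content-addressed index

def ingest(blocks):
    """Add blocks to the store; return how many were actually new."""
    new = 0
    for b in blocks:
        fp = hashlib.sha256(b).hexdigest()
        if fp not in store:
            store[fp] = b
            new += 1
    return new

seed = [b"block-%d" % i for i in range(100)]            # copy shipped on USB
print("seed stores:", ingest(seed), "new blocks")       # 100 new blocks
wan_full = seed + [b"changed-%d" % i for i in range(5)] # later full backup
print("WAN full stores:", ingest(wan_full), "new blocks")  # only 5 new blocks
```

This is why the procedure has two fulls: the second full is still read in full at the source, but only the unseen blocks need to be stored (and, with source-side dedup or optimised duplication, transferred).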

KeirL:

OK - thanks - didn't realise about the synthetics for Linux.

But basically, is my idea of using a VM to do the full backup at the remote site the best option, given that I don't have a Backup Exec media server at the remote site? I can't see another way to achieve this.

I was going to create the VM with the same identity as the central backup server, so the client does its backup to a local 'virtualised backup server'. The data in the media set is then imported into the 'real' backup server at the central site, and the remote client then does an incremental to the 'real' backup server (I'll need to sort out a change in IP address)...... would that work?

My thinking on the dedupe folder was that I didn't believe you can dedupe Linux at the client, so the client would still send the full amount over the WAN, and the data would only be deduped once it reached the media server. It's sending data over the WAN I'm looking to reduce, more than the total storage at the destination. I can't see the benefit of doing the initial backup locally if it only saves space at the destination.

thanks again



pkh:

Symantec always recommends a physical server for a media server.  Using a VM is an alternative configuration with its attendant problems.  Read Colin Weaver's comment in this discussion and the document referenced by him.

Even if you managed to do what you want to do, you would still have the problem of doing occasional full backups over the WAN link.

If the Linux distribution used at the remote sites supports RMALS (the Remote Media Agent for Linux Servers), you can back up to devices attached to the remote Linux servers using the media server at the main site.

KeirL:


Thanks and I appreciate the information and patience in understanding what I'm trying to achieve here.

So I only want to end up with a single Backup Exec server which is to be in a central datacentre which will be a physical machine with 10TB of disk storage. At my 10 remote sites I want just the Backup Exec client (ideally).

As I'm only anticipating <5GB of changed data per day, I'm happy to back this up as full-fat data overnight during incrementals. My challenge is how to get the initial 500GB per site into my Backup Exec server at the central datacentre. These 10 sites are global, and each has just a single Linux server and 1 or 2 (very non-technical) people. There are also a couple of Windows servers, but these are out of scope for this backup requirement.

So my thoughts on a local 'virtualised media server' are purely a temporary proposal, to allow me to do a local backup in a Backup Exec format onto a USB drive that can be couriered back to the central datacentre. Once I have the data on USB in my datacentre, I effectively have the 500GB of data without it having had to travel over the WAN. I need the remote Linux server to 'think' it's done a full backup to the central Backup Exec server, and so be able to run incrementals from that point - e.g. set its flags accordingly. I also need the central Backup Exec server to be able to ingest the data on the USB drive and request incrementals from the remote client from that point. I will need to rethink how best to get around the constraint of not being able to do synthetics for these servers......

I hope this is a little clearer now and you can point out the errors in my plan :o)

again - I value your assistance (and appreciate your patience....!)



pkh:

I don't see a way to do your full backup other than at the central site.  Also, as I said before, you still have the problem of occasional full backups.

KeirL:

ok - so it sounds like there is no way to prevent full-fat data going over the WAN without retaining a media server on every site and doing optimised duplication....

ok - back to the drawing board then :o)

thanks all the same