
Your opinions about my backup strategy

Created: 23 May 2010 • Updated: 22 Sep 2013 | 5 comments
This issue has been solved. See solution.

Hi everybody,

I'm using BE 2010 (since 12.5) to back up about 110 hosts. I have a wide range of applications to back up (e.g. Exchange, SQL, UNIX, Lotus Notes, Windows, NDMP, VMware, etc.), and I have powerful backup hardware:

- an 8-CPU, 8 GB RAM Sun server
- an SL500 library (4 tape drives)
- a JBOD 4400 (20 TB) disk system

As you can guess, sometimes I have difficulty backing up everything successfully. I created 20 different jobs to group these servers and applications. My strategy is to group each type of backup together (Windows, UNIX, SQL, Exchange, etc.), and then divide each group into sub-groups by estimating the backup times and splitting them into groups of roughly equal duration. (I won't talk about my D2T and T2T backup strategies here.)
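
To show what I mean by sub-groups of roughly equal duration, here is a rough Python sketch of the balancing I currently do by hand (the host names and hours below are made-up examples, not my real inventory):

# Greedy balancing: always assign the next host to the job group
# with the smallest total estimated backup time so far.
estimated_hours = {
    "win-file-01": 4.0, "win-file-02": 3.5, "sql-01": 2.0,
    "sql-02": 2.5, "exch-01": 5.0, "exch-02": 4.5,
}

def split_into_groups(hosts, group_count):
    groups = [{"hosts": [], "total": 0.0} for _ in range(group_count)]
    # Place the longest backups first so the totals stay balanced.
    for host, hours in sorted(hosts.items(), key=lambda kv: -kv[1]):
        target = min(groups, key=lambda g: g["total"])
        target["hosts"].append(host)
        target["total"] += hours
    return groups

for i, group in enumerate(split_into_groups(estimated_hours, 2), start=1):
    print("Job %d: %s (~%.1f h)" % (i, ", ".join(group["hosts"]), group["total"]))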

But in my opinion it is not enough, because I still have problems with some kinds of backups: I can't finish my full backups within the backup window. This can have several other causes, but I want to discuss my backup strategy.

Does anybody use this kind of backup strategy? If yes, have you divided your jobs further than this? For example, each of my Windows jobs backs up 10 hosts, and I have 2 Exchange servers with a total of 400 GB of data. Should I create separate jobs for each information store on each server? That is what I mean by "dividing deeper".

I will be very happy if you share your opinions with me.

Best regards.

Comments (5)

CraigV

Hi,

Why not look into amalgamating these jobs further?
I would look at grouping jobs as follows:

1. ALL Database jobs together in 1 job, with no Advanced Open File Option enabled. This would include SQL, Exchange, and Lotus Notes.
2. All UNIX in a separate job (you might have done this).
3. File Server in another
4. System State/random data in another...etc

Basically, the idea is to cut those 20 jobs down to as few jobs as possible and stream 4 jobs at a time to your library.
With Backup Exec 2010, Symantec got into the deduplication market at a non-enterprise level. This would allow you to run backups on your file server (and any other files, for that matter) and deduplicate ("leave out") duplicate data. If the backup runs across duplicate files, it backs them up as pointers to the first instance. This can cut down backup sizes and times quite substantially, depending on the amount of duplicate data.
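
Very roughly, "pointers to the first instance" works something like this toy Python illustration (this is just the general idea of a chunk store, not how BE implements its dedupe folder internally):

import hashlib

store = {}      # chunk hash -> chunk data, stored only once
catalog = []    # what the "backup" records: (file name, list of chunk hashes)

def back_up(name, data, chunk_size=4096):
    hashes = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # first instance: store the actual data
            store[digest] = chunk
        hashes.append(digest)        # every later instance is only a pointer
    catalog.append((name, hashes))

back_up("fileserver1/report.doc", b"A" * 10000)
back_up("fileserver2/report_copy.doc", b"A" * 10000)   # duplicate content
print("unique chunks stored:", len(store))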

Furthermore, BE 2010 introduced options to archive file data or Exchange data. This allows you to place that data on other storage, while still letting users retrieve it themselves from their workstations. Again, this serves to cut down the amount of data being backed up and decreases the time taken, too.

The only other option you have, should you decide to stay with your number of jobs, would be to move from full backups to incremental or differential backups, which back up changed data only. You would then look at running your full backups over a weekend, when there is less impact if jobs run into the next day.
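
As a quick back-of-the-envelope example of the difference this makes (the 400 GB figure comes from your Exchange servers; the 5% daily change rate is just an assumption, plug in your own numbers):

full_gb = 400.0          # e.g. the two Exchange servers mentioned above
daily_change = 0.05      # assume 5% of the data changes per day

full_every_day = 7 * full_gb
weekly_full_plus_inc = full_gb + 6 * (full_gb * daily_change)     # each incremental: changed since yesterday
weekly_full_plus_diff = full_gb + sum(full_gb * daily_change * d for d in range(1, 7))  # each differential: changed since the full

print("Full every day:       %.0f GB/week" % full_every_day)
print("Weekly full + inc:    %.0f GB/week" % weekly_full_plus_inc)
print("Weekly full + diff:   %.0f GB/week" % weekly_full_plus_diff)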

Another alternative would be to look at backing up your VMs in full (running vSphere 4 allows incremental backups, similar to Veeam). You would still need the licenses for your applications, but by backing up your VMs directly you would then be able to restore incrementally from them.

Hope this all helps?

Laters!

Alternative ways to access Backup Exec Technical Support:

https://www-secure.symantec.com/connect/blogs/alte...

mbuyukkarakas

Hi Craig, thanks for your reply.

Sorry, I forgot to describe my current job design. As you said, I have grouped my jobs as differential/incremental during the week and full on the weekend, to save time and use the advantages of BE.

I don't want to group all DB sources (SQL, Exchange, etc.) into one single job, because in case of a failure I don't want to restart the complete job (is there any other way to recover failed backups?). That's why I divided my jobs into small pieces. I have also been following the interaction between BE and Windows 2003 64-bit for a long time: if backups take a very long time, the performance of the server and the other components drops too much. So sometimes I'm not able to finish the weekly full backups during the weekend. That is my second goal: to run small, quick jobs so that they finish within the backup window.

Do you agree with me, or do you have any other ideas?

Thanks again.

PS: I haven't bought the dedupe option yet; I have to wait a while for that.

CraigV

Hi there,

This COULD get very interesting, depending on how much cash you want to spend =P
But here goes...

1. Backup Exec 2010 has (like other versions) a 60-day trial. You can enable the Dedupe option (on an x64-based server!) for 60 days and trial it. It has to go to disk, so make sure you have enough space available for this.

2. If you have a SAN in place (servers attached to a SAN, tape library attached to the SAN, etc.), you can make use of BE's SAN SSO option. What this does is share your SAN-attached tape library amongst any SAN-attached servers with large backups. The end result is a backup running at 4 Gb (or whatever the speed of your SAN) vs. a backup running across your LAN; see the rough comparison after this list. I used this before we virtualised our client's head office, and my backup times dropped by 60%. It is also trial-able for 60 days. The downside is that you need full versions of Backup Exec installed on all the SAN-attached servers you want to have accessing your tape library.

3. Look into a large storage device (like an HP EVA/EMC CLARiiON) that you can run your backups to. Disk is going to be a faster option than tape, and once on disk, you simply stream off to tape and don't worry about the time that takes, because it has no impact on your production environment.

4. Look at upgrading the tape drives in your library (assuming they aren't already LTO4 or LTO5). Faster drives mean faster backups. It also means an outlay of cash for new tapes to make use of that!
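
On point 2: to give a feel for why the SAN path matters, here is a rough comparison (it assumes 1 Gb Ethernet on the LAN side and a flat 70% link efficiency, and it ignores disk and tape bottlenecks, so treat it as an order-of-magnitude estimate only):

def hours_to_move(size_gb, link_gbit, efficiency=0.7):
    # convert link speed (gigabits/s) to an effective GB/s, then to hours
    throughput_gb_per_s = link_gbit / 8.0 * efficiency
    return size_gb / throughput_gb_per_s / 3600.0

for size_gb in (400, 2000):
    print("%4d GB: 1Gb LAN ~ %.1f h, 4Gb SAN ~ %.1f h"
          % (size_gb, hours_to_move(size_gb, 1), hours_to_move(size_gb, 4)))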

I think that a better idea can be generated if you are able to provide the following:

a. Speed of your LAN
b. Speed of your tape drives
c. Do you have a SAN in place? If so, what speed, and what all is connected to it?
d. Size of your various backup jobs?
e. Times that the backups run?
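
Once you have those figures, a quick sanity check like the one below shows whether the window is even achievable on paper (the numbers here are placeholders, apart from the LTO-4 native speed of roughly 120 MB/s; if your total comes in well under the theoretical capacity, the bottleneck is the LAN or the clients rather than the drives):

window_hours = 48          # e.g. a weekend window
drives = 4                 # tape drives in the SL500
drive_mb_per_s = 120.0     # LTO-4 native speed; compression will change this
total_full_gb = 8000.0     # total size of all full backups (placeholder)

capacity_gb = drives * drive_mb_per_s * 3600 * window_hours / 1024.0
print("Theoretical drive capacity for the window: %.0f GB" % capacity_gb)
print("Fits on paper" if total_full_gb <= capacity_gb else "Does not fit, even on paper")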

Thanks!

Alternative ways to access Backup Exec Technical Support:

https://www-secure.symantec.com/connect/blogs/alte...

SOLUTION
Hywel Mallett

As Craig says, in a large environment like this there are lots of routes you could go down (and lots of options for you to spend money on).
One thing I did notice, though, is that you mentioned setting up lots of jobs and trying to adjust them. Are you using policies at all? Once I got past more than a handful of resources to back up, I found it much easier to set up policies and backup selection lists. This can create quite a lot of backup jobs (I have 25 policy-created jobs in a much smaller environment than yours), but it makes them easier to manage, and you can let Backup Exec manage which jobs will run in parallel.

CraigV

Hi,

Any news on whether or not this helped?

Thanks!

Alternative ways to access Backup Exec Technical Support:

https://www-secure.symantec.com/connect/blogs/alte...