Netting Out NetBackup

Frequently Asked Questions on NetBackup Accelerator

Created: 03 Jul 2012 • Updated: 28 Mar 2014 • 91 comments
By AbdulRasheed

NetBackup Accelerator is an exciting feature introduced in NetBackup 7.5 and NetBackup appliance software version 2.5. This blog is not a substitute for NetBackup documentation. Because NetBackup Accelerator transforms the way organizations do backups, I am compiling a list of frequently asked questions from the NetBackup Community forums and providing answers. If you have follow-up questions or feedback, post them as comments; I shall try to answer them, or someone else in the community can jump in. Fasten your seat belts! You are about to get accelerated!

What is NetBackup Accelerator?

NetBackup Accelerator provides full backups for the cost of an incremental backup.

cost reduction in full backups = reduction in backup window, backup storage, client CPU, client memory, client disk I/O, network bandwidth, etc.

NetBackup Accelerator makes this possible by using a platform- and file-system-independent track log to intelligently detect changed files (there are a number of intellectual properties associated with this technology) and send the changed segments from those files to the media server. These changed segments are written to a supported storage pool (currently available only in NetBackup appliances, NetBackup Media Server Deduplication Pools and PureDisk Deduplication Option Pools), and an inline optimized synthetic backup is generated.

What is the NetBackup Accelerator track log?

The track log is a platform- and file-system-independent change tracking log used by NetBackup Accelerator. Unlike file-system-specific change journals (e.g. the Windows NTFS change journal), there are no kernel-level drivers that run all the time on production clients. The track log comes into play during the backup and is populated with entries that NetBackup Accelerator uses to intelligently identify changed files, and the changed segments within those files.

The size of the track log is a function of the number of files and the size of the files in the file system. The size of the track log does not increase when the data change rate in the file system increases.
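
To illustrate the concept (a deliberately simplified Python sketch of track-log-driven change detection; this is not Symantec's actual implementation or on-disk format): cheap metadata comparisons rule out unchanged files, and per-segment hashes identify which parts of a changed file must be read and sent.

import hashlib
import os

SEGMENT_SIZE = 128 * 1024  # illustrative; the real segment size is internal to NetBackup

def segment_hashes(path):
    # Hash fixed-size segments of a file; a changed segment yields a new hash.
    hashes = []
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(SEGMENT_SIZE)
            if not chunk:
                break
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_segments(path, track_log):
    # Compare cheap metadata first; hash segments only if the file looks changed.
    st = os.stat(path)
    entry = track_log.get(path)
    if entry and entry['mtime'] == st.st_mtime and entry['size'] == st.st_size:
        return []  # unchanged file: nothing to read, nothing to send
    new_hashes = segment_hashes(path)
    old_hashes = entry['hashes'] if entry else []
    track_log[path] = {'mtime': st.st_mtime, 'size': st.st_size, 'hashes': new_hashes}
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h]

Note that the bookkeeping here grows with the number and size of files, not with the change rate, which is why the real track log behaves the same way.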

So, NetBackup Accelerator is good for full backups. How about incremental backups?

The primary benefit from NetBackup Accelerator is for full backups. However, NetBackup Accelerator also reduces a subset of costs in running incremental backups. 

cost reduction in incremental backups = reduction in client CPU, client memory, client disk I/O

Since I can get full backups for the cost of doing an incremental backup, should I simply delete my incremental schedules and increase the frequency of full backups? 

Not recommended, unless you are unconcerned about catalog growth. Note that full backups have catalog files (the "*.f" files in the NetBackup image database) larger than those of incremental backups. Running full backups in place of your current incremental backups means that your image catalog size will increase; for example, if a full backup catalogs 1,000,000 files while a daily incremental touches only 2% of them, each full's "*.f" file is roughly 50 times larger than an incremental's. A larger catalog requires more space on your master server, and it takes longer to run catalog backups.

As I mentioned in the answer to the previous question, NetBackup Accelerator does help with incremental backups as well, by significantly reducing the impact of backups on the client's resources. Stick with your current schedules and take advantage of NetBackup Accelerator.

What is NetBackup Client Side Deduplication?

NetBackup Client Side Deduplication deduplicates data and sends unique segments directly to the storage server. The media server is not involved in this data path. For example, if your storage target is a NetBackup 5020 appliance sitting behind a media server, the NetBackup client sends the unique segments directly to the NetBackup 5020 appliance. This design makes it possible for a media server to support many storage pools and clients (scale-out on the front end as well as on the back end).

If your storage pool is a media server deduplication pool (MSDP) or a NetBackup 52xx appliance, the storage server and media server co-exist on the same physical system. Even in this case, NetBackup Client Side Deduplication sends unique segments directly to the storage server (which just happens to be a media server as well), and hence you get both front-end and back-end scale-out. For example, it is possible to have a NetBackup 5220 host a media server deduplication pool while also serving as a media server for another NetBackup appliance or media server.

What is NetBackup Optimized Synthetic Backup?

NetBackup Optimized Synthetic Backup is a feature where a full backup can be synthesized on the storage server from the previous full and subsequent incremental backups, without physically reading the component images and writing a new image. This technology has been in NetBackup since 6.5.4. It is available on all NetBackup appliances, Media Server Deduplication Pools and PureDisk Deduplication Option Pools. Recently some OpenStorage partners have also announced support for this feature.

Can you compare and contrast NetBackup Accelerator and NetBackup Optimized Synthetic backup?

NetBackup Accelerator provides all the value you get from NetBackup Optimized Synthetic backup, and a lot more. In fact, if you have a supported storage pool for NetBackup Accelerator, there is really no need for you to use NetBackup Optimized Synthetic backup.

NetBackup Optimized Synthetic backup is a post-backup synthesis: you need a separate schedule that generates the synthetic backup after the incremental backups are done. NetBackup Accelerator generates the full backup inline, while data is being sent from the client.

NetBackup Optimized Synthetic backup requires you to run traditional incremental backups, so all the limitations of a traditional incremental backup apply. For example, those incremental backups require the NetBackup client to enumerate the entire file system to identify changes. NetBackup Accelerator makes it possible to intelligently identify changes and read just the changed files.

I can list a lot more, but you get the point by now. Bottom line is… if your storage pool supports NetBackup Accelerator, there is no need to use the older NetBackup Optimized Synthetic backup schedules.

Can NetBackup Accelerator and NetBackup Client Side Deduplication co-exist?

Of course! In fact, these two features are like milk and cookies. They are tasty by themselves but delicious when eaten together!

NetBackup Accelerator reduces the cost of doing a full backup (see the question “What is NetBackup Accelerator?”  for the definition of the cost). When you combine it with NetBackup Client Side Deduplication, some of the advantages are…

  • Global deduplication without a deduplication processing penalty on the client. For our competitors, turning on source-side deduplication implies resource consumption on the production client system to support dedupe fingerprinting. Because of NetBackup Accelerator, the resources needed on the client are significantly lower in NetBackup. In fact, it is fair to say that NetBackup Accelerator lets you dedupe at the source confidently.
  • Ability to write directly to the storage server. If client side deduplication is not enabled, Accelerator sends changed segments to the media server first. With client side deduplication enabled, the changed segments are sent directly to storage. The result is scalability (front-end and back-end scale-out): a media server can support hundreds of clients and tens of storage pools.
  • Support for in-flight encryption and compression (configurable in pd.conf on clients; see the sample after this list)
  • No incremental licensing cost. Both Accelerator and NetBackup Deduplication are on the same SKU; you already have both capabilities if you paid for one or the other. Turning these features on or off takes just a mouse click, so try it!
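
For illustration, here is what those two switches look like in a client's pd.conf (on UNIX clients typically /usr/openv/lib/ost-plugins/pd.conf; treat this as a sketch and check the NetBackup Deduplication Guide for the authoritative parameter list and defaults):

# pd.conf excerpt (illustrative)
COMPRESSION = 1   # compress backup data before it leaves the client
ENCRYPTION = 1    # encrypt backup data in flight

The deduplication plug-in reads pd.conf at backup time, so the next backup should pick up the change.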

Are there any workloads where NetBackup Accelerator cannot be used?

The graphical user interface is designed to grey out the "Use accelerator" check box for policy types where NetBackup Accelerator is not currently supported. Furthermore, if you happen to choose a storage unit that does not support NetBackup Accelerator, policy validation is designed to fail when you try to save changes to the policy.
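
If you prefer the command line, the same attribute can be toggled with bpplinfo; this sketch assumes the -use_accelerator option described in the NetBackup Commands Reference (verify it against your release, and "MyPolicy" is just a placeholder):

# Enable Accelerator on an existing policy; expect this (or later policy
# validation) to fail if the policy's storage unit does not support it.
/usr/openv/netbackup/bin/admincmd/bpplinfo MyPolicy -modify -use_accelerator 1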

Are there any design considerations when not to use NetBackup Accelerator when NetBackup Client Side Deduplication is used? 

No! NetBackup Accelerator does not have any negative effects on NetBackup Client Side Deduplication.

Are there any design considerations when not to use Client Side Deduplication when NetBackup Accelerator is used? 

No functional limitations.  But there are a couple of situations where you do not get the advantage of NetBackup Client Side Deduplication at this time.  The good news is that there is nothing you need to do during the design or implementation.  NetBackup knows not to attempt NetBackup Client Side deduplication in these scenarios. I am listing them for the sake of awareness.

  • Remember that NetBackup Client Side Deduplication is available for a subset of NetBackup client platforms; refer to the NetBackup Deduplication documentation for more info. NetBackup Accelerator is available for ALL supported client platforms with the exception of OpenVMS.
  • If your storage pool is NetBackup Cloud (Nirvanix, AT&T Synaptic, Rackspace, Amazon, etc.), NetBackup Client Side Deduplication is not currently available. NetBackup Accelerator is supported.

Note: Thank you for so many follow-up questions! I have made sincere attempts to answer all of them below. Furthermore, I would like to bring to your attention two additional blogs you may be interested in.

Comments (91)

RLeon:

Thank you for this informative post;
g-force was felt.

RLeon

Smartmil8:

Agreed; a really informative post. Thanks to the author!

Stephane COLIN:

Hi,

A very good synthesis, thanks.

Can you explain to us how to work with the "Accelerator track log"? (for troubleshooting, tuning, ...)

Stephane COLIN

...... NBU Guy ..... 

AbdulRasheed:

Hi Stephane, 

   I shall get back to you on this after consulting with Support to see if there are any plans to publish a TechNote. Typically troubleshooting steps are documented in TNs and in Troubleshooting guides.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Fred2010:

Interesting, but what I would like to know is whether my deduplicating storage units are supported.

I have DataDomain 690 with Boost in use, and also HP B6200 with Catalyst.

Can you please let me know if these can be used with NBU Accelerator?

If not, when will they be supported?!?

Thx!

AbdulRasheed:

Hi Fred,

  As Mayur had already mentioned, the APIs required for Accelerator are made available to all OpenStorage vendors and you could follow up with EMC or HP (in your case) to see where they are with updating their OST plugins to support NetBackup Accelerator. 

  Some of the OST features have been quite difficult for partners to implement on account of architectural limitations of the backend device. For example, it took almost three years for partners to build a plugin that can support Optimized Synthetic Backups. With that in mind, what Symantec has done with NetBackup Accelerator is to provide NetBackup Deduplication at no additional cost (both features are in the same SKU). Hence if you have some decent storage (for instance, depreciated production storage that you can repurpose), simply attach it to your media server and make a media server deduplication pool (MSDP) for now. That way you can start using NetBackup Accelerator today for a set of workloads that could really use a performance improvement. Once the vendors provide you the plugin, you can migrate backups (bpduplicate) to their storage and transfer the license.

  Just a thought. In any case, do talk to your vendors about their plan. 

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Fred2010:

I have 9 TB of SATA disk in RAID on each media server, currently not in use; would that be suitable for use with Accelerator?

Also: you mention this is a separate license option. We have a backend TB license for our company. Does this license encompass the Accelerator option?

Thanks!

AbdulRasheed:

You would be able to use it. Please read the media server sizing guidelines in this document: http://www.symantec.com/docs/TECH77575 (take a look at pages 15 onward). You need to make sure that the RAID you have supports a write speed of 200 MB/sec or more. Since you are running Boost and Catalyst, it sounds like you have media servers with decent processing power and memory; those requirements are also given in the same document.
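
As a quick sanity check before committing the volume (my suggestion, not part of the TechNote), you can measure the raw sequential write speed on Linux with dd; /msdp here is a hypothetical mount point on the candidate RAID volume:

# Write 10 GB with direct I/O so the page cache does not flatter the result.
dd if=/dev/zero of=/msdp/ddtest bs=1M count=10240 oflag=direct
rm /msdp/ddtest

Look for 200 MB/sec or better in dd's summary line before building the MSDP there.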

The platform per-terabyte license does not cover the Data Protection Optimization add-on you need in order to use NetBackup Accelerator. Talk to your sales rep for an evaluation key and try it out.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Mayur Dewaikar:

Fred,

NBU Accelerator is available through the Open Storage API to all Symantec OST partners. EMC and HP will need to implement this feature as part of their OST implementation with Data Domain and the B6200 respectively. Once they have done this, you can use accelerator on these devices.

Please note, use of NBU accelerator requires the NBU Data Protection Optimization Option license.

Hope this helps!

Fred2010:

Helps somewhat, thank you :)

I have no idea who to ask how far they are with implementing stuff (lol: they probably wouldn't tell me anyway!)

Don't EMC and HP have to work together with Symantec to get the APIs working on their machines?!?

It would be nice if we could all see somewhere which hardware manufacturers are working on getting it implemented on their systems. It could give certain hardware an edge over a competitor's hardware if one supports it and the other doesn't.

For that we would need an overview of who has it, who doesn't, and who is still working on it.

Someone at Symantec must know this info. It would be great if it could be made public (if at all possible).

AbdulRasheed:

Hi Fred,

   You are absolutely right. The OpenStorage partners do work with Symantec to develop these plugins. Symantec needs to qualify the plugins for OpenStorage API conformance.

  Note that partners are developing these plugins so that they gain a competitive advantage over devices without OpenStorage support. Further, the more OpenStorage features a partner supports (e.g. optimized duplication, optimized synthesis, Auto Image Replication, NetBackup Accelerator, etc.), the better the competitive advantage they get over others with fewer OpenStorage features. Hence Symantec offers NDAs to partners so that their development and release plans for the plugins are protected until they formally make their announcements.

   Thus, Symantec cannot share anything related to where a particular partner is in terms of supporting a specific OpenStorage feature. We encourage customers to contact the respective partner for their backend device. I hope this helps explain why we cannot speak to upcoming features in partner OST plugins.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Fred2010:

LOL :)

If everybody keeps it a secret it is very hard for customers to decide whether to invest in Accelerator...

I do understand the problem, but for us it is really impossible to find out whether a vendor is working on a technology or not:

All our contacts are sales people and not the 'core' developers. They have no idea what OST is, except that 'it' is supported (whatever 'supported' entails is not clear to most).

EMC & HP support OST, but what specific features are supported is harder to find out...

Some extra questions:

1) Does Accelerator work with ALL policy types?
2) Is there an advantage to using Accelerator with FULL VMware backups?
3) If so, how much runtime advantage can one expect, generally?

Thanks!

Fred

 

AbdulRasheed:

Hi Fred,

   Unfortunately, I cannot speak to the road map of our partners; sorry for not being able to help on that matter. All I can say is that you do have NetBackup Dedupe in the same license, and hence you can use it until partners offer support. Thus you are not stuck in case things do not happen quickly enough from a partner.

   1. Currently, NetBackup Accelerator supports Standard and Windows policy types.

   2. Are you referring to NetBackup for VMware? That is a true off-host backup solution. NetBackup Accelerator is not currently supported with NetBackup for VMware.

      If you are referring to running a NetBackup client within a VM (be it VMware, Hyper-V, XenServer, Solaris Zones...), NetBackup Accelerator helps significantly reduce the resource overhead (client CPU, client memory, disk I/O, etc.; see the "cost of doing backups" section in the FAQ) and hence is recommended.

   3. It is inversely proportional to the data change rate. The lower the daily rate of data change, the better the runtime advantage.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

SeanRegan:

This is one of the key benefits of SYMC integrated appliances: all of the capabilities work together with the backup software and the storage. This helps eliminate some of the lag between when the software provider delivers a feature like Accelerator and when the backend storage provider (EMC in this case) supports it.

effiko:

I have seen that Data Domain has a sizing tool, and EMC guarantees that the product it recommends will not be the performance bottleneck for the defined data type and mix.

Symantec's TECH77575 seems like a good point to start with.

Is there an automated tool (even an Excel sheet) from Symantec to calculate a stable solution given the input mix of data, size and backup window?

 

AbdulRasheed:

Sorry, I had been away for a while. 

Yes, there are calculators and other tools available for partners. Please talk to your channel/partner account manager to get this for you. 

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

LuisLiendo:

Do you foresee any potential problems running Optimized backups on clients running a NetBackup version lower than 6.5.4 (i.e. 6.5.3.1)???

Our master servers are at 7.1 with NBU 5020 appliances at 1.4.2, but we have several Solaris clients running NetBackup 6.5.3.

Thoughts ???

 

 

AbdulRasheed:

Hi Luis, 

   Optimized Synthetics is an OpenStorage API. It requires NetBackup 6.5.4 or higher at the media server. This feature is not really client version dependent. But the amount of testing done for older versions will be minimal. I would recommend opening a Support case. 

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

NickSW:

Policy validation doesn't appear to succeed with Accelerator enabled and the policy storage configured to a storage unit group containing PureDisk disk storage units.

AbdulRasheed:

Hi Nick, 

Did you upgrade from an earlier version of NetBackup? You are entitled to use Accelerator since you have the NetBackup Deduplication Option/Add-on license. However, the older license keys do not turn on the specific bit needed for NetBackup Accelerator. Please work with the sales team for a new Data Protection Optimization Add-on/Option license for the same quantity of dedupe license you currently have. The new key (which you are getting at no additional cost) will resolve the issue.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Morten Seeberg:

Hey Abdul, what about Shadow Copy Components and Accelerator... especially thinking about Microsoft DFS-R servers, which keep files in the Shadow Copy Components area... I am just thinking that it may not be able to index files in Shadow Copy Components, or can it?

I plan on testing this, but if you had some insight before attempting, that would be great.

Did you restore something today?

Morten Seeberg:

After some testing, I think the conclusion is that either I did something wrong or Accelerator does not work on "Shadow Copy Components". This is a daily backup with backup selection "Shadow Copy Components:\" and approx. 350 GB of DFS data:

28-09-2012 11:30:15 - Info nbjm(pid=3612) starting backup job (jobid=573055) for client remoteDFSserver.domain, policy DFS_Backup, schedule Daily  
28-09-2012 11:30:15 - estimated 368500807 Kbytes needed
28-09-2012 11:30:15 - Info nbjm(pid=3612) started backup (backupid=remoteDFSserver.domain_1348824615) job for client remoteDFSserver.domain, policy DFS_Backup, schedule Daily on storage unit MSDP01
28-09-2012 11:30:16 - started process bpbrm (9792)
28-09-2012 11:30:17 - using resilient connections
28-09-2012 11:30:27 - Info bpbrm(pid=9792) remoteDFSserver.domain is the host to backup data from     
28-09-2012 11:30:27 - Info bpbrm(pid=9792) reading file list from client        
28-09-2012 11:30:28 - Info bpbrm(pid=9792) accelerator enabled           
28-09-2012 11:30:43 - connecting
28-09-2012 11:30:44 - Info bpbrm(pid=9792) starting bpbkar32 on client         
28-09-2012 11:30:44 - connected; connect time: 00:00:01
28-09-2012 11:30:48 - Info bpbkar32(pid=8044) Backup started           
28-09-2012 11:30:48 - Info bptm(pid=11856) start            
28-09-2012 11:30:49 - Info bptm(pid=11856) using 524288 data buffer size        
28-09-2012 11:30:49 - Info bptm(pid=11856) setting receive network buffer to 2098176 bytes      
28-09-2012 11:30:49 - Info bptm(pid=11856) using 256 data buffers         
28-09-2012 11:30:49 - Info msdpserver.domain(pid=11856) Using OpenStorage client direct to backup from client remoteDFSserver.domain to msdpserver.domain  
28-09-2012 11:30:56 - begin writing
29-09-2012 09:18:23 - Info bpbkar32(pid=8044) accelerator sent 318017739264 bytes out of 317158541312 bytes to server, optimization 0.0%
29-09-2012 09:18:26 - Info bpbkar32(pid=8044) bpbkar waited 863647 times for empty buffer, delayed 1701842 times.   
29-09-2012 09:18:31 - Info msdpserver.domain(pid=11856) StorageServer=PureDisk:msdpserver.domain; Report=PDDO Stats for (msdpserver.domain): scanned: 309740188 KB, CR sent: 755547 KB, CR sent over FC: 0 KB, dedup: 99.8%, cache hits: 0 (0.0%)
29-09-2012 09:18:32 - Info msdpserver.domain(pid=11856) Using the media server to write NBU data for backup remoteDFSserver.domain_1348824615 to msdpserver.domain
29-09-2012 09:18:33 - Info bptm(pid=11856) EXITING with status 0 <----------        
29-09-2012 09:18:33 - Info msdpserver.domain(pid=11856) StorageServer=PureDisk:msdpserver.domain; Report=PDDO Stats for (msdpserver.domain): scanned: 2 KB, CR sent: 0 KB, CR sent over FC: 0 KB, dedup: 100.0%
29-09-2012 09:18:33 - Info bpbrm(pid=9792) validating image for client remoteDFSserver.domain        
29-09-2012 09:18:35 - end writing; write time: 21:47:39
the requested operation was successfully completed(0)

Did you restore something today?

AbdulRasheed:

Let me do some digging and get back to you on this, Morten. 

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Morten Seeberg:

Find out anything? I haven't been able to get any answers through my channels...

It looks to me like this is causing an issue with this customer, as their backups are extremely slow, and I suspect this is due to thousands of queries to the MSDP storage server.

Did you restore something today?

AbdulRasheed:

I totally missed this during my travel. My apologies. Let me check on this. 

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

AbdulRasheed:

Do you mind sending me the case number if you had already worked with technical support on this? You can e-mail me through Connect. It will help if we already have logs/data about your environment. 

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

River.Hsieh:

Hi all,

We used to add new tape drives to keep the backup window within 24 hours for full backups with Direct NDMP. We want to shorten the backup window in an efficient way.

Does Accelerator work in Direct NDMP or Remote NDMP backup?

How can Accelerator work in an NDMP environment with large data?

Keeping on adding tape drives does not seem like a good solution. We need some suggestions and a solution.

AbdulRasheed:

Hi River, 

   NDMP backups are not currently supported with NetBackup Accelerator. However, Accelerator can help you with your situation in a different way at this time. 

   Your NAS volumes can be mounted on a NetBackup client, NetBackup media server or NetBackup 5220 appliance and you can make use of NetBackup Accelerator from there. Your first backup will be slow as it needs to read all the data in your NAS volumes. After that your full backups will be much faster. You also have the ability to scale out this kind of backup processing. You can have different volumes mounted on different clients (or media servers, appliances) in case of very large NAS devices and concurrently back them up.  

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

AbdulRasheed:

Hi River, 

  RE: Does Accelerator work in Direct NDMP or Remote NDMP backup?

No, NetBackup Accelerator does not currently support the NDMP method of backup.

  RE: How can Accelerator work in an NDMP environment with large data?

So your question really is how NetBackup Accelerator helps where you have a NAS system with lots of data, right?

  NetBackup Accelerator can indeed help here. You would need to mount the volumes on a NetBackup client or media server (or a NetBackup 5220 appliance), and you can turn on NetBackup Accelerator for those backups. After the initial backup (this first backup might be a bit painful!), future full backups will run much faster. The performance gain depends on the data change rate, but in general most NAS workloads have a lot of static data, so it is certainly worth trying. Furthermore, for a very large NAS system with multiple exported file systems, you can scale out the performance by mounting different file systems on different NetBackup clients. You just need to make sure that the same file system is mounted on a given client across backups.
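
As an illustration of that last point (hypothetical filer and path names): pinning the export in /etc/fstab on each proxy client keeps the mount point, and therefore the track log's view of the file system, consistent across backups.

# /etc/fstab on the backup proxy client: always the same export at the same
# mount point; point the policy's backup selection at /mnt/nas_vol1.
filer01:/vol/vol1   /mnt/nas_vol1   nfs   ro,hard,intr   0 0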


Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

River.Hsieh:

Hi,

This is worth trying.

But Avamar got the first step of the POC.

How could I convince my boss to stay with the NBU solution?

AbdulRasheed:

Hi River,

   Invite your boss to look at the big picture. What are we trying to solve? You have the business need to protect ever-growing data on your NAS systems. 

   Historically NDMP backups were the only solution to protect data on NAS devices. While it is still widely in use, it is not scaling to meet the growing demands. See this blog from one of my colleagues: https://www-secure.symantec.com/connect/blogs/lookout-ndmp-backup-snapshots-your-birthdays-are-numbered

  Your environment is a classic example of a case where old-school NDMP backups do not fit the business needs. What Avamar is going to offer is what they call an NDMP Accelerator. These are really dedicated Avamar nodes (extra $$$) in an attempt to increase performance. The idea here is based on doing incremental backups forever. NetBackup Accelerator is the third generation of that idea (the first generation was synthetic backups, the second generation optimized synthetic backups, and the third is NetBackup Accelerator).

   If you are going to use NetBackup Accelerator, your boss could save his IT budget by reducing TCO as follows:

   1. For an environment of your size, Avamar would require a huge data store: heavy capital expenditure, as EMC would also require professional services to install it.

   2. You do not have a good way to store data with heavy retention requirements within Avamar. They may talk about a media access node; however, it requires additional nodes (more $$$) and would need additional maintenance tools.

  3. NetBackup Accelerator lets you make use of your existing resources in media servers or clients to scale out performance for your NAS backups (save $$$).

  4. You also have the option to consider NetBackup Replication Director if your NAS devices are from NetApp. This goes back to the blog I was referring to earlier (future-ready, investment protection; stop adding short-term steroids to boost NDMP performance temporarily).

  5. Talk about the additional operational expenditure and overhead if you also need to maintain Avamar. A short-term bandaid is likely to cost more $$$ overall.


   Send me a note with your contact info if you like, and I can arrange for a sales rep to visit you and provide a briefing on NetBackup Accelerator and NetBackup Replication Director. He/she has access to tools to provide an estimate of TCO so that your boss can compare and contrast the IT budget with a long-term vision for investments in backups.

  Disclaimer:  Symantec's policy is to respect competitors in social media. The opinions expressed in the comment section should not be treated as those of Symantec. 


Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Lee C:

Hi Rasheed,

We too have per-TB licensing. We were expecting to have the Accelerator option included, but it's greyed out in the policies. Our license includes everything else, so I'm a bit surprised we need to purchase the Accelerator option; or have I misunderstood your comment?

Master/Media NBU7.5.0.7 on W2k8 - VMware, SQL and Exchange agents, MSDP

OCA 7.5.0.7 on W2k8R2

 

AbdulRasheed:

Hi Lee, 

   The NetBackup Platform per-terabyte license does not include the Data Protection Optimization add-on. The latter is needed for using NetBackup Intelligent Deduplication and NetBackup Accelerator.

  Having said that, I am wondering if you already have a dedupe license (since you mentioned "our license includes everything else"). If you already have the dedupe license bit turned on, the problem could be that you do not have the correct license key to turn on the Accelerator bit, although you are already entitled to it. Please talk to customer care or your sales rep if you already have dedupe turned on with your current license; they can provide another key that will turn on Accelerator at no additional cost.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Lee C:

Thanks Rasheed,

We did have the Data Protection Optimization option. We had to get a new license re-generated. The option is now un-greyed.

Master/Media NBU7.5.0.7 on W2k8 - VMware, SQL and Exchange agents, MSDP

OCA 7.5.0.7 on W2k8R2

 

Chris Garrett:

Hi Rasheed,

Do you have any information on what information the track log stores, and how it compares the filesystem state so quickly?  Are the headers of every file on the filesystem checked in the same way as an incremental?

Thanks,

Chris.

AbdulRasheed:

Hi Chris, 

  I won't be able to reveal the actual IP behind the processes. NetBackup Accelerator is designed such that it works on any file system. In addition, it is also architected to make use of any existing change journal mechanism* built into the file system (this is user configurable).

  The track log keeps the essential information needed to verify whether a file has changed since a previous point in time. This includes the collected file metadata. In addition, it also includes hashes for various segments of a file, so that we only need to read the changed blocks if a file has changed. At run time, we do a file system metadata walk to detect the current status of files. There is IP involved in this area to make the process quicker. (You may recall that FlashBackup, V-Ray technology, etc. also have similar processes to optimally detect changes without a full scan.)

*Currently, NTFS change journals are supported

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Chris Garrett:

Thank you Rasheed,

I understand that not everything can be in the public domain. :-)

Chris.

perkins.michael2012@hotmail.com:

This blog provides very useful information about backups. Thank you for increasing my knowledge.

teiva-boy:

How do you solve extremely high random reads with client-mounted shares when performing NBU Accelerator-type backups?

In lieu of doing NDMP and performing those types of backups, I would mount the shares on a client, and these clients would scan for changes and back up the changes via the "Accelerator" technology. That's a lot of random reads. A lot! Not only am I reading each file for the archive bit, but I'm probably doing it across multiple clients to improve throughput and shrink my backup window.

I don't see how a journaled file system would work in this case of CIFS/NFS-mounted shares. As an example, the NTFS change journal doesn't work this way across shares, only on NTFS.

Or am I not understanding how files are scanned and backed up?

There is an online portal, save yourself the long hold times. Create ticket online, then call in with ticket # in hand :-) http://mysupport.symantec.com "We backup data to restore, we don't backup data just to back it up."

AbdulRasheed:

The 'random read' overhead is mainly for the data blocks of a file. Note that most of the metadata for a file (needed in an incremental backup) comes from the directory. Although we are used to thinking of a directory as a 'folder' containing a bunch of files, the directory itself is a special file in the file system that associates file metadata (owner, group, various time stamps, data block addresses and offsets, etc.) with its file name. Your random reads would have overhead when the file blocks are scattered around disk segments. When it comes to NetBackup Accelerator, there is no (or minimal) impact from such fragmentation, because it needs to seek the actual data blocks only when a file has changed, which it knows from reading the directory (directory-file, to be precise).

Thus, with the exception of the very first backup, NetBackup Accelerator can help you with file systems where there is huge random read overhead. However, note that your mileage depends on how frequently files change.

The change journal in NTFS works only when the NetBackup client sees the file system as NTFS. If you are NFS/CIFS mounting that file system somewhere else and backing it up, NetBackup Accelerator cannot take advantage of the NTFS change journal. However, the track log is capable of tracking changes even without file system level change journals. In my opinion, that is the true value of NetBackup Accelerator.


Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Jaykullar:

Hi Rasheed,

We have a per-TB base license and the Deduplication Option, also per TB. Would I be required to purchase another license for Accelerator?

AbdulRasheed:

You have everything needed to make use of NetBackup Accelerator! Do you know if the license keys you have were issued during pre-NetBackup 7.5 days? If yes, contact your sales rep or customer care center to issue new license keys for use with NetBackup 7.5. You are not paying anything more; you just need the new keys to turn on the Accelerator bit.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Gautier Leblanc:

Hi Rasheed,

 

I have not seen the answer about Shadow Copy Components. I am doing some tests now, and it looks like Accelerator will not improve SCC backups: on my little clients, Accelerator optimizes only 40% of my backup.

I am investigating now, but maybe you have the answer.

Regards

PS: I have NetBackup 7.5.0.4; master and clients are under Windows 2008 R2.

 

AbdulRasheed:

Hi Gautier,

   Let me investigate this with our development team. Please note that the optimization you get from NetBackup Accelerator depends on how low the data change rate is. On Windows systems, Shadow Copy Components are quite dynamic and change often. That could be the reason.

  In any case, I shall get back to you if our developers have a different opinion. 

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

AbdulRasheed:

Hi Gautier,

  I did hear from our engineering team. What you see for SCC at this time is normal, mainly because of the way system files are returned from API calls. The good news is that our team has found a way to optimize this significantly, but it will require extensive testing cycles before we can include it in a release update. Stay tuned, we have you covered! I am unable to state a date, as it is against Symantec policy to talk about road map items.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

jadlip:

Useful info... clears up many points...

Thx

Thank you,

Pradeep Jadli

StefanosM:

I want to ask if I can use the Accelerator with a clustered file server.

I'm thinking of moving the track log to a shared disk and configuring both nodes to "see" the same track file.

Do you think this will work? Is it a supported configuration?

AbdulRasheed:

So long as the track log and file system are from the same node, and the node's identity in the policy is a virtual name, this will work just fine. In fact, we have customers backing up large NAS volumes (similar to a shared file system) using this method.

In the worst case scenario, if the track log cannot be located, NetBackup will do a traditional full backup. Although it may take longer to finish, you are not losing any data.

Managing the mechanism to co-locate and fail over the track log is your responsibility, and technical support may not be able to help. But the capability itself is supported.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

John Heffner:

I may have missed this, but where is the Accelerator log file stored? 

Would we need to put in a backup exclusion? And if so, what is the path on UNIX and Windows?

And one more: I understand that the size of the file depends on the file count and file sizes, but is there some type of scale for estimating the Accelerator track log?

 

AbdulRasheed:

Hi John, 

  This document has information on location of the track log and how to relocate it if necessary: http://www.symantec.com/docs/HOWTO77409 

 You are not required to exclude it from backups, because these are in general small files and not worth the trouble of managing specific exclude lists. You certainly can if you like.

  There is an internal formula for estimating the size but it is really not that useful for external consumption. The main reason is the fact that change rate varies from site to site and machine to machine. 

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

dwilson1987:

Hi Rasheed,

I have just upgraded from version 7.0.1 to 7.5.0.5, and I have updated keys to allow me to select the Accelerator attribute.

I got it to work on a new media server (a fresh 7.5.0.5 install), but none of the other media servers I enabled the option on the backup policies for work. All the policies are using MSDP...

Is this a known issue after an upgrade?

 

Regards

 

Daniel

AbdulRasheed:

Hi Daniel, 

  What was the error message? I am guessing that the Accelerator bit is not turned on for those media servers. Please update the licenses on your media servers as well.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

GraemeShaw:

Hi,

I am trying to get Accelerator working to a Data Domain storage unit. I have upgraded the DDOS version and the DD Boost plugin on the media servers (which happen to be running on a NetBackup 5220 appliance in this environment). To get the policy to accept the "use accelerator" setting, I had to ensure the device mappings file version was 1.114.

When I try to run an accelerator backup the job fails immediately with error 154 (storage unit characteristics mismatched to request).

Surely the fact that the policy accepts the "use accelerator" setting means that the storage unit "checks out".....?

The appliance is still running NetBackup 7.1.0.8; is this the issue?

AbdulRasheed:

Hi Graeme,

  NetBackup Accelerator is a feature of NetBackup 7.5/NetBackup Appliance 2.5. You do need to update your appliance to use Accelerator.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Gautier Leblanc:

Hi,

As AbdulRasheed said, you need NBU 7.5 to use Accelerator.

I think your Data Domain does not support the Accelerator feature (please check the compatibility list, but I am pretty sure). Accelerator is supported with Symantec deduplication today (also with Sepaton devices, I think).

 

 

GraemeShaw:

Hi, upgrading the appliance to 7.5 solved the issue. Data Domain supports Accelerator using DDOS 5.3.0.4 and DD Boost 2.6 (which are still RA), Gautier Leblanc, so it is all looking good now.

Gautier Leblanc:

You are right, with DDOS 5.3 (and NBU 7.5) it looks good (and that is good information I did not have).

 

Thanks !

kesavanGopi:

Really a useful document, Abdul!

My sincere thanks!

effiko:

Hi Abdul Rasheed

We have a prospect with an EMC Isilon with 60 TB front-end and 50% annual growth.

Would you recommend going with Accelerator and having a media server mount the complete OneFS, or sticking with the NDMP accelerator by Isilon and using remote NDMP to appliances? Is there a crossover point?

As most of the data does not change, does Symantec have a better way to archive instead of backing up all the files every week?

River.Hsieh:

I met an RD client who had been using Isilon, and there are some limitations with Isilon.

They use the NDMP accelerator by Isilon with DDNS and back up to tape, but backups usually fail; EMC support has still not found the root cause.

Also, Isilon can only use "set type=tar".

They also had restore problems on Isilon when browsing for the files and directories they need.

So maybe you can try to set up a media server, mount the Isilon file system, and use media server deduplication.

effiko:

Thanks for your information.

Can you forward me the contact at that site so I can discuss the issues with them?

 

Regards

 

weigojmi:

Could someone clear up this basic point please. 

Does client dedup require the media server dedup to exist, be enabled, or licensed, whichever applies? 

Above Abdul says..."NetBackup Client Side Deduplication deduplicates data and sends unique segments directly to storage server. Media server is not involved in this data path". 

BUT

the official NB 7.5 Dedup Guide says... "With NetBackup Client Deduplication, the client hosts the deduplication plug-in that deduplicates the backup data. The NetBackup client software creates the image of backed up files as for a normal backup. Next, the deduplication plug-in breaks the backup image into segments and compares them to all of the segments that are stored in that deduplication node. The plug-in then sends only the unique segments to the NetBackup Deduplication Engine on the storage server. The engine writes the data to a media server deduplication pool."

So does there have to be any media/storage server deduplication running/enabled for client side to work? Sorry for my ignorance, as I have not used any of this yet, so I am not sure what check boxes on either side are needed or applicable.

Part of my confusion could lie in the fact that, in the concept we're looking at, the master/media/storage servers are all the same box. Most of our sites are relatively small, and for the others we were considering, backend SAN storage already exists, so we are still using "all-in-one" servers.

Thanks! 

SYMAJ:

When using Client Side De-dup or Accelerator you must be writing to de-duplicated storage (MSDP/Appliance/PureDisk etc).

AJ

weigojmi:

OK...but I think I'm still missing something.  My understanding is that you don't need a "dedup appliance" and you could create an MSDP from any storage disks (direct attached, enclosure, SAN, etc.) and that NetBackup would handle all the dedup work.  If so, what settings are needed where?  I assumed just on the client for Client Side dedup?

Morten Seeberg:

As SYMAJ writes, you need MSDP or an appliance target, and yes, MSDP you can create on any kind of storage (theoretically; it should comply with some performance requirements documented in the Deduplication Manual).

So to answer your questions:

Does client dedup require the media server dedup to exist, be enabled, or licensed, whichever applies?

You must have a dedupe pool created "somewhere" (on the Master, separate media server or an external appliance), which you cannot create without the Optimization license.

The plug-in then sends only the unique segments to the NetBackup Deduplication Engine on the storage server. The engine writes the data to a media server deduplication pool."

Consider the Dedupe Engine as a server process which receives data and "understands how different blocks relate to each other" (the intelligent part), and the Media Server Dedupe Pool (MSDP) as "storage" for the blocks.
Yes, the naming can be confusing :-) and often people just say MSDP (for all of it).

So does there have to be any media/storage server deduplication running/enabled for Client side to work?  Sorry for my ignorance as I have not used any of this yeat so am not sure what check boxes on either side are needed or applicable. 

Yes you need a deduplication target to store the data in for this to work! Whether the client or the media server performs the actual deduplication process depends on your Master Server Host Properties, Client Attributes setting and/or the policy's "deduplication option".

Part of my confusion could lie in that in the concept we're looking at, the master/media/storage servers are all the same box.  Most of our sites are relatively small.  The others we were considering backend SAN storage already exists so still using "all-in-one" servers.

It can be confusing, and yes, you can put Master/Media/Storage Server on the same box, not a problem (for small sites). But conceptually the Storage Server/MSDP is a "standalone" feature even though it runs on the same server. A good example of this: if the client performs the deduplication, it actually sends the data directly to the "Storage Server" processes and not the Media Server processes (using TCP ports 10102 + 10082 and not 1556). But it still communicates with the media server processes for metadata etc.

Did you restore something today?

Nicolas Cruchot:

Hello all,

I'm testing NB Accelerator and it looks very nice ...

I found a lot of helpful information here and... only here! So thanks.

I've a question about the NTFS Change Journal. Do I have to enable it, or is it OK with the default track log?

Do you have any ideas about this?

Thxs

 

Morten Seeberg:

You do not have to enable it, although based on the tests I published here:
https://www-secure.symantec.com/connect/articles/accelerator-backup-windows-cluster-systems

you will see that the difference between having only the track log and having the track log combined with the NTFS journal is quite big:

  • Full backup: 14 minutes
  • Incr with track log: 5 minutes
  • Incr with track log and journal: 1½ minute

and this was just on a "small" test file system... so it does make a difference.

Did you restore something today?

SymGuy-IT:

Dear Abdul Rasheed

Any updates on Shadow Copy Components backup with Accelerator optimization? I am facing a similar issue, and the backup speed of SCC is terrible, which ruins the benefit of the Accelerator.

loori:

In 7.6, SCC backup with DFSR still doesn't work, and the Accelerator doesn't have any positive effect. First of all, the backup has to work; the Accelerator can only help when there is a working backup, and up to now it wasn't possible to back up a server with DFSR with three 5 TB volumes.

We resorted to FlashBackup.

AbdulRasheed:

Sorry about the delay; I had not been monitoring the comments while travelling. Let me talk to engineering and get back to you on the status.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Di Ro:

Thank you Abdul, this is a great post!!

But I have one more question. I configured Accelerator on 8 different clients which are backing up over the WAN. On some of them I get accelerator optimization of 25-95%, but 2 of them are running at 2-3%. What am I doing wrong? And what factors affect optimization?

 

Thanks,

AbdulRasheed:

Hi DI Ro,

  The optimization depends on data change rate. The lower the amount of changes between backups, the better the optimization as less data needs to be read and sent across the wire. 

   There are edge cases where we err on the side of caution. We call out these situations, if they occur, in the detailed status report in the Activity Monitor.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

nawaf:

Hi,

Does Accelerator work with multiple data streams?

AbdulRasheed:

Of course! 

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

loori:

Our experience regrettably shows that the Accelerator with the MSDP is of very limited use.

We have environments well sized for the use of SLPs with advanced disks and 3592 E06 drives. We reached a throughput of 1 GByte/sec and more per media server.
Introducing MSDP and the Accelerator, we lowered our throughput to below 300 MByte/sec. The only use of the Accelerator is to lower the bandwidth consumption on the network, at the cost of high CPU and memory consumption on the clients (sometimes it rendered them hung) and an increasing backlog due to the low performance of the MSDP.

We don't use the Accelerator any more, and we use the MSDP only to store data; as a staging device it is useless.

The performance with conventional synthetic backups is several times higher.

AbdulRasheed:

Hi Loori, 

  Sorry to hear that Accelerator didn't work for you. NetBackup Accelerator is certainly not the end-all solution for all types of workloads. Based on your description (1 GByte/sec on Advanced Disk), it sounds like your workload might be a small number of large files? I hope you had an opportunity to work with Technical Support.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

BillAdams:

I have been using v7.5 for quite some time now, but I have not used the Accelerator option.  I have received fresh license keys that will allow me to do so. I will not be able to apply the keys for a few days until my master server is ready to restart services.

Since the first full backup (the long one) is necessary for a baseline, can I do that one before the keys are installed and the Accelerator check box is checked, or must the baseline backup be done after the option is set?

Thanks Abdul

Morten Seeberg:

I am not 100% sure you need to restart your master for this; try just adding the key, then optionally do a "bprdreq -rereadconfig", and then restart your GUI. That might just un-grey the Accelerator option.
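
On a UNIX master that would be something like the following (illustrative paths; get_license_key prompts for the key interactively):

/usr/openv/netbackup/bin/admincmd/get_license_key   # add the new key
/usr/openv/netbackup/bin/admincmd/bprdreq -rereadconfig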

To answer your other question: no, you cannot do the baseline backup first and use it as a base for Accelerator; the "tracking file" Accelerator creates and uses will not get created.
That being said, if this is a "remote" file server, you can still benefit from starting the backup now if you are planning on using client side dedupe, because the server will get seeded and then your subsequent Accelerator baseline backup will run faster.

Did you restore something today?

AbdulRasheed:

Thank you Morten for answering these questions!

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

effiko:

 

I haven't found any best practices regarding backing up a Celerra (or "unified storage", as they call it now) with NetBackup Accelerator.

I read that it works and dramatically improves the backup time compared to NDMP, but I have no idea how file creation/modification times are tracked.

If I mount all my shares on a Windows server (as a proxy), how does NetBackup track changes made by other users on the shared files?

As my target will be Data Domain, and they now support Accelerator, has anybody tried it?

 

Nicolai:

Data Domain works fine with accelerator, but you need to ensure you are running DD OS 5.4. There are bugs in the DD OS 5.3.

Also, please note that client side dedupe and Accelerator are two different things. With Data Domain you "only" get Accelerator.

Assumption is the mother of all mess ups.

If this post answered your question - please mark it as a solution.

effiko:

Thanks for the advice.

As this is the key advantage of Accelerator, do you know how Accelerator tracks changed files on a remote CIFS server? Do you have to turn on Celerra security auditing?

Morten Seeberg:

Accelerator on Windows works in "2" different ways: with or without a file system track log. "With" only works with NTFS using the NTFS journal log, and "without", it only utilizes Symantec's own track log, which is placed in ...\Program Files\VERITAS\NetBackup\track.

When you do Accelerator backups of CIFS destinations, you would not be able to utilize any logs other than Symantec's own track log; i.e. you just need to back up the CIFS destination, and the track log would be placed on the proxy server you choose to perform the backup. It will have to scan the file system for changes every time a backup is run.
How it determines when a file is updated is most likely based on whatever setting you gave the proxy client (archive bit vs. timestamp).

One thing you should consider is that recovery via a proxy server and CIFS shares is most likely to be very slow. Is the purpose recovery in case of loss of the Celerra or is it revision control?
If the primary purpose is recovery then you might be better off with NDMP backups of the volumes...

Did you restore something today?

AbdulRasheed:

Hope the response below helps with your question. The NetBackup Accelerator track log on the mount host for your NFS/CIFS share is how changes are being tracked.

Also, note that you can scale out the performance by using multiple mount hosts for different shares. 

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

effiko:

If indeed "it will have to scan the file system for changes every time a backup is run", then there is no real advantage to using Accelerator on CIFS at all, as it will take an eternity to rescan the whole directory tree; it will only save transmitting the actual files.

I regret to admit that I had a different understanding reading the above post https://www-secure.symantec.com/connect/blogs/freq...

I'll be glad to hear Abdul Rasheed's comment on it.

Morten Seeberg:

Sure, Abdul might have something to add. My comment was based on theoretical knowledge (I hope that was clear enough), although I am considering a test myself just for the fun of it now :-)

But even though you have to scan the file system, there is still an improvement; check out my stats in this article, where Accelerator increased performance by 300% just by using phase 1 Accelerator on Windows:
https://www-secure.symantec.com/connect/articles/a...

Did you restore something today?

AbdulRasheed:

The problem is that the word 'scan' means different things to different people. For the same reason, I had deliberately avoided that term in this blog. For some ‘scan’ means reading something end to end as in the case of a scanner/copier machine. For others, scan may mean quickly going through an area to detect obvious anomalies and then zeroing in on an object.

In order to explain how NetBackup Accelerator goes through file system, let us think about a very high level view of a generic POSIX compliant file system. I apologize for preaching to the choir, I hope this may help those who may need a refresher.

A file system is a collection of data structures known as files. A 'regular file' has direct and indirect blocks where the data is stored. It is when reading these blocks that a program like a backup application causes spindle movements, as these blocks could be anywhere on disk.

Although we normally visualize a directory/folder as something that contains a number of files, the reality is that a directory is a 'special file' that is quite small in size. Its content is simply file names and inode metadata. What NetBackup Accelerator does is read this 'directory file' to understand the metadata. In other words, think of it like opendir/readdir calls on the 'directory file' rather than open/read on each 'regular file'. Hence NetBackup Accelerator does not touch all the data structures in the file system like a traditional backup; rather, it scans those 'special directory files' and the changed 'regular files' only.

This is why NetBackup Accelerator adds value without needing any kernel drivers for the file system. This approach works on any POSIX compliant file system. The optimization and performance boost achieved depends on data change rate.

PS: This is an oversimplified explanation, there are a number of internal details and intellectual properties Symantec has made use of in the implementation.
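
A toy sketch of that walk in Python (again oversimplified, and not our actual code): only 'directory files' and inode metadata are read; a regular file's data blocks are touched only when its metadata differs from the previous run.

import os

def metadata_walk(root):
    # Walk the tree reading only directory entries and inode metadata;
    # no regular file is ever opened here.
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path, follow_symlinks=False)
            yield path, (st.st_mtime_ns, st.st_size, st.st_ino)

def changed_files(root, previous):
    # 'previous' is the prior run's {path: metadata} snapshot; only the paths
    # returned here need their changed segments read and sent to storage.
    current = dict(metadata_walk(root))
    return [p for p, meta in current.items() if previous.get(p) != meta], current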


Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

Morten Seeberg:

But then to "fall back" to the original question which started all this way way up :-):
Not being a complete filesystem geek here, will NBU Accelerator be able to utilize those POSIX file system features when the file system is suddenly a network mount (CIFS or NFS)?

Is there any knowledge about which performs better for large filers, CIFS via a Windows proxy or NFS via a UNIX proxy?

Did you restore something today?

AbdulRasheed:

Hi Morten, 

  I published a part II of this blog exclusively on the NFS/CIFS/NDMP angle. Hope this helps. Your feedback on that blog is appreciated.

Warm regards,

Abdul "Rasheed" Rasheed

Tweet me @AbdulRasheed127

effiko:

Thanks Abdul Rasheed for the explanation

I'd like to expand on Morten's question.

If I need to back up a CIFS file system that includes millions of small files spread across thousands of directories, will Accelerator be faster than Direct NDMP to a VTL (where there is no shoe-shining)?
Is there a break-even point?
Are there any published benchmarks?
Is there an impact on restore?
