
FSA Ingestion using more servers

Created: 23 Aug 2012 • Updated: 09 Oct 2012 | 4 comments
AimHigh
This issue has been solved. See solution.

Hi Group,

I have a question regarding FSA and 70TB worth of data. I have read the EV Performance Guide and it looks like it will take several months to ingest all the data. Does Symantec have a White Paper on standing up several FSA servers for ingestion and then removing them once the ingestion is done? If not, has anyone done this type of work? I am looking for ideas on how to get 70TB into EV in about 3 to 4 months.
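As a back-of-envelope check on that 3-4 month target, here is a quick sketch of the sustained rate 70 TB implies (assumptions mine: a 90- or 120-day window and ingestion running 24x7, which real maintenance windows rarely allow):

```python
# Rough required ingestion rate for 70 TB in 3-4 months.
# Assumes round-the-clock ingestion; real windows are usually shorter.
total_tb = 70
for days in (90, 120):
    rate_tb_per_day = total_tb / days
    rate_gb_per_hour = rate_tb_per_day * 1024 / 24
    print(f"{days} days -> {rate_tb_per_day:.2f} TB/day "
          f"(~{rate_gb_per_hour:.0f} GB/hour sustained)")
```

So even at the generous end you need well over half a terabyte per day, every day, which is why a single FSA server is unlikely to make the deadline.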

Thanks in advance.

4 Comments

JesusWept3

Typically, those types of engagements are left to consultants/partners/professional services. The problem you may have, though, is that the FSA task has to sit with the vault store itself.

So you would have to create several FSA tasks and several vault stores, do the ingestion, and then migrate the data and vault stores to the machines you actually want to keep in your environment.

And even then, I'm not entirely sure you can have multiple FSA tasks looking after the same file servers.

Jeff Shotton

What is critical to the archiving rate here is the average file size. If you are looking at lots and lots of little files that contain indexable content, your archiving rate is going to be a lot lower than if you were archiving media files of, say, an average size of 1 GB.

I've seen rates as high as 1TB/day in this scenario when the average file size was 1GB and the content was video (and therefore not indexed).
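To put numbers on why average file size dominates, here is a rough item-count comparison (my assumption: 70 TB total, binary units). EV's per-item overhead (indexing, SQL inserts, shortcut processing) means hundreds of millions of small items behave very differently from tens of thousands of large ones:

```python
# Approximate item counts for 70 TB at different average file sizes.
# Assumption: binary units (1 TB = 2**40 bytes).
total_bytes = 70 * 2**40
for label, avg_bytes in [("100 KB", 100 * 2**10),
                         ("1 MB", 2**20),
                         ("1 GB", 2**30)]:
    items = total_bytes / avg_bytes
    print(f"avg {label}: ~{items:,.0f} items")
```

Roughly 750 million items at 100 KB versus about 70 thousand at 1 GB: three to four orders of magnitude more per-item work for the same raw volume.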

I'm assuming that all the other nice things like high-speed LAN/everything in the same data centre/powerful hardware are in place...

To speed up the initial ingestion (if you need to), you could in theory stand up several EV servers, each with a vault store database (or maybe more than one) and each with one or more FSA tasks. FSA tasks target archive points, so you can have several EV servers targeting one massive file server, or one EV server targeting several file servers... or a mix. The important point is, once a task has archived an archive point, the storage service that was linked to that task at the time 'locks' the archiving to that instance of EV. In other words, your EV server instance is then responsible for the data. You can change tasks... but only to tasks on the same EV server (you will find a restricted list of archiving tasks available to choose from).

Now, in the same way that building blocks allows you to temporarily fail over one server instance to run on another EV server, there are ways of consolidating the vault store databases from, say, two servers down to one. Other than the work in SQL, you would potentially need to move the vault store partitions and index volumes. So there is some risk, and it's best to have a consultant/partner/professional services person on hand when doing this.



Jeff Shotton

Principal Consultant

Adept-tec Ltd

Website: here

AimHigh

Thanks Jeff. We are a partner but have never had an FSA target of 70TB, so I wanted to speed it up. I know FSA; I wanted to see if anyone has done this, or whether there is a best practices guide for it. If not, I will write something up once we start the engagement and add it to this post ;) The customer says it is 100 KB per item, but that is usually not the case, so we are talking ballpark figures here. This is also an initial sales call, so we are trying to get a best estimate for the customer.
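If the customer's 100 KB/item ballpark holds, the per-item rate is probably the more useful planning number than raw TB/day. A quick sketch (my assumptions: 70 TB total, a 120-day window, 24x7 ingestion):

```python
# Sustained items/sec needed for 70 TB of ~100 KB items in 120 days.
# Assumptions: binary units, round-the-clock ingestion.
total_bytes = 70 * 2**40
items = total_bytes / (100 * 2**10)       # ~751.6 million items
window_seconds = 120 * 24 * 3600
per_second = items / window_seconds
print(f"~{items / 1e6:.0f}M items -> ~{per_second:.0f} items/sec sustained")
```

That works out to roughly 70+ items per second, sustained, across the whole environment, which is a useful sanity check against the per-task rates in the Performance Guide.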

I would not recommend the building blocks approach of removing servers and doubling or tripling the load on the remaining ones. Migrating ingested data to an existing server after ingestion would also be more trouble than it is worth. The only scenario I would contemplate for this would be to use virtual servers with lots of processors and memory and appropriate network connections for the ingestion phase, and then reduce the memory and processors on the VMs once steady state was reached.