
Deduplication: Robotic library destination element full error.

Created: 15 Sep 2013 • Updated: 09 Oct 2013 | 7 comments
This issue has been solved. See solution.

Hi all,

I use optimized deduplication to my DRP site, as described in this technote: http://www.symantec.com/docs/TECH172464

For the past two weeks, replication from the local deduplication storage to the centralized deduplication storage has been taking longer, and jobs sometimes hang.

I received this message in the central admin console (see attached). I have run several inventory jobs.

Thanks in advance for your assistance.

 

Best regards,

Thomas



pkh:

It is probably because your dedup folder is full. Try reclaiming some space using the procedure in this document:

http://www.symantec.com/docs/TECH130103

If you still have problems after reclaiming space, try running an inventory on your dedup folder.

TLANGE:

Hello Pkh,

Thank you for your quick response, but:

 

- The dedup folder is not full (see statistics in attachment).

- The article is about Backup Exec 2010, and I use BE 2012.

Best regards,

Thomas

dedup_2.PNG
pkh:

The document also applies to BE 2012. Freed data blocks may not be reclaimed even though they are no longer used.

ANevatia:

Hello,

The robotic library "destination element full" error generally comes up when the OST media to which the data is being written becomes full.

Please check the job log to identify the OST media in use when the error occurs.

Then check that same OST media inside the deduplication folder to see whether it shows as full (e.g. 50 GB of 50 GB used).

You should also check the deduplication statistics with the command crcontrol --dsstat from the command prompt.

Thanks,

ANevatia

 

TLANGE:

Hello Pkh,

I tried to reclaim space as described in technote TECH130103, but I still have the problem...

Here is the output of crcontrol --dsstat:

************ Data Store statistics ************
Data storage      Raw    Size   Used   Avail  Use%
                   8.8T   8.3T   3.3T   5.1T  39%

Number of containers             : 13672
Average container size           : 253424737 bytes (241.68MB)
Space allocated for containers   : 3464823015154 bytes (3.15TB)
Space used within containers     : 3453446622395 bytes (3.14TB)
Space available within containers: 11376392759 bytes (10.60GB)
Space needs compaction           : 23232740380 bytes (21.64GB)
Reserved space                   : 491378176000 bytes (457.63GB)
Reserved space percentage        : 5.1%
Records marked for compaction    : 282347
Active records                   : 38695369
Total records                    : 38977716

Use "--dsstat 1" to get more accurate statistics
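
As a sanity check on the figures above (this is just arithmetic on the reported numbers, not part of Backup Exec; it assumes crcontrol reports sizes in binary units, i.e. 1 TB = 1024^4 bytes), the byte counts convert to the displayed values and are internally consistent:

```python
# Sanity-check the crcontrol --dsstat figures (binary units assumed).
allocated = 3_464_823_015_154   # Space allocated for containers
used      = 3_453_446_622_395   # Space used within containers
available = 11_376_392_759      # Space available within containers
compact   = 23_232_740_380      # Space needs compaction

TB, GB = 1024**4, 1024**3

print(f"allocated:  {allocated / TB:.2f} TB")   # 3.15 TB, as displayed
print(f"used:       {used / TB:.2f} TB")        # 3.14 TB
print(f"available:  {available / GB:.2f} GB")   # 10.60 GB
print(f"reclaimable by compaction: {compact / GB:.2f} GB")  # 21.64 GB

# The figures are internally consistent:
# allocated minus used equals the reported available space,
assert allocated - used == available
# and records marked for compaction plus active records equal the total.
assert 282_347 + 38_695_369 == 38_977_716
```

Note that only ~10.6 GB remains free inside the already-allocated containers while ~21.6 GB is waiting on compaction, which is why reclaiming space (queue processing and compaction) is the relevant lever here even though the overall store is only 39% used.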

-------------------------------------------------------------------------------------------------------------

The same deduplication job now takes roughly three times longer in Backup Exec than before. The remote link is not saturated.

Thanks in advance for your help.

Best regards,

Thomas

 

 

pkh:

Other than the job taking longer, do you still get the error that you reported in the original post?

TLANGE:

Hi all,

 

Here is the solution to the issue I encountered:

 

http://www.symantec.com/business/support/index?page=content&id=TECH204116

 

Thank you for your help.

 

Best regards,

Thomas

SOLUTION