force index rebuild

Created: 08 Jun 2009 • Updated: 21 May 2010 | 16 comments
K.G:
This issue has been solved.

I started a force index rebuild operation, and it is still rebuilding, with 45 days left.
I know the rebuild time varies with index size, length, etc.,
but the longest index rebuilds I have seen took 5-8 days at most!
Is this normal?
It still says it is rebuilding.
(The normal rebuild operation fails every time, which is why I started a force rebuild.)

What do you think? Have you seen any similar issue?



Maverik:

The question here is: is anything actually still occurring for this index? Do you regularly get event ID 7305, as per the technote below?

Do you get any errors relating to indexing and this index?

When rebuilding a large index, you are usually better off going back to a good backup of that index, restoring it, and then selecting Update, to cut down the wait time.

Depending on your EV version (this one only covers EV6 SP4), it could be this.

Or it could be this:

Index rebuilds appear to stop processing in an environment that utilizes NTFS Collections
In an Enterprise Vault (EV) environment that uses NTFS Collections, it is possible that an index rebuild may appear to stop processing due to being unable to extract items from a .cab file.
In this scenario, the Storage Crawler process will appear to loop over the same Digital Vault Saveset (DVS) file; however, Windows will notify this process that the file is not yet available.

To identify this issue:
1. On the Enterprise Vault Server, open a command prompt and begin a DTrace. (See Related Documents article 276120)

2. Enable Dtrace for StorageCrawler

3. In this scenario, we would expect to see entries similar to the following line repeated:

CItemFetcher::RequestItem (BTID:9308) NextISN:6745942 MaxISN:6923275 Format:3 MaxMarshallSize:10240 (KB) IIRebuildMode:1 (hr=Success [0])

4. In this scenario, the number in NextISN remains the same repeatedly.
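If you have a saved DTrace log, a quick way to check for this symptom is to scan for a NextISN value that repeats over and over. This is a hypothetical sketch (the function name and threshold are my own; it assumes the log lines follow the CItemFetcher::RequestItem format shown above):

```python
import re

# Matches the NextISN value in CItemFetcher::RequestItem lines,
# as shown in the DTrace output above.
NEXT_ISN = re.compile(r"CItemFetcher::RequestItem.*?NextISN:(\d+)")

def stuck_isn(log_lines, threshold=10):
    """Return the NextISN value if it appears `threshold` times in a row,
    which would suggest the Storage Crawler is looping on one saveset."""
    last, count = None, 0
    for line in log_lines:
        m = NEXT_ISN.search(line)
        if not m:
            continue
        isn = m.group(1)
        if isn == last:
            count += 1
            if count >= threshold:
                return isn
        else:
            last, count = isn, 1
    return None
```

For example, calling `stuck_isn(open('dtrace.log'), threshold=5)` would return the repeated ISN as a string, or None if no value repeats that many times.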

1. In SQL Query Analyzer, run the following query against the Vault Store database that the index or archive belongs to:

SELECT C.RelativeFileName AS 'CAB Location', S.CollectionIdentity
FROM Collection C, Saveset S, ArchivePoint AP
WHERE IndexSeqNo = 6745942
AND AP.ArchivePointId = 'ArchiveID'
AND S.ArchivePointIdentity = AP.ArchivePointIdentity
AND S.CollectionIdentity = C.CollectionIdentity

2. Make a note of the path and name of the .cab file.

3. Verify this .cab file exists and the items can be opened.

4. Once verified, contact Enterprise Vault Technical Support for the corresponding version of the UnCollect utility.

5. Identify the partition and .cab file from within the Uncollect utility and extract the items to their original location.

6. Once accomplished, restart the Indexing Service and, through a DTrace of Storage Crawler, verify that items are now being processed correctly.

To be honest, there is a lot it could be. What type of storage and which version of EV are you using?

The IndexCheck utility, which is covered in the Utilities guide, will help you monitor the health of the indexes so you can be proactive rather than reactive about issues. It may not help you now, but it will in the future ;-)

Liam Finn:

What errors were you getting during the rebuild?

There is a way to do a rebuild while bypassing the failures; later, once the issues are fixed, you can go back, address the failures, and update the index with the missing data.

The process to bypass the failures and allow the rebuild to continue is easy. The hard part is correcting the errors that caused the rebuild to fail in the first place.

If you have DTrace logs of the rebuilds, or event logs that will help identify the issue, please post them so we can help.

If you check out this posting, it will tell you how to perform the rebuild and ignore the errors so the rebuild won't fail.

K.G:

I can see the same event IDs, plus new ones, every time.
I think the rebuild is continuing, since it is still sending events, but why is it taking such a long time?
I have attached the EV logs (.evt file) in a zip.
As you can see, there are many consecutive errors in the Event Viewer; the same event IDs keep appearing.
What do you think?

Liam Finn:

The reason it is taking so long could be either retries to get the data or issues it is having reaching the data.

The errors shown in the event log don't give much information, other than being unable to get the data or an error processing the saveset.

Do you have a DTrace of the rebuild? A DTrace will show every step of what is happening and make tracking the issue easier.

Do a DTrace of the StorageCrawler and Indexing services, both in verbose mode, so we can see everything that's happening.

Make sure that the DTrace is running while the errors are being generated, so the errors will also show in the trace.

Michael Bilsborough:


Your rebuild will take a lot longer than normal, since the data is coming from NBU rather than, say, a local NTFS share.
EV is reporting problems with the NBU migrator. Have you looked up those events in the technote database to see if there are any clues?

K.G:

OK, I will take one more DTrace of these services and share it with you.

I set up the NBU migrator before, but I did not get good results from it, so I set the start and finish times to the same value, which disables it. However, it still gives errors for the .cab files. I set MaxConsecutivePoisonPillItems to skip these errors, but I am still waiting for the rebuild operation to finish.

Liam Finn:

If you set this registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\KVS\Enterprise Vault\Indexing\PoisonPillCount
Default: 1

This will set the number of retries to one, so if indexing fails on an item it moves on rather than retrying three times (the default). It might help speed things up, but you do need to restart the services for it to take effect.
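For reference, the same change can be expressed as a .reg file you could import. This is a sketch based on the key path quoted above; verify it against your EV version before importing:

```reg
Windows Registry Editor Version 5.00

; Retry a failing item only once before skipping it
[HKEY_LOCAL_MACHINE\SOFTWARE\KVS\Enterprise Vault\Indexing]
"PoisonPillCount"=dword:00000001
```

Remember that the Indexing-related services still need a restart for the value to be picked up.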

The DTrace should be of the StorageCrawler and IndexBroker services.

Maverik:

Hi Kamil,

I seem to recall that you logged a case about this previously and were informed that, if you want to rebuild the index with all the data indexed, you would need to recall the data bit by bit from NBU, as it had been configured to migrate items to secondary storage almost instantly after archival.

Apologies if I am incorrect.

K.G:

Hi Liam,
I have set this registry key and restarted the services:

HKEY_LOCAL_MACHINE\SOFTWARE\KVS\Enterprise Vault\Indexing <DWORD> MaxConsecutivePoisonPillItems = 0

I will take a DTrace as well.

The NBU migrator is not active; I set the finish and start times to the same value.
How can I increase the speed of this process?

Hi Genie,
I have no data on the NBU side, so I am not recalling any data from it; the speed should be higher.

Liam Finn:

No, the setting should be 1, not 0. Setting it to 0 will mean it retries forever on an item until it manages to extract it.

Set it to 1 so it only tries once; if that fails, it moves on and skips that item.

Kopfjager:

If you set the setting to 1, the index will be marked as failed on the first item it cannot index. 0 is the correct setting if you don't want it to fail regardless of failed items. 0 effectively disables this function, and the rebuild will not fail no matter how many items cannot be indexed.

"The Indexing service currently suspends indexing if a configurable number of consecutive items fail to be indexed. The Indexing service will now mark the index volume as failed when this occurs. This avoids further errors if, for example, storage becomes inaccessible for some reason. The failed index volume is then clearly visible in the IndexVolumeReplay utility. After you have resolved the problem, you can update the index volume using the IndexVolumeReplay utility.

You can configure the maximum number of failed consecutive items using the DWORD registry setting, MaxConsecutivePoisonPillItems, in the following location:

In this release the default value of this setting has changed from 100 to 25. If you require a very strict indexing validation, which stops indexing as soon as one item fails to be indexed, you can set the value of MaxConsecutivePoisonPillItems to 1."

I'm guessing that you thought he was referring to your earlier post about "PoisonPillCount".

Maverik:

If you are not using NBU, then I cannot understand why the errors below are appearing in your event logs. You had best log a case.

Type: Error
Date: 01/06/2009
Time: 14:27:24
Event: 6954
Source: Enterprise Vault
Category: None
User: N/A
The 3rd party Migrator application 'NBU Migrator' has logged the following message:

Migrator parameter is missing or incorrect. Check the Vault Store Partition configuration.

For more information, see Help and Support Center at

Liam Finn:

Why are you repeating the same post over and over?

Maverik:

No idea, it was only clicked once. I only got one point for it.

TRALSH:

What version is the NBU client on the EV server?
How many Cab files were migrated when you had NBU Migrations enabled?

The error you receive is fairly generic; more than likely the true "error" is being thrown by the NBU client.

Check the NBU admin console for an error to see if the restore request for the cab actually makes it to the NBU Master.

If not, enable NBU client logging on the EV server. To enable logging, create the following folders:
<NBU Client install path>\Netbackup\Logs\Exten_Client
<NBU Client install path>\Netbackup\Logs\tar

Once EV requests the file from the NBU client, the Exten_Client log file will show the next actions. If it is a client issue, you will see the error there.
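Creating those folders can be scripted. A minimal sketch, assuming a Unix-style shell for illustration; the install path here is a placeholder that you must replace with the actual NBU client install path on the EV server:

```shell
# Placeholder install path - substitute the real NBU client path.
NBU_CLIENT="${NBU_CLIENT:-/tmp/nbu_client}"

# Create the log folders the NBU client writes to once logging is enabled.
mkdir -p "$NBU_CLIENT/Netbackup/Logs/Exten_Client"
mkdir -p "$NBU_CLIENT/Netbackup/Logs/tar"

echo "Created log folders under $NBU_CLIENT/Netbackup/Logs"
```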

MichelZ:


Any more info needed here?