Restrictions and best practices when using NFS-presented storage for Disk Storage Units.

Article: TECH74087  |  Created: 2009-01-01  |  Updated: 2009-01-30
Article Type: Technical Solution


In an NFS environment, NFS clients buffer data and can defer writes, operating asynchronously from the NFS server: a client may hold written data in its cache until it chooses to send it to the server. Closing or flushing the file forces the cached data to be written, but until the file is closed the data may not actually have reached the server.
This behavior can cause issues when NFS storage is used for backups, because the amount of data on the NFS volume may be reported incorrectly while backups are still writing to the volume.
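To illustrate the flush-on-close behavior described above, here is a minimal Python sketch using standard POSIX calls (the file name is hypothetical, and this is an illustration of general NFS close-to-open semantics, not NetBackup internals):

```python
import os

# Write a file and explicitly force the data out of the client-side cache.
# Under NFS close-to-open consistency, close() flushes dirty pages to the
# server; fsync() forces them out earlier, while the file is still open.
path = "backup_fragment.tmp"  # hypothetical file name

with open(path, "wb") as f:
    f.write(b"backup data block")
    f.flush()             # push Python's userspace buffer to the OS
    os.fsync(f.fileno())  # ask the OS to commit cached pages to stable storage
# close() (end of the with-block) also flushes dirty data to the NFS server

print(os.path.getsize(path))  # size is now reported accurately
```

Until such a flush or close occurs, the server-side view of the volume (and therefore the free-space figure NetBackup sees) can lag well behind what the client has written.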
The following best practices are recommended when using NFS volumes as storage units with NetBackup:
Using NFS volumes in AdvancedDisk disk pools
When NFS volumes are used in AdvancedDisk disk pools, each disk pool should contain only one NFS volume. Because of NFS client caching, NetBackup cannot reliably detect the disk-full condition and so will not always span to the next volume correctly.
Using NFS volumes for disk staging
When using NFS volumes for disk staging (either with BasicDisk staging storage units or with AdvancedDisk and capacity managed retention) observe the following rules:
1. Set the high water mark (HWM) to a value that allows several minutes of write time before a disk-full condition is reached. Typically set the HWM to around 95%, but decrease this value for smaller volumes. (For example, on a 100 GB volume writing at 50 MB/sec, set the HWM to 85% to allow a 5-minute buffer between hitting the HWM and disk full.) This extra head room reduces the risk that disk full is reached before older images are removed as a result of passing the HWM.
2. Set the fragment size of the storage unit to a small percentage of the volume size. As each fragment is closed, the cached data is committed and the available space is updated correctly. Smaller fragment sizes cause the data to be flushed to disk more frequently, reducing the risk of a sudden large change in available space. The default fragment size is 500 GB; reducing this to 1% of the volume size or less is recommended.
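The sizing rules above reduce to simple arithmetic. The sketch below reproduces the article's worked example (a 100 GB volume written at 50 MB/sec with a 5-minute buffer); the variable names and the 1024 MB/GB conversion are illustrative choices, not NetBackup settings:

```python
# Figures from the article's example: 100 GB volume, 50 MB/sec write rate,
# and a 5-minute buffer wanted between hitting the HWM and disk full.
volume_gb = 100
write_rate_mb_s = 50
buffer_minutes = 5

# Headroom needed above the HWM to absorb writes for buffer_minutes.
headroom_gb = write_rate_mb_s * buffer_minutes * 60 / 1024  # ~14.6 GB
hwm_percent = 100 * (1 - headroom_gb / volume_gb)

# Fragment size: 1% of the volume or less, per the recommendation above.
fragment_gb = volume_gb * 0.01

print(round(hwm_percent), fragment_gb)  # ~85% HWM, 1.0 GB fragments
```

Running the same calculation for your own volume size and measured write rate gives a defensible HWM instead of a guessed one; smaller volumes need proportionally lower HWM values because the fixed time buffer consumes a larger share of the volume.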

Terms of use for this information are found in Legal Notices