Did you know that VxDMP is auto-configured for a wide range of storage arrays and optimized to run out of the box?
The multi-pathing policies are connectivity based and therefore apply to all the LUNs enclosed in the same physical entity, i.e. the instance of the storage array. Hence it makes sense to set these policies on a per storage array instance basis, which avoids further configuration steps whenever new LUNs are added to an existing storage array and exposed to the same host.
VxDMP understands the specific array using storage array specific policy modules and employs the algorithm best suited to providing access to LUNs from that array. The policies are pre-configured (out of the box) based on the array characteristics and hence do not require any user configuration. The administrator can simply plug in the array and start using it.
However, if the administrator chooses, they can override the defaults for the storage array, and these settings are persistent. VxDMP offers a choice of multiple I/O policies, as well as multiple proactive error detection and recovery policies. In a VMware environment, these can be changed directly from vCenter using the VxDMP plugin or via the VxDMP remote administration CLI.
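As a rough sketch of what such an override looks like from the CLI, the per-enclosure I/O policy can be inspected and changed with `vxdmpadm` (the enclosure name `emc_clariion0` below is a placeholder; the actual names on your host come from `vxdmpadm listenclosure`):

```shell
# List the enclosures (storage array instances) that VxDMP has discovered
vxdmpadm listenclosure all

# Override the default I/O policy for one array instance; the setting
# applies to every LUN behind that enclosure and persists across reboots
vxdmpadm setattr enclosure emc_clariion0 iopolicy=minimumq

# Verify the current and default I/O policy for the enclosure
vxdmpadm getattr enclosure emc_clariion0 iopolicy
```

Because the policy is set on the enclosure rather than on individual LUNs, any new LUNs provisioned from the same array automatically pick up the same setting.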
VxDMP also understands the LUN path characteristics, especially in the case of Asymmetric Logical Unit Access (ALUA) arrays, and uses only the Active/Optimized LUN paths for I/O traffic, thus retaining the storage administrator's configured balance of I/O load on the array storage controllers. Similarly, when connectivity is restored after a SAN outage, DMP automatically fails back to the initial I/O load distribution set by the storage administrator.
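To see this in practice, the path states for a given LUN can be listed with `vxdmpadm` (a sketch; the DMP node name `emc_clariion0_89` is a placeholder for a device on your host):

```shell
# Show all paths for one LUN; on ALUA arrays the attributes column
# distinguishes primary (Active/Optimized) paths, which carry I/O,
# from secondary (Active/Non-optimized) paths, which are held in reserve
vxdmpadm getsubpaths dmpnodename=emc_clariion0_89
```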
VxDMP employs several advanced error detection and recovery algorithms to achieve quicker recovery from connectivity failures with less impact on the CPU, which is vital when operating in a hypervisor.
- Low impact path probing (LIPP) reduces the CPU consumption for periodic path connectivity checks by sending a single connectivity check probe for a set of paths that share the same connectivity infrastructure.
- Subpath failover grouping (SFG) classifies LUN paths, based on connectivity, into groups that fail or get restored together so that the failure detection and connectivity recovery can be employed on a group basis rather than on a LUN-by-LUN basis. This dramatically reduces the CPU consumption and speeds up recovery.
- Idle LUN probing allows proactive error detection on idle LUN paths, thus avoiding the cases where I/O is sent on a path whose connectivity has been disrupted. This avoids wasting CPU resources on processing errors while servicing I/O.
- Path ageing is a technique that detects flaky paths, typically the result of a loose connection or failing infrastructure, and avoids using those paths for active I/O until they are found to be 'stable'. Again, this conserves host resources and allows them to be put to optimum use.
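These features map onto DMP tunables that can be toggled from the CLI. As a hedged sketch (the tunable values shown are illustrative, not recommendations; check your product version's documentation for the defaults):

```shell
# Low impact path probing: one probe covers a set of paths that
# share the same connectivity infrastructure
vxdmpadm settune dmp_low_impact_probe=on

# Subpath failover grouping: a threshold of 0 disables SFG;
# a non-zero value enables group-based failure detection and recovery
vxdmpadm settune dmp_sfg_threshold=1

# Proactive error detection on idle LUN paths
vxdmpadm settune dmp_probe_idle_lun=on

# Path ageing: seconds an intermittently failing path must remain
# stable before it is considered usable for active I/O again
vxdmpadm settune dmp_path_age=300

# Review the current values of all DMP tunables
vxdmpadm gettune all
```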
The administrator can enable or disable these advanced features, but it is recommended to leave them enabled for best performance. The administrator can also set host-centric policies, such as how frequently the check for connectivity restoration is made, the number of worker threads employed for maintenance tasks, and so on. The default values are optimal for most cases and should not require any changes.
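The host-centric policies mentioned above are also plain tunables. A minimal sketch, assuming the shipped defaults (again, verify against your version's documentation before changing anything):

```shell
# Interval, in seconds, between checks for restored path connectivity
vxdmpadm settune dmp_restore_interval=300

# Number of kernel threads available for DMP maintenance tasks
vxdmpadm settune dmp_daemon_count=10
```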
I would like to hear about your experiences with VxDMP, and your feedback on its ability to operate out of the box while providing enterprise-grade multi-pathing.