Failed temporary mount of ZFS FileSystem during BMR restore
When trying to restore with BMR we faced the following:
1) After we performed the prepare-to-restore in BMR and booted the Solaris client from the network, the restore failed with the error:
+ /usr/sbin/zfs mount rpool/var
cannot mount 'rpool/var': 'canmount' property is set to 'off'
+ (( 1 ))
+ echo ERROR: failed temporary mount of ZFS FileSystem rpool/var at /tmp/mnt/var.
ERROR: failed temporary mount of ZFS FileSystem rpool/var at /tmp/mnt/var.
The solution was to boot the client, set the 'canmount' property back to 'on', and take a new backup.
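On the running client, the property change can be done along these lines (a sketch; the dataset name is taken from the error above, and your pool layout may differ):

```shell
# Check the current value of 'canmount' on the affected dataset
zfs get canmount rpool/var          # shows 'off' in the failing case

# Allow the dataset to be mounted, then verify it mounts cleanly
zfs set canmount=on rpool/var
zfs mount rpool/var
```

After the property change, a fresh full backup is needed so that the BMR configuration captures the new setting.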
2) After fixing the first issue we retried the restore, and this time it failed with the following error:
+ print Creating ZFS Storage Pool rpool
+ 1>> /dev/console
+ /usr/sbin/zpool create -f -m /tmp/mnt/rpool rpool c1t0d0s0 spare c1t2d0s0 c1t3d0s0
cannot open '/dev/dsk/c1t0d0s0': I/O error
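Before falling back to a tape restore, a few sanity checks from the BMR restore environment could rule out device-link or label problems on the disk that zpool cannot open. This is a hypothetical troubleshooting sketch, not a confirmed fix; the device names come from the error above:

```shell
# Rebuild /dev links in case the restore environment's device tree is stale
devfsadm -Cv

# Confirm the disk's label/VTOC is readable on the slice zpool complained about
prtvtoc /dev/rdsk/c1t0d0s0

# List the disks the kernel can actually see (non-interactive listing)
format </dev/null
```

An I/O error from a healthy disk often points at the restore environment (drivers, device tree, labeling) rather than the hardware itself, which is consistent with the version-mismatch suspicion below.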
We had to restore from local tape to bring the client back. Note that the disks do not have any hardware problem, so we suspect a mismatch between the system versions used:
We are using an SRT built on Solaris 10 Update 9, while the node we want to restore has a later kernel patch installed (Generic_147440-12). I tried to add all the missing patches to the SRT, but while installing some of them the installation hangs and the SRT becomes invalid.
Could anyone help with how to add the following patches to the SRT:
Successful installation of the above patches is very random: a patch succeeds on one attempt and fails on the next.
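For reference, one way this is commonly attempted is to apply patches to the SRT offline with patchadd's alternate-root option; the supported path is usually through NetBackup's bmrsrtadm "modify SRT" workflow, which mounts the SRT for you. The sketch below is an assumption, not a verified procedure: the SRT path and patch location are placeholders.

```shell
# Hypothetical: apply a patch into an SRT using an alternate root.
# SRT_ROOT and the patch directory are placeholders for your environment.
SRT_ROOT=/export/srt/sol10u9_srt

# Install the unpacked patch into the SRT's root instead of the live system
patchadd -R "$SRT_ROOT" /var/tmp/147440-12

# Verify the patch is recorded inside the SRT
patchadd -R "$SRT_ROOT" -p | grep 147440
```

If patchadd hangs intermittently, it is worth checking whether the patch's install scripts expect a live system (e.g. they try to restart services); such patches may only install cleanly through the bmrsrtadm modify session rather than a raw patchadd.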