I didn't get a single hit from this mail. :-(

The ZFS root file system finally reached a point of no return. I didn't
have a chance to try reseating the drives or anything. The machine
crashed and wouldn't boot again, so we just replaced both disks.

For the migration issue... I had installed the OS on machine2, then we
popped it (a single disk at this point) into machine1. As before, it
failed to boot, saying that the zfs pool 'rootpool' was last mounted on
a different machine (machine2). So I booted off the network into single
user, ran 'zpool import -f rootpool', then rebooted off the disk and all
was fine. There may be a better procedure, but this does work (a
command-level sketch follows the quoted message below).

FYI

Tom Lieuallen
Oregon State University

Tom Lieuallen wrote:
> I have a V120 running Solaris 10 U6. I decided to use a zfs root file
> system so that I could mirror without all the Solstice pieces. I
> installed the OS, then attached the second drive after the OS was up
> (zpool attach ...). It appears that the resilvering had problems.
>
> # zpool status
>   pool: rootpool
>  state: DEGRADED
> status: One or more devices has experienced an error resulting in data
>         corruption. Applications may be affected.
> action: Restore the file in question if possible. Otherwise restore the
>         entire pool from backup.
>    see: http://www.sun.com/msg/ZFS-8000-8A
>  scrub: resilver completed after 0h41m with 51926 errors on
>         Wed Dec 31 13:05:02 2008
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         rootpool      DEGRADED    10     0  108K
>           mirror      DEGRADED    10     0  108K
>             c0t0d0s0  FAULTED     20     0     0  too many errors
>             c0t1d0s0  DEGRADED     0     1  216K  too many errors
>
> errors: Permanent errors have been detected in the following files:
>
>         /a1/solaris10/include/curl/curl.h
>         /a1/solaris10/include/curl/curlver.h
>         ...
>         <ad nauseam>
>         ...
>
> # zfs list
> NAME                            USED  AVAIL  REFER  MOUNTPOINT
> rootpool                       13.3G  19.9G    94K  /rootpool
> rootpool/ROOT                  4.25G  19.9G    18K  legacy
> rootpool/ROOT/solaris10_6      4.25G  19.9G  4.17G  /
> rootpool/ROOT/solaris10_6/var  83.3M  19.9G  83.3M  /var
> rootpool/a1                    2.75G  5.25G  2.75G  /a1
> rootpool/dump                  2.00G  19.9G  2.00G  -
> rootpool/private                341M  5.67G   341M  /private
> rootpool/swap                     4G  23.9G    16K  -
>
> # iostat -e
>           ---- errors ---
> device    s/w  h/w  trn   tot
> ramdisk1    0    0    0     0
> sd0         0  129  487   616
> sd3         0  256  877  1133
> nfs1        0    0    0     0
>
> The system logs report SCSI bus resets and read and write errors for
> both disks. I assume one of the disks is causing problems for
> everything on the bus.
>
> The 'faulted' disk is the one with the original OS. The files it is
> reporting problems with are all on rootpool/a1. There are only ~50,000
> of them. That's just a copy of our /usr/local, so there's nothing there
> I need.
>
> Should we just shut the machine down, reseat both disks, and hope for
> the best? I'm concerned it won't boot again. :-( Or, if we try to
> hot-plug one of the disks, it may panic too.
>
> I have another related question...
>
> I'd like to prepare another disk or two to slap into this host in case
> it's not repairable or in case something I do makes things worse. :-)
> What is the proper procedure for installing a machine with zfs root,
> then moving that disk to another host? I did this once before and it
> complained about the zpool being last used by a different host. I ended
> up booting off the net and forcing the import. To simplify matters,
> should I boot the temporary install host off the net and export the
> zpool? Or should I just be prepared to force the import?
>
> thank you
>
> Tom Lieuallen
> Oregon State University
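
For reference, the recovery steps described at the top of this message
come down to roughly the commands below. This is a sketch only: it
assumes a SPARC net-boot (JumpStart) image is available, and the OBP
device aliases ('net', 'disk') will differ from machine to machine.

ok boot net -s                 # boot the install miniroot into single user

# zpool import -f rootpool     # force the import; this clears the "pool was
                               # last mounted on a different machine" complaint
# init 0                       # back to the ok prompt

ok boot disk                   # boot from the local disk as usual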
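
The alternative raised at the end of the quoted message (booting the
temporary install host off the net and exporting the pool before pulling
the disk) should avoid the need to force the import, since an exported
pool is no longer marked as in use by another system. A sketch, assuming
the install host can also be booted off the net, because the root pool
cannot be exported while the system is running from it; the altroot
(-R /mnt) just keeps the pool's file systems from mounting over the
miniroot:

ok boot net -s                    # on the install host, once the install is done

# zpool import -R /mnt rootpool   # same host, so no -f should be needed
# zpool export rootpool           # marks the pool cleanly exported
# init 0                          # halt, then move the disk to the target host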