These are the steps taken to resolve the problem:

1) booted into single-user mode

2) unmounted all filesystems that were on the external storage array

3) ufsdumped each filesystem to a remote NFS mount

4) metacleared all metadevices that were on the external storage array

      metaclear -f d2
      metaclear -f d1
      metaclear d0
      metaclear d10

5) recreated the d0 RAID5 metadevice

      metainit d0 -r c1t0d0s3 c1t0d1s3 c1t0d2s3 c1t0d3s3 c1t0d4s3 \
          c1t1d0s3 c1t1d1s3 c1t1d2s3 c1t1d3s3 c1t3d3s3 -k -i 32b

6) recreated the d10 mirror metadevice

      metainit d10 -m d50 d51 1
      metainit d50 1 1 c1t4d0s0
      metainit d51 1 1 c1t4d4s0
      newfs /dev/md/rdsk/d51
      metattach d10 d51

7) shut down the server and replaced the two bad disks

      Tray 3 - Drive 4,2
      Tray 3 - Drive 5,0

8) powered the server back up

9) since the d1 and d2 RAID5 metadevices each contained a defective disk
   that was replaced, they both had to be recreated from scratch and have
   the data restored from backups

      metainit d1 -r c1t5d0s3 c1t5d1s3 c1t5d3s3 c1t5d4s3
      metainit d2 -r c1t3d1s3 c1t4d1s3 c1t4d2s3 c1t4d3s3

10) restored /install1 and /install2 from backups (a rough sketch of the
    dump and restore commands is at the end of this message)

Thanks for everyone's help on this (Ian, Dwight, and others). I thought
about taking Ian's advice on renaming all /dev/dsk/c1* entries to c2*, but
decided that I'll do things the *safe* way instead and just recreate and
restore.

Chris

> I have a Solaris 6 server where the disks used in each metadevice do not
> correspond to the real disk numbers. For example, most of the disks that
> are part of metadevices start with c2tXdXsX, while the actual disk names
> in the system are c1tXdXsX. I believe this was caused by a hardware
> change that probably happened somewhere along the line. The metadevices
> still work somehow, but I'm not sure why or how they still work. I
> found out about this when one of the devices needed maintenance and I
> ran the metareplace command....
>
> bash-2.05# metareplace -e d1 c2t5d0s3
> metareplace: earthquake: c2t5d0s3: No such file or directory
>
> The new disk name would be c1t5d0s3. I need to fix all of these for each
> metadevice. Here's an example of one...
>
> d1: RAID
>     State: Needs Maintenance
>     Invoke: metareplace d1 c2t5d0s3 <new device>
>     Interlace: 64 blocks
>     Size: 100615336 blocks
> Original device:
>     Size: 100617984 blocks
>         Device       Start Block   Dbase   State         Hot Spare
>         c2t5d0s3     5362          No      Maintenance
>         c2t5d1s3     5362          No      Okay
>         c2t5d3s3     5362          No      Okay
>         c2t5d4s3     5362          No      Okay
>
> What is the best way to fix these metadevices? I don't think that the
> server will come back up after a reboot in its current state.
>
> How are these devices still working when they are referencing disk names
> that no longer exist?
>
> What, besides a defective disk, causes metadevices to require
> maintenance? This happens often, and a lot of times simply running the
> metareplace command fixes the situation.
>
> fyi, the uptime on this server is over 600 days!!
>
> Thanks for the help!
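
A rough sketch of what steps 3 and 10 boil down to, for anyone following
the same procedure. The NFS server name (backuphost), the mount point, and
the dump file names below are placeholders rather than the actual ones used
here, and this assumes d1 held /install1 and d2 held /install2:

      # mount the remote NFS area that will hold the dump files
      mount -F nfs backuphost:/export/backups /mnt

      # step 3: level-0 ufsdump of each (unmounted) filesystem to a file
      ufsdump 0f /mnt/install1.dump /dev/md/rdsk/d1
      ufsdump 0f /mnt/install2.dump /dev/md/rdsk/d2

      # step 10: after recreating d1 and d2, make new filesystems,
      # mount them, and restore the dumps into them
      newfs /dev/md/rdsk/d1
      mount /dev/md/dsk/d1 /install1
      cd /install1 && ufsrestore rf /mnt/install1.dump

      newfs /dev/md/rdsk/d2
      mount /dev/md/dsk/d2 /install2
      cd /install2 && ufsrestore rf /mnt/install2.dump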