*******************************************************
--------------------
Disk setup on daphne
--------------------

Daphne is set up for RAID 1. This means all data is mirrored to two separate
drives. In Daphne's case, there are two 9GB drives installed with SCSI IDs 0
and 1 (/dev/dsk/c0t0d0 and /dev/dsk/c0t1d0 respectively). Each disk is
"broken" into 7 "slices" (which we can think of as partitions), and the disks
must be partitioned identically (or the slices on the second mirror must be at
least as large as those on the first). For daphne the disks were partitioned
as follows:

slice  name        flag  cylinders  size      cyl/alt/hd  blocks
----------------------------------------------------------------------
0      root        wm    600-4923   7.40GB    4324/0/0    15527484
1      swap        wu    0-292      513.75MB  293/0/0     1052163
2      backup      wm    0-4923     8.43GB    4924/0/0    17682084
3      var         wm    293-577    499.72MB  285/0/0     1023435
4      unassigned
5      unassigned  wm    578-582    8.77MB    5/0/0       17955
6      unassigned  wm    583-588    10.52MB   6/0/0       21546
7      unassigned  wm    589-599    19.29MB   11/0/0      39501
----------------------------------------------------------------------

Slice 2 is generally reserved and spans the entire disk, so it is never
mounted (but it provides an easy way to back up an entire disk to tape).

Only slices 0, 1 and 3 are mirrored. Slices 5, 6 and 7 are identical on both
disks, but didn't need to be. Each of those slices (6 in all) holds a state
database replica, which is basically what the system uses to keep the mirrors
in sync. It is important to have multiple state database replicas on each
disk: the system needs at least half of the replicas available to keep
running, and a majority (half plus one) to boot, so if one disk fails it must
still find enough replicas on the surviving disk. These slices should be at
least 4MB in size.

Mirroring uses the concept of a metadevice: we create a metadevice for every
partition we want to mirror. The two partitions that will actually be
mirrored are then made into submirrors and attached to the metadevice.
Below shows how the metadevices are set up on daphne.
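Since the slices on both disks must match, the layout can be copied from disk 0
instead of repeating it by hand in format. A sketch, using daphne's device
names and assuming both disks have identical geometry (prtvtoc reads the VTOC
from the raw device; fmthard -s - writes a table read from stdin):

```shell
# Copy disk 0's partition table (VTOC) to disk 1 so both mirrors have
# identical slices. Slice 2 spans the whole disk, so it is a convenient
# handle for the raw device. Run as root on the Solaris host.
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
```

Afterwards, verify with "prtvtoc /dev/rdsk/c0t1d0s2" that the slice tables on
the two disks agree.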
d0 is /       d10 is disk0 /    (c0t0d0s0)    d20 is disk1 /    (c0t1d0s0)
d1 is swap    d11 is disk0 swap (c0t0d0s1)    d21 is disk1 swap (c0t1d0s1)
d3 is /var    d13 is disk0 /var (c0t0d0s3)    d23 is disk1 /var (c0t1d0s3)

-------------------------------
To create and set up metadevices
-------------------------------

// Assumption: Solaris is initially installed on disk 0 before setting up RAID

metadb -c 2 -a -f /dev/dsk/c0t0d0s5    //Create copies of the state database replica
metadb -c 2 -a -f /dev/dsk/c0t0d0s6
metadb -c 2 -a -f /dev/dsk/c0t0d0s7
metadb -c 2 -a -f /dev/dsk/c0t1d0s5
metadb -c 2 -a -f /dev/dsk/c0t1d0s6
metadb -c 2 -a -f /dev/dsk/c0t1d0s7

metainit -f d10 1 1 c0t0d0s0    //Initialize slices as "submirrors"
metainit -f d20 1 1 c0t1d0s0
metainit -f d11 1 1 c0t0d0s1
metainit -f d21 1 1 c0t1d0s1
metainit -f d13 1 1 c0t0d0s3
metainit -f d23 1 1 c0t1d0s3

metainit d0 -m d10    //Set one of the submirrors as the master mirror for the metadevice
metainit d1 -m d11
metainit d3 -m d13

metattach d0 d20    //Attach the other submirror to the metadevice
metattach d1 d21    //Now we should have a functioning RAID 1 system and it should
metattach d3 d23    //begin syncing up the drives at this point

metaroot d0    //Updates /etc/vfstab (and /etc/system) to use the root metadevice
// metaroot only rewrites the root entry; edit /etc/vfstab by hand so the swap
// and /var entries use /dev/md/dsk/d1 and /dev/md/dsk/d3, then reboot.

installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0    // Make the mirrored disk bootable

//Optional
//To enable easy booting from the second mirror (in case the first disk fails),
//we can add aliases to the OpenBoot PROM

ls -l /dev/dsk/c0t0d0s0    // The symlink target is the full device path for the first disk
ls -l /dev/dsk/c0t1d0s0    // The symlink target is the full device path for the second disk

//At the OBP prompt:
ok nvalias disk0 /sbus@1f,0/SUNW,fas@e,8800000/sd@0,0:a    //This is disk0 on daphne
ok nvalias disk1 /sbus@1f,0/SUNW,fas@e,8800000/sd@1,0:a    //This is disk1 on daphne

//Now you can boot from the second disk at the OBP prompt by just typing "boot disk1"

// To enable the system to boot from the other disk if one disk fails, do the
following at the OBP:

ok setenv boot-device disk0 disk1

// Remember that the secondary disk needs to be made bootable using the "installboot" command above.
// Now the system can boot from either disk, but if one disk fails, only half
// the database replicas will be available, and the system complains about insufficient database
// replicas, write permissions, read permissions, etc. To overcome this problem, modify the file
// /etc/system by appending the line:

set md:mirrored_root_flag=1

// That way, the system is allowed to boot even if just half the database replicas are available.
// (With 3 replicas on each disk, losing a disk leaves exactly half available, which is not a
// majority, so without this flag the system will not boot unattended.)
// Alternatively, if a disk dies and Solaris wants a quorum of DB replicas, boot into single-user
// mode, delete the state database replicas on the failed disk, and reboot:

boot -s
metadb -d /dev/dsk/c0t?d0s5    //Replace ? with the failed disk's SCSI target
metadb -d /dev/dsk/c0t?d0s6
metadb -d /dev/dsk/c0t?d0s7
reboot

---------------------------------
Checking metadevice (RAID) status
---------------------------------

Use the metastat command. To see the status of all metadevices, use it
without arguments:

metastat

To see the status of an individual metadevice, specify the metadevice as an
argument:

metastat d0

-----------------------------------------------------
To detach a submirror (disk) [i.e. for replacement]
-----------------------------------------------------

// Assumption: We're replacing disk 1, so detach all submirrors on it

metadetach d0 d20    //Detach each submirror from its metadevice
metadetach d1 d21
metadetach d3 d23

metadb -d c0t1d0s5    //Remove state database replicas from the disk (use metadb -i to verify)
metadb -d c0t1d0s6
metadb -d c0t1d0s7

metaclear d20    //Remove the submirror definition/instantiation (use metastat to verify)
metaclear d21
metaclear d23

---------------------
To attach a submirror
---------------------

// Assumption: We just replaced disk 1 with a new disk and partitioned and
// formatted it correctly (see above)

metadb -a -f /dev/dsk/c0t1d0s5    //Add state database replicas to the new drive (3 copies)
metadb -a -f /dev/dsk/c0t1d0s6
metadb -a -f /dev/dsk/c0t1d0s7

metainit -f d20 1 1 c0t1d0s0    //Initialize the new submirrors
metainit -f d21 1 1 c0t1d0s1
metainit -f d23 1 1 c0t1d0s3

metattach d0 d20    //Attach the submirrors to their metadevices (the disks resync at this point)
metattach d1 d21
metattach d3 d23

metaroot d0    //Updates /etc/vfstab to use metadevices

installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0    // Make the new mirrored disk bootable

// If both submirrors are replaced with bigger drives (replacing one at a time, resyncing data in
// between), and some of the slices are bigger now, use the growfs command to let the file systems
// use the additional space.

------------------------------------------------------
To be able to boot with only 50% of metaDBs available
(one disk failure in a two-disk mirror setup)
------------------------------------------------------

You can force Solstice DiskSuite software to start when only half of the
state database replicas are available by setting the tunable
mirrored_root_flag to 1 in /etc/system. By default this tunable is disabled,
which requires that a majority of all replicas be available and in sync
before Solstice DiskSuite software will start.
To enable this tunable, type the following:

echo set md:mirrored_root_flag=1 >> /etc/system

Caution: enable mirrored_root_flag only if the following prerequisites are met:

1) The configuration has only two disks.
2) Unattended reboots of a system with only two disks are a requirement, and
   the risk of booting from a stale replica, and therefore a stale volume
   management system state, is acceptable. This means that in your setup,
   system availability is more important than data consistency and integrity.

Setting this tunable risks data corruption or data loss in the event of a
transient disk failure, but allows unattended reboots.

See the Solaris Volume Manager documentation on docs.sun.com for more
information about state database replicas (metadbs).

*******************************************************
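The append-and-verify step above can be rehearsed safely before touching the
live file. A sketch using a scratch copy (the path /tmp/system.test is just
an example; on the real system, operate on /etc/system itself):

```shell
# Work on a scratch copy so this can be tried without touching the live
# /etc/system (the "|| touch" covers hosts where the file doesn't exist).
cp /etc/system /tmp/system.test 2>/dev/null || touch /tmp/system.test

# Append the tunable only if it is not already present, so repeated runs
# don't accumulate duplicate "set" lines.
grep -q 'md:mirrored_root_flag' /tmp/system.test ||
    echo 'set md:mirrored_root_flag=1' >> /tmp/system.test

# Verify the line is in place before rebooting.
grep 'md:mirrored_root_flag' /tmp/system.test
```

The same grep check against /etc/system is a quick sanity test after the real
edit and before the next reboot.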