I set up these partitions using the Solaris 10 installer in console install mode as part of the
install.  Note that for the partition type you have to select DOS, then change it later for the ZFS
partition (this is optional; ZFS doesn't care what the partition type is).  Once the system is
booted, you are ready to set up RAID.

prtvtoc /dev/rdsk/c1d0s2 | fmthard -s - /dev/rdsk/c2d0s2

This copies the slice definitions from the first disk to the second on my sample box; change the
disk device paths to match your own hardware.  (If you didn't manually set up fdisk partitions on
the second disk to match the first, do that now, then return to this step.)  The slice tables must
match exactly.

metadb -af -c 2 /dev/dsk/c1d0s1 /dev/dsk/c1d0s6
metadb -af -c 2 /dev/dsk/c2d0s1 /dev/dsk/c2d0s6

This command sets up the initial metadbs on both disks.  Next, metainit the partitions we want to
use.  Note that this works fine on your booted, live root filesystem:

metainit -f d10 1 1 /dev/dsk/c1d0s0
metainit -f d20 1 1 /dev/dsk/c2d0s0
metainit d0 -m d10

metainit -f d11 1 1 /dev/dsk/c1d0s3
metainit -f d21 1 1 /dev/dsk/c2d0s3
metainit d1 -m d11

... (rinse, lather, and repeat for each set)
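Since the pattern repeats per slice, you can generate the commands with a small loop rather than
typing them out.  A sketch, assuming the same disks (c1d0/c2d0) and that slices 0 and 3 are the
ones being mirrored - adjust the slice list to your own layout.  It only prints the commands;
review the output, then pipe it through sh when you're happy with it:

```shell
# Print (not run) the metainit commands for each mirrored slice.
# Disk names and the slice list are assumptions from the example above.
i=0
for s in 0 3; do
    echo "metainit -f d1$i 1 1 /dev/dsk/c1d0s$s"
    echo "metainit -f d2$i 1 1 /dev/dsk/c2d0s$s"
    echo "metainit d$i -m d1$i"
    i=$((i+1))
done
```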

Then there's a handy little command that sets up your root filesystem in vfstab for you:

metaroot d0

Follow the model provided in your vfstab by this command to add each additional configuration to 
your setup.
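For reference, the vfstab columns are: device to mount, device to fsck, mount point, filesystem
type, fsck pass, mount at boot, and mount options.  A hedged example of finished entries - the root
line is what metaroot produces for d0, while the /export line is hypothetical, just to show the
pattern for additional metadevices:

```
/dev/md/dsk/d0  /dev/md/rdsk/d0  /        ufs  1  no   -
/dev/md/dsk/d1  /dev/md/rdsk/d1  /export  ufs  2  yes  -
```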

Next, reboot to move your mounted system partitions onto the new RAID devices.  Afterwards, issue
these commands to attach the second disk's submirrors to the first's:

metattach d0 d20
metattach d1 d21

... (rinse, lather, and repeat for each set)

You can use metastat -c to watch the progress of your raid syncing. 

You need to duplicate your layout from disk0 to disk1. It's fairly important that the disk geometry 
matches. Metadevices work at the block-level of the disk, and if one disk has fewer blocks than the 
other you'll wind up making a mess. Once you're sure you're ready to proceed, dump the layout from 
disk0 to disk1 thusly:

prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

Second, you need to create your meta-databases (state database replicas).  These hold the volume
manager's configuration and state, and must exist before you can create any metadevices.  Do the
following:

metadb -af -c 2 /dev/dsk/c0t0d0s3 /dev/dsk/c0t0d0s4
metadb -af -c 2 /dev/dsk/c0t1d0s3 /dev/dsk/c0t1d0s4

This adds (-a) 2 (-c for count) meta-databases in each of the slices; -f forces creation since none
exist yet.  If you have more disks, you can spread the databases across them for better performance
and fault-tolerance.
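For instance, on a hypothetical four-disk box you could put two replicas on slice 3 of every disk.
A sketch that only prints the metadb commands; the disk names are made up for illustration:

```shell
# Print (not run) one metadb command per disk, two replicas on slice 3 of each.
# The disk list below is hypothetical; substitute your own.
for d in c0t0d0 c0t1d0 c0t2d0 c0t3d0; do
    echo "metadb -af -c 2 /dev/dsk/${d}s3"
done
```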

The next step is to create your RAID devices.  In a two-disk system, you're stuck with RAID 0 and
RAID 1.  Since RAID 0 is almost pointless here (you're doing this for redundancy, remember?!), we'll
go with RAID 1 - mirrored disks.

We'll deal with the following raid devices and members:

d0 - / mirror
d10 - /dev/dsk/c0t0d0s0
d20 - /dev/dsk/c0t1d0s0

d1 - swap
d11 - /dev/dsk/c0t0d0s1
d21 - /dev/dsk/c0t1d0s1

The device names are somewhat arbitrary. In a simple setup like this, I use d0 to match up with a 
mirrored slice0, and d10 to indicate member 1 of d0 (member 1 d0 = d10, member 2 d0 = d20).
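In other words, under this scheme submirror m of mirror dN is named d(10m + N).  If you like to
sanity-check the convention, here's a trivial helper; the function name is made up for
illustration:

```shell
# submirror MEMBER MIRROR_NUM - print the submirror name under the scheme above
submirror() {
    echo "d$(( $1 * 10 + $2 ))"
}

submirror 1 0   # prints d10, member 1 of d0
submirror 2 0   # prints d20, member 2 of d0
submirror 1 1   # prints d11, member 1 of d1
```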

So create the raid devices and members:

metainit -f d10 1 1 /dev/dsk/c0t0d0s0
metainit -f d20 1 1 /dev/dsk/c0t1d0s0
metainit d0 -m d10

metainit -f d11 1 1 /dev/dsk/c0t0d0s1
metainit -f d21 1 1 /dev/dsk/c0t1d0s1
metainit d1 -m d11

This initializes the devices.  The -m flag creates each mirror as a one-way mirror with its first
submirror already attached; the command "metastat" will show the mirrors with only one half in
place.

That first half is the disk that you're currently running on.  Your data is still there.

Next, you need to ensure the system will use the metadevices. The root-filesystem is easy:

metaroot d0

Next, you need to edit /etc/vfstab to change the swap device to use /dev/md/dsk/d1 as swap. While 
you're in there, turn on logging under the mount options for the root filesystem (d0). Double-check 
that you haven't screwed up. Save and exit if it all looks good.

Once you're done, issue the following:

lockfs -fa
init 6

Watch your system come up. There will be some new messages, most notably the kernel complaining 
about not being able to forceload three raid modules:

forceload of misc/md_trans failed
forceload of misc/md_raid failed
forceload of misc/md_hotspares failed

You can ignore these messages. They're harmless. Basically, you haven't created any raid-devices 
that require those modules so they're refusing to load.
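If you're curious where those forceload attempts come from, look in /etc/system: metaroot added a
stanza there.  A hedged example of what it typically looks like; the exact module list and the
rootdev path vary by machine, so treat this as illustrative only:

```
* Begin MDD root info (do not edit)
forceload: misc/md_trans
forceload: misc/md_raid
forceload: misc/md_hotspares
forceload: misc/md_stripe
forceload: misc/md_mirror
rootdev:/pseudo/md@0:0,0,blk
* End MDD root info (do not edit)
```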

Now that your system is up (You didn't mess up vfstab, did you?!), you need to finish off the 
process. Log in and do this:

metattach d0 d20
metattach d1 d21

You'll notice that your system is now a little slower: both commands took a moment to return, and
your disks are going nuts.  Look at the output of "metastat" and you'll see why - your disks are
resyncing, copying everything from the first submirror onto the second.

You'll need to install the boot block on your second disk so that you can boot from it.  This is
fairly easy to do:

installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0

You might also want to set the OBP to boot from disk1 if it can't boot from disk0. If you bring the 
machine to the OBP (ok) prompt via init 0, you can enter the following:

setenv boot-device disk disk1
boot disk1

This will set up a failover boot to disk1. The very last command there will also boot from disk1, 
proving to you that this works. Do be sure to substitute the correct disk for "disk1".