Root on a RAID setup for OpenBSD 3.0
Terry Danylak
Submitted: 09/01/02

 
OPENBSD
ROOT ON A RAID SETUP HOWTO

To set up root on a RAID set you need a working copy of OpenBSD and two hard disks with the same geometry. I recommend installing the OpenBSD 3.0 release and then upgrading to -current after the install. Configuring root on a RAID set is possible in earlier versions of OpenBSD, but you will have to find the RAIDframe patch for your version of OpenBSD. Special thanks to Thierry Deval for writing the RAIDframe patch to support root on a RAID set and for porting his code to OpenBSD.

These are the steps I went through to install and configure the disks.

Go to the following link and download the install file:
ftp://ftp.openbsd.org/pub/OpenBSD/3.0/i386/floppy30.fs
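If you are downloading from a Unix-like machine, something like this works (OpenBSD's ftp(1) accepts URLs directly; on Linux you might use wget or a similar tool instead):

ftp ftp://ftp.openbsd.org/pub/OpenBSD/3.0/i386/floppy30.fs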
After you have downloaded that file you will want to create the installation floppy. The method of creating the floppy depends on the operating system you are using to prepare it.

On Linux/BSD/*NIX

1) Place a usable floppy into the floppy drive of the computer you are using to create the installation floppy.

2) Change to the directory where floppy30.fs resides.

3) Run the following command: fdformat /dev/fd0a. You may have to use /dev/fd0 instead, depending on the machine.

4) Copy the image over to the newly formatted floppy with the following command: dd if=floppy30.fs of=/dev/rfd0c bs=126b ; once again, you may have to use /dev/rfd0 instead of /dev/rfd0c.

5) Once that completes, check that the contents of the floppy match floppy30.fs by executing the following command: cmp /dev/rfd0c floppy30.fs (once again, you may have to play with the device letter). The full sequence is summarized after this list.

6) Place the floppy into the floppy drive of the machine that you will be installing OpenBSD on and reboot/power up that computer.
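Putting steps 1 through 5 together, the whole sequence on the machine used to prepare the floppy looks roughly like this (device names may need adjusting as noted above):

cd /path/to/download                       # the directory containing floppy30.fs (use your own path)
fdformat /dev/fd0a                         # format the floppy (maybe /dev/fd0 on your system)
dd if=floppy30.fs of=/dev/rfd0c bs=126b    # write the image to the raw floppy device
cmp /dev/rfd0c floppy30.fs                 # verify the floppy matches the image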

If you successfully made the installation floppy you should see a boot prompt. You can enter boot fd0:/bsd at the prompt, or just let the boot loader time out and boot automatically. You will then be asked which task you want to perform: Upgrade, Install, or Shell. I chose Install, since I did not have a previously installed version of OpenBSD to upgrade. I won't go into detail about the installation process, but the most important part of getting root on a RAID device is the partitioning of the hard drive.
If you need help with other installation and configuration issues, see this page, which explains everything well: http://openbsd.org/faq/faq4.htm




When you get to the disklabel prompt, where you can add or remove partitions, you want to create a partition for your RAIDed root filesystem and any other partitions you wish to RAID, as well as regular partitions to hold the operating system and a swap partition.

Here is an example of my values (remember that your values may differ depending on how you want to slice up the disk):

> p

This checks which partitions already exist. In my case I only had c, which is fine. One point I have not brought up yet is that in OpenBSD the kernel cannot reside on a RAIDed filesystem, so I created a separate partition to hold the kernels.

**Note that on some computers the kernel might only be loadable if it resides below the
1024th cylinder; this is why I created the boot partition first, so that it would be
loadable**

> a g

I accepted the default offset value, entered a partition size of 102400 sectors (50 MB), selected the default filesystem type (4.4BSD), and assigned the partition a mount point of /boot.

I then added a separate partition for /usr/src, which will be used to house the OpenBSD 3.0-current sources.

> a f

I accepted the default offset value, entered a partition size of 1843200 sectors (900 MB), selected the default filesystem type (4.4BSD), and assigned the partition a mount point of /usr/src.

I then created the swap partition for the operating system. It's recommended that you make the swap partition at least double the amount of RAM in the computer. I have 64 MB of RAM in this machine, so I gave it a 128 MB swap partition.

> a b

I accepted the default offset value, entered a partition size of 262144 sectors (128 MB), and selected the default filesystem type (swap). Note that you won't be prompted for a place to mount the swap partition.






I then added the / partition to house the root filesystem.

> a e

I accepted the default offset value, entered a partition size of 1228800 sectors (600 MB), selected the default filesystem type (4.4BSD), and assigned the partition a mount point of /.

Now that the partitions have been created for the operating system, it's time to create the partitions for the RAID sets.

For this I needed two partitions: one to house root and one to hold swap. First I created the partition for the soon-to-be-RAIDed swap.

> a d

I accepted the default offset value, entered a partition size of 262144 sectors (128 MB), selected the default filesystem type (4.4BSD), and did not assign the partition a mount point.

Finally, I added the remainder of the hard drive space as the soon-to-be-RAIDed root partition.

> a a

I accepted the default offset value, accepted the default partition size, selected the default filesystem type (4.4BSD), and did not assign the partition a mount point.

I then continued with the rest of the installation (refer to the link posted above if you are uncertain about any options).
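To summarize, the partitions I ended up with look roughly like this (sizes are in 512-byte sectors; the offsets, the c partition, and the exact size of a depend on your disk geometry, so treat this as an illustration rather than values to copy):

#  size (sectors)  fstype   mount point
a: (remainder)     4.4BSD   none (future RAID component for root)
b: 262144          swap     none (128 MB system swap)
d: 262144          4.4BSD   none (future RAID component for swap)
e: 1228800         4.4BSD   /        (600 MB)
f: 1843200         4.4BSD   /usr/src (900 MB)
g: 102400          4.4BSD   /boot    (50 MB)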

Your operating system should now be installed, you should have rebooted the machine, and you should be sitting at a login prompt. Log in as root, or as a non-privileged user and su to root. Verify that all filesystems are mounted by typing df at the prompt; hopefully everything is where it should be.

Now you want to upgrade the operating system to OpenBSD 3.0-current, which has the patched version of RAIDframe that allows root to reside on a RAID set. To get the current source tree, see http://openbsd.org/anoncvs.html, as well as http://openbsd.org/faq/upgrade-minifaq.html for detailed instructions about upgrading to -current. Make sure that when you build your kernel you enable RAID support and RAID autoconfiguration support, and pay close attention to the version differences; if you don't, your -current build may fail. Here are some snippets of what my kernel configuration file looked like:
.
.
config bsd root on wd0d swap on wd0b
.
.
pseudo-device raid 4 # RAIDframe disk device
options RAID_AUTOCONFIG
.
.
As of this writing raidframe on OpenBSD 3.0 Current supports a maximum of 4 raid devices (so 4 raided partitions).
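If you start from the GENERIC configuration, the kernel configuration step might look something like this (RAIDROOT is just an illustrative name for your custom config file, not something the tutorial uses):

cd /usr/src/sys/arch/i386/conf
cp GENERIC RAIDROOT    # RAIDROOT is a made-up name; pick your own
vi RAIDROOT            # add the config/pseudo-device/options lines shown above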

At the command line, enter the following commands in order (config(8) is run from the kernel configuration directory, /usr/src/sys/arch/i386/conf):
config KERNEL_CONFIG_FILE (e.g. config GENERIC, or the name of your custom configuration)
cd ../compile/GENERIC (use your configuration name here as well)
make depend
make

Once that completes, copy the newly built kernel from the compile directory to the /boot partition, and also keep a copy of the currently installed kernel there:

cp /bsd /boot/bsd.old (if you don't give it a different name, it will overwrite the newly built kernel you just copied)
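A minimal sketch of this step, assuming the kernel was built in the GENERIC compile directory and that the boot partition created earlier is mounted on /boot:

cd /usr/src/sys/arch/i386/compile/GENERIC
cp bsd /boot/bsd        # the newly built kernel
cp /bsd /boot/bsd.old   # keep the installed kernel under a different name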

Place your installation floppy into the floppy drive and reboot. When the boot loader comes up, let it load from the floppy, and when prompted with Upgrade, Install or Shell, select s to drop to a shell. Adding a file to the boot partition causes the boot from the hard drive to fail at start time, so we have to fix that.

What you want to do is mount the partition that holds your root filesystem, and then the boot partition on top of it. In the ramdisk shell, here is what I did:

mount /dev/wd0e /mnt
mount /dev/wd0g /mnt/boot
then run the following command

/mnt/usr/mdec/installboot -v /mnt/boot/boot /mnt/usr/mdec/biosboot wd0

Once that completes (hopefully without errors about bad magic in superblocks), remove the floppy from the floppy drive and reboot the machine.

You should be able to get to a boot prompt. Now you want to tell the boot loader to load the correct kernel which, if you remember, we placed on the /boot partition. So at the boot prompt type: boot wd0g:/bsd
If your /boot partition lives somewhere else, use the device name that corresponds to that partition. At this point the boot loader should load the kernel. Once you get to the login prompt, log in as root (or as any user that is a member of the wheel group and su to root) and continue to follow the instructions for updating the operating system to OpenBSD 3.0-current.

Before we start configuring the RAID devices, we must create the disklabels for the RAID sets on both hard drives.

Here is how I did that on my machine:
disklabel wd0 > /root/disklabel.wd0
vi /root/disklabel.wd0


You will want to change the line

a: 2582977 2416703 4.4BSD 1024 8192 16 # (Cyl. 2397*- 4959)

to

a: 2582977 2416703 RAID 1024 8192 16 # (Cyl. 2397*- 4959)

save that file and exit (:wq)

**Note we use FSTYPE RAID so that we can make the raid device eligible for root**

Make a copy of that file; it will be used to configure the second disk drive in the system:

cp /root/disklabel.wd0 /root/disklabel.wd1

Open disklabel.wd1 in your favorite text editor (vi) and change the line

# /dev/rwd0c:

to

# /dev/rwd1c:

We do this for reference purposes.
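If you prefer not to edit the files by hand, the same changes can be made with sed; this is only a sketch, and it assumes the a: line is the only partition line you want to retype as RAID:

disklabel wd0 | sed '/^[[:space:]]*a:/s/4\.4BSD/RAID/' > /root/disklabel.wd0
sed 's,/dev/rwd0c,/dev/rwd1c,' /root/disklabel.wd0 > /root/disklabel.wd1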

Now you will want to apply the disklabel changes to the disk drives.

This is done by executing the following commands:

disklabel -R -r wd0 /root/disklabel.wd0
disklabel -R -r wd1 /root/disklabel.wd1

Verify that the changes have taken place by executing the following commands

disklabel wd0

then

disklabel wd1

You should see that the changes have taken place, and if all has gone well it's time to configure the first RAID set. For this we will create a file, raid0.conf.new, in /root. The file should contain something like this:

START array
# numRow numCol numSpare
1 2 0

START disks
/dev/wd0a
/dev/wd1a

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
32 1 1 1

START queue
fifo 100

If you are confused about the values in this file, refer to the raidctl man page for explanations. Briefly, this tells RAIDframe to start an array with 1 row, 2 columns and 0 spares, built from the disks wd0a and wd1a (the partitions for the root RAID set).

Once this file is created we want to initialize and configure the raid0 set which is done in the following order:

raidctl -C /root/raid0.conf.new raid0 (this should complete with no serious failures)
raidctl -I 100 raid0 (here we are adding component labels to the raid0 device)
raidctl -iv raid0 (initialize with parity)
disklabel -E raid0 (partition the raid0 device; I just did " > a a " at the disklabel prompt and selected all the default values)
newfs raid0a (create the filesystem on the raid device)

Everything should have worked up to this point; if it has not, you may want to retrace your steps and make sure you have done everything as described. If all has gone well, mount the device and tar the directories on / over to it.

mount /dev/raid0a /mnt
cd /mnt

**Note that when tarballing we use the p argument to keep file permissions**
tar cpzvvf /mnt/dir.tgz /dir (where dir is the directory you want to tarball, e.g. /etc)
Once that completes:
tar zxpvf /mnt/dir.tgz (run from /mnt, this untars the directory contents onto the raid0 device)

Do this for all directories under /, with the exception of mnt. For mnt, just create the directory on the raid0 device:

> cd /mnt
> mkdir mnt
and make sure that the permissions match those of /mnt
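A rough sketch of the whole copy, assuming OpenBSD's tar (which strips the leading '/' on extraction) and that /dev/raid0a is mounted on /mnt; adjust the directory list to whatever actually lives in your /, and consider skipping separately mounted trees such as /boot and /usr/src:

cd /mnt
for dir in bin dev etc home root sbin stand tmp usr var; do
    tar cpf - /$dir | tar xpf -   # archive each tree from / and unpack it here, preserving permissions
done
mkdir mnt                         # recreate the mount point itself
chmod 755 mnt                     # match the permissions of /mnt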

Once you have completed tarballing over / you will want to execute the following command:

raidctl -A root raid0

this makes raid0 eligible for a root filesystem.

Before rebooting, we want to adjust /mnt/etc/fstab to reflect the changes we have made. Here is what I did:

cp /mnt/etc/fstab /mnt/etc/fstab.old
vi /mnt/etc/fstab
Inside vi I changed the device entry for the partition that holds /. You may notice that there are two entries for /; we only need one. The duplicate entries are a result of the installer wanting / to be on partition a of the hard drive.

so here is what I have:

/dev/raid0a / ffs rw 1 1
/dev/wd0f /usr/src ffs rw 1 2
/dev/wd0g /boot ffs rw 1 2

save the changes (:wq).
Before you reboot, copy your /mnt/root/raid0.conf.new to /mnt/etc/raid0.conf so that the system will automatically configure the RAID set for you; this keeps you from having to add entries in your rc.conf to enable the raid0 device at boot time. Now reboot and hope for the best. Remember how to boot your machine? (boot wd0g:/bsd)

If all went well you should find your way to a login prompt. Once logged in, do a df to see what devices are mounted; you should see that /dev/raid0a is mounted on /. So there you have it: root is now on a mirrored set.
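In commands, that last step looks something like this (assuming the file names and mount points used above):

cp /mnt/root/raid0.conf.new /mnt/etc/raid0.conf   # so the raid set is configured for you at boot
reboot
boot wd0g:/bsd                                    # at the boot> prompt
df                                                # after logging back in: /dev/raid0a should be on /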

Now, what's the point of having root mirrored if swap isn't? If the disk that currently holds the swap partition fails, you lose your swap contents and may find yourself in a heap of trouble. My solution? Make a RAID set for swap!

This is done in a similar manner as your raid0 setup.

Create the file /root/raid1.conf.new, open it in your favorite text editor (vi), and add entries like the following:
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/wd0d
/dev/wd1d

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
32 1 1 1

START queue
fifo 100

Now go through the same process of initializing and configuring the raid device; note that the disklabel step differs slightly:

raidctl -C /root/raid1.conf.new raid1
raidctl -I 200 raid1 (we have to use a different component serial number, 200, for each raid device)
raidctl -iv raid1
disklabel -E raid1 (add partition b with " > a b ", then choose swap as the filesystem type)
newfs raid1
raidctl -A yes raid1

Edit /etc/fstab to reflect the changes.

**Note that my /etc/fstab did not have an entry for swap so I created one**

/dev/raid1b none swap sw 0 0

save your changes (:wq)

so my /etc/fstab looks like this:

/dev/raid0a / ffs rw 1 1
/dev/raid1b none swap sw 0 0
/dev/wd0f /usr/src ffs rw 1 2
/dev/wd0g /boot ffs rw 1 2


Copy your raid1.conf.new file to /etc (cp /root/raid1.conf.new /etc/raid1.conf)

If you did not already set the raid1 device to be auto-configurable above, execute the following command:

raidctl -A yes raid1

reboot

and when you get to the boot prompt enter " boot wd0g:/bsd "

and you should find your way to a login prompt.

Once logged in, you can check that the swap device is actually being used with the following command:

swapctl -l

And there you have it. You have lost the 128 MB of disk space that the original (non-RAID) swap partition occupied; you could always reuse that space to hold some data if you want. Remember that you can only have 4 RAID devices on the machine.


I personally wanted to make sure that the RAID device was working properly, and the only way to do this is to copy a file over to the device, fail one of the drives, and then reconstruct the failed drive. This is done very easily with the following steps.


This will show which drives are in the raid set, their condition, and whether you have any spares:
raidctl -s raid2

Components:
/dev/wd0d: optimal
/dev/wd1d: optimal
No spares.
Parity status: clean
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.

This will manually fail your device (don't worry, it's not damaging to the hard disk):
raidctl -f /dev/wd0d raid2


Now to make sure that it failed

raidctl -s raid2
Components:
/dev/wd0d: failed
/dev/wd1d: optimal
No spares.
Parity status: clean
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.

Now to reconstruct the drive
raidctl -R /dev/wd0d raid2

Now to display the reconstruction
serv100# raidctl -s raid2
Components:
/dev/wd0d: reconstructing
/dev/wd1d: optimal
No spares.
Parity status: clean
Reconstruction is 1% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.
Reconstruction status:
99% |***************************************| ETA: 00:28 |

Now to prove that all is well
serv100# raidctl -s raid2
Components:
/dev/wd0d: optimal
/dev/wd1d: optimal
No spares.
Parity status: clean
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.
