I thought I would document this for my own reference and for anyone else who might benefit. I recently upgraded my main workstation, replacing my 1.2TB LVM made up of five 250GB drives with a 2TB RAID array of five 500GB drives. The additional challenges were that I would have to transfer the data off of the old LVM onto the new RAID, and I would have to do it over the network, as neither motherboard has ten SATA connections. To further complicate things, I wanted to use one of the current drives of the LVM as my new system drive. Here is the process.

After setting up the hardware on both systems, I booted the new RAID system with a Fedora Core 7 DVD and entered rescue mode by typing linux rescue at the ISOLINUX prompt. Normally, I would set up the RAID during the normal Fedora installation. However, I wanted to use one of the LVM members for the new installation, and I also needed to pull data off of it first.

Once in rescue mode, the first thing to do was to partition each RAID member with one big Linux RAID Autodetect (type fd) partition. With that done, I could create a level 5 RAID with this command:

mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[abcde]1
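The partitioning step above can be scripted rather than done by hand in fdisk. This is a hedged sketch using sfdisk, assuming each member drive should receive a single partition spanning the whole disk with type fd; the device names are examples matching my layout:

```shell
# Sketch only: give each RAID member one full-disk partition of type
# fd (Linux RAID autodetect). This is destructive -- double-check the
# device names before running anything like it.
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    echo ',,fd' | sfdisk "$disk"
done
```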

When I ran this command the first time, mdadm told me that /dev/sda1 was too small to be part of the array. For some reason, the kernel had detected the partition table changes on the other drives, but not on the first one. I rebooted the system, as partprobe is not available in rescue mode, and then the command worked like a charm.

The next step was to put a filesystem on the new RAID device. I used XFS because the files that would live on this filesystem were very large, from about 100MB to several gigabytes. I formatted /dev/md0 with this command, which is also how you would format any other drive with XFS:

mkfs.xfs /dev/md0

Next, I mounted the RAID, set up an NFS export on the old system, and mirrored about a terabyte of data. This took about 16 hours over gigabit ethernet.
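The mount, export, and copy steps might look something like the following sketch. The hostname oldbox, the mount points, and the exported path are assumptions for illustration, not details from my actual setup:

```shell
# On the new system: mount the freshly formatted RAID
# (mount point is an example).
mkdir -p /mnt/raid
mount -t xfs /dev/md0 /mnt/raid

# On the old system: export the LVM's mount point over NFS.
# "newbox" and /home are hypothetical names.
echo '/home newbox(ro,no_root_squash)' >> /etc/exports
exportfs -ra

# Back on the new system: mount the export and mirror the data.
mkdir -p /mnt/old
mount -t nfs oldbox:/home /mnt/old
rsync -a /mnt/old/ /mnt/raid/
```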

The next day, after the copy had completed, I installed Fedora Core 7 on one of the old members of the LVM. After installing, I would configure the RAID as my /home, but first I had to assemble it. You could use this same method to assemble a RAID moved from another system.

Before I could assemble it, I needed the RAID's UUID. This is easy to get. All you need to know is the device file of one of the RAID members, the first of which on my system is /dev/sdb1, and you can ask mdadm:

mdadm --examine /dev/sdb1

In the cascade of output, you will see the UUID presented as a string of hexadecimal characters. Now, we run this command to assemble the RAID device:

mdadm --assemble /dev/md0 --uuid=PUT_UUID_HERE /dev/sd[bcdef]1
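If you would rather not copy the UUID by hand, the two steps can be chained. This is a sketch, not from the original post; note that the field position in awk depends on the metadata version's --examine output format ("UUID : ..." for 0.90 superblocks), so check yours first:

```shell
# Pull the UUID value out of the --examine report, then assemble.
# Assumes a "UUID : <value>" line, as printed for 0.90 metadata.
UUID=$(mdadm --examine /dev/sdb1 | awk '/UUID/ {print $3}')
mdadm --assemble /dev/md0 --uuid="$UUID" /dev/sd[bcdef]1
```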

The RAID device is now ready to mount. I also want it to be assembled at every boot. I do this by creating /etc/mdadm.conf and inserting this:

DEVICE /dev/sd[bcdef]1

ARRAY /dev/md0 UUID=PUT_UUID_HERE

MAILADDR root@localhost
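Rather than typing the UUID into the ARRAY line by hand, mdadm can generate it for you. This shortcut is my suggestion, not something from the steps above:

```shell
# Have mdadm emit a ready-made "ARRAY /dev/md0 ... UUID=..." line
# for every assembled array and append it to the config.
echo 'DEVICE /dev/sd[bcdef]1' >> /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
echo 'MAILADDR root@localhost' >> /etc/mdadm.conf
```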

That's it. Remember to update your /etc/fstab if you want your RAID to be mounted at every boot.
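For completeness, such an fstab entry might look like this. The /home mount point matches what I described above; the options are typical XFS defaults, not something quoted from my actual fstab:

```
/dev/md0    /home    xfs    defaults    1 2
```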

Posted by Tyler Lesmann on September 11, 2007 at 8:33
Tagged as: linux raid