The_Unbeliever
Hey
Just want to share my experience with software RAID under Linux.
Got a server back from overseas; damagement decided to scrap it as it was giving them too many hassles with Windows 7 and the hard drives.
The add-on Intel RAID card packed up, so the installers reinstalled Windows 7 on one hard drive and spanned the other four hard drives.
Of course this is problematic, given that any one HDD failure on the spanned set will lead to loss of the entire span. And this happened regularly.
So I got my grubby little fingers on it and tried Zentyal (eBox) first.
Zentyal.
But I wasn't happy-chappy with the Zentyal installation routine. It didn't detect that there was more than one HDD available, and thus installed to a single HDD. So I scrapped it off my list, as I need something that a n00b can install onsite if need be.
Next up was SME Server.
SME Server.
Installed it, and it picked up the 5 HDDs straight away. During installation one of the HDDs developed a problem, and I had to redo the installation. (I used RAID5.)
Synchronization was slow, so I had a shufty at the motherboard SATA controller settings and found that these were set to 'compatibility'. Switched them over to 'native' and 'AHCI', and this improved the synchronization transfer rate greatly.
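If the resync still crawls after that, the md throttle values under /proc/sys/dev/raid are worth a look too - the figure below is just an example, tune it to whatever the disks can take:
Code:
# current resync throttle, in KB/s per device
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
# temporarily raise the floor so the resync gets more bandwidth
echo 50000 > /proc/sys/dev/raid/speed_limit_min
The change only lives until the next reboot, so there's no harm in experimenting.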
Of course I messed around with the HDDs just to make sure that SATA0 = sda etc., and in the process messed up everything.
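Strictly speaking the naming shouldn't matter, as md assembles the array from the UUIDs in the member superblocks rather than from device names, but if you want to confirm which port ended up as which device, something along these lines shows it (assuming a udev-based distro):
Code:
# physical SATA port to kernel device name mapping
ls -l /dev/disk/by-path/
# what the md superblock on a member partition says about its slot in the array
mdadm --examine /dev/sda2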
After a reinstall everything is up and running sweetly, the sync rate varies between 200K/sec and 10,000K/sec, and things are looking good. This time I opted for RAID6 instead of RAID5.
Code:
Every 1.0s: cat /proc/mdstat                            Tue Feb 1 15:40:04 2011

Personalities : [raid1] [raid6]
md2 : active raid6 sda2[0] sde2[4] sdd2[3] sdc2[2] sdb2[1]
      2929966080 blocks level 6, 256k chunk, algorithm 2 [5/5] [UUUUU]
      [>....................]  resync =  4.1% (40834304/976655360) finish=3025.0min speed=5153K/sec

md1 : active raid1 sda1[0] sde1[4] sdd1[3] sdc1[2] sdb1[1]
      104320 blocks [5/5] [UUUUU]

unused devices: <none>
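For anyone wanting to roll their own rather than letting the installer do it, an array like md2 above would be put together with something along these lines - the device names and config file path are only for illustration, adjust for your own partitions and distro:
Code:
# create a 5-disk RAID6 with a 256k chunk size
mdadm --create /dev/md2 --level=6 --raid-devices=5 --chunk=256 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2
# keep an eye on the initial sync
watch -n 1 cat /proc/mdstat
# record the array so it assembles at boot (on some distros the file is /etc/mdadm/mdadm.conf)
mdadm --detail --scan >> /etc/mdadm.conf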
The main difference between software RAID and a proper RAID controller is that software RAID needs a bit more CPU, as the proper RAID controller's onboard processor handles the RAID array and other overheads itself.
Proper RAID controllers do cost a bit of money though.
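If you're curious how much the parity maths actually costs the host, the kernel benchmarks its RAID6 routines when the module loads, so the figures are sitting in the log (output obviously differs per machine):
Code:
# the raid6 module logs its algorithm benchmark at load time
dmesg | grep -i raid6
# the array's kernel threads (md2_raid6, md2_resync, etc.) show up in the process list
ps -eo pid,comm | grep -E 'md[0-9]'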
Personally I would prefer to use a proper RAID controller and a hot spare should a server go to site - purely for the reason that a n00b admin can simply power the server down, remove the faulty HDD, plug in the new one, and be up and away without any issues.
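With mdadm the swap isn't quite that plug-and-go, but it's only a couple of commands on top of the physical swap - device names here are purely illustrative, assuming sdc is the dud:
Code:
# mark the failed disk and pull it out of the array
mdadm /dev/md2 --fail /dev/sdc2 --remove /dev/sdc2
# after fitting and partitioning the replacement, add it back in
mdadm /dev/md2 --add /dev/sdc2
# the rebuild then shows up in /proc/mdstat as usual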
In most cases, a simple mirrored setup between two hard drives will be more than enough though.
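For that simple mirror, something like this should do the trick, optionally with a third disk sitting as a hot spare (again, the layout is just an example):
Code:
# two-disk mirror with one hot spare that takes over automatically on a failure
mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
      /dev/sda1 /dev/sdb1 /dev/sdc1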
So who's done software RAID5 or 6 so far? And is it holding up?
Regards
Ook