How RAID can give your hard drives SSD-like performance

#4
The most important thing this article doesn't seem to cover is access times, where RAIDed mechanical drives will not come close to SSDs. That is the single biggest thing that makes SSDs feel so much faster, so unless you're only after higher transfer speeds, RAID isn't going to make your computer feel much snappier.
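A rough back-of-envelope way to see why (a quick Python sketch with assumed figures - roughly 8 ms per random 4K read on a 7,200 RPM drive versus about 0.1 ms on a SATA SSD; exact numbers vary by drive):

    # Assumed latencies: striping across a RAID array spreads requests over
    # more spindles but does not shrink the per-request seek time.
    hdd_access_ms = 8.0    # assumption: average seek + rotational latency, 7,200 RPM
    ssd_access_ms = 0.1    # assumption: typical SATA SSD random-read latency
    requests = 10_000      # e.g. small scattered reads during an application launch

    print(f"HDD: {hdd_access_ms * requests / 1000:.0f} s if serviced serially")
    print(f"SSD: {ssd_access_ms * requests / 1000:.0f} s if serviced serially")
    # Roughly 80 s versus 1 s for the same serial random workload, which is
    # why access time, not transfer speed, dominates how snappy a system feels.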
 

Jack Marlow

Honorary Master
#6
How RAID can give your hard drives SSD-like performance

No it can't... but it can certainly improve performance and/or offer redundancy :)
If you can afford a decent RAID array, you can afford to use SSDs with a more basic RAID array (better, IMO).

IMO RAID's most significant contribution is the redundancy aspect.
Now 2x Optanes in a RAID 0 would be a speed win of note :)
 
#9
RAID can certainly hike up read and write speeds, but it lacks every other benefit of even a single cheap NVMe drive (a rough way to measure the 4K difference is sketched after the list), namely:

  • Low-latency access
  • Lower power
  • Higher random write performance
  • Higher 4K read/write performance
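A minimal sketch of the kind of 4K random-read test where that gap shows up, assuming a large existing test file on the drive in question (the path is a placeholder; note that without O_DIRECT the OS page cache will flatter whatever device you point it at):

    import os, random, time

    PATH = "/path/to/large_testfile"   # placeholder: file on the device under test
    BLOCK = 4096                       # 4K reads
    COUNT = 2000

    size = os.path.getsize(PATH)
    fd = os.open(PATH, os.O_RDONLY)    # no O_DIRECT here, so cached data will skew results
    offsets = [random.randrange(0, size - BLOCK) // BLOCK * BLOCK for _ in range(COUNT)]

    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off)       # one small read at a random offset
    elapsed = time.perf_counter() - start
    os.close(fd)

    print(f"{COUNT} random 4K reads: {elapsed:.2f} s total, "
          f"{elapsed / COUNT * 1000:.2f} ms average per read")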


I wouldn't trust a RAID array on one of those Vantec cards anyway. All they're good for is giving you extra SATA ports, nothing more. Either you're using the right stuff from Intel or LSI, or you're doing software RAID.

Tell me when the price of SSDs comes down. How about SSDs in RAID?
You should check out the price of Mushkin drives on Wootware...
 

AndrewAlston

Group Head of IP Strategy – Liquid Telecommunicati
#10
So, as someone who uses both RAID and SSDs (and NVMe M.2), let me make some observations:

Firstly, if you want truly good RAID performance, don't bother with a low-end RAID controller. On the other hand, the high-end controllers don't make sense unless you are using some serious disk space, because the impact on the cost per gig is substantial.
Secondly, in terms of random reads, in the tests I've done you can get close. On transfer speeds, I actually find my RAID 6 arrays will outrun a single SSD by a long way.
Thirdly, on write performance: if you want truly good write performance, see the first point about high-end controllers.

I run three separate RAID arrays in my home systems: two on 9361-8i controllers and one on a 9361-16i (LSI 12 Gb/s controllers). Two of those arrays run 8 x HGST 6 TB 7,200 RPM disks, and one runs 10 x HGST 10 TB 7,200 RPM disks.

I can sustain 900+ MB/s read, and very close to that in write, on that setup copying from one array to another.

A few notes about those controllers for anyone who wants to go this route. Firstly, battery backup is a must, and the battery backup units for these controllers aren't exactly cheap (roughly $115); add another $500-odd for the controller itself (for the 8i; if you want the 16i, which can handle 16 drives, look at around $900).

Secondly, on both the 8i and 16i controllers there is no easy way to fit a decent cooler, so be prepared to side-mount a fan in your case and blow air directly onto the card constantly. This is especially true of the 8i: it *will* overheat under load without that additional cooling. These things run *HOT* (80°C+) if you don't use the extra fan.

But seriously, throw a 12 Gb/s controller in there with 8 x 7,200 RPM disks and, in sustained transfers, you're going to very quickly outrun SSDs (which, by the way, normally sit on a 6 Gb/s bus). The cost of doing this properly, though, means you either do it with an insane amount of disk space or it will push your cost per gig through the roof.
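For anyone wondering how that adds up, a quick sanity check with assumed per-drive figures (around 160 MB/s sustained sequential for a 7,200 RPM drive of that class; real results depend on stripe size and controller cache):

    per_drive_mb_s = 160      # assumption: sustained sequential throughput per 7,200 RPM drive
    drives = 8
    parity = 2                # RAID 6 carries two drives' worth of parity per stripe

    raid6_seq_mb_s = per_drive_mb_s * (drives - parity)   # ~6 data chunks per stripe
    sata_port_mb_s = 6e9 * 0.8 / 8 / 1e6                  # 6 Gb/s SATA after 8b/10b encoding

    print(f"RAID 6 sequential estimate:   ~{raid6_seq_mb_s:.0f} MB/s")
    print(f"Single SATA SSD port ceiling: ~{sata_port_mb_s:.0f} MB/s")
    # Roughly 960 MB/s versus 600 MB/s, which lines up with the 900+ MB/s
    # raid-to-raid copies mentioned above.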
 
#11
I wouldn't trust a RAID array on one of those Vantec cards anyway. All they're good for is giving you extra SATA ports, nothing more. Either you're using the right stuff from Intel or LSI, or you're doing software RAID.
This x 1000.

Those cheap cards are just going to give you a huge butt cramp down the line (speaking from experience). I ended up getting a used IBM M5012 (LSI) card off eBay, which is great value.
 
#13
[XC] Oj101;21406187 said:
Compare something other than sequential read/write and the SSD is in a COMPLETELY different league. Fail.
Yep, not even sure why they would mention only those speeds; probably because that's the only thing that came close to SSD performance. Those speeds are a given with most RAID implementations, but that's not what matters.
You will never get the performance of an SSD from any mechanical hard drive RAID implementation, no matter how much money you throw at it. Spend a million on an enterprise SAN with 8 x 16 Gb FC connectivity to the server and RAID 10 over 48 x 10K RPM 12 Gb SAS drives, and watch a single SSD crush it in every performance metric that matters in the real world.
 
#14
AndrewAlston said: So, as someone who uses both RAID and SSDs (and NVMe M.2), let me make some observations...
So I take it the RAID arrays are in the same server then, and if not, you have a 10 Gb network at home between the systems?
 

AndrewAlston

Group Head of IP Strategy – Liquid Telecommunicati
#15
So I take it the RAID arrays are in the same server then, and if not, you have a 10 Gb network at home between the systems?
Dual 10G single-mode fiber pairs into each system (Intel dual 10G NICs) - one "internet"-facing (the general LAN NIC) and one dedicated to the storage LAN. The storage LAN just runs into a separate access VLAN on the switch - nothing really special there.
 

AndrewAlston

Group Head of IP Strategy – Liquid Telecommunicati
#16
You will never get the performance of an SSD from any mechanical hard drive RAID implementation, no matter how much money you throw at it...
Simply not true. There are many applications that require sustained sequential reads and sustained sequential writes - real-world applications - and a 12 Gb/s RAID controller with 8 x 7,200 RPM disks behind it will crush a single SSD stone dead in that application. Why? Because the single SSD is going to be on a 6 Gb/s SATA port, and you're going to bottleneck on the port. This obviously doesn't apply to NVMe (M.2) drives, which don't have the same 6 Gb/s limitation, but even there, on sequential read/write applications, the high-end RAID is going to get pretty damn close.
 
#17
Dual 10G single-mode fiber pairs into each system (Intel dual 10G NICs)...
Nice, so at the moment the network is almost a bottleneck then; I wonder if it would go faster on a 40 Gb network.
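Rough line-rate arithmetic behind that (ignoring everything beyond an assumed few percent of protocol overhead):

    def usable_mb_s(gbit, efficiency=0.94):   # assumption: ~6% Ethernet/IP/TCP overhead
        return gbit * 1e9 * efficiency / 8 / 1e6

    for gbit in (10, 40):
        print(f"{gbit} GbE: ~{usable_mb_s(gbit):.0f} MB/s usable")
    # 10 GbE gives roughly 1,150-1,200 MB/s of payload, so a 900+ MB/s raid-to-raid
    # copy already sits close to the wire; 40 GbE (or bonding the dual 10G ports)
    # would push the bottleneck back onto the arrays themselves.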
 
#18
Simply not true... a 12 Gb/s RAID controller with 8 x 7,200 RPM disks behind it will crush a single SSD stone dead in that application, because the single SSD is going to be on a 6 Gb/s SATA port...
Not sure why it would be on a 6 Gb SATA port, or are you only referring to cheaper SSDs for home use, where your PC will have 6 Gb SATA ports? Our EMC SAN (NetApp flash SAN as well) has a 12 Gb backplane, and its 47 x 1.6 TB SSDs are 12 Gb SAS. Then again, those enterprise SSDs cost R30,000 each, so it does get a bit expensive.
There are many SSDs with 12 Gb SAS ports, and 12 Gb-capable RAID cards to plug them into, but I suppose they are not for home use.

We have nine SANs, ranging from entry level (80 x 1 TB 7,200 RPM NL-SAS disks in RAID 10) to all-flash RAID 5 arrays costing millions, all on FC from 8 Gb to 16 Gb. Read and write speeds are a given on all of them, with not much difference at all, but the SANs with only 7,200 RPM disks are no longer even good enough for backups, so we're replacing them with 10K SAS disks to speed things up, although I would go SSD for that as well, since the backups are IO-intensive and even the 10K disks will be the bottleneck.

Oh, and the testing we ran on one SSD was against a 900 GB internal SSD in a Dell server, also 12 Gb SAS; then again, that's enterprise equipment that no home user will buy.
 

AndrewAlston

Group Head of IP Strategy – Liquid Telecommunicati
#19
Figured this would be of interest...

These are tests I just ran on one of my machines:

First test - the Samsung 512 GB M.2 NVMe drive:

[Attachment: disk-1.jpg]

Note: it's a very consistent result, and the random access speeds are *extremely* fast - though again, that drive isn't limited by 6 Gb SATA speeds.

Second result - this was done against an 8 x 6 TB HGST-based array on an Avago 9361-16i RAID controller running RAID 6:

[Attachment: disk-2.jpg]

Random access times are way, way up here, but on sequential reads I'm still outrunning any standard SSD, because I'm sitting at a 7.4 Gb/s average there - higher than the port speed available to a SATA SSD (and actually bursting higher than the M.2 drive).

Third result - this was done against another array on the same controller, this time on 8 x 10 TB HGST 7,200 RPM drives. These drives are newer and have MUCH more built-in cache:

[Attachment: disk-3.jpg]

You'll notice the random access times are slightly better than in the second test, at 7.7 ms, but the average speed is pretty consistent with the previous test.
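For reference, converting the benchmark's gigabit average into the MB/s units used earlier in the thread:

    def gbit_to_mb_s(gbit):
        return gbit * 1e9 / 8 / 1e6   # decimal megabytes per second

    print(f"7.4 Gb/s sequential average = ~{gbit_to_mb_s(7.4):.0f} MB/s")
    # About 925 MB/s, consistent with the 900+ MB/s raid-to-raid copies quoted
    # earlier, while the ~7.7 ms random access time remains orders of magnitude
    # slower than an NVMe drive's typical sub-millisecond latency.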
 
#20
AndrewAlston said: Figured this would be of interest...
For a setup at home that is quite impressive!
 