Update: Hardware vs Software RAID: The great debate.


Photo by wwward0

Hi Everyone,

If you recall our previous article, Hardware raid or software raid? Our take on the great religious debate, we made the case that hardware raid served virtually no purpose and that software raid was better in every important way. While we still find most of the information in that article to be true, we are always seeking to learn more and challenge ourselves, and so we continued to investigate the situation. As it turns out, there are important use cases where hardware raid is the preferable option.

First, there are operating systems where software raid either does not function or does not function as well as Linux mdadm. Windows has a form of software raid, but it is generally not recommended and is difficult to boot from. VMware doesn't support software raid at all, to my knowledge, so it's not an option there. XenServer and Xen Cloud Platform (not to be confused with running the Xen kernel under CentOS) have also removed Linux mdadm support, even though those platforms are based on CentOS (which is one among many frustrations I have with XenServer). On any of these operating systems, "true software raid" (Linux mdadm) is not an option, which leaves you with "fakeraid" or real hardware raid. In these situations, a quality hardware raid card is obviously far preferable.

Secondly, there are situations where hardware raid will post higher benchmark scores than software raid. In particular, the Linux "dd test", which writes sequential data to disk, can show artificially high results if you use a hardware raid card with a battery-backed cache. This is because the raid card can acknowledge a write before the data has actually been written to the disks, fooling the operating system into believing the drives are working faster than they really are. Although this does provide *some* real-world performance improvement, the biggest benefit shows up in the artificial "dd test" score, doubling or better the apparent speed on this one test. More extensive testing with bonnie++, which better simulates real workloads, shows that hardware raid is actually about the same speed as Linux mdadm, while using more CPU. This is especially interesting to me, because one of the well-known "facts" about hardware raid is that it uses less CPU than software raid, but in our testing the opposite was true.
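
For reference, here is roughly what these two benchmarks look like. The file sizes below are reduced for illustration (real tests typically write 1GB or more), and the bonnie++ invocation assumes you have it installed and a scratch directory to point it at:

```shell
# The "dd test" as commonly run: one large sequential write.
# conv=fdatasync forces a flush before dd reports its speed, so a
# battery-backed write cache can't absorb the whole file and inflate
# the score quite as badly. Without it, you may be measuring cache,
# not disks.
TESTFILE=$(mktemp /tmp/ddtest.XXXXXX)
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
rm -f "$TESTFILE"

# A more representative benchmark is bonnie++, which also exercises
# seeks and per-file operations rather than just sequential writes:
#   bonnie++ -d /mnt/test -s 8g -n 0 -u nobody
# (-d: test directory, -s: test file size, -n 0: skip the small-file
#  creation phase, -u: user to run as when started as root)
```

Note that even with conv=fdatasync, a single sequential write says nothing about the random i/o that real server workloads generate, which is exactly the gap the bonnie++ results exposed.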

In most cases, then, real-world performance is about the same with hardware raid and software raid, but on this particular dd test, hardware raid posts much better results. Why is this important? Well, it would seem that a large number of VPS customers use this "dd test" to gauge how fast the server running their VPS is. This stems from the past, when it was common to see abysmal disk i/o performance on VPS servers, and one way to prove the host was overselling was a "dd score" that was abysmally low (like 10MB/s). Now, with a hardware raid card, you can see scores in excess of 1000MB/s, and many hosts are compared based on whether these scores are 200 or 400 or 1000 MB/s. What used to be a useful way to see if a box was oversold has turned into a "whose dick is bigger" contest, since scores above 100MB/s are meaningless for ascertaining real-world performance in a server environment. This is because most server applications depend on random i/o performance, not the sequential throughput that dd measures, and also because this test is so easily manipulated.

Despite the flaws in "dd testing", the practice doesn't appear to be going away any time soon. As a result, people choose a VPS host based on these scores, and in turn, VPS hosts choose their dedicated servers based on what kind of scores they can get. Ultimately, although hardware raid offers no real-world performance advantage, there is a significant marketing advantage in offering it, which can make it worth the money even though it provides little or no tangible benefit. In our original article, we overlooked this, believing that real-world performance was what mattered. In fact, if you can't sell a VPS without hardware raid and you can sell one with it, it doesn't really matter which of the two is better, because the one that gets sold is the one with hardware raid.

Finally, the high cost of hardware raid is somewhat easier to swallow than we previously calculated, because hardware raid cards depreciate much more slowly than other server hardware. A good hardware raid card with BBU may cost $600 (easily increasing the cost of a 32GB RAM, 4-drive, single-processor VPS node by 50%), but that $600 card will retain nearly all of its value until it becomes completely obsolete in about 5 years. Servers, on the other hand, steadily decline in value, losing it all in 3-5 years and holding their "full" value for less than a year after purchase. From the point of view of a dedicated hosting provider, then, hardware raid cards are not as bad an investment as they would normally seem, and because of this slower depreciation, they can be profitably offered at a lower price than you would charge for other upgrades of equal cost.
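
As a back-of-envelope sketch of why this works out, here are the figures above run through a simple straight-line depreciation, which is our own simplifying assumption (the $1200 node cost is inferred from the card adding roughly 50%):

```shell
# Rough yearly-cost comparison using the article's figures.
# Straight-line depreciation is a simplification; in practice the
# raid card holds most of its resale value until near end-of-life,
# which makes it look even better than this.
CARD_COST=600     # hardware raid card with BBU
CARD_LIFE=5       # years until completely obsolete
NODE_COST=1200    # rest of the node (the card adds ~50% to this)
NODE_LIFE=4       # loses all value in 3-5 years

echo "Raid card cost per year: \$$((CARD_COST / CARD_LIFE))"
echo "Server cost per year:    \$$((NODE_COST / NODE_LIFE))"
```

On these numbers, the card costs $120 per year against $300 per year for the server it sits in, despite being half the server's sticker price.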

In conclusion, we feel it necessary to amend our previous article with the following conclusions:

1) If you are using one of the common operating systems that don't support Linux mdadm, you should use hardware raid to protect your data and provide adequate performance.

2) If you are a VPS host concerned with the marketing impact of dd test scores, hardware raid can be a good investment, allowing you to advertise higher benchmark scores.

3) Due to a longer useful life than other server hardware, hardware raid cards are somewhat less expensive than their sticker price would lead you to believe.

At IOFlood, a big part of what we believe in is continuous learning and improvement. In many cases, this means re-examining our assumptions and opinions, no matter how sure of ourselves we originally were. Although most of the information in our previous article still holds true, there are a variety of use cases where hardware raid is an appropriate option. Because of this, we do intend to start offering hardware raid as an option on our servers.