View Full Version : RAID 3 VS RAID 5 - need help

08-07-11, 12:51 AM
I had a 1TB drive fail and I lost a bunch of data. My wife, tired of seeing me heartbroken over all I lost, told me to fix it to the tune of a large, redundant storage solution.

I, in my infinite wisdom, decided that a RAID 5 array with 5 x 2TB disks was the way to go. So I went on Newegg and purchased a RAID 0/1/3/5/10 controller card with 5 SATA ports and one eSATA port. It is supposedly hardware RAID. I got 5 x 2TB Samsung disks for an 8TB array.

Originally, I set the disks up as RAID 5. It formatted out in Win 7 as a 7.27TB disk. I started copying the data I need to back up over to the array. It started out fine.....then went south quick. Initial throughput was 120+ MB/s, but it dropped pretty quickly down to around 5-7 MB/s. Also, my computer seemed to crawl and would be unresponsive for periods of several minutes. Of note, I was copying MP3s to the array - about 700GB worth. I then tried a single 8GB ISO, which also dropped quickly to 5 MB/s.

Seeing that the RAID 5 was sucking (unresponsive computers scare me), I decided to try the RAID 3 setup. I reconfigured my card for RAID 3 and booted back into Windows. The drive formatted back to 7.27TB. I started copying the same MP3s to the RAID 3 array; it started around 60 MB/s and dropped to maybe 55 MB/s. It stayed at 55 MB/s and all the data finished copying. The computer did not become unresponsive, though it did seem sluggish when looking through my other drives (not on the same card - MoBo-connected drives). Other copies to the drive have yielded 80-90 MB/s sustained.

Is this typical behavior for RAID 5 arrays? I thought the RAID 5 would be a lot faster than that.

SAMSUNG EcoGreen F4 HD204UI 2TB 32MB Cache SATA 3.0Gb/s 5400RPM (http://www.newegg.com/Product/Product.aspx?Item=N82E16822152245)

SYBA SY-PEX40016 PCI Express SATA II (3.0Gb/s) Controller Card - JMicron JMB36X (http://www.newegg.com/Product/Product.aspx?Item=N82E16816124032)

Thanks for taking the time to read this (if you did) and any help/advice is welcomed.

08-07-11, 01:02 AM
Sounds like a controller issue, even more so with the crappy controller card you are using. A nice RAID card will cost you at least $400.

08-07-11, 08:21 AM
First off, I'm going to say I'm NOT a RAID expert. However, I have dealt with RAID at work quite a bit on some machines I'm responsible for (some server class machines...). I do have some thoughts based on what was said.

RAID 5 write performance can be pretty bad. In particular, write performance can take a nosedive when writing several [relatively] small files to the setup. This is due to how RAID 5 works: parity information is computed and both data and parity are striped across all the drives as data is written. In fact, RAID 5 has a lot in common with RAID 0 when it comes to striping. From a performance standpoint, one big difference between the two is that RAID 5 writes parity info to the drives while RAID 0 doesn't.

One particular problem with RAID 5 is that if the file being written is smaller than the stripe currently being written, you take a performance hit. It should be noted that read performance of RAID 5 is pretty good - close to RAID 0 if I recall correctly, which shouldn't be surprising since they are closely related.
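The small-write penalty described above comes down to how XOR parity is maintained. A rough sketch in Python (a simplified model of the arithmetic, not any controller's actual firmware): a full-stripe write computes parity from data already in hand, but updating a single block forces the controller to read the old data and old parity first - two extra I/Os per small write.

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks byte by byte (how RAID parity is computed)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe on a 5-disk RAID 5: four data blocks plus their XOR parity.
d = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f", b"\xaa\x55"]
parity = xor_blocks(*d)

# Small-write penalty: changing just d[0] requires reading the OLD data
# and OLD parity, then writing new data and new parity (4 I/Os total):
new_d0 = b"\xff\x00"
new_parity = xor_blocks(parity, d[0], new_d0)  # old parity ^ old data ^ new data

# Same result as recomputing parity over the whole stripe from scratch:
assert new_parity == xor_blocks(new_d0, d[1], d[2], d[3])
```

That read-modify-write cycle on every sub-stripe write is why a flood of small files hurts so much more than one big sequential copy.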

I noticed you were copying 700 GB of MP3s (WOW!!). I suspect that many files, combined with that volume, plays right into RAID 5's weaknesses. I also noticed you are using 5400 RPM hard drives. That's really slow; don't settle for less than 7200 RPM drives in a RAID 5. I'm not surprised you are seeing sluggish performance. The performance of the ISO copy concerns me a bit, but it's hard to say if that's good or bad from a forum post.

Don't compare the performance of your RAID 5 result against your RAID 3 result. For that matter, don't compare your results to any other different type of RAID levels (such as RAID 6 mentioned by another poster...). Different RAID configs have different READ/WRITE performance characteristics. You can't compare performance of a RAID controller/setup across different RAID levels and arrive at a [correct] evaluation as to whether a RAID card is bad or good; it's simply impossible.

It should be noted I'm not surprised you are seeing sluggish performance from your computer as well while this is being done. With that workload, and the fact the OS appears to be installed on the same RAID, that's just asking too much. Recall the OS is reading/writing a swap file to the same RAID.

Now for my curiosity part: why on earth did you settle on a RAID 5 setup for this?! If you insisted on using RAID for this, RAID 1 would've been fine. However, if you were willing to spend money on multiple hard drives, why not install 2 or more drives in your system and then set up a backup job to copy data to another drive on a daily basis? If one drive went away, you would still have your data on the other drive. For that matter, purchase an external drive. Just a thought...;)

08-07-11, 02:03 PM
RAID 5 and RAID 3 have the same minimum disk count: 3. That much I do know. RAID 3 isn't a common RAID level, so I had to look up the rest of the information on it. Where RAID 5 distributes parity info across all the disks, RAID 3 uses one disk out of its array for dedicated parity storage; parity information is not written to the other n-1 disks. RAID 3 requires all disks to spin up and read/write at the same time. It appears RAID 3's read/write performance is similar to a single drive's. I suspect it's bounded by the drive with the worst read/write performance, although what I'm reading doesn't say that. RAID 3's sequential write performance appears to be better, which is why the original poster saw a performance improvement after jumping to RAID 3.

I wonder what happens if the parity disk bites the dust? Since that's the only disk with parity info on it, can it be rebuilt from the array? If so, I suspect the system can't be in use while that happens. In a RAID 5 setup, if any one disk goes to the great digital dustbin in the sky, the array can be rebuilt while the system is in use.
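To answer my own question, the XOR math works the same either way - any single lost block, data or parity, is just the XOR of the surviving blocks in that stripe. A quick sketch (toy 1-byte blocks, purely illustrative):

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

stripe = [b"\x01", b"\x02", b"\x04", b"\x08"]  # data blocks, one per data disk
parity = xor_blocks(*stripe)                    # the parity block

# A data disk dies: its block is the XOR of every surviving block + parity.
survivors = stripe[:2] + stripe[3:] + [parity]
rebuilt = xor_blocks(*survivors)
assert rebuilt == stripe[2]

# The dedicated parity disk dies instead (the RAID 3 worry above): no data
# is lost at all - parity is simply recomputed from the intact data blocks.
assert xor_blocks(*stripe) == parity
```

So losing the RAID 3 parity disk is actually the benign case: the data disks are untouched and the replacement parity disk is refilled from them. Whether the array stays online during the rebuild is up to the controller, not the math.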

Having said that, I'm shocked RAID 3 was chosen to replace the RAID 5 setup. If the controller allows it, the poster would've been better off (with 5 drives) setting up two RAID 1 pairs (2 drives each), where one pair would be drive C: and the other drive D:, with the 5th drive as a global hot spare. This way, the requirement for redundancy would be met, performance would still be pretty good, a spare drive would exist in case one went out, and the user could even set up a manual backup job to copy data from one drive to the other. That would be a fairly awesome, rock solid setup.

08-07-11, 02:42 PM
RAID 3 is a bad option.

08-08-11, 04:42 PM
RAID 3 was an earlier implementation that eventually led to RAID 5.
The difference, as stated, is where the parity resides.
RAID 3 is basically a RAID 0 array (data is striped across several disks for performance) with ONE added disk that stores the parity for each stripe. So, on your 5-disk setup, you would have:
Disk1 disk2 disk3 disk4 disk5(parity)
dataA dataB dataC dataD <parity of data A-D>
dataE dataF dataG dataH <parity of data E-H>
In that case, your bottleneck becomes the parity drive, as you will ALWAYS have to write to it, no matter how much data you write. Ex: if you write only dataA, you still need to update the parity.

RAID5 does a similar thing, but will alternate where the parity is stored for better performance:
Disk1 disk2 disk3 disk4 disk5
dataA dataB dataC dataD <parity of data A-D>
dataE dataF dataG <parity of data E-H> dataH

This way, since the parity is split across drives, small writes get spread across the drives as well.

Now, the kicker: parity takes work to calculate, so most GOOD RAID 5 cards have a dedicated processor to do it and decent cache memory on board. Your card appears to have neither (so your CPU does the calculation and there's no cache to speak of). That will lead to poor performance.

If you are set on 5+ SATA 2 drives, you'll need to spend quite a bit more for reliable performance.
Either of those will make a huge difference (at a huge premium too :-S )

08-08-11, 06:33 PM
I like Frenchy2k1's explanation. I'm liking it a lot in fact.

Still, I have to go back to my earlier question/observation...

Why is RAID being used at all for this?! Buy 2 HDs, install them both as separate drives, and use the second purely for backup purposes. Or buy an external HD. Either way, it's cheaper, easier to set up, and less trouble-prone than any RAID setup is going to be. It's a piece of cake to set up a backup job in Win7 for Pete's sake, compared to the trouble and expense of implementing a RAID.

08-08-11, 07:09 PM
RAID 3 is a bad option.

Yes it is.

08-08-11, 08:54 PM
Some factors which haven't been mentioned here:

Watch your stripe sizes for the RAID, partition alignment, and your NTFS allocation unit sizes. I'm having some bad performance issues right now with a 12-disk RAID 6 DS3300 iSCSI that gets a clean 125 MB/s (line speed) with raw Linux access but a measly 35 MB/s in Windows with a big 6 TB NTFS partition. These things can F you hard in the A.

08-08-11, 10:12 PM
Lots of good info here. The main thing you need to do is first swap the controller out. The hard part is finding a reasonably priced one... If you could drop down to 4 drives you can find controllers in the $400-or-less range. If you want 8 internal ports it's going to cost a few hundred more.

Here is a good thread to look over to see what a good RAID controller will give you:

4 port:

LSI $310 plus you'll need a $40 SAS cable

Adaptec $420 with cable

Areca 1212 $350 and needs a $40 SAS cable

8 Port:

Adaptec 6805 $600 with cables

Areca 1880i $600 with cables

LSI 9261 $500 plus two $40 SAS cables required

There are some HighPoint cards that might be worth mentioning, like the 4-port 4310 or the 8-port 4320, but the reviews aren't all that favorable. They will still need the $40 SAS-to-SATA cable as well. http://www.newegg.com/Product/Product.aspx?Item=16-115-063&SortField=0&SummaryType=0&Pagesize=10&PurchaseMark=&SelectedRating=-1&VideoOnlyMark=False&VendorMark=&IsFeedbackTab=true&Page=2

They also have the older SATA 2 HighPoint 3510 cards. http://www.highpoint-tech.com/usa_new/series_rr3500.htm

The main thing to notice is the onboard cache. This plays a huge role in performance, since the controller can use it to hide the latency of the drives. Also, anything that doesn't list which controller it's using probably isn't a hardware card. All of these are 800 MHz+ hardware controllers.

I would probably swap those drives out too if you're after performance and reliability, as those really aren't meant for either. There are a lot of good enterprise drive options if you want something that should last.

All of these carry 5 year warranties and should be 1 million+ MTBF drives.

Hitachi drives: http://www.newegg.com/Product/Product.aspx?Item=N82E16822145310

Seagate :http://www.newegg.com/Product/Product.aspx?Item=N82E16822148610

WD: http://www.newegg.com/Product/Product.aspx?Item=N82E16822136579

Obviously the price tag is really going to hinder the ability to get any of these drives. 2 of these will cost the same as the 5 Samsungs did, but they will perform better and last longer. There might be some other good alternatives that are a bit cheaper, like WD Blacks, which still carry a 5-year warranty.

Some factors which haven't been mentioned here:

Watch your stripe sizes for the RAID, partition alignment, and your NTFS allocation unit sizes. I'm having some bad performance issues right now with a 12-disk RAID 6 DS3300 iSCSI that gets a clean 125 MB/s (line speed) with raw Linux access but a measly 35 MB/s in Windows with a big 6 TB NTFS partition. These things can F you hard in the A.

Definitely worth mentioning as well. If you're going to store really small files you'll want a smaller stripe size; for larger files you'll want some pretty decent sized blocks. For 5 drives I'm guessing the smallest sensible stripe is going to be something like 64k if you're storing MP3 files. Windows 7 will correctly align the drives when it partitions them, so you won't take that performance hit. On XP / Server 2003 you'll have to use something else to properly partition the drives first.

Actually, even with MP3s I'm sure you can use 128k stripes, if not a bit bigger. The smaller the blocks, the more fragmented the data can get and the more I/O is required to read it. The larger the blocks, the less fragmentation, at a cost of disk space.
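The alignment point above is easy to check with arithmetic. A small sketch (the offsets and sizes here are illustrative defaults, not measured from the poster's array): Win7 starts partitions at 1 MiB, which divides evenly into common stripe sizes, while XP's old 63-sector offset does not - so every cluster straddles stripe boundaries and costs extra I/O.

```python
def alignment_report(offset_bytes: int, stripe_kib: int, data_disks: int,
                     cluster_kib: int) -> dict:
    """Check a partition offset and cluster size against the RAID geometry."""
    stripe = stripe_kib * 1024
    full_stripe = stripe * data_disks  # one full row of data across the array
    return {
        "full_stripe_kib": full_stripe // 1024,
        "partition_aligned": offset_bytes % stripe == 0,
        "cluster_fits_stripe": stripe % (cluster_kib * 1024) == 0,
    }

# Win7-style 1 MiB offset; 64 KiB stripe; 4 data disks (5-disk RAID 5); 4 KiB clusters
print(alignment_report(1024 * 1024, 64, 4, 4))
# XP-style 63-sector (31.5 KiB) offset straddles every stripe boundary:
print(alignment_report(63 * 512, 64, 4, 4))
```

The first call reports an aligned partition; the second doesn't, which is exactly the "use something else to partition first" problem on XP / Server 2003.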

08-08-11, 10:12 PM
After talking with a friend of mine today (and all of the welcomed input here at NVNews), I have decided to drop the RAID 3 for a RAID "10". I'll only be using 4 of my drives, but theoretically, I should be able to drop 2 drives and still maintain all of my data. I'll then take my 5th drive and copy all of my critical data to it and take it off site. I don't anticipate needing more than 2TB for the next year or so, and I have a 6th 2TB drive that I could use in a pinch.

I know that the card I bought is relatively cheap, and I did not expect $450 performance from a $75 card. But, for $75, it does provide a lot of options that I had not seen before in a card this cheap. It has onboard hardware RAID acceleration, a built-in RAID 3/5 write-back cache, and hot spare support. It's also the only card I found with support for 5 internal drives.

Thanks again peeps.

08-08-11, 10:42 PM
If you're going that route you might just be better off using the onboard controller. Since there is no parity data to calculate the onboard should be up to the task.

- 6 x SATA 3Gb/s connectors (SATA2_0, SATA2_1, SATA2_2, SATA2_3, SATA2_4, SATA2_5) supporting up to 6 SATA 3Gb/s devices
- Support for SATA RAID 0, RAID 1, RAID 5, and RAID 10

I do see where the Syba manual says full hardware RAID, but it's also on a PCI-e 1x slot, so max bandwidth can only be ~300 MB/s. I honestly wonder if your onboard controller couldn't best the Syba's performance, even if it is using CPU cycles to do it. Assuming it's the board in your sig, you have an ICH10R chipset, so the functionality is built right into the southbridge.

08-08-11, 10:52 PM
RAID 10 is dog slow in software. It doesn't seem like it should add considerable overhead, but I've had **** performance system-wide both times I've tried it on Intel onboard.