RAID or SSD


bashar-corei7
03-22-10, 02:11 AM
Hi to all....
I'm going to buy one of these two options:
1- four Western Digital Green 1TB/64MB cache HDDs, in RAID 0, 1, 5, 10, or 0+1.
2- one SSD (average read 250MB/s, average write 170MB/s) plus one HDD for storage.
Which is best overall: for Windows, for games, for boot times, for work? And which RAID option is fastest for reads & writes: RAID 0, 1, 5, 10, or 0+1? Thanks to all.

My PC: Intel Core i7 920 @ 3.8GHz / 6GB DDR3-1600 RAM (3x2GB) / Gigabyte EX58-UD4P motherboard / Leadtek GeForce GTX 285 / HP Pavilion w2207 monitor / WD 1TB Green + WD 500GB Blue HDDs / X-Fi Titanium sound / Creative GigaWorks G550 speakers / Gigabyte 1200W PSU / Gigabyte 3D Aurora full-tower case / Snazzi video capture card / Win 7 64-bit Ultimate.

bob saget
03-22-10, 03:12 AM
Great!!!
This is the correct section of the forums!!!!
Wait patiently for the reply :)

jlippo
03-22-10, 04:34 AM
For the system and program drive I would suggest an SSD; it simply makes everything feel faster.
Then use a storage drive for all the bigger files and such.

Here is a link to all the information you'll ever need on HDDs, including all the RAID levels.
http://www.storagereview.com/guide/single.html
For speed, go for RAID 0; for reliability, go for RAID 1 or 5.

If you want extreme speed, go for SSDs in RAID 0; with a good RAID controller you get a near-linear speedup from each additional SSD.
So with 4 of the new SSDs in RAID 0 you'd see roughly 1GB/s sequential reads (4 x 250MB/s) and ridiculous random read characteristics. ;)
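
If you want to sanity-check that scaling yourself on Linux, something like this works (a rough sketch; /dev/md0 and the test file path are assumptions):

# Raw sequential read straight from the array device:
hdparm -t /dev/md0
# Sequential read through the filesystem; drop caches first so RAM
# doesn't inflate the numbers (both need root):
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/array/testfile of=/dev/null bs=1M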

Oh, and always take a backup of your system drive once it's all set up, for fast recovery.
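
One low-tech way to do that on Linux is a raw dd image of the system drive (a sketch only; /dev/sda and the destination path are assumptions, and the drive should be imaged while idle or from live media):

# Image the whole system drive, compressed on the fly:
dd if=/dev/sda bs=1M | gzip > /mnt/backup/system.img.gz
# Restore it later with:
gunzip -c /mnt/backup/system.img.gz | dd of=/dev/sda bs=1M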

Roliath
03-22-10, 06:21 AM
I'd definitely go with an SSD for the OS and a large TB+ drive for storage and apps.

Toss3
03-22-10, 06:49 AM
Definitely go for an SSD.

musman
03-22-10, 07:15 AM
I have both; go with the SSD. Forget the read/write rates. The seek time is what makes SSDs so great. My PC takes 7-8 seconds to power down and 50 seconds to boot completely. You will not be sorry.

FlakMagnet
03-22-10, 09:32 AM
Just one warning... if you use an SSD as your OS drive and then have a RAID array of standard hard disks, you will not be able to use TRIM on your SSD with that mobo chipset.

I also have an X58 chipset motherboard, and using a RAID array on my hard drives means the SSD is a 'non-member RAID drive'. Intel does not support TRIM pass-through in this configuration.

So if you're going to get an SSD, bear in mind that TRIM will not work if you also use RAID.

My SSD is an OCZ Vertex 120GB, and the latest firmware supports Garbage Collection, meaning it tidies itself up when not being used. I'm not sure if other SSDs will also do this when TRIM is not available. It would be worth finding out before you take the plunge.
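
On Linux you can at least check whether the drive itself supports TRIM (this says nothing about whether the RAID driver passes it through, which is the problem above; on Windows 7, fsutil behavior query DisableDeleteNotify does the OS-side check). A sketch, assuming the SSD shows up as /dev/sda:

# Does the drive advertise TRIM support?
hdparm -I /dev/sda | grep -i trim
# A TRIM-capable drive prints something like:
#    *    Data Set Management TRIM supported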

logan
03-22-10, 10:28 AM
I'd never do raid0 with standard hard drives unless the data is completely disposable or you have very good backups.

A while back I built myself a software raid5 with 3x1TB WD Blacks. The write performance wasn't particularly good, topping out at ~75MB/sec with ext3 and ~100MB/sec with xfs, and the load the desktop came under while rebuilding was terrible (the machine was nearly unusable). A 3ware 9650SE pushed the xfs write performance up to 150+MB/sec and took a huge load off the system during a verify/rebuild, but had I bought the controller new it would have doubled the cost of the storage.

My initial testing (writes only) was done with dd and later included bonnie++. I don't have numbers from the software raid5, but the 3ware raid5 performance seemed to vary a bit, ranging anywhere from 145-160MB/sec write and 165-180MB/sec read. I later converted the array to a 4-disk raid10 and performance was higher and more consistent: 160-165MB/sec write and 220-225MB/sec read.

(3w raid5, xfs)
# bonnie++ -n 0 -r 8192 -s 16384 -f -b -d .
1.96,1.96,jam,1,1261622212,16G,,,,154504,23,76335,16,,,185848,10,383.8,6,,,,,,,,,,,,,,,,,,,532ms,248ms,,399ms,485ms,,,,,,
(3w raid10, xfs)
# dd if=/dev/zero of=test bs=1M count=16384
17179869184 bytes (17 GB) copied, 99.2081 s, 173 MB/s
# bonnie++ -n 0 -r 8192 -s 16384 -f -b -d .
1.96,1.96,jam,1,1261739024,16G,,,,164876,23,83865,11,,,231149,13,503.9,7,,,,,,,,,,,,,,,,,,,278ms,287ms,,176ms,311ms,,,,,,

I recently picked up a used 1U server for myself and did a software raid10 with 4x500GB. The performance was surprisingly good at 135-140MB/sec write and 145-155MB/sec read, and the rebuild/verify load was more than acceptable.

(sw raid10, xfs)
# dd if=/dev/zero of=test bs=1M count=16384
17179869184 bytes (17 GB) copied, 121.229 seconds, 142 MB/s
# dd if=test of=/dev/null bs=1M
17179869184 bytes (17 GB) copied, 111.04 seconds, 155 MB/s
# bonnie++ -n 0 -r 8192 -s 16384 -f -b -d .
1.96,1.96,kif,1,1265154874,16G,,,,138034,26,31763,5,,,148553,11,130.0,1,,,,,,,,,,,,,,,,,,,9803ms,504ms,,61323us,375ms,,,,,,

I can't speak to the SSD option, but I suggest raid10 if you're looking for a 4-disk raid with speed and redundancy.
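
For anyone wanting to reproduce that, a 4-disk software raid10 on Linux boils down to a few mdadm commands (a minimal sketch; device names are assumptions, and --create destroys whatever is on those disks):

# Build the array from four whole disks:
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]
# Put a filesystem on it and mount:
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/array
# Watch the initial sync progress:
cat /proc/mdstat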

bacon12
03-22-10, 10:42 AM
Actually, you can now have both and still have TRIM:
http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=15251&lang=eng

frenchy2k1
03-22-10, 12:35 PM
It's easy to see both why a RAID 10 can beat a RAID 5 and the trade-offs needed to get there.

RAID 10 trades 50% of your space for redundancy (4 x 1TB HDDs will only offer 2TB of storage, using the other 2TB for redundancy). On the other hand, RAID 5 "only" spends one drive's worth of capacity (so 33% in a 3-HDD setup and 25% in a 4-HDD setup) for a comparable level of redundancy. You trade processing power (you need to XOR all the data on every write) for space.

Basically, your IO in a RAID 5 setup will be limited by how many XOR operations you can process (for writes and rebuilds).
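
To make the XOR point concrete, here's a toy illustration in bash arithmetic (made-up byte values, one "block" per disk):

# Three data "blocks" on three disks of a 4-disk RAID5:
d1=0xA5; d2=0x3C; d3=0xF0
# Parity written to the fourth disk is the XOR of the data:
p=$(( d1 ^ d2 ^ d3 ))
printf 'parity:       0x%02X\n' $p
# If the disk holding d2 dies, its data comes back by XOR-ing the
# survivors with the parity; every write and rebuild pays this cost:
printf 'recovered d2: 0x%02X\n' $(( d1 ^ d3 ^ p ))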