Old 01-04-06, 10:16 PM   #2
fhj52
Registered User
Join Date: Jan 2005
Posts: 135
Re: mdraid vs. dmraid benchmark and Intel RAID0/RAID1 on same pair of drives

Quote:
Originally Posted by kenyee
Two quick questions:
- has anyone found any benchmarks of nVidia's software RAID (dmraid) vs. the Linux built-in software RAID (mdraid)?
There are benchmarks for Linux software RAID; both flavors, dmraid and md, have been tested. dmraid itself has absolutely nothing to do with nVidia Corp ... .
The nVidia platform RAID (I call it HARM) is a so-called "foreign" RAID, as are other vendors' RAID solutions. The dmraid application is what Linux uses to recognize and activate those "foreign" RAID formats; HARM is just one of them.

I had some links to md and dmraid vs. hardware RAID benchmarks, but they seem to be somewhere else at this time. Sorry. You can Google for them...

I have not found any nVidia nforce* RAID-specific benchmarks on the Linux or MS Windows platforms compared to anything; not even compared to MS's own software RAID-ish solutions. I suppose nVidia is hiding them because they are no better than any other software solution. Without proof to the contrary, it would be safe to assume that is the case.

Quote:
Originally Posted by kenyee
- anyone know if it's possible to use Linux's software RAID to do something like the mixed RAID 0/1 that Intel's integrated RAID supports, where you can partition two drives into RAID0 and RAID1 partitions so you can put swap/temp on the RAID0 "drive" and your important data on the RAID1 "drive"?

ken
IIRC, Linux SW RAID (md) works on partitions rather than whole disks, so one can put a RAID 0 across one pair of partitions, a RAID 1 across another, a RAID 5 across another ... but the members of each array still need to be mapped to different (or multiple) physical HDDs for the striping/redundancy to mean anything.
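As a sketch of that mixed setup on two drives (the device names /dev/sda, /dev/sdb and the partition layout are assumptions; adjust to your system, and note these commands destroy existing data on the partitions):

```shell
# Assumes each drive already carries two partitions of type
# "Linux raid autodetect": sda1/sdb1 for the striped array,
# sda2/sdb2 for the mirrored array.

# RAID 0 across the first partition of each drive -- swap/temp goes here
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

# RAID 1 across the second partition of each drive -- important data goes here
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Use md0 as swap and put a filesystem on md1
mkswap /dev/md0 && swapon /dev/md0
mkfs.ext3 /dev/md1

# Record the arrays so they assemble at boot
mdadm --detail --scan >> /etc/mdadm.conf
```

This is exactly the layout Intel's integrated RAID offers, just done with plain md arrays over partitions.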
RAID 10 (1+0) in Linux is a layered or combination RAID: RAID 1 mirrors are created first, and then data is striped across those mirrors. The reverse is also possible, RAID 0+1 (striping the data first and then mirroring the stripe set), and that is what many vendors label "RAID 10". The nVidia RAID 10 is actually 0+1 and requires 4 physical disks.
One can argue about which is better (and many do...). What is notable is that Linux SW RAID can build a RAID 10 from just three disks, using md's native raid10 level.
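A minimal sketch of that three-disk case (device names are assumptions; md's raid10 personality handles the layout itself, so no nested arrays are needed):

```shell
# md's native raid10 level with an odd number of members:
# two copies of every block, distributed across three disks,
# giving roughly 1.5 disks of usable capacity.
mdadm --create /dev/md0 --level=10 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1

# Verify the layout and rebuild state
mdadm --detail /dev/md0
cat /proc/mdstat
```

The classic layered 1+0 (two RAID 1 pairs striped by a RAID 0 on top) still needs four disks; the three-disk trick is specific to md's raid10 driver.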

Managing multiple RAID partitions is not so easy, so maybe LVM or EVMS is what you seek to be able to do that.
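If you do go the LVM route, here is a sketch of layering it on top of an md array (the names /dev/md1 and vg0 are assumptions):

```shell
# Turn the mirrored md array into an LVM physical volume
pvcreate /dev/md1
vgcreate vg0 /dev/md1

# Carve out logical volumes; these can be grown later,
# which is the main win over fixed RAID partitions
lvcreate -L 10G -n home vg0
lvcreate -L 5G  -n var  vg0
mkfs.ext3 /dev/vg0/home
mkfs.ext3 /dev/vg0/var
```

That way the RAID layer handles redundancy once, and all the resizing and carving-up happens in LVM instead of juggling many small md arrays.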

HTH
__________________
When two people meet and exchange gifts, each has one object.
When two people meet and exchange ideas, each has two ideas.
... Open Source. Just do it.

---------------------------------
System: BFG GTX260^2 graphics but has ** TERRIBLE BLINKING OS **
SuperMicro H8DCi+AMI BIOS;dual Opt'285;8GB;LSI 320-2x w/ 6xU320 Fuji' MAXs in RAID 10; 4xSATAII on LSI 3041E for backup. Multi-boot Mandriva Linux, openSUSE, WinXPx64 & Win2k-AS; Creative Audigy2-Digital audio.
Gigabyte GA-2CEWH & NVRAID are GONE ... Finally!!