nV News Forums


nV News Forums (http://www.nvnews.net/vbulletin/index.php)
-   General Linux (http://www.nvnews.net/vbulletin/forumdisplay.php?f=27)
-   -   mdraid vs. dmraid benchmark and Intel RAID0/RAID1 on same pair of drives (http://www.nvnews.net/vbulletin/showthread.php?t=62648)

kenyee 01-03-06 11:17 PM

mdraid vs. dmraid benchmark and Intel RAID0/RAID1 on same pair of drives
 
Two quick questions:
- has anyone found any benchmarks of nVidia's software RAID (dmraid) vs. the Linux built-in software RAID (mdraid)?
- anyone know if it's possible to use Linux's software RAID to do something like the mixed RAID 0/1 that Intel's integrated RAID supports, where you can partition two drives into RAID0 and RAID1 partitions so you can put swap/temp on the RAID0 "drive" and your important data on the RAID1 "drive"?

ken

fhj52 01-04-06 10:16 PM

Re: mdraid vs. dmraid benchmark and Intel RAID0/RAID1 on same pair of drives
 
Quote:

Originally Posted by kenyee
Two quick questions:
- has anyone found any benchmarks of nVidia's software RAID (dmraid) vs. the Linux built-in software RAID (mdraid)?

There are benchmarks for Linux software RAID; both flavors, dmraid and md, have been tested. dmraid itself has absolutely nothing to do with nVidia Corp.
The nVidia platform RAID (I call it HARM) is a so-called "foreign" RAID, as are other vendors' RAID solutions. The dmraid application is what Linux uses to recognize and use these "foreign" RAID formats; HARM is just one of them.

I had some links to md and dmraid vs. hardware RAID benchmarks, but I can't locate them at the moment. Sorry. You can Google for them...

I have not found any nVidia nforce* RAID-specific benchmarks on the Linux or Windows platforms compared to anything, not even to Microsoft's own software RAID-ish solutions. I suppose they (nVidia) are hiding them because they are no better than any other software solution. Without proof to the contrary, it is probably safe to assume that is the case.

Quote:

Originally Posted by kenyee
- anyone know if it's possible to use Linux's software RAID to do something like the mixed RAID 0/1 that Intel's integrated RAID supports where you can put partition two drives into RAID0 and RAID1 partitions so you can put swap/temp on the RAID0 "drive" and your important data on the RAID1 "drive?

ken

IIRC, Linux SW RAID is treated like a filesystem, so one can put RAID 0 on part of a drive, RAID 1 on another part, and RAID 5 on another ... but you still need to map each of those onto a different drive (or multiple drives).
RAID 10 in Linux is a layered or combination RAID, meaning a RAID 1 mirror is created first and then the data is striped across the mirrors. The reverse, RAID 0+1 (striping the data first and then mirroring the stripes), is also possible, and is what many vendors call RAID 10. The nVidia RAID 10 is actually 0+1 and requires four physical disks.
One can argue about which is better (and many do...). What is notable is that Linux SW RAID makes it possible to build a RAID 1+0 with just three disks.

It is not so easy to manage multiple RAID partitions, so LVM or EVMS may be what you need to do that.
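The layered setup described above might look something like this with mdadm. This is only a sketch with hypothetical device names (sdb through sde), not a tested recipe:

```shell
# Layered RAID 1+0 ("stripe of mirrors") with Linux md, assuming
# four data disks sdb-sde, each with a partition for the array.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
# Stripe across the two mirrors:
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
mkfs.ext3 /dev/md2

# md also has a native "raid10" personality that can spread 2-way
# mirroring across an odd number of drives, e.g. three:
mdadm --create /dev/md3 --level=10 --raid-devices=3 /dev/sdb2 /dev/sdc2 /dev/sdd2
```

The native raid10 level is what makes the three-disk variant possible without stacking arrays by hand.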

HTH

kenyee 01-05-06 09:56 AM

Re: mdraid vs. dmraid benchmark and Intel RAID0/RAID1 on same pair of drives
 
I haven't been able to find any docs or how-to sites that indicate you can put RAID0 on part of a pair of drives and RAID1 on another part of the same pair (what Intel calls "Matrix RAID"). All the setup sites I've seen say to tell mdraid what your two drives are (RAID0/RAID1) and then hand the drives to LVM, at which point you can partition them and install a different filesystem on each partition.

Haven't been able to dig up anything on 3 drive RAID 1+0 either. Do you have any links you bookmarked?

The closest thing I could dig up for benchmarks comparing the Intel, nVidia, and Windows RAID was this article, and they gave a lame excuse for not testing Windows RAID:
http://techreport.com/reviews/2005q4...d/index.x?pg=2

fhj52 01-09-06 02:50 AM

Re: mdraid vs. dmraid benchmark and Intel RAID0/RAID1 on same pair of drives
 
I do have links (I think...) but they are on a different, inaccessible OS.
The method for creating a 3-disk RAID 1+0 is simple: create a RAID 1 array, then use that array as the second volume of a RAID 0. I have no idea how safe or good it is.
...Terrible explanation, I know. Sorry.
The steps are in a RAID forum whose name I cannot recall, since I have not looked at it for over a year... but it is all about dmraid and the md application. IIRC, the author of dmraid announces new versions in that forum.
You will have to hunt for it - Google is your friend. :)
e.g.,
Wikipedia:
For example, 2-way mirroring on 3 drives would look like
http://en.wikipedia.org/wiki/Redunda...nux_MD_RAID_10

A RAID 10, sometimes called RAID 1+0 ...
http://en.wikipedia.org/wiki/Redunda..._disks#RAID_10
[ there is a diagram at the link; making one here that looks decent is too hard]
(NOTE: This is why Wikipedia sux; RAID 10 is supposed to be 1+0, a stripe of mirrors, but just about every vendor reverses it, so RAID 10 has become RAID 0+1, a mirror of striped disks, and they don't bother to tell you that. -> Never trust a vendor that says they have RAID 10 without defining what it is.)

As for making a disk part RAID 0 and part RAID 1, the only way I think that could be done would be with EVMS.
I have never done that, although I have had swap on a RAID array; swap is always swap, no matter where it goes.
EVMS does have a mailing list that is used for Q&A as well as development, announcements, etc.

To get good answers to RAID questions, a forum dedicated to RAID would be a good place to ask. Obviously nVidia does not care to assist... and I (obviously) am no expert either.


Good Luck!

PS: Thanks for that link! I will give it a look later.

kenyee 01-09-06 10:01 AM

Re: mdraid vs. dmraid benchmark and Intel RAID0/RAID1 on same pair of drives
 
Quote:

Originally Posted by fhj52
The method is simple to create a 3 disk RAID 1+0. Create a RAID 1 array and then use that RAID array as the second volume for a RAID 0.

Ahhhh. That seems so obvious now. It still requires 3 drives, but that's better than nothing. I still don't understand why something like Intel's Matrix RAID can't be done in Linux's software mdraid driver, though. I'll have to see if I can find the right forum to ask the mdraid coders...
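For anyone following along, the quoted recipe might translate into mdadm commands like these. Device names are hypothetical and this is an untested sketch, not a recommendation:

```shell
# Three-disk "RAID 1+0": mirror two disks, then stripe the mirror
# together with the third disk.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/md0 /dev/sdd1

# Caveat: the /dev/sdd1 half of the stripe is NOT mirrored, so losing
# sdd still destroys the whole array. md's native --level=10 with three
# drives avoids this by spreading the mirroring across all disks.
```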

thanks,

ken

kenyee 01-14-06 08:46 PM

Re: mdraid vs. dmraid benchmark and Intel RAID0/RAID1 on same pair of drives
 
Finally found some documentation on how to do Matrix RAID using the mdraid driver. Look at the bottom of section 6.3.2.3:
"It is perfectly possible to have several types of MD at once. For example if you have three 200 GB hard drives dedicated to MD, each containing two 100 GB partitions, you can combine first partitions on all three disk into the RAID0 (fast 300 GB video editing partition) and use the other three partitions (2 active and 1 spare) for RAID1 (quite reliable 100 GB partition for /home)."

There's just no how-to guide that I know of for doing this. It sounds fairly simple, though: you partition your drives, then add each partition to an mdraid "drive" as either RAID0 or RAID1. If you're using LVM, you can then add each mdraid drive as an LVM physical volume.
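The steps above could be sketched roughly as follows, assuming two disks sda and sdb that have each already been split into two partitions (device names and sizes are made up, and I haven't tested this exact sequence):

```shell
# "Matrix RAID" with md: stripe the first partition pair (fast,
# unprotected -- good for swap and /tmp):
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mkswap /dev/md0

# Mirror the second partition pair (redundant -- important data):
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Hand the mirrored array to LVM and carve it into logical volumes:
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 20G -n home vg0
mkfs.ext3 /dev/vg0/home
```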

felipegeek 02-24-06 10:03 PM

Re: mdraid vs. dmraid benchmark and Intel RAID0/RAID1 on same pair of drives
 
kenyee,

In your last post you mention combining partitions across disks to obtain striping and mirroring for different filesystems. This is not a good idea performance-wise. Any access to the mirrored partition pair will slow down the striped partitions (and vice versa), as they contend for the same set of heads on the disks. A drive can only execute one seek at a time, so if a drive is being queried by the stripe and the mirror simultaneously, the heads will have to jump around a lot.

This is also true for multiple partitions on the same drive pair if you were only mirroring, but I believe that case would be more efficient, since there is only one RAID type to deal with. Most mirroring implementations do round-robin reads to boost read performance and write to both drives simultaneously without penalty. Striping (RAID 0/5) reads chunks across all of the drives in the set based on the stripe size, and the same goes for writes. My guess is that the penalty for mixing them on the same set of drives would be substantial if both virtual RAID sets were busy, simply because of the different ways each RAID type treats the drives.

Just an opinion (not entirely backed up by scientific fact)
-felipe

kenyee 02-24-06 10:31 PM

Re: mdraid vs. dmraid benchmark and Intel RAID0/RAID1 on same pair of drives
 
Thanks, felipe.
I've actually decided to just RAID1 everything. I had planned to put swap and /tmp on RAID0, but I found a worrisome comment in a how-to saying that Linux crashes if swap disappears suddenly. It's still pretty cool that we can do Intel's fancy "Matrix RAID" using Linux ;-)

Now if I could just get root on LVM working on Kanotix (a Debian variant), I'd be happy (I'm running mdraid RAID1 and then making an LVM volume out of the RAID1 partition). mkinitrd doesn't support LVM if you compile it into the kernel; it's hardcoded to expect LVM as a module, so you have to use the lvm2_createinitrd example script with LVM, but that seems to create an initrd that's missing parts, because the kernel crashes on startup. It really shouldn't be this painful :-P


All times are GMT -5. The time now is 01:01 PM.

Powered by vBulletin® Version 3.7.1
Copyright ©2000 - 2014, Jelsoft Enterprises Ltd.
Copyright 1998 - 2014, nV News.