View Full Version : A8N-SLI Deluxe question


retsam
12-06-04, 02:48 AM
OK, I'm looking at getting this motherboard sometime around January... I noticed it has two Gigabit Ethernet ports. I'm wondering if anyone knows whether those two ports can be aggregated (trunked) together?


retsam

netviper13
12-06-04, 03:02 AM
Even if they could, it is highly unlikely that you would notice a difference in transfer speeds. At its theoretical maximum, a 2Gbps connection would be capable of transferring 250MB/s of data (for the sake of easy math I'm going by multiples of 10 rather than the true 8). Taking into account the maximum theoretical hard drive interface speed of SATA's 150MB/s, you automatically lose 100MB/s of transfer speed just in the interface change. Now consider that even most RAID arrays are not quite pushing the PATA theoretical max of 133MB/s burst transfer rate, so you can take off another 17MB/s. That leaves a theoretical max of 133MB/s; but bringing it out of the theoretical and into the practical, you're not going to get sustained transfer rates of 133MB/s to a hard disk. Even 100MB/s would be amazingly quick, but let's take that as the final maximum sustainable transfer rate.

With all of those performance hits, that juicy 2Gbps connection is all of a sudden choked down to precisely what the 1Gbps connection would be. So even if the two could be combined, it is highly unlikely you would see a performance benefit.
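The bottleneck chain described above can be sketched numerically. A minimal Python sketch, using the round decimal figures from the post (the 100MB/s sustained-write figure is the post's estimate, not a measured number):

```python
# Effective transfer rate is limited by the slowest stage in the chain.
# Figures are the theoretical/estimated maxima quoted above, in decimal MB/s.
stages = {
    "2Gbps trunked link": 250,    # 2000 Mb/s / 8 = 250 MB/s (ignoring overhead)
    "SATA interface": 150,        # SATA 1.5 Gb/s channel limit
    "sustained disk write": 100,  # the post's generous sustained estimate
}

bottleneck = min(stages, key=stages.get)
print(f"effective rate: {stages[bottleneck]} MB/s, limited by {bottleneck}")
```

Since a single 1Gbps link already moves about 125MB/s in decimal terms, anything downstream capped at ~100MB/s makes the trunked and untrunked links perform identically, which is the point being made here.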

retsam
12-06-04, 03:23 AM
With all of those performance hits, that juicy 2Gbps connection is all of a sudden choked down to precisely what the 1Gbps connection would be. So even if the two could be combined, it is highly unlikely you would see a performance benefit.

I see where you're coming from, but the GigE interface doesn't use the PCI bus; from what I understand it connects directly to the chipset. And yes, I'm using massive RAID between my main server and, hopefully, my main PC.
We can get into things like TCP overhead and scalability (and yes, TCP has come to the end of the line regarding scalability; that's why at 10GigE they try not to use TCP in their testing).
I've used two GigE NICs on Sun Fire servers (trunked, though mostly used for failover) and they work well, and there is a noticeable difference in performance; but then again, those NICs use a 64-bit/66MHz PCI-X interface.

Article about the integrated nForce NIC (http://tech-report.com/reviews/2004q4/nforce4-ultra/index.x?pg=2)

retsam
12-06-04, 03:30 AM
Taking into account the maximum theoretical hard drive interface speed of SATA's 150MB/s, you automatically lose 100MB/s of transfer speed just in the interface change.
I don't think the overhead of the conversion is that substantial... how would you get almost a two-thirds loss?

netviper13
12-06-04, 12:40 PM
I see where you're coming from, but the GigE interface doesn't use the PCI bus; from what I understand it connects directly to the chipset. And yes, I'm using massive RAID between my main server and, hopefully, my main PC.
We can get into things like TCP overhead and scalability (and yes, TCP has come to the end of the line regarding scalability; that's why at 10GigE they try not to use TCP in their testing).
I've used two GigE NICs on Sun Fire servers (trunked, though mostly used for failover) and they work well, and there is a noticeable difference in performance; but then again, those NICs use a 64-bit/66MHz PCI-X interface.

Article about the integrated nForce NIC (http://tech-report.com/reviews/2004q4/nforce4-ultra/index.x?pg=2)

The chipset itself has the bandwidth. But remember that transferring data does not just involve the cabling and NICs; it involves all the main parts of a PC, especially the hard drive. Even if the NIC and its interface to the motherboard could handle 2Gbps, no hard drive can write data that quickly, so you would not see anywhere near those transfer rates.

I don't think the overhead of the conversion is that substantial... how would you get almost a two-thirds loss?

Look at it this way: regardless of how fast the hard drive is, data can only travel to it over the SATA bus at a maximum of 150MB/s. 2Gbps works out to 250MB/s, but with computers you are limited by the slowest link in the chain, which means you are limited to that 150MB/s transfer speed to the hard drive: 250 - 150 = 100MB/s lost between the interfaces.
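The unit conversion behind those figures is straightforward; a quick sketch using the same decimal convention as the post:

```python
def gbps_to_mb_per_s(gbps: float) -> float:
    """Convert gigabits/s to decimal megabytes/s (1 byte = 8 bits, 1 Gb = 1000 Mb)."""
    return gbps * 1000 / 8

# The post's figures: 2 Gbps -> 250 MB/s
link = gbps_to_mb_per_s(2.0)       # 250.0
sata = gbps_to_mb_per_s(1.5)       # 187.5 raw signaling; SATA's usable 150 MB/s
                                    # figure also reflects 8b/10b encoding overhead
print(link)                         # 250.0
print(link - 150)                   # 100.0 MB/s "lost" at the SATA hop
```

Note the conversion uses decimal units throughout; binary (MiB/s) figures would be slightly lower, but the bottleneck argument is the same either way.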

Dazz
12-06-04, 06:52 PM
What he is saying is that it wouldn't really matter, as you can only download at the speed of your hard drive. Like everything else, there is a bottleneck somewhere; in this case it's hard drive performance.

retsam
12-07-04, 12:47 AM
What he is saying is that it wouldn't really matter, as you can only download at the speed of your hard drive. Like everything else, there is a bottleneck somewhere; in this case it's hard drive performance.

What he is saying would be true if the GbE were hanging off the vanilla-flavored PCI bus, but it isn't... it's hanging right off the bridge chip. And he is saying that a single HDD wouldn't be able to feed it (which would be true if I were using a single drive), and that would also hold if the whole SATA controller had only 1.5 gigabits of bandwidth total; but SATA has 1.5Gbps per channel, and it isn't hanging off the PCI bus either, it's right off the bridge chip. Also, what I'm talking about is an 8-drive RAID system. The whole point of aggregation is to keep the network as responsive, and as well fed, as possible. Look, the question I was asking is: does the software for the nForce allow for aggregation? That's all I'm looking for. I know this can be done and is done (I use Intel cards on a couple of servers that have dual GbE aggregated, and I'm looking at a Sun Fire 10000 right now that has a four-GBIC aggregation for our work network).
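As a rough sanity check of the argument above, here is a Python sketch comparing what an 8-drive array can source against the trunk's capacity. The per-drive sustained figure is a hypothetical, illustrative number for 2004-era disks, not something stated in the thread:

```python
# Rough check: can an 8-drive RAID array saturate a 2 Gbps trunk?
drives = 8
per_drive_mb_s = 50       # assumed sustained MB/s per drive (illustrative)
sata_channel_mb_s = 150   # SATA 1.5 Gb/s per channel, one drive per channel
trunk_mb_s = 250          # 2 Gbps trunk in decimal MB/s

# Each drive is well under its own channel limit, so the array's aggregate
# rate is simply drives * per-drive throughput.
array_mb_s = drives * min(per_drive_mb_s, sata_channel_mb_s)
print(f"array sources ~{array_mb_s} MB/s vs trunk cap {trunk_mb_s} MB/s")
```

Under that assumption the array outruns the trunk (8 x 50 = 400MB/s > 250MB/s), so the network link, not the disks, would be the limit, which is why aggregation could plausibly help here.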


Here is a study of load balancing in networked environments. (http://www.waterfalltech.com.au/POPnetserver/ExternalFiles/POPnetserver4500TrunkingPerformanceTestReport.pdf)


Intel dual GbE cards (http://www.intel.com/network/connectivity/products/pro1000mt_dual_server_adapter.htm); Intel calls their trunking "adapter teaming".

Here is another study (http://www.intel.com/network/connectivity/resources/doc_library/tech_brief/maximizing_gig.pdf)

netviper13
12-07-04, 01:16 AM
Ah crap, I missed the "massive raid" part of your earlier post. My bad, I apologize.

retsam
12-07-04, 02:31 AM
Ah crap, I missed the "massive raid" part of your earlier post. My bad, I apologize.

Hehe, I was wondering why you were so adamant about a lack of bandwidth regarding the hard drive... but you see what I'm getting at. I'm still wondering whether or not the drivers allow for trunking (i.e., aggregating, or load balancing).