how to build a really fast home network

nekrosoft13
10-01-08, 07:44 AM
I would like to speed up my network performance. Some tips would be great.

Right now I have Comcast as my ISP, with a Linksys modem connected to a D-Link DGL-4500 plus an additional D-Link (consumer-level) 8-port gigabit switch.

The reason I'm considering this: I managed to get 100-120 MB/s transfers between my main rig and the server (Server 08, still running the evaluation version).

The server is a 2.2 GHz Phenom with 4 GB RAM, but at 100-120 MB/s the CPU spikes to about 50-80% and the RAM fills to 100% in under a minute for some reason. Right now I'm using the onboard Realtek gigabit chip.

The main rig is using the nVidia nForce gigabit chip (based on Marvell).

Both machines are connected to the DGL-4500. I'm considering buying Intel Pro PCI-E gigabit NICs for all three main PCs (main rig, server, HTPC) to cut down on CPU usage.

And instead of having them all plugged into the router, I'd get a better gigabit switch, plug all the machines into that, and then plug the switch into the router.

Q
10-01-08, 09:33 AM
It sounds like you have the right idea there. The dedicated NICs will offer superior throughput. Remember to enable jumbo frames whenever you can.

nekrosoft13
10-01-08, 09:50 AM
I looked around, and these NICs seem good:
http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=2276643&CatId=2380

just looking for a decent 8-port gigabit switch that can take it ;)

BTW, what jumbo frame size do you recommend?

Monolyth
10-01-08, 11:41 AM
Jumbo frames are great for big transfers across your network, but when picking out a switch, be sure to check that it supports jumbo frames.

The size depends mostly on the size of the files you will be transferring. 1500 bytes is of course the default. If you plan on huge MB- or GB-sized files, go ahead and top it out at 9000.
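As a rough sanity check on why 9000-byte frames help, here's a quick calculation of on-the-wire payload efficiency at the two MTUs mentioned above. This is a generic Ethernet/IPv4/TCP overhead estimate, not specific to any NIC in this thread:

```python
# Rough Ethernet payload efficiency for a TCP stream at a given MTU.
# Per frame: 18 bytes of Ethernet framing (header + FCS), 20 bytes of
# preamble + inter-frame gap, and 40 bytes of IPv4 + TCP headers
# carried inside the MTU itself.
def tcp_payload_efficiency(mtu):
    payload = mtu - 40        # MTU minus IPv4 + TCP headers
    wire = mtu + 18 + 20      # bytes actually occupying the wire
    return payload / wire

print(f"MTU 1500: {tcp_payload_efficiency(1500):.1%}")  # ~94.9%
print(f"MTU 9000: {tcp_payload_efficiency(9000):.1%}")  # ~99.1%
```

The wire-efficiency gain is modest; the bigger practical win at the time was that 9000-byte frames mean roughly one-sixth as many frames (and interrupts) per gigabyte, which cuts CPU load.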

walterman
10-01-08, 02:02 PM
Use network teaming. Basically, it's the SLI version of Ethernet. You need a gigabit switch with support for teaming (and you must check the exact mode it supports), and several network cards with teaming support too.

You can also run Cat7 cable and use GG45 connectors, for future 10 Gbit LAN over copper.

crainger
10-01-08, 06:17 PM
I've always wanted to try teaming.

Also try fiber optic, Nekro. It's FTW. :D

nekrosoft13
10-01-08, 07:23 PM
one of these maybe?
http://www.newegg.com/Product/Product.aspx?Item=N82E16833129168

mailman2
10-01-08, 08:54 PM
The problem with gigabit is that the HD becomes the bottleneck.

crainger
10-01-08, 11:53 PM
one of these maybe?
http://www.newegg.com/Product/Product.aspx?Item=N82E16833129168

Yap. Buy it and win!

Monolyth
10-02-08, 11:27 AM
one of these maybe?
http://www.newegg.com/Product/Product.aspx?Item=N82E16833129168

A waste of money atm, IMHO. 10 Gb/s means you would need a file-storage system that can write at nearly 1 GB/s. Sure, some bigger storage systems can hit that, but it's not economically viable for consumers yet AT ALL.

Teaming mostly only helps if you are serving multiple clients at the same time; the only other time you might want it is when you have more than one teamed box. Also keep in mind that the speed gained from teaming only helps if all teamed systems are on the same switch; otherwise you'll get whatever the uplink port speed is, which is almost always 1 Gbps with modern switches.

It makes no sense to team a server that would only be serving 1-2 gigabit clients on the same switch. However, if you are looking at streaming HD content to more than 2 clients, then yes, teaming is the way to go. Also keep in mind there are different levels of teaming, and most consumer-level switches are sparse on which kinds they support. Most of the time you will have to contact the vendor to find out what levels of teaming a switch supports (if any).
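The "teaming only pays off with multiple clients" point can be sketched with a toy model. The 1 Gb/s link speed and the equal-share assumption are illustrative; real link-aggregation hashing distributes flows less evenly:

```python
# Toy model of link aggregation: a single client's flow never exceeds
# one physical link, and total throughput never exceeds the aggregate
# of all teamed links.
def per_client_throughput(n_links, n_clients, link_gbps=1.0):
    aggregate = n_links * link_gbps
    fair_share = aggregate / n_clients
    return min(link_gbps, fair_share)  # one flow is pinned to one link

# A single client gains nothing from a 2-port team (still 1.0 Gb/s)...
print(per_client_throughput(n_links=2, n_clients=1))
# ...but 3 clients each get 2/3 Gb/s instead of 1/3 Gb/s on one port.
print(per_client_throughput(n_links=2, n_clients=3))
print(per_client_throughput(n_links=1, n_clients=3))
```

This is why a teamed server facing one or two gigabit clients is wasted: the per-flow cap, not the aggregate, is the limit.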

nekrosoft13
10-02-08, 12:03 PM
monolyth that was just a joke ;)

Monolyth
10-02-08, 12:09 PM
monolyth that was just a joke ;)

Yeah...sarcasm is often hard to catch in text. :(

Oh well maybe the teaming info will help.

nekrosoft13
10-02-08, 12:34 PM
I might re-image one of my machines. When I reinstalled Windows to fix the Blu-ray read speed issue, something got screwed up with the network.

Now that I know how to fix the Blu-ray issue, I will re-image back to my older installation with the working network. I just hope I still have that image.

Monolyth
10-02-08, 12:47 PM
Do you run Daemon-Tools nekro?

nekrosoft13
10-02-08, 01:15 PM
Not any more; that was causing the Blu-ray read issues.

But now I have a network issue on the new install, so I will try to restore the old image (if I still have it).

The network issue is weird: when I send a large file from one machine to another over gigabit, the transfer freezes at a random spot. Once it freezes I can't close the copy/transfer window, and the computer won't reboot or shut down because that window won't close.

If you know how to fix it, let me know.

glObalist
10-02-08, 02:10 PM
Did you try

shutdown -f -r -t 01

?

nekrosoft13
10-03-08, 06:13 AM
glObalist, that didn't shut it down either.

Well, my network transfer freeze issue is solved. I never found the actual cause, but I restored an image from 9/12/08 and everything is fine.

Tomorrow I might buy two Intel Pro gigabit NICs.

Bman212121
10-03-08, 08:11 AM
Check your logs in the Event Viewer, Nekro. I'm still trying to figure out why my Server 2k3 box refuses to work with my gig NICs. It will start a transfer and then basically hang the client PC until I can finally kill the transfer. The logs on the server are flooded with error messages.

The Intel Pro/1000s are really nice NICs. The ones you linked would work well because they are PCI-E. If it were just a PCI card, I'd be afraid of the interface bottlenecking your connection.

Also, did you check the settings on that Realtek NIC and make sure things like TCP/IP offloading were turned on? Your usage seems quite high. I have a Linksys EG1032 (http://www.bestbuy.com/site/olspage.jsp?skuId=5727975&st=linksys+nic&lp=2&type=product&cp=2&id=1056281313417) in a little AMD Athlon 1800+ 1 GB RAM PC running Server 08, and it can get to about 50% network usage before I run out of CPU. You probably have 10x the CPU in that PC that mine has, so it seems like something isn't set up right.

EDIT: Also look for flow control for both TX & RX, and checksum offloading.

Monolyth
10-03-08, 08:25 AM
Check your logs in the Event Viewer, Nekro. I'm still trying to figure out why my Server 2k3 box refuses to work with my gig NICs. It will start a transfer and then basically hang the client PC until I can finally kill the transfer. The logs on the server are flooded with error messages.

The Intel Pro/1000s are really nice NICs. The ones you linked would work well because they are PCI-E. If it were just a PCI card, I'd be afraid of the interface bottlenecking your connection.

Naw, the older PCI bus caps at 133 MB/s. Even at full gigabit speeds you can't quite fill it. Of course, all PCI slots share the same bus, which is why PCIe is better. :)

One thing I found to be a huge bottleneck on large transfers was simply the storage I was writing to. Remember that as you get further toward the center of the platter (i.e. more used space on the drive), the slower the drive will be. Right around 60-75% full is where the drop-off normally occurs. This isn't as bad with higher-RPM drives, since the faster spindle speed offsets the slower inner tracks. Be sure to run tests on your file storage to ensure it can handle the load; otherwise your system will throttle the connection and you'll get less-than-expected performance.
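The platter effect described above can be ballparked with a toy model: sequential throughput falls roughly linearly from the outer tracks (start of the drive) to the inner tracks (end). The 100 and 50 MB/s endpoints are made-up illustrative numbers, not measurements from any drive in this thread:

```python
# Toy model: HDD sequential speed falls roughly linearly from the
# outer edge of the platter (start of the drive, fastest) to the
# inner tracks (end of the drive, slowest).
def seq_speed(fill_fraction, outer_mbps=100.0, inner_mbps=50.0):
    # fill_fraction: 0.0 = outermost track, 1.0 = innermost track
    return outer_mbps - (outer_mbps - inner_mbps) * fill_fraction

print(seq_speed(0.0))   # full speed at the start of the drive
print(seq_speed(0.7))   # about 65 MB/s around the 70%-full mark
print(seq_speed(1.0))   # roughly half speed at the innermost tracks
```

Under this model, a nearly full drive can't keep up with a saturated gigabit link even when the empty drive could, which is exactly why benchmarking the storage before blaming the network matters.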

Bman212121
10-03-08, 08:29 AM
Naw, the older PCI bus caps at 133 MB/s. Even at full gigabit speeds you can't quite fill it. Of course, all PCI slots share the same bus, which is why PCIe is better. :)

One thing I found to be a huge bottleneck on large transfers was simply the storage I was writing to. Remember that as you get further toward the center of the platter (i.e. more used space on the drive), the slower the drive will be. Right around 60-75% full is where the drop-off normally occurs. This isn't as bad with higher-RPM drives, since the faster spindle speed offsets the slower inner tracks. Be sure to run tests on your file storage to ensure it can handle the load; otherwise your system will throttle the connection and you'll get less-than-expected performance.

Yeah, that is what I was getting at. If he has anything else using the PCI bus, it's going to be saturated in a hurry. 133 MB/s is just the theoretical limit; after overhead, I found it to be exactly 110 MB/s when I had my RAID controller on it. Nothing like a flatline graph at 110 MB/s for 95% of the drive test. :lol:
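For reference, the numbers being thrown around line up like this. This is simple unit arithmetic; the 110 MB/s figure quoted in the thread is a measurement, not something derivable:

```python
# Shared 32-bit/33 MHz PCI bus vs. what a saturated gigabit NIC needs.
PCI_BUS_MBPS = 33.33e6 * 4 / 1e6     # 32 bits @ 33 MHz ~= 133 MB/s, shared
GIGE_LINE_RATE_MBPS = 1e9 / 8 / 1e6  # 1 Gb/s = 125 MB/s

print(f"PCI bus ceiling:   {PCI_BUS_MBPS:.0f} MB/s (shared by all PCI slots)")
print(f"Gigabit line rate: {GIGE_LINE_RATE_MBPS:.0f} MB/s")
# A single saturated gigabit NIC alone eats ~94% of the shared PCI bus,
# so any other PCI device (e.g. a RAID controller) competes for the rest.
print(f"Bus share used:    {GIGE_LINE_RATE_MBPS / PCI_BUS_MBPS:.0%}")
```

So a PCI gigabit NIC "fits" on paper, but only if nothing else on the bus wants bandwidth at the same time, which is the case being made for PCI-E cards above.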

Monolyth
10-03-08, 09:19 AM
Yeah, that 110 MB/s limit sucks, and you'll also have TCP processing overhead, which slows things down further. Overall it's near impossible to hit the theoretical max on gigabit, even with jumbo frames.

I'm so happy that Server 2k3/2k8 & Vista did away with the SCSIPort transfer limitations in the driver. Though I do wish my preferred RAID card vendor (3Ware) would release StorPort drivers. :(

nekrosoft13
10-03-08, 04:51 PM
Check your logs in the Event Viewer, Nekro. I'm still trying to figure out why my Server 2k3 box refuses to work with my gig NICs. It will start a transfer and then basically hang the client PC until I can finally kill the transfer. The logs on the server are flooded with error messages.

The Intel Pro/1000s are really nice NICs. The ones you linked would work well because they are PCI-E. If it were just a PCI card, I'd be afraid of the interface bottlenecking your connection.

Also, did you check the settings on that Realtek NIC and make sure things like TCP/IP offloading were turned on? Your usage seems quite high. I have a Linksys EG1032 (http://www.bestbuy.com/site/olspage.jsp?skuId=5727975&st=linksys+nic&lp=2&type=product&cp=2&id=1056281313417) in a little AMD Athlon 1800+ 1 GB RAM PC running Server 08, and it can get to about 50% network usage before I run out of CPU. You probably have 10x the CPU in that PC that mine has, so it seems like something isn't set up right.

EDIT: Also look for flow control for both TX & RX, and checksum offloading.


Well, I've got the Intel NIC in the server now, and CPU usage dropped by a huge margin. The Realtek is currently disabled.

But before I disabled it, offloading was turned on for every option, and CPU usage was still high. The problem you are describing is very similar to what I had.

wollyka
10-04-08, 06:38 AM
Sorry to hijack, but does the Cat5 cable wiring need changing if I want to switch from my 100 Mbit switch to a 1 Gbps one? Thx

npras42
10-04-08, 07:51 AM
I think Cat5e cable is good for something like 100 metres, and for longer runs than that you'd need Cat6. I don't know if performance tails off as you approach that 100-metre mark, though.

nekrosoft13
10-04-08, 09:22 AM
Sorry to hijack, but does the Cat5 cables wiring needs changing if i want to switch from my 100 mbit switch to 1 Gbps one? thx

Cat5 is not a good idea for gigabit; Cat5e and up should be used.