Go Back   nV News Forums > Linux Support Forums > NVIDIA Linux

Old 02-14-07, 12:55 PM   #1
floogy
Registered User
 
Join Date: Aug 2006
Posts: 34
2.6.20-rt5 and libata, sata_nv and ncq patch

Hello,
My Maxtor 6Y200M0 is not very fast. Here are some questions:

1. How do I enable ATA passthrough (Jeff Garzik's work) for libata, or is it enabled by default? How can I test that?
Code:
 sudo hdparm -I /dev/sda
gives some output, but most of the commands don't work. For example, acoustic management is set to 128 although the drive supports 192, and I'm not able to change it:
Code:
~/download/driver/nvidia/hdparm-6.9# sudo ./hdparm -M192 /dev/sda
/dev/sda:
 setting acoustic management to 192
 HDIO_GET_ACOUSTIC failed: Inappropriate ioctl for device
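For what it's worth, that `Inappropriate ioctl` error is typical of this kernel generation: libata only translated a subset of the legacy HDIO ioctls for SCSI-presented disks, so many hdparm sub-commands fail on /dev/sdX even when the drive supports the feature. What the drive itself advertises can still be read from the identify data, e.g.:

```shell
# Show the acoustic-management lines from the drive's IDENTIFY data:
sudo hdparm -I /dev/sda | grep -i -A1 acoustic
```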
2. Has anyone tested the NVIDIA SATA NCQ patch? And how do I apply that patch against 2.6.20-rt5?
http://lwn.net/Articles/203532/
http://linux-ata.org/driver-status.html#nvidia
http://www.kernel.org/pub/linux/kern...adma.patch.bz2
Which newer NVIDIA chipsets support AHCI? Do I have to enable sata_ahci on the Asus A8N-SLI Deluxe mainboard?

http://linux-ata.org/faq.html#ncq
Code:
# echo 31 > /sys/block/sda/device/queue_depth
bash: /sys/block/sda/device/queue_depth: Permission denied
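One possible cause of that `Permission denied` is the shell redirection itself: in `sudo echo 31 > file`, the `>` is performed by the unprivileged shell, not by the elevated command (and if the sysfs file is read-only because the device reports no NCQ support, even root can't write it). A common workaround:

```shell
# Run the whole command, including the redirect, as root:
sudo sh -c 'echo 31 > /sys/block/sda/device/queue_depth'

# Or let tee open the file with elevated rights:
echo 31 | sudo tee /sys/block/sda/device/queue_depth
```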
3. When will libata support smartmontools and vice versa?
http://linux-ata.org/software-status.html#smart
http://smartmontools.sourceforge.net/#testinghelp
SMART seems to work here, but I guess the (older) init scripts are broken, because it fails on startup.
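smartmontools of that era could usually talk to libata disks by forcing the ATA passthrough type with `-d ata` (syntax from smartmontools 5.x; worth double-checking against your version). A sketch, including the matching smartd.conf line for a failing init script:

```shell
# One-off health check through libata's SCSI layer:
sudo smartctl -d ata -H /dev/sda

# Matching daemon config so smartd stops failing at startup:
echo '/dev/sda -d ata -a' | sudo tee -a /etc/smartd.conf
```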

Points 1 and 2 matter for speeding up my Maxtor 6Y200M0 on the Asus A8N-SLI Deluxe. I thought installing 2.6.20 would do that automagically:

Code:
~/download/driver/nvidia/hdparm-6.9$ sudo ./hdparm -tT /dev/sda
/dev/sda:
 Timing cached reads:   1146 MB in  2.00 seconds = 573.36 MB/sec
 Timing buffered disk reads:   90 MB in  3.11 seconds =  28.91 MB/sec
 [...]
 Timing buffered disk reads:  148 MB in  3.04 seconds =  48.74 MB/sec
http://www.nvnews.net/vbulletin/show...73#post1050873

On Windows I got a throughput of 95 MB/sec.
So, how do I set up NCQ on Linux?

EDIT: Hmm, it seems that it is enabled(?), but why is it so slow?

Code:
[   24.879482] ACPI: (supports S0 S1 S3 S4 S5)
[   24.879612] Freeing unused kernel memory: 324k freed
[   24.880247] Time: tsc clocksource has been installed.
[   24.880271] Switched to high resolution mode on CPU 0
[   24.907833] input: AT Translated Set 2 keyboard as /class/input/input0
[   24.911941] Console: switching to colour frame buffer device 128x48
[   24.932342] NFORCE-CK804: IDE controller at PCI slot 0000:00:06.0
[   24.932485] NFORCE-CK804: chipset revision 162
[   24.932557] NFORCE-CK804: not 100% native mode: will probe irqs later
[   24.932663] NFORCE-CK804: 0000:00:06.0 (rev a2) UDMA133 controller
[   24.932767]     ide0: BM-DMA at 0xf000-0xf007, BIOS settings: hda:DMA, hdb:DMA
[   24.932894]     ide1: BM-DMA at 0xf008-0xf00f, BIOS settings: hdc:DMA, hdd:DMA
[   24.933017] Probing IDE interface ide0...
[  106.456740] hda: HL-DT-ST DVDRAM GSA-4167B, ATAPI CD/DVD-ROM drive
[  106.763949] ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
[  106.764076] Probing IDE interface ide1...
[  107.435743] hdc: HL-DT-ST DVDRAM GSA-4167B, ATAPI CD/DVD-ROM drive
[  107.742478] ide1 at 0x170-0x177,0x376 on irq 15
[  107.749033] SCSI subsystem initialized
[  107.752739] libata version 2.00 loaded.
[  107.756446] sata_nv 0000:00:07.0: version 3.2
[  107.756948] ACPI: PCI Interrupt Link [APSI] enabled at IRQ 23
[  107.757052] ACPI: PCI Interrupt 0000:00:07.0[A] -> Link [APSI] -> GSI 23 (level, low) -> IRQ 23
[  107.757201] sata_nv 0000:00:07.0: Using ADMA mode
[  107.757285] PCI: Setting latency timer of device 0000:00:07.0 to 64
[  107.757345] ata1: SATA max UDMA/133 cmd 0xFFFFC20000002480 ctl 0xFFFFC200000024A0 bmdma 0xD800 irq 23
[  107.757530] ata2: SATA max UDMA/133 cmd 0xFFFFC20000002580 ctl 0xFFFFC200000025A0 bmdma 0xD808 irq 23
[  107.757707] scsi0 : sata_nv
[  108.060675] ata1: SATA link down (SStatus 0 SControl 300)
[  108.060767] scsi1 : sata_nv
[  108.363676] ata2: SATA link down (SStatus 0 SControl 300)
[  108.364195] ACPI: PCI Interrupt Link [APSJ] enabled at IRQ 22
[  108.366084] ACPI: PCI Interrupt 0000:00:08.0[A] -> Link [APSJ] -> GSI 22 (level, low) -> IRQ 22
[  108.368100] sata_nv 0000:00:08.0: Using ADMA mode
[  108.370154] PCI: Setting latency timer of device 0000:00:08.0 to 64
[  108.370197] ata3: SATA max UDMA/133 cmd 0xFFFFC20000004480 ctl 0xFFFFC200000044A0 bmdma 0xC400 irq 22
[  108.372472] ata4: SATA max UDMA/133 cmd 0xFFFFC20000004580 ctl 0xFFFFC200000045A0 bmdma 0xC408 irq 22
[  108.374741] scsi2 : sata_nv
[  119.877746] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[  119.882811] ata3.00: ATA-7, max UDMA/133, 398297088 sectors: LBA48
[  119.885305] ata3.00: ata3: dev 0 multi count 1
[  119.890808] ata3.00: configured for UDMA/133
[  119.893374] scsi3 : sata_nv
[  120.198732] ata4: SATA link down (SStatus 0 SControl 300)
[  120.201481] scsi 2:0:0:0: Direct-Access     ATA      Maxtor 6Y200M0   YAR5 PQ: 0 ANSI: 5
[  120.204316] ata3: bounce limit 0xFFFFFFFFFFFFFFFF, segment boundary 0xFFFFFFFF, hw segs 61
[  120.210833] SCSI device sda: 398297088 512-byte hdwr sectors (203928 MB)
[  120.213884] sda: Write Protect is off
[  120.216933] sda: Mode Sense: 00 3a 00 00
[  120.216946] SCSI device sda: write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[  120.220221] SCSI device sda: 398297088 512-byte hdwr sectors (203928 MB)
[  120.223515] sda: Write Protect is off
[  120.226843] sda: Mode Sense: 00 3a 00 00
[  120.226854] SCSI device sda: write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[  120.230371]  sda: sda1 sda2 sda3 < sda5 sda6 sda7 sda8 sda9 sda10 sda11 >
[  120.361311] sd 2:0:0:0: Attached scsi disk sda
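A quick way to see what the driver negotiated is to filter the boot log for the relevant keywords (a generic sketch; the exact message strings vary between kernel versions):

```shell
# Pull out the ADMA/NCQ/link lines from the kernel log:
dmesg | grep -iE 'adma|ncq|sata_nv|link up'
```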
Old 02-14-07, 01:50 PM   #2
Dragoran
Registered User
 
Join Date: May 2004
Posts: 711
Re: 2.6.20-rt5 and libata, sata_nv and ncq patch

Quote:
On Windows I got a throughput of 95 MB/sec .
That can't be right unless you're using RAID.
How did you measure this (with which tool)?
Don't take the burst speed but the average transfer rate.
Old 02-14-07, 02:50 PM   #3
floogy
Re: 2.6.20-rt5 and libata, sata_nv and ncq patch

Here is a thread I opened last year, but it's in German.
http://forum.ubuntuusers.de/topic/55202/

Quote (translated from German):
A benchmark under XP x64 with HD Tach http://www.simplisoftware.com/Public...request=HdTach shows a throughput of 95 MB/sec and a burst speed of 230 MB/sec.
I don't have RAID enabled; I have only one hard drive.

With 2.6.20 the module has the parameter adma set to true (I think that's maybe the whole patch, but does it cover NCQ too?):
Code:
modinfo sata_nv
filename:       /lib/modules/2.6.20-rt5/kernel/drivers/ata/sata_nv.ko
version:        3.2
license:        GPL
description:    low-level driver for NVIDIA nForce SATA controller
author:         NVIDIA
srcversion:     551FB8A96CE751C52091200
alias:          pci:v000010DEd*sv*sd*bc01sc04i*
alias:          pci:v000010DEd*sv*sd*bc01sc01i*
alias:          pci:v000010DEd000003F7sv*sd*bc*sc*i*
alias:          pci:v000010DEd000003F6sv*sd*bc*sc*i*
alias:          pci:v000010DEd000003E7sv*sd*bc*sc*i*
alias:          pci:v000010DEd0000037Fsv*sd*bc*sc*i*
alias:          pci:v000010DEd0000037Esv*sd*bc*sc*i*
alias:          pci:v000010DEd00000267sv*sd*bc*sc*i*
alias:          pci:v000010DEd00000266sv*sd*bc*sc*i*
alias:          pci:v000010DEd0000003Esv*sd*bc*sc*i*
alias:          pci:v000010DEd00000036sv*sd*bc*sc*i*
alias:          pci:v000010DEd00000055sv*sd*bc*sc*i*
alias:          pci:v000010DEd00000054sv*sd*bc*sc*i*
alias:          pci:v000010DEd000000EEsv*sd*bc*sc*i*
alias:          pci:v000010DEd000000E3sv*sd*bc*sc*i*
alias:          pci:v000010DEd0000008Esv*sd*bc*sc*i*
depends:        libata
vermagic:       2.6.20-rt5 SMP preempt mod_unload
parm:           adma:Enable use of ADMA (Default: true) (bool)
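Given that `adma` parameter, the feature can be toggled at module load time to compare throughput with and without ADMA (module and option names taken from the modinfo output above; unloading only works if the root filesystem isn't on sata_nv, otherwise a kernel command-line option would be needed instead):

```shell
# Reload the driver with ADMA disabled and re-run hdparm -tT to compare:
sudo modprobe -r sata_nv
sudo modprobe sata_nv adma=0

# Or make the setting persistent across reboots:
echo 'options sata_nv adma=0' | sudo tee /etc/modprobe.d/sata_nv.conf
```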

Benchmark:
Code:
  
dbench -t60 -D /tmp 50procs
dbench version 3.04 - Copyright Andrew Tridgell 1999-2004

Running for 60 seconds with load '/usr/share/dbench/client.txt' and minimum warmup 12 secs
50 clients started
  50        45    85.17 MB/sec  warmup   1 sec
  50        64    63.14 MB/sec  warmup   2 sec
  50        78    50.74 MB/sec  warmup   3 sec
  50       109    50.39 MB/sec  warmup   4 sec
  50       122    44.63 MB/sec  warmup   5 sec
  50       142    43.07 MB/sec  warmup   6 sec
  50       163    40.97 MB/sec  warmup   7 sec
  50       174    38.16 MB/sec  warmup   8 sec
  50       185    36.01 MB/sec  warmup   9 sec
  50       188    32.85 MB/sec  warmup  10 sec
  50       190    30.23 MB/sec  warmup  11 sec
  50       190    27.73 MB/sec  warmup  12 sec
  50       190    25.61 MB/sec  warmup  13 sec
  50       190    23.79 MB/sec  warmup  14 sec
  50       190    22.22 MB/sec  warmup  15 sec
  50       198    21.54 MB/sec  warmup  16 sec
  50       238    23.23 MB/sec  warmup  17 sec
  50       276    23.10 MB/sec  warmup  18 sec
  50       286    22.35 MB/sec  warmup  19 sec
  50       287    21.24 MB/sec  warmup  20 sec
  50       310    20.74 MB/sec  warmup  21 sec
  50       322    20.19 MB/sec  warmup  22 sec
  50       323    19.32 MB/sec  warmup  23 sec
  50       375    19.78 MB/sec  warmup  24 sec
  50       399    19.40 MB/sec  warmup  25 sec
  50       556    20.04 MB/sec  warmup  26 sec
  50       839    22.27 MB/sec  warmup  27 sec
  50      1040    23.55 MB/sec  warmup  28 sec
  50      1252    24.69 MB/sec  warmup  29 sec
  50      1491    35.49 MB/sec  execute   1 sec
  50      1637    42.56 MB/sec  execute   2 sec
  50      1672    35.06 MB/sec  execute   3 sec
[...]
  50      5851    44.45 MB/sec  execute  32 sec
  50      6611    48.65 MB/sec  execute  33 sec
  50      7340    53.28 MB/sec  execute  34 sec
  50      7634    57.25 MB/sec  execute  35 sec
  50      7637    55.67 MB/sec  execute  36 sec
  50      7647    54.26 MB/sec  execute  37 sec
  50      7650    52.84 MB/sec  execute  38 sec
  50      7673    51.63 MB/sec  execute  39 sec
  50      7724    50.72 MB/sec  execute  40 sec
[...]
  50      8236    28.26 MB/sec  cleanup  77 sec
  50      8236    27.89 MB/sec  cleanup  78 sec
  50      8236    27.75 MB/sec  cleanup  78 sec

Throughput 36.2631 MB/sec 50 procs
A second try:
Code:
 dbench -t60 -D /tmp 50procs
dbench version 3.04 - Copyright Andrew Tridgell 1999-2004

Running for 60 seconds with load '/usr/share/dbench/client.txt' and minimum warmup 12 secs
50 clients started
  50       885   206.61 MB/sec  warmup   1 sec
  50      1793   231.39 MB/sec  warmup   2 sec
  50      2049   233.44 MB/sec  warmup   3 sec
  50      2727   229.48 MB/sec  warmup   4 sec
  50      2800   193.57 MB/sec  warmup   5 sec
  50      2813   167.69 MB/sec  warmup   6 sec
  50      2824   147.06 MB/sec  warmup   7 sec
  50      2902   134.93 MB/sec  warmup   8 sec
  50      3690   144.11 MB/sec  warmup  10 sec
  50      3766   134.24 MB/sec  warmup  11 sec
  50      3828   125.13 MB/sec  warmup  12 sec
  50      4205    29.79 MB/sec  execute   1 sec
  50      4227    27.58 MB/sec  execute   2 sec
  50      4333    32.65 MB/sec  execute   3 sec
  50      4414    37.84 MB/sec  execute   4 sec
  50      4446    34.13 MB/sec  execute   5 sec
[...]
  50      9768    26.49 MB/sec  cleanup  71 sec
  50      9768    26.12 MB/sec  cleanup  72 sec
  50      9768    25.77 MB/sec  cleanup  73 sec
  50      9768    25.42 MB/sec  cleanup  74 sec
  50      9768    25.08 MB/sec  cleanup  75 sec
  50      9768    24.75 MB/sec  cleanup  76 sec
  50      9768    24.64 MB/sec  cleanup  76 sec

Throughput 31.3449 MB/sec 50 procs

Last year it turned out that the yacy service wasn't configured correctly, which led to heavy disk usage and therefore poor benchmark performance.
Old 02-14-07, 03:11 PM   #4
Dragoran
Re: 2.6.20-rt5 and libata, sata_nv and ncq patch

Quote:
Here is a thread I opened last year, but it's in german.
http://forum.ubuntuusers.de/topic/55202/
No problem, I am from Austria.
It seems you are running the benchmarks while background apps are doing disk I/O (is yacy the only one?).
What does
Quote:
cat /sys/block/sda/device/queue_depth
show? (If it's not 0 (it should be 31 by default), it means that NCQ is enabled and works.)
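That check can be wrapped in a small script (a sketch; it reads any depth greater than 1 as NCQ active, since a depth of 1 means commands are issued one at a time):

```shell
# Read the current queue depth for sda and interpret it:
qd=$(cat /sys/block/sda/device/queue_depth)
if [ "$qd" -gt 1 ]; then
    echo "NCQ active (queue depth $qd)"
else
    echo "no NCQ (queue depth $qd)"
fi
```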
Old 02-14-07, 04:01 PM   #5
floogy
Re: 2.6.20-rt5 and libata, sata_nv and ncq patch

Hi,

It seems that you're right: HD Tach claimed to measure 95 MB/sec average read, but HD Tune measures something like this:
Code:
HD Tune: Maxtor 6Y200M0 Benchmark

Transfer Rate Minimum : 28.1 MB/sec
Transfer Rate Maximum : 55.2 MB/sec
Transfer Rate Average : 44.5 MB/sec
Access Time           : 20.0 ms
Burst Rate            : 102.2 MB/sec
CPU Usage             : 4.0%
It also shows NCQ greyed out for this drive, so I think the mainboard or the hard drive doesn't support NCQ.

Code:
$ cat /sys/block/sda/device/queue_depth
1
If 0 means disabled, might 1 mean that it's not implemented?

Several processes may be running in the background during the benchmarks, but yacy is disabled.

I saw some benchmarks with 75 MB/sec on the net and thought I should reach that kind of throughput (== Transfer Rate Average?) under kernel 2.6.20.

The Windows Transfer Rate Average is still about 15 MB/sec higher than on Linux though...
Old 02-14-07, 04:26 PM   #6
chunkey
#!/?*
 
Join Date: Oct 2004
Posts: 662
Re: 2.6.20-rt5 and libata, sata_nv and ncq patch

Quote:
Originally Posted by floogy
Hi,

It seems, that you're right: hd tach claimed to mesure 95 MB average read, but HD Tune measures something like this:
Code:
HD Tune: Maxtor 6Y200M0 Benchmark

Transfer Rate Minimum : 28.1 MB/sec
Transfer Rate Maximum : 55.2 MB/sec
Transfer Rate Average : 44.5 MB/sec
Access Time           : 20.0 ms
Burst Rate            : 102.2 MB/sec
CPU Usage             : 4.0%
It also shows NCQ greyed out for this drive.
So, I think the mainboard or the harddrive doesn't support NCQ.
Code:
$ cat /sys/block/sda/device/queue_depth
1
0 means disabled, then 1 might say, that it's not implemented?
Sorry, your HDD doesn't support NCQ. But NCQ does very little for linear reads/writes anyway, since the requests are already in a good order.

Quote:
I saw some benchmarks with 75 MB/sec on the net, and thought, that I should reach such throughput (==Transfer Rate Average?) under kernel 2.6.20 .

The Transfer Rate Average is 15 MB/sec higher than on linux though...
50 MB/sec is fine for a single HDD.


Just for the record:
Code:
dbench -t60 -D /tmp 50procs
dbench version 3.04 - Copyright Andrew Tridgell 1999-2004

Running for 60 seconds with load '/usr/share/dbench/client.txt' and minimum warmup 12 secs
50 clients started
  50       672   351.60 MB/sec  warmup   1 sec
..
  50     29140   489.75 MB/sec  warmup  18 sec
  50     33450   564.12 MB/sec  execute   1 sec
  50     35546   562.91 MB/sec  execute   2 sec
...
  50    177820   670.60 MB/sec  execute  59 sec
  50    180429   670.92 MB/sec  execute  60 sec
  50    183001   670.93 MB/sec  cleanup  61 sec
  50    183001   664.72 MB/sec  cleanup  61 sec

Throughput 670.942 MB/sec 50 procs
RAID is great!
Old 02-14-07, 05:12 PM   #7
floogy
Re: 2.6.20-rt5 and libata, sata_nv and ncq patch

dbench in single user mode (without so many background processes):

Code:
dbench version 3.04 - Copyright Andrew Tridgell 1999-2004

Running for 60 seconds with load '/usr/share/dbench/client.txt' and minimum warmup 12 secs
50 clients started
  50       448   255.52 MB/sec  warmup   1 sec
  50      1073   257.03 MB/sec  warmup   2 sec
 ...
  50      4365   150.47 MB/sec  warmup  11 sec
  50      4405   140.25 MB/sec  warmup  12 sec
  50      4597    24.95 MB/sec  execute   1 sec
  50      4852    62.53 MB/sec  execute   2 sec
  50      4908    50.77 MB/sec  execute   3 sec
 ...
  50     13642    81.52 MB/sec  execute  34 sec
  50     14558    86.04 MB/sec  execute  35 sec
  50     15417    90.10 MB/sec  execute  36 sec
  ...
  50     27510   116.09 MB/sec  execute  56 sec
  50     28034   117.13 MB/sec  execute  57 sec
  50     28573   117.71 MB/sec  execute  58 sec
  50     29222   118.54 MB/sec  execute  59 sec
  50     29706   118.58 MB/sec  execute  60 sec
  50     29971   117.64 MB/sec  cleanup  61 sec
  50     29971   115.75 MB/sec  cleanup  62 sec
  50     29971   113.91 MB/sec  cleanup  63 sec
  50     29971   112.14 MB/sec  cleanup  64 sec

Throughput 117.654 MB/sec 50 procs

That looks really fine to me :-)