Originally Posted by zander
The Linux kernel's and the GPU's idea of the GPU's PCI configuration space setup differ; does the behavior change if you boot with pci=noacpi or pci=nommconf?
No, neither pci=noacpi nor pci=nommconf makes the driver work after booting into OS X. I also tried pci=bios and pci=nobios; they don't work, either. In all of those cases, the lspci output for the video card is identical to the broken output I posted earlier (non-virtual ROM mapping and that extra "d3" byte in the hex dump). However, in at least one non-working case the driver was sharing interrupt vector 16 with the usb2 and usb3 devices, so I think we can eliminate that as the cause.
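In case it helps anyone else chasing this, one way to pin down exactly which config-space bytes OS X changes is to diff raw lspci dumps from the working and broken states. This is only a sketch: the 01:00.0 slot address is a placeholder (substitute your card's address from plain lspci), and the two here-docs stand in for real dumps captured with the commented-out commands.

```shell
#!/bin/sh
# Capture the GPU's raw PCI config space in both states, e.g.:
#   lspci -s 01:00.0 -xxx > working.txt   # after the OS X sleep/wake trick
#   lspci -s 01:00.0 -xxx > broken.txt    # after a plain OS X boot
# (01:00.0 is a placeholder address -- find yours with plain `lspci`.)
# The here-docs below are tiny made-up stand-ins for those dumps.
cat > working.txt <<'EOF'
50: 00 00 00 00
EOF
cat > broken.txt <<'EOF'
50: 00 00 00 d3
EOF
# diff exits 1 when the files differ, so mask that for the script's exit code
diff -u working.txt broken.txt || true
```

The diff output then points straight at the offsets (like that stray "d3") that OS X leaves behind, instead of eyeballing two hex walls.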
I can confirm that the OS X dual head/sleep/wakeup/reboot process I described above is what makes the driver work in GNU/Linux. No special kernel boot parameters are required. I've repeated this process multiple times and each time I do it, I get working video in GNU/Linux. It survives reboots and cold starts until I boot into OS X, and then I have to repeat the trick to make video functional again. Weird.
It does *not* work with only the built-in display. If I disconnect all peripherals and displays, boot into OS X, put it to sleep by closing the lid, wake it by opening the lid, and then reboot, video is still corrupt in GNU/Linux. So it appears to have something to do with disabling the built-in display and using an external one. (edit: to clarify, the external display must be connected while the built-in display is off in OS X for the trick to work; once I reboot from OS X into GNU/Linux, both the external and built-in displays work.)
zander, any more ideas? Is there anything I can do on the OS X side to get you pertinent info? There's no equivalent to lspci on OS X, as far as I can tell.