Maybe this little blurb from nVidia yesterday was a not-so-subtle response:
Tamasi went on to explain how nVidia takes advantage of DDR2:
"There are fundamental differences between DDR1 and DDR2, and if you want to make good use of DDR2, you have to design around longer burst lengths on the memory, because that's how they're going faster. So the entire memory subsystem has to be designed to handle that. You might be able to hook up a chip that's built for DDR1 memory to DDR2 memory, and even run it at a high frequency, but you get horrible utilization out of the memory, because that DDR1 memory subsystem is all built around Burst-Length 2 accesses. So you'll get half the efficiency accessing the Burst-Length 4 memory device."
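Tamasi's "half the efficiency" claim follows directly from the burst-length arithmetic. A toy model (mine, not nVidia's) makes it concrete: if a controller built around Burst-Length 2 issues 2-word requests, but the DDR2 device always delivers a full 4-word burst, half of every transfer is thrown away.

```python
def utilization(words_per_request: int, device_burst_length: int) -> float:
    """Fraction of transferred words the controller actually uses,
    assuming each request triggers one full device burst."""
    return min(words_per_request, device_burst_length) / device_burst_length

# BL2-style controller on BL2 (DDR1) memory: every word fetched is used.
print(utilization(2, 2))  # 1.0

# The same BL2 controller on BL4 (DDR2) memory: half of each burst wasted.
print(utilization(2, 4))  # 0.5
```

This is exactly the 2:1 efficiency gap the quote describes, which is why the whole memory subsystem has to be redesigned around the longer bursts rather than simply clocking a DDR1-era design higher.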