Originally Posted by newparad1gm
I think there was a case that at Surround resolutions, there was a sizable difference between 8x8 PCI-E lanes vs 16x16 for dual GPU systems, but that might have been at the 2.0 spec.
That may be true in some cases. However:
During my Tri-SLI testing the cards usually ran at x8 on the PCIe 2.x spec. Even extremely demanding tests showed no difference between x16/x8/x8, x8/x8/x8, or x16/x16 (when using only two cards). There might be a few frames at stake, but nothing major.
x8/x8 PCIe 3.0, like on Haswell for example, is easily enough for a two-card solution.
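For anyone curious about the raw numbers: x8 PCIe 3.0 offers roughly the same usable bandwidth as x16 PCIe 2.0, which you can sanity-check with a quick back-of-envelope calculation. The per-lane rates and encoding overheads below come from the PCIe specs; the helper function is just my own illustration.

```python
# Usable per-lane bandwidth, in MB/s (one direction):
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding   -> 500 MB/s
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~984.6 MB/s
PER_LANE_MBPS = {
    "2.0": 5e9 * (8 / 10) / 8 / 1e6,     # 500.0
    "3.0": 8e9 * (128 / 130) / 8 / 1e6,  # ~984.6
}

def slot_bandwidth_gbps(gen: str, lanes: int) -> float:
    """Usable one-direction slot bandwidth in GB/s."""
    return PER_LANE_MBPS[gen] * lanes / 1000

print(f"x16 PCIe 2.0: {slot_bandwidth_gbps('2.0', 16):.1f} GB/s")  # 8.0 GB/s
print(f"x8  PCIe 3.0: {slot_bandwidth_gbps('3.0', 8):.1f} GB/s")   # 7.9 GB/s
```

So an x8 3.0 slot (~7.9 GB/s) sits within about 2% of an x16 2.0 slot (8.0 GB/s), which is why x8/x8 on 3.0 holds up fine for two cards.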
AFAIK, only Sandy Bridge-E was "blocked" by nVidia for PCIe 3.0, because that CPU went through testing/certification before the PCIe 3.0 specification was finalized.