Originally Posted by Rollo
I agree Joe Public may not use half the stuff, but wouldn't you want it available even if you didn't?
Why would I want something available that I never use?
Anyway, out of your long list of things, there are only a few that I care about. Eyefinity and video editing/rendering.
In fact, Eyefinity was the sole reason that I went with ATI this time around. If Nvidia had been a little quicker to the party, I might not have.
And I used to think that Nvidia had the market cornered for video/GPGPU, but it seems as if ATI is making some headway in that area, as well.
Originally Posted by PCPerspective
After reviewing all the benchmark data as well as the image quality screenshots, both GPGPU technologies had their pros and cons that could affect a consumer's decision to purchase hardware and software that utilizes ATI Stream and/or CUDA. While Stream's transcoding times were slightly better than CUDA in most of our performance tests, CUDA seemed to produce a higher quality image that evened things out a bit. Stream also seemed to be more efficient in using less of the CPU's resources for transcoding while also producing fast transcoding times.
I agree that Nvidia currently has the best feature set (I'll give them that on the more flexible 3D), but you exaggerate this to the point of being blatantly misleading. ATI can do just about everything that Nvidia can. They might not be quite as flexible or user-friendly at this point, but they can do it. The only thing of significance that you can't do on ATI by comparison is run PhysX. And PhysX, being a proprietary technology, is never going to catch on with the majority of developers, especially with how well ATI is doing with the 4xxx and 5xxx lines.
I think that Anandtech effectively summarizes the problem with proprietary technology like PhysX and CUDA in its review of CUDA vs. ATI Stream GPU computing:
Originally Posted by Anandtech
Meanwhile NVIDIA, and now AMD, want to push their proprietary GPU computing technologies as a reason end users should want their hardware. At best, Brook+ and CUDA as language technologies are stop gap short term solutions. Both will fail or fall out of use as standards replace them. Developers know this and will simply not adopt the technology if it doesn't provide the return on investment they need, and 9 times out of 10, in the consumer space, it just won't make sense to develop either a solution for only one IHV's GPUs or to develop the same application twice using two different languages and techniques.
In the consumer space, the real advantage that CUDA has over ATI Stream is PhysX. But this is barely even a real advantage at this point as PhysX suffers from the same fundamental problem that CUDA does: it only targets one hardware vendor's products. While there are a handful of PhysX titles out there, they aren't that compelling at this point either. We will have to start seeing some real innovation in physics with PhysX before it becomes a selling point.
That's the long and short of it. So waving pom-poms about CUDA or PhysX has very little influence with me, and it should have little influence on anyone's purchasing decision at this point. Both are extremely limited proprietary solutions. They will NEVER gain significant developer attention and will eventually be replaced by standards that work on all hardware.