Originally Posted by FlakMagnet
I still don't see how they can eliminate the latency involved. A button press on your controller or keyboard still needs to be sent to game servers before it can be processed, while on a console or PC that same button press is registered immediately.
Exactly. Latency is probably the biggest hurdle NVIDIA needs to overcome. A lot also depends on how much data has to be sent, and NVIDIA could even offer customers a tiered pricing system.
One way to reduce the amount of data sent from the cloud to a gaming device would be to compress it. NVIDIA is very good at developing complex algorithms for compressing different kinds of data - 3D, video, HPC workloads, etc.
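Just to put a rough number on how much repetitive frame data can shrink, here's a toy sketch using Python's general-purpose zlib. To be clear, a real streaming service would use a dedicated video codec rather than zlib, and the flat-color frame below is a made-up stand-in for rendered imagery:

```python
import zlib

# Hypothetical 1080p frame buffer: 4 bytes per pixel (RGBA).
# A flat color is a best case for compression; real frames vary more.
width, height = 1920, 1080
frame = bytes([30, 60, 90, 255]) * (width * height)

compressed = zlib.compress(frame, level=6)
ratio = len(frame) / len(compressed)

print(f"raw: {len(frame)} bytes, compressed: {len(compressed)} bytes")
print(f"compression ratio: {ratio:.1f}x")
```

Highly repetitive data like this compresses dramatically; a busy game frame won't shrink nearly as much, which is exactly why purpose-built codecs matter.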
A second method could offer tremendous savings, but it's a little harder to code for because it involves calculating the net change from frame to frame: instead of sending all of the pixel data in the frame buffer for every rendered frame, send only the pixels that changed. Some developers already code using net-change techniques. It's similar in spirit to occlusion culling - there's no need to process objects that are occluded (hidden) behind other objects.
This type of programming is a perfect fit for processors with a lot of cores, like a GPU. It's still very difficult to develop applications that use all of the cores on a CPU - most of today's apps are still single-threaded - but Intel has been bringing new tools to market to ease the burden for developers.
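To illustrate the many-cores pattern, here's a toy sketch that splits a frame into tiles and maps a per-tile "kernel" over a worker pool. I'm using a thread pool purely for simplicity; all names and numbers are invented, and on a GPU the same structure runs per pixel across thousands of cores at once:

```python
from concurrent.futures import ThreadPoolExecutor

def brighten_tile(tile):
    """Per-tile kernel: clamp-brighten every pixel value."""
    return [min(p + 50, 255) for p in tile]

# Stand-in frame buffer split into independent tiles. Because the
# tiles don't share state, they can be processed in any order, in
# parallel - the property that makes GPU-style scaling possible.
frame = list(range(256))
tiles = [frame[i:i + 64] for i in range(0, len(frame), 64)]

with ThreadPoolExecutor(max_workers=4) as pool:
    result = [px for tile in pool.map(brighten_tile, tiles) for px in tile]

print(result[:4])  # [50, 51, 52, 53]
```

The hard part in real applications is finding work that decomposes this cleanly - which is why most apps stay single-threaded.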
But the bandwidth needed to process 3D data, with all the driver and in-game graphics bells and whistles enabled on a PC, is staggering - hundreds of gigabytes per second, even spread across parallel hardware. On GRID, though, that processing will be done on their high-performance supercomputers in the cloud; only the finished frames have to travel to the player.
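Some back-of-the-envelope arithmetic (my own assumptions, not NVIDIA's figures) shows why the split matters - the stream the client receives is a tiny fraction of what the server-side GPU chews through internally:

```python
# Rough numbers: client-side stream vs. server-side GPU bandwidth.
width, height, bytes_per_pixel, fps = 1920, 1080, 4, 60

# Uncompressed 1080p60 stream the client would need with NO compression.
client_bytes_per_sec = width * height * bytes_per_pixel * fps
print(f"raw 1080p60 stream: {client_bytes_per_sec / 1e9:.2f} GB/s")

# A high-end GPU of this era moves memory far faster than that -
# roughly 192 GB/s for a GTX 680-class card (approximate spec).
gpu_memory_bw = 192e9
print(f"GPU moves ~{gpu_memory_bw / client_bytes_per_sec:.0f}x more internally")
```

So even before compression, the outbound stream is around half a gigabyte per second versus hundreds internally - and delta encoding plus a real video codec would cut that outbound figure by orders of magnitude more.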
I'm only throwing out ideas here, since, to my knowledge, NVIDIA hasn't briefed websites on the technology behind GRID. But I'd love to test such a system when it becomes available!