Originally Posted by Vorgus
It's not just bits of storage, bits of RAM. It's data path width. CPUs are hitting speed limits, so now they are spawning more cores: 2, 4, 6, 12... Making the data paths wider will help a single core crunch more data without a clock increase. Some things just don't gain much by dividing across cores.
You have to think about it this way: what everyday data types are we going to deal with on a regular basis that need more than 64 bits?
The only ones I can think of are already handled just fine by existing GPUs. In fact, GPUs can process that kind of data in ways the CPU can't and probably shouldn't. On top of that, since the GPU is basically its own subsystem, we can upgrade it independently of the rest of the system, even when its core architecture changes massively, which effectively makes owning and upgrading computers cheaper. The same can't be said for the CPU.
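And for the rare case where you genuinely need integers wider than 64 bits, software already handles it on 64-bit hardware. A quick sketch using Python's built-in arbitrary-precision ints (the values here are just made-up examples):

```python
# Python ints are arbitrary-precision: math on values wider than 64 bits
# "just works", computed internally with fixed-width limbs. No wider
# hardware data path is required.

a = 2**100 + 12345   # a value well beyond the 64-bit range
b = 2**90 + 67890
product = a * b      # exact result, no overflow

print(product.bit_length())  # far more than 64 bits
```

So widening the general-purpose data path past 64 bits buys very little for everyday workloads; the niche cases are already covered in software or by the GPU.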