What's the catch?
Okay, so help me understand this:
If a top-end $1000 Intel CPU can only do, say, 100 GFLOPS, but a $500 NVIDIA or AMD graphics card can do >500 GFLOPS in double precision and >1 TFLOP in single precision, why is anything computationally expensive (compression, physics simulation, CADD, etc.) still done on a CPU? What's the catch? Why haven't GPUs basically taken over, with CPUs doing minimal work?
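(For anyone wondering the same thing: one commonly cited catch, not something the specs above tell you, is that peak FLOPS only help the part of a workload that is actually parallel. If the rest is serial branchy code, Amdahl's law caps the overall speedup no matter how fast the GPU is. A minimal sketch of that arithmetic:)

```python
# Amdahl's law: overall speedup when only a fraction p of the work
# can be offloaded to a device that runs that fraction s times faster.
# The remaining (1 - p) still runs at the old (CPU) speed.
def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Even a 10x-faster GPU barely helps if half the work stays serial:
print(amdahl_speedup(0.5, 10.0))   # ~1.82x overall

# With 95% of the work parallelizable it's better, but still far from 10x:
print(amdahl_speedup(0.95, 10.0))  # ~6.9x overall
```

(The numbers are illustrative, not benchmarks; in practice PCIe transfer time and kernel launch overhead eat into the parallel fraction too.)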
Intel Core i7 980X Extreme Edition @ 4.00GHz, 2.94GHz uncore| Corsair H60 | ASUS P6X58D-E | Zotac GTX 580 @ 872/1744/2145|Western Digital Caviar Black 1.5TB (Games/Steam)| Seagate 1.5TB (media/misc.) |Seagate 1 TB (OS/Main) | 24 GB G.Skill RipJaws X 1.5V DDR3-1333 | Creative X-Fi Titanium | Sennheiser HD 202 Headset | Dual Gigabit Realtek LAN | Antec True Power Quattro 1200W | Acer G235Hbmbd 23" LCD Display | Logitech K320 Wireless Keyboard | Logitech Performance MX Mouse