Now that ATI and AMD are one, Nvidia is reportedly working on a CPU as well. A lot of people are saying they're building a "GPU on a chip," but I seriously doubt it. Many seem to be wagering this will target the embedded or low-power market, just like when VIA acquired Cyrix. But that would be extremely narrow-minded of AMD/ATI and Nvidia, even considering the boom in embedded devices and the increasing horsepower needed by set-top boxes.
I'm much more inclined to think that Nvidia is working with Intel (which would be a huge surprise, given how they've fought over SLI on nForce vs. Intel chipsets) to compete with AMD and ATI, who are working on bringing vector processing units to CPUs.
I bring this topic up incessantly, but I had to mention it since it seems my predictions are actually coming true. I don't necessarily think this will put GPUs on CPUs, although that might be a by-product. Instead, I think it will allow for increased parallelization, faster MMX instructions (or 3DNow!, if that's your taste), and a move to put the work of those physics accelerator cards back on the CPU die.
We may see a CPU with four cores, each with integer and floating-point units, plus a handful of separate vector processors (à la IBM's Cell) alongside them. Given how poorly the Cell reportedly handles many types of algorithms, this could give developers the best of both worlds: rapidly branching, conditional logic runs on the integer units with their branch prediction and short instruction pipes, while long, grinding algorithms go to the vector processors. This split has already worked for projects like Folding@Home, and could work for many similar algorithms.
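To make the split concrete, here's a toy sketch (both workloads are hypothetical examples of mine, not anything AMD or Nvidia has announced). The first function has unpredictable, data-dependent branches, exactly what a conventional core's branch predictor is built for; the second is a long, uniform arithmetic loop with no branching on the data, the kind of work a vector (SIMD) unit can chew through many elements at a time.

```python
def branchy_collatz_steps(n):
    """Branch-heavy, data-dependent control flow -- suits an
    integer core with branch prediction and a short pipeline."""
    steps = 0
    while n != 1:
        if n % 2 == 0:
            n //= 2
        else:
            n = 3 * n + 1
        steps += 1
    return steps

def grinding_dot(a, b):
    """Long, uniform arithmetic with no data-dependent branches --
    the kind of loop a vector unit applies one instruction to
    many elements of at once."""
    return sum(x * y for x, y in zip(a, b))

# On the hybrid chip described above, the scheduler (or the
# developer) would route the first kind of work to the integer
# cores and the second kind to the vector processors.
print(branchy_collatz_steps(27))
print(grinding_dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
```

The point isn't the specific functions, it's that neither kind of core is good at the other's job, which is why having both on one die beats a pile of identical cores for mixed workloads.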
Or AMD/ATI and Nvidia could just slap a dumb embedded GPU on the die next to the CPU. But they'd be passing up something much cooler.