You basically have it here. ATI has been putting tessellation hardware into their GPUs since what feels like the beginning of time; even the 360 has tessellation hardware the PS3 doesn't! Also, neither Shader Model 5.0 nor the lack of DirectX 11 support will matter — DX11 is useless here anyway, since all it did was standardize tessellation units and standardize GPGPU (general-purpose computing on graphics processing units... more on that later). And just like there was nothing stopping the PS3 from displaying 3D, there really wasn't anything stopping the Wii U to begin with either. Right, there are a few things that are needed to help this move along.
1. Consoles don't use PC graphics cards, only GPUs based on their tech; they end up being completely different and essentially more powerful.
2. DirectX is only used in Microsoft products such as Windows and the Xbox brand (which was originally called Project DirectXbox); every other competitor uses its own API to make full use of the hardware, so DX11 and Shader Model 5.0 don't matter. The Wii itself didn't even use shaders, it used TEVs; the Wii U will use shaders, but in its own way.
3. The Wii U could always add new tech. Just because early dev kits were based on the R770 doesn't mean they are now, and even then, they still had features like hardware tessellation. Nintendo has also said the machine can do stereoscopic 3D, which the 4800 series can't do.
Anyway, quick question: would a shared CPU and GPU be considered better than each having its own dedicated chip?
Not at all. Current implementations of this idea from both Intel and AMD are rather slow in graphics performance. The only benefit is faster access to the CPU, memory controller, and northbridge, and even that is undermined by the smaller, lower-power GPU that must be used to prevent heat issues and keep production costs down.
This is the most concise and accurate description of the difference I have seen. The 6670 also supports DirectX 11 and uses a more sophisticated shader model (something that held back the Wii greatly, not just the fact it wasn't HD). But the 4870 can do more processing at once, fill in more pixels quickly, and do things faster. And let's not forget the 4870 will be customized to be smaller, produce less heat, and use less energy (probably around 40nm instead of 55nm), and they may add DirectX 11 support. If you do a straight comparison, the 4870 wins in many benchmarks.
My source:
4870 = 1741 G3D Mark
6670 = 1227 G3D Mark
(Higher Number is better)
http://www.videocard...=Radeon+HD+4870
http://www.videocard...=Radeon+HD+6670
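Just to put those two scores in perspective, here's a quick back-of-the-envelope calculation (plain Python, using only the G3D Mark numbers quoted above) of the relative gap:

```python
# G3D Mark scores quoted above (higher is better)
hd4870 = 1741  # Radeon HD 4870
hd6670 = 1227  # Radeon HD 6670

# Relative advantage of the 4870 over the 6670, as a percentage
gap = (hd4870 / hd6670 - 1) * 100
print(f"The 4870 scores about {gap:.1f}% higher than the 6670")  # about 41.9%
```

That ~42% figure lines up with the 39–44% range the comparison site gives for the 4870 cards below.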
As you can see in the link below, it doesn't matter much how much VRAM Nintendo dedicates to the 4850 or 4870 (either 512MB or 1GB). The 512MB 4850 is most likely, especially if the console also has 1GB of regular RAM.
Ignore the GTX 295 — that's my own video card and I got curious.
http://www.graphicsc...om/page/compare
Also ignore the orange and red bars; you have to include 8 video cards, so I just chose randomly. Focus on the 6670 and up (yellow and green).
EDIT: Can't post the pics. Use the website yourselves. Compare the video cards and it gives you expected performance percentages — it's awesome.
Even the lowest-end 4850 is said to outperform the 6670 by 10.6% (and that's the cheapest possible RV770 card Nintendo could use).
The mid-range RV770 card, the 4870 512MB, outperforms the 6670 by 39.4%.
The upper-mid-range card, the 4850 1GB, outperforms the 6670 by 15.6%.
The highest RV770 card, the 4870 1GB, outperforms the 6670 by 44.3%.
On an off-topic note, this next gen we will have something no console before has had, namely ARM-9 based GPUs, which have been around since 2006 and offered the best performance boost in GPU history... and best of all, they made massively parallel processing viable on a desktop PC, on a GPU at that! I'm interested in seeing what developers have planned for these... (Better AI? Really awesome physics simulation?)