


Wii U's Memory Bandwidth, GPU More Powerful Than We Thought?


  • This topic is locked
79 replies to this topic

#21 krizzx

krizzx

    Green Koopa Troopa

  • Members
  • 43 posts

Posted 24 February 2014 - 09:12 AM

That's not more than what I thought. That is exactly what I thought it was from the beginning.

 

The RAM had to be dual channel. It would make no sense for it not to be, given that it was strictly two-chip access and the technology was so common that there would be no reason not to implement it, effectively making the bandwidth much higher than the PS3/360's without getting into the eDRAM at all.
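For reference, the standard channel arithmetic looks like this. The DDR3-1600 and 64-bit-channel figures below are illustrative assumptions, not confirmed Wii U specs:

    # Peak DRAM bandwidth = channels x bus width (bytes) x transfer rate.
    # DDR3-1600 (1.6e9 transfers/s) on a 64-bit channel is an assumed example.
    def dram_bandwidth_gbs(channels, bus_width_bits, transfers_per_sec):
        return channels * (bus_width_bits / 8) * transfers_per_sec / 1e9

    print(dram_bandwidth_gbs(1, 64, 1.6e9))  # single channel: 12.8 GB/s
    print(dram_bandwidth_gbs(2, 64, 1.6e9))  # dual channel:   25.6 GB/s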

 

Also, the CPU actually outperforms the PS3/360 CPUs in a lot of areas. Thing is, the CPU is so different that no code made for one will work well on the other. You can't put Wii U code on the 360/PS3 and expect optimal results. Likewise, code from the last-gen consoles can't be expected to run optimally on the Wii U.

 

The problem with ports is that most of them are made with the PS3/360 as the lead platform, and those versions get the highest budget. The Wii U just gets a cheap port of the PS3/360 game that isn't optimized for it, so ports generally come out playing better on the PS3/360. No dev is going to spend the money to make a ground-up version for the Wii U.

 

The console is significantly more powerful than the last-gen consoles when all is said and done, though.

 

I'm waiting to see what Project C.A.R.S. has to offer.


Edited by krizzx, 24 February 2014 - 09:18 AM.


#22 Raiden

Raiden

    wall crusher

  • Members
  • 4,738 posts

Posted 24 February 2014 - 11:16 AM

Project Cars & Fast Racing Neo should show off the Wii U's power, if all is to be believed. I maintain that, power-wise, it's the PS2 of this gen.



#23 MorbidGod

MorbidGod

    Hammer Bro.

  • Members
  • 1,717 posts

Posted 24 February 2014 - 01:35 PM

Project Cars & Fast Racing Neo should show off the Wii U's power, if all is to be believed. I maintain that, power-wise, it's the PS2 of this gen.


Is Project Cars ever coming out?
Whovian12 -- Nintendo Network ID.

#24 TheUltimateWaddleDee

TheUltimateWaddleDee

    That Guy

  • Members
  • 2,077 posts
  • NNID:UltimateWaddleD
  • Fandom:
    Nintendo, Metal Gear, and WADDLE DEE!

Posted 24 February 2014 - 01:50 PM

Seriously, this should be in the hardware section. This has nothing to do with existing or upcoming software, and everything to do with hardware.

I will not die until I achieve something. Even though the ordeal is high, I never give in. Therefore, I die with no regrets~Ikaruga Prologue


#25 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 24 February 2014 - 01:52 PM

krizzx, on 24 Feb 2014 - 12:12 PM, said:
That's not more than what I thought. That is exactly what I thought it was from the beginning.

The RAM had to be dual channel. It would make no sense for it not to be, given that it was strictly two-chip access and the technology was so common that there would be no reason not to implement it, effectively making the bandwidth much higher than the PS3/360's without getting into the eDRAM at all.

Also, the CPU actually outperforms the PS3/360 CPUs in a lot of areas. Thing is, the CPU is so different that no code made for one will work well on the other. You can't put Wii U code on the 360/PS3 and expect optimal results. Likewise, code from the last-gen consoles can't be expected to run optimally on the Wii U.

The problem with ports is that most of them are made with the PS3/360 as the lead platform, and those versions get the highest budget. The Wii U just gets a cheap port of the PS3/360 game that isn't optimized for it, so ports generally come out playing better on the PS3/360. No dev is going to spend the money to make a ground-up version for the Wii U.

The console is significantly more powerful than the last-gen consoles when all is said and done, though.

I'm waiting to see what Project C.A.R.S. has to offer.

That custom 750 is definitely surprising a lot of people. I'd be willing to say it beats the 360 CPU pretty decently in most areas, except where that AltiVec comes into play... Honestly, I don't know how those paired singles are covering that FLOP ground. It's not what I expected when I found out there was no modern SIMD added.

I would love to see Espresso put through a series of benchmarks, particularly some tailored to put its paired singles through their paces.
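A back-of-envelope way to frame the paired-singles question. The clocks and SIMD widths here are the commonly cited figures, not official specs:

    # Peak GFLOPS = cores x clock (GHz) x SIMD lanes x 2 (fused multiply-add).
    # Espresso at ~1.243 GHz with 2-wide paired singles, and Xenon at 3.2 GHz
    # with 4-wide VMX128, are commonly cited figures, not official specs.
    def peak_gflops(cores, clock_ghz, simd_width):
        return cores * clock_ghz * simd_width * 2

    print(peak_gflops(3, 1.243, 2))  # Espresso, paired singles: ~14.9 GFLOPS
    print(peak_gflops(3, 3.200, 4))  # Xenon, VMX128:            ~76.8 GFLOPS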


 


#26 Raiden

Raiden

    wall crusher

  • Members
  • 4,738 posts

Posted 24 February 2014 - 02:52 PM

Is Project Cars ever coming out?

Later this year, Q4.



#27 megafenix

megafenix

    Blooper

  • Members
  • 169 posts

Posted 27 February 2014 - 10:44 AM

Interesting, and the article rests on good foundations, especially the part about NEC: it made the 360's eDRAM, so it also made the Wii U's eDRAM. The main differences are that the Wii U has 8 macros instead of the 360's 4, and that the Wii U's GPU has full access to the eDRAM bandwidth because both are on the same silicon die. In the 360's case only the ROPs shared a die with the eDRAM, so only they had full access to the 256 GB/s, while the GPU had to talk to the eDRAM+ROPs over an external 32 GB/s bus.

 

This article from Bob Paterson:

http://siliconvalley...or Xbox 360.pdf

 

 

says there are 4 macros of 2.5 MB each, running at 500 MHz, with a 1024-bit bus per macro. If we do the math, we indeed get the 256 GB/s we already knew it had:

 

4 macros × 500 MHz × 1024 bits / (8 bits × 1000) = 256 GB/s

 

As we can see in the Wii U die photo from Chipworks, we have 8 macros, so:

8 macros × 550 MHz × 1024 bits / (8 bits × 1000) = 563.2 GB/s
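The same arithmetic as a reusable sketch, including the wider-bus guess discussed below. The macro counts, clocks, and bus widths are this thread's assumptions, not confirmed specs:

    # Aggregate eDRAM bandwidth = macros x clock x (bus bits / 8) bytes.
    def edram_bandwidth_gbs(macros, clock_mhz, bus_bits_per_macro):
        return macros * clock_mhz * 1e6 * (bus_bits_per_macro / 8) / 1e9

    print(edram_bandwidth_gbs(4, 500, 1024))  # Xbox 360:        256.0 GB/s
    print(edram_bandwidth_gbs(8, 550, 1024))  # Wii U (claimed): 563.2 GB/s
    print(edram_bandwidth_gbs(8, 550, 2048))  # 2048-bit guess: 1126.4 GB/s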

 

We must take into account that we are assuming the smallest bus width per macro, based on an old design from more than 7 years ago, so Nintendo obviously had access to more refined technology. Maybe something like 2048 bits per macro, which would give close to a terabyte of bandwidth, and since Sony already said it was aiming for more than a terabyte of bandwidth with eDRAM, I don't see why it's not believable. I mean, Renesas already said the Wii U's eDRAM uses the best technology from NEC, to the point that producing it elsewhere would be difficult, and we also have Shin'en's comments that the Wii U's eDRAM provides huge bandwidth. But for now 563.2 GB/s sounds OK; it could be more, but we need more info.

 

Of course, to produce good graphics the eDRAM must be used for tessellation + displacement; otherwise there's very little improvement in the polygon department.

 

Even if it's not 1 terabyte of bandwidth and it ends up between 563.2 GB/s (the minimum, assuming the old Xbox 360 eDRAM tech NEC had 7 years before the Wii U's launch) and 1 terabyte, the bandwidth would be enough and sounds about right, and surely the system can handle it. There is no doubt about that, at least not for those who have investigated the GPU internals a bit.

 

I just hope the number ends up somewhere near the terabyte, like 800 GB/s or something, but 563.2 GB/s seems enough for the Wii U to accomplish good graphics.

 

I don't see why people don't believe it. Come on, even the GameCube, hardware from 2000-2001, had almost 20 GB/s of bandwidth with its embedded memory, and that was for just 3 MB. The Wii U's is being produced by the same company that produced both the GameCube's embedded memory and the Xbox 360's eDRAM, and about a decade has passed since then.

 

People forget so soon.

(Flipper datapath diagram)



This... isn't even close.

This reeks of eyeofcore's garbage over on Beyond3D before he was banned; he went so far as to change the Wikipedia page... so he could quote himself for 'evidence'.

While the Wii U easily COULD have this bandwidth with that 32 MB of RAM divided into 4 MB macros, IF the eDRAM were bussed at 1024 bits per block, it would be massive overkill. There would be very little point in having such a massive excess of memory bandwidth; the processors just don't have what it takes to fill that highway. At that point it would be a better use of money to make the memory faster instead of wider.

And that's not a slight against the Wii U either; that's more bandwidth than either the PS4 or Xbone could make use of as well.

 

 

There is no overkill; even GPUs like the HD 4000 series, from the low end to the high end, can handle terabytes per second of bandwidth.

The GameCube already had 18 GB/s of bandwidth with its 3 MB of embedded memory, made by the same NEC we are talking about (now merged into Renesas). It's been more than a decade since then; the GameCube was 2000-2001 technology, and obviously the technology has matured. Now we have 32 MB, not 3 MB. Hell, even with 32 MB of NEC's old GameCube tech I can get about 200 GB/s. Why would technology from 10 or so years later, at 40 nm instead of 180 nm, perform worse than that?
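Spelled out, the scaling being claimed here is just linear in capacity; note the replies below dispute that bandwidth scales this way:

    # Take the GameCube's ~18 GB/s across 3 MB of embedded memory and scale
    # linearly to 32 MB. This assumes bandwidth grows with capacity, which
    # is exactly what the replies below dispute.
    print(18.0 * 32.0 / 3.0)  # 192.0 GB/s, i.e. "about 200 GB/s"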

 

Renesas already said the Wii U's eDRAM is made with the best technologies from that NEC plant, and Shin'en also said the Wii U's eDRAM has plenty of bandwidth.


Edited by megafenix, 26 February 2014 - 10:35 PM.


#28 Raiden

Raiden

    wall crusher

  • Members
  • 4,738 posts

Posted 27 February 2014 - 11:07 AM

(image)



#29 Azure-Edge

Azure-Edge

    Chain Chomp

  • Members
  • 782 posts
  • NNID:Azure-X

Posted 27 February 2014 - 11:27 AM

(image)

 

It's funny because that is absolutely what that says.




#30 megafenix

megafenix

    Blooper

  • Members
  • 169 posts

Posted 27 February 2014 - 11:35 AM

It's funny because that is absolutely what that says.

I have no problem proving the contrary. It's you who's talking crap; why don't you investigate a little?

Anyone with even a little knowledge of GPUs knows this; it's out there on the internet. Seriously, inform yourselves before commenting, because that's all you do: empty comments.


Edited by megafenix, 27 February 2014 - 11:39 AM.


#31 Azure-Edge

Azure-Edge

    Chain Chomp

  • Members
  • 782 posts
  • NNID:Azure-X

Posted 27 February 2014 - 12:11 PM

I have no problem proving the contrary. It's you who's talking crap; why don't you investigate a little?

Anyone with even a little knowledge of GPUs knows this; it's out there on the internet. Seriously, inform yourselves before commenting, because that's all you do: empty comments.

 

What?




#32 Raiden

Raiden

    wall crusher

  • Members
  • 4,738 posts

Posted 27 February 2014 - 12:18 PM

I have no problem proving the contrary. It's you who's talking crap; why don't you investigate a little?

Anyone with even a little knowledge of GPUs knows this; it's out there on the internet. Seriously, inform yourselves before commenting, because that's all you do: empty comments.



#33 megafenix

megafenix

    Blooper

  • Members
  • 169 posts

Posted 27 February 2014 - 01:43 PM

Dude, I warn you: I've got the proof right here in my pocket. You really should apologise before I make a laughingstock of you.

I give you an hour. And don't bother deleting this; I have already printed it and saved the comments.

 

(image)

 

One hour, dude. I am not joking. I swear I will wipe the floor with you if you don't. I have the information right here, and anybody, even an amateur, can understand it.

 

1 hour


Edited by megafenix, 27 February 2014 - 01:48 PM.


#34 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 27 February 2014 - 02:00 PM

You don't just add separate caches with separate purposes together and claim that's the bandwidth of whatever you want to prove about the GameCube. That's idiotic.

And you have no proof whatsoever that the Wii U's eDRAM cells are bussed at 1024 bits per cell.

But hey, why don't you or eyeofcore try bringing that bullcrap back up on the Beyond3D forums so you can get kicked off again? That was hilarious.


 


#35 Raiden

Raiden

    wall crusher

  • Members
  • 4,738 posts

Posted 27 February 2014 - 02:19 PM

 

 

 

One hour, dude. I am not joking. I swear I will wipe the floor with you

 

1 hour

(image)

 

 

(image)



#36 megafenix

megafenix

    Blooper

  • Members
  • 169 posts

Posted 27 February 2014 - 02:53 PM



 



 

 

 

One hour, dude. I am not joking. I swear I will wipe the floor with you

 

1 hour

(image)

 

 

(image)

 

 

 

 

I warned you. Sorry, you did this to yourself.

Here, share it with 3Dude too:

http://developer.amd...nsform-part-ii/

 

"

Local memory or Local Data Share (LDS) is a high-bandwidth memory used for data-sharing among work-items within a work-group. ATI Radeon™ HD 5000 series GPUs have 32 KB of local memory on each compute unit. Figure 1 shows the OpenCL™ memory hierarchy for GPUs [1].

 

 

Figure 1: Memory hierarchy of AMD GPUs

Local memory offers a bandwidth of more than 2 TB/s, which is approximately 14x higher than the global memory [2]. Another advantage of LDS is that local memory does not require coalescing; once the data is loaded into local memory, it can be accessed in any pattern without performance degradation.

 

"

 

Each SIMD core has its own local data share; the HD 4870, for example, has 10 SIMD cores. On top of that, each texture unit has its own texture cache, each with a bandwidth of up to 480 GB/s. Do the math.

http://www.tomshardw...850,1957-5.html

"

With the RV770, the AMD engineers didn't stop at optimizing their architecture to only slightly increase the die real estate; they also borrowed a few good ideas from the competition. The G80 had introduced a small, 16-KB memory area per multiprocessor that's entirely under the programmer's control, unlike a cache. This memory area, accessible in CUDA applications, can share data among threads. AMD has introduced its version of this with the RV770. It's called Local Data Share and is exactly the same size as its competitor's Shared Memory. It also plays a similar role by enabling GPGPU applications to share data among several threads. The RV770 goes even further, with another memory area (also 16 KB) called Global Data Share to enable communication among SIMD arrays.

Texture units

While the ALUs haven’t undergone a major modification, the texture units have been completely redesigned. The goal was obvious – as with the rest of the GPU, it was to increase performance significantly while maintaining as small a die area as possible. The engineers set fairly ambitious goals, aiming for an increase of 70% in performance for an equivalent die area. To do that, they focused their efforts largely on the texture cache. The bandwidth of the L1 texture cache was increased to 480 GB/s.

 

"

 

 

So, is 563.2 GB/s going to bottleneck the GPU, when GPUs can handle terabytes per second of bandwidth with their internal caches, like the local data shares, texture caches, and the global data share?
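For what it's worth, the "more than 2 TB/s" LDS figure is an aggregate across all compute units. A sketch using HD 5870 (Cypress) numbers; these specifics are assumptions for illustration, since the quoted article gives only the total:

    # Aggregate LDS bandwidth: each compute unit's LDS serves 32 banks x 4
    # bytes per clock; 20 compute units at 850 MHz give the headline figure.
    def aggregate_lds_tbs(compute_units, bytes_per_cu_per_clock, clock_ghz):
        return compute_units * bytes_per_cu_per_clock * clock_ghz / 1e3

    print(aggregate_lds_tbs(20, 32 * 4, 0.85))  # ~2.18 TB/s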

 

I know how you must feel right now.

(Dumb and Dumber image)

 

 

 

I told you I wasn't joking around. You asked for it, you got it.

 

 

 

And before you object: even though those caches have big bandwidth, they are very small. Each local data share is only 16 KB or 32 KB, texture caches are even smaller, only 8 KB to 16 KB, and there is only one big global data share for the whole GPU, of only 64 KB or a bit more. So obviously, even counting them all, you won't even get 1 MB. That's where the eDRAM comes in, since it's 32 MB + another 3 MB.

 

Don't bother deleting this; I've already saved it all.


Edited by megafenix, 27 February 2014 - 02:59 PM.


#37 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 27 February 2014 - 03:04 PM

lmfao. That's not the GPU's embedded DRAM, you moron; it's the SIMD's local data share. You can't use it for ANYTHING you are implying; it's reserved completely for SIMD wavefront operations. And the global data share is for maintaining coherency between multiple SIMD arrays.

You are hilariously clueless.


 


#38 megafenix

megafenix

    Blooper

  • Members
  • 169 posts

Posted 27 February 2014 - 03:11 PM

lmfao. That's not the GPU's embedded DRAM, you moron; it's the SIMD's local data share. You can't use it for ANYTHING you are implying; it's reserved completely for SIMD wavefront operations.

You are hilariously clueless.

So, that's all you got?

What does it matter if it's eDRAM or SRAM or whatever? So if the eDRAM provides 500 GB/s of bandwidth, the GPU can't handle it, because it would only be able to handle it if it were SRAM?

 

Do you have any idea how pathetic that sounds?

 

Well, now it's your turn, isn't it?

 

Prove that it matters to a GPU whether it's SRAM or eDRAM for it to handle huge bandwidth, moron.

 

Remember, I am saving all these comments, so there's no point in deleting this. If you dare, I swear I'll deliver all of this to everyone.

 

Your move, dude. Enlighten us.


Edited by megafenix, 27 February 2014 - 03:15 PM.


#39 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 27 February 2014 - 03:14 PM

So, that's all you got?
What does it matter if it's eDRAM or SRAM or whatever? So if the eDRAM provides 500 GB/s of bandwidth, the GPU can't handle it, because it would only be able to handle it if it were SRAM?

Do you have any idea how pathetic that sounds?

Well, now it's your turn, isn't it?

Prove that it matters to a GPU whether it's SRAM or eDRAM for it to handle huge bandwidth, moron.


Because that RAM isn't for fetching assets, you idiot; that's why it has no CAPACITY. It's for grinding through large data structures, like arrays already provided by the eDRAM, in parallel. Like, say, taking a PIECE of a texture (because it CAN'T FIT THE WHOLE TEXTURE, BECAUSE OF ITS TINY CAPACITY) and shading it, then the next piece, and the next... Almost like a thing that shades. Like a shader or something.
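A toy sketch of that pattern; the sizes and the "shading" step are made up for illustration:

    # A tiny scratchpad (stand-in for the LDS) can't hold a whole texture,
    # so the data is streamed through it one piece at a time.
    TILE = 16 * 1024  # 16 KB scratchpad; the texture is far bigger

    def shade(piece):
        # stand-in for per-texel shader math on one staged piece
        return bytes(b ^ 0xFF for b in piece)

    def shade_texture(texture):
        out = bytearray()
        for off in range(0, len(texture), TILE):
            out += shade(texture[off:off + TILE])  # grind one piece, then the next
        return bytes(out)

    print(len(shade_texture(bytes(1 << 20))))  # a 1 MB "texture", done in 64 pieces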


 


#40 megafenix

megafenix

    Blooper

  • Members
  • 169 posts

Posted 27 February 2014 - 03:23 PM

Because that RAM isn't for fetching assets, you idiot; that's why it has no CAPACITY. It's for grinding through large data structures, like arrays already provided by the eDRAM, in parallel. Like, say, taking a PIECE of a texture (because it CAN'T FIT THE WHOLE TEXTURE, BECAUSE OF ITS TINY CAPACITY) and shading it, then the next piece, and the next... Almost like a thing that shades. Like a shader or something.

Seriously?

I've got another article about how developers wanted to use the 360's eDRAM for particle effects and other stuff, but couldn't because there wasn't enough space for it (the 360 already has to use sub-HD when you add materials to the eDRAM besides the framebuffer).

But hell, have you even been reading and paying attention?

 

It's been more than a decade that embedded memory and eDRAM have been used for textures.

 

Learn:

(Flipper datapath diagram)

 

 

Seriously, it seems all you do is make random comments. I don't recommend taking that approach, for your own good.

 

I have saved this again.


Edited by megafenix, 27 February 2014 - 03:26 PM.




