


Wii U's RAM is slower than PS3/Xbox 360.



#221 routerbad

routerbad

    Lakitu

  • Section Mods
  • 2,013 posts
  • NNID:routerbad
  • Fandom:
    Zelda, Mario, Halo, Star Trek

Posted 08 April 2013 - 01:26 PM

Yeah, the bandwidth to the eDRAM on the Wii U would be higher. The 360's eDRAM wasn't really embedded on the first versions of the 360; it was on a separate chip placed onto the package. Are you sure about the 360 bandwidth? I read that it was cut in half for reads and writes. Like I pointed out, an HD 5450 can run Need for Speed: Most Wanted with only 12.8 GB/s of bandwidth. The bandwidth savings from eDRAM are huge, so the Wii U would be in far better shape even with only 12.8 GB/s to the main memory pool. The eDRAM is the performer here. It's fast, and has enough capacity to hold the bandwidth hogs, allowing the main memory to be used exclusively for reads. Keep in mind that the 360 always has to resolve the framebuffer to main memory before it can be sent out for display. It also has to use tiling to do a 720p frame with AA, eating up even more main-memory bandwidth.
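To put that framebuffer copy-out cost in rough numbers, here's a back-of-the-envelope sketch (my own assumed figures: a 32-bit color buffer at 720p/60, not official specs):

```python
# Rough cost of resolving a finished 720p frame out to main memory
# every frame, as the 360 must do before scan-out.
# Assumed figures, not official specs.

width, height = 1280, 720
bytes_per_pixel = 4            # assuming a 32-bit RGBA color buffer
fps = 60

frame_bytes = width * height * bytes_per_pixel
resolve_gbps = frame_bytes * fps / 1e9

print(f"Frame size: {frame_bytes / 2**20:.2f} MiB")              # ~3.52 MiB
print(f"Resolve traffic at {fps} fps: {resolve_gbps:.2f} GB/s")  # ~0.22 GB/s
# ...and that's before the extra passes tiling adds for 720p with AA.
```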

That's what we need numbers for: what is the bandwidth between Mem0 and Mem2, and how is the Wii U designed to use it?

 

I'm pretty sure about the 360 bandwidth, and the diagram shows that the 10 GB/s number is for GPU/CPU communication. If the eDRAM were used correctly, as a go-between for main RAM and the GPU, and if the bandwidth between the two RAM pools were high enough, theoretically the GPU would never need to read or write main RAM directly. If they can stream assets through the faster bus, as well as use some of it for the high-bandwidth users, it would make the effective bandwidth much, much higher. It's kind of a lower-cost way of creating GDDR-like overall bandwidth (not really that high, but high enough to stay competitive) while keeping resource-intensive functions off the GPU registers. The best of both worlds.


Edited by routerbad, 08 April 2013 - 01:28 PM.


#222 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 08 April 2013 - 01:37 PM


routerbad, on 08 Apr 2013 - 07:40, said:
That's what we need numbers for: what is the bandwidth between Mem0 and Mem2, and how is the Wii U designed to use it?
I'm pretty sure about the 360 bandwidth, and the diagram shows that the 10 GB/s number is for GPU/CPU communication. If the eDRAM were used correctly, as a go-between for main RAM and the GPU, and if the bandwidth between the two RAM pools were high enough, theoretically the GPU would never need to read or write main RAM directly. If they can stream assets through the faster bus, as well as use some of it for the high-bandwidth users, it would make the effective bandwidth much, much higher. It's kind of a lower-cost way of creating GDDR-like overall bandwidth (not really that high, but high enough to stay competitive) while keeping resource-intensive functions off the GPU registers. The best of both worlds.


That's the whole point of the eDRAM: the GPU never has to wait for data, because the eDRAM always supplies it.

But if the eDRAM has to wait on the main RAM because of a really low main-RAM bandwidth, then the GPU is waiting on the eDRAM, and that defeats the whole purpose.

I mean, in total BW, we already know that the Wii U's eDRAM pools alone destroy the entirety of the 360's bandwidth.

It's the main memory BW that's the problem... and I'm pretty sure Nintendo did something to address it, looking at that unorthodox bus.

But what? Why is that bus so big? Why so many lanes? What's the memory controller doing with it?


Edited by 3Dude, 08 April 2013 - 01:43 PM.


 


#223 routerbad

routerbad

    Lakitu

  • Section Mods
  • 2,013 posts
  • NNID:routerbad
  • Fandom:
    Zelda, Mario, Halo, Star Trek

Posted 08 April 2013 - 01:48 PM


routerbad, on 08 Apr 2013 - 07:40, said:
That's what we need numbers for: what is the bandwidth between Mem0 and Mem2, and how is the Wii U designed to use it?
I'm pretty sure about the 360 bandwidth, and the diagram shows that the 10 GB/s number is for GPU/CPU communication. If the eDRAM were used correctly, as a go-between for main RAM and the GPU, and if the bandwidth between the two RAM pools were high enough, theoretically the GPU would never need to read or write main RAM directly. If they can stream assets through the faster bus, as well as use some of it for the high-bandwidth users, it would make the effective bandwidth much, much higher. It's kind of a lower-cost way of creating GDDR-like overall bandwidth (not really that high, but high enough to stay competitive) while keeping resource-intensive functions off the GPU registers. The best of both worlds.

3Dude, on 08 Apr 2013, said:
That's the whole point of the eDRAM: the GPU never has to wait for data, because the eDRAM always supplies it.

But if the eDRAM has to wait on the main RAM because of a really low main-RAM bandwidth, then the GPU is waiting on the eDRAM, and that defeats the whole purpose.

I mean, in total BW, we already know that the Wii U's eDRAM pools alone destroy the entirety of the 360's bandwidth.

It's the main memory BW that's the problem... and I'm pretty sure Nintendo did something to address it, looking at that unorthodox bus.

But what? Why is that bus so big? Why so many lanes? What's the memory controller doing with it?

Right, my point being that if the main RAM --> eDRAM BW is anywhere near as high as the 256 GB/s the 360 had, there would be no waiting on main RAM. Then again, max BW is max BW, regardless of where it's sent. That, coupled with the really fat bus and the extra lanes on it, makes no logical sense given the numbers we can get through official channels.


Edited by routerbad, 08 April 2013 - 01:49 PM.


#224 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 08 April 2013 - 02:01 PM


routerbad, on 08 Apr 2013 - 08:02, said:
Right, my point being that if the main RAM --> eDRAM BW is anywhere near as high as the 256 GB/s the 360 had, there would be no waiting on main RAM. Then again, max BW is max BW, regardless of where it's sent. That, coupled with the really fat bus and the extra lanes on it, makes no logical sense given the numbers we can get through official channels.


eDRAM interacts with main RAM at main-RAM BW, not eDRAM bandwidth.

It's like Sonic waiting on 'friends'...

Hey, speaking of waiting, what's the latency of the 360's RAM? Isn't it >150? What's the latency of the Wii U's DDR3?

High bandwidth is pretty useless if you have to wait forever to access it.


Edited by 3Dude, 08 April 2013 - 02:14 PM.


 


#225 Goodtwin

Goodtwin

    Bullet Bill

  • Members
  • 356 posts

Posted 08 April 2013 - 02:23 PM

You're treating the eDRAM like it's a cache only; it's not. The fact that the GPU can keep the frame buffer and Z-buffer, and handle AA, all in the eDRAM makes a huge impact on performance. Take a PC game, for example: turn the textures from low to high and see how it affects performance. Now take that same game and bump it from 720p to 1080p. I can almost guarantee you that the resolution increase impacted performance quite a bit more than the texture-quality increase. Now start adding AA and watch your performance drop rapidly. High-quality assets aren't your biggest bandwidth hogs, and I get the idea that you think they are. You could use high-end PC assets in a game, and if you're rendering it in 480p with no AA, your memory bandwidth requirements won't be very high.

 

You have to remember that file size and bandwidth are totally separate. For example, a 1 MB file stored in RAM can eat up more bandwidth than a 10 MB file. If the 1 MB file is fetched 15 times per frame and the 10 MB file only once, then the 1 MB file is the bigger bandwidth hog.
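A quick sketch of that arithmetic (the 1 MB / 10 MB figures are just the hypothetical ones from the example above):

```python
# Per-frame bandwidth is fetch count x file size, not file size alone.

def per_frame_traffic_mb(file_mb, fetches_per_frame):
    return file_mb * fetches_per_frame

small = per_frame_traffic_mb(1, 15)    # 1 MB file fetched 15x per frame
large = per_frame_traffic_mb(10, 1)    # 10 MB file fetched once per frame

print(small, large)                    # 15 vs 10: the small file is the hog
print(small * 60 / 1000, large * 60 / 1000)   # at 60 fps: 0.9 vs 0.6 GB/s
```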

 

It's why I wanted to find a modern graphics card with just 12.8 GB/s of video memory bandwidth. People are using the AMD HD 5450 and getting games like Need for Speed: Most Wanted to run at 720p around 25 fps, and at 600p at 35+ fps. That's on PC, unoptimized, and the 12.8 GB/s is all the graphics card has, unlike the Wii U, which spares the main memory so much bandwidth because of the eDRAM. Running it at Wii U quality might drop it to something like 10 fps, but that's the point: eDRAM is the difference-maker.


Edited by Goodtwin, 08 April 2013 - 02:25 PM.


#226 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 08 April 2013 - 02:42 PM


Goodtwin, on 08 Apr 2013 - 08:37, said:
You're treating the eDRAM like it's a cache only; it's not. The fact that the GPU can keep the frame buffer and Z-buffer, and handle AA, all in the eDRAM makes a huge impact on performance. Take a PC game, for example: turn the textures from low to high and see how it affects performance. Now take that same game and bump it from 720p to 1080p. I can almost guarantee you that the resolution increase impacted performance quite a bit more than the texture-quality increase. Now start adding AA and watch your performance drop rapidly. High-quality assets aren't your biggest bandwidth hogs, and I get the idea that you think they are. You could use high-end PC assets in a game, and if you're rendering it in 480p with no AA, your memory bandwidth requirements won't be very high.
You have to remember that file size and bandwidth are totally separate. For example, a 1 MB file stored in RAM can eat up more bandwidth than a 10 MB file. If the 1 MB file is fetched 15 times per frame and the 10 MB file only once, then the 1 MB file is the bigger bandwidth hog.
It's why I wanted to find a modern graphics card with just 12.8 GB/s of video memory bandwidth. People are using the AMD HD 5450 and getting games like Need for Speed: Most Wanted to run at 720p around 25 fps, and at 600p at 35+ fps. That's on PC, unoptimized, and the 12.8 GB/s is all the graphics card has, unlike the Wii U, which spares the main memory so much bandwidth because of the eDRAM. Running it at Wii U quality might drop it to something like 10 fps, but that's the point: eDRAM is the difference-maker.


I'm not worried about tiny files being fetched more often.

I'm concerned about the exact same number of fetches being made at larger file sizes, at half the bandwidth. It is a port. I don't think it's the biggest BW hog; it doesn't have to be, and it's only one example. The game was made to push all aspects, including asset file sizes and calls to main memory in general, and now it's doing even more, supposedly at half the main-memory bandwidth, but somehow with a larger system bus per chip than it's stated to have.

 


Edited by 3Dude, 08 April 2013 - 02:44 PM.


 


#227 routerbad

routerbad

    Lakitu

  • Section Mods
  • 2,013 posts
  • NNID:routerbad
  • Fandom:
    Zelda, Mario, Halo, Star Trek

Posted 08 April 2013 - 02:45 PM

Goodtwin, on 08 Apr 2013 - 08:37, said:
You're treating the eDRAM like it's a cache only; it's not. The fact that the GPU can keep the frame buffer and Z-buffer, and handle AA, all in the eDRAM makes a huge impact on performance. Take a PC game, for example: turn the textures from low to high and see how it affects performance. Now take that same game and bump it from 720p to 1080p. I can almost guarantee you that the resolution increase impacted performance quite a bit more than the texture-quality increase. Now start adding AA and watch your performance drop rapidly. High-quality assets aren't your biggest bandwidth hogs, and I get the idea that you think they are. You could use high-end PC assets in a game, and if you're rendering it in 480p with no AA, your memory bandwidth requirements won't be very high.

You have to remember that file size and bandwidth are totally separate. For example, a 1 MB file stored in RAM can eat up more bandwidth than a 10 MB file. If the 1 MB file is fetched 15 times per frame and the 10 MB file only once, then the 1 MB file is the bigger bandwidth hog.

It's why I wanted to find a modern graphics card with just 12.8 GB/s of video memory bandwidth. People are using the AMD HD 5450 and getting games like Need for Speed: Most Wanted to run at 720p around 25 fps, and at 600p at 35+ fps. That's on PC, unoptimized, and the 12.8 GB/s is all the graphics card has, unlike the Wii U, which spares the main memory so much bandwidth because of the eDRAM. Running it at Wii U quality might drop it to something like 10 fps, but that's the point: eDRAM is the difference-maker.

I understand all of this; I was simply wondering what the interface between eDRAM and main RAM allows for.

 

For 3Dude: the latency on these chips is straight 9s (9-9-9), pretty much standard figures for DDR3-1600 (latency times around 10 ns).
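For what it's worth, here's what those timings work out to in wall-clock time (a sketch assuming DDR3-1600, i.e. an 800 MHz command clock):

```python
# CAS latency in nanoseconds for DDR3-1600 with 9-9-9 timings.

cas_cycles = 9
clock_hz = 800e6   # DDR3-1600 transfers at 1600 MT/s on an 800 MHz clock

cas_ns = cas_cycles / clock_hz * 1e9
print(f"CAS latency: {cas_ns:.2f} ns")   # 11.25 ns -- the ~10 ns ballpark
```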


Edited by routerbad, 08 April 2013 - 02:48 PM.


#228 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 08 April 2013 - 02:54 PM


routerbad, on 08 Apr 2013 - 08:59, said:
I understand all of this; I was simply wondering what the interface between eDRAM and main RAM allows for.
For 3Dude: the latency on these chips is straight 9s (9-9-9), pretty much standard figures for DDR3-1600 (latency times around 10 ns).



Well, that's nice to know for the CPU... graphics aren't exactly latency-sensitive. I was just curious.

This bus, though, is something that can't be dismissed.

Every system follows the same rules: bus width = lanes from the RAM chips.

The 360 has a bus width of 128.
It has 4 chips, with 16 lanes coming from each chip.

16 x 2 = 32; 32 x 4 = 128.

The Wii U doesn't have those extra lanes over a normal 16-bit bus per chip for giggles.
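That rule is easy to sanity-check in code. The chip counts and lane widths below are the ones from this thread, with DDR3-1600's data rate assumed for the Wii U figure:

```python
# Bus width = sum of I/O lanes off each RAM chip; peak bandwidth
# follows from width x data rate.

def bus_width_bits(chips, lanes_per_chip):
    return chips * lanes_per_chip

def peak_gb_per_s(width_bits, mega_transfers_per_s):
    return width_bits / 8 * mega_transfers_per_s * 1e6 / 1e9

print(bus_width_bits(4, 16 * 2))     # 360: 16 x 2 lanes x 4 chips = 128
print(bus_width_bits(4, 16))         # Wii U as officially read: 64
print(peak_gb_per_s(64, 1600))       # 12.8 GB/s -- the disputed figure
```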



 


#229 routerbad

routerbad

    Lakitu

  • Section Mods
  • 2,013 posts
  • NNID:routerbad
  • Fandom:
    Zelda, Mario, Halo, Star Trek

Posted 08 April 2013 - 02:57 PM


routerbad, on 08 Apr 2013 - 08:59, said:
I understand all of this; I was simply wondering what the interface between eDRAM and main RAM allows for.
For 3Dude: the latency on these chips is straight 9s (9-9-9), pretty much standard figures for DDR3-1600 (latency times around 10 ns).

3Dude, on 08 Apr 2013, said:
Well, that's nice to know for the CPU... graphics aren't exactly latency-sensitive. I was just curious.

This bus, though, is something that can't be dismissed.

Every system follows the same rules: bus width = lanes from the RAM chips.

The 360 has a bus width of 128.
It has 4 chips, with 16 lanes coming from each chip.

16 x 2 = 32; 32 x 4 = 128.

The Wii U doesn't have those extra lanes over a normal 16-bit bus per chip for giggles.

Not likely; they're there for something, and I can't think of any reason other than more bandwidth. There's no error checking going on, and the extra lanes would be a waste of engineering time if they weren't doing anything.



#230 NintendoReport

NintendoReport

    NintendoChitChat

  • Moderators
  • 5,907 posts
  • NNID:eddyray
  • Fandom:
    Nintendo Directs and Video Presentations

Posted 08 April 2013 - 04:10 PM

[reaction GIFs: popcorn; chugging a jug in 4 seconds]


Edited by Sorceror12, 08 April 2013 - 04:16 PM.


#231 Goodtwin

Goodtwin

    Bullet Bill

  • Members
  • 356 posts

Posted 08 April 2013 - 06:44 PM

routerbad, on 08 Apr 2013 - 08:59, said:
I understand all of this; I was simply wondering what the interface between eDRAM and main RAM allows for.
For 3Dude: the latency on these chips is straight 9s (9-9-9), pretty much standard figures for DDR3-1600 (latency times around 10 ns).

 

The GPU has insane bandwidth to the eDRAM. Only Nintendo knows for sure, since it's actually embedded on the GPU itself, but it's safe to say it's very high.

 

I think I was missing the point 3Dude is trying to make. He thinks that even using the eDRAM as I described, going forward, having access to only 12.8 GB/s of data from main memory could be a serious limitation. He could be right. For a 60 fps game, that means about 200 MB of data can be pulled from main memory per frame; a 30 fps game gets about 400 MB per frame. Basically, even though the GPU is doing the work in the eDRAM, it's still limited to the 200 MB or 400 MB of base material it can get from main RAM. This is still better than the 360/PS3, simply because they don't have the capacity to give the GPU access to 400 MB of texture data; that would leave practically nothing in RAM for everything else. But it would still put the available data per frame a lot closer to the 360/PS3 than to the PS4. The 360/PS3's bandwidth was actually better than their capacity: if they had more memory, they would have had enough bandwidth for better textures. It was throwing me because I thought he was arguing the Wii U couldn't write to the eDRAM and save all that bandwidth, but he's actually saying that even if main memory is read-only and all the writing happens in eDRAM, 12.8 GB/s is not what he was hoping for and could become a limitation.
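The per-frame arithmetic above, spelled out (12.8 GB/s is the main-memory figure under discussion; the frame rates are the usual targets):

```python
# How much can be read from main RAM per frame at a given frame rate.

main_bw_bytes = 12.8e9

for fps in (60, 30):
    mb_per_frame = main_bw_bytes / fps / 2**20
    print(f"{fps} fps -> {mb_per_frame:.0f} MB per frame")
# 60 fps -> ~203 MB, 30 fps -> ~407 MB: the ~200/400 MB figures above,
# and that's a ceiling that assumes the GPU gets every byte of bandwidth.
```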

 

I don't know. If you find something that shows a similar RAM setup with a workaround for better bandwidth, post it here. Everything I find points to 12.8 GB/s.


Edited by Goodtwin, 08 April 2013 - 06:46 PM.


#232 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 09 April 2013 - 02:22 AM

Goodtwin, on 08 Apr 2013, said:
The GPU has insane bandwidth to the eDRAM. Only Nintendo knows for sure, since it's actually embedded on the GPU itself, but it's safe to say it's very high.

I think I was missing the point 3Dude is trying to make. He thinks that even using the eDRAM as I described, going forward, having access to only 12.8 GB/s of data from main memory could be a serious limitation. He could be right. For a 60 fps game, that means about 200 MB of data can be pulled from main memory per frame; a 30 fps game gets about 400 MB per frame. Basically, even though the GPU is doing the work in the eDRAM, it's still limited to the 200 MB or 400 MB of base material it can get from main RAM. This is still better than the 360/PS3, simply because they don't have the capacity to give the GPU access to 400 MB of texture data; that would leave practically nothing in RAM for everything else. But it would still put the available data per frame a lot closer to the 360/PS3 than to the PS4. The 360/PS3's bandwidth was actually better than their capacity: if they had more memory, they would have had enough bandwidth for better textures. It was throwing me because I thought he was arguing the Wii U couldn't write to the eDRAM and save all that bandwidth, but he's actually saying that even if main memory is read-only and all the writing happens in eDRAM, 12.8 GB/s is not what he was hoping for and could become a limitation.

I don't know. If you find something that shows a similar RAM setup with a workaround for better bandwidth, post it here. Everything I find points to 12.8 GB/s.


Yeah, we've got everything on the RAM-manufacturer side.

But the Nintendo side insists on being mysterious.


 


#233 Goodtwin

Goodtwin

    Bullet Bill

  • Members
  • 356 posts

Posted 09 April 2013 - 05:23 AM

12.8 GB/s is not for the component but for the module (64-bit bus in the system),
and 3.2 GB/s is for the component level (16-bit I/O) that you mentioned (H5TQ4G63MFR).
Is that enough of an answer for you?

 

I got that from Hynix this morning. 



#234 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 09 April 2013 - 06:29 AM

Goodtwin, on 09 Apr 2013, said:
I got that from Hynix this morning.

What's the context behind 'system', 'component', and 'module'?

Are they talking about the individual components that make up a single module?

Are they actually divulging info on the Wii U? If so, it's appreciated, but wouldn't that get them in trouble?

Edited by 3Dude, 09 April 2013 - 06:34 AM.


 


#235 Goodtwin

Goodtwin

    Bullet Bill

  • Members
  • 356 posts

Posted 09 April 2013 - 06:46 AM

They're saying per block of RAM. I think what they're saying is that you would typically find these components placed on a 64-bit bus to get 12.8 GB/s, but each component, which is the part number we found on the RAM chips in the Wii U, is 3.2 GB/s. If you bought PC RAM built from this Hynix part, there would be four of these chips together on a 64-bit bus. It's clear: Hynix is saying 3.2 GB/s per chip. They already told us that the I/O is 16-bit for each H5TQ4G63MFR.
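Those numbers check out with straightforward DDR arithmetic (a sketch; the DDR3-1600 data rate is assumed from the Hynix part's class):

```python
# A 16-bit DDR3-1600 component moves 2 bytes per transfer at 1600 MT/s.

data_rate_mts = 1600    # mega-transfers per second
io_bits = 16            # per H5TQ4G63MFR component

per_chip_gbps = io_bits / 8 * data_rate_mts * 1e6 / 1e9
print(per_chip_gbps)        # 3.2 GB/s per component
print(per_chip_gbps * 4)    # 12.8 GB/s for four chips on a 64-bit bus
```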



#236 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 09 April 2013 - 06:50 AM

Ah, so the same context we already had.

And so we're still stuck on: why is the system bus so big?


 


#237 Goodtwin

Goodtwin

    Bullet Bill

  • Members
  • 356 posts

Posted 09 April 2013 - 07:36 AM

I suppose so. 

 

I think another question is: are we sure the eDRAM is only 32 MB? For example, if it were actually 64 MB, that would be enough not only to do everything I described earlier but also to cache 32 MB of textures. I know the 32 MB number has always been rumored, but was anyone able to confirm it when the GPU die photos came out?

One more thing: what about overclocked memory? Perhaps Nintendo came up with a way to use cheaper RAM, overclock it, and get performance similar to more expensive RAM? I know that's a stretch, but I'm sort of at a loss.



#238 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 09 April 2013 - 07:59 AM


Definitely 32 MB, although there is another smaller, denser 1 MB pool of SRAM or PSRAM on the die as well.

Overclocking doesn't seem like a very power/thermal-conscious choice for a very power/thermal-conscious system, and it still wouldn't explain the too-big bus.

Really, the only thing I can think of is that Nintendo ordered custom RAM based off the specifications on the housing.

Those extra wires aren't hooked up to nothing, after all.

What a bizarre setup.


Edited by 3Dude, 09 April 2013 - 08:12 AM.


 


#239 routerbad

routerbad

    Lakitu

  • Section Mods
  • 2,013 posts
  • NNID:routerbad
  • Fandom:
    Zelda, Mario, Halo, Star Trek

Posted 09 April 2013 - 08:34 AM


3Dude, on 09 Apr 2013, said:
Definitely 32 MB, although there is another smaller, denser 1 MB pool of SRAM or PSRAM on the die as well.

Overclocking doesn't seem like a very power/thermal-conscious choice for a very power/thermal-conscious system, and it still wouldn't explain the too-big bus.

Really, the only thing I can think of is that Nintendo ordered custom RAM based off the specifications on the housing.

Those extra wires aren't hooked up to nothing, after all.

What a bizarre setup.

This is where I'm heading as well. It doesn't seem likely they would overclock the RAM unless they could keep the power and heat in check, and that still wouldn't explain the excess lanes or the pins on the GPU side.



#240 Goodtwin

Goodtwin

    Bullet Bill

  • Members
  • 356 posts

Posted 09 April 2013 - 09:04 AM

I don't know, and I don't know that we ever will. Remember back in the day when IGN would get these kinds of questions answered, even if from an anonymous source? That doesn't happen anymore. How can we have so many people working on the Wii U every day and no one willing to divulge this information? Personally, I wish it were a consumer's right to know the specifications of a product.





