


Sony Proves the Wii U's Texture Bandwidth Is Not Slow


53 replies to this topic

#1 Zinix

Zinix

    YA HOMIE.

  • Members
  • 4,410 posts
  • NNID:zinixzero
  • Fandom:
    The Twilight Zone Fandom

Posted 14 July 2013 - 09:59 AM

http://playeressence...fb_source=pubv1

 

Didn't see this posted, sorry if it was.


“Any state, any entity, any ideology that fails to recognize the worth, the dignity, the rights of man, that state is obsolete.”— Rod Serling, “The Twilight Zone” The Obsolete Man

Smoke meth. Hail Satan. Watch the yearly Twilight Zone marathons. Talk to dead people. Everyone is gay. Ignore people. Live life to the fullest.


#2 alan123

alan123

    Piranha Plant

  • Members
  • 889 posts

Posted 14 July 2013 - 10:20 AM

Where does the guy in the video say that the Wii U's texture bandwidth isn't slow?

GDDR5 & DDR3 are NOT the same: DDR3 benefits from low latency at the expense of bandwidth, while GDDR5 is the opposite.

The other point is that the PS4 doesn't have eDRAM because it doesn't need it; the X1 uses eSRAM to boost its DDR3.

PS4 system bandwidth: 176 GB/s = 176

X1 system bandwidth: 68.3 GB/s (eSRAM bandwidth 102 GB/s) = 170.3

Wii U system bandwidth: 12.8 GB/s (eDRAM bandwidth 70.4 GB/s) = 83.2
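
For reference, here is that arithmetic as a quick Python sketch (figures as posted; later replies dispute both the Wii U eDRAM number and whether separate pools can simply be summed):

    # The totals quoted above, as posted (GB/s).
    ps4_main = 176.0
    x1_main, x1_esram = 68.3, 102.0
    wiiu_main, wiiu_edram = 12.8, 70.4

    print("PS4  :", ps4_main)                          # 176.0
    print("X1   :", round(x1_main + x1_esram, 1))      # 170.3
    print("Wii U:", round(wiiu_main + wiiu_edram, 1))  # 83.2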



#3 Julio93

Julio93

    Hammer Bro.

  • Members
  • 1,564 posts
  • NNID:Julio93
  • Fandom:
    All Nintendo & Capcom franchises.

Posted 14 July 2013 - 12:30 PM

This is making my head hurt. *Looks at video owner.* Oh... that guy.



 


#4 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 14 July 2013 - 01:00 PM


Where does the guy in the video say that the Wii U's texture bandwidth isn't slow?
GDDR5 & DDR3 are NOT the same: DDR3 benefits from low latency at the expense of bandwidth, while GDDR5 is the opposite.
The other point is that the PS4 doesn't have eDRAM because it doesn't need it; the X1 uses eSRAM to boost its DDR3.
PS4 system bandwidth: 176 GB/s = 176
X1 system bandwidth: 68.3 GB/s (eSRAM bandwidth 102 GB/s) = 170.3
Wii U system bandwidth: 12.8 GB/s (eDRAM bandwidth 70.4 GB/s) = 83.2

Ha ha, guy, eSRAM is way less dense than eDRAM.

32 MB of eDRAM is going to have much, much higher bandwidth than 32 MB of eSRAM. The Wii U's eDRAM bandwidth is well into the triple digits.

You don't use SRAM for bandwidth; you use it for low latency.

Nice made-up numbers for the Wii U, though.


 


#5 Deaf

Deaf

    Goomba

  • Members
  • 1 posts

Posted 14 July 2013 - 05:26 PM

Ha ha, guy, eSRAM is way less dense than eDRAM.

32 MB of eDRAM is going to have much, much higher bandwidth than 32 MB of eSRAM. The Wii U's eDRAM bandwidth is well into the triple digits.

You don't use SRAM for bandwidth; you use it for low latency.

Nice made-up numbers for the Wii U, though.

Since when is RAM density any indication of its bandwidth?

All that density influences is how much on-die space is needed.

SRAM even offers higher bandwidth, with much faster access times, but it is also MUCH more expensive...

 

Does the X1 really use 32 MB of SRAM? That is pretty expensive stuff if true.


Edited by Deaf, 14 July 2013 - 05:30 PM.


#6 Cloud Windfoot Omega

Cloud Windfoot Omega

    Cheep-Cheep

  • Members
  • 148 posts

Posted 14 July 2013 - 05:37 PM

Since when is RAM density any indication of its bandwidth?

All that density influences is how much on-die space is needed.

SRAM even offers higher bandwidth, with much faster access times, but it is also MUCH more expensive...

 

Does the X1 really use 32 MB of SRAM? That is pretty expensive stuff if true.

Yes, this is right. It's more expensive because it uses four times the transistors, is static, and is faster due to being static. But we still know next to nothing about either one, so we can't just assume anything.



#7 3Dude

3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 14 July 2013 - 07:08 PM

Since when is RAM density any indication of its bandwidth?
All that density influences is how much on-die space is needed.
SRAM even offers higher bandwidth, with much faster access times, but it is also MUCH more expensive...

Does the X1 really use 32 MB of SRAM? That is pretty expensive stuff if true.

Yup, it sure does use 32 MB of eSRAM. I don't know whether it's 4T or 6T, but regardless, as far as latency is concerned it's going to be pretty lickety-split. Like blazing fast, running circles around the PS4's GDDR5. But... not in bandwidth.

As for how eDRAM gets more bandwidth than what should be superior SRAM of the same capacity:

Die space is power, and denser memory frees up more of it.

Denser = more cells of a smaller capacity versus fewer cells of a greater capacity, and more cells = more buses = higher bandwidth.
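
A toy calculation of that scaling argument; every number below is invented purely for illustration and none are actual Wii U or POWER7 figures:

    # Toy model: aggregate bandwidth = independent macros * bus width * clock.
    def aggregate_bw_gbs(macros, bus_bits, clock_hz):
        return macros * (bus_bits / 8) * clock_hz / 1e9

    # Same total capacity, but the denser array splits into more macros,
    # each with its own bus, so aggregate bandwidth goes up.
    print(aggregate_bw_gbs(macros=4,  bus_bits=256, clock_hz=550e6))   # 70.4 GB/s
    print(aggregate_bw_gbs(macros=16, bus_bits=256, clock_hz=550e6))   # 281.6 GB/s

Same capacity either way; the denser implementation just leaves room to split it into more independently bussed macros.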

It's why IBM uses eDRAM for POWER7, with each cell heavily bussed up to emulate the latency of 6T SRAM (the room for those buses would otherwise be taken up by all the transistors SRAM needs!) while keeping the bandwidth of eDRAM. IBM states it gets over twice the bandwidth with this eDRAM as you can get with the same capacity of SRAM, while having only slightly worse latency and cycle time than 6T SRAM.

SRAM has fantastic low latency because it doesn't need to refresh; it's static. GameCube and Wii use it, as does Wii U (3 MB of various SRAM/PSRAM embedded on the GPU). But bandwidth isn't a strong bullet point in light of (kind of) recent developments in eDRAM technology. Nintendo wouldn't switch from exotic choices like 1T-SRAM and FCRAM to seemingly ordinary eDRAM for nothing. They've always demanded performance RAM, if nothing else.

The biggest problem I see is that a pretty great bandwidth gets limited to a pretty slow one whenever it needs to access that 1 GB of main memory. You can't just add the two bandwidths together like the post I quoted was doing; when that eDRAM (or eSRAM, a la Xbone) is interfacing with main RAM, it's doing so at main RAM's speed.
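
A quick sketch of that point in Python; the eDRAM figure is a placeholder "triple-digit" number, not a confirmed spec:

    # Data that has to come from main RAM moves no faster than the main-memory
    # link, so the two pools' bandwidths cannot simply be summed.
    main_ram_bw = 12.8    # GB/s, the Wii U DDR3 figure quoted earlier
    edram_bw    = 140.0   # GB/s, placeholder "triple digit" eDRAM figure

    def effective_bw(gb_from_main, gb_from_edram):
        # Each portion transfers at the speed of the pool it comes from.
        seconds = gb_from_main / main_ram_bw + gb_from_edram / edram_bw
        return (gb_from_main + gb_from_edram) / seconds

    # A 50/50 traffic mix lands nowhere near 12.8 + 140 = 152.8 GB/s.
    print(round(effective_bw(1.0, 1.0), 1))   # 23.5 GB/s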


 


#8 routerbad

routerbad

    Lakitu

  • Section Mods
  • 2,013 posts
  • NNID:routerbad
  • Fandom:
    Zelda, Mario, Halo, Star Trek

Posted 14 July 2013 - 08:40 PM

Ha ha, guy, eSRAM is way less dense than eDRAM.

32 MB of eDRAM is going to have much, much higher bandwidth than 32 MB of eSRAM. The Wii U's eDRAM bandwidth is well into the triple digits.

You don't use SRAM for bandwidth; you use it for low latency.

Nice made-up numbers for the Wii U, though.

Not only that, but on-die vs. on-MCM is very, very different.



Yup, it sure does use 32 MB of eSRAM. I don't know whether it's 4T or 6T, but regardless, as far as latency is concerned it's going to be pretty lickety-split. Like blazing fast, running circles around the PS4's GDDR5. But... not in bandwidth.

As for how eDRAM gets more bandwidth than what should be superior SRAM of the same capacity:

Die space is power, and denser memory frees up more of it.

Denser = more cells of a smaller capacity versus fewer cells of a greater capacity, and more cells = more buses = higher bandwidth.

It's why IBM uses eDRAM for POWER7, with each cell heavily bussed up to emulate the latency of 6T SRAM (the room for those buses would otherwise be taken up by all the transistors SRAM needs!) while keeping the bandwidth of eDRAM. IBM states it gets over twice the bandwidth with this eDRAM as you can get with the same capacity of SRAM, while having only slightly worse latency and cycle time than 6T SRAM.

SRAM has fantastic low latency because it doesn't need to refresh; it's static. GameCube and Wii use it, as does Wii U (3 MB of various SRAM/PSRAM embedded on the GPU). But bandwidth isn't a strong bullet point in light of (kind of) recent developments in eDRAM technology. Nintendo wouldn't switch from exotic choices like 1T-SRAM and FCRAM to seemingly ordinary eDRAM for nothing. They've always demanded performance RAM, if nothing else.

The biggest problem I see is that a pretty great bandwidth gets limited to a pretty slow one whenever it needs to access that 1 GB of main memory. You can't just add the two bandwidths together like the post I quoted was doing; when that eDRAM (or eSRAM, a la Xbone) is interfacing with main RAM, it's doing so at main RAM's speed.

 


That's where developing specifically to manage the smaller pool of EDRAM comes in.



#9 GAMER1984

GAMER1984

    Lakitu

  • Members
  • 2,036 posts
  • NNID:gamer1984
  • Fandom:
    Nintendo

Posted 14 July 2013 - 08:49 PM

It just doesn't matter. I personally expect AC4, Blacklist, Watch Dogs, Batman AO, and COD to all run at parity with the 360 versions this year on Wii U. Devs just aren't going to put the time in on Wii U to try and make more of it. They feel like it's a job well done to get the Wii U running at parity with the 360.

#10 routerbad

routerbad

    Lakitu

  • Section Mods
  • 2,013 posts
  • NNID:routerbad
  • Fandom:
    Zelda, Mario, Halo, Star Trek

Posted 14 July 2013 - 08:59 PM

It just doesn't matter. I personally expect AC4, Blacklist, Watch Dogs, Batman AO, and COD to all run at parity with the 360 versions this year on Wii U. Devs just aren't going to put the time in on Wii U to try and make more of it. They feel like it's a job well done to get the Wii U running at parity with the 360.

That may have been well and good until someone ported a game in two months that looked closer to the PC version, without rewriting the engine to take advantage of the differences in CPU or GPU architecture. The games will look much better than the 360 versions.



#11 GAMER1984

GAMER1984

    Lakitu

  • Members
  • 2,036 posts
  • NNID:gamer1984
  • Fandom:
    Nintendo

Posted 14 July 2013 - 09:15 PM

That may have been well and good until someone ported a game in two months that looked closer to the PC version, without rewriting the engine to take advantage of the differences in CPU or GPU architecture. The games will look much better than the 360 versions.


I'm happy for any developer to prove me wrong. Splinter Cell Blacklist is up first... let's see. I haven't heard Ubi say anything about it yet, with just about a month and some days to go.

#12 routerbad

routerbad

    Lakitu

  • Section Mods
  • 2,013 posts
  • NNID:routerbad
  • Fandom:
    Zelda, Mario, Halo, Star Trek

Posted 14 July 2013 - 09:24 PM

I'm happy for any developer to prove me wrong. Splinter Cell Blacklist is up first... let's see. I haven't heard Ubi say anything about it yet, with just about a month and some days to go.

You aren't an authority in game development; the developers don't have to "prove" anything. They just make the games for the platforms that the brass tells them to make the games for. Don't expect any developer or publisher to come out and dignify the ridiculous presumption that for some reason they aren't going to take advantage of better hardware by making a statement about it. They've talked about the next-gen engine they are using for each individual platform. The Wii U version of the engine will be more capable than the 360 version of the engine, in all cases.



#13 GAMER1984

GAMER1984

    Lakitu

  • Members
  • 2,036 posts
  • NNID:gamer1984
  • Fandom:
    Nintendo

Posted 14 July 2013 - 09:37 PM

You aren't an authority in game development; the developers don't have to "prove" anything. They just make the games for the platforms that the brass tells them to make the games for. Don't expect any developer or publisher to come out and dignify the ridiculous presumption that for some reason they aren't going to take advantage of better hardware by making a statement about it. They've talked about the next-gen engine they are using for each individual platform. The Wii U version of the engine will be more capable than the 360 version of the engine, in all cases.


Like I said, a little over a month to go; we shall see. I'm ready to buy the game, but if it's a crappy port (on par with the 360) then that money will go to Pikmin 3.

Edited by GAMER1984, 14 July 2013 - 09:38 PM.


#14 MorbidGod

MorbidGod

    Hammer Bro.

  • Members
  • 1,717 posts

Posted 14 July 2013 - 09:56 PM

It just doesn't matter. I personally expect AC4, Blacklist, Watch Dogs, Batman AO, and COD to all run at parity with the 360 versions this year on Wii U. Devs just aren't going to put the time in on Wii U to try and make more of it. They feel like it's a job well done to get the Wii U running at parity with the 360.


Problem is, WE HAVE seen Watch Dogs graphics on the Wii U, and it looks just as good as the PS4 version. The other games I don't know; it all depends on the team working on each game.
Whovian12 -- Nintendo Network ID.

#15 GAMER1984

GAMER1984

    Lakitu

  • Members
  • 2,036 posts
  • NNID:gamer1984
  • Fandom:
    Nintendo

Posted 14 July 2013 - 10:03 PM

Problem is, WE HAVE seen Watch Dogs graphics on the Wii U, and it looks just as good as the PS4 version. The other games I don't know; it all depends on the team working on each game.


That was 10 seconds of footage from a third-party game trailer by Nintendo. Yeah, it would be nice if, you know, Ubi demoed the game on Wii U like they did for the PS4, you know, the console that hasn't even been released yet, but not the console that has like 4 million users. I really think these publishers think we are that dumb.

#16 Alianjaro

Alianjaro

    Pokey

  • Members
  • 1,317 posts
  • Fandom:
    Monster Hunter, Legend of Zelda

Posted 15 July 2013 - 04:19 AM

You aren't an authority in game development; the developers don't have to "prove" anything. They just make the games for the platforms that the brass tells them to make the games for. Don't expect any developer or publisher to come out and dignify the ridiculous presumption that for some reason they aren't going to take advantage of better hardware by making a statement about it. They've talked about the next-gen engine they are using for each individual platform. The Wii U version of the engine will be more capable than the 360 version of the engine, in all cases.

Since we are the ones buying games, we kind of are an authority. Not alone, but if people group up, they can make a difference (cf. tripping being removed in SSB for Wii U). Many devs do care about people's opinions. Then AAA publishers intervene and force them to keep silent, and that's why we assume stuff like that.

#17 Goodtwin

Goodtwin

    Bullet Bill

  • Members
  • 356 posts

Posted 15 July 2013 - 07:56 AM

The biggest problem I see is that a pretty great bandwidth gets limited to a pretty slow one whenever it needs to access that 1 GB of main memory. You can't just add the two bandwidths together like the post I quoted was doing; when that eDRAM (or eSRAM, a la Xbone) is interfacing with main RAM, it's doing so at main RAM's speed.

 

 

 

This is absolutely true. A lot of people have wanted to just add up the bandwidth numbers and declare the Wii U as having phenomenal bandwidth, as if it's just as good as the memory bandwidth on the PS4, and anyone with some common sense knows that 32 MB of super high bandwidth can only do so much for the slow bandwidth of the main memory. The question is where the tipping point is for the hardware: how powerful does a GPU need to be before it's truly being limited by the memory? With the Wii U, memory management is very important: what can be done in the eDRAM, and what absolutely needs to be stored there. You would never want data eating up valuable space in the eDRAM when it could be stored in the main memory with no issue. All this is true for the cache on the CPU as well. So even if the eDRAM's bandwidth is 256 GB/s, the actual memory performance when used correctly may only be as good as a traditional memory setup with 35-50 GB/s of bandwidth.
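
To put rough numbers on that, here is a back-of-the-envelope sketch; the 256 GB/s figure is the hypothetical one above and the hit rates are assumptions:

    # The better the hot data (framebuffers, reused textures) is kept in eDRAM,
    # the closer the effective figure gets to a mid-range unified memory setup.
    main_bw, edram_bw = 12.8, 256.0   # GB/s

    def effective(hit_rate):
        # hit_rate = fraction of memory traffic served out of the eDRAM
        return 1.0 / (hit_rate / edram_bw + (1.0 - hit_rate) / main_bw)

    for h in (0.5, 0.8, 0.95):
        print(h, round(effective(h), 1))
    # 0.5  -> 24.4 GB/s
    # 0.8  -> 53.3 GB/s
    # 0.95 -> 131.3 GB/s

Roughly a 70-75% eDRAM hit rate lands in the 35-50 GB/s ballpark mentioned above.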

 

Like Shin'en said, everything matches up very well. The GPU isn't memory bandwidth starved, but at the same time there isn't a ton of extra memory bandwidth going to waste because the GPU isn't powerful enough to process as much data as the memory can supply. There is a reason Sony didn't choose this memory setup for the PS4, and that was ease of programming. It's obvious that many developers are either not capable, or simply don't have the resources, to take advantage of the Wii U's unique architecture. It is funny that even Sony acknowledges that, from an outright performance standpoint, eDRAM with slower main memory can work very well, but it's not as easy to program for, and that's why they chose to go with high-bandwidth main memory.



#18 routerbad

routerbad

    Lakitu

  • Section Mods
  • 2,013 posts
  • NNID:routerbad
  • Fandom:
    Zelda, Mario, Halo, Star Trek

Posted 15 July 2013 - 10:58 AM

Since we are the ones buying games, we kind of are an authority. Not alone, but if people group up, they can make a difference (cf. tripping being removed in SSB for Wii U). Many devs do care about people's opinions. Then AAA publishers intervene and force them to keep silent, and that's why we assume stuff like that.

I understand where you are coming from, and I agree. The problem is that they've said publicly before that all of the next-gen versions of games look "the same," but that isn't enough for a minority of people who feel it needs to be "proved," and the devs aren't going to give a damn about that.



This is absolutely true. A lot of people have wanted to just add up the bandwidth numbers and declare the Wii U as having phenomenal bandwidth, as if it's just as good as the memory bandwidth on the PS4, and anyone with some common sense knows that 32 MB of super high bandwidth can only do so much for the slow bandwidth of the main memory. The question is where the tipping point is for the hardware: how powerful does a GPU need to be before it's truly being limited by the memory? With the Wii U, memory management is very important: what can be done in the eDRAM, and what absolutely needs to be stored there. You would never want data eating up valuable space in the eDRAM when it could be stored in the main memory with no issue. All this is true for the cache on the CPU as well. So even if the eDRAM's bandwidth is 256 GB/s, the actual memory performance when used correctly may only be as good as a traditional memory setup with 35-50 GB/s of bandwidth.

 

Like Shin'en said, everything matches up very well. The GPU isn't memory bandwidth starved, but at the same time there isn't a ton of extra memory bandwidth going to waste because the GPU isn't powerful enough to process as much data as the memory can supply. There is a reason Sony didn't choose this memory setup for the PS4, and that was ease of programming. It's obvious that many developers are either not capable, or simply don't have the resources, to take advantage of the Wii U's unique architecture. It is funny that even Sony acknowledges that, from an outright performance standpoint, eDRAM with slower main memory can work very well, but it's not as easy to program for, and that's why they chose to go with high-bandwidth main memory.

Keep in mind that when they develop their games they should know what assets are coming when, and prepare to manage the eDRAM accordingly. That's why Shin'en said that devs need to work to maximize performance: the potential is there for a very high theoretical bandwidth, but it has to be managed, and devs can't be caught asleep at the wheel.
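
A minimal sketch of that "know what's coming and stage it ahead of time" idea, with invented names and sizes; nothing here reflects an actual Wii U API:

    # Greedily stage the hottest upcoming assets into whatever eDRAM is left
    # after the framebuffers; everything else streams from main RAM as needed.
    EDRAM_TEXTURE_BUDGET = 14 * 1024 * 1024   # bytes, assumed leftover space

    def plan_uploads(upcoming_assets):
        """upcoming_assets: list of (name, size_bytes), hottest first."""
        staged, used = [], 0
        for name, size in upcoming_assets:
            if used + size > EDRAM_TEXTURE_BUDGET:
                continue   # doesn't fit; stream it from main RAM instead
            staged.append(name)
            used += size
        return staged

    print(plan_uploads([("hero_diffuse", 4 << 20),
                        ("terrain_tiles", 8 << 20),
                        ("skybox", 6 << 20)]))   # ['hero_diffuse', 'terrain_tiles']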

 

We also know that developers aren't being limited to the bandwidth of the main memory pool, either. Somehow they are getting good bandwidth, but they haven't really said how much or how they are getting it.



#19 ugguy

ugguy

    Red Koopa Troopa

  • Members
  • 60 posts
  • NNID:uglyguy

Posted 15 July 2013 - 12:35 PM

:blink:

So after reading all this, is this a bad thing or a good thing?



#20 Goodtwin

Goodtwin

    Bullet Bill

  • Members
  • 356 posts

Posted 15 July 2013 - 12:59 PM

I understand where you are coming from, and I agree. The problem is that they've said publicly before that all of the next-gen versions of games look "the same," but that isn't enough for a minority of people who feel it needs to be "proved," and the devs aren't going to give a damn about that.



Keep in mind that when they develop their games they should know what assets are coming when, and prepare to manage the eDRAM accordingly. That's why Shin'en said that devs need to work to maximize performance: the potential is there for a very high theoretical bandwidth, but it has to be managed, and devs can't be caught asleep at the wheel.

 

We also know that developers aren't being limited to the bandwidth of the main memory pool, either. Somehow they are getting good bandwidth, but they haven't really said how much or how they are getting it.

 

Developers could be confusing GPU performance issues with improper use of the eDRAM. GPU performance cannot be maximized without proper use of the eDRAM; without using it properly, the GPU will not be able to run at its maximum potential. It's up to the developer to make sure they are getting the most bang for their buck with the eDRAM. From what I can gather, this means the frame buffers/Z-buffer would stay in the eDRAM, saving tons of bandwidth. However, that only leaves about 12-16 MB for textures, so the majority of textures would still need to come from the main memory pool.
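
Rough arithmetic behind that 12-16 MB estimate; the buffer mix here is only an assumption for illustration, not confirmed Wii U usage:

    # 32-bit colour and depth buffers at 720p, plus the GamePad stream and one
    # intermediate render target; the exact mix is a guess.
    MB = 1024 * 1024

    def rgba_bytes(w, h):
        return w * h * 4          # 4 bytes per pixel

    tv_color  = 2 * rgba_bytes(1280, 720)   # double-buffered 720p target
    tv_depth  = rgba_bytes(1280, 720)       # depth/stencil
    pad_color = 2 * rgba_bytes(854, 480)    # double-buffered GamePad output
    extra_rt  = rgba_bytes(1280, 720)       # one intermediate render target

    used = tv_color + tv_depth + pad_color + extra_rt
    print(round(used / MB, 1))               # ~17.2 MB of buffers
    print(round((32 * MB - used) / MB, 1))   # ~14.8 MB left for textures

Dropping the extra render target or the GamePad buffers moves the number around, which is presumably why the estimate is a range.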





