http://playeressence...fb_source=pubv1
Didn't see this posted, sorry if it was.
Posted 14 July 2013 - 09:59 AM
“Any state, any entity, any ideology that fails to recognize the worth, the dignity, the rights of man, that state is obsolete.” — Rod Serling, “The Twilight Zone,” “The Obsolete Man”
Posted 14 July 2013 - 10:20 AM
Where does the guy in the video say that the Wii U texture bandwidth isn't slow?
GDDR5 and DDR3 are NOT the same: DDR3 benefits from low latency at the expense of bandwidth, and GDDR5 is the opposite.
The other point is that the PS4 doesn't have eDRAM because it doesn't need it; the X1 uses eSRAM to boost the DDR3.
PS4 system bandwidth: 176 GB/s = 176
X1 system bandwidth: 68.3 GB/s (eSRAM bandwidth 102 GB/s) = 170.3
Wii U system bandwidth: 12.8 GB/s (eDRAM bandwidth 70.4 GB/s) = 83.2
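For anyone who wants to sanity-check those figures, here's a quick back-of-the-envelope script. The bus widths and clocks below are the commonly reported launch-era ones (the Wii U numbers in particular are community estimates, not confirmed specs), so treat the output as illustrative only; whether simply adding the two pools together is meaningful gets argued later in the thread.

# Rough peak-bandwidth arithmetic from commonly reported (unconfirmed) bus widths and data rates.
def bandwidth_gbs(bus_bits, transfers_per_sec):
    # Peak bandwidth in GB/s = (bus width in bytes) * (transfers per second)
    return bus_bits / 8 * transfers_per_sec / 1e9

ps4_gddr5  = bandwidth_gbs(256, 5.5e9)    # ~176.0 GB/s
x1_ddr3    = bandwidth_gbs(256, 2.133e9)  # ~68.3 GB/s
x1_esram   = bandwidth_gbs(1024, 0.8e9)   # ~102.4 GB/s (launch-era figure)
wiiu_ddr3  = bandwidth_gbs(64, 1.6e9)     # ~12.8 GB/s
wiiu_edram = bandwidth_gbs(1024, 0.55e9)  # ~70.4 GB/s (assumes a 1024-bit bus at the 550 MHz GPU clock)

# The "= 170.3" and "= 83.2" totals above are just each pair added together:
print(round(x1_ddr3 + x1_esram, 1))      # ~170.7 with these inputs
print(round(wiiu_ddr3 + wiiu_edram, 1))  # ~83.2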
Posted 14 July 2013 - 12:30 PM
This is making my head hurt. *Looks at video owner.* Oh... that guy.
Posted 14 July 2013 - 01:00 PM
Ha ha, guy, eSRAM is way less dense than eDRAM.
Posted 14 July 2013 - 05:26 PM
Ha ha, guy, eSRAM is way less dense than eDRAM.
32 MB of eDRAM is going to have much, much higher bandwidth than 32 MB of eSRAM. The Wii U's eDRAM bandwidth is well into the triple digits.
You don't use SRAM for bandwidth, you use it for low latency.
Nice made-up numbers for Wii U, though.
Since when is the RAM density any indication of its bandwidth?
All the density has influence on is the on-die space necessary.
SRAM even offers higher bandwidth, with much faster access times, but it is also MUCH more expensive...
Does the X1 really use 32 MB of SRAM? That is pretty expensive stuff if true.
Edited by Deaf, 14 July 2013 - 05:30 PM.
Posted 14 July 2013 - 05:37 PM
Yes, this is right. It's more expensive because it uses about four times the transistors; it's static, and it's faster due to being static. But we still know next to nothing about either one, so we can't just assume anything.
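A rough illustration of why the transistor count matters for die area and cost: the textbook 6T SRAM cell versus the one-transistor-plus-capacitor eDRAM cell. Real cell designs and array overheads vary (which is why estimates range from roughly 4x to 6x), so these are ballpark numbers only.

# Ballpark transistor counts for 32 MB of on-die memory (illustration, not real chip data).
CAPACITY_BITS = 32 * 1024 * 1024 * 8          # 32 MB expressed in bits

SRAM_TRANSISTORS_PER_BIT  = 6                 # classic 6T SRAM cell
EDRAM_TRANSISTORS_PER_BIT = 1                 # 1 transistor + 1 capacitor per eDRAM cell

sram_t  = CAPACITY_BITS * SRAM_TRANSISTORS_PER_BIT   # ~1.6 billion transistors
edram_t = CAPACITY_BITS * EDRAM_TRANSISTORS_PER_BIT  # ~0.27 billion transistors

print(f"6T SRAM: ~{sram_t / 1e9:.2f} billion transistors")
print(f"eDRAM:   ~{edram_t / 1e9:.2f} billion transistors (plus one capacitor per cell)")
print(f"ratio:   ~{sram_t / edram_t:.0f}x")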
Posted 14 July 2013 - 07:08 PM
Yup, it sure does use 32 MB of eSRAM. I don't know whether it's 4T or 6T, but regardless, as far as latency is concerned it's going to be pretty lickety-split. Like blazing fast, running circles around the PS4's GDDR5. But... not in bandwidth.
Posted 14 July 2013 - 08:40 PM
Not only is eSRAM less dense, but on-die vs. on-MCM is also very, very different.
Yup, it sure does use 32 MB of eSRAM. I don't know whether it's 4T or 6T, but regardless, as far as latency is concerned it's going to be pretty lickety-split. Like blazing fast, running circles around the PS4's GDDR5. But... not in bandwidth.
As for how eDRAM gets more bandwidth than what should be superior SRAM of the same capacity:
Space = power, and denser means more cells in the same space.
Denser = more cells of a smaller capacity vs. fewer cells of a greater capacity, and more cells = more buses = higher bandwidth.
It's why IBM uses eDRAM with each cell heavily bussed up to emulate the latency of 6T SRAM (the room for those buses would otherwise be taken up by all the transistors SRAM needs!) while keeping the bandwidth of eDRAM for POWER7. IBM states it gets over twice the bandwidth with this eDRAM as you can get with the same capacity of SRAM, while only having slightly worse latency and cycle time than 6T SRAM.
SRAM has fantastic low latency because it doesn't need to refresh; it's static. GameCube and Wii use it, as does Wii U (3 MB of various SRAM/PSRAM embedded on the GPU). But bandwidth isn't a strong bullet point in light of (kind of) recent developments in eDRAM technology. Nintendo wouldn't switch from exotic choices like 1T-SRAM and FCRAM to seemingly ordinary eDRAM for nothing. They've always demanded performance RAM, if nothing else.
The biggest problem I see is that a pretty great bandwidth is limited to a pretty slow bandwidth when it needs to access that 1 GB of main memory. You can't just add the two bandwidths together like the post I quoted was doing; when that eDRAM (or eSRAM, a la Xbone) is interfacing with main RAM, it's doing so at main RAM's speed.
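To make that last point concrete, here's a tiny model of it: any byte that has to come from main memory moves at main memory speed, no matter how fast the embedded pool is. It assumes traffic is served from one pool at a time (no overlap), which is the pessimistic case being described, and the bandwidth figures are the estimates used earlier in the thread, not measurements.

# Illustrative model: effective bandwidth when only part of the traffic is served from eDRAM.
MAIN_RAM_GBS = 12.8   # slow external pool (commonly assumed Wii U DDR3 figure)
EDRAM_GBS    = 70.4   # fast embedded pool (commonly assumed Wii U eDRAM figure)

def effective_bandwidth(fraction_from_edram):
    # Time to move one unit of data: the eDRAM share moves at eDRAM speed, the rest at main RAM speed.
    hit, miss = fraction_from_edram, 1.0 - fraction_from_edram
    return 1.0 / (hit / EDRAM_GBS + miss / MAIN_RAM_GBS)

for hit_rate in (1.0, 0.9, 0.5, 0.0):
    print(f"{hit_rate:.0%} of traffic in eDRAM -> ~{effective_bandwidth(hit_rate):.1f} GB/s effective")
# Under this model the result never approaches the naive 12.8 + 70.4 = 83.2 GB/s sum;
# the moment data has to be pulled from main RAM, the slow link dominates.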
Posted 14 July 2013 - 08:49 PM
That's where developing specifically to manage the smaller pool of eDRAM comes in.
Posted 14 July 2013 - 08:59 PM
It just doesn't matter. I personally expect AC4, Blacklist, Watch Dogs, Batman AO, and COD to all run at parity with the 360 versions this year on Wii U. Devs just aren't going to put the time in on Wii U to try and make more of it. They feel like it's a job well done to get Wii U running at parity with the 360.
That may have been all good and well until someone ported a game in two months that was closer to what the game looked like on PC, without rewriting the engine to take advantage of the differences in CPU or GPU architecture. The games will look much better than the 360 versions.
Posted 14 July 2013 - 09:24 PM
I'm willing for any developer to prove me wrong. Splinter Cell Blacklist is up first... let's see. Haven't heard Ubi say anything about it yet, with just about a month and some days to go.
You aren't an authority in game development; the developers don't have to "prove" anything. They just make the games for the platforms that the brass tells them to make the games for. Don't expect any developer or publisher to come out and address the ridiculous presumption that for some reason they aren't going to take advantage of better hardware by making a statement about it. They've talked about the next-gen engine they are using for each individual platform. The Wii U version of the engine will be more capable than the 360 version of the engine, in all cases.
Posted 14 July 2013 - 10:03 PM
Problem is, we HAVE seen Watch Dogs graphics for Wii U, and it looks just as good as the PS4 version. The other games I don't know; it all depends on the team working on each game.
Posted 15 July 2013 - 04:19 AM
Since we are the ones buying games, we kind of are an authority. Not alone, but if people group up, they can make a difference (cf. tripping being removed in SSB Wii U). Many devs do care about people's opinions. Then AAA publishers intervene and force them to keep silent, and that's why we assume stuff like that.
Posted 15 July 2013 - 07:56 AM
The point about not just adding the two bandwidths together is absolutely true. A lot of people have wanted to add up the bandwidth numbers and declare the Wii U as having phenomenal bandwidth, as if it's just as good as the memory bandwidth on the PS4, and anyone with some common sense knows that 32 MB of super-high bandwidth can only do so much for the slow bandwidth of the main memory. The question is where the tipping point is for the hardware: how powerful does a GPU need to be before it's truly being limited by the memory? With the Wii U, memory management is very important: what can be done in the eDRAM, and what absolutely needs to be stored there. You would never want data in the eDRAM eating up valuable space when it could really be stored in the main memory with no issue. All of this is true for the cache on the CPU as well. So even if the eDRAM's bandwidth is 256 GB/s, the actual memory performance, even when used correctly, may only be as good as a traditional memory setup that has 35-50 GB/s of bandwidth.
Like Shin'en said, everything matches up very well. The GPU isn't memory bandwidth starved, but at the same time there isn't tons of extra memory bandwidth that goes to waste because the GPU isn't powerful enough to process as much data as the memory can supply. There is a reason that Sony didn't choose this memory setup for the PS4, and that was ease of programming. It's obvious that many developers are either not capable, or simply don't have the resources, to take advantage of the Wii U's unique architecture. It is funny that even Sony acknowledges that from an outright performance standpoint the eDRAM with slower main memory can work very well, but it's not as easy to program for, and that's why they chose to go with high-bandwidth main memory.
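Using the same sort of serial-traffic model as the earlier sketch, you can back out how much of the memory traffic would have to be served from the eDRAM for the system to behave like the 35-50 GB/s setup described above. The 256 GB/s eDRAM number is the hypothetical from the post, and the model ignores any overlap between the two pools, so this is illustrative only.

# Solve the simple model 1/effective = h/EDRAM + (1-h)/MAIN for the eDRAM hit rate h.
EDRAM_GBS = 256.0   # hypothetical figure from the post above
MAIN_GBS  = 12.8    # commonly assumed Wii U main memory bandwidth

def hit_rate_for(effective_gbs):
    # Fraction of traffic that must be served from eDRAM to reach the target effective bandwidth.
    return (1.0 / MAIN_GBS - 1.0 / effective_gbs) / (1.0 / MAIN_GBS - 1.0 / EDRAM_GBS)

for target in (35.0, 50.0):
    print(f"~{hit_rate_for(target):.0%} of traffic must hit eDRAM for ~{target:.0f} GB/s effective")
# Roughly two thirds to three quarters of all memory traffic has to stay inside the 32 MB pool
# before the system behaves like a 35-50 GB/s machine under this model.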
Posted 15 July 2013 - 10:58 AM
I understand where you're coming from on gamers having a say, and I agree. The problem is that they've said publicly before that all of the next-gen versions of their games look "the same," and when that isn't enough for a minority of people who feel that it needs to be "proved," they aren't going to give a damn.
As for the eDRAM: keep in mind that when they develop their games, they should know what assets are coming when and prepare to manage the eDRAM accordingly. That's why Shin'en said that devs need to work to maximize performance; the potential is there for a very high theoretical bandwidth, but it has to be managed, and devs can't be caught asleep at the wheel.
We also know that developers aren't being limited to the bandwidth of the main memory pool, either. Somehow they are getting good bandwidth, but they haven't really said how much or how they are getting it.
Posted 15 July 2013 - 12:35 PM
So after reading all this, is this a bad thing or a good thing?
Posted 15 July 2013 - 12:59 PM
Developers could be confusing GPU performance with improper use of the eDRAM. GPU performance cannot be maximized without proper use of the eDRAM; without using it properly, the GPU will not be able to run at its maximum potential. It's up to the developer to make sure they are getting the most bang for their buck with the eDRAM. From what I can gather, this means the frame buffers and Z-buffer would stay in the eDRAM, saving tons of bandwidth. However, this only leaves about 12-16 MB for textures, so the majority of textures would still need to come from the main memory pool.
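As a quick check on that 12-16 MB figure, here's the buffer arithmetic. The formats and buffer counts are assumptions for illustration (32-bit color and depth targets at 720p), not known details of any particular Wii U game; the 12-16 MB the post describes lands between the two setups below.

# Back-of-the-envelope: how much of a 32 MB eDRAM pool do the render targets eat?
EDRAM_MB = 32

def buffer_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 * 1024)

color_720p = buffer_mb(1280, 720, 4)   # 32-bit color target, ~3.5 MB
depth_720p = buffer_mb(1280, 720, 4)   # 32-bit depth/stencil, ~3.5 MB

minimal = 2 * color_720p + depth_720p             # double-buffered color + depth, ~10.5 MB
heavier = minimal + 3 * buffer_mb(1280, 720, 4)   # plus a few extra render targets, ~21.1 MB

for name, used in (("minimal", minimal), ("heavier", heavier)):
    print(f"{name}: ~{used:.1f} MB of buffers, ~{EDRAM_MB - used:.1f} MB of eDRAM left for everything else")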