
Shin'en Explains Wii U EDRAM Usage



#81 tboss


    Pokey

  • Members
  • 1,242 posts

Posted 26 July 2013 - 07:06 AM

That's the 32 MB embedded pool of RAM on the GPU. Completely different subject. I mean, it's nice, but nothing out of the ordinary.

These are the actual memory banks attached to each individual shader unit. The ones on the Wii U GPU are abnormally large, about twice as big as they should be.

... so is it possible to pre-stream textures/info for the next instruction into that memory, so the GPU can immediately do said task and eliminate/reduce the gap before its next job?
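
(A minimal sketch of the double-buffering idea behind that question, in Python. All the names and the bank/job objects are made up for illustration; this is not actual Wii U API code, just the general prefetch pattern.)

```python
import threading

# Toy stand-ins for "local memory banks" and "jobs": while the GPU works
# on the data already in one bank, the next job's data is streamed into
# the other, so the gap between jobs shrinks or disappears.
def prefetch(job, bank):
    bank["data"] = f"assets for {job}"        # pretend copy from main RAM

def execute(job, bank):
    print(f"executing {job} using {bank['data']}")

def run_jobs(jobs):
    banks = [{"data": None}, {"data": None}]
    current = 0
    prefetch(jobs[0], banks[current])         # fill the first bank up front
    for i, job in enumerate(jobs):
        loader = None
        if i + 1 < len(jobs):
            loader = threading.Thread(
                target=prefetch, args=(jobs[i + 1], banks[1 - current]))
            loader.start()                    # stream the next job's data
        execute(job, banks[current])          # work on the current bank
        if loader:
            loader.join()                     # next bank is now ready
            current = 1 - current

run_jobs(["draw terrain", "draw characters", "post-process"])
```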



#82 Arkhandar


    Dry Bones

  • Members
  • 479 posts
  • Fandom:
    Zelda, Metroid, Mario, Kirby, DK

Posted 26 July 2013 - 03:11 PM

That's the 32 MB embedded pool of RAM on the GPU. Completely different subject. I mean, it's nice, but nothing out of the ordinary.

These are the actual memory banks attached to each individual shader unit. The ones on the Wii U GPU are abnormally large, about twice as big as they should be.

Could you give me a quote, please, or some sort of background, if it isn't asking too much? xD


If you try to fail and succeed, which have you done?


#83 Cloud Windfoot Omega


    Cheep-Cheep

  • Members
  • 148 posts

Posted 26 July 2013 - 07:58 PM

Could you give me a quote, please, or some sort of background, if it isn't asking too much? xD

You can see it in the die picture.



#84 3Dude


    Whomp

  • Section Mods
  • 5,482 posts

Posted 26 July 2013 - 08:54 PM

You can see it in the die picture.



 


#85 cupoftea


    Green Koopa Troopa

  • Members
  • 40 posts

Posted 27 July 2013 - 12:05 AM

If the Wii U only needed to stream content from the disc, then why does it have such horrible loading times for most games? That would render the main RAM pretty much useless.

WHAT HORRIBLE LOADING TIMES? THE ONES PORTED ACROSS FROM THE PS3, ETC.?

 

BLAME THE DEVS. BLAMING A PLASTIC BOX FOR HUMAN LAZINESS? PLEASE STOP.



#86 Cloud Windfoot Omega


    Cheep-Cheep

  • Members
  • 148 posts

Posted 27 July 2013 - 12:17 AM

WHAT HORRIBLE LOADING TIMES? THE ONES PORTED ACROSS FROM THE PS3, ETC.?

 

BLAME THE DEVS. BLAMING A PLASTIC BOX FOR HUMAN LAZINESS? PLEASE STOP.

I still want to know what caused LEGO City to have these problems, unless it was originally designed for another system or used an engine they did not fully tailor to the system.



#87 Alex Atkin UK


    Boo

  • Members
  • 528 posts

Posted 27 July 2013 - 01:17 AM

I still want to know what caused LEGO City to have these problems, unless it was originally designed for another system or used an engine they did not fully tailor to the system.

 

Pretty sure it's using an engine not tailored to the system, as I believe they mentioned that the engine was a port of the main LEGO engine with improvements. It meant they weren't starting from scratch, as this game was not by the main LEGO developers (who wrote the engine).

 

That also explains the dodgy frame rate, although personally I have played LEGO Lord of the Rings on PC and LEGO Batman 2 on Xbox 360, and I think Undercover looks (and controls) better than both. I was surprised that in LEGO Lord of the Rings at top graphical settings in 1080p, when those round LEGO pieces are close to the camera they look REALLY blocky, but in Undercover they look perfect. Although I really wish the draw distance for the bricks was higher on Wii U; it makes spotting some of them tricky (again, that's a legacy of how all LEGO games handle draw distance).

 

I'm really looking forward to what improvements they make for LEGO on next-gen, as it's pushed out some games with quite pretty graphics this gen. If you really look around at the non-LEGO scenery in the games, it's very nice indeed.


Edited by Alex Atkin UK, 27 July 2013 - 01:18 AM.

Sheffield 3DS | Steam & XBOX: Alex Atkin UK | PSN & WiiU: AlexAtkinUK

 

How to improve the Wii U download speed.


#88 Arkhandar


    Dry Bones

  • Members
  • 479 posts
  • Fandom:
    Zelda, Metroid, Mario, Kirby, DK

Posted 27 July 2013 - 07:52 AM

You can see it in the die picture.

I'm not that experienced in analyzing GPU die pictures. 


If you try to fail and succeed, which have you done?


#89 Cloud Windfoot Omega


    Cheep-Cheep

  • Members
  • 148 posts

Posted 27 July 2013 - 09:05 AM

I'm not that experienced in analyzing GPU die pictures. 

Nothing we can do for you, then; either you trust our expertise, or you remain ignorant so you may continue.



#90 Nintyfan86


    Bob-omb

  • Members
  • 262 posts

Posted 27 July 2013 - 09:21 AM

Don't dwell too much on what 'they' say. Time will tell.

That being said, Google 'Amdahl's law'. More cores can only get you so far; you still need powerful single cores and per-thread performance for many operations important to what video games do.

OK, I have read enough about Amdahl's Law to potentially butcher a follow-up question (scratch potentially, insert probably).

 

If we take parallel processing, and the fact that some operations will take X amount of time to complete regardless of the time saved on the tasks completed in parallel, is it safe to suggest that general code will face diminishing returns with added cores?

 

I cite the Battlefield 3 example, and the use of a Bulldozer or Sandy/Ivy Bridge i5/i7 to play the campaign, and then moving on to multiplayer. Given what DICE has shown about the engine, the campaign couldn't care less until you get down to 1 core, yet multiplayer generally does better on the i5 than the Bulldozer (or Piledriver, etc.).

 

My follow-up question is now this:

 

Since Amdahl's Law will always be in effect, will there always be that one operation that brings diminishing returns to an increased core count? I realize how ridiculous this sounds, given the definition of a law, but is there a comparable example of a 1-core vs. 2-core situation, or otherwise, where a program benefited from more cores/hardware-software hyperthreading?
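
(For reference, the law itself is a one-line formula. A minimal Python sketch with purely illustrative numbers:)

```python
# Amdahl's Law: if a fraction p of a program can run in parallel, the
# best possible speedup on n cores is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, the serial 5% caps the
# speedup at 1 / 0.05 = 20x, no matter how many cores are added.
for n in (1, 2, 4, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
# 1 -> 1.0, 2 -> 1.9, 4 -> 3.48, 8 -> 5.93, 64 -> 15.42, 1024 -> 19.64
```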



#91 3Dude


    Whomp

  • Section Mods
  • 5,482 posts

Posted 27 July 2013 - 09:43 AM

OK, I have read enough about Amdahl's Law to potentially butcher a follow-up question (scratch potentially, insert probably).

If we take parallel processing, and the fact that some operations will take X amount of time to complete regardless of the time saved on the tasks completed in parallel, is it safe to suggest that general code will face diminishing returns with added cores?

I cite the Battlefield 3 example, and the use of a Bulldozer or Sandy/Ivy Bridge i5/i7 to play the campaign, and then moving on to multiplayer. Given what DICE has shown about the engine, the campaign couldn't care less until you get down to 1 core, yet multiplayer generally does better on the i5 than the Bulldozer (or Piledriver, etc.).

My follow-up question is now this:

Since Amdahl's Law will always be in effect, will there always be that one operation that brings diminishing returns to an increased core count? I realize how ridiculous this sounds, given the definition of a law, but is there a comparable example of a 1-core vs. 2-core situation, or otherwise, where a program benefited from more cores/hardware-software hyperthreading?


All programs that can be split up into separate parallel jobs will greatly benefit. Texturing and shaders, for example, fit very well into this group. So do 'pseudo-physics', like the old Havok stuff popular on PS360, where everything, no matter its size or weight, flies around like a cardboard box.

These greatly benefit from more threads and cores; however, once you add enough of them, the overhead of keeping all those cores and threads synchronized and properly communicating begins to erode the performance gains... that is, if you are attempting to use them all on just one process or application, like, say, a video game.

For multitasking, doing something completely different at the same time, like processing who from your list is online, receiving messages, downloading a show, recording gameplay, maintaining other operations so they can instantly be switched to...

More cores are great, and to that end, it's likely why there are so many.

However, many general-purpose tasks, like game code, AI, and things that aren't easily predictable or are simply 'going through the motions', can only be handled sequentially, so no number of cores will help speed them up.

Only powerful single-thread/core performance will help.

DICE created Frostbite to... completely avoid this, as the PS360 sucked at single-thread performance and excelled at parallelism.

It made sense, as Battlefield was multiplayer-only, with no real need for AI or heavily structured game code at all.

It appears the rise of moronic bubblegum cinematic linear roller-coaster games gave them the confidence to give BF a single-player campaign no one wants. A very linear, on-the-rails experience (not implying it's an on-rails game, but simply that the same stuff will happen the same way every time you activate the event), but super cinemariffic. The kind of thing you don't really need strong general-purpose processing for, which is why DICE says they no longer even need a real CPU.
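
(To make the synchronization point concrete: extend the Amdahl sketch above with a toy per-core coordination cost. The 0.002 constant is made up, purely illustrative.)

```python
# Amdahl's Law plus a toy synchronization cost: each extra core adds a
# small fixed coordination overhead, so the curve peaks and then falls.
def speedup_with_overhead(p, n, sync_cost=0.002):
    return 1.0 / ((1.0 - p) + p / n + sync_cost * n)

for n in (2, 4, 8, 16, 32, 64, 128):
    print(n, round(speedup_with_overhead(0.95, n), 2))
# 2 -> 1.89, 4 -> 3.38, 8 -> 5.41, 16 -> 7.07, 32 -> 6.96, 64 -> 5.19, 128 -> 3.19
# Gains rise, peak around 16 cores here, then erode as the cost of
# keeping the cores in sync outweighs the parallel savings.
```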


 


#92 Nintyfan86


    Bob-omb

  • Members
  • 262 posts

Posted 27 July 2013 - 11:25 AM

All programs that can be split up into separate parallel jobs will greatly benefit. Texturing and shaders, for example, fit very well into this group. So do 'pseudo-physics', like the old Havok stuff popular on PS360, where everything, no matter its size or weight, flies around like a cardboard box.

These greatly benefit from more threads and cores; however, once you add enough of them, the overhead of keeping all those cores and threads synchronized and properly communicating begins to erode the performance gains... that is, if you are attempting to use them all on just one process or application, like, say, a video game.

For multitasking, doing something completely different at the same time, like processing who from your list is online, receiving messages, downloading a show, recording gameplay, maintaining other operations so they can instantly be switched to...

More cores are great, and to that end, it's likely why there are so many.

However, many general-purpose tasks, like game code, AI, and things that aren't easily predictable or are simply 'going through the motions', can only be handled sequentially, so no number of cores will help speed them up.

Only powerful single-thread/core performance will help.

DICE created Frostbite to... completely avoid this, as the PS360 sucked at single-thread performance and excelled at parallelism.

It made sense, as Battlefield was multiplayer-only, with no real need for AI or heavily structured game code at all.

It appears the rise of moronic bubblegum cinematic linear roller-coaster games gave them the confidence to give BF a single-player campaign no one wants. A very linear, on-the-rails experience (not implying it's an on-rails game, but simply that the same stuff will happen the same way every time you activate the event), but super cinemariffic. The kind of thing you don't really need strong general-purpose processing for, which is why DICE says they no longer even need a real CPU.

I appreciate your thorough response, and as always, your time in answering the question. 

 

I can understand how BF and COD are examples of the roller-coaster linear games; however, they all seem like that from some perspective, even if the possibilities become expanded. I just finished Far Cry 3 on PC. The same events would occur over and over, or at random: cause and effect based on what you did. Obviously not scripted like the former examples, but I see it as more like "event A activates event B if parameter 3 is met". Is this what represents general game code, and thus faces diminished returns from additional cores (I guess this is why they do not bother with hyperthreading)?

 

This is such great information for someone that has been misled through benchmarks and 'teh optimizations' for PC hardware.



#93 3Dude


    Whomp

  • Section Mods
  • 5,482 posts

Posted 27 July 2013 - 12:06 PM

I appreciate your thorough response, and as always, your time in answering the question.

I can understand how BF and COD are examples of the roller-coaster linear games; however, they all seem like that from some perspective, even if the possibilities become expanded. I just finished Far Cry 3 on PC. The same events would occur over and over, or at random: cause and effect based on what you did. Obviously not scripted like the former examples, but I see it as more like "event A activates event B if parameter 3 is met". Is this what represents general game code, and thus faces diminished returns from additional cores (I guess this is why they do not bother with hyperthreading)?

This is such great information for someone that has been misled through benchmarks and 'teh optimizations' for PC hardware.


Well, yes and no. The selection process is sequential, but it's so simplistic it doesn't really matter. The events may happen at random, or in a different order, but as you stated, they are the same pre-scripted events, completely defeating the purpose.

What I'm talking about is things like emergent gameplay, like Minecraft, where you never know what's going to happen. Just look on YouTube at all the crazy one-of-a-kind stuff that goes on,

or procedural physics/AI.

In Frostbite 3, for example, DICE is touting their destruction engine, which is nothing more than pre-scripted model swapping on steroids. EVERYTHING has destruction models and animations, but they are always the same. It's quite the lazy waste of power: using more power to upgrade old tricks instead of creating a way to do it for real with that power.

It's the difference between Red Faction and Black back on PS2/Xbox. Both focused heavily on destruction of the environment; however, Black was completely pre-scripted, simply swapping out normal assets for premade destroyed assets, always the same, every time: same broken spot, same scratch, same everything.

Red Faction, meanwhile, computed the damage area and created, in real time, new polygons/assets to show the damage. You could literally carve tunnels in the walls with a rocket launcher.

Well, Black's solution can be accelerated by simply adding more memory and more threads/cores, and it's very simple to do now, as it's become commonplace.

Red Faction's can't be accelerated that way, since it can't be predicted; it can only be calculated once the player provides the place, size, direction, and nature of the change. You can't simply have a bunch of premade damage scenarios waiting around in memory; it needs stronger single-core/thread performance to see improvement. It's also much easier to simply ignore this method in favor of the well-established, predictable method from the N64/PlayStation days.
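
(A minimal sketch of the contrast, in Python, with entirely made-up names and geometry; neither game's actual code.)

```python
# Black-style: pre-scripted swap. Every outcome is premade, so it can be
# cached in memory and looked up cheaply; the result never varies.
DESTROYED_VERSIONS = {"wall_01": "wall_01_broken", "crate_02": "crate_02_broken"}

def scripted_damage(model_name):
    return DESTROYED_VERSIONS.get(model_name, model_name)

# Red Faction-style: geometry is recomputed at runtime from the actual
# hit, so nothing exists until the player supplies the place, size, and
# direction of the change. Vertices inside the blast radius are pushed
# away from the impact point, proportionally to how close they are.
def procedural_damage(vertices, impact, radius):
    damaged = []
    for x, y, z in vertices:
        dx, dy, dz = x - impact[0], y - impact[1], z - impact[2]
        dist = (dx * dx + dy * dy + dz * dz) ** 0.5
        if 0.0 < dist < radius:
            push = (radius - dist) / radius
            x, y, z = (x + dx / dist * push,
                       y + dy / dist * push,
                       z + dz / dist * push)
        damaged.append((x, y, z))
    return damaged

print(scripted_damage("wall_01"))
print(procedural_damage([(0.0, 0.0, 1.0), (5.0, 0.0, 0.0)], (0.0, 0.0, 0.0), 2.0))
```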


 


#94 Alex Atkin UK


    Boo

  • Members
  • 528 posts

Posted 28 July 2013 - 06:59 AM

That's very disappointing to hear, as that is exactly the sort of thing you would expect to improve on a next-gen console, rather than being stuck with the same.

 

Does that also mean that GPGPU is useless for that sort of thing, as its main basis is parallelism? I'm not very mathematically minded, so I don't really understand what maths is used for what sort of thing and which type of processor is best for it.


Sheffield 3DS | Steam & XBOX: Alex Atkin UK | PSN & WiiU: AlexAtkinUK

 

How to improve the Wii U download speed.


#95 3Dude


    Whomp

  • Section Mods
  • 5,482 posts

Posted 28 July 2013 - 07:54 AM

That's very disappointing to hear, as that is exactly the sort of thing you would expect to improve on a next-gen console, rather than being stuck with the same.

Does that also mean that GPGPU is useless for that sort of thing, as its main basis is parallelism? I'm not very mathematically minded, so I don't really understand what maths is used for what sort of thing and which type of processor is best for it.


Well, no; for parts of it, it can be very beneficial, like calculating the finite-element solution after the interaction has taken place. That can be greatly parallelized, and the more powerful this end is, the higher-resolution/more accurate your changes are going to be.

But selecting those changes based on player interaction, that's CPU-bound, and it needs to be fast. More cores/threads can work on DIFFERENT interactions simultaneously, but they can't gang up to all expedite one.

Although everything from the past two gens is powerful enough to do this, almost nobody wants to use it, because it's different, and being different, it throws a wrench into the AAA automated game factory.

Red Faction did it back on PS2.

Red Faction did it again on PS360, and The Force Unleashed did it on PS360. I believe MGS Rising may have also.

On Wii, Wii Sports did it (tennis targets breaking), Excite Truck (not Excitebots: real-time truck deformation), Sports Resort (cutting stuff), and Skyward Sword (cutting stuff, blowing sand/creatures under sand).

On Wii U it's in Zelda's adventure (cut damage appearing on enemies, splitting the cloth outer layer precisely where the sword makes contact, and Chu slicing).

At this point, it's really not power holding this back; it's the poisonous sloth of the industry at large.

On the bright side, Nintendo seems heavily interested in this interaction technology, as they had more of it on their underpowered Wii than anywhere else last gen. So Wii U owners can expect some in the future; I'm particularly looking forward to Zelda.
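
(A toy illustration, in Python, of the split being described; not any engine's real code. The per-element solve is independent work that could be spread across any number of cores or GPGPU threads, while the interaction chain is inherently serial.)

```python
# Data-parallel half: every element gets the same independent update,
# so this map could be split across cores/shaders with near-linear gains.
def solve_elements(stresses):
    return [s * 0.9 for s in stresses]

# Serial half: each step consumes the previous step's output, so extra
# cores cannot "gang up" to finish one interaction faster.
def resolve_interaction(initial_state, steps):
    state = initial_state
    for step in steps:
        state = step(state)      # step N cannot start before step N-1 ends
    return state

print(solve_elements([1.0, 2.0, 3.0]))
print(resolve_interaction(10.0, [lambda s: s - 1, lambda s: s * 0.5]))
```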


 


#96 Alex Atkin UK


    Boo

  • Members
  • 528 posts

Posted 28 July 2013 - 09:34 AM

Really looking forward to a new Zelda in HD. Then again, I haven't even played Skyward Sword yet despite owning it, but I didn't feel Twilight Princess was held back at all by the graphics anyway (even though it was just slightly upgraded from GameCube).

 

Then again, fun games aren't exactly tied to graphics anyway. My favourite games are pretty much Sonic Adventure, Conker's Bad Fur Day and Star Fox Adventures. Conker sure did benefit from the graphics upgrade on Xbox, but it was no less of a game on N64.

 

As is often said, though, the kind of games Nintendo excels at have always managed fine. They are insanely good at hiding limitations with clever art styling. I just hope that doesn't discourage them from really pushing the Wii U to its limit.


Edited by Alex Atkin UK, 28 July 2013 - 09:34 AM.

Sheffield 3DS | Steam & XBOX: Alex Atkin UK | PSN & WiiU: AlexAtkinUK

 

How to improve the Wii U download speed.


#97 GAMER1984


    Lakitu

  • Members
  • 2,036 posts
  • NNID:gamer1984
  • Fandom:
    Nintendo

Posted 01 August 2013 - 08:30 PM

Well, no; for parts of it, it can be very beneficial, like calculating the finite-element solution after the interaction has taken place. That can be greatly parallelized, and the more powerful this end is, the higher-resolution/more accurate your changes are going to be.

But selecting those changes based on player interaction, that's CPU-bound, and it needs to be fast. More cores/threads can work on DIFFERENT interactions simultaneously, but they can't gang up to all expedite one.

Although everything from the past two gens is powerful enough to do this, almost nobody wants to use it, because it's different, and being different, it throws a wrench into the AAA automated game factory.

Red Faction did it back on PS2.

Red Faction did it again on PS360, and The Force Unleashed did it on PS360. I believe MGS Rising may have also.

On Wii, Wii Sports did it (tennis targets breaking), Excite Truck (not Excitebots: real-time truck deformation), Sports Resort (cutting stuff), and Skyward Sword (cutting stuff, blowing sand/creatures under sand).

On Wii U it's in Zelda's adventure (cut damage appearing on enemies, splitting the cloth outer layer precisely where the sword makes contact, and Chu slicing).

At this point, it's really not power holding this back; it's the poisonous sloth of the industry at large.

On the bright side, Nintendo seems heavily interested in this interaction technology, as they had more of it on their underpowered Wii than anywhere else last gen. So Wii U owners can expect some in the future; I'm particularly looking forward to Zelda.

 

 

So I think we had some discussions on this, but I wanted to bring it up. One guy at NeoGAF (fourthstorm) is claiming without a doubt that the Wii U GPU has 176 GFLOPS, no more, no less. So I'm asking you: from all you have seen of the Wii U GPU specs and die shots, do you agree? Here is his latest post.

 

 

I realize I do not have the credentials to convince everyone, but I have seen enough evidence to call this one. The registers in the SIMDs and the true location of the TMUs (T1 and T2) betray the die's secrets. Latte is not so proprietary that it completely reinvents the core elements of AMD's Radeon tech. Look, I didn't want it to be the worst-case scenario either, but in this case, the worst case is what it is. I'm not so arrogant that I just want my own personal theory to be taken as fact (and it's not even mine; I've learned pretty much everything from the contributions of other, more knowledgeable posters... except the TMU/L1 part). Rather, I'm so sure of it that I just want people to know the truth and lay the topic to rest.

 

 

Thoughts?



#98 grahamf


    The Happiness Fairy

  • Members
  • 2,532 posts

Posted 01 August 2013 - 09:25 PM

So I think we had some discussions on this, but I wanted to bring it up. One guy at NeoGAF (fourthstorm) is claiming without a doubt that the Wii U GPU has 176 GFLOPS, no more, no less. So I'm asking you: from all you have seen of the Wii U GPU specs and die shots, do you agree? Here is his latest post.

I realize I do not have the credentials to convince everyone, but I have seen enough evidence to call this one. The registers in the SIMDs and the true location of the TMUs (T1 and T2) betray the die's secrets. Latte is not so proprietary that it completely reinvents the core elements of AMD's Radeon tech. Look, I didn't want it to be the worst-case scenario either, but in this case, the worst case is what it is. I'm not so arrogant that I just want my own personal theory to be taken as fact (and it's not even mine; I've learned pretty much everything from the contributions of other, more knowledgeable posters... except the TMU/L1 part). Rather, I'm so sure of it that I just want people to know the truth and lay the topic to rest.

Thoughts?

I don't really understand what your theory is.



 


#99 3Dude


    Whomp

  • Section Mods
  • 5,482 posts

Posted 01 August 2013 - 09:44 PM

So I think we had some discussions on this, but I wanted to bring it up. One guy at NeoGAF (fourthstorm) is claiming without a doubt that the Wii U GPU has 176 GFLOPS, no more, no less. So I'm asking you: from all you have seen of the Wii U GPU specs and die shots, do you agree? Here is his latest post.

I realize I do not have the credentials to convince everyone, but I have seen enough evidence to call this one. The registers in the SIMDs and the true location of the TMUs (T1 and T2) betray the die's secrets. Latte is not so proprietary that it completely reinvents the core elements of AMD's Radeon tech. Look, I didn't want it to be the worst-case scenario either, but in this case, the worst case is what it is. I'm not so arrogant that I just want my own personal theory to be taken as fact (and it's not even mine; I've learned pretty much everything from the contributions of other, more knowledgeable posters... except the TMU/L1 part). Rather, I'm so sure of it that I just want people to know the truth and lay the topic to rest.

Thoughts?


Well, first, as I explained earlier, flops simply don't transfer linearly across architectures.

So even if it WAS 176 GFLOPS, that's not the same amount of power Xenon gets from 176 GFLOPS.

Second: simply, no.

He dismisses Jim from Chipworks, who said he couldn't recognize the design as coming from any other AMD chips he's worked with (when asked about the 7xx base), because he made up a story in which Jim was unable to identify the brand of the GPU (despite everybody already knowing the brand) because it wasn't printed on the GPU but on the heat spreader.

Jim doesn't NEED to see a brand to recognize where a GPU came from. He didn't acquire his position of employment by being an incompetent buffoon; he's not a game journalist.

You simply can't tell that kind of detail by eyeballing block shapes, unless you can see the makeup of the arithmetic units inside or have a chart, because a logic block of nearly any shape can be given any purpose. You can even have the exact same logic block given a completely different shape simply to maximize the use of die area.

Identifying what logic must sit by which RAM pools is a good idea in theory, but ultimately rather worthless in real-world applications, as these custom layouts are often done by computers, in parts, to maximize efficient use of die space, and are often simply beyond a human's ability to follow. The reason they can be identified at all by their shapes from successor to successor (typically just die shrinks with minor changes) is merely that it's cost-effective to reuse a practical design... In custom hardware it may no longer be practical to maintain the conventional layout, as there may be unconventional changes compared to your conventional designs, such as a massive block of embedded RAM.

And these guys love talking about flops, but they never specify what kind of flop, which is really important. This guy keeps talking about the number of 64-bit registers, but a flop is a 32-bit operation; a double-precision flop is 64-bit. And Nintendo is known for wanting custom, weird things, like cramming a pair of single-precision floats into a single 64-bit register.

Observing him, he just doesn't seem to be trying to find an answer by looking at facts.

He seems to be trying to find facts he can use to support an answer he already came up with.

I don't know what exact number of flops the system is packing. It COULD be 176 GFLOPS. But I'd be willing to bet that if it were, it wouldn't be for any of his reasoning. I feel like it's comparable to someone telling us the sky is blue... because it's made of paint. Except via a much smarter avenue of discourse.
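
(For context, the 176 figure itself is just arithmetic on an assumed shader count; a back-of-the-envelope sketch, assuming the commonly cited 160 stream processors at 550 MHz with one fused multiply-add, i.e. two single-precision ops, per ALU per cycle:)

```python
# Back-of-the-envelope for where "176 GFLOPS" comes from.
# Assumptions, not confirmed specs: 160 stream processors at 550 MHz,
# each doing one fused multiply-add (counted as 2 flops) per cycle.
shaders = 160
clock_ghz = 0.550
flops_per_cycle = 2

gflops = shaders * clock_ghz * flops_per_cycle
print(gflops)   # 176.0 -- the rival 320-shader reading would give 352.0
```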


 


#100 GAMER1984


    Lakitu

  • Members
  • 2,036 posts
  • NNID:gamer1984
  • Fandom:
    Nintendo

Posted 01 August 2013 - 10:25 PM

Well, first, as I explained earlier, flops simply don't transfer linearly across architectures.

So even if it WAS 176 GFLOPS, that's not the same amount of power Xenon gets from 176 GFLOPS.

Second: simply, no.

He dismisses Jim from Chipworks, who said he couldn't recognize the design as coming from any other AMD chips he's worked with (when asked about the 7xx base), because he made up a story in which Jim was unable to identify the brand of the GPU (despite everybody already knowing the brand) because it wasn't printed on the GPU but on the heat spreader.

Jim doesn't NEED to see a brand to recognize where a GPU came from. He didn't acquire his position of employment by being an incompetent buffoon; he's not a game journalist.

You simply can't tell that kind of detail by eyeballing block shapes, unless you can see the makeup of the arithmetic units inside or have a chart, because a logic block of nearly any shape can be given any purpose. You can even have the exact same logic block given a completely different shape simply to maximize the use of die area.

Identifying what logic must sit by which RAM pools is a good idea in theory, but ultimately rather worthless in real-world applications, as these custom layouts are often done by computers, in parts, to maximize efficient use of die space, and are often simply beyond a human's ability to follow. The reason they can be identified at all by their shapes from successor to successor (typically just die shrinks with minor changes) is merely that it's cost-effective to reuse a practical design... In custom hardware it may no longer be practical to maintain the conventional layout, as there may be unconventional changes compared to your conventional designs, such as a massive block of embedded RAM.

And these guys love talking about flops, but they never specify what kind of flop, which is really important. This guy keeps talking about the number of 64-bit registers, but a flop is a 32-bit operation; a double-precision flop is 64-bit. And Nintendo is known for wanting custom, weird things, like cramming a pair of single-precision floats into a single 64-bit register.

Observing him, he just doesn't seem to be trying to find an answer by looking at facts.

He seems to be trying to find facts he can use to support an answer he already came up with.

I don't know what exact number of flops the system is packing. It COULD be 176 GFLOPS. But I'd be willing to bet that if it were, it wouldn't be for any of his reasoning. I feel like it's comparable to someone telling us the sky is blue... because it's made of paint. Except via a much smarter avenue of discourse.

Thanks for responding.





