Wii U's RAM is slower than PS3/Xbox 360.
#81
Posted 01 April 2013 - 07:10 AM
#82
Posted 01 April 2013 - 07:14 AM
If the games really do only have access to two of the bricks of ram, and the OS is on the other two, then if Nintendo were to optimize their OS to work in just one of the bricks of ram, not only would games have an extra 512MB of ram to work with, but the memory bandwidth would go up to 38.4 GB/s, a respectable increase over PS3/360. Not to mention that a lot of GPU effects should be possible because of the edram that the 360 and PS3 simply couldnt do.
There technically is no os...
#83
Posted 01 April 2013 - 07:43 AM
There technically is no os...
Care to explain that? They said the other GB of RAM is for the OS.
#84
Posted 01 April 2013 - 07:55 AM
Goodtwin, on 01 Apr 2013 - 01:57, said:Care to explain that? They said the other GB of RAM is for the OS.
I'm pretty sure it's the media who twisted it into that.
Technically, games actually run on what constitutes the OS; it's a kernel, so it can be shifted around CPU cores as needed. It's pretty tiny.
What people think of as the OS, the GUI of menus and whatnot, is kept in some flash RAM or something, and when you suspend the game it's swapped into mem1 (mem1 is only for foreground tasks).
It's the services, like Miiverse, TVii, and all the stuff Nintendo plans to add, that will utilize that GB of RAM that's been set aside.
Edited by 3Dude, 01 April 2013 - 07:56 AM.
#85
Posted 01 April 2013 - 08:02 AM
So basically it's there so that you can do things like use the internet browser without exiting the game; if the additional RAM wasn't there, that wouldn't be possible. If you had to flush the game out of memory to use the browser or enter Miiverse, it would cause the game to reset.
You're a bag of good info, thanks.
#86
Posted 01 April 2013 - 07:26 PM
After looking up the Hynix RAM specs and disproving this bs,
I decided to look up the Samsung RAM. Surprisingly, I found it was clocked a little slower. MORE surprising was the RAM module itself.
Samsung K4J52324QC-BC14 - 512Mb on a 136 fricking ball grid array.
If it sounds familiar for some reason, I'll save you the time of trying to remember.
It's the exact same fricking RAM chip that was originally in the Xbox 360.
http://www.psu.com/f...60-Memory-Chips
Look familiar? Go ahead, compare the nomenclatures.
http://macroware.wor...osoft-xbox-360/
So the Wii U's RAM chip only gets half the performance of the Xbox 360's RAM chip because Nintendo.
Holy Wii, internet. Holy, Wii.
So where did 12.8 come from? Here's my take.
This thing gets 1.6Gb per second per pin, with 6.4 GB a second per chip max. This was brought up in the teardown, and used as the basis of everything. Well, multiply 6.4 by 2 and you get 12.8.
In actuality this version doesn't get 1.6 per pin; it's 700MHz, so it gets 1.4, for 5.6GB/s 'per chip'.
Well, the Wii U has 2GB of RAM, and only 4 chips. But it uses the exact same nomenclature.
Assuming it's not a mis-label (it happens), the unit is still designated as a 512 megabit chip per its pdf. NOT 512 megaBYTES; this is a huge deal. That means each 'chip' in the Wii U has four entire 360 'chips' in it. The teardown wizards saw 6.4GB per chip and multiplied by the two physical RAM chips they saw. But the documentation states bandwidth per 512 megabit chip, NOT megabytes.
Probably makes a difference.
But guessing mistakes aside, let's get this straight.
700 MHz, 64-bit (DDR3 standard), 2 modules with their own channels, 2 bits per line (also standard).
700*64*2*2 = 179,200 Mb/s. Divide by 8 = 22,400 MB/s, or 22.4 GB/s of bandwidth for the game RAM.
Oh my god, that's almost like, exactly the 360 RAM.
It's like they use the same RAM chips or something.
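For anyone who wants to check the arithmetic, the calculation above can be sketched as a short script (my own sanity check, not from any datasheet; the 700 MHz, 64-bit, dual-channel figures are the assumptions from this post):

```python
# Peak DDR bandwidth from the post's assumptions: I/O clock x bus width
# x channels x transfers-per-clock, then bits -> bytes.

def ddr_bandwidth_gbs(io_clock_mhz, bus_bits, channels, transfers_per_clock=2):
    """Peak bandwidth in GB/s for a DDR-style bus."""
    mbits_per_s = io_clock_mhz * bus_bits * channels * transfers_per_clock
    return mbits_per_s / 8 / 1000  # Mb/s -> MB/s -> GB/s

# The post's figure: 700 * 64 * 2 * 2 = 179,200 Mb/s -> 22,400 MB/s
print(ddr_bandwidth_gbs(700, 64, 2))  # 22.4

# Same answer viewed per chip: 1.4 Gb/s per pin x 32 pins = 5.6 GB/s,
# times the four 512Mb chips described above.
print(round(1.4 * 32 / 8 * 4, 1))  # 22.4
```

Both routes land on the same 22.4 GB/s figure, which is the point of the post.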
Edited by 3Dude, 01 April 2013 - 09:50 PM.
#87
Posted 01 April 2013 - 09:14 PM
3Dude, on 01 Apr 2013 - 07:26 PM, said:This thing gets 1.6Gb per second per pin, with 6.4 GB a second per chip max. This version is a little less than max: at 700MHz it gets 1.4Gb/s per pin, for 5.6 GB/s per chip. Well, they are 512Mb chips; that's megabits, important to remember.
The 360 has 4 of them. 4x5.6=22.4. Sound familiar? It's the bandwidth of the 360.
Well, the Wii U has 2GB of RAM, and only 4 chips. But it uses the exact same nomenclature.
Assuming it's not a mis-label (it happens), the unit is still designated as a 512 megabit chip per its pdf. NOT 512 megaBYTES; this is a huge deal. That means each 'chip' in the Wii U has four entire 'chips' in it. The teardown wizards saw 6.4GB per chip and multiplied by the two RAM chips they saw. But the documentation states bandwidth per 512 megabit chip, NOT megabytes.
What does the documentation say? 5.6GB/s per 512Mb chip. 4 chips per housing.
5.6*4=22.4
But there's another 'chip'.
22.4*2=44.8
How legit are those numbers, 3Dude? Have you double or triple checked them?
#88
Posted 02 April 2013 - 01:46 AM
GAMER1984, on 01 Apr 2013 - 15:28, said:How legit are those numbers, 3Dude? Have you double or triple checked them?
Pretty legit. They come from the Samsung RAM pdf. The same pdf, since they are, well, the exact same fricking RAM chip in both the Wii U and the 360.
The computation is just the standard bandwidth computation.
Edited by 3Dude, 02 April 2013 - 01:57 AM.
#89
Posted 02 April 2013 - 03:31 AM
#90
Posted 02 April 2013 - 03:48 AM
GAMER1984, on 01 Apr 2013 - 21:45, said:So you are saying the Wii U RAM bandwidth is double that of the 360?
Technically, if you count the GB of system RAM. But I thought that might give off the wrong impression, so I edited it out.
But for the 1GB for games, it's actually identical to the 360, and matches up pretty nicely with the transfer rate of the disc drive. Like a glove, really.
It's funny: with the number of modules the 360 has (four, like the Wii U, but all available), its bandwidth should actually be higher.
At 1.4Gb/s a pin, an 8x32 unit has 256 pins.
That should be 358.4 Gb/s, or 44.8 GB/s.
BUT, the RAM on the 360 isn't using its interleaved mode, so only 128 pins are accessible at a time. Probably a thermal management necessity. Not a problem the Wii U shares.
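The interleaving point can be spelled out numerically (my arithmetic, reusing the 1.4 Gb/s-per-pin figure from the posts above; the pin counts are this post's assumption):

```python
# 360 RAM pins: eight 32-bit arrays -> 256 data pins total.
PIN_RATE_GBPS = 1.4           # Gb/s per pin at the 700 MHz clock
pins_total = 8 * 32           # 256

full_gbs = pins_total * PIN_RATE_GBPS / 8         # all pins active
half_gbs = (pins_total // 2) * PIN_RATE_GBPS / 8  # non-interleaved: 128 pins

print(round(full_gbs, 1), round(half_gbs, 1))  # 44.8 22.4
```

Halving the accessible pins is exactly what drops the theoretical 44.8 GB/s down to the familiar 22.4.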
Edited by 3Dude, 02 April 2013 - 03:59 AM.
#91
Posted 02 April 2013 - 03:51 AM
GAMER1984, on 01 Apr 2013 - 21:45, said:So you are saying the Wii U RAM bandwidth is double that of the 360?
Technically, if you count the GB of system RAM. But I thought that might give off the wrong impression, so I edited it out.
But for the 1GB for games, it's actually identical to the 360, and matches up pretty nicely with the transfer rate of the disc drive. Like a glove, really.
Does the eDRAM do anything to increase that speed? It's shared, correct... between CPU and GPU?
#92
Posted 02 April 2013 - 04:13 AM
GAMER1984, on 01 Apr 2013 - 22:05, said:Does the eDRAM do anything to increase that speed? It's shared, correct... between CPU and GPU?
Those are separate parts of the memory hierarchy. They can't increase the bandwidth of main memory. But yes, the CPU does have access to the GPU's eDRAM, on a blazing fast interconnect, at an incredibly short distance.
What they can do is take large chunks of reusable data, like textures, out of main memory, and then use that data at a massive bandwidth. And the Wii U, with 32MB on its GPU, can keep 3x the data of the 360 (like textures and stuff) to be used at that bandwidth.
However, if you want to swap something in from main memory, it would be reduced to main memory's bandwidth.
At the ridiculous 12.8 GB a second, that would be a huge bottleneck. That's like, slower than the disc drive (no, not really, but it's an appropriate amount of sarcasm).
At 22-25, it's a non-issue.
Edited by 3Dude, 02 April 2013 - 04:34 AM.
#93
Posted 02 April 2013 - 07:52 AM
#94
Posted 02 April 2013 - 09:57 AM
I have another request in to Hynix, so I will let you know what they send back to me. This time I simply asked what the maximum bandwidth of the chip is, and didn't ask about the bit interface. This is the problem with trying to make sense of these model numbers: they don't always give clear specifications, at least to those of us who aren't in the memory business. I want to be able to say with 100% certainty that the Anandtech article is bogus, and the main memory claim is 100% false.
I was having a discussion last night with some people about the eDRAM. A lot of the haters want to discredit the eDRAM on the Wii U, but the fact is that the 3MB of onboard memory in the GC and Wii was considered extremely useful in preserving valuable memory bandwidth. Do you think a powerful CPU like a Core i7 would suffer if they eliminated the cache? Yes, it could very well cripple the CPU performance. By looking at the eDRAM in a similar way, but for the GPU, you can get an idea of how beneficial the eDRAM can be for the GPU.
The CPU uses cache memory to store instructions that are repeatedly required to run programs, improving overall system speed. The advantage of cache memory is that the CPU does not have to use the motherboard's system bus for data transfer. Whenever data must be passed through the system bus, the data transfer speed slows to the motherboard's capability. The CPU can process data much faster by avoiding the bottleneck created by the system bus.
These same principles can be applied to the GPU, but they won't happen on their own. The developer has to take more precise control over the memory management, and dictate exactly what would be most beneficial to store in the eDRAM.
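The cache argument can be illustrated with a toy latency model (the numbers are purely illustrative, not Wii U measurements): the larger the fraction of accesses served from the fast on-chip pool, the lower the average access cost.

```python
# Toy model: average access latency given a hit rate in fast on-chip memory.

def effective_latency_ns(hit_rate, fast_ns, slow_ns):
    """Weighted average of fast (cache/eDRAM) and slow (main memory) access."""
    return hit_rate * fast_ns + (1 - hit_rate) * slow_ns

# Illustrative: 5 ns on-chip vs 100 ns main memory.
for hit in (0.5, 0.9, 0.99):
    print(hit, round(effective_latency_ns(hit, 5, 100), 2), "ns average")
```

The same logic applies whether the fast pool is CPU cache or GPU eDRAM: keeping hot data on-chip is what preserves the main-memory bandwidth.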
I will say this: remember when Shin'en praised the memory of the Wii U? Well, I didn't think about this before, but Nano Assault Neo is only about 80MB in size. That means the game was able to load its levels in their entirety into the eDRAM and CPU cache, and the main memory pool would have seen very little usage.
I would also like to point out that because the eDRAM bandwidth is so high, the GPU has access to 100% of the data in the eDRAM for every frame rendered. Take the PS3's 22.4GB/s memory for example: at 30fps the bandwidth allows around 750MB of data to be accessed per frame, but that gets cut in half when you move to 60fps. So the advantage of eDRAM actually increases proportionally when moving from 30fps to 60fps.
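A quick per-frame budget calculation backs this up (my arithmetic, using the 22.4 GB/s figure cited above):

```python
# Main-memory budget per rendered frame: bandwidth divided by framerate.

def mb_per_frame(bandwidth_gbs, fps):
    """MB of main-memory traffic available to a single frame."""
    return bandwidth_gbs * 1000 / fps

print(round(mb_per_frame(22.4, 30)))  # ~747 MB per frame at 30fps
print(round(mb_per_frame(22.4, 60)))  # ~373 MB per frame at 60fps
```

Doubling the framerate halves the per-frame budget, while the eDRAM's per-frame contribution stays proportionally intact.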
#95
Posted 02 April 2013 - 10:12 AM
Goodtwin, on 02 Apr 2013 - 04:11, said:I have another request in to Hynix, so I will let you know what they send back to me. This time I simply asked what the maximum bandwidth of the chip is, and didn't ask about the bit interface. This is the problem with trying to make sense of these model numbers: they don't always give clear specifications, at least to those of us who aren't in the memory business. I want to be able to say with 100% certainty that the Anandtech article is bogus, and the main memory claim is 100% false.
I was having a discussion last night with some people about the eDRAM. A lot of the haters want to discredit the eDRAM on the Wii U, but the fact is that the 3MB of onboard memory in the GC and Wii was considered extremely useful in preserving valuable memory bandwidth. Do you think a powerful CPU like a Core i7 would suffer if they eliminated the cache? Yes, it could very well cripple the CPU performance. By looking at the eDRAM in a similar way, but for the GPU, you can get an idea of how beneficial the eDRAM can be for the GPU.
These same principles can be applied to the GPU, but they won't happen on their own. The developer has to take more precise control over the memory management, and dictate exactly what would be most beneficial to store in the eDRAM.
I will say this: remember when Shin'en praised the memory of the Wii U? Well, I didn't think about this before, but Nano Assault Neo is only about 80MB in size. That means the game was able to load its levels in their entirety into the eDRAM and CPU cache, and the main memory pool would have seen very little usage.
I would also like to point out that because the eDRAM bandwidth is so high, the GPU has access to 100% of the data in the eDRAM for every frame rendered. Take the PS3's 22.4GB/s memory for example: at 30fps the bandwidth allows around 750MB of data to be accessed per frame, but that gets cut in half when you move to 60fps. So the advantage of eDRAM actually increases proportionally when moving from 30fps to 60fps.
---------------------------------------------------------
Fun fact about Nano Assault Neo: its file size is so small because all its assets are procedurally generated, and thus not stored in ROM but created on the fly upon initialization, only ever existing in RAM!
Shin'en's pretty crazy.
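As a hedged illustration of the idea (this sketches the general technique, not Shin'en's actual code): geometry can be built from a formula at startup, so nothing but the generator itself ever lives on disc.

```python
import math
import random

def generate_ring_mesh(segments=64, radius=1.0, seed=7):
    """Procedurally build a slightly noisy ring of 2D vertices at load time."""
    rng = random.Random(seed)  # fixed seed -> the same 'asset' every boot
    verts = []
    for i in range(segments):
        angle = 2 * math.pi * i / segments
        r = radius * (1 + 0.05 * rng.uniform(-1, 1))  # small surface noise
        verts.append((r * math.cos(angle), r * math.sin(angle)))
    return verts

mesh = generate_ring_mesh()
print(len(mesh))  # 64 vertices, created in RAM, never stored as an asset file
```

Because the seed is fixed, the generated data is identical on every run, which is what lets a generator stand in for a stored asset.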
Edited by 3Dude, 02 April 2013 - 10:13 AM.
#96
Posted 02 April 2013 - 10:16 AM
Goodtwin, on 02 Apr 2013 - 04:11, said:I have another request in to Hynix, so I will let you know what they send back to me. This time I simply asked what the maximum bandwidth of the chip is, and didn't ask about the bit interface. This is the problem with trying to make sense of these model numbers: they don't always give clear specifications, at least to those of us who aren't in the memory business. I want to be able to say with 100% certainty that the Anandtech article is bogus, and the main memory claim is 100% false.
I was having a discussion last night with some people about the eDRAM. A lot of the haters want to discredit the eDRAM on the Wii U, but the fact is that the 3MB of onboard memory in the GC and Wii was considered extremely useful in preserving valuable memory bandwidth. Do you think a powerful CPU like a Core i7 would suffer if they eliminated the cache? Yes, it could very well cripple the CPU performance. By looking at the eDRAM in a similar way, but for the GPU, you can get an idea of how beneficial the eDRAM can be for the GPU.
These same principles can be applied to the GPU, but they won't happen on their own. The developer has to take more precise control over the memory management, and dictate exactly what would be most beneficial to store in the eDRAM.
I will say this: remember when Shin'en praised the memory of the Wii U? Well, I didn't think about this before, but Nano Assault Neo is only about 80MB in size. That means the game was able to load its levels in their entirety into the eDRAM and CPU cache, and the main memory pool would have seen very little usage.
I would also like to point out that because the eDRAM bandwidth is so high, the GPU has access to 100% of the data in the eDRAM for every frame rendered. Take the PS3's 22.4GB/s memory for example: at 30fps the bandwidth allows around 750MB of data to be accessed per frame, but that gets cut in half when you move to 60fps. So the advantage of eDRAM actually increases proportionally when moving from 30fps to 60fps.
---------------------------------------------------------
Fun fact about Nano Assault Neo: its file size is so small because all its assets are procedurally generated, and thus not stored in ROM but created on the fly upon initialization, only ever existing in RAM!
Shin'en's pretty crazy.
I did not know that, but that is awesome. They are incredible. It's hard to believe they are just a small indie developer, but they know their stuff.
#97
Posted 02 April 2013 - 10:20 AM
Goodtwin, on 02 Apr 2013 - 04:30, said:I did not know that, but that is awesome. They are incredible. It's hard to believe they are just a small indie developer, but they know their stuff.
-------------------------------------------------------
Shin'en really knows their stuff. They only release games under the Shin'en label, but they also offer their services in engine efficiency optimization to third parties, including work on engines for the PS3 and 360, and even supercomputers.
They have made enough money to expand, but instead decided to stay small and use it to improve their quality of life. Some very smart gentlemen. And they make some fun, great-looking games too.
#98
Posted 02 April 2013 - 10:36 AM
GAMER1984, on 01 Apr 2013 - 21:45, said:So you are saying the Wii U RAM bandwidth is double that of the 360?
Technically, if you count the GB of system RAM. But I thought that might give off the wrong impression, so I edited it out.
But for the 1GB for games, it's actually identical to the 360, and matches up pretty nicely with the transfer rate of the disc drive. Like a glove, really.
It's funny: with the number of modules the 360 has (four, like the Wii U, but all available), its bandwidth should actually be higher.
At 1.4Gb/s a pin, an 8x32 unit has 256 pins.
That should be 358.4 Gb/s, or 44.8 GB/s.
BUT, the RAM on the 360 isn't using its interleaved mode, so only 128 pins are accessible at a time. Probably a thermal management necessity. Not a problem the Wii U shares.
The arrays may be the same, but the interface is quad pumped, so we would be looking at 44.8GB/s even with only two chips.
GAMER1984, on 01 Apr 2013 - 15:28, said:How legit are those numbers, 3Dude? Have you double or triple checked them?
Pretty legit. They come from the Samsung RAM pdf. The same pdf, since they are, well, the exact same fricking RAM chip in both the Wii U and the 360.
The computation is just the standard bandwidth computation.
22.4 wouldn't be taking into account that the different interface has double the bus clock multiplier of DDR2 or GDDR3. 44.8 should be the actual number just based on the interface bandwidth calculations.
#99
Posted 02 April 2013 - 11:05 AM
routerbad, on 02 Apr 2013 - 04:50, said:The arrays may be the same, but the interface is quad pumped, so we would be looking at 44.8GB/s even with only two chips.
22.4 wouldn't be taking into account that the different interface has double the bus clock multiplier of DDR2 or GDDR3. 44.8 should be the actual number just based on the interface bandwidth calculations.
I'm pretty sure that's already accounted for, considering it's DDR3-1600.
The base clock is 200MHz; quad pumped, the I/O clock is 800MHz (700 for the Samsung), then x2 for double data rate gets the 'effective' 1600.
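That clock chain, written out step by step (standard DDR3 relationships; the 700 MHz value is the Samsung part's I/O clock discussed in this thread):

```python
# DDR3 clock chain: core clock -> quad-pumped I/O clock -> DDR transfer rate.
base_clock_mhz = 200                  # DRAM core clock for DDR3-1600
io_clock_mhz = base_clock_mhz * 4     # quad pumped -> 800 MHz
effective_mts = io_clock_mhz * 2      # double data rate -> 1600 MT/s

samsung_io_mhz = 700                  # the slower-clocked part in question
samsung_effective_mts = samsung_io_mhz * 2  # -> 1400 MT/s

print(effective_mts, samsung_effective_mts)  # 1600 1400
```

The point being: the x4 and x2 multipliers are already baked into the "DDR3-1600" label, so they can't be applied a second time to double the bandwidth.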
Oy, Goodtwin, remember: you are asking about the bandwidth of that specific nomenclature, not its configuration in the Wii U.
They will only give you the bandwidth of a single 4 gigabit chip on its own. We'll have to take it from there.
Edited by 3Dude, 02 April 2013 - 11:06 AM.
#100
Posted 02 April 2013 - 11:16 AM
routerbad, on 02 Apr 2013 - 04:50, said:The arrays may be the same, but the interface is quad pumped, so we would be looking at 44.8GB/s even with only two chips.
22.4 wouldn't be taking into account that the different interface has double the bus clock multiplier of DDR2 or GDDR3. 44.8 should be the actual number just based on the interface bandwidth calculations.
I'm pretty sure that's already accounted for, considering it's DDR3-1600.
The base clock is 200MHz; quad pumped, the I/O clock is 800MHz (700 for the Samsung), then x2 for double data rate gets the 'effective' 1600.
Oy, Goodtwin, remember: you are asking about the bandwidth of that specific nomenclature, not its configuration in the Wii U.
They will only give you the bandwidth of a single 4 gigabit chip on its own. We'll have to take it from there.
That's what I did. Their spec sheet says the chip is a 96-ball chip; I can't find good info on what that means. Is that where the data is transferred?