

routerbad's Content

There have been 1000 items by routerbad (Search limited from 20-June 20)



#171802 [Photo] Wii U GPU Die

Posted by routerbad on 15 February 2013 - 11:58 AM in Wii U Hardware

I understand a limited amount of this stuff, but three times the power of a 360 overall IF the devs know how to code the hardware correctly. Seeing as Nintendo are keeping silent about exact hardware specs, it's not easy for the devs, and so third party games will struggle unless the devs know their stuff.

Anyone please feel free to correct me if my overview thoughts are way off lol.


It still sports the same Power core instruction set, so games will run on the CPU, but should be optimized for an architecture that uses out of order instruction processing and has different FPU and SIMD characteristics than the PowerXCell PPE and Xenon cores. Those cores were designed specifically for floating point operations and were for all intents and purposes a modified PPC 970.

0%, it's definitely a 750 series. Now more than ever, since we can see it. I do see how the 476FP would fit many of the requirements... though I think it would be even smaller than Espresso if it were.

Espresso needed to be compatible with Broadway for BC, something that can very easily be done to any CX or later vanilla member of the 750 family... but would be considerably more difficult with the 476.


Could they not have added the 750CL instructions to core 0? It was pointed out that if it were a Broadway core shrink it would be much bigger than it is, especially considering the increased cache. I would think it would be easier to add compatibility to a modern core than to go through the process of shrinking a defunct core and adding instructions to it.



#171784 [Photo] Wii U GPU Die

Posted by routerbad on 15 February 2013 - 10:44 AM in Wii U Hardware

3Dude,

What are the chances this is based on the PPC476FP? I think that may have already been speculated before we had the die, but it fits quite well and looks like it would pack quite a punch.



#171153 [Photo] Wii U GPU Die

Posted by routerbad on 13 February 2013 - 01:01 PM in Wii U Hardware

That's also easily accomplished.


I assume they'll have sensitivity settings available to tailor the gameplay to your needs.



#167264 [Photo] Wii U GPU Die

Posted by routerbad on 05 February 2013 - 12:27 PM in Wii U Hardware

Current count I think leaves 16 of the blocks unexplained on the chip; Thraktor believes asymmetric SPUs are a strong possibility, considering the number of repeating groups of logic units that can't be readily explained. Looks like ~30-50% of the chip is unaccounted for so far. This thing could be a closet beast for all we know, and with the memory structure they are using they should be able to get much closer to whatever theoretical peak it has than anything we've ever seen before.



#171808 [Photo] Wii U GPU Die

Posted by routerbad on 15 February 2013 - 12:20 PM in Wii U Hardware

I'm sure they could have done something like that. But it really looks like they went with a 750 solution.


Okay, it's fun to speculate :)



#171939 [Photo] Wii U GPU Die

Posted by routerbad on 15 February 2013 - 09:54 PM in Wii U Hardware

... not very knowledgeable on each of the cores =/. I'm guessing it's not an FX or the cancelled VX/WX, whatever it was called. This one is probably a newly designed chip that doesn't link closely to the other chips beyond the standard 750 series.

Can anyone confirm if chip 0 or 2 is a Broadway/upgraded Broadway chip? Or relatively similar? Or can run as a gimped version of one?

How much do developers know of the separate caches and differences between the cores? Asking because it could be a cause of weakness in Xbox 360 ports (assuming all its cores are identical): the worst optimized core can bring the whole system down if true.

At this point I think each has its own specific purpose. Core 1 (middle) dedicated to masses of simple operations, such as controlling the Pikmin in that upcoming game. Core 0 and/or 2 dedicated to very complex but relatively little code, so handling AI, physics, and similar. Physics I have run by core 1 identifying when something happened, then core 0 or 2 actually doing the math. Is there any weight to this theory?


Only core 0 is in use in Wii mode, the clock is lowered and much of the cache is locked.



#181975 [Photo] Wii U GPU Die

Posted by routerbad on 12 March 2013 - 11:59 AM in Wii U Hardware

Don't get too optimistic. Not bashing the Wii U, but it's pretty clear that the PS4 is at least 3x more powerful. Still, more than enough for me considering the 20x power gap between Wii and PS3/360.

Exactly. How that translates into gameplay is anyone's guess at this point, though we know that it won't be very drastic at all; it still doesn't take away the PS4's best-performer crown.




#181967 [Photo] Wii U GPU Die

Posted by routerbad on 12 March 2013 - 11:55 AM in Wii U Hardware

A TFLOP at 550Mhz?! Madness....

Yeah, that isn't the actual number; it was just mentioned that it performs that way when the features of the GPU are put to good use. You can only really calculate FLOPS based on the programmable shaders, unless you know exactly what's going on in the rest of the logic, and that would put us somewhere in the 500GFLOP range based on what we know about the E6760, because Latte has one less SIMD engine than a stock E6760 and that GPU comes in at 576GFLOPS stock.
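The back-of-the-envelope math above can be sketched like this (a rough model, assuming 2 FLOPs per ALU per cycle as for AMD parts of that era; the 400-ALU Latte configuration is speculation from the post, not a confirmed spec):

```python
# Peak programmable-shader throughput: each ALU retires one multiply-add
# (2 FLOPs) per cycle, so GFLOPS = ALUs * 2 * clock_MHz / 1000.
def shader_gflops(alus, clock_mhz):
    return alus * 2 * clock_mhz / 1000

# Stock E6760: 6 SIMD engines * 80 ALUs = 480 ALUs at 600 MHz
print(shader_gflops(480, 600))  # 576.0 GFLOPS, the stock figure cited above
# Speculative Latte: one fewer SIMD engine (400 ALUs) at 550 MHz
print(shader_gflops(400, 550))  # 440.0 GFLOPS, i.e. the "500GFLOP range"
```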



Ok, BUT can anyone accurately, or close to accurately, predict the specs of the Wii U and compare it with what the PS360 can do, and with what the PS4 can do?

 

It's obvious the PS4 will be able to do more, but how much more? I know the gap is apparently small, but how small? I'm not trying to bash the GPU in the Wii U, I'm actually trying to do the opposite, but without much effect heh

 

The GPGPU is able to do stuff the CPU used to do in current gen consoles. Does this mean the CPU in the Wii U uses its cores for the gamepad and OS instead?

We are in diminishing returns territory, meaning that more polygons =/= better looking graphics. The gap is small already, and once you take into account that the increased performance isn't going to mean better graphics, it becomes even smaller. PS4 will still win out performance-wise, and we'll likely see games toward the end of the gen that run smoother on that console than on Wii U, and maybe some that look marginally better, though not enough to change the experience.




#173298 [Photo] Wii U GPU Die

Posted by routerbad on 19 February 2013 - 09:50 PM in Wii U Hardware

If it proves to be an E6760, nobody accepts it easily; they don't want to!! Lol, look at this:

http://www.nintendo....ch-specs/ It says: AMD Radeon™-based High Definition GPU

And look at this:

http://en.wikipedia....i/Radeon_R700 and this http://en.wikipedia....en_(GPU_family) Both RV700 and R800 (the 4xxx and 5xxx GPUs) are NOT AMD branded, they are ATI branded (AMD bought ATI, and the AMD-branded GPUs start from the 6xxx series!!) http://en.wikipedia....ds_(GPU_family)

I pointed that out to GAF, because they all argue for RV7... They trolled me in the IGN forums; they don't want to listen about the e6760, they want to believe that the Wii U does not have DX11 hardware features!!! Lol, funny but true.

That alone proves that the Wii U is using AMD Radeon based GPU technology (as Nintendo states), which means the 6xxx series and up. So it may not be 320 SPUs, but 480 instead!

Nobody can say that the Wii U uses an RV700 chip, because it conflicts with the branding... it should be ATI Radeon RV7 instead. It's not even R800 (5xxx GPU), because that is also ATI branding.

Even though AMD bought ATI and they are one company... AMD keeps the ATI branding for the 4xxx and 5xxx GPUs http://www.amd.com/u...on-hd-5000.aspx

But it's not accepted by the reviewers... hmmmm


It is true they didn't start branding the GPUs with AMD until the 6000 series, but I would assume any GPU produced from that time forward would carry AMD branding regardless of the base family from which it's derived. I believe it is an e6760 as well, though if it were, they have removed two of the SIMD engines: the e6760 has 6 SIMD engines, Latte has 4 (2 SIMD cores = 1 SIMD engine @ 80 ALUs/engine). That's just the programmable shader piece. You also have a crapton of logic all over that GPU that could be fixed shaders, asymmetric shaders, who knows!

Also, it would still be 320 ALUs even if they went with e6760 SIMD engines, because there are two missing.



#169331 [Photo] Wii U GPU Die

Posted by routerbad on 09 February 2013 - 03:25 PM in Wii U Hardware

The weird thing about reading this thread is that it reads like no one has actually seen Wii U games in real life. The Eurogamer article seems right on the money. The Wii U is clearly current gen performance overall, but with a higher performance GPU and a lower performance CPU.

If you are reading this thread and considering a purchase of a Wii U, I would check out real world performance first. I'm loving the Wii U, but this thread actually reads like people are expecting some massive improvement in Wii U performance, and that current Wii U games aren't the real benchmark of what the Wii U can achieve.

Some of that silicon will be the original Wii GPU and 3 megabytes of memory. Maybe the 32 megabytes of fast video memory doubles as the 24 megabytes of main Wii memory.

Dare I mention the real world, where some Wii U games have reduced resolution, weaker frame rates, missing AI, missing physics, reduced sound channels, missing 3D support, missing graphical effects, and I'm sure other things I've forgotten.

Even Nintendo's own titles Nintendo World and Mario are technically weak.

The Wii U is Nintendo's entry into the current gen with regard to performance; in some ways it's stronger and in some ways it's weaker.

I'm sure a big part of the failure of the Wii U so far is its weak specification. We all wish it had a stronger spec, but it doesn't. The fact is, the Wii U has absolutely no chance of competing with the new Xbox and PlayStation on a technical level. It's only just competing with the current gen consoles.

I strongly suggest people who really want to understand the potential performance of the Wii U look elsewhere on the internet for more balanced viewpoints, and always factor in real-world performance of games.


Your argument applies to basically every console launch in the history of gaming save the first gen. That's why everyone is curious about what the system is capable of: because it will still take years for developers to realize and take advantage of its full potential. So no, current titles, built for older hardware or ported from older hardware with completely different performance characteristics, are not a benchmark for what the system can and will do. You talk of potential performance like the conclusion was written before the start.

If you were reading this thread and think we are making a case for or against buying a Wii U, please reread, like, all of it. Can't some people just geek out a little without others taking the entire conversation too seriously from a sales perspective and making sweeping conclusions about the intentions of Nintendo or the fate of the console itself? Whether good or bad, learning about the deep-level hardware characteristics and discussing them is fun and exciting for some people.



#171064 [Photo] Wii U GPU Die

Posted by routerbad on 13 February 2013 - 09:57 AM in Wii U Hardware

I've been following this thread here and over at NeoGAF, B3D, and similar discussion at gamerConnect. I am a newbie here and came from the 3DS forums. Great discussion here, and I am glad to see everyone (the majority) keeping an open mind and level head compared to the other forums I mentioned.

I came across this link on B3D, and this interview with the developers of Need for Speed: Most Wanted for the Wii U really shines light on the potential power of the system and the uniqueness of the gamepad. I think many naysayers will begin to quiet down if they watch and listen. This port is coming from the PC, not the PS3/Xbox 360, and the developer mentions how they were able to keep better textures, draw distance, etc. because of the Wii U, unlike the other ported titles.


Definitely interested in this, and I'm impressed with these developers. I've been following the other threads as well, and there are quite a few more Nintendo haters posting in them but there is still some great discussion going on over there. I think this thread here allows us to both react to what's happening on those threads and postulate our own ideas in deciphering this hardware. And we have 3Dude, they don't, lol.



#169189 [Photo] Wii U GPU Die

Posted by routerbad on 09 February 2013 - 09:31 AM in Wii U Hardware

What is Wii U's bandwidth now with this GPU info when not relying on eDram? Just wondering if the 12.8 GB from anandtech is correct or not.


12.8GB/s for the DDR3, Anandtech had no info on the 140GB/s EDRAM in the GPU. Wii U definitely has the most complicated, and the most tightly designed memory architecture out of all of the 8th gen consoles.



#167461 [Photo] Wii U GPU Die

Posted by routerbad on 05 February 2013 - 08:28 PM in Wii U Hardware

Yeah, I am beginning to think that the Wii U hack is all smoke and air, going by his recent comments. Especially when he said he could have provided GAF with the details before Chipworks released the photo. So if he had the info all this time, then why all the secrecy until today? He knew people wanted the information, yet he decided to stay quiet. Has anyone else tried to authenticate his claims about the CPU, by chance?

Someone responded on twitter that GAF members were ignored when they did ask him.

If Wii U is using Cayman based SIMD cores it would be 8*64*2*0.55 = 563GFLOPS. Everyone assumes that the SPUs in Latte are based on R7XX VLIW5 cores rather than SIMD. Assuming Nintendo used SIMD cores, we are looking at somewhere between 563-704 GFLOPS. If they went with the older architecture at 40 ALUs per SPU (which I'm reading was unorthodox for VLIW cores), we're looking at 352GFLOPS. Either way, there is still nearly 50% of the die that can't currently be identified, which could be additional asymmetric SIMD or SPU cores, or fixed function logic.
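The 352/563/704 spread quoted here follows from one formula with different per-SPU ALU counts plugged in (the 550 MHz clock and 8-SPU count are taken from the post; which architecture actually applies is the open question):

```python
clock_mhz = 550    # reported Latte GPU clock
spus = 8           # repeating shader blocks counted on the die shot
# Candidate ALU counts per SPU: 40 (R7xx VLIW5), 64 (Cayman), 80 (Evergreen)
for alus_per_spu in (40, 64, 80):
    gflops = spus * alus_per_spu * 2 * clock_mhz / 1000
    print(alus_per_spu, gflops)  # 40 -> 352.0, 64 -> 563.2, 80 -> 704.0
```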



#167445 [Photo] Wii U GPU Die

Posted by routerbad on 05 February 2013 - 07:28 PM in Wii U Hardware

Trinity uses a Cayman GPU core: 24 SIMD cores with 64 PEs per SIMD. Clocked at 880MHz, that looks like ~2.7TFLOPS.

If Wii U uses the same SIMD cores, with 64 PEs per core, it's 8*64*2*0.55 = 563GFLOPS.



#167435 [Photo] Wii U GPU Die

Posted by routerbad on 05 February 2013 - 07:05 PM in Wii U Hardware

So you don't account for the number of SPs per SIMD core?



#167831 [Photo] Wii U GPU Die

Posted by routerbad on 06 February 2013 - 04:50 PM in Wii U Hardware

http://www.neogaf.co...&postcount=1663


*Comes out of his hole (again)*

I tend to forget some people take this stuff a lot more seriously than I do.

First a thanks to Chipworks for going above and beyond for the picture and to blu, Durante, Fourth Storm, Thraktor, and wsippel for the work they did. Shinjohn let me know that the picture had been obtained and sent me a link, but I also checked out the thread. I wanted to come back and help with the confusion and what not.

As some of you know, getting info about the hardware was a pain, because what Nintendo released essentially boiled down to a features list. And by that I mean general features of a modern GPU that could easily be looked up. Info that dealt with performance apparently was not given out, leaving devs to have to figure it out on their own. I had two working ideas of the GPU, based on a more traditional design (which I was hoping for) and a non-traditional design. I see that some of you actually remembered the non-traditional idea. Wsippel and I would compare notes on whatever info we could come up with. Some of those notes led us to come up with how it may look if Nintendo took the non-traditional route.

http://www.neogaf.com/forum/showpost...ostcount=12053

In this post you’ll see both wsippel’s take and my take. I’m going to address some things in that post because I know some of you will try to take them out of context. First you’ll see wsippel’s baseline ended up being more accurate than mine. When I talked about the potential performance of 1TF or more that was in comparison to the R700 series because new GPUs are more efficient than that line, a higher baseline, and my idea focused on the dedicated silicon handling other performance tasks.

So what was the basis for the non-traditional view? I shared two of those bits of info before.

http://www.neogaf.com/forum/showpost...postcount=6136



Quote:
Well, I can't reveal too much. The performance target is still more or less the same as the last review from around E3. Now it's more balanced and "2012" now that it's nearer to complete and now AMD is providing proper stuff. As far as specs, I don't see any big change for better or worse, other than said cost/performance balance tweaks... It won't make a significant difference to the end user. As far as the kit goes, it's almost like what MS went through. Except more Japanese-ish... If you know what I mean.
http://www.neogaf.com/forum/showpost...postcount=6305



Quote:
Anyway, things are shaping up now with the new year. There was some anxiety with some less close third parties about what they were doing with GPU side, whether things were going to be left in the past... but it looks more modern now. You know, there simply wasn't actual U GPU data in third party hands this time last year, just the target range and R700 reference GPU for porting 360 titles to the new cafe control. Maybe now they finally can get to start debugging of the specifics and start showing a difference...
Here is one more specific piece that I didn’t fully share.



Quote:
I can't confirm or deny, sorry. The cat is very confidential and I repeat non-final. The target, last checked, is triple core with XX eDram and exclusive Nintendo instructions. 1080/30 capable Radeon HD w/tess. and exclusive Nintendo patented features. On a nice, tight bus that MS wishes they had on 360. ;)

I appreciate the individual for sharing as much as he did. He was a little paranoid though (I can understand) and at one point thought I was leaking info on a messageboard under a different name, but wouldn’t tell me the board or the username, lol.

I’m sure some of you remember me talking about games being 720p. It’s because with this I knew devs would use those resources for 720p development. I’m sure some of you also remember me mentioning the bus. The key thing in this is the “Nintendo patented features”. In the context of things we talked about, it seemed to me these were going to be hardwired features. What is certain for now is that the die shot shows a design that is not traditional, fewer ALUs (in number) from where things supposedly started with the first kit, and GPU logic that is unaccounted for. I’ve seen some saying fixed functions, but that’s too specific to be accurate right now. Dedicated silicon would be a better alternative to use, though I say that as a suggestion. In my opinion I think lighting is a part of this. The Zelda and Bird demos emphasized this. Also in the past it was discussed how Nintendo likes predictability of performance. It would also suggest Nintendo wasn’t ready to embrace a “fully” programmable GPU and kept on the water wings when jumping in the pool.

I did what I could to get as much info on the hardware as possible, since Nintendo was giving out so little. From there I gave the best speculation I could based on that info. As of today, I still stand by the evaluations I made about Wii U's potential performance from all the info I could gather. And until Nintendo's games show otherwise I'll continue to stand by them, because in the end it's on Nintendo to show what Wii U is capable of.

And if you think I deserve flak for what I’ve said in the past then I’m here, but you’re wasting your time trying because my view hasn’t changed yet.

I made the farewell post to hold myself accountable to avoid posting, but I haven’t done well sticking to that, haha. I wasn’t going to make this post, but since I was one of the primary ones gathering info it’s unfair to you guys to leave things as they were.




#167615 [Photo] Wii U GPU Die

Posted by routerbad on 06 February 2013 - 08:06 AM in Wii U Hardware

He may be right about all R6xx SPUs on the chip? 160 ALUs in total?

What do you think about the e6760 SIMD theory?

He's said he doesn't know, or care to know, a whole lot about the specific graphics tech. The R600 registers would only make sense if they were needed for emulation; there could be some R600 based tech in there, but it isn't this GPU. R6xx was never produced at 40nm and never came close to the TDP necessary for Wii U. Hollywood, whether based on it or not, performed like an R6xx, and we aren't seeing that level of performance here. It's much more likely that Latte is based on an 80-ALU Evergreen SIMD core, considering what we've seen done on it.



#167962 [Photo] Wii U GPU Die

Posted by routerbad on 06 February 2013 - 07:01 PM in Wii U Hardware

@routerbad: Thanks!

This guy was right about everything before; I think he is right now.

It's kind of disappointing to see that Nintendo did reduce the amount of programmable shaders from 400-500 to 350 (1-2 SIMD cores), but maybe they put something more powerful in instead of those cores!

Like he said, Nintendo has always been about predictable quality, so certain things they wanted taken advantage of probably have dedicated logic to pull those functions off the shaders themselves.



#168118 [Photo] Wii U GPU Die

Posted by routerbad on 07 February 2013 - 06:59 AM in Wii U Hardware

DSP also right next to the high speed I/Os and Starbuck. Freaking Eurogamer silly pony 'tech analyst'.

What was his name again? Jeff Butter? Whatever his name is, he sure made a fool of himself.

Agreed.



#168893 [Photo] Wii U GPU Die

Posted by routerbad on 08 February 2013 - 02:00 PM in Wii U Hardware

So are we looking at a highly customized e6760 with fixed function after all?

If so, that would easily equate to 3-6x Xenos.


With everything we don't know, and everything we do, I'm leaning toward e6760 based SIMD engines, though 4 of them, not 6. Obviously my opinion on the matter is next to worthless. I think they may have derived the design from R7XX, but given the level of customization and the costs associated, I think they would have gone with a more modern shader engine than R7XX, along with some baked-in lighting and tessellation hardware.



#168114 [Photo] Wii U GPU Die

Posted by routerbad on 07 February 2013 - 06:57 AM in Wii U Hardware

I just want to know if I made the right gamble getting one of these. Can developers potentially port next gen games with only a few minor compromises, such as resolution and frame rate (I don't care much for 1080p or frame rate)?

If the answer is "yes, so long as publishers and developers don't get all pissy about doing that" then I'm fine :) I just don't want it to be another Wii situation where publishers can't be bothered, because it means paying another developer to use a separate engine to make another version of the game for Nintendo....

We don't know yet whether it will support UE4, though I suspect it will. I believe Frostbite 2 and CryEngine 3 are already said to be supported on Wii U. Buying a console isn't a gamble, it's an investment. Spending $350-some-odd dollars is putting faith in Nintendo to deliver a worthwhile entertainment experience. And it's not like you can't own multiple consoles. It should handle "next gen" ports just fine, but I'd rather see multiplats that are built and optimized for Wii U.



#191685 Wii U's RAM is slower than PS3/Xbox 360.

Posted by routerbad on 08 April 2013 - 02:45 PM in Wii U Hardware

You're treating the eDRAM like it's a cache only; it's not. The fact that the GPU can keep the frame buffer, z-buffer, and AA all in the eDRAM makes a huge impact on performance. Take for example a PC game: turn the textures from low to high and see how it affects performance. Now take that same game and bump from 720p to 1080p. I can almost guarantee you that the resolution increase impacted performance quite a bit more than the texture quality increase. Now start adding AA and watch your performance start to drop rapidly. High quality assets aren't your biggest bandwidth hogs, and I get the idea that you think they are. You could use high end PC assets in a game, and if you're rendering it in 480p with no AA, your memory bandwidth requirements won't be very high.

 

You have to remember that file size and bandwidth are totally separate. For example, a 1MB file stored in RAM can eat up more bandwidth than a 10MB file. If that 1MB file is fetched 15 times per frame, and the 10MB file is fetched only once, then the 1MB file is the bigger bandwidth hog.
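The 1 MB vs 10 MB point works out like this (a toy illustration; the sizes and fetch counts are the hypothetical ones from the post, at an assumed 60 fps):

```python
# Bandwidth consumed = asset size * fetches per frame * frames per second;
# a small, frequently-fetched asset can out-consume a large, cold one.
def bandwidth_mb_per_s(size_mb, fetches_per_frame, fps=60):
    return size_mb * fetches_per_frame * fps

print(bandwidth_mb_per_s(1, 15))   # 900 MB/s for the 1 MB file
print(bandwidth_mb_per_s(10, 1))   # 600 MB/s for the 10 MB file
```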

 

It's why I wanted to find a modern graphics card that had just 12.8GB/s of video memory bandwidth. Guys are using the AMD HD5450 and getting games like Need for Speed: Most Wanted to run in 720p at around 25fps, and 600p at 35+ fps. This is on PC, not optimized, and the 12.8GB/s is all the graphics card has, unlike the Wii U, where the eDRAM spares the main memory so much of the bandwidth. Running at Wii U quality may cause it to run at like 10fps, but that's the point: eDRAM is the difference maker.

I understand all of this; I was simply wondering what the interface between the eDRAM and main RAM allowed for.

 

For 3Dude: the latency on these chips is straight 9s, pretty much standard latency figures for DDR3-1600 (latency times around 10ns).
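A quick check of the "around 10ns" figure (assuming DDR3-1600, whose I/O bus runs at 800 MHz, and the "straight 9s" CAS latency above):

```python
# CAS latency in ns = cycles * 1000 / I/O clock in MHz.
# DDR3-1600 clocks its I/O bus at 800 MHz.
cl_cycles = 9
io_clock_mhz = 800
print(cl_cycles * 1000 / io_clock_mhz)  # 11.25 ns, roughly the 10 ns quoted
```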




#190389 Wii U's RAM is slower than PS3/Xbox 360.

Posted by routerbad on 05 April 2013 - 06:41 AM in Wii U Hardware

MT41K256M16HA-125

is the Micron RAM found in another teardown.

It's wired as 32M (Mb, of course) a pop by 16 bits, in 8 banks inside each 4Gb chip.

So, good.
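That "256M16" nomenclature decodes to the chip capacity directly (the four-chip total matching the Wii U's known 2 GB):

```python
# MT41K256M16: 256M addressable locations, each 16 bits wide, in 8 banks.
addresses_m = 256
width_bits = 16
chip_mbit = addresses_m * width_bits   # 4096 Mbit = 4 Gbit per chip
chip_mb = chip_mbit // 8               # 512 MB per chip
print(chip_mb, chip_mb * 4)            # 512 MB/chip, 2048 MB across 4 chips
```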



Also affixed in a 96-FBGA, 10x14mm in size; that Samsung in its 84-FBGA is looking completely like the odd man out.

Huh.

Found a Samsung DRAM brochure

with that RAM's nomenclature, with different specifications more in line with the others than that Samsung RAM overview page:

K4W4G1646B-HC(12/11/1A). 96-FBGA.

Could be a misprinted label.




#189835 Wii U's RAM is slower than PS3/Xbox 360.

Posted by routerbad on 03 April 2013 - 03:32 PM in Wii U Hardware

If anything, wouldn't the GPU having so many pins/lanes debunk the idea of 16-bit right away? The GPU is custom, so there would be no point in having more memory lanes at the GPU than required.

The fact that it's DDR3 should have debunked it right away. They assumed a 16-bit width per chip, which isn't the case in a dual die package. They should have assumed 32 bits per chip, or counted the lanes, which is normal for graphics applications.




Goodtwin, on 03 Apr 2013 - 09:11, said: If anything, wouldn't the GPU having so many pins/lanes debunk the idea of 16-bit right away? The GPU is custom, so there would be no point in having more memory lanes at the GPU than required.

Exactly.


routerbad, on 03 Apr 2013 - 09:07, said:Just did another count, 158 pins.  79 per side.  Unless I'm counting some of them wrong, a few of them are hard to make out.


That GDDR3 64-bit card has 63 pins.

I think we've had a breakthrough. I thought those lanes were abnormally large.

158 pins... 158-bit bus????

(fixed, lmao)

800 x 2 x 158 = 252,800; / 8 = 31,600

31.6GB/s?

I'm good with that.  What's going into the GPU is going to tell the whole story with regard to the bus width.


Interesting, maybe we're missing 1 pin per channel here.
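The arithmetic behind the 31.6 GB/s guess, plus the "one missing pin per channel" case, can be written out (both rest on the assumption that every counted pin is a data line, which is far from certain):

```python
# DDR makes two transfers per clock, so
# bandwidth (GB/s) = clock_MHz * 2 * bus_bits / 8 / 1000.
def ddr_bus_gb_per_s(clock_mhz, bus_bits):
    return clock_mhz * 2 * bus_bits / 8 / 1000

print(ddr_bus_gb_per_s(800, 158))  # 31.6 GB/s from the raw 158-pin count
print(ddr_bus_gb_per_s(800, 160))  # 32.0 GB/s if one pin per channel was missed
```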




#188605 Wii U's RAM is slower than PS3/Xbox 360.

Posted by routerbad on 31 March 2013 - 11:58 AM in Wii U Hardware


routerbad, on 31 Mar 2013 - 05:54, said: I don't think they would separate the bus on the silicon; RAM access through the memory controller would be 64-bit per chip, but the software doesn't know that. The memory controller is handling 2GB of DDR3 on a 256-bit bus, and is able to use all of that speed to load/unload RAM, unless they have two chips on one controller and two on another. It doesn't have to partition RAM based on the chips themselves, and it doesn't have to place data sequentially within the chips either; it's more efficient if it doesn't.
The controller and CPU/GPU would access RAM indiscriminate of the hardware separation. It recognizes 2GB and addresses it all the same, and unless they lock the games to only using specific address tables, it's up to the memory controller to make the big decisions each cycle based on shortest wait.

You haven't seen the motherboard?

It's not certain, but it's heavily looking like the RAM is separated into groups of 2 chips, each group with a 128-bit bus, leading separately into different sides of the MCM.

Certainly not set in stone, but a very solid foundation. Though I wouldn't be one to complain if Nintendo starts allocating some of that untouched gig of RAM to the game side.

Regardless... the topic at hand, the 12.8 misinformation, has hereby been officially debunked.

It's 25.6.

Easily 25.6. I saw the motherboard, but I may need to take a closer look; on the back of the mobo the RAM leads all seemed to head into the same side of the MCM, but I may be wrong. It would make little sense to use two separate memory controllers.
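The 12.8 vs 25.6 figures are the same DDR3-1600 math at two bus widths (64-bit total, i.e. 16 bits per chip, versus 128-bit total, i.e. 32 bits per chip):

```python
# DDR3-1600 moves 1600 MT/s; bandwidth (GB/s) = MT/s * bus_bits / 8 / 1000.
transfer_rate_mts = 1600
for bus_bits in (64, 128):
    print(bus_bits, transfer_rate_mts * bus_bits / 8 / 1000)
    # 64 -> 12.8 GB/s (the "misinformation"), 128 -> 25.6 GB/s
```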




