

megafenix's Content

There have been 36 items by megafenix (Search limited from 19-June 20)



#316792 Who would buy a Wii U now?

Posted by megafenix on 03 September 2015 - 05:20 AM in Wii U Hardware

4 reasons:

Xenoblade

Fast Racing Neo

Fatal Frame V

Star Fox

 

https://www.youtube.com/watch?v=gXP0UCh2-NE

https://www.youtube.com/watch?v=vbex7PQ8XYs




#316780 AMD revealed the Nintendo NX?

Posted by megafenix on 01 September 2015 - 11:48 AM in Wii U Hardware

I am not sure that the NX (if it's a home console, of course) will keep backward compatibility via hardware, since it could be said that IBM is out of the semiconductor market after selling its fabs to GlobalFoundries. Although IBM has a contract with GlobalFoundries to keep making chips for it (the contract seems to run for 10 years from 2014), it appears to cover only the new POWER designs, like Watson or chips for supercomputers, not desktops. IBM seems to be interested only in high-end supercomputers and in cloud, mobile, analytics and other technologies, so it could be said that desktop chips are no longer in their plans.

 

http://www.pcworld.c...end-design.html

 

 

It seems Nintendo has two reasons for moving to an x86 design: developer needs, and having no one left to make PowerPC chips for them, at least in the desktop line.

 

As for the GPU, I feel it will be just a little above the PS4 in terms of raw power, but there is one thing I am almost certain of: the NX will use HBM/HBM2 technology (I feel it will be HBM2, since HBM has a limit of 4 gigabytes). With this, power consumption will be lower and eDRAM will no longer be necessary for GPU bandwidth needs, which will make the development process easier, especially for third parties.




#313823 Fast Racing NEO - New Pics!

Posted by megafenix on 04 June 2015 - 11:50 AM in Wii U Games and Software

4-player local multiplayer at 60fps?

Mm, I am a little sceptical, but I will give them the benefit of the doubt. The reason for my doubts is that Mario Kart achieved 60fps with one or two players, but in split screen with 3 or 4 players the framerate drops to 30fps, so I would imagine this game might do the same.




#313753 Fast Racing NEO - New Pics!

Posted by megafenix on 03 June 2015 - 11:28 AM in Wii U Games and Software

Looks badass, can't wait for a gameplay video to see it in motion. I hope they can show us something at the E3 Treehouse stream, but since it's still in the alpha stage that might not happen, and maybe the game won't make it this year.




#313463 Project Cars struggling to hit 30FPS on Wii U... devs might wait for NX

Posted by megafenix on 27 May 2015 - 06:53 PM in Wii U Games and Software

It's too bad that they are considering the NX when clearly that console won't be out this year or next. If they don't want to put so much effort into squeezing the Wii U's specific features (fixed-function shaders, most likely), there are other approaches they can take:

 

Use SMAA instead of MSAA or EQAA

1. If they by chance are using MSAA, or maybe EQAA like the PS4 and Xbox One, they can replace it with SMAA, which gives similar results to MSAA with less strain (improving performance), doesn't blur as much as FXAA, and can be used as a post-processing anti-aliasing solution: http://www.tweakguid.../Crysis3_6.html

Occlusion culling may also be a big help:

 

Try occlusion culling

 

2.-Occlusion Culling is a feature that disables rendering of objects when they are not currently seen by the camera because they are obscured by other objects. This does not happen automatically in 3D computer graphics since most of the time objects farthest away from the camera are drawn first and closer objects are drawn over the top of them (this is called “overdraw”). Occlusion Culling is different from Frustum Culling. Frustum Culling only disables the renderers for objects that are outside the camera’s viewing area but does not disable anything hidden from view by overdraw. Note that when you use Occlusion Culling you will still benefit from Frustum Culling.

http://docs.unity3d....ionCulling.html
http://http.develope...ugems_ch29.html
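The quoted Unity explanation can be illustrated with a toy sketch. This is not engine code; the object names, the axis-aligned screen rectangles, and the "fully covered by one nearer object" occlusion test are simplifying assumptions made up for illustration:

```python
# Toy illustration of frustum culling vs occlusion culling.
# Objects are axis-aligned screen-space rectangles with a depth; an object
# is occlusion-culled here only if a single strictly nearer object fully
# covers it (real occlusion systems are far more sophisticated).

from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    x0: float; y0: float; x1: float; y1: float  # screen-space bounds
    depth: float                                # distance from camera

def in_frustum(o, w=1.0, h=1.0):
    # Frustum culling: skip anything completely outside the view rectangle.
    return o.x1 > 0 and o.y1 > 0 and o.x0 < w and o.y0 < h

def occluded(o, others):
    # Occlusion culling: skip anything fully covered by a nearer object.
    return any(p.depth < o.depth and
               p.x0 <= o.x0 and p.y0 <= o.y0 and
               p.x1 >= o.x1 and p.y1 >= o.y1
               for p in others if p is not o)

scene = [
    Obj("wall",  0.1, 0.1, 0.9, 0.9, depth=1.0),
    Obj("crate", 0.3, 0.3, 0.6, 0.6, depth=5.0),  # hidden behind the wall
    Obj("bird",  1.5, 0.2, 1.8, 0.4, depth=2.0),  # off-screen
]

visible = [o.name for o in scene
           if in_frustum(o) and not occluded(o, scene)]
print(visible)  # only "wall" survives both culling passes
```

The crate is frustum-visible but never drawn (it would only contribute overdraw), which is exactly the work occlusion culling saves.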

 

3. Some adjustments to the LOD (level of detail) for objects that are far away are also an option.

 

 

Try out a variable framebuffer so that the resolution changes instead of the framerate (WipEout HD did this)

 

4. Another approach is to use a variable resolution/framebuffer so that the resolution changes instead of the framerate during gameplay; whenever there would be an fps dip, the resolution drops instead. This trick was used in WipEout HD on the PS3 to keep 60fps at variable 1080p (the resolution could go as low as 1440×1080 at times) and fooled many people, as it is not very noticeable in racing games: http://www.eurogamer...sleight-of-hand
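A minimal sketch of that idea, in the spirit of the WipEout HD trick: when a frame runs over the 16.6 ms budget for 60fps, shrink the horizontal render resolution instead of letting the framerate drop. The thresholds and step size here are made-up illustration values, not anything from an actual engine:

```python
# Dynamic-resolution controller sketch: scale width down when over the
# frame budget, back up when comfortably under it.

BUDGET_MS = 1000.0 / 60.0            # 16.67 ms per frame at 60fps
FULL_W, MIN_W, H = 1920, 1440, 1080  # WipEout HD reportedly went as low as 1440x1080
STEP = 96                            # resize granularity (assumption)

def next_width(width, frame_ms):
    if frame_ms > BUDGET_MS:            # over budget: render fewer pixels
        return max(MIN_W, width - STEP)
    if frame_ms < 0.85 * BUDGET_MS:     # comfortably under: scale back up
        return min(FULL_W, width + STEP)
    return width

w = FULL_W
for ms in [15.0, 18.0, 19.0, 17.2, 14.0, 13.5]:  # simulated GPU frame times
    w = next_width(w, ms)
    print(f"{ms:5.1f} ms -> render at {w}x{H}")
```

The output image is then upscaled back to 1920×1080 for display, which is why the change is hard to spot at speed.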

 

 

Ah, I almost forgot this other solution (it reduces the number of draw calls):

http://docs.unity3d....llBatching.html

"

Draw Call Batching

To draw an object on the screen, the engine has to issue a draw call to the graphics API (e.g. OpenGL or Direct3D). The graphics API does significant work for every draw call, causing performance overhead on the CPU side.

Unity can combine a number of objects at runtime and draws them together with a single draw call. This operation is called “batching”. The more objects Unity can batch together, the better rendering performance (on the CPU side) you can get.

Built-in batching support in Unity has significant benefit over simply combining geometry in the modeling tool (or using the CombineChildren script from the Standard Assets package). Batching in Unity happens after the visibility determination step. The engine does culling on each object individually, and the amount of rendered geometry is going to be the same as without batching. Combining geometry in the modeling tool, on the other hand, prevents efficient culling and results in a much greater amount of geometry being rendered.

Static Batching

Static batching, on the other hand, allows the engine to reduce draw calls for geometry of any size (provided it does not move and shares the same material). Static batching is significantly more efficient than dynamic batching. You should choose static batching as it will require less CPU power.

 

 

"




#313462 Should Zelda U = NX Masterquest edition?

Posted by megafenix on 27 May 2015 - 06:44 PM in Wii U Games and Software

It's very unlikely that the Nintendo NX will come next year, when Iwata mentioned that he won't give any specifics until 2016, meaning the earliest you could expect it is 2017, but I think it's going to be 2018. Nintendo has always released a new console 2 to 3 years after announcing its codename; just look at the past and you will see:

 

Nintendo Revolution: announced at E3 2004 and released in 2006

GameCube: announced in 1999 and launched in 2001

N64: announced in 1993 as Project Reality, renamed Ultra 64 in 1994, and launched in 1996




#313184 Do you think nintendo should give special treatment to Bizerta:Silent Evil?

Posted by megafenix on 22 May 2015 - 08:28 PM in Wii U Games and Software

Well, I thought that maybe this game could attract people who want a more classical approach instead of what Resident Evil games have become, and since Silent Hill has been cancelled, Bizerta has no direct competition, giving a perfect chance for Nintendo and edrrox to offer something fresh with a classical approach.




#313165 Do you think nintendo should give special treatment to Bizerta:Silent Evil?

Posted by megafenix on 22 May 2015 - 12:25 PM in Wii U Games and Software

Considering that Capcom's Resident Evil franchise is following in the footsteps of shooters more and more, and that Konami has not only cancelled the promising Silent Hill game (P.T.) but also decided to focus its business more on the mobile market, do you think this is a perfect opportunity for Nintendo to give special treatment to Bizerta: Silent Evil, just like they did for Bayonetta 2, by announcing it at events like E3 or Gamescom and helping with the advertising?

 

The game might be an indie game, but the screenshots and story background look promising; it feels like a mix of classic Resident Evil and Silent Hill elements (though it's unlikely to use pre-rendered backgrounds like the classic Resident Evils and spoil the free movement and camera).

 

When I saw this

[image: Bizerta: Silent Evil screenshot]

 

it took me back to this

 

The game might be an indie game, but the screenshots reveal a good horror atmosphere that reminds us of how the Resident Evil and Silent Hill games started.

[images: Bizerta: Silent Evil screenshots]

 

The only thing I don't like is that it seems there won't be a physical release. So, what are your thoughts?




#311532 Next Level Wii U game in development *rumor*

Posted by megafenix on 05 April 2015 - 11:26 AM in Wii U News

Interesting, this reminded me of a certain game from Level-5 that was supposedly going to be launched by the end of 2014, but nobody knows what happened to it.





#311531 Zelda Wii u DELAYED until 2016

Posted by megafenix on 05 April 2015 - 11:20 AM in Wii U News

I wouldn't expect the Nintendo NX until 2017 or maybe even 2018. With the bad experience Nintendo had with the Wii U, it's obvious they will only launch the new console once they are ready and have good launch titles, both from themselves and from third parties.




#311530 Earthlock Wii U RPG

Posted by megafenix on 05 April 2015 - 11:10 AM in Wii U News

It's not trying to be Eternal Arcadia. Rogue Galaxy sorta was tho.

haha, I never knew of that game. It's true, I even see scenes that feel familiar.

[image: Rogue Galaxy]

 

[image: Fina and Vyse from Skies of Arcadia]

 

 

 

 

 

It's so sad that instead of a sequel we get this :(

 

[image: Vyse, Aika and Fina in Valkyria Chronicles]

 

 

I still kind of feel that Earthlock is at least inspired by these kinds of games, although I like neither the art style nor the characters. I hope the gameplay will make up for those things.




#311527 Earthlock Wii U RPG

Posted by megafenix on 05 April 2015 - 10:48 AM in Wii U News

It doesn't look that bad, but it surely will not be as good as Skies of Arcadia.

[image: Skies of Arcadia]

 

The characters in Skies of Arcadia are badass; these ones feel kind of meh. I see no determination in their eyes (well, maybe the girl passes, but her eyes look cold instead of determined).




#311525 Fatal Frame Wii U (this year)

Posted by megafenix on 05 April 2015 - 10:40 AM in Wii U Games and Software

I wonder if the game will preserve the erotic outfits :laugh: . I wouldn't mind, but I feel the outfits just ruin the essence of the game. How are we going to feel scared while our attention is focused on other things? :P




#311522 Wii U eDRAM vs x360 eDRAM

Posted by megafenix on 05 April 2015 - 09:25 AM in Wii U Hardware

Thanks for  old news we have topics on. No really we needed to be reminded because we can't remember anything from 4 days ago.

haha, sorry. To be honest I just watched the thing yesterday at night, since I didn't have time to do it before; it wasn't on purpose. Could you give me the link to the topic?

thanks




#311519 Wii U eDRAM vs x360 eDRAM

Posted by megafenix on 05 April 2015 - 08:41 AM in Wii U Hardware

Good news: in the last Nintendo Direct the new Fatal Frame V was confirmed for western release.

minute 24:40

 

I can't wait to play it





#310859 Wii U eDRAM vs x360 eDRAM

Posted by megafenix on 21 March 2015 - 08:49 PM in Wii U Hardware

Then get used to walls of those posts a lot in these topics.

Shin'en is,Nintendo of course will,PG did. Published games will and many indies. Big AAA games likely not as they abandoned Wii U but most of them not all but most are not worth it anway. Now smaller publishers that make some great but hidden gems kinda sucks no on Wii U. XCX and Zelda U make Wii U groan like a all night orgy baby.

Besides Shin'en, I would say that those behind Fatal Frame for the Wii U also did well. We also have to wait to see if Devil's Third's graphics have vastly improved as Itagaki claims, and I also hope the Wii U version of Shadow of the Eternals is still in the works, but right now we only know that Shadow of the Eternals development has started again at Quantum Entanglement Entertainment; that was confirmed last year on October 31.

http://www.polygon.c...t-entertainment




#310518 Wii U eDRAM vs x360 eDRAM

Posted by megafenix on 14 March 2015 - 11:23 AM in Wii U Hardware

3Dude, on 14 Mar 2015 - 1:20 PM, said:

You are making the mistake of thinking bandwidth is the only solution, when your quote itself states '/memory trafficking'.

Nintendo uses low latency for its deferred rendering. Nintendo doesnt need high bandwidth, to send over massive piles of data that build up during latency waits, because there is no waiting on latency. The wii u is also not as powerful as those systems, and doesnt need to trafic a fraction of what GCN does.

Also, that bandwidth is operational bandwidth, it is used specifically FOR calculating each pixel. Yes, you are still confusing operational bandwidth, with what the edram does, which is store and transport. The very fact you keep making quotes ABOUT OPERATIONAL BANDWIDTH, and then referencing it to a storage pool is proof of that.

 

 

 

Nope, as I said before, I consider both latency and bandwidth important factors. Nevertheless I am focusing more on memory bandwidth, since it's the thing being most underestimated about the Wii U eDRAM (take for example Shy Guy's quote). That's why I brought up topics like deferred rendering, g-buffer quotes from Shin'en like the triple 720p buffering + g-buffer (for deferred rendering) + intermediate buffers, and of course Crytek's quote about the memory bandwidth requirements of deferred rendering being a burden even for the Xbox One's 200GB/s eSRAM.




#310516 Wii U eDRAM vs x360 eDRAM

Posted by megafenix on 14 March 2015 - 11:10 AM in Wii U Hardware

 

First off, I never said bandwidth was not important, stop trying to put words in my mouth to make your broken arguments look better, I said there are 3 important factors to ram performance, and you generally only get to focus on 2 in the real world. A design with two high performing peices to that factor will always outperform a design with only one high performing factor like bandwidth which you are obsessing on, like the 360 edram had, will suck because of having crappy latency and low capacity.

Fast racing neo, and every engine nintendo uses on wii u is a deferred rendering engine, and Nintendo has been heavy on multipass since the gamecubes tev, which was an 8 pass, and the wii's was 16 passes. Your quote is about a forward rendering engine. Your quote also heavily relies on bandwidth because its about a system design where you have to travel across a high latency bus to get the data. The lower the latency, the more often you can send data, instead of having to wait. Most engines are designed around pc gpu's, the latency is horrible, they have to wait many cycles every time, so they need to send as much data as possible at one time to catch up when they get an opening. Although I shouldnt have made this explanation, as it will only confuse you because you still dont understand the difference between operations bandwidth and transportation bandwidth.

The wii u edram, like the cube and wii before it, is designed for minimal to no waiting on latency. When you can constantly send data whenever you want without having to wait, you dont desperately need super high bandwidth to play catch up.

You are still confusing operational bandwidth with bandwidth for transporting/holding data. The ram attached to the logic on the wii u gpu has very very high operational bandwidth as well. It cant be used to transport data across and between the system. Its operational bandwidth. Its for operations. That is the ram and the bandwidth your quote is tlking about, the 32Mb edram is NOT THAT. The wii u's edram doesnt render the image or rasterize it, it does NOT do calculations per pixel, thats the simd engines job, and it has its OWN MEMORY attached to the logic for that purpose. the Edram pool just holds the finished product, a 3.6 Mb image, for 60 fps that image needs 216Mb/s X3 thats 648Mb/s in bandwidth, to move that data, Half a Gb. those high operational bandwidth operations talked about in your quote, are handled by the rops and simd engines, which have their own memory attached directly to the logic, then they SEND the FINISHED product, to the edram, which any part in the system can access from there. The edram is a bucket, its a scratch pad, it does NOT do render operations. Your quote is talking about the bandwidth required to do render operations, like per pixel lighting calculations. That has NOTHING TO DO WITH THE EDRAM.

 

 

 

And when did I say the eDRAM was used for rendering or rasterizing?

I was merely talking about memory bandwidth and latency as separate topics; I never made a direct comparison between the two. The mention of deferred rendering was because the technique requires a g-buffer, and g-buffers are well known to be very hungry for memory bandwidth.

here

https://hacks.mozill...ferred-shading/

"

Deferred Shading

Deferred shading takes a different approach than forward shading by dividing rendering into two passes: the g-buffer pass, which transforms geometry and writes positions, normals, and material properties to textures called the g-buffer, and the light accumulation pass, which performs lighting as a series of screen-space post-processing effects.

// g-buffer pass
foreach visible mesh {
    write material properties to g-buffer;
}

// light accumulation pass
foreach light {
    compute light by reading g-buffer;
    accumulate in framebuffer;
}

This decouples lighting from scene complexity (number of triangles) and only requires one shader per material and per light type. Since lighting takes place in screen-space, fragments failing the z-test are not shaded, essentially bringing the depth complexity down to one. There are also downsides such as its high memory bandwidth usage and making translucency and anti-aliasing difficult.

"

 

Crytek also mentions the memory bandwidth problems with deferred rendering (the technique requires a g-buffer):

http://wccftech.com/...andwidth-gains/

"

Crytek Shares a Secret Method for Utilizing Xbox One eSRAM’s Full Potential – Resulted In High Bandwidth Gains
Recently, GamingBolt published a snippet of their interview with Crytek’s US Engine Business Development Manager Sean Tracy. Talking about utilization of CryEngine with tiled textures, Tracy talked about the role of Xbox One eSRAM in saving ‘big’ bandwidths, and shared a secret method that the Ryse development used to unlock Xbox One eSRAM’s full potential. He said:

 

This technique helped the developer a lot in optimizing Ryse: Son of Rome on Xbox One as it resulted into high bandwidth gains and allowed the development team to use just a single compute shader for lighting and culling.

“CryEngine has a unique and novel solution for this and was shipped with Ryse. One of the problems when using Deferred Shading is that it’s very heavy on bandwidth usage/memory traffic. This gets exponentially worse as overlapping lights cause considerable amounts of redundant read and write operations. In Ryse our graphics engineers created a system called tiled shading to take advantage of the Xbox One.”

“This splits the screen into tiles and generates a list of all the lights effective each title using a compute shader. It then cull’s light by min/max extents of the tile. We then loop over the light list for each tile and apply shading.”

 

“In practice this made for the biggest bandwidth save we could have hoped for, as just reading the Gbuffer once and writing shading results once at the end for each pixel. Only a single compute shader was used in Ryse for light culling and executing entire lighting and shading pipelines (with some small exceptions for complex surfaces like skin and hair).”

"

 

So, if even the Xbox One eSRAM's high memory bandwidth of 200GB/s can run into trouble with the bandwidth requirements of deferred rendering, enough for Crytek to come up with additional solutions, then how in the world could the Wii U eDRAM handle the bandwidth requirements of triple 720p buffering + g-buffer (for the deferred rendering) + intermediate buffers, all at 60fps, with less memory bandwidth than the Xbox One eSRAM?

 

And yeah, I already know that GPUs have their own tiny memories like local data shares, texture caches and such; that's precisely why I told Shy Guy that 500GB/s of memory bandwidth is not overkill for the GPU at all.

here

http://developer.amd...nsform-part-ii/

"

Why to use Local Memory?

Local memory or Local Data Share (LDS) is a high-bandwidth memory used for data-sharing among work-items within a work-group. ATI Radeon™ HD 5000 series GPUs have 32 KB of local memory on each compute unit. Figure 1 shows the OpenCL™ memory hierarchy for GPUs [1].


Figure 1: Memory hierarchy of AMD GPUs

Local memory offers a bandwidth of more than 2 TB/s which is approximately 14x higher than the global memory [2]. Another advantage of LDS is that local memory does not require coalescing; once the data is loaded into local memory, it can be accessed in any pattern without performance degradation. However, LDS only allows sharing data within a work-group and not across the borders (among different work-groups). Furthermore, in order to fully utilize the immense potential of LDS we have to have a flexible control over the data access pattern to avoid bank conflicts. In our case, we used LDS to reduce accesses to global memory by storing the output of 8-point FFT in local memory and then performing next three stages without returning to global memory. Hence, we now return to global memory after 6 stages instead of 3 in the previous case. In the next section we elaborate on the use of local memory and the required data access pattern.

 

"

 

So, if each local data share on an AMD HD 5000 GPU (the Wii U is based on something from the HD 4000 to HD 6000 range, since there is also a rumor about the E6760, not to mention that the HD 4000 through HD 6000 are all based on the RV770 architecture) can have as much as 2TB/s of memory bandwidth, why would an eDRAM with 500GB/s of memory bandwidth be overkill?

That's precisely why I told Shy Guy that 500GB/s of eDRAM memory bandwidth wouldn't be a problem for the GPU to handle.




#310471 Wii U eDRAM vs x360 eDRAM

Posted by megafenix on 13 March 2015 - 11:01 AM in Wii U Hardware

"

I dont know where the problem is Megafenix. The eDRAM in WiiU is rumored to be between 35 and 70 GB/s as much i heard. Thats enough. You dont need 500 GB/s... that would be an overkill. And if you remember correctly, Shin'en said in the interview with HD WARRIORS, that bandwidth is not the bottleneck of todays GPU's. Latency is the real problem, and that the WiiU is great at latency.

"

 

That wouldn't do, since the g-buffer for deferred rendering requires lots of bandwidth, and while Shin'en commented that bandwidth is not a problem on modern hardware, they were referring to the RAM, not the eDRAM.

https://dromble.word...he-wii-u-power/

"

When testing our first code on Wii U we were amazed how much we could throw at it without any slowdowns, at that time we even had zero optimizations. The performance problem of hardware nowadays is not clock speed but ram latency. Fortunately Nintendo took great efforts to ensure developers can really work around that typical bottleneck on Wii U. They put a lot of thought on how CPU, GPU, caches and memory controllers work together to amplify your code speed. For instance, with only some tiny changes we were able to optimize certain heavy load parts of the rendering pipeline to 6x of the original speed, and that was even without using any of the extra cores.

 

"

 

The comment is not referring to the GPU memory, and as you know, every GPU has its own memory called VRAM, and the Wii U eDRAM is basically that; system RAM is for other stuff. The bandwidth requirements depend on which techniques you use: with forward rendering you don't need that much memory bandwidth, but it costs you lots of processing power; with deferred rendering (confirmed by Shin'en for Fast Racing Neo) you save lots of processing power but require much more memory bandwidth. It's a trade-off.

here

http://jcgt.org/publ...02/04/paper.pdf

"

The size of the surface attribute buffer—the g-buffer—is typically 16 to 32 bytes per visibility sample in optimized high-quality real-time systems. The DRAM bandwidth consumed in writing this buffer, then reading it for each light pass is significant, even with only a single light pass. For example, a screen with a four-megapixel display, using four 24-byte samples per pixel at 60 Hz, would consume 46 GB/s of bandwidth, assuming only one lighting pass, just for the uncompressed g-buffer write and subsequent read. Thus, in practice, either anti-aliasing or pixel resolution (or both!) is often sacrificed to maintain high frame rates on economical hardware. This is perhaps the most serious issue with the technique, as low visibility sampling rates confound simple solutions to efficiently rendering partially transparent surfaces, edge anti-aliasing, and higher-dimensional rasterization.

 

"

That's just an example with one light pass. Even the Xbox One, with 200GB/s of eSRAM bandwidth, has trouble handling the framebuffer plus a g-buffer (Ryse: Son of Rome, for example, runs at 900p). 70GB/s of eDRAM bandwidth falls very short for the Wii U eDRAM to handle triple-buffered 720p + g-buffer + intermediate buffers in games like Fast Racing Neo and others as well.

 

As for 500GB/s being overkill: no, it wouldn't be. GPUs from the AMD HD 4000 series to current ones can actually handle terabytes per second of bandwidth, and I have the proof right in my pocket.




#310467 Wii U eDRAM vs x360 eDRAM

Posted by megafenix on 13 March 2015 - 08:47 AM in Wii U Hardware

I remember that twitter conversation.

Those things are all true, but they are made possible because of the higher capacity of the wii u edram, not because MS pr made a fake claim to a 256 Gb/s bandwidth that they couldnt use for anything they were trying to insinuate, and that we should use that as a basis for the wii u's bandwidth, which is what you constantly come back to say.

Your reasoning is that the new console must have better bandwidth than a half truth claims ms pr made about the 360, ignoring the fact its the Capacity and latency that are far, far, far more important in this case.

Latency is important, I have known that since the GameCube era, but bandwidth is an important factor too. You can read documentation about the importance of bandwidth for deferred rendering using the g-buffer, which is very demanding; do not ignore that fact either.




#310463 Wii U eDRAM vs x360 eDRAM

Posted by megafenix on 13 March 2015 - 07:05 AM in Wii U Hardware

 

The picture you posted blatantly shows you are wrong. In fact, its in the last part of the text you quoted.

The 360 did not have 256Gb of bandwidth to the edram, the edram had 256Gb of bandwidth to itself, and the rops, on the daughter die, across a bridge, from the rest of the system, if anything else wanted access to that edram, it had to cross the bridge at 32Gb/s for the request, and back again, waiting on the added latency of crossing the bridge to the daughter die. The 360's edram didnt have enough capacity, so it had send the framebuffer in over chunks, the latency was horrible, and the bandwidth sucked because there was a 32Gb a second bridge between the edram and the system.

Its not bandwidth that makes the wii u's edram smoke the 360's, its the fact it has 3x the capacity, is within immediate access of the entire system instead of gated off on a duaghter die, and so it had none of the latency.

There are 3 factors that affect the performance of ram, you want all three to be as high performing as possible, but realistically, you usually can only get to choose two that will be high performing, bandwidth is just one of them. Capacity and Latency are just as important, and depending on the situation, can be far more important than bandwidth.

Shinen uses that Wii u edram to store 3 720p framebuffers for fast racing neo (And thats only using half of it, and already over the 360's capacity a time and a half). The 360 couldnt dream of doing that even if its bandwidth on that edram and rops was 500 Petabytes. It doesnt the capacity, and it has too much latency.

 

 

Well, in the interview with HD Warriors you can clearly read that Shin'en mentioned they need 16MB for 1080p with double buffering, and recently they mentioned they are using three 720p framebuffers in Fast Racing Neo, but they did not say that takes 16MB; in fact a 720p framebuffer in the Wii U eDRAM, according to Shin'en, takes about 3.6MB each.

 

Here, you can confirm that on their Twitter:

https://twitter.com/...639073798500353

 

"

@eubank_josh Even having all three buffers in EDRAM would be no problem as they are only 3 x 3.6Mb of 32Mb.

 

"

 

That's about 10.8MB of eDRAM for the triple buffering with 720p buffers. So, what do they use the remaining 21.2MB of eDRAM for (not to mention that there is another tiny eDRAM of about 2MB, and 1MB of SRAM, according to the photo)?

Well, just keep reading and you will see that they also use the eDRAM for the g-buffer (you need this buffer to store the lighting information for deferred rendering), intermediate buffers and other stuff.




#310439 Wii U eDRAM vs x360 eDRAM

Posted by megafenix on 12 March 2015 - 08:36 PM in Wii U Hardware

and this is what pisses me off no one is really showing what it is capable of. I hope Retro next game is not a donkey kong game and something more realistic because the hater always use artstyle as to why Wii U games look good. I want something gritty to shut them up.

Well, you could show them games like Fatal Frame, Deus Ex: Human Revolution and such. A good example to show them that the Wii U is not old tech is this comparison between Final Fantasy with DirectX 11 improvements and Fatal Frame.

 

Ask them: if this is DirectX 11,

[images: Final Fantasy XIV: A Realm Reborn screenshots]

 

 

then is this DirectX 10?

[images: Fatal Frame screenshots]

 

 

enjoy some gameplay here

 

This is also a good example, but surely people would say that cartoony games don't count:

[image: Captain Toad: Treasure Tracker]




#310437 Wii U eDRAM vs x360 eDRAM

Posted by megafenix on 12 March 2015 - 08:11 PM in Wii U Hardware


 

I remember Shin en says they use about 16Mb for three 720p frame buffers so there is no screen tearing, and have tons of room to use the rest to preload a bunch of stuff, archive a bunch of stuff, use it as a cpu scratchpad...

The Wii u's edram design is a lot different than the 360's, the 360's was on a seperate daughter die with the rops, it only got its full bandwidth with the rops on that die, any other interaction, access or sending data, had to cross the 32Gb/s bridge, and a lot of latency, to get the rest of the system.

THe wii u's is embedded on the same substrate with the rest of system, so it gets its full bandwidth without having to cross a bridge, and doesnt have any of the added latency that comes with that.

The publisher incumbents who decide what games get made, and where, simply dont want the wii u to be a player in their game.

Actually, what Shin'en mentioned is that 16MB of eDRAM on the Wii U is enough for 1080p with double buffering:

http://hdwarriors.co...ined-by-shinen/

 

"

Manfred Linzner of Shin’en:

Wii U eDRAM usage is comparable to the eDRAM in the XBOX360, but on Wii U you have enough eDRAM to use it for 1080p rendering.

In comparison, on XBOX360 you usually had to render in sub 720p resolutions or in mutliple passes.

Even if you don’t use MSAA (MultiSample Anti-Aliasing) you already need around 16Mb just for a 1080p framebuffer (with double buffering). You simply don’t have that with XBOX360 eDRAM. As far as I know Microsoft corrected that issue and put also 32MB of Fast Ram into their new console.

We use the eDRAM in the Wii U for the actual framebuffers, intermediate framebuffer captures, as a fast scratch memory for some CPU intense work and for other GPU memory writes.

Using eDRAM properly is a simple way to get extra performance without any other optimizations.

 

"

 

The Xbox 360 eDRAM+ROPs and other components had an internal bandwidth of 256GB/s; the bridge between the GPU and the eDRAM was limited to 32GB/s.


 

 

So obviously the Wii U eDRAM has more than 256GB/s, otherwise porting wouldn't be so easy; right now speculation puts it at 563.2GB/s or more. Not to mention that if the Xbox 360's 10MB was barely enough for 720p with double buffering, and 7.1MB of the Wii U's eDRAM is enough to hold 720p with double buffering, then clearly the Wii U eDRAM must pack much more bandwidth.
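The framebuffer sizes being argued over in this thread follow from simple arithmetic, assuming 4 bytes per pixel (RGBA8) and no MSAA, which lines up with both of Shin'en's figures:

```python
# Rough framebuffer-size arithmetic behind the numbers in this thread.

def fb_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 1e6   # size in MB

fb_720p  = fb_mb(1280, 720)    # ~3.7 MB, close to Shin'en's "3.6Mb" figure
fb_1080p = fb_mb(1920, 1080)   # ~8.3 MB

print(f"one 720p buffer:             {fb_720p:.1f} MB")
print(f"three 720p buffers:          {3 * fb_720p:.1f} MB")    # ~11 MB of 32 MB eDRAM
print(f"1080p with double buffering: {2 * fb_1080p:.1f} MB")   # ~16.6 MB, the "around 16Mb"
```

On the same assumption, a double-buffered 720p setup is about 7.4 MB, which is why it fits in the Wii U eDRAM with room to spare but strains the 360's 10 MB once depth and MSAA are added.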




#310393 Official Wii U specs and technical discussions thread

Posted by megafenix on 12 March 2015 - 07:54 AM in Wii U Hardware

It amazes me that some Eurogamer staff still call the Wii U DirectX 10-class tech when it has pretty much DirectX 11-equivalent features, like multithreaded rendering (Project CARS), compute shaders (per the report from Eurogamer's "secret developers", and the Wii U SDK documentation) and other stuff.

http://www.eurogamer...ategy-behind-it

"

In Theory: Nintendo's next-gen hardware - and the strategy behind it

Digital Foundry on the tech that best matches the Big N's revamped approach to console R&D.

 

By Richard Leadbetter Published 10/01/2015

 

 

The hardware make-up of Nintendo's last console is based on two key components -ancient PowerPC cores from IBM (the presence of which appears to have been dictated mostly by Wii back-compatability), along with DirectX 10-era graphics technology from AMD. While the Wii U was a power-efficient design, its PowerPC CPU architecture would be immensely difficult to scale down to mobile, while AMD left the kind of graphics tech utilised by the Wii U behind many, many years ago.

 

"

 

Not to mention that there are Wii U games that clearly use DirectX 11-equivalent features. Even if the GPU started as a customized RV770, we must remember that the HD 5000 and HD 6000 GPUs from AMD basically use the same architecture as the HD 4000 (RV770) GPUs, with small changes to improve performance and add DirectX 11 support. Since the Wii U GPU is customized, it's no wonder Nintendo made the necessary changes for next-generation graphics support, and this wouldn't be the first time: Nintendo also made modifications to CPU and GPU devices back in the GameCube era, and the GameCube CPU was different from the standard PowerPC 750 series of the time.




#307507 Ps4=Wii U in graphics IMO

Posted by megafenix on 23 January 2015 - 10:46 PM in General Gaming

Obviously not. The Wii U may be efficient, and the fixed-function stream cores may give a boost in performance, but it's still a step away in power from the Xbox One and PS4. Despite that, the Wii U can show pretty good next-generation-looking graphics if companies give it a try.

 

[images: Fatal Frame screenshots]

 

 

[image: Captain Toad: Treasure Tracker]




