What about physics? GC and Wii were fantastic. Just thought I'd add a physics thread.


30 replies to this topic

#21 3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 24 July 2012 - 11:55 AM

So, I was right? :laugh:

Don't take this too seriously, those are just opinions. If you get irritated by something posted on a forum then... well.


Yes, your observations were correct. Until you began claiming the Cube could perform tasks that require programmable shaders, like unified lighting and shadowing.

If you had read further into what you were responding to, you would have realized I dumped a metric ton of hard supporting evidence for your observations.

But just because I was impartial and also stated the one advantage the original box had over the Cube and Wii, you freaked.


#22 Grooseland

    Spear Guy

  • Members
  • 90 posts

Posted 24 July 2012 - 12:05 PM

Yes, your observations were correct. Until you began claiming the Cube could perform tasks that require programmable shaders, like unified lighting and shadowing.

If you had read further into what you were responding to, you would have realized I dumped a metric ton of hard supporting evidence for your observations.

But just because I was impartial and also stated the one advantage the original box had over the Cube and Wii, you freaked.


Well, let me ask you this, and it's an honest question because I really don't know:

What's the difference between the GameCube's TEV and the programmable shaders that the Xbox has? I've heard of both of them but don't really know what specific function each performs.

Also, I didn't "claim" anything or "freak" LOL... and as you said, those were my observations. Since you seem to know the Xbox well, I ask you the question above.

#23 3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 24 July 2012 - 03:12 PM

Well, let me ask you this, and it's an honest question because I really don't know:

What's the difference between the GameCube's TEV and the programmable shaders that the Xbox has? I've heard of both of them but don't really know what specific function each performs.

Also, I didn't "claim" anything or "freak" LOL... and as you said, those were my observations. Since you seem to know the Xbox well, I ask you the question above.


Not specifically state. You didn't specifically state it. The problem was you quoted what I said, which was very specific, and then sweepingly stated the Cube could run those games. Which it just can't. It can have higher poly environments, more texture layers, higher res textures, more objects, bigger areas, but it can't achieve the unified lighting and shadowing engines that gave those games the signature look of their time.

The TEV is a fixed-function rendering environment. That means it was supplied with all the effects developers would use for games, on the hardware itself, and that is all it would have for the whole system's life.

Fixed function does enjoy the benefit of being blazing fast at its supplied functions, though, with minimal resource overhead per effect.

The Cube's flagship effect was environment-mapped bump mapping (EMBM). That was no surprise, really; it was definitely a smart decision at the time. EMBM is so incredibly versatile it's almost a shame it's labeled as bump mapping. It's not very good at traditional bump mapping, like dot3 normal mapping, so you wouldn't be able to use it throughout your scene for, say, rock or tree bark. It works best on a highly reflective material, like metal, glass, ice, or water.

EMBM can offset the pixels of any texture, or even the rendered scene itself, according to the data in a heightmap. Utilizing a cube map, a texture either made procedurally by snapping a pic of the game world from a certain perspective or premade by the devs, and applying it to the surface of a material, EMBM could make wonderfully convincing reflective and high-specular surfaces.
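
(To make that concrete, here is a minimal C sketch of the EMBM idea, not any real GX/Flipper API: offsets read from a bump (du/dv) map, scaled by a 2x2 matrix, warp the coordinates used to sample the environment map. All names and the toy texture data are illustrative.)

    #include <stdio.h>

    typedef struct { float u, v; } UV;
    typedef struct { float m00, m01, m10, m11; } Mat2;

    /* Toy du/dv bump map: a signed offset per texel (placeholder data). */
    static UV sample_dudv(UV uv) {
        UV d = { 0.03f * uv.v - 0.015f, 0.03f * uv.u - 0.015f };
        return d;
    }

    /* Toy environment map: a gradient standing in for a reflection. */
    static float sample_env(UV uv) {
        return 0.5f * (uv.u + uv.v);
    }

    /* Core of EMBM: warp the env-map lookup by the scaled bump offsets. */
    static float embm_pixel(UV base, Mat2 m) {
        UV d = sample_dudv(base);
        UV warped = {
            base.u + m.m00 * d.u + m.m01 * d.v,
            base.v + m.m10 * d.u + m.m11 * d.v
        };
        return sample_env(warped);   /* the "bumpy" reflective result */
    }

    int main(void) {
        Mat2 identity = { 1.0f, 0.0f, 0.0f, 1.0f };
        UV uv = { 0.25f, 0.75f };
        printf("flat %.3f  bumped %.3f\n", sample_env(uv), embm_pixel(uv, identity));
        return 0;
    }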

But it was also used for so much more. The distortion effect from a charge shot in Metroid Prime? EMBM. Heat distortion in Wind Waker? EMBM. Reflections on the moving mirror platforms in Mario Sunshine? EMBM. The slick bump mapping on the frog mini-boss in Twilight Princess? EMBM. Glass reflections and refractions? EMBM.

It's best known, though, for the signature Cube water effect. Since this was one of the Cube's fixed functions, it was blazing fast. The Xbox, being a programmable shader environment, COULD of course do EMBM. And it did. But for things like water, it was noticeably slower and choppier, and literally ran at a different frame rate than the rest of the game, easily observable in Splinter Cell: Chaos Theory on the box.

Anyway, the Cube was fixed with a certain number of effects. In order to create new-looking ones, devs could pass the processed image and effects through the TEV's stages, with each pass mixing and matching various functions to get a desired look.

So, you have 16 stages. Each stage takes 4 inputs and spits out 1 output (your pixel color). You put in the multiple images with the effects you want to combine, and it mixes them in.
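
(A rough C model of what one such combiner stage does, assuming the commonly documented TEV combine form out = d + (1 - c)*a + c*b, with bias, scale, and clamping omitted; the type and function names here are illustrative, not Nintendo's API.)

    #include <stdio.h>

    typedef struct { float r, g, b; } Color;

    /* One TEV-style stage: blend inputs a and b by c, then add d. Each
       input can be a texture sample, a constant register, or the previous
       stage's output. */
    static Color tev_stage(Color a, Color b, Color c, Color d) {
        Color o = {
            d.r + (1.0f - c.r) * a.r + c.r * b.r,
            d.g + (1.0f - c.g) * a.g + c.g * b.g,
            d.b + (1.0f - c.b) * a.b + c.b * b.b
        };
        return o;
    }

    int main(void) {
        Color tex  = { 0.8f, 0.2f, 0.2f };  /* texture sample        */
        Color lit  = { 0.4f, 0.4f, 0.4f };  /* vertex lighting color */
        Color half = { 0.5f, 0.5f, 0.5f };  /* blend factor          */
        Color zero = { 0.0f, 0.0f, 0.0f };
        /* Stage 1 blends texture with lighting; a later stage could take
           this output as one of its own four inputs, and so on, chaining
           up to 16 stages to build composite effects. */
        Color out = tev_stage(tex, lit, half, zero);
        printf("%.2f %.2f %.2f\n", out.r, out.g, out.b);
        return 0;
    }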

However, it's not very flexible. Fixed-function pipelines were being rapidly replaced by programmable shader pipelines around the time the Cube came out.

Programmable shader pipelines can be programmed to shade pixels (mix colors) in any manner the developers want; they aren't hard-coded, and so can feature any effect the devs can get running.

"Shaders replace a section of video hardware typically called the Fixed Function Pipeline (FFP), so-called because it performs lighting and texture mapping in a hard-coded manner. Shaders provide a programmable alternative to this hard-coded approach." [1]

(from Wikipedia)

This wasn't a very big deal during the Cube's lifetime, since the functions chosen for it were good for its time and place, and it was able to use the overhead saved by a simple fixed-function environment for more polygons, more characters, larger areas.... The Cube's stock of graphical effects didn't wear thin until the system was ready for retirement.

The problem was that when it came to the Wii, Nintendo used the exact same fixed graphical library as the GameCube. This is why many devs kept saying things like 'the Xbox is still more powerful in some ways' despite the Wii's ability to run circles around the box in everything but programmable shaders.

In order to get programmable shader effects, things like dot3 normal mapping, which wasn't a supplied effect (the Cube/Wii had EMBM and emboss mapping instead), devs would have to emulate a programmable shader pipeline within the game engine. Simply put, the large majority of devs don't know how to do that, and it would have a huge resource overhead.

The problem wasn't so much keeping the Wii fixed function, but keeping the same effects library. For example, look at the 3DS: it's fixed function, but equipped with a modern graphical feature set, which gives it games like Revelations.


#24 Desert Punk

    Chain Chomp

  • Members
  • 656 posts

Posted 26 July 2012 - 05:05 AM

Clearly the Gamecube has 43 megabytes of memory: 24 MB of main memory, a 2 MB frame buffer, 1 MB of video cache, and the very slow 16 MB of buffer memory.

The idea that the GPU of the Gamecube is more advanced is truly laughable; isn't the Gamecube the equivalent of a DX8 GPU compared to the DX9 of the Xbox? The Gamecube doesn't even have 32-bit colour to deal with transparency effects etc.

Gamecube games in the UK were locked to 50Hz PAL interlace or had a selectable 60Hz interlace mode. So in the UK at least, the Gamecube was capable of either 25 or 30 frames per second as a locked maximum, or alternatively 50 or 60 fields a second.

Not all Gamecube games supported progressive scan, even in America or Japan, as it required an expensive cable. Later Gamecubes didn't even feature a connector socket to attach a component cable for progressive scan.

In contrast, worldwide, the Xbox supported progressive scan for all games, allowing 60 frames per second. Some games were locked to only 30 unique frames per second and others weren't. Many Gamecube games were reviewed in interlace mode, and that's not 60 frames, that's 60 fields per second. The Xbox also supported 720p and 1080i games, which take a lot more resources than the 480p of those Gamecube games that supported it. The Xbox also supported 32-bit colour in those modes.

The amount of misinformation and bias by people here is staggering.

You don't need to match MHz to make effective use of different speeds of memory, GPU, and CPU. They have what is called cache memory, which allows them to process independently of the main memory. The idea that a PC has to have a GPU clocked at an exact division of its CPU's MHz is laughable, to be honest.

#25 Grooseland

    Spear Guy

  • Members
  • 90 posts

Posted 26 July 2012 - 07:32 AM

^Really like your Xbox, huh? :laugh:

#26 3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 26 July 2012 - 08:34 AM

Clearly the Gamecube has 43 megabytes of memory: 24 MB of main memory, a 2 MB frame buffer, 1 MB of video cache, and the very slow 16 MB of buffer memory.

The idea that the GPU of the Gamecube is more advanced is truly laughable; isn't the Gamecube the equivalent of a DX8 GPU compared to the DX9 of the Xbox? The Gamecube doesn't even have 32-bit colour to deal with transparency effects etc.

Gamecube games in the UK were locked to 50Hz PAL interlace or had a selectable 60Hz interlace mode. So in the UK at least, the Gamecube was capable of either 25 or 30 frames per second as a locked maximum, or alternatively 50 or 60 fields a second.

Not all Gamecube games supported progressive scan, even in America or Japan, as it required an expensive cable. Later Gamecubes didn't even feature a connector socket to attach a component cable for progressive scan.

In contrast, worldwide, the Xbox supported progressive scan for all games, allowing 60 frames per second. Some games were locked to only 30 unique frames per second and others weren't. Many Gamecube games were reviewed in interlace mode, and that's not 60 frames, that's 60 fields per second. The Xbox also supported 720p and 1080i games, which take a lot more resources than the 480p of those Gamecube games that supported it. The Xbox also supported 32-bit colour in those modes.

The amount of misinformation and bias by people here is staggering.

You don't need to match MHz to make effective use of different speeds of memory, GPU, and CPU. They have what is called cache memory, which allows them to process independently of the main memory. The idea that a PC has to have a GPU clocked at an exact division of its CPU's MHz is laughable, to be honest.


Lol whut?

Me: The Cube GPU used an older technology called fixed-function pipelines, which were rapidly being replaced by GPUs with programmable shader pipelines, like the one in the Xbox, around the time the Cube was released.

You: OMG!111!1!!!!!!! Teh bias and misinformation! How can you say the cubes gpu is more advanced1!!!q1q1!q!

Your fantasy world is out of control, dude. You aren't responding to things that were actually said anymore. You are actively reimagining reality in your off-kilter fantasy world, and rewriting what I've said in a manner so far removed that it's the exact opposite of what the person you are responding to actually said.

I'm not responding to this until you rewrite it to take into account the things that were actually said, instead of what the nonexistent, nonsensical doppelgangers in your mind said.


#27 Desert Punk

    Chain Chomp

  • Members
  • 656 posts

Posted 26 July 2012 - 11:12 AM

Your statement:

The biggest difference between the cube and the box is that the cube was just designed so much better as a gaming machine... While the xbox had a certain trump card. And it wasn't anything remotely close to higher clocked processors.

Clearly a nonsense statement by any standard. Both are designed as gaming machines, and the Xbox is far more advanced; part of this difference is due to a GPU and CPU that run approximately 50% faster and are also more advanced.

Your statement:

The Xbox had more ram than the cube, at 64 MB to the cube's 40 MB. Unfortunately, this didn't mean much since the ram was poor quality DDR SDRAM clocked at 200 MHz. The CPU was 733 MHz, and the GPU was 233 MHz. This alone is very poor design. Bottleneck city.

You aren't giving the Gamecube enough credit: it has 43MB in total. Poor quality DDR SDRAM? What does that even mean? It's just DDR memory, which is fast, and there is no important relationship between the CPU being 733MHz and the GPU being 233MHz, except that it might require an additional timing crystal on the circuit board. It's not a poor design at all. It can be argued that the Gamecube has the poor design, because it has only 24 megabytes of fast memory and 16MB of very slow memory. So in many ways games need to keep to that 24MB of memory to run at a good speed. The 16MB of memory is very, very slow, FACT. Effectively the Gamecube has 27MB of total effective fast memory for games and a buffered/cached optical drive, whereas the Xbox has 64MB of fast memory and 768MB of buffered optical drive using the hard drive as virtual memory. That's actually how much each game gets to cache critical data with the Xbox; it has 3 caches of data at any one time on the hard drive.

Your statement:

See, Hz is the measure of cycles performed per second. Since all three of these important components are clocked differently, one component could be ready with data to crunch for a new cycle, but the other component it needs is busy in the middle of one of its cycles.... And so cycles are wasted.

Clearly you have no understanding that the CPU and GPU have cache memory and so only pull in new data from main memory as they work through the cache. The bandwidth of the Xbox's shared memory is 6.4GB/s compared to 2.1GB/s on the Gamecube. Clearly the Xbox has some bandwidth overhead, and it has cache memory anyway. If the Xbox didn't have good memory bandwidth it wouldn't be able to achieve graphics like this at 720p or 1080i, which require memory bandwidth that neither the Gamecube nor the Wii are capable of, because they only have a 2MB frame buffer, and 720p or 1080i in 32-bit colour requires more memory than that.



There are other statements you have made but I can't be bothered to answer all of them.

^Really like your Xbox, huh? :laugh:


I like just about all consoles because I'm into gaming. Arguing for factual accuracy doesn't make someone biased. There are some truly fantastic gaming experiences on the Gamecube that the Xbox never had, like Zelda: Wind Waker, which make the console worth owning, but facts are still facts.

#28 3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 26 July 2012 - 03:23 PM

1. Sorry guy, those are all facts.

2. You don't know what a frame buffer is. The Xbox isn't more powerful because it has a bigger frame buffer. It can simply store a higher resolution image for the animation frame. You aren't going to get more polygons, higher resolution textures, better EMBM performance, more simultaneous animation, or ANYTHING I've mentioned, of which not a single word has been about resolution, from a frame buffer. The frame buffer doesn't even have ANYTHING to do with RENDERING in HD. It's simply where you keep rasterized images on their way to the screen. The PS2 could output in 1080i. Are you going to start making some asinine argument about the PS2 being more powerful than the Cube?

3. Clearly it's the term buffer you don't understand, because every time you mention it you screw something up. A cache that's called a 'buffer' isn't a magical piece of happiness that means you don't have to worry about component interfacing. It means you have a buffer before the next thing that doesn't go right sends you to hell in a handbasket. Your buffer space can be used up RAPIDLY, particularly if you don't have good I/O between components that NEED the data. And then you have a thrashed cache and a bottleneck. Though since you brought it up, I might as well tell you the Cube's L2 cache on its CPU is 2x the Xbox's.... Oh noes!

You are incorrect about the bandwidth. What you stated was the peak bandwidth, which is completely and absolutely worthless information to anyone who's actually done any of this. Peak bandwidth is nothing more than a theoretical maximum, and it has never, EVER EEEEVVVVEEEERRRR been reached in ANY system. The only reason the box has a peak bandwidth of 6GB/s is because of its capacity.

What reality needs is something called SUSTAINABLE bandwidth, and how close you can get your sustainable bandwidth to your peak bandwidth is what your performance is going to be. (Hint: the Xbox wasn't remotely close.)
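
(For reference, the 'peak' figure is nothing but arithmetic on raw specs, as this C sketch shows, using the commonly cited 128-bit, 200 MHz double-pumped numbers for the Xbox's memory bus. Sustained throughput depends on access patterns and latency and cannot be computed this way.)

    #include <stdio.h>

    /* Peak bandwidth: (bus width in bytes) * clock * transfers per clock. */
    static double peak_gb_per_s(int bus_bits, double clock_mhz, int pumps) {
        return (bus_bits / 8.0) * clock_mhz * 1e6 * pumps / 1e9;
    }

    int main(void) {
        /* 128-bit DDR at 200 MHz, double pumped: the oft-quoted 6.4 GB/s. */
        printf("Xbox peak: %.1f GB/s\n", peak_gb_per_s(128, 200.0, 2));
        return 0;
    }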

And one of the many things that's important for SUSTAINABLE bandwidth is not so much HOW MUCH ram you have, but WHAT KIND of ram you have. Which is why Nintendo opted for MoSys's expensive 1T-SRAM for its main memory.

Which has a sustainable latency of 6.2 ns.

The Xbox used GDDR1 SDRAM. At 200 MHz, that's around 20-30 ns sustainable latency.

Uh oh, Sparky, all the double pumping in the world's not gonna save that one. Better start slashing HUGE chunks off that peak theoretical bandwidth. Especially for lag-inducing things like non-repeating textures, which are just decimating to keeping latency low. You know, in games like Metroid Prime, where Retro went on record stating no 2 rooms in MP reused the same texture.

And while you are trying to erroneously call people out about misinformation, why don't you bring up how you lied through your teeth about every Xbox game being at bare minimum 480p? Or I can snap a pic of the backs of some of my Xbox game cases with a conspicuous missing 'x' from the box for the 'HDTV 480p' tab.

Or why don't you tell people how there are only around 7 whopping titles out of the entire Xbox library that supported 1080i, featuring such graphical stunners as the Atari Flashback anthology and the pre-rendered backgrounds of Syberia.


Being designed to be a gaming machine and being a WELL designed gaming machine are entirely different things.

I love situations like this. I have the Nintendo guy calling me an Xbox fan and you calling me a Nintendo fan.

Lets me know I'm right.

Edited by 3Dude, 26 July 2012 - 03:26 PM.


#29 Dwarphkin

    Red Koopa Troopa

  • Members
  • 62 posts

Posted 27 July 2012 - 07:25 AM

The original Xbox was designed like a PC, though; the GameCube wasn't. Also, MHz doesn't matter either, given that the two systems use two different CPU architectures: the Cube was PowerPC based and the Xbox was Intel based, so I don't even know why MHz were even compared. Also, y'all talk about game worlds being small on Xbox; ever heard of a game called San Andreas? I'm pretty sure (if you have a modded Xbox, like me) you could spawn hundreds of pedestrians. There was no loading between areas of the map in San Andreas for Xbox (aside from entering and exiting buildings). It was essentially the PC version, tbh, which I'm pretty sure couldn't even run on the GameCube. Another thing: the GameCube had 1.5 GB discs; the radio stations alone take up twice that. Plus, EVEN if Nintendo went with regular sized discs, Rockstar probably would've had to completely redesign the control scheme.

Edited by Dwarphkin, 28 July 2012 - 06:17 PM.

I sometimes wish life was like Fallout 3 ^-^

#30 Desert Punk

    Chain Chomp

  • Members
  • 656 posts

Posted 30 July 2012 - 04:56 AM

@3Dude

The frame buffer is where the image is stored; it's the location where the GPU renders the image before output. This is the restriction that prevents the Gamecube/Wii from going above 480p.

There may not be many 720p or 1080i Xbox games, but it has infinitely more than the Gamecube and Wii, which have 0. If all 720p and 1080i games were pre-rendered you'd have a point, but games like Soul Calibur 2 and the ATV games etc are full action games, so your comments make little sense and show complete bias.

Yes, the PS2 is more powerful than the Gamecube in many ways, that is reality, but taken overall it's weaker. The idea that one console exceeds another in every area is incredibly stupid. Most consoles are both good and bad in specification when compared within the same generation. That is reality, deal with it. The PS2 has 5.1 sound support, 32-bit colour, very high polygon output (but rubbish texturing), 4MB of video memory not 3MB, 1080i support, plus some eccentric but capable support processors that take the burden off its rather weak MIPS CPU.

The latency of the Wii/Gamecube memory is included in the superior memory bandwidth, which I've already stated the Wii/Gamecube has, but remember the reason it has superior memory bandwidth is because video memory and main memory are separated. The Xbox shares its main memory, which creates higher latency. As long as you keep to the 24MB of main memory of the Wii/Gamecube and the 3MB of video memory, you have high bandwidth. As soon as you start moving data between main memory and the GPU, it clearly slows down considerably. The cache is there to prevent latency being an issue, and it works.

What is the point of pretending the Gamecube is more powerful than it is? Clearly it has no hard drive, limited memory, small capacity optical discs, 2-channel sound, 24-bit colour, and low resolution output. I just don't get the need to pretend something is more powerful than it is. And by direct comparison of GPU and CPU etc, it's still much weaker.

#31 3Dude

    Whomp

  • Section Mods
  • 5,482 posts

Posted 30 July 2012 - 08:35 AM

@3Dude

The frame buffer is where the image is stored; it's the location where the GPU renders the image before output. This is the restriction that prevents the Gamecube/Wii from going above 480p.

There may not be many 720p or 1080i Xbox games, but it has infinitely more than the Gamecube and Wii, which have 0. If all 720p and 1080i games were pre-rendered you'd have a point, but games like Soul Calibur 2 and the ATV games etc are full action games, so your comments make little sense and show complete bias.

Yes, the PS2 is more powerful than the Gamecube in many ways, that is reality, but taken overall it's weaker. The idea that one console exceeds another in every area is incredibly stupid. Most consoles are both good and bad in specification when compared within the same generation. That is reality, deal with it. The PS2 has 5.1 sound support, 32-bit colour, very high polygon output (but rubbish texturing), 4MB of video memory not 3MB, 1080i support, plus some eccentric but capable support processors that take the burden off its rather weak MIPS CPU.

The latency of the Wii/Gamecube memory is included in the superior memory bandwidth, which I've already stated the Wii/Gamecube has, but remember the reason it has superior memory bandwidth is because video memory and main memory are separated. The Xbox shares its main memory, which creates higher latency. As long as you keep to the 24MB of main memory of the Wii/Gamecube and the 3MB of video memory, you have high bandwidth. As soon as you start moving data between main memory and the GPU, it clearly slows down considerably. The cache is there to prevent latency being an issue, and it works.

What is the point of pretending the Gamecube is more powerful than it is? Clearly it has no hard drive, limited memory, small capacity optical discs, 2-channel sound, 24-bit colour, and low resolution output. I just don't get the need to pretend something is more powerful than it is. And by direct comparison of GPU and CPU etc, it's still much weaker.


1. Incorrect. The frame buffer is where an image that has been RASTERIZED is written to, not rendered to; no rendering happens anywhere in it. The images in a frame buffer are LITERALLY nothing more than a bitmap. The frame buffer simply has to be big enough to hold a bitmap of the specified resolution. It has near NOTHING to do with system power.
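
(The arithmetic here is trivial; a quick C sketch of bitmap storage cost, which shows why a roughly 2 MB embedded frame buffer fits a 480p image but not a 720p one, while saying nothing about rendering power. Z buffering and double buffering are ignored.)

    #include <stdio.h>

    /* A frame buffer holds a bitmap: width * height * bytes per pixel. */
    static double fb_mb(int w, int h, int bytes_per_pixel) {
        return (double)w * h * bytes_per_pixel / (1024.0 * 1024.0);
    }

    int main(void) {
        printf("640x480,  32bpp: %.2f MB\n", fb_mb(640, 480, 4));    /* ~1.17 */
        printf("1280x720, 32bpp: %.2f MB\n", fb_mb(1280, 720, 4));   /* ~3.52 */
        printf("1920x1080,32bpp: %.2f MB\n", fb_mb(1920, 1080, 4));  /* ~7.91 */
        return 0;
    }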

2. Soul Calibur is a fighting game with 2 characters in a tiny arena. It's also indistinguishable from its counterparts in assets, meaning it doesn't even attempt to make use of the Xbox's power. msx also has minuscule overhead compared to the Xbox's best looking games like Riddick or Doom, which feature a unified lighting and shadowing engine. Tiny resource overhead is a trait common to ALL 720-1080 box games.

3. Read up on the difference between theoretical peak performance and sustainable performance, as it's very clear this is not in your knowledge base, made incredibly obvious by your completely erroneous guess that the Xbox bandwidth you mentioned is sustainable, when in reality it is simply the result of the formula for peak bandwidth from raw specs. The Cube bandwidth you supplied is ALSO peak theoretical bandwidth, so sustainable bandwidth on the Cube is ALSO lower than the fantasy bandwidth you posted, although, since the Cube has 90% ram efficiency, it's not much lower, unlike the Xbox, which loses entire GIGABYTES worth of bandwidth from its theoretical peak. On the other hand, 6.2 ns IS the SUSTAINABLE latency of the Cube's 1T-SRAM. That IS its slowest performance under the most DEMANDING of circumstances; its PEAK burst latency is 2 ns. Yeah, it was THAT FAST, almost like it's single transistor ram or something. Probably why the Cube's 24MB of ram costs more than the Xbox's 64MB.

4. LMAO, don't even TRY to pull that card. Before you even BEGAN responding to me I was busy telling your Nintendo-fan counterpart to stop cherry picking bad Xbox screens, and that no matter what Rogue Squadron does, the Cube will never be able to match the box's programmable shader environment. Something, hilariously, you have NEVER mentioned. How can you not mention that?

Though since you made it a point to showcase your faux impartiality, maybe you should let people know the vast majority of PS2 games can only perform 5.1 surround during FMV cutscenes, not during gameplay, and that the few that do utilize one of the vector processors, destroying even more of the PS2's poly performance on TOP of the lack of hardware S3 texture decompression already destroying PS2 poly throughput.

Or that the Cube has a dedicated processor to handle sound, so whatever resources are taken away from the PS2 and the box for sound, the Cube gets to keep and use.

Maybe you should also let people know the 3MB of embedded video memory in the Cube's GPU is ALSO 1T-SRAM... meaning it smokes the ever living crap out of the PS2's 4MB of stock eDRAM in every category but capacity.

The only person playing checks and balances before this post of yours was me. This tells me you've been plugging the stuff I've been saying into Google and can't get around it.

The only person here who is pretending things are what they aren't is you.

I already mentioned the Cube's lack of ROM space in my argument about what the box has that the Cube doesn't. Which, by the way, doesn't have anything to do with system power.

I already mentioned the hard drive in the same argument about what the box has that the Cube doesn't, which again, has nothing to do with system power.

I already mentioned the box's ability to render in 720p and above. Unlike you, I didn't purposefully leave out that the increase in image quality comes at the expense of graphical fidelity.

The Cube has 32-bit RGBA color. It has no problems with transparency whatsoever, which should be incredibly obvious since EMBM is a hardware-supplied effect for the system and extensively used.... Something you should have seriously considered before peddling that lie.

And the best looking, most demanding Cube games are ALL 480p, the exact same resolution as ALL of the best looking, most demanding Xbox games. So once again, you are caught trying to peddle another lie... humorously, after being forced to admit that only a small handful of Xbox games render above 480p (and even fewer than that actually RENDER in HD; as on the 360, most are rendered sub-HD and simply upscaled).

And of those best looking games, ALL the Cube games come out on top in polycount, texture resolution, simultaneous character counts, scale, unique texture tiles, and texture layers (the Cube supports 8, to the box's 6).

What does the Xbox have? Programmable shaders.

Real time lighting, from movable light sources, unified with real time shadowing, interacting with dot3 normal mapping.

What does this do? Well, it's a real slap-you-in-the-face kind of graphical power.

For a direct comparison: in Metroid Prime, shots 'light up' the area and characters around them in a localized sphere around the projectile.

All this does is use THIRTY TWO BIT COLOR to provide alpha blending with the color of the projectile light, blending the color of everything in the localized area with the color of the projected light. It just tints everything it comes across a transparent yellow or purple or whatever color your beam is. No highlights, no shadows (environment shadows and highlights are all baked in Prime).
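
(That tint is just the standard alpha-blend equation with the beam color as the source; a minimal C sketch, with illustrative names and values.)

    #include <stdio.h>

    typedef struct { float r, g, b; } RGB;

    /* Classic alpha blend: out = beam * alpha + scene * (1 - alpha).
       A pure color shift; no highlights or shadows are produced. */
    static RGB tint(RGB scene, RGB beam, float alpha) {
        RGB o = {
            beam.r * alpha + scene.r * (1.0f - alpha),
            beam.g * alpha + scene.g * (1.0f - alpha),
            beam.b * alpha + scene.b * (1.0f - alpha)
        };
        return o;
    }

    int main(void) {
        RGB scene = { 0.3f, 0.3f, 0.3f };
        RGB beam  = { 1.0f, 0.9f, 0.1f };  /* yellow charge shot */
        RGB out = tint(scene, beam, 0.4f);
        printf("%.2f %.2f %.2f\n", out.r, out.g, out.b);
        return 0;
    }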

On the box, in, say, Doom 3, when an imp launches a fireball, the same concept is in play. However, thanks to programmable shaders, the player is treated to a spectacular display.

From the moment the imp begins growing a fireball in its hands, real time shadows begin dancing around the area (that's because all the lighting is in real time and not baked). Real time lighting radiates out from the fireball, casting shadows from the imp, its hand, fingers, head, the environment, barrels, other enemies, the player; everything casts a shadow from this real time light source onto EVERYTHING, and those things cast shadows on EVERYTHING ELSE.

Then there is the per-pixel lighting, also emanating from the glow of the fireball: not just real time vertex lighting, but per-pixel lighting of dot3 normal maps, lighting up every bumpy skin pore, scar, wart, and scale in real time, highlights rising across the surface of a texture, rippling over normal map bumps, shadows following suit.
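
(The core of that dot3 effect is a per-pixel Lambert term, N dot L, with the normal fetched from a normal-map texture; a minimal C sketch, with illustrative names and values.)

    #include <stdio.h>

    typedef struct { float x, y, z; } Vec3;

    static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Decode an RGB normal-map texel (0..255 per channel) into a vector. */
    static Vec3 decode_normal(unsigned char r, unsigned char g, unsigned char b) {
        Vec3 n = { r / 127.5f - 1.0f, g / 127.5f - 1.0f, b / 127.5f - 1.0f };
        return n;
    }

    /* Lambert term for one pixel lit from direction l (assumed unit length):
       this per-pixel N.L is what makes pores and scales catch the light. */
    static float intensity(Vec3 n, Vec3 l) {
        float d = dot3(n, l);
        return d > 0.0f ? d : 0.0f;
    }

    int main(void) {
        Vec3 n = decode_normal(140, 150, 245);   /* a mostly "up" texel  */
        Vec3 l = { 0.0f, 0.0f, 1.0f };           /* light toward viewer  */
        printf("intensity: %.2f\n", intensity(n, l));
        return 0;
    }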

And then it LAUNCHES the fireball, and shadows and lights grow and fade and dance around as the projectile races towards the player.

THAT'S what the Xbox has that the Cube, and Wii, could never do.

Something you have, somehow, spectacularly failed to mention throughout this entire thread.

You never mentioned it.

NOT ONCE.

Instead you've been championing something as stupid as an increase in resolution in a scant handful of games that don't even have the graphical fidelity to CARE about in comparison.

Just stop trying to make the Xbox look good. Because you aren't. And as someone who remembers the Xbox fondly, I ask you to stop dragging it through the mud already.

Edited by 3Dude, 30 July 2012 - 08:37 AM.
