Graphics seem pretty good. I really love the vibrant colors lol.
Project Cars Wii U Features Particle Shadows - Multi Threaded Shadowing
#21
Posted 09 October 2013 - 11:53 AM
#23
Posted 21 October 2013 - 02:57 PM
Hey guys, just registered to hopefully be able to give you some info about the Wii U version, as I've been involved as a pre-alpha tester, investor and community member at pCARS for over a year now.
I'm guessing that they are far from pushing the CPU to its max, due to it not auto-delegating tasks, which is good. The games on the Wii U are already showing higher performance than their last-gen counterparts, so the results that devs can get from further optimizations are starting to look promising.
I was initially expecting Project CARS to just be another unoptimized port. I'm keeping my eyes peeled now.
The Wii U surely has more CPU power than I expected, but in its current state the game is able to make i5 processors sweat. Above all this is due to ongoing physics (especially tyre) testing, but in the end I guess at least slight compromises will have to be made when it comes to physics calculations on the Wii U. Not because of poor porting (as far as I've seen, the devs have been working really hard on the Wii U version for a while now, the engine is smoothly scalable, and Nintendo seems to be doing their part in supporting us very well too) but because of the lack of sheer processing power. On the other hand, optimizations haven't really been started yet...
It would be nice if true. There was one photo of the Wii U version released months ago by an insider. It did not look pretty. Of course, they have clearly made substantial improvements since then, according to the changelogs.
Would be nice if they would show some console version footage.
I don't know that screenshot/photo myself, even though I'm totally active in the community there - are you sure it was real?
There is a line in this that has me confused.
Can anyone make heads or tails of this?
Adding to the earlier explanations: the mothership was a related bug that looked like an Independence Day mothership, basically a big black thing occurring on specific date/time/weather settings, like a huge shadow.
I'm just ready to see some specific Wii U vids and screens for this game. The developers seem to be very willing to support the Wii U and make this game the "Forza" of the system. It looks great so far, but I want to see some specific Wii U stuff. If it ends up being as good as it sounds, I'll definitely get it next year. I just hope a lot more people get it as well. It could really be one of the big titles for the system in the near future.
They ARE very willing to make the Wii U version as awesome as possible, as there's no real competition; pCARS could become THE racing game on the platform, and console gamers (as well as PC gamers when it comes to certain games) are mainly pissed off because of poor ports! As far as I know, you'll have to wait some more time until we get proper ingame footage specifically for the Wii U version though.
Regards
- Rockodoodle likes this
#24
Posted 22 October 2013 - 06:58 AM
Just when I thought nerd speak couldn't get any less interesting.......
#25
Posted 22 October 2013 - 09:44 AM
Sounds like a cool game. There are at least 15 titles that I will pick up over the next year, and this will be one of them. Thanks for posting.
#26
Posted 22 October 2013 - 10:45 AM
Yeah, a lot of people are being surprised by the little CPU that could, particularly since it's completely lacking in any manner of modern SIMD... But yeah, it's certainly no i5. From where I'm looking, it looks like it handles very complex routines much better than a lot of simple ones.
I'd imagine finding unused GPU resources for things that can benefit from concurrency would yield much better results for most middleware physics solutions, instead of making Espresso jump through hoops.
Hope GX2 has an OpenCL equivalent on board.
- NintendoReport likes this
#27
Posted 22 October 2013 - 11:42 AM
Shouldn't the short pipeline of the PPC processor actually make it a great performer when it comes to crunching out lots of simple routines? I had always heard that longer pipelines are better for more complex instructions. Also, when I hear the Planetside 2 developer talking about the difficulties of multithreading their game engine to make it work on the PS4 CPU, I have to assume that most console games aren't truly multithreaded. Not saying cores are sitting idle, but the most demanding workload has typically been kept on a single core.
#28
Posted 22 October 2013 - 12:55 PM
I'd guess the short pipelines are one of the major reasons it's surprising so many with its ability to handle SIMD workloads that it 'shouldn't' be able to, like Bink 2.
Long pipelines are used to disguise latency; they are great for grinding away at complex floating point data structures where everything is set up just so and ready to go... But if something unexpected happens, like an unpredictable entity such as a user getting thrown into the mix, you are quite literally throwing a wrench into the works.
However, 3 short-piped cores do not make a good parallel compute system for simple operations that benefit massively from concurrency.
SIMD is weak per unit; an Espresso core, in comparison, is very, very strong. Individual SIMD units are even shorter than 4 stages - literally single-stage units. They are very simple arithmetic units, each doing just one small operation, except there are tons of them all chipping away at a larger task. This is very beneficial when you are adding (or subtracting) the same number across a large table of data points. SIMD units can be addressed in large blocks; a single processor/thread can't.
SIMD can say, "get all values for matrix n"; a single core/thread would need to look up each individual address of each value, one after the other.
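That contrast (one scalar lookup per element versus operating on a whole block at once) can be sketched in C. The vector path below uses the GCC/Clang vector-extension syntax purely as a stand-in for real SIMD hardware; this is an illustration, not console code, which would use platform intrinsics:

```c
#include <assert.h>
#include <string.h>

/* Scalar path: one load, one add, one store per element. */
static void add_scalar(float *data, int n, float k) {
    for (int i = 0; i < n; i++)
        data[i] += k;
}

/* SIMD-style path: one add touches four elements at once.
 * (GCC/Clang vector extension, used here only for illustration.) */
typedef float v4f __attribute__((vector_size(16)));

static void add_vector(float *data, int n, float k) {
    v4f kv = {k, k, k, k};
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        v4f v;
        memcpy(&v, data + i, sizeof v);  /* load a block of 4 */
        v += kv;                         /* single vector add */
        memcpy(data + i, &v, sizeof v);  /* store the block   */
    }
    for (; i < n; i++)  /* scalar tail for leftover elements */
        data[i] += k;
}
```

Same result either way; the point is that the vector loop issues a quarter of the adds.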
The biggest problem is, anything an Espresso core handles in one slice takes the same amount of time: a 'big'/'perfect' operation that uses every resource at its disposal and a tiny operation that uses only a fraction of what could be done both use up the same allotted slice of time. Anything you didn't use when you had your slice of time is wasted. Which is why using Espresso for SIMD is a bad idea. It's not designed for today's SIMD workloads. It can impressively trudge through them, but any way you look at it, you are wasting the processor's time doing anything that can benefit from concurrency.
SIMD was literally created to handle stupid little operations that were wasting time because they had to be performed in bulk.
However, the GPU is FILLED with plenty of tiny little units crunching numbers in parallel, and you will NEVER use every last one on graphics. It's just not possible unless you employ the Lawnmower Man. Finding the units that aren't being used and putting them to work on those SIMD operations instead of Espresso would be very worthwhile. And Espresso and the GPU are definitely set up to communicate fast enough to make it worthwhile: they are physically extremely close and connected by a custom extra-wide bus. They communicate so well that Shin'en uses GPU memory as a CPU scratchpad.
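The idea of putting idle GPU units to work on data-parallel jobs follows the GPGPU model: express the work as a kernel that handles one element, identified by its index, then dispatch it over the whole range. The plain C below uses a loop as a stand-in for the GPU dispatch, purely to show the shape of the code (the Body type and the kernel are invented for illustration):

```c
#include <assert.h>

/* One element of the data-parallel workload. */
typedef struct { float pos, vel; } Body;

/* One "work item": integrate one body for one timestep,
 * identified by its index gid (the "global id" in GPGPU terms). */
static void integrate_kernel(Body *bodies, int gid, float dt) {
    bodies[gid].pos += bodies[gid].vel * dt;
}

/* The "dispatch": on real hardware, this is the loop the GPU
 * spreads across its many compute units concurrently. */
static void dispatch(Body *bodies, int count, float dt) {
    for (int gid = 0; gid < count; gid++)
        integrate_kernel(bodies, gid, dt);
}
```

Because every work item is independent, the GPU can run as many of them at once as it has spare units.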
#30
Posted 22 October 2013 - 02:07 PM
So in essence the developer either needs to move all the SIMD code over to the GPGPU, or, if it can be done and makes more sense, rewrite that code in a manner which allows the CPU to maximize its cycles. That is one area where the Espresso is limited no matter how you look at it: it's 1.25 GHz, and if you're feeding it a teaspoon per cycle when it can handle a tablespoon per cycle, then it's really wasted potential. I have never really thought about it that way. Good stuff 3Dude.
#31
Posted 22 October 2013 - 02:46 PM
Yes, although often there is no good replacement for concurrency as pure, simple, and easy to use as modern SIMD... Just like SIMD is never going to be very good for code that doesn't gain much from parallelization. (But really, sometimes you have to ask yourself... is it REALLY doing you any good? Skyrim had a very complicated 'AI' routine constantly checking various things to try to give the appearance of variation, but the Radiant AI's variations were irrelevant and added little to the experience, and a lot to the processor load, while Xenoblade used a simple finite state machine to weave a web of relationships to discover, costing very little. Even though it's always the same, it was far more interesting and involving than any 'emergence' Radiant AI produced.)
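The finite-state-machine approach credited to Xenoblade here can be sketched as a transition table: per tick, one table lookup instead of a pile of continuous checks. The states and inputs below are invented for illustration:

```c
#include <assert.h>

/* A minimal finite state machine for NPC behavior: each NPC is just
 * a current state plus a shared transition table. The per-tick cost
 * is a single array lookup. */
typedef enum { IDLE, ALERT, ATTACK, NUM_STATES } State;
typedef enum { SEE_PLAYER, LOSE_PLAYER, IN_RANGE, NUM_INPUTS } Input;

static const State transition[NUM_STATES][NUM_INPUTS] = {
    /*            SEE_PLAYER  LOSE_PLAYER  IN_RANGE */
    /* IDLE   */ { ALERT,     IDLE,        IDLE   },
    /* ALERT  */ { ALERT,     IDLE,        ATTACK },
    /* ATTACK */ { ATTACK,    ALERT,       ATTACK },
};

/* Advance one NPC by one tick's worth of input. */
static State step(State s, Input in) {
    return transition[s][in];
}
```

Deterministic, trivially cheap, and easy to author, which is the trade-off the post is pointing at.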
Each Espresso core does have the 64-bit paired-single 'SIMD' extensions IBM and Nintendo came up with in the Cube era (they rocked some socks back then), and it's impressive just how much they can do with modern applications with heavy SIMD requirements like Bink 2... But that's not to say by any means it has impressive SIMD performance by today's standards.
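Paired singles are essentially 2-wide SIMD: one instruction operates on two 32-bit floats packed into a single 64-bit register. A rough sketch of the idea in plain C (the real thing is a PowerPC instruction-set extension, not C code; the names here are invented):

```c
#include <assert.h>

/* Two 32-bit floats treated as a single operand, mimicking one
 * 64-bit paired-single register. */
typedef struct { float a, b; } Pair;

/* One "instruction", two float additions. */
static Pair ps_add(Pair x, Pair y) {
    return (Pair){ x.a + y.a, x.b + y.b };
}

/* Fused multiply-add on both lanes at once, the bread and butter
 * of 3D math like vertex transforms. */
static Pair ps_madd(Pair x, Pair y, Pair z) {
    return (Pair){ x.a * y.a + z.a, x.b * y.b + z.b };
}
```

Doubling float throughput this way helped in the Cube era, but it is a long way from the 4- and 8-wide units in modern CPUs.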
I'd guess it would be best to use as much as paired singles will give you, offload onto unused GPU logic, and if you still need more, you'll just have to scale back until optimizations clear up more overhead to add more.
#32
Posted 22 October 2013 - 04:11 PM
From a performance standpoint, what should the Espresso be better at than the 360 CPU? How do you leverage a short-pipeline CPU like the Espresso to make the most of it?
#33
Posted 22 October 2013 - 04:38 PM
Well, at the low end of the spectrum, its paired singles are apparently enough to handle 360 ports with few optimizations done by small, inexperienced port teams.
Branchy, dynamic, unpredictable code seems to work rather well. Really nice user-dependent stuff seems to work fast and solid. I'd expect user-to-environment and even user-to-animated-object interaction to be considerably above the pre-rendered texture and asset swaps/stamps often used on PS360... From Nintendo games, of course. Take a look at how in-depth the effects of sword interaction on enemy cloth are in the Zelda attraction in Nintendo Land. Nothing is canned; it's all done in realtime, on the fly. I'd expect the effect the user has on the environment around them in Zelda U to be very in-depth... Which makes me REAAAAALLLLLY hope they are keeping motion controls for it.
#34
Posted 23 October 2013 - 06:01 AM
When I really take a look at what "floating point units" are and what they do, it really does look like the GPU can do those tasks without breaking a sweat.
CPU + GPU is a powerful combination because CPUs consist of a few cores optimized for serial processing, while GPUs consist of thousands of smaller, more efficient cores designed for parallel performance. Serial portions of the code run on the CPU while parallel portions run on the GPU.
Nvidia actually has a lot of good information on this stuff on their website. This wasn't possible on the PS3 and 360 because their GPUs weren't fully programmable. With the Wii U, the game developer can split the workload of things like AI and physics over both the CPU and the GPU. The CPU can delegate the parallel work to the GPU while it maintains the serial part of the code. Compared to the workload of the graphics processing, the parallel workload of things like physics and AI would be negligible. Essentially, efficiently combining the CPU and GPU, and allowing each to do only what it does best, should allow for some pretty impressive results. It's hard to know where some of these current-gen ports are really choking the Wii U. I would have guessed that it was from trying to throw too much SIMD code at the CPU, but Criterion had stated that they were actually having trouble getting the GPU up to speed prior to the updated dev kits and information from Nintendo.
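How much the remaining serial CPU portion limits the combined result is captured by Amdahl's law: if a fraction p of the work parallelizes across n units, the overall speedup is 1 / ((1 - p) + p / n). A quick sketch:

```c
#include <assert.h>

/* Amdahl's law: speedup from parallelizing a fraction p of the
 * work across n units. No matter how big n gets, the result is
 * capped at 1 / (1 - p), which is why the serial portion on the
 * CPU still matters even with a big GPU alongside it. */
static double speedup(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}
```

For example, with half the work parallelized, even unlimited parallel units only ever get you a 2x overall speedup.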
Come on Shin'en, show us your games and then give us the nitty gritty details.
#35
Posted 23 October 2013 - 08:35 AM
Yeah, that seems to be what I'm gathering. Which is why I REALLY hope Nintendo has provided GX2 with SOME manner of OpenCL equivalent, ESPECIALLY considering they have a CPU, GPU and audio DSP all on the same darn MCM substrate... If for whatever loony Nintendoism they didn't, devs would need to write their own compute language to bridge all 3 coherently... Knowing Nintendo, it's probably some bizarre resurrection of Close to Metal or something.
#36
Posted 24 October 2013 - 05:55 PM
Our discussion about the CPUs, what they do, and what they do well really got me thinking: what do the PS3 and 360 CPUs both excel at that the Espresso CPU does not? Flops performance comes to mind immediately. It's well known that the PS3 uses the Cell processor to help handle graphics processing, but what I think people have overlooked is that there is a very high probability that the X360 does as well. Its tri-core CPU is also very good (for a CPU) in the flop category, and honestly, when you look at what the console has done with what is essentially a HD2*** equivalent GPU, it's a bit of a stretch to think that even in the console world such a GPU could outperform its PC counterparts by that large a margin.
It's really starting to make sense: the X360 and PS3 weren't straining their CPUs for better physics and AI, but more so for graphics processing. When you have a GPU that is roughly 250 gflops, and then a CPU that can add another 25-100 gflops of graphics rendering, it makes sense to work them together. The 360 and PS3 didn't exactly set the world on fire with highly advanced AI and physics, or game-world player interaction, but instead focused on making everything as pretty as possible. Essentially we have 7 years of developers making use of the flops performance while avoiding any strenuous traditional CPU tasks.
People will try to convince you that the PS3 and 360 are a lot better with AI and physics than the Wii was, but were they really? The AI in GoldenEye wasn't exactly bad compared to your run-of-the-mill shooter on the PS3/360. And physics haven't changed all that much in over a decade... Honestly, being on ice feels about the same as it did back on the NES. I know that's a crude example, but you get the idea: there simply weren't huge, if any, advancements in these areas.