OK, I have read enough about Amdahl's Law to potentially butcher a follow-up question (scratch potentially, insert probably).
If we take parallel processing and the diminishing returns it faces (or rather, the fact that some operations will take X amount of time to complete no matter how much time is saved on the tasks completed in parallel), is it safe to suggest that general code will face diminishing returns with added cores?
I cite the Battlefield 3 example: using a Bulldozer or a Sandy/Ivy Bridge i5/i7 to play the campaign, then moving on to multiplayer. Given what DICE has shown about the engine, the campaign couldn't care less until you get down to 1 core, yet multiplayer generally does better on the i5 than on the Bulldozer (or Piledriver, etc.).
My follow-up question is now this:
Since Amdahl's Law will always be in effect, will there always be that one operation that brings diminishing returns to an increased core count? I realize how ridiculous this sounds, given the definition of a law, but is there a comparable example, 1 core vs. 2 cores or otherwise, where a program benefited from more cores or from hyperthreading?
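For what it's worth, the diminishing returns in question can be put in numbers with Amdahl's Law itself. A quick sketch (the 90% parallel fraction below is an assumed figure, purely for illustration):

```python
# Amdahl's Law: speedup from n cores when a fraction p of the work
# can be parallelized and the remaining (1 - p) stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Assume 90% of the workload parallelizes (illustrative figure only).
p = 0.90
for cores in (1, 2, 4, 8, 16, 1_000_000):
    print(cores, round(amdahl_speedup(p, cores), 2))
```

Even with a million cores, the speedup never reaches 1 / (1 - p) = 10x here: the serial 10% puts a hard ceiling on the whole program, which is exactly the diminishing-returns effect.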
All programs that can be split up into separate parallel jobs will benefit greatly. Texturing and shaders, for example, fit very well into this group. So does 'pseudo physics' like the old Havok stuff popular on the PS3/360, where everything, no matter its size or weight, flies around like a cardboard box.
These benefit greatly from more threads and cores; however, once you add enough of them, the overhead of keeping all those cores and threads synchronized and properly communicating will begin to erode the performance gains. That is, if you are attempting to use them all on just one process or application, like, say, a video game.
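That erosion can be sketched by bolting an assumed per-core synchronization cost onto Amdahl's model (the numbers below are made up for illustration; the real overhead depends entirely on the workload):

```python
# Toy model: Amdahl's speedup with a synchronization penalty that
# grows with core count. k is an assumed per-core coordination cost.
def speedup_with_overhead(p, n, k):
    serial = 1.0 - p
    return 1.0 / (serial + p / n + k * (n - 1))

p, k = 0.95, 0.01   # made-up figures for illustration
best = max(range(1, 65), key=lambda n: speedup_with_overhead(p, n, k))
print("best core count:", best)  # past this point, more cores hurt
```

With these numbers the curve peaks around 10 cores and then declines, so piling on cores beyond that point actually makes the single application slower.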
For multitasking, doing something completely different at the same time (processing who from your list is online, receiving messages, downloading a show, recording gameplay, keeping other programs running so they can instantly be switched to), more cores are great, and that's likely why there are so many.
However, many general-purpose tasks, like game code, AI, and things that aren't easily predictable or simply 'going through the motions', can only be handled sequentially, so no number of cores will help speed them up.
Only strong single-thread/single-core performance will help.
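To make the "sequential only" point concrete, here's a hypothetical game-tick sketch (the stage names are made up) where each stage depends on the previous stage's output, so the chain cannot be split across cores:

```python
# Hypothetical pipeline stages: each one consumes the previous
# stage's output, so the chain is inherently sequential.
def read_input(state):    return state + ["input"]
def update_ai(state):     return state + ["ai"]
def resolve_logic(state): return state + ["logic"]

def game_tick(state):
    # No stage can start before the one above it finishes;
    # adding cores cannot shorten this chain, only faster
    # per-stage (single-thread) execution can.
    state = read_input(state)
    state = update_ai(state)
    state = resolve_logic(state)
    return state

print(game_tick([]))  # ['input', 'ai', 'logic']
```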
DICE created Frostbite to completely avoid this, as the PS3/360 sucked at single-thread performance and excelled at parallelism.
It made sense, as Battlefield was multiplayer-only, with no real need for AI or heavily structured game code at all.
It appears the rise of moronic bubblegum cinematic linear roller-coaster games gave them the confidence to give BF a single-player campaign no one wants. A very linear, on-the-rails experience (not implying it's an on-rails game, but simply that the same stuff will happen the same way every time you activate the event), but super cinemarrific. The kind of thing you don't really need strong general-purpose processing for, which is why DICE says they no longer even need a real CPU.