Son of Sanguinius
Wow.... I'm surprised that there's still so much stupid going on here. I'm sorry that I never wound up coming back to clear things up, but unfortunately my internet died and I didn't feel like typing long posts on my iPhone, because it takes forever and autocorrect pwns computer acronyms.
Ok... first off, HTT (HyperTransport) speed means, for all intents and purposes, absolutely nothing. That bus is so fast anyway that whether it runs at 200MHz or 500MHz isn't going to matter; nothing comes close to saturating it. So, looking at absolute HTT speed (the speed the bus actually runs at, not the reference clock that the CPU and RAM speeds are based on) as a measure of anything is pointless. It's not going to have any effect at all.
Secondly, and this is something that's been bothering me in a LOT of threads lately, is this talk about CPUs bottlenecking high end GPUs when they didn't bottleneck a lower end GPU. This logic is, well, completely illogical. CPUs don't handle any graphical processing at all. Increasing graphical detail does not increase load on the CPU (by a measurable amount). Therefore, if a CPU can serve up enough data in a game to push 60FPS on one card, it can serve up enough data to push 60FPS on any card. It doesn't matter if it's paired with a 7200GS or crossfire 4870x2's. The amount of data that the CPU has to process before the GPU can do its thing is the same. CPU overhead increases SLIGHTLY with SLI/crossfire, but not nearly enough to notice in normal gaming situations.
You have to look at the time it takes to render a frame this way: Tf = Tc + Tg, where Tf = total frame time, Tc = time spent on CPU tasks, and Tg = time spent on GPU tasks. To get 60FPS (the max you can see, and the max most monitors can display anyway), Tf has to be less than .0167 seconds. Increasing the speed of the CPU reduces Tc, and increasing the speed of the GPU reduces Tg. So, increasing CPU speed does indeed always reduce Tf. However, in almost all modern games, Tc is MINUSCULE compared to Tg in pretty much every situation, due to the much smaller amount of data that needs to be processed by the CPU. Let's say for example that we start at 60FPS, and that Tc and Tg scale inversely with CPU and GPU speeds (not exactly true, but it illustrates my point). Let's also assume that in this situation, Tc = 10% of Tf, and Tg = 90% of Tf.
This means that initial Tc = .00167 and Tg = .015, if we assume that initial Tf = .0167. Now, let's increase the GPU and CPU speeds and see what happens to Tf. If we double the CPU speed, Tc becomes .000833. If we leave the GPU at the same speed, our net Tf is now .0158, or 63FPS. That's a gain of 3FPS for a 100% increase in CPU performance. Now let's double the GPU speed instead, and put the CPU back at the starting speed. Tg becomes .0075, so Tf becomes .0092, or 109FPS. That's a gain of 49FPS for a 100% increase in GPU speed.
Doubling CPU speed netted a 3FPS gain, or a 5% increase in FPS over the base value.
Doubling GPU speed netted a 49FPS gain, or an 81% increase in FPS over the base value.
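If you want to play with these numbers yourself, here's a quick Python sketch of that model. The 10%/90% split and the inverse scaling are the same assumptions I made above, not measurements:

# Toy frame-time model: Tf = Tc + Tg, where each time scales
# inversely with the speed of the part doing that work.
def fps(cpu_mult, gpu_mult, tc=0.00167, tg=0.015):
    """FPS after scaling CPU/GPU speed by the given multipliers."""
    tf = tc / cpu_mult + tg / gpu_mult
    return 1.0 / tf

print(fps(1, 1))  # baseline: ~60 FPS
print(fps(2, 1))  # double the CPU speed: ~63 FPS
print(fps(1, 2))  # double the GPU speed: ~109 FPS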
Now, this example has a LOT of assumptions and oversimplifications, but it's actually a pretty accurate model for gaming performance in the majority of games and situations. As you can see, CPU speed is almost meaningless, as the GPU is by far the limiting factor in the FPS equation.
You see all of these tests these days showing CPU performance in games and the gains between various CPUs. But what do they all have in common? They're all run at as low a resolution and with as few graphical details as possible, because that's the only way to make Tc the limiting factor in the equation and thus see actual, noticeable gains between the various CPU configurations. Once you actually put the processors into the real world, Tg once again becomes by far the limiting factor, so even though your super fast CPU may benchmark 40% better in games than your buddy's, it's only 3% better when you actually play the game, because that's all a 40% increase in CPU speed affects the equation by.
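Using the fps() function from my sketch above, you can see exactly why reviewers do this. Suppose a low-resolution run cuts Tg to a tenth of its real-world value (a number I made up purely for illustration):

# Shrink Tg to simulate a low-resolution benchmark run.
low_tg = 0.0015  # assumed: 1/10th the real-world GPU load

# A "40% faster" CPU at benchmark settings vs. real-world settings:
print(fps(1.0, 1.0, tg=low_tg), fps(1.4, 1.0, tg=low_tg))  # ~315 vs ~371 FPS: an 18% gap
print(fps(1.0, 1.0), fps(1.4, 1.0))                        # ~60 vs ~62 FPS: a 3% gap

Same two CPUs, same 40% speed difference; the only thing that changed is how much work the GPU had to do.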
Now, on to architectures... yeah, if you're going to do a CPU comparison, they MUST be of the same architecture. You can't take a quad of one arch and a dual of another, drop out two cores, and directly compare them even if you make the clock speeds the same, because they have different clock-for-clock performance. And bus speed improvements, such as HTT bus speed increases, are NOT what we mean by architectural improvements, because as I ranted above, HTT is already so fast that it doesn't matter. We're talking about the actual time it takes the CPU to process a given instruction, or how many instructions it can process per clock cycle.
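To put that last point in code form, here's a trivial sketch. The IPC numbers are invented for illustration only; real IPC varies wildly by workload:

# Rough throughput = clock speed * instructions per clock (IPC).
ARCH_A_IPC = 3.0  # hypothetical newer architecture (made-up value)
ARCH_B_IPC = 2.2  # hypothetical older architecture (made-up value)
CLOCK_GHZ = 3.0   # identical clocks, so this is a clock-for-clock comparison

print(ARCH_A_IPC * CLOCK_GHZ)  # 9.0 billion instructions/sec
print(ARCH_B_IPC * CLOCK_GHZ)  # 6.6 billion instructions/sec
# Same clock, ~36% more work per second: that's an architectural improvement,
# and it's why you can't equalize clocks across architectures and call it a fair fight.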