k so I should go with p3?
Naturally.
def go with p3, it's much more efficient than a celeron.
Assuming they are both P3-based Celerons (which is most likely the case), the P3 is not more efficient. It's faster but not more efficient. Why not? Because they both run off 10-stage pipes.
Both Pentium-IIs and -IIIs ship with 512kB of secondary (L2) CPU instruction cache.
You need to make a distinction with P3s though. The Coppermines had on-die L2 and the Katmais didn't. Your statement is also incorrect in that the Coppermine P3s only had 256K while the older Katmais had 512K.
The Celerons that Intel first introduced as a low-cost CPU alternative (266 & 300MHz versions) were basically just Pentium-II's without any L2 cache at all.
-Celeron Covingtons had 0K of L2
-Celeron Mendocinos had 128K of on-die L2
-Celeron IIs had 128K of L2 (this is the model Intel referred to as "Celeron" and later replaced with the P4 version of this chip)
For a full-speed L2 in a Pentium design, you need to get into Intel's (much more expensive) Xeon line.
You do? Xeons are expensive because of their scaling capacity....
What Intel plays down-- but nearly everyone knows-- is that the full-speed, quarter-size Celeron cache gives them almost the same performance as the half-speed, full-size cache gives Pentiums.
Very true
However, in a day and age where all L2 caches operate at full speed, and where the pipelines are getting ridiculously long, the quarter cache is a major disadvantage.
I should probably pre-empt the argument that 4-way is faster than 8-way set associative by explaining what 4/8-way set associative cache is.
LOL you beat me to it
A set associative cache divides the cache into various sections, referred to as sets, with each set containing a number of cache lines. With a 4-way set associative L2 cache, each set contains 4 cache lines, and in an 8-way set associative L2 cache, each set contains 8 cache lines.
8-way has an increased hit rate over 4-way, which makes it faster as the amount of system RAM increases.
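To make the associativity point concrete, here's a toy Python sketch of a set-associative cache with LRU replacement. The addressing is deliberately simplified (whole-number "addresses" instead of real address bits split into tag/index/offset, and no 32-byte cache lines), so this is an illustration of the concept, not how a P3 or Celeron actually indexes its cache. At the same total capacity, more ways per set means fewer conflict misses when several hot lines map to the same set:

```python
from collections import OrderedDict

class SetAssociativeCache:
    """Toy set-associative cache with LRU replacement (illustrative only)."""

    def __init__(self, num_lines, ways):
        assert num_lines % ways == 0
        self.ways = ways
        self.num_sets = num_lines // ways
        # One LRU-ordered dict of resident tags per set.
        self.sets = [OrderedDict() for _ in range(self.num_sets)]
        self.hits = self.misses = 0

    def access(self, address):
        set_index = address % self.num_sets  # which set this address maps to
        tag = address // self.num_sets       # identifies the line within the set
        lines = self.sets[set_index]
        if tag in lines:
            lines.move_to_end(tag)           # refresh LRU position
            self.hits += 1
            return True                      # hit
        if len(lines) == self.ways:
            lines.popitem(last=False)        # set full: evict least recently used
        lines[tag] = None
        self.misses += 1
        return False                         # miss

# Same total capacity (8 lines), different associativity.
four_way  = SetAssociativeCache(num_lines=8, ways=4)
eight_way = SetAssociativeCache(num_lines=8, ways=8)

# Five addresses that all collide in the 4-way cache's set 0, then a re-use of 0.
pattern = [0, 2, 4, 6, 8, 0]
for addr in pattern:
    four_way.access(addr)
    eight_way.access(addr)

print(four_way.hits, eight_way.hits)  # → 0 1
```

In the 4-way case the fifth conflicting line evicts address 0 before it gets re-used, so every access misses; the 8-way (here fully associative) cache holds all five lines and scores the hit. Real workloads aren't this adversarial, which is why the 4-way Celeron cache still performed respectably.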
While we're knee deep in cache-land, any idea why Intel chose to do inclusive L2 caching? (i.e., L1 is mirrored in L2)