AMD slower ghz?

Praetor said:
Oh and on a last note:

That videocard doesnt have a 256bit memory interface.

I didn't specify that it's a Geforce FX 5700LE OPTIMA. PNY Technologies. Check their site. Since the exact specifications of my card were not in question I didn't feel it necessary to specify that it's an optima. As far as I know, that's the only 256-bit Geforce FX 5700LE.
 
NeuromancerWGDD'U said:
I didn't specify that it's a Geforce FX 5700LE OPTIMA. PNY Technologies. Check their site. Since the exact specifications of my card were not in question I didn't feel it necessary to specify that it's an optima. As far as I know, that's the only 256-bit Geforce FX 5700LE.

from the web site

NVIDIA® CineFX™ 2.0 engine
256-bit graphics core
250MHz core clock
128-bit DDR memory interface
500MHz memory data rate
8.0GB/sec. memory bandwidth
188 million vertices/sec. setup
4 pixels per clock (peak)
16 textures per pixel(max)with 8 textures applied per clock
Dual 400MHz RAMDACs
Maximum display resolution 2048 x 1536 x 32 bpp at 85Hz
Flat Panel display support with resolutions up to 1600 x 1200
*Requires flat panel display with VGA input

from
http://www.pny.com/products/verto/mainstream/5700le.asp
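As a quick sanity check on those specs (assuming the 500MHz figure is the effective DDR data rate, as the spec sheet implies), the quoted 8.0GB/sec falls straight out of the 128-bit interface width:

```python
# Memory bandwidth from the PNY 5700LE spec sheet above:
# a 128-bit (16-byte) interface at a 500MHz effective data rate.
bus_width_bits = 128
data_rate_mhz = 500  # effective DDR data rate quoted by PNY

bytes_per_transfer = bus_width_bits / 8                    # 16 bytes
bandwidth_gb_s = bytes_per_transfer * data_rate_mhz * 1e6 / 1e9
print(bandwidth_gb_s)  # 8.0, matching the "8.0GB/sec. memory bandwidth" line
```

A 256-bit memory interface at the same data rate would double that to 16GB/sec, which is exactly why the "256-bit" in the spec list has to be the graphics core, not the memory interface.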
 
Alright, obviously you could, on a whim, liquefy me with your immense knowledge of processors. I had browsed through about twenty different sites, just looking for the simplest possible way to explain why operating frequencies between AMD and Intel aren't the best way to compare performance. The comparison I found everywhere was "AMD: 9 operations per clock cycle; Intel: 6 operations per clock cycle." Those are not my words, but a comparison that appeared on 90% of the websites I read. I'm not going to keep arguing because, stubborn as I may be, I'm not stupid. I admit you know much more about this than I do, but I don't think anyone (even someone who obviously knows their stuff) can thoroughly answer a question about AMD-versus-Intel performance without confusing very nearly everybody who reads it. There's just too much involved. By simplifying it, you can give someone a general idea, even though this comes at the cost of accuracy. I didn't do the simplifying (I don't even understand the unsimplified version); all I did was pass on the simplified version that is widely accepted.
 
NeuromancerWGDD'U said:
. By simplifying it, you can give someone a general idea, even though this is at the cost of accuracy. I didn't simplify it (I don't even understand the unsimplified version), all I did was pass on the simplified version that is largely accepted.

Simplifying is not the same thing as giving wrong information. Simplifying means stating information CORRECTLY in terms everyone understands. Making something up to explain a concept isn't simplifying it; it's getting it wrong. How does it help anyone if they go around spreading false information, even if they now THINK they understand why?

Simplifying should not come at the cost of accuracy.
 
By simplifying it, you can give someone a general idea, even though this is at the cost of accuracy. I didn't simplify it (I don't even understand the unsimplified version), all I did was pass on the simplified version that is largely accepted.
Absolutely, and I agree entirely. Clock speed does NOT mean everything, and I have no issue with that. It's when you claim that "because AMD has 9 of this and Intel only has 6, AMD is better" -- that's what I challenge.

Losing precision is one thing, and that is generally acceptable. Losing accuracy ... that tends to result in incorrect information being conveyed to the end user. Now, as to what to buy, AMD or Intel ... that was already covered in a somewhat simplified and fair manner in the CPU 101.
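The precision-versus-accuracy distinction can be shown with a toy numeric example (the figures below are mine, not from the thread): rounding a value loses precision but stays accurate, while substituting a made-up value loses accuracy outright.

```python
import math

# "True" value for the illustration
true_value = math.pi

simplified = round(true_value, 2)  # 3.14: less precise, still accurate
wrong = 4.0                        # a fabricated figure: inaccurate

precision_error = abs(simplified - true_value)  # small, bounded (~0.0016)
accuracy_error = abs(wrong - true_value)        # large (~0.86)
print(precision_error < 0.005, accuracy_error > 0.8)  # True True
```

The simplified value still lets a reader reason correctly to within a known error bound; the wrong value leads every downstream conclusion astray, which is the difference being argued here.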
 
...I'm going to stop posting until I get some sleep; I'm just being stupid at this point. Originally, the sig saying "256-bit memory interface" was a brain fart; what I was failing to notice was the emphasis on the fact that it's not a 256-bit MEMORY INTERFACE. I had assumed that Praetor had looked up a card that wasn't 256-bit in any respect, since I hadn't specified the model. I'm sorry, I'm being stupid and irrational at the moment. I'm gonna go to sleep now.
 
mgoldb2 said:
Making something up to explain something is not simplifying it wrong. How does it help anyone if they going around spreading false information even if they now THINK they understand why.
Now hold on! By my own words, I DID NOT simplify it; I looked up the general explanation, made sure it wasn't just one source saying it (had it been, that would have qualified as spreading false information, which I am thoroughly against!), and then shared what I had found.
 
I had assumed that Praetor had looked up a non-256-bit anything card due to the fact that I hadn't specified
Hehe, hardly; I'm quite careful with my wording, as you've no doubt noticed by now :)

Now hold on! By my own words, I DID NOT simplify it; I looked up what was the general explanation, made sure that it wasn't just one source saying this (which, had it been, would have qualified it as spreading false information, and that I am thoroughly against!), then I proceeded to share what I had found.
Hehe, then if I hadn't challenged you on it, that woulda essentially been plagiarism, heehee :P ;) But humor aside, do realize that just because numerous sources say something is one way doesn't make it accurate ... hell, 90% of the computing websites out there, from Intel to ASUS to Newegg, don't know what the hell "FSB" really means, and they think boards come with an 800MHz FSB.
 
NeuromancerWGDD'U said:
Now hold on! By my own words, I DID NOT simplify it; I looked up what was the general explanation, made sure that it wasn't just one source saying this (which, had it been, would have qualified it as spreading false information, and that I am thoroughly against!), then I proceeded to share what I had found.


Simplifying is not the same thing as giving wrong information. simplifying is being able to say CORRECTLY information in terms everyone understand. Making something up to explain something is not simplifying it wrong. How does it help anyone if they going around spreading false information even if they now THINK they understand why.

I never accused you of doing anything in that statement.


I was responding to
By simplifying it, you can give someone a general idea, even though this is at the cost of accuracy

As I said, simplifying should not come at the cost of accuracy.
 
1. Drop it or take a hike.
2. "Sounds accusatory" and "making an accusation" are two different things.

Let's all get back on topic.
 
Old thread from 6/2005

Well, the simple fact that you seem to have confused the meaning of "operation" gives things away. Furthermore, just because you found some links here and there (mostly forum posts, it seems) that corroborate it doesn't mean much when you don't apply the facts being conveyed with a logical sense of reasoning. Sure, the AthlonXPs can have 3 FP units ... I didn't even bother verifying that (didn't need to either) ... but just because they have three such units does not mean they are better processors ... and conversely, just because Intel doesn't have 3 such units does not make theirs worse processors. Since I've dealt with ASM and I know for damn sure opcodes don't execute in fractions of a cycle, I countered by asking for specific opcodes that might execute in 1/6 or 1/9 of a cycle (since, not being an ASM guru, I don't know the entire opcode layouts off the top of my head). That was an opportunity to solidify your case. I see no opcodes.

Regarding the "applying the facts" comment: sure, AMD can do 9 "whatevers" per cycle and Intel only 6 "whatevers" per cycle, but two things should come to mind:
1. Why are all these processors so slow, then? Sure, we know it's CISC technology, so you're looking at roughly 300% bloat in opcode length, but at, say, 18 billion opcodes/sec, that shouldn't matter.
2. The fact that AMD does more "whatevers" does not mean it is a better processor, and never will. The fact that Intel does fewer does not mean it is a worse processor, and never will.

I think an overwhelming majority of informed users will concur that saying "AMD makes the best processors ever and all Intel processors suck" (even if we limit the scope to current- and last-generation processors) is an excessively broad and close-minded view. There is a reason why both Intel and AMD exist and why there are so many "Intel or AMD" threads on the internet: it's not a hands-down clear choice.
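The point can be made with arithmetic alone. The figures below are hypothetical (not taken from any spec sheet), but they show why neither clock speed nor "operations per clock" by itself ranks two CPUs: a lower-clocked chip doing more work per cycle can have exactly the same peak throughput as a higher-clocked chip doing less.

```python
# Peak throughput is clock speed TIMES work per clock; neither number
# alone tells you anything. All figures here are hypothetical.
def peak_ops_per_sec(clock_ghz, ops_per_clock):
    return clock_ghz * ops_per_clock  # billions of ops/sec, peak only

amd_style = peak_ops_per_sec(2.0, 9)    # lower clock, more work per clock
intel_style = peak_ops_per_sec(3.0, 6)  # higher clock, less work per clock
print(amd_style == intel_style)  # True: identical 18.0 billion ops/sec peak
```

And even matching peak numbers settles nothing, since real throughput depends on pipeline stalls, cache behavior, and the actual instruction mix, which is exactly why the "9 vs 6" comparison in the thread proves so little.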

----


1. Windows XP x64 is not [originally] for Intel systems but rather for AMD's K8 lineup. I think you have Windows XP x64 and Windows XP 64bit Edition mixed up?
2. The fact that Intel has the lion's share of the consumer market was not why Microsoft released Windows XP 64bit Edition -- t'is cuz Intel had their 64bit stuff up and running a lot sooner than AMD :)

----

Oh and on a last note:

That video card doesn't have a 256-bit memory interface.

I just came across this thread when searching for some information. I am confused by what you said in it, Praetor. One thing is "t'is cuz Intel had their 64bit stuff up and running a lot sooner than AMD." Is this really true? According to Wikipedia: "This move by AMD was well timed to take advantage of a product hole in Intel's roadmap, namely a Pentium-compatible CPU that can deal with the inevitable transition to 64 bits. Some viewed this transition as slightly premature; however, it helped AMD to snatch the standard away from Intel, and its quality 32-bit backwards compatibility made it a feasible chip even for home users. AMD's standard was adopted by Microsoft, Linux and even Sun Microsystems. This left Intel in a position where they were forced to make an agreement with AMD to use the AMD64 extensions for their own 64-bit (EM64T) processors. The K8 is also notable for its Direct Connect Architecture." (AMD - Wikipedia, the free encyclopedia). Also, about some of the things you said regarding the cycles/operations business: I know this is old, but if you can point me to where these ideas came from, I would much appreciate it, as this is contrary to what the guys here in development are saying.
 
Strictly speaking, MIPS Technologies produced the first 64bit CPU, then DEC, and then Intel announced the IA-64 architecture. The next to release one was Sun with an UltraSPARC, followed by HP and IBM, with Intel then actually shipping the Itanic. Of that list, only IBM's PowerPCs were used in home computers (Macs), and with good reason: try to find prices on some of those :)

What AMD did was add 64bit extensions to x86 before Intel did.
 
Yeah, I realized it was a year old when I posted (came across it in a search somehow) but was just messing around. Also, as we all know, there is so much misinformation around on the web, and so many people who are never wrong even when they are. That makes it hard to get proper information out to the people who need it. Anyway, after viewing the site I can see there are some very knowledgeable people on here, and I can browse the forum for ideas I haven't had. Now if I could just get these computers (and the good folks working with them) to do all my work without my presence, I might be able to finish my car.
 
I just came across this thread when searching for some information. I am comfused by what you had said in it praetor. One this is "t'is cuz intel had their 64bit stuff up and running a lot sooner than amd. Is this really true
Yes and no. Between Intel and AMD it's true; however, as Cromewell points out in #33, if we include all processors it's not. What most people mean when they say "AMD came out with 64bit before Intel" is consumer chips: Intel's Itanic lineup had been around for several years before AMD's K8 even began to breathe life.

As for the resurrection ... shrug, oh well, it's not horrible, as this thread serves a good purpose.
 