What is the maximum clock rate given the state of today's technology?

Green Xenon

Hi:

What is the maximum physically-possible clock rate [measured in Hz] of a 1-bit-per-cycle, single-core, purely-serial processor?


Thanks a bunch,

Green Xenon
 
You mean the highest overclocked CPU or stock-clocked CPU?

The highest overclock reached on a single core was around 8200 MHz (8.2 GHz), and that was on a single-core Celeron; but that would have been with LN2 for cooling, so it is not something you'd use for everyday use.
 
You mean the highest overclocked CPU or stock-clocked CPU?

The highest overclock reached on a single core was around 8200 MHz (8.2 GHz), and that was on a single-core Celeron; but that would have been with LN2 for cooling, so it is not something you'd use for everyday use.


Would an optical processor be able to run safely at a significantly higher frequency than an electronic processor?

Let's say the optical CPU uses 400-nm-wavelength lasers in place of electronic signals. What is the max clock rate that could be achieved by this theoretical CPU without it experiencing any physical damage?
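One hard ceiling worth noting: no clock derived from a 400 nm carrier could ever exceed the oscillation frequency of the light itself. A minimal back-of-the-envelope sketch in Python:

```python
# Upper bound for a 400 nm optical clock: the carrier frequency of the
# light itself. Back-of-the-envelope only; this says nothing about what
# any real optical logic could actually switch at.
c = 299_792_458        # speed of light in vacuum, m/s
wavelength = 400e-9    # 400 nm laser, in metres

f = c / wavelength
print(f"{f:.3e} Hz")   # ~7.495e14 Hz, i.e. roughly 750 THz
```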
 
Would an optical processor be able to run safely at a significantly higher frequency than an electronic processor?

Let's say the optical CPU uses 400-nm-wavelength lasers in place of electronic signals. What is the max clock rate that could be achieved by this theoretical CPU without it experiencing any physical damage?
Idk. I think that was the most scientific post on clock frequency ever made on CF. :D It's an interesting thought, though. I hear that Intel is working on optical computing, although it may only be used for bus transfer over the next few years. Still, we could see it in CPUs sometime not too far from now.
 
Well, that is not so easy to answer, as there are no real places to find the info; it is still research tech and not in the real world, so it would be impossible to say what the maximum clock rate of such CPUs is. Intel said they made a chip that hit 40 Gbps, but that was in 2007. An Israeli company claimed they made an optical CPU that operated at eight trillion operations per second, which would mean 8 THz, but that was way back in 2003.
 
Well, that is not so easy to answer, as there are no real places to find the info; it is still research tech and not in the real world, so it would be impossible to say what the maximum clock rate of such CPUs is. Intel said they made a chip that hit 40 Gbps, but that was in 2007. An Israeli company claimed they made an optical CPU that operated at eight trillion operations per second, which would mean 8 THz, but that was way back in 2003.

Well, one must understand that "operations-per-second" and "cycles-per-second" are two totally different things. I'm curious about the latter, not so much the former.
 
I would argue that operations per second would be of more importance than pure clock cycles: how many clocks does it take to perform an operation?

In these current generations of computers (with their multithreaded, multi-core configurations), serial processing is rare, and judging a processor by clock speed alone is inaccurate at best.
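To illustrate that point with made-up numbers (the clock and cycles-per-instruction figures below are assumptions, not measurements of any real chip):

```python
# Two hypothetical CPUs at the same clock but with different
# cycles-per-instruction (CPI) for the same workload.
clock_hz = 3.0e9   # assumed 3 GHz clock
cpi_a = 1          # CPU A: one cycle per operation (assumed)
cpi_b = 4          # CPU B: four cycles per operation (assumed)

ops_a = clock_hz / cpi_a   # 3.0e9 operations per second
ops_b = clock_hz / cpi_b   # 7.5e8 operations per second

# Same clock rate, yet CPU A does four times the work per second.
print(ops_a, ops_b)
```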
 
Well, one must understand that "operations-per-second" and "cycles-per-second" are two totally different things. I'm curious about the latter, not so much the former.

Fair enough, I was just trying to use the data available to give some kind of response, but you are right that the calculation is wrong; I had a feeling it might be.
Also, if you could explain I would be grateful, as I am a bit foggy on this area, come to think about it.

When typed into Google, "operations per second" comes up as FLOPS; is that what you mean? If you are referring to instructions per second, then the above figure should be translatable into a frequency after the correct calculation.
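For what it's worth, that translation only works if you assume how many operations complete per cycle; applied to the eight-trillion-ops/s claim from earlier in the thread (the per-cycle figures are pure assumptions):

```python
ops_per_sec = 8e12   # the claimed eight trillion operations per second

# The implied clock rate depends entirely on the assumed ops per cycle.
for ops_per_cycle in (1, 4, 16):
    clock_hz = ops_per_sec / ops_per_cycle
    print(f"{ops_per_cycle} op/cycle -> {clock_hz / 1e12:.1f} THz clock")
# 1 op/cycle -> 8.0 THz; 4 -> 2.0 THz; 16 -> 0.5 THz
```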
 
Fair enough, I was just trying to use the data available to give some kind of response, but you are right that the calculation is wrong; I had a feeling it might be.
Also, if you could explain I would be grateful, as I am a bit foggy on this area, come to think about it.

When typed into Google, "operations per second" comes up as FLOPS; is that what you mean? If you are referring to instructions per second, then the above figure should be translatable into a frequency after the correct calculation.


Clock rate is measured in cycles per second, or Hz. Not FLOPS, not operations-per-second, not instructions-per-second, and certainly not bits-per-second. Just a pure measurement of the clock signal's frequency.

It is very important to note that "processor speed" and "clock rate" are two totally different entities.
 
Clock rate is measured in cycles per second, or Hz. Not FLOPS, not operations-per-second, not instructions-per-second, and certainly not bits-per-second. Just a pure measurement of the clock signal's frequency.

It is very important to note that "processor speed" and "clock rate" are two totally different entities.

Yes, I am aware that processor speed and clock rate are different things, and I knew about clock rate, cycles, and Hz. I actually knew the difference between the two (FLOPS and Hz), and after a bit of reading I refreshed my brain; that calculation was way off. I think I'm becoming lazy, lol.

Since this is still going: why do you want to know the max clock rate of today's tech, since you, Dngrsone, and I (and many others as well) all acknowledge that it is not an accurate way to measure performance?
Or is it that within optical computing it is more relevant?
 
Since this is still going: why do you want to know the max clock rate of today's tech, since you, Dngrsone, and I (and many others as well) all acknowledge that it is not an accurate way to measure performance?
Or is it that within optical computing it is more relevant?

Good point. Optical computing, if and when it does take off, is likely to be an entirely different animal than today's or even tomorrow's processor technology.

I think it's still a tossup whether optical or quantum logic will first hit practical application.

Does a positronic brain have a clock?
 
Since this is still going: why do you want to know the max clock rate of today's tech, since you, Dngrsone, and I (and many others as well) all acknowledge that it is not an accurate way to measure performance?


I'm just in it for the science. No application.
 
Good point. Optical computing, if and when it does take off, is likely to be an entirely different animal than today's or even tomorrow's processor technology.

I think it's still a tossup whether optical or quantum logic will first hit practical application.

Does a positronic brain have a clock?

From what I hear, quantum computing will absolutely blow away today's tech, probably more so than optical tech.
 
Idk. I think that was the most scientific post on clock frequency ever made on CF. :D It's an interesting thought, though. I hear that Intel is working on optical computing, although it may only be used for bus transfer over the next few years. Still, we could see it in CPUs sometime not too far from now.

Yeah, they're working on Light Peak. It's like compact fiber optics in a way...
http://www.intel.com/technology/io/thunderbolt/index.htm
 
From what I hear, quantum computing will absolutely blow away today's tech, probably more so than optical tech.

I have heard conflicting statements about this. With the first working prototype using photonics, which IBM made a few months back, it was a hell of a lot smaller in terms of transistor count (I am aware they aren't actually transistors as such, because it uses light rather than electronic signals), but they claim it was, in terms of size, thousands of times more powerful than a standard electronic CPU. That is understandable, because you are using the speed of light with no resistance, compared to electronic signals and the resistance of the wire.

But then I've heard similar things about quantum computers.

Until both become a viable option and you can actually see a full, working prototype computer which houses one of these, it is all talk really, because all the companies funding the R&D will keep the actual results and research on a need-to-know basis. After all, the first to exploit such technologies will be a very, very rich company...

For the original question, though: the max clock speed would, theoretically, be bounded only by the laws of physics. You couldn't make more electrons flow than is physically possible for the size of the wire, you wouldn't be able to give them infinite amounts of energy, and so on. On top of those come hardware limitations: the distance signals have to travel around the mobo to other components, maximum power thresholds, even impurities in the copper used will pin you back.

You look at some CPUs and they will run fine at 1 V, but then for others you will have them sat at 1.6 V at the same clock speed because of the different architectures. In an ideal world, the architecture would allow for massive clock speeds, a huge number of cores, and very little power put in; the chips we have are actually very inefficient, but they are the best we have so far.
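That voltage/clock trade-off roughly follows the standard dynamic-power relation, P ≈ C·V²·f. A small sketch (the capacitance value is invented purely for illustration):

```python
# Dynamic switching power scales as P ~ C * V^2 * f.
C = 1e-9  # effective switched capacitance in farads (assumed value)

for V, f in ((1.0, 3.0e9), (1.6, 3.0e9), (1.0, 6.0e9)):
    P = C * V**2 * f
    print(f"V = {V} V, f = {f / 1e9:.0f} GHz -> P ~ {P:.2f} W")
# Raising 1.0 V to 1.6 V at the same clock costs ~2.56x the power;
# doubling the clock at a fixed voltage only doubles it.
```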
 
I'm just in it for the science. No application.

Okay, the science is this: the size of the structures in the processor (in other words, the individual transistors and such) is dictated by the "die" used by the manufacturer. Currently the die size is 65nm, which is pretty much the smallest current technology can achieve.

This is due to several factors, among which is the mask used to etch the silicon wafer and the wavelength of the light that shines through that mask. The light used now is in the ultraviolet range; moving up to x-rays would require an entirely different form of mask and the technology for developing that type of mask has not been perfected.

Additionally, bringing the die size down any further pushes the processor into the realm of quantum physics, which is entirely incompatible with microelectronics and the current computing style. This is also part of the problem with higher clock speeds. The higher frequencies (we're in the microwave range here) travel on the surface of the metal and start to create unintended effects like crosstalk and spot-heating.

Speaking of heat, that is another issue we get with the higher clocks. You need the smaller die sizes to get GHz clock speed to begin with, otherwise it would take too long to propagate data from one side of a chip to the other. But even so, pushing electrons and potentials back and forth at billions of times per second creates a lot of heat. Heat increases the quantum effect, does damage to the structures of the processor, and eventually will melt the silicon. Piping that heat away is a serious issue, and building heat-pipes or other cooling structures into the silicon puts distance between structures which reduces allowable clock speeds.
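The propagation point is easy to quantify: even at the vacuum speed of light (on-chip signals are considerably slower), the distance covered in one clock period shrinks fast as the clock rises. A quick sketch:

```python
# Distance light travels in one clock period, as an optimistic upper
# bound on how far any on-chip signal could get per cycle.
c = 299_792_458  # m/s

for f_ghz in (1, 4, 10, 100):
    period_s = 1 / (f_ghz * 1e9)
    dist_mm = c * period_s * 1000
    print(f"{f_ghz:>3} GHz: {period_s * 1e12:5.0f} ps period, "
          f"{dist_mm:6.1f} mm per cycle")
# At 100 GHz even light covers only ~3 mm per cycle, less than the
# width of a typical die.
```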
 
Okay, the science is this: the size of the structures in the processor (in other words, the individual transistors and such) is dictated by the "die" used by the manufacturer. Currently the die size is 65nm, which is pretty much the smallest current technology can achieve.

This is due to several factors, among which is the mask used to etch the silicon wafer and the wavelength of the light that shines through that mask. The light used now is in the ultraviolet range; moving up to x-rays would require an entirely different form of mask and the technology for developing that type of mask has not been perfected.

Additionally, bringing the die size down any further pushes the processor into the realm of quantum physics, which is entirely incompatible with microelectronics and the current computing style. This is also part of the problem with higher clock speeds. The higher frequencies (we're in the microwave range here) travel on the surface of the metal and start to create unintended effects like crosstalk and spot-heating.

Speaking of heat, that is another issue we get with the higher clocks. You need the smaller die sizes to get GHz clock speed to begin with, otherwise it would take too long to propagate data from one side of a chip to the other. But even so, pushing electrons and potentials back and forth at billions of times per second creates a lot of heat. Heat increases the quantum effect, does damage to the structures of the processor, and eventually will melt the silicon. Piping that heat away is a serious issue, and building heat-pipes or other cooling structures into the silicon puts distance between structures which reduces allowable clock speeds.


The size of the die is usually measured in mm, not nm; it is the size of the transistors that is measured in nm, and a nm die size would be very small. For example, the 45 nm Core i7 die size is 263 mm². We have now moved past 65 nm and are currently on 32 nm; Intel will be moving to 22 nm sometime this year with Ivy Bridge and plans to move to 16 nm when Rockwell is released. There are plans to go even smaller than that, I believe.

I fail to see the relation between the die shrink of today's tech and quantum computing, as they are incomparable technologies, since the way they process data is very different.
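To put those two scales side by side: assuming the commonly cited figure of roughly 731 million transistors for that 45 nm Core i7 die (an assumption here, not a number from this thread), the average footprint per transistor works out far larger than 45 nm on a side:

```python
import math

die_area_mm2 = 263   # 45 nm Core i7 die area, from the post above
transistors = 731e6  # assumed published transistor count for that die

area_nm2 = die_area_mm2 * 1e12 / transistors  # 1 mm^2 = 1e12 nm^2
print(f"~{area_nm2:,.0f} nm^2 per transistor, "
      f"~{math.sqrt(area_nm2):.0f} nm on a side")
# ~360,000 nm^2 each, i.e. ~600 nm square: wiring, cache and spacing
# dwarf the 45 nm minimum feature size.
```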
 
The size of the die is usually measured in mm, not nm; it is the size of the transistors that is measured in nm, and a nm die size would be very small. For example, the 45 nm Core i7 die size is 263 mm². We have now moved past 65 nm and are currently on 32 nm; Intel will be moving to 22 nm sometime this year with Ivy Bridge and plans to move to 16 nm when Rockwell is released. There are plans to go even smaller than that, I believe.

I fail to see the relation between the die shrink of today's tech and quantum computing, as they are incomparable technologies, since the way they process data is very different.

It isn't so much crossing quantum computing with your standard electronic-signal computer; it is when you start to play with stuff that small, where each transistor is only a few dozen atoms across. We have made single-atom transistors before, and have made several from just a few atoms; however, each was made by hand (by which I mean not automated), and with that come problems on the quantum level (when things go very fast or when they are very small).
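For a sense of that scale: silicon's lattice constant is about 0.543 nm, so the "few dozen atoms" figure checks out for modern feature sizes. A quick calculation:

```python
# Feature widths expressed in silicon lattice cells (~0.543 nm each).
a_si_nm = 0.543  # silicon lattice constant in nm

for feature_nm in (45, 32, 22, 16):
    cells = feature_nm / a_si_nm
    print(f"{feature_nm} nm ~ {cells:.0f} lattice cells across")
# 22 nm is only ~41 unit cells wide, i.e. a few dozen atoms.
```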
 
It isn't so much crossing quantum computing with your standard electronic-signal computer; it is when you start to play with stuff that small, where each transistor is only a few dozen atoms across. We have made single-atom transistors before, and have made several from just a few atoms; however, each was made by hand (by which I mean not automated), and with that come problems on the quantum level (when things go very fast or when they are very small).

Ah right, got ya; I see what Dngrsone was saying now.
 