How do they keep making faster CPUs?

Robert P

Member
How are they able to continually make more and more powerful machines? Are design engineers continually breaking new ground, or did they have it all mapped out 40 years ago and now dole it out a little at a time to keep the revenue machine churning?
 

Okedokey

Well-Known Member
By power I will assume you mean computational throughput. Mostly this has come down to the miniaturisation of the transistors and other components involved in logic processing, allowing higher clock speeds and more instructions per clock (IPC) within the heat and power limits of a given chip ('die').
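A rough way to see that heat/power limit at work: dynamic switching power is commonly approximated as P ≈ α·C·V²·f per transistor. The sketch below uses made-up values for activity factor, capacitance and voltage (they are not real process figures), just to show why smaller transistors, which switch less charge at lower voltage, let you fit far more of them into the same power budget.

```python
# Toy model of dynamic switching power: P ~ alpha * C * V^2 * f * N.
# Every number here is an illustrative placeholder, not real process data.

def dynamic_power_watts(activity, cap_per_transistor_f, voltage_v, freq_hz, transistors):
    """Approximate switching power for a block of transistors."""
    return activity * cap_per_transistor_f * voltage_v**2 * freq_hz * transistors

# Hypothetical "old" node: larger transistors (more capacitance), higher voltage.
old = dynamic_power_watts(0.01, 1.0e-15, 1.2, 3.0e9, 1e9)

# Hypothetical "new" node: smaller transistors switch less charge at lower voltage.
new = dynamic_power_watts(0.01, 0.5e-15, 0.9, 3.0e9, 1e9)

print(f"old node: {old:.0f} W for a billion transistors at 3 GHz")
print(f"new node: {new:.0f} W for the same billion transistors at 3 GHz")
print(f"-> roughly {old / new:.1f}x more transistors fit in the old power budget")
```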


If you want to know the technical details of this, look up cpu lithography (photolithography). Essentially it is the way the circuit is 'etched' into the cpu 'die' to create the individual transistors and circuits.

Visible light was used originally, but its wavelength is much too long: the light has to resolve features smaller than the smallest node (circuit component), and anything bigger gives too low a resolution. Therefore shorter and shorter wavelengths are used (higher frequency = higher resolution), with extreme ultraviolet at the current leading edge. From there you'll understand that it is this technology and the related technologies that have needed co-advancement. Super-multidisciplinary and expensive. Thus the time. Also look into Moore's Law (not actually a law).
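That resolution limit is often written as the Rayleigh criterion, CD ≈ k1·λ/NA: the smallest printable feature scales with the wavelength of the light. The k1 and NA figures below are typical textbook values, used only to illustrate why the jump from 193 nm deep-UV to 13.5 nm extreme-UV light matters so much.

```python
# Rayleigh criterion for the smallest printable feature (critical dimension):
#   CD ~ k1 * wavelength / NA
# k1 and NA values below are typical published figures, used only for illustration.

def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.3):
    return k1 * wavelength_nm / numerical_aperture

duv = min_feature_nm(wavelength_nm=193, numerical_aperture=1.35)   # immersion DUV
euv = min_feature_nm(wavelength_nm=13.5, numerical_aperture=0.33)  # current EUV tools

print(f"193 nm immersion DUV: ~{duv:.0f} nm features per exposure")
print(f"13.5 nm EUV:          ~{euv:.0f} nm features per exposure")
```

(Real processes also use multi-patterning to print features below the single-exposure limit, which is part of the cost and complexity mentioned above.)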

In the last decade an extra 3rd dimension has been added, with components stacked vertically to increase computational density for a given lithographic feature size.
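The arithmetic behind stacking is simple: layers multiply density without needing smaller lithography. Both figures below are arbitrary illustrations, not from any real process.

```python
# Stacking multiplies transistors per unit of die area without shrinking features.
# Both numbers are arbitrary illustrations, not real process figures.
planar_density_per_mm2 = 100e6   # hypothetical transistors/mm^2 for one layer
layers = 4                       # hypothetical number of stacked layers
print(f"effective density: {planar_density_per_mm2 * layers:.2e} transistors/mm^2")
```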
 

beers

Moderator
Staff member
Most of it is the shrinking of transistors. New stuff coming out is using the TSMC N3 process, marketed as "3nm". Older CPUs like the Core 2 Duo were based on a 45nm process, e.g. 'Wolfdale' chips like the E8400. An additional benefit is that at a smaller scale it takes less power to energize the same number of transistors, and you can also 'pack more' of them into the same physical space on the CPU.
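Taking those node names at face value for a moment (they haven't matched physical feature sizes for years, hence "branding"), the naive area arithmetic still shows the direction of travel:

```python
# Naive scaling estimate: if feature sizes really shrank from 45 "nm" to 3 "nm",
# the same circuit would occupy (3/45)^2 of the area. Node names are marketing,
# so treat this as a direction-of-travel illustration, not a real density figure.

old_node_nm = 45   # Core 2 Duo 'Wolfdale' era
new_node_nm = 3    # TSMC N3 branding

area_ratio = (new_node_nm / old_node_nm) ** 2
print(f"same circuit in ~{area_ratio:.4f} of the area")
print(f"i.e. roughly {1 / area_ratio:.0f}x more transistors in the same space (naive estimate)")
```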

There are also advancements in 'how' data is moved around the chip effectively, such as the Ryzen Infinity Fabric being arguably a superior approach compared to older implementations like the ring-bus topology of Sandy Bridge-era Intel CPUs.
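A toy model of why interconnect topology matters as core counts grow. This is not AMD's Infinity Fabric or Intel's ring bus as actually implemented, just average hop counts for two idealised layouts:

```python
# Toy comparison of on-chip interconnect topologies: average number of hops
# between cores. Purely illustrative -- not Infinity Fabric or Intel's ring
# bus as actually implemented.

def avg_hops_ring(n_cores):
    """Average shortest-path hops on a bidirectional ring of n cores."""
    total = sum(min(d, n_cores - d) for d in range(1, n_cores))
    return total / (n_cores - 1)

def avg_hops_full_mesh(n_cores):
    """Every core directly linked to every other core: always one hop."""
    return 1.0

for n in (4, 8, 16, 32):
    print(f"{n:2d} cores: ring avg {avg_hops_ring(n):.2f} hops, "
          f"full mesh avg {avg_hops_full_mesh(n):.2f} hop")
```

The point is simply that as core counts grow, how data moves between cores starts to matter as much as how fast each core is.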

How you encode and process data is also a source of a lot of gains — 802.11n vs 802.11ax for wifi is one example. A more CPU-specific example would be adding the AES instruction set (AES-NI) to your CPU, so the encryption algorithm runs in dedicated hardware instead of as a long sequence of general-purpose instructions, which nets you large gains for that particular workload.
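To get a feel for what a dedicated instruction set buys, here is a rough throughput check using the third-party cryptography package (assumed to be installed separately; whether the fast path actually uses AES-NI depends on your CPU and the OpenSSL build underneath it):

```python
# Rough AES throughput check. Requires the third-party 'cryptography' package
# (pip install cryptography). On CPUs with AES-NI, the underlying OpenSSL code
# typically uses those instructions, which is where the big speedup comes from.
import os
import time

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                  # AES-256 key
nonce = os.urandom(16)                # CTR-mode initial counter block
data = os.urandom(32 * 1024 * 1024)   # 32 MiB of random data

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

start = time.perf_counter()
encryptor.update(data)
encryptor.finalize()
elapsed = time.perf_counter() - start

print(f"AES-256-CTR: {len(data) / (1024**2) / elapsed:.0f} MiB/s")
```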

'Scaling out', as opposed to relying on a single core for performance, has also been the developing story since dual-core CPUs became common on the market. If you have a parallel workload then you can simply get an improvement by adding 'moar cores'. IPC advancements are also 'a thing': doing more 'work' with the same number of clock cycles, by analyzing existing designs and improving how they manipulate whatever data you're trying to process.
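The usual ceiling on 'moar cores' is Amdahl's law: speedup = 1 / ((1 − p) + p/n) when a fraction p of the workload can run in parallel across n cores. The p values below are arbitrary examples:

```python
# Amdahl's law: the speedup from n cores when only a fraction p of the work
# can actually run in parallel. The p values here are arbitrary examples.

def amdahl_speedup(parallel_fraction, n_cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

for p in (0.50, 0.90, 0.99):
    line = ", ".join(f"{n} cores: {amdahl_speedup(p, n):.1f}x" for n in (2, 8, 64))
    print(f"{p:.0%} parallel -> {line}")
```

Which is why IPC and single-core improvements still matter even on many-core chips.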
 