What happens to general tech progress if we hit the silicon and optimization limits?

The silicon limit and the optimization limit are fast approaching. It used to be that you just made the transistors switch faster, made some minor optimizations here and there, and things kept getting faster. Now we've been stuck behind a 5GHz wall for years and years, we can't shrink the production process much further, and we've been optimizing behind that wall for so long that the engineers are running out of ideas.
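For what it's worth, the usual first-order explanation for the clock wall is power: dynamic power scales with C·V²·f, and in the classic scaling regime voltage has to rise roughly with frequency, so heat grows roughly with the cube of clock speed. A quick sketch (my numbers, purely illustrative):

```python
# Rough first-order reason clocks stalled: dynamic power ~ C * V^2 * f,
# and voltage rises roughly with frequency in the classic regime, so
# power/heat grows roughly as f^3. Numbers here are illustrative only.

def relative_power(f_ghz: float, base_ghz: float = 4.0) -> float:
    """Power at f_ghz relative to a 4 GHz baseline, assuming P ~ f^3."""
    return (f_ghz / base_ghz) ** 3

for f in (4.0, 5.0, 6.0, 8.0):
    print(f"{f} GHz -> ~{relative_power(f):.1f}x the heat of a 4 GHz chip")
```

That cube is why the extra transistor budget went into more cores instead of more gigahertz.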

GPUs continue to advance by leaps and bounds. CPUs not so much.

Thanks for being pretty useful and honest, unlike a lot of the others! When GPUs reach the same process node (nm) as CPUs, will they face the same limit, do you think? Thanks :D
 
:/ Oh great, so we are about to see a huge stall in those too then!

If we don't have fairly decent virtual reality by the time I'm all alone in a nursing home in 4 decades, then I don't know what I'll do!!
 

Darren

Moderator
Staff member
:/ Oh great, so we are about to see a huge stall in those too then!

If we don't have fairly decent virtual reality by the time I'm all alone in a nursing home in 4 decades, then I don't know what I'll do!!
I'd classify the Vive as already "fairly decent". VR is probably going to be old and retro tech in 4 decades.
 
Is anyone actually able to bring up an argument suggesting anything other than that CPU/GPU progress for home computers is going to stall for a good 5 years?
 

Darren

Moderator
Staff member
Is anyone actually able to bring up an argument suggesting anything other than that CPU/GPU progress for home computers is going to stall for a good 5 years?
Do you even pay attention to modern hardware news?

Ryzen is a huge leap for AMD and will drive Intel to actually innovate for the first time in 6 years. GPUs are always on the move too.
 

Intel_man

VIP Member
Ryzen is a huge leap for AMD and will drive Intel to actually innovate for the first time in 6 years.
At best, it's going to lower Intel's prices. Things are already locked in for the next few years when it comes to what Intel is releasing. Cannonlake is their next "big" thing, with a targeted >15% increase in performance vs Kaby Lake.

Things might change, but we'll see. Going to a 10nm process is probably causing them problems right now. Kaby Lake was the stopgap release because the 10nm Cannonlake got delayed.
 

Darren

Moderator
Staff member
At best, it's going to lower Intel's prices. Things are already locked in for the next few years when it comes to what Intel is releasing. Cannonlake is their next "big" thing, with a targeted >15% increase in performance vs Kaby Lake.

Things might change, but we'll see. Going to a 10nm process is probably causing them problems right now. Kaby Lake was the stopgap release because the 10nm Cannonlake got delayed.
Well, I didn't necessarily mean right this second, but they're definitely going to have to kick it up a notch over the next couple of fiscal years.
 

Darren

Moderator
Staff member
Are they actually CAPABLE of innovating that much, though?
How the heck would I or anybody else around here give you a definitive answer about what a large company will do in the future? You seem to be expecting answers to questions that can't really be answered with any true certainty.
 

mistersprinkles

Active Member
My suspicions are as follows:

1) We will hit a process wall inside of 10 years, when we can't shrink the litho process any further.

2) When that happens, we will only have optimization and increases in size (i.e., bigger chip = more transistors) to save us until something replaces silicon.

3) The increase in size will be three-dimensional, much like what has happened with NAND. Instead of huge square chips, we will have not-so-huge cube-shaped chips.

We have already hit some walls along the way in the past few years, but industry innovation has smashed through them. Look at how much trouble they had keeping electrons from leaking out of the chip when they went below 65nm - hafnium was the answer. Electrons were leaking again below 32nm, and the answer there was 3D transistors and gates.

Will we make it to 1nm lithography? I don't know.
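To put rough numbers on suspicion #1 (my own back-of-the-envelope, not any foundry's roadmap): if each full node shrink scales linear dimensions by about 0.7x, which roughly doubles transistor density, you can count how many shrinks are left before the nanometre labels run out, and how much a hypothetical 3D stack buys on top:

```python
# Back-of-the-envelope node-scaling sketch. The 0.7x shrink factor, the
# 14nm starting point, and the 4-layer stack are assumptions for
# illustration, not real roadmap figures.

SHRINK = 0.7  # linear scale factor per full node (~halves transistor area)

def shrinks_until(start_nm: float, target_nm: float) -> int:
    """How many full node shrinks it takes from start_nm down to target_nm."""
    steps, node = 0, start_nm
    while node > target_nm:
        node *= SHRINK
        steps += 1
    return steps

steps = shrinks_until(14.0, 1.0)
density_gain = 2 ** steps  # each shrink roughly doubles density
print(f"~{steps} full shrinks from 14nm to 1nm, ~{density_gain}x the density")

# 3D stacking is a separate multiplier: a hypothetical 4-layer "cube"
# quadruples transistor count per footprint - ignoring heat, which is
# exactly why logic hasn't stacked the way NAND has.
layers = 4
print(f"with {layers} layers on top: ~{density_gain * layers}x total")
```

Of course, whether those last few shrinks are physically or economically possible is exactly the open question.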
 

TrainTrackHack

VIP Member
They'll just solve it in software. It's only with the rise of mobile computing that devs have started paying serious attention again to how demanding their software is. Outside of a few specialised fields, performance used to be a lesser concern, so as hardware got more powerful, programmers just came up with easier and quicker ways to do things. Great for programmers, but over the years it's had the widely noted odd effect of software growing much more demanding while still doing more or less the same things it did a decade ago, just with a snazzier UI (this might not be entirely fair, but it has some truth to it). In the past, it was even an unofficial policy at Microsoft that you don't worry about how well your software performs during development - by the time it's ready to ship, hardware will have gotten fast enough.

Most of the time this is actually good practice, because it's comparatively easy to write maintainable, easy-to-understand code if you don't have to care too much about performance - but if hardware stops getting more powerful, we'll just find ways to make software run more efficiently.
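A toy illustration of that trade-off (my example, nothing from a real codebase): both versions below do the same job, and the "easy" one is perfectly fine right up until hardware stops bailing it out.

```python
# Same task two ways: find which items from `wanted` appear in a big log.
# The easy version is the natural first thing to type; the tuned version
# trades a moment's thought for a much better complexity class.

def easy_lookup(wanted, log_lines):
    # O(len(wanted) * len(log_lines)): a list scan inside a loop.
    return [w for w in wanted if w in log_lines]

def tuned_lookup(wanted, log_lines):
    # O(len(wanted) + len(log_lines)): one pass to build a set first.
    seen = set(log_lines)
    return [w for w in wanted if w in seen]

log_lines = [f"event-{i}" for i in range(100_000)]
wanted = [f"event-{i}" for i in range(0, 100_000, 10)]
assert easy_lookup(wanted[:100], log_lines) == tuned_lookup(wanted[:100], log_lines)
```

The point isn't that sets are clever - it's that for two decades nobody had to notice the difference.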
 

TrainTrackHack

VIP Member
Except software isn't getting faster. Even in mobile.
Well, you're not wrong - I didn't mean to say that it's getting faster. It's just that with smartphones/tablets there's (for obvious reasons) been a big push for software that works with little memory, runs on crappy CPUs, and minds power consumption, and the APIs/frameworks on them are designed accordingly. In effect, there's just been a big baseline shift in how demanding software can be - as mobile devices have gotten more powerful, mobile software has followed the same cance^Wexponential growth that we've seen in the PC world.
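For a concrete flavour of what "works with little memory" means in practice (a toy sketch of the general habit, not any particular mobile API): process data lazily instead of materialising all of it at once.

```python
# Lazy vs eager processing - the kind of habit tight memory budgets force.
# The eager version holds the whole file in memory; the lazy one keeps a
# constant footprint no matter how big the input gets.

def eager_total(path: str) -> int:
    with open(path) as f:
        lines = f.readlines()        # entire file resident in memory
    return sum(len(line) for line in lines)

def lazy_total(path: str) -> int:
    with open(path) as f:
        return sum(len(line) for line in f)  # one line at a time
```

Desktop software can usually get away with the first version, which is rather the point.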
 