Doesn't really matter.
I mean, it's cool and all that we keep making transistors smaller and cramming more in, but as mentioned, you can also just make more dies. In fact, you can go back to more specialized separate chips working together on the motherboard. Yes, there are difficulties in that too, but it all works out to circumvent the need to make dies with smaller, yet more numerous, circuits. To me Moore's "law" is a bit of a silly observation to talk about, but if we are going to keep talking about it, I think if you take this law and apply it system-wide (however big you want the system: a desktop, a server rack, and so on), it will probably apply in some form for a long, long time.

Also, it may be that silicon hits a very hard wall in terms of nanometers. That's fine; why not make chips the size of my face? Factory investment will mature, and going for acreage when it's the only thing you have left to do will not be a big deal. I guess a rethink of semiconductors is also possible, new materials and so on.
I think the near future is going to have more and more designs like Apple's M1 and M2 and Nvidia's 4090.
Let me explain.
Overclockers have always known that boosting voltages can allow for higher clocks, but that it doesn't scale well. As you reach the limit of the chip you start creating a ton of extra heat and power consumption for small marginal improvements in performance.
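Here's a rough back-of-the-envelope sketch of why. The P ≈ C·V²·f dynamic-power relation is standard, but the specific voltages, clocks, and the assumption that voltage has to rise roughly in step with frequency near the limit are illustrative numbers I picked, not measurements:

```python
# Back-of-the-envelope: dynamic power is roughly P = C * V^2 * f.
# Near a chip's limit, voltage has to rise roughly in step with
# frequency, so power grows close to the cube of the clock.
# All numbers here are illustrative, not measurements.

C = 1.0  # lumped switched capacitance, arbitrary units

def dynamic_power(voltage, freq_ghz):
    return C * voltage ** 2 * freq_ghz

base = dynamic_power(1.0, 4.0)
for v, f in [(1.0, 4.0), (1.1, 4.4), (1.2, 4.8), (1.3, 5.2)]:
    p = dynamic_power(v, f)
    print(f"{f:.1f} GHz @ {v:.2f} V: "
          f"+{(f / 4.0 - 1) * 100:3.0f}% perf, "
          f"+{(p / base - 1) * 100:3.0f}% power")
```

In this toy model a ~30% clock boost costs about 120% more power, which is exactly the "ton of extra heat for small marginal improvements" problem.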
Over the last several years the strategy has been to push per-core clocks as high as possible, through a combination of process yields and voltage management, boosting clocks when needed.
The M1, M2 and 4090 seem to take a different approach. They are HUGE chips, and because they are huge, they don't need massive voltages to hit high clocks; they can instead use their high transistor count at lower voltages, where the scaling is better, and get their performance that way.
This would be a little tricky to accomplish on the AMD/Intel CPU side while still maintaining x86 compatibility, but for GPUs it is much easier, since they are essentially massively parallel devices, so the sky is the limit on how many "cores" you add. Just add more of them, and lower the voltage to the point that maximizes performance per watt.
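A minimal sketch of that trade-off, using the same toy power model as above. The core counts, clocks, and voltages are made up, and "throughput = cores × clock" assumes perfect parallel scaling, which GPUs get closer to than CPUs do:

```python
# "Narrow and fast" vs "wide and slow" at equal throughput.
# Assumes throughput ~ cores * clock (perfect parallel scaling)
# and dynamic power ~ cores * V^2 * f with voltage tracking
# frequency. All numbers are toy values for illustration.

def config(cores, freq_ghz, voltage):
    throughput = cores * freq_ghz            # arbitrary units
    power = cores * voltage ** 2 * freq_ghz  # arbitrary units
    return throughput, power

narrow = config(cores=8, freq_ghz=4.0, voltage=1.2)
wide = config(cores=16, freq_ghz=2.0, voltage=0.9)

for name, (tput, power) in [("narrow/fast", narrow), ("wide/slow", wide)]:
    print(f"{name}: throughput={tput:.0f}, power={power:.1f}, "
          f"perf/watt={tput / power:.2f}")
```

Same throughput, but the wide-and-slow configuration burns roughly 44% less power here, purely because the V² term dominates.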
But this is what I think we will see more and more of over the next few years, at least in GPUs: core counts massively increasing, running at relatively low clocks to maximize perf/watt, and getting their performance increases that way.
The 4090 accomplishes this with an insanely large monolithic chip, which probably has horrible yields and helps contribute to its high price, but AMD may have an advantage here with chiplets, if they can just make the interconnects work. If they can make the GPU a modular set of core chips that communicate over an interconnect, they can theoretically just keep adding modules and push the core count sky high, while still having reasonable yields and keeping the costs somewhat more reasonable.
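To put rough numbers on the yield intuition: a classic first-order model says die yield falls off exponentially with area, yield ≈ e^(−D·A). The defect density and die areas below are made-up round numbers, just to show the shape of the effect:

```python
# Why one huge die yields worse than small chiplets: a classic
# first-order (Poisson) model puts die yield at exp(-D * A), with
# D = defects per cm^2 and A = die area in cm^2. D and the areas
# below are made-up round numbers.

import math

D = 0.2  # defects per cm^2, illustrative

def die_yield(area_cm2):
    return math.exp(-D * area_cm2)

mono_area = 6.0     # one big ~600 mm^2 monolithic die
chiplet_area = 1.5  # four ~150 mm^2 chiplets covering the same area

print(f"monolithic {mono_area:.1f} cm^2 die: {die_yield(mono_area):.0%} yield")
print(f"single {chiplet_area:.1f} cm^2 chiplet: {die_yield(chiplet_area):.0%} yield")
```

That works out to roughly 30% vs 74% per die in this toy setup, and a defective chiplet only scraps itself instead of taking three good neighbors with it, so the monolithic approach burns about 2.5x as much silicon per working product.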
As far as Moore's law goes, people forget that Moore's law was really just counting transistors, not performance. A lot of people have read way more into it over the years than they really should have.
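For reference, the observation as literally stated is just a doubling curve, N(t) = N₀ · 2^(t/T), with T commonly cited as about two years. Here's that and nothing more:

```python
# Moore's law as literally stated: transistor count doubling on a
# roughly fixed cadence (T ~ 2 years is the commonly cited figure).
# Note that it says nothing about clocks, IPC, or performance.

def transistors(n0, years, doubling_period=2.0):
    return n0 * 2 ** (years / doubling_period)

for yr in (0, 2, 4, 10):
    print(f"year {yr:2d}: ~{transistors(1e9, yr):.1e} transistors")
```

Nothing in there says anything about clocks or performance; that part was always extrapolation layered on top.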