NVIDIA CEO Comments on High Pricing of GeForce RTX 40 Series Graphics Cards, says “Moore’s Law Is Dead”

Posted by Tsing (The FPS Review staff)
NVIDIA CEO Jensen Huang is aware that gamers and other enthusiasts aren't thrilled with the pricing of the GeForce RTX 40 Series, but they should probably get used to it because prices will only continue to increase, according to remarks that the executive made during a recent conference call Q&A with reporters.

 
Moore's Law may be dead, but that doesn't necessarily mean that consumers will throw more money at it.

Even Wall Street is worried nVidia screwed this one up -- if these cards are just too expensive, then rather than sucking it up and paying more (which is the message JHH is sending with his comments), consumers may simply wait until something becomes affordable.

For a company that plays more to Wall Street than to its consumers, JHH screwed that one up royally. He tried to assuage shareholders by reassuring them that nVidia had set the right price despite customer sentiment, but Wall Street appears to be reading between the lines, and the stock has dropped significantly since the announcement, although most analysts remain bullish on the overall nVidia outlook.

 
Well, dGPUs could be priced in multiples of current-gen consoles... so the price argument is sus. The PS5 and Xbox are plenty capable, and they include graphics, a CPU, a power supply, memory, a motherboard, a fast SSD, a case, and a controller. Sure, NVIDIA's parts are top-of-the-line silicon, but I still think they're squeezing the market a bit much. I wonder if PC gaming is still growing or shrinking. I'm thinking in units, not revenue, though even in units, who knows what is crypto and what isn't... Just die already, crypto. Is there any reliable way of measuring the trajectory of the PC gaming community? Steam, maybe?
 
With energy prices in Europe being what they are (i.e., 12x or even more), I see little profit in mining over here, so there is that silver lining.

Of course, people are having to choose between heating and eating, so that's a lot less interesting (the news quoted a family who paid €50/month for gas last year, had to pay another €530 at the end of the year to settle up, which is not uncommon, and are now paying €1,500/month). So sure, we're looking at €1,500 for a 4080 16GB that's two-thirds, core-wise, of what it should be, for double the price of the previous gen. Suuure.
 
Moore's Law is dead (in its traditionally misinterpreted form*), and indeed manufacturing and supplier pricing has gone up, both because of the pandemic and because of the difficulty of manufacturing silicon chips with small gate sizes and the inevitable current-leakage problems that come with them.

That said, this had already been factored in. ~9xx gen pricing adjusted for inflation (which helps account for the cost increases due to the pandemic) should be reflective of actual current costs. Anything above that is really just chip makers manipulating things to try to get higher profit margins.

So, the 4090 should be at ~Titan pricing of ~$1199,
the 16GB 4080 should be at xx80 pricing of about $749,
and the 12GB 4080 should be at ~xx70 pricing of ~$450.

Any amount above that is "just because they can and they want the money".
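As a rough back-of-the-envelope version of that inflation argument, here's a sketch. The 900-series launch MSRPs and the ~25% cumulative inflation multiplier (2014 to 2022) are my own approximations for illustration, not figures from the post:

```python
# Sketch of "9xx-gen pricing adjusted for inflation vs. Ada launch MSRPs".
# The launch prices and the ~25% cumulative inflation figure are assumed
# round numbers for illustration only.

INFLATION_2014_TO_2022 = 1.25  # assumed cumulative inflation multiplier

launch_msrp_2014 = {
    "GTX 970 (xx70 tier)": 329,
    "GTX 980 (xx80 tier)": 549,
    "GTX Titan X (Titan tier)": 999,
}

ada_msrp_2022 = {
    "RTX 4080 12GB": 899,
    "RTX 4080 16GB": 1199,
    "RTX 4090": 1599,
}

print("Inflation-adjusted 900-series pricing:")
for name, price in launch_msrp_2014.items():
    print(f"  {name}: ${price} -> ~${price * INFLATION_2014_TO_2022:.0f}")

print("\nActual Ada launch MSRPs:")
for name, price in ada_msrp_2022.items():
    print(f"  {name}: ${price}")
```

The gap between the two lists is the "just because they can" margin the post is talking about.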

Both the GPU and CPU industries are broken. There need to be 3-5 viable competitors in a market for them to work. We had this in the early 2000s. We don't today.

As I've said before, these ****ers need a Microsoft-style date with the DOJ.

*Moore's actual comment was based on the number of transistors doubling every - what was it, two years? Not on performance increasing at that rate, so it technically mostly still stands with the addition of more cores and other things.
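For anyone who wants that footnote as arithmetic, here's a tiny sketch of the doubling curve. The starting transistor count and the strict two-year cadence are illustrative assumptions, not historical data:

```python
# Minimal illustration of transistor-count doubling on a two-year cadence.
# Baseline count and cadence are assumed for illustration.

start_year = 2006
start_transistors = 300e6   # assumed ~300M-transistor GPU as a baseline
doubling_period_years = 2

for year in range(start_year, 2023, 2):
    elapsed = year - start_year
    count = start_transistors * 2 ** (elapsed / doubling_period_years)
    print(f"{year}: ~{count / 1e9:.1f}B transistors")
```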
 
Generally speaking, the more paths in a core, the more transistors it will have. There is some fluctuation, with modern instruction sets reducing the number needed to cover a wider gamut. The real key here is variable instruction-set programming and hardware, and that is where GPUs shine: thousands of cores that are, in essence, programmable.

So if your program only calls a subset of instructions, you can run it through the GPU, where you get to specify on the fly how many cores are doing which instruction across the general compute units, and you get better performance. Clearing that hurdle is what led to modern GPUs being so good at so many tasks.
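A loose way to see the "many simple cores, same instruction" win is to compare a vectorized operation against a one-element-at-a-time loop. This is just a CPU-side NumPy analogy I'm using for illustration, not actual GPU code:

```python
# Vectorized "one operation across the whole array" vs. a general-purpose
# per-element loop. NumPy on a CPU, used only as an analogy for the
# data-parallel model that GPUs take to the extreme.

import time
import numpy as np

data = np.random.rand(2_000_000)

t0 = time.perf_counter()
vectorized = data * 2.0 + 1.0          # one op applied across all elements
t1 = time.perf_counter()

t2 = time.perf_counter()
looped = [x * 2.0 + 1.0 for x in data]  # decode/dispatch every element
t3 = time.perf_counter()

print(f"vectorized: {t1 - t0:.3f}s, loop: {t3 - t2:.3f}s")
```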

It's bonkers that a video card costs as much as a mid-range laptop. That will always be bonkers.
 
Moore's first law never said anything about price though. It's strictly about technical ability.

In fact, there was a "second" law (also called Rock's Law) that did deal with price -- the cost of building a chip fab doubles roughly every four years (about every other generation).

I don't know if the first law is done or not - it's very obvious that performance gains have slowed, but process gains continue to march on (at least for everyone except Intel). But 1/2 of an already small number is still a small number, so the magnitude of those gains goes down each generation... and engineers are figuring out clever ways around strict process-node improvements: chiplets, etc.

Now, maybe we have hit the intersection of those two laws - not a dead end on the technical side, but a point where we don't have enough volume and economies of scale to make it economical.
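To put the "1/2 of an already small number is a small number" point in concrete terms, here's a quick sketch. The 28nm starting point and strict halving per generation are simplifying assumptions; real node names haven't mapped to literal dimensions for years:

```python
# Halving a feature size each generation: the relative gain stays 50%,
# but the absolute improvement shrinks every time. Starting point and
# strict halving are assumptions for illustration.

feature_nm = 28.0
for generation in range(1, 6):
    new_feature_nm = feature_nm / 2
    print(f"gen {generation}: {feature_nm:.2f}nm -> {new_feature_nm:.2f}nm "
          f"(absolute gain {feature_nm - new_feature_nm:.2f}nm)")
    feature_nm = new_feature_nm
```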
 
That's a good point. Exactly how much compute and GPU power does modern gaming need? If we can max out 4K at 120 Hz in the most demanding games... short of scientific modeling, what would need that kind of compute/power and still run locally?

We all know AI subscription services will be a thing, but the cost of the compute needed to drive AI will keep it in hosted solutions as opposed to something running locally... (short of those of us who will get into that arena and write our own).
 

Game development has an insatiable appetite for GPU and CPU capacity. If you are drastically outperforming current games at 4K120, just wait; that won't be the case for long. Next-gen games will adapt and crank up the quality/whatever knob.
 
Unfortunately this is true. Even if you do get to the point of photographic fidelity, there's still the curious fact that, given sufficient resources, programmers will just get lazy and start cranking out inefficient code.
 
Unfortunately this is true. Even if you do get to the point of photographic fidelity - there's still the curious fact that given sufficient resources programmers will just get lazy and start cranking out inefficient code.

Yep. We aren't quite there yet for games, but this is happening all across the software industry.

The conclusion is that high-level, low-effort languages and tools save a metric ton of money in programmer hours, testing, and other QA.

As the saying goes, programmers are expensive, CPU cycles and RAM are relatively cheap.

On the one hand, I find this kind of offends my old-school sensibilities about doing the most with limited hardware.

I used to participate in the "Demo Scene" back in the day, when groups competed to see who could make the most impressive real-time audio-visual demonstration on a fixed piece of hardware. (This is where Mad Onion, and later Futuremark, came from as well, born out of the Finnish demo group Future Crew.) Picture a convention center or sports stadium filled with rows of tables, like a LAN party, but no one is playing games; everyone is coding or doing artwork for the demo competition at the end, with participants voting for the winner and the winner taking the pot.

On the other hand, however, this probably allows a ton of software to be made and enjoyed by users that would otherwise be too costly to bring to market.

So it's a tradeoff.

Some indie games already do this. You don't need a 16-core CPU or an RTX 4090 to play a sprite-based platformer.

I think we are pretty far from this point in AAA titles though.
 
It's not really lazy development, though. There are a few spins on the phrase, but in general "the last 20% takes 80% of the time" holds true in my experience working as a performance engineer. And at least with my co-workers at the Fortune 100 companies I've worked for, the stuff each individual developer does isn't generally where the problems lie. It's when you have multiple teams integrating code with different objectives (e.g., the security ops team injects a feature or step that the application does not integrate well with) that problems pop up, and making it work well takes significant effort. If brute-forcing the integration works, it can be worth it to save 1,000 hours of developer time at the cost of 3 fps (or whatever).
 