The move to 5nm TSMC (if true) will massively improve perf/watt over their current node.
NVIDIA would be going from roughly 44.56 million transistors per mm² to roughly 173 million transistors per mm².
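Back-of-the-envelope, those rumored density figures work out to close to a 4x jump (the node attributions in the comments are my assumption, not from the rumor itself):

```python
# Rumored transistor densities in millions of transistors per mm^2,
# taken from the figures above. Node names are assumptions.
current_density = 44.56   # presumably Samsung 8N (current Ampere node)
rumored_density = 173.0   # presumably TSMC N5

improvement = rumored_density / current_density
print(f"Density improvement: {improvement:.2f}x")  # -> 3.88x
```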
This rumor is likely the A100 replacement, and that already has a TDP of 400 watts, so this is not really...
There are millions of 30 series GPUs being used by gamers as per the Steam survey.
Contracts with system integrators take priority over box sales and SIs have had backorders for months. That's where all the GPUs went.
Micron really needs to start making/supplying 2GB GDDR6X chips.
Still massively disappointed by the efficiency. I'm assuming these are all the 3080 mobile chips that didn't bin well.
For 20% more money it should be meaningfully better, and it just isn't.
The original press release for this mentioned HDR1000 support, but that claim was quickly edited out.
So it may end up supporting HDR, but definitely not HDR1000, as it's an OLED.
I looked for a long time for a way to get current drivers to work and finally found one: https://www.techpowerup.com/nvcleanstall/
It won't allow DX12 to work (it's a hardware issue with WoW's DX12 implementation), but it significantly improved my FPS.
Elon and his friends buy a ton of crypto, push it on social media to raise the price, sell it, and then tweet about crypto being bad so the market drops, letting them rebuy at a lower price and repeat the cycle.
I wonder how well a 12 CU RDNA2 chip without the large cache will perform.
If it allows 1080p60 at medium or higher settings in current games, it will be massive for the market.
There will never be a situation where performance isn't affected. That's like saying a marathon is as easy as walking a few blocks. Faked lighting and screen-space effects are much cheaper to process than ray tracing.
With the amount of time saved by developers and the...
This has nothing to do with price fixing. This is a tariff issue and a supply/demand issue.
If there was an abundance of graphics cards and they suddenly all (both NVIDIA & AMD) went up in price by $100, that would be cause for an investigation.
I can only go off of the technical data provided - I am just using the peak processing power of the silicon. I thought I saw a comparison somewhere showing the frametime difference with and without tensor acceleration, but I am not able to find it.
It could be run on anything, but using the tensor cores processes it much faster and concurrently.
I mean, the difference between adding 1.5 ms and adding 4 ms to a rendered frame is massive in relative terms.
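A quick sketch of why that 1.5 ms vs 4 ms gap matters (the 16.67 ms / 60 FPS base frame time is my own assumption for illustration, not from the thread):

```python
# Compare the FPS impact of adding 1.5 ms vs 4 ms of upscaling cost
# on top of a base frame time. Base of 16.67 ms (60 FPS) is assumed.
base_frame_ms = 16.67

for overhead_ms in (1.5, 4.0):
    total_ms = base_frame_ms + overhead_ms
    fps = 1000.0 / total_ms
    print(f"+{overhead_ms} ms -> {total_ms:.2f} ms/frame, {fps:.1f} FPS")
```

With these assumptions, 1.5 ms costs about 5 FPS off the 60 FPS baseline, while 4 ms costs nearly 12 FPS, so the smaller overhead preserves far more of the performance headroom the upscaler is supposed to buy.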
Something that will work on Windows, Xbox, and PS5 with no tensor cores and not using DirectML (as the PS5 doesn't support it)? So maybe in a few years...
They used the x16 slot, so it means nothing; M.2 performance will likely be different.
No one is going to have an M.2 drive hooked up like that alongside a 3090 on anything lower than a Threadripper.
I hope that the miners realize that they are literally shooting themselves in the foot by bypassing it.
If a software solution doesn't work, the next step is a hardware solution, which I have no doubt NVIDIA would have no problem implementing.
https://twitter.com/bdelrizzo/status/1362619264423747590
"Hi Ryan. It's not just a driver thing. There is a secure handshake between the driver, the RTX 3060 silicon, and the BIOS (firmware) that prevents removal of the hash rate limiter."