NVIDIA Is Following Up on Reports of Melted RTX 4090 16-Pin Power Adapters

While my case is overly spacious, I'm glad Nvidia is addressing this. By the time I'm ready to get a 40-series card, they will most likely have resolved this early-adopter issue.
Seems like another new-world issue with Nvidia and their top-of-the-stack hardware.
 
Too little to say for sure, but I was reading through the Reddit threads and a number of people are wondering whether it matters if you use 3 or all 4 of the 8-pin connectors on the adapter. It could be that all 4 are just too much for now, but that is highly speculative. These could also just be isolated cases due to either user error or product defects. Hopefully we'll get answers soon, before the rumors get out of hand.
 
J2C made some good comments about it, and I think he's right. It's such a small connector with small pins and a rather stiff cable coming out of it. With these cards being so massive, you can't avoid bending that cable to fit it in a case. The bend puts pressure on the connector, which causes some of the pins to lose full contact. When that happens, a lot of current gets drawn through a minimal contact area, and it heats up and melts.
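Just to put rough numbers on that mechanism (illustrative figures only, not measurements from any failed card; the contact resistances below are assumptions):

```python
# Rough illustration of why a poor contact overheats. The 16-pin (12+4)
# connector has six +12 V pins sharing the load; resistance values are
# order-of-magnitude guesses, not measurements.
power_w = 600.0              # worst-case power through the connector
volts = 12.0
pins = 6                     # +12 V current-carrying pins
amps_per_pin = power_w / volts / pins      # ~8.3 A per pin if shared evenly

good_contact_ohms = 0.005    # a few milliohms for a healthy crimp (assumed)
bad_contact_ohms = 0.050     # partially engaged pin (assumed, 10x worse)

heat_good_w = amps_per_pin**2 * good_contact_ohms   # ~0.35 W, easily shed
heat_bad_w = amps_per_pin**2 * bad_contact_ohms     # ~3.5 W inside a tiny plastic cavity
print(f"{amps_per_pin:.1f} A/pin, {heat_good_w:.2f} W good, {heat_bad_w:.2f} W bad")
```

Same current, ten times the contact resistance, ten times the heat, all concentrated in one small pin housing.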

This whole 12+4 design is poor. 35mm of standoff before being able to bend the cable is ridiculous. You'd need a seriously wide case to make that work, or vertically mount the GPU. But then you'd still have this ugly cable sticking up out of the card like a fountain.
 
...a number of people are wondering whether it matters if you use 3 or all 4 of the 8-pin connectors on the adapter.
I saw in a few reviews that if you plan to leave the card stock, you can just use 3x 8-pin connectors for 450W mode, but if you connect all 4 the card can run in 600W mode for overclocking.
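For anyone picturing how that works, here's a toy sketch of the behavior those reviews describe. Only the 3-plug/450W and 4-plug/600W figures come from the reviews; the function name and the idea that the adapter reports the limit over the connector's sideband pins are my own framing, not NVIDIA documentation:

```python
# Toy model of the adapter behavior described in reviews: it senses how many
# of its 8-pin inputs are populated and advertises a power limit to the card
# (presumably via the 4 sideband pins of the 16-pin connector).
def advertised_limit_w(populated_8pin_inputs: int) -> int:
    if populated_8pin_inputs >= 4:
        return 600   # full limit, headroom for overclocking
    if populated_8pin_inputs == 3:
        return 450   # stock power limit
    raise ValueError("reviews indicate at least 3 inputs must be connected")

print(advertised_limit_w(3), advertised_limit_w(4))   # 450 600
```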

I dunno about nVidia's custom smart adapter, but apparently the ATX 3.0 16-pin adapter should NOT be bent: https://cablemod.com/12vhpwr/
Maybe the nVidia smart/active adapter is the same way.
 
Dang, if the native 16-pin plug can't be bent either, they should make it a 90-degree connector stock. Not sure I could fit a 4090 in my Fractal Define 7 XL and still have 35mm of room before bending, and this case is pretty wide.
 
Realistically it's more like 60mm, since you need 35mm of straight cable before it can even start to bend.
 
I have doubts that bending is the issue. That would make more sense in something like a multi-pin connector where the pins are looser and each one is bare rather than sitting in its own little plastic channel, which drastically reduces movement. More likely it's plainly too much power for the pins as they are, bending or not.
 
The pin contact area should be enough, assuming there is good contact. The problem here is bad contact caused by the small connector not providing enough stiffness, and when the contact gets loose, things heat up. I've had Molex connectors burn back in the early days, when GPUs used those for power.

Overall it's a bad design and a case of cutting corners. I wouldn't be comfortable buying a 4xxx card until they completely replace the connector, preferably with 3x 8-pin PCIe connectors. A PCIe 8-pin connector is only rated at 150W, but I'd bet anyone that it can handle more power before melting than this supposedly 600W connector.
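The back-of-the-envelope numbers support that. Using the nominal ratings (150W across three +12V pins for an 8-pin plug, 600W across six for the 16-pin), the per-pin current roughly doubles:

```python
# Per-pin current at the nominal connector ratings. Pin counts are the
# +12 V pins only; the remaining pins are grounds and sense lines.
configs = {
    "1x 8-pin PCIe @ 150W": (150.0, 3),
    "3x 8-pin PCIe @ 450W": (450.0, 9),
    "12VHPWR @ 600W":       (600.0, 6),
}
for name, (watts, pins) in configs.items():
    print(f"{name}: {watts / 12.0 / pins:.1f} A per pin")
# ~4.2 A/pin for the 8-pin setups vs ~8.3 A/pin for the 16-pin connector,
# through a physically smaller pin, so there's far less margin for a bad contact.
```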
 
I have a feeling it's poor design on that cable putting WAY too much power through that one pin.
 
So, speaking technically

In high-power applications (and in my industry, that's upwards of 5,000A), you almost always have to go to parallel cables, because getting one conductor to carry that much current is just impractical short of engineered solid buswork.

There are always two things you have to pay particular attention to, if you don't have some sort of active mechanism monitoring current per line:
a) The cables all have to be the same size and type and length -- if there is any difference, cable resistances won't be equal, and the current won't flow equally

b) The connectors all have to be identical and torqued to the correct values -- again, same as with the cables: if one of them is loose, or not the same size, or what have you, the resistances won't be equal and the current won't flow equally. (There's a quick worked example of this below.)
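To make the current-sharing point concrete, here's a minimal current-divider sketch with made-up milliohm values (the numbers are illustrative, not measured from any connector):

```python
# Two parallel current paths sharing a fixed total current. If the contact
# resistances differ, the sharing goes lopsided: the pin that still makes
# good contact picks up most of the load and runs much hotter.
def share(total_amps, r1_ohms, r2_ohms):
    i1 = total_amps * r2_ohms / (r1_ohms + r2_ohms)   # current divider
    i2 = total_amps - i1
    return (i1, i1**2 * r1_ohms), (i2, i2**2 * r2_ohms)

print(share(16.6, 0.005, 0.005))   # balanced: ~8.3 A and ~0.35 W per pin
print(share(16.6, 0.050, 0.005))   # one bad pin: the good pin jumps to ~15 A and ~1.1 W
```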

I think it's a no-brainer that what we are seeing is (b): the failures are all happening at a connection point, and things heat up the most at any point of high resistance. So something is not right in the connector. It could be a bend putting stress on the connector, causing the pins not to engage equally. It could be a faulty connector where the female pins just don't have enough friction to adequately engage. It could be faulty connectors on the card where the male pins aren't big enough (I hear that one often from wives). It could be the connectors not getting secured / the click retention mechanism not holding them in, so that after a few thermal cycles they wiggle themselves loose. Lots of things it could be.

Now, it would also be possible to address that with an active mechanism: you could monitor current per line and dynamically balance what flows through the parallel connectors. That said, it wouldn't be cheap -- but nothing else about these cards is cheap either -- and the only way it can really adjust current is by clamping down on a high-current line, which could result in unintentional power throttling on the GPU. I would argue that's better than physical damage, and at least it's a symptom that points you at something to go look at and correct before it fails. But I don't know what kind of size that would add to the card (or the PSU, it could be added on that end), let alone the expense. Just that it's technically possible.
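For illustration, a control loop like that might look something like the sketch below. Everything here is hypothetical: the function, thresholds, and pin ratings are invented for the example, and as far as I know no current card has per-pin current sensing:

```python
# Hypothetical per-pin current monitor: if one pin is overloaded or badly
# out of balance, clamp the board power limit so the hottest pin comes back
# under its rating, and warn the user. Names and numbers are invented.
PIN_LIMIT_A = 9.5          # assumed per-pin current rating
IMBALANCE_RATIO = 1.3      # flag if one pin carries 30% more than the average

def check_and_throttle(pin_currents_a, power_limit_w):
    avg = sum(pin_currents_a) / len(pin_currents_a)
    worst = max(pin_currents_a)
    if worst > PIN_LIMIT_A or worst > IMBALANCE_RATIO * avg:
        # Only real lever available: clamp total board power (never raise it).
        power_limit_w *= min(1.0, PIN_LIMIT_A / worst)
        print(f"WARNING: pin imbalance (worst pin {worst:.1f} A), "
              f"throttling board power to {power_limit_w:.0f} W")
    return power_limit_w

# One pin barely contacting (2.0 A), its neighbor picking up the slack (14.9 A):
print(check_and_throttle([8.1, 8.4, 8.2, 8.3, 14.9, 2.0], 600))
```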

The better solution - and I know we joked about it - would be positive engagement on a single large gauge cable. It probably doesn't need to be automotive posts with 00 wire, but a healthy #8 or #10 size wire, with some sort of positive capture -- a retaining screw, or a lever nut, or something. Now, that's bigger diameter than most power cables use in a computer, but it wouldn't be much bigger than, say, a SATA cable or an HDMI cable, and one round cable would be easier to route and manage than 3 or 4 separate bundles of cabling.
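Ballpark ampacity check on that idea (50A at 12V for a 600W card; the per-gauge figures are rough rules of thumb that vary with the insulation temperature rating, so treat them as assumptions):

```python
# Rough check that one heavy conductor could carry the whole load.
# Ampacity figures are approximate rules of thumb, not a spec.
load_amps = 600.0 / 12.0        # 50 A for a 600 W card at 12 V

approx_ampacity = {10: 35, 8: 50, 6: 75}   # AWG -> rough continuous amps
for awg, amps in approx_ampacity.items():
    verdict = "fine" if amps >= load_amps else "marginal"
    print(f"#{awg} AWG (~{amps} A): {verdict} for a {load_amps:.0f} A load")
```

So a single #8-ish conductor with a proper mechanical capture is in the right ballpark, which is the point.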

Something like this (these are test leads, and I can't find a good image of a retaining mechanism, but it gives the idea):
[attached image: heavy-gauge test leads]

That said, there's almost zero chance they do this, because 18, 20, and 22 gauge wire is just so pervasive in the computer ecosystem.
 