So, speaking technically:
In high-power applications (and in my industry, that's upwards of 5,000 A), you almost always have to run parallel cables, because getting one conductor to carry that much current is just impractical short of engineered solid buswork.
There are always two things you have to pay particular attention to, if you don't have some sort of active mechanism monitoring current per line:
a) The cables all have to be the same size, type, and length -- if there is any difference, the cable resistances won't be equal, and the current won't split equally
b) The connectors all have to be identical and torqued to the correct values -- same thing as with the cables: if one of them is loose or not the same size or what have you, the resistances won't be equal and the current won't split equally (the sketch just below puts numbers on this).
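To make that concrete, here's a minimal current-divider sketch -- the resistance values are hypothetical, but in a plausible range for a crimped pin plus a short run of 16 AWG:

```python
# Current split across parallel lines feeding the same 12 V rail.
# Each line sees the same voltage drop, so current divides in inverse
# proportion to resistance: I_k = I_total * G_k / G_total.
# The resistances (cable + contact, in ohms) are made-up examples.

total_current = 50.0  # amps, roughly 600 W on a 12 V rail

# One healthy line, five with degraded (higher-resistance) contacts
resistances = [0.010, 0.050, 0.050, 0.050, 0.050, 0.050]

conductances = [1.0 / r for r in resistances]
g_total = sum(conductances)

for i, (r, g) in enumerate(zip(resistances, conductances), 1):
    amps = total_current * g / g_total
    print(f"line {i}: {r * 1000:.0f} mOhm -> {amps:.1f} A")
```

With five degraded contacts, the one good line ends up carrying 25 A instead of its ~8 A share - triple its design load - and nothing upstream is any the wiser.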
I think it's a no-brainer that what we are seeing is (b): the failures are all happening at a connection point, and heat concentrates at whatever point has the highest resistance. So something is not right in the connector. It could be a bend putting stress on the connector, causing the pins to not all engage equally. It could be a faulty connector, where the female pins just don't have enough friction to adequately engage. It could be faulty connectors on the card, where the male pins aren't big enough (I hear that one often from wives). It could be that the connectors aren't getting secured / the click retention mechanism isn't holding them in, and after a few thermal cycles they wiggle themselves loose. Lots of things it could be.
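The arithmetic on why the connection point is where it cooks is simple: the heat dumped into the contact itself is P = I^2 * R. A quick sketch with assumed contact resistances:

```python
# Joule heating in the contact itself: P = I^2 * R_contact.
# A healthy crimp/pin contact might be a few milliohms; a loose or
# partially engaged pin can easily be an order of magnitude worse.
# All values here are assumptions for illustration.

def contact_watts(amps: float, contact_ohms: float) -> float:
    """Power dissipated in the contact, in watts."""
    return amps ** 2 * contact_ohms

# A pin carrying its fair share through a good contact:
print(f"good pin:       {contact_watts(8.3, 0.005):.2f} W")  # ~0.34 W

# The same share through a degraded 50 mOhm contact:
print(f"loose pin:      {contact_watts(8.3, 0.050):.2f} W")  # ~3.4 W

# A good pin carrying its neighbors' share because they aren't engaging:
print(f"overloaded pin: {contact_watts(25.0, 0.005):.2f} W")  # ~3.1 W
```

A few watts concentrated in a pin the size of a grain of rice, buried in plastic with no airflow, is exactly the melted-connector picture.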
Now, it would also be possible to address that with an active mechanism: monitor the current per line and dynamically balance what flows through the parallel connectors. That said, it wouldn't be cheap - but nothing else about these cards is cheap either - and the only way it can really adjust current is by clamping down on a high-current line, which could result in unintentional power throttling on the GPU. I would argue that's still better than physical damage, and at least it's a symptom that points you to go look at something and correct it before it fails. But I don't know what kind of size that would add to the card (or the PSU - it could be added on that end), let alone the expense. Just that it's technically possible.
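The logic itself is trivial - the expensive part is the per-line sensing (shunts or Hall-effect sensors) and the plumbing to act on it. A rough sketch, where every name and threshold is hypothetical, just to show the shape of it:

```python
# Hypothetical per-line current supervisor. A real version would live
# in PSU or GPU firmware, fed by per-line shunt/Hall current sensors.

PER_LINE_LIMIT_A = 9.5   # assumed safe ceiling for one line
IMBALANCE_RATIO = 1.5    # flag if the max line carries 1.5x the min

def check_lines(currents_a: list[float]) -> str:
    worst = max(currents_a)
    if worst > PER_LINE_LIMIT_A:
        # The only real lever: clamp total draw, i.e. throttle the GPU
        return "THROTTLE: per-line limit exceeded"
    if min(currents_a) > 0 and worst / min(currents_a) > IMBALANCE_RATIO:
        return "WARN: current imbalance, check connectors"
    return "OK"

print(check_lines([8.3, 8.3, 8.4, 8.2, 8.3, 8.3]))   # OK
print(check_lines([25.0, 5.0, 5.0, 5.0, 5.0, 5.0]))  # THROTTLE
```

Even the WARN case alone would be worth having: it turns an invisible failure-in-progress into something a driver can log and a user can act on.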
The better solution - and I know we joked about it - would be positive engagement on a single large-gauge cable. It probably doesn't need to be automotive posts with 00 wire, but a healthy #8 or #10 wire with some sort of positive capture -- a retaining screw, a lever nut, or something. Now, that's a bigger diameter than most power cables in a computer, but it wouldn't be much bigger than, say, a SATA cable or an HDMI cable, and one round cable would be easier to route and manage than 3 or 4 separate bundles of cabling.
Something like this (these are test leads, and I can't find a good image of a retaining mechanism, but it gives the idea):
[Attachment 1995: test leads]
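For a sense of scale, standard copper-resistance figures say a single #8 or #10 conductor handles this load with plenty of headroom (the half-meter run length is an assumption):

```python
# Voltage drop and conductor loss for a single wire at GPU-class current.
# AWG resistances are standard copper figures in ohms per 1000 ft;
# the run length is an assumed 0.5 m from PSU to card.

OHMS_PER_1000FT = {8: 0.628, 10: 0.999, 16: 4.016}

def drop_volts(awg: int, amps: float, meters: float) -> float:
    ohms = OHMS_PER_1000FT[awg] / 1000 / 0.3048 * meters
    return amps * ohms

amps = 50.0   # ~600 W on the 12 V rail
length = 0.5  # meters

for awg in (8, 10, 16):
    v = drop_volts(awg, amps, length)
    print(f"#{awg} AWG: {v * 1000:.0f} mV drop, {v * amps:.1f} W lost in the wire")
```

The 16 AWG row is the contrast: one strand of what's in today's bundles would drop about a third of a volt and dissipate ~16 W at 50 A, which is exactly why it gets paralleled out across multiple conductors.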
That said - almost zero chance they do this, because 18, 20, and 22 gauge wiring is just so pervasive in the computer ecosystem.