CableMod Announces 12VHPWR Right-Angle Adapter for GeForce RTX 40 Series Graphics Cards

Tsing (The FPS Review staff member)
CableMod has announced its 12VHPWR right-angle adapter, an accessory that might be of interest to early GeForce RTX 4090 adopters who are seeking a cleaner-looking profile.

 
Seems to alleviate the cable strain issues some would suffer otherwise.
 
Seems to alleviate the cable strain issues some would suffer otherwise.

Agreed.

I still think that is the dumbest power connector location I've ever seen.

If I had my druthers, I'd want the power connector somewhere where I can easily route it behind the motherboard tray through a passthrough without it being in the way.

I'm thinking bottom of the back of the card. Here:

[Attachment: photo marking the proposed connector spot on the bottom rear edge of the card]

Then you could just poke it through one of the cable passthroughs and route it mostly unseen.
 
If I had my druthers, I'd want the power connector somewhere where I can easily route it behind the motherboard tray through a passthrough without it being in the way.
I've found that I prefer the 180-degree adapters for the eight-pin PCIe connectors. You wind up running the cables right down the backside of the PCB, typically straight into a cable management cutout.

While I applaud CableMod for coming up with a solution here, the above is what I would have preferred.

Of course, if you mount the card vertically - that is, parallel to the motherboard using a riser - the same effect could be achieved with CableMod's right-angle adapter.
 
Agreed.

I still think that is the dumbest power connector location I've ever seen.

If I had my druthers, I'd want the power connector somewhere where I can easily route it behind the motherboard tray through a passthrough without it being in the way.

I'm thinking bottom of the back of the card. Here:

[Attachment: photo marking the proposed connector spot on the bottom rear edge of the card]

Then you could just poke it through one of the cable passthroughs and route it mostly unseen.

I think EVGA had a model like that, and while I don't disagree, there is no PCB on that side, which makes it nontrivial to put the connector there while also keeping the card easily serviceable or water-coolable.
 
there is no PCB at that side
Only because the PCBs were designed assuming the power connection would come in on top. Moving the power connection isn't trivial; it would require a lot of things to be moved around. But looking at a high level at how the card gets used and installed, you'd think engineers would consider things like this and design toward better solutions.

I think the only reason power cables come in along the top right now is inertia. I can't find any photos, but I seem to recall that when cards first started requiring extra power, most of them had connections along the rear edge, right where Zath points to. There weren't many such cards apart from GPUs, though I can recall a couple of compatibility cards, some RAID cards, and the like. But GPU cards tended to be longer than most other cards, so the connectors moved to the top edge, where there was more likely to be room for the cable; otherwise it would get hung up in the drive bays or HDD cages.

And then, to make it more frustrating, the connectors were designed to come straight in rather than at a 90° bend, because if there were a card in the next slot (and back when this was first kicking off, SLI / CF / whatever 3dfx called it was a new and growing thing), a 90° bend wouldn't fit, whereas a straight insertion would, provided your case was deep enough to allow it.

So I can see where the current design we've all been living with came from. It just kinda evolved that way.

But today things are different. When you are looking at 3- and 4-slot cards now, maybe it's time to re-evaluate how power gets routed.
 
whatever 3dfx called it
Same acronym as nVidia's, but it stood for "Scan Line Interleave" instead of nVidia's "Scalable Link Interface."

I can't find any photos but I seem to recall when cards started requiring extra power, most of them did have connections along the rear edge
My EVGA GeForce 7900 GT was like that (and the GeForce 7950 GT that replaced it via RMA), as well as my EVGA GeForce 8800 GT AKIMBO. It was annoying because, with the midtower I had in those days (a Thermaltake Tsunami Dream), I had to snake the PCIe power cables through the HDD cage, between two HDDs. There was just barely enough room to plug in the cables without the HDD cage's frame interfering.

EDIT: Some of my older cards have a floppy drive power connector, or I think even a Molex connector. I think my Radeon 9500 Pro has a floppy drive power connector on the rear edge; I don't recall about my Radeon 9800 Pro.
 
Same acronym as nVidia's, but it stood for "Scan Line Interleave" instead of nVidia's "Scalable Link Interface."

Yep. And as the name suggests, the 3dfx implementation just had each GPU render every other horizontal line of pixels.

Nvidia has supported multiple methods over the years, including:
  • Alternate Frame Rendering (AFR), where each GPU renders every other frame in its entirety
  • Split Frame Rendering (SFR), essentially a more complicated version of 3dfx's approach that dynamically adjusts what percentage of each frame is rendered on each GPU
  • An antialiasing mode of SLI, where both GPUs render the same frame at slightly offset sample positions, which are then combined; it produced very good antialiasing at roughly single-GPU performance
Splitting the frames between the GPUs (whether using 3dfx's original every-other-line approach or Nvidia's dynamic frame-splitting approach) results in the best input lag, but doesn't scale as well from a framerate perspective.
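
To make the distinction concrete, here's a rough sketch of how each mode divides the work between two GPUs. This is just my own toy illustration in Python, not anything resembling actual driver code from either vendor:

```python
# Toy illustration of the three multi-GPU work-splitting modes for two GPUs.
# All names here are my own; this is not real driver code.

NUM_GPUS = 2

def scan_line_interleave(height):
    """3dfx-style SLI: each GPU renders every other horizontal line."""
    return {gpu: [y for y in range(height) if y % NUM_GPUS == gpu]
            for gpu in range(NUM_GPUS)}

def alternate_frame_rendering(frame_index):
    """AFR: whole frames alternate between the GPUs."""
    return frame_index % NUM_GPUS

def split_frame_rendering(height, top_fraction=0.5):
    """SFR: the frame is cut into horizontal bands. Drivers rebalanced the
    split point from frame to frame based on load; fixed here for brevity."""
    split = int(height * top_fraction)
    return {0: range(0, split), 1: range(split, height)}
```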

AFR scales much better, but there are serious input lag penalties, as can be seen in this Tom's Hardware Guide illustration from their 1999 review of the original dual-chip board, the ATI Rage Fury MAXX. (It used two Rage 128 Pro chips and would only run in AFR mode.)

[Image: Tom's Hardware Guide illustration of AFR input lag, from the 1999 Rage Fury MAXX review]

It's a tradeoff. I always preferred the split frame render modes. Despite their lesser scaling, the performance felt more fluid to me. People often get really picky about input lag caused by their monitors, but then completely ignore it when it happens in their render pipeline for some reason.
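
To put rough numbers on that tradeoff, here's a back-of-the-envelope comparison. The 30 ms frame time and the perfect 50/50 split are my own assumptions for illustration, not figures from the Tom's Hardware article:

```python
# Back-of-the-envelope latency comparison (my own assumed numbers).
# Assume one GPU needs 30 ms per frame and ignore compositing overhead.

FRAME_MS = 30.0

# AFR: the two GPUs work on *different* frames in flight. A finished frame
# comes out every FRAME_MS / 2, so framerate doubles, but each frame still
# spent a full FRAME_MS rendering after its input was sampled.
afr_frame_interval = FRAME_MS / 2  # 15 ms between frames -> ~67 fps
afr_input_lag = FRAME_MS           # 30 ms, unchanged from a single GPU

# SFR (ideal case): both GPUs work on the *same* frame, so the frame itself
# finishes in half the time and input lag shrinks with it. In practice the
# split was never perfect, which is why SFR scaled worse than AFR.
sfr_frame_interval = FRAME_MS / 2  # ~67 fps in the ideal case
sfr_input_lag = FRAME_MS / 2       # 15 ms

print(f"AFR: {1000 / afr_frame_interval:.0f} fps, {afr_input_lag:.0f} ms lag")
print(f"SFR: {1000 / sfr_frame_interval:.0f} fps, {sfr_input_lag:.0f} ms lag")
```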

Unfortunately, most titles were designed with just one render mode in mind, and that was AFR, baked into the game profiles. You could force the mode in settings if you preferred SFR, but it was unpredictable and could result in crashes, graphics bugs, and other problems, because the drivers had been optimized for each game with the profile's built-in mode in mind.

IMHO, SFR (or the original every-other-line method) was the technically better solution, but because kids are whiny, don't understand the tech, and complain that they don't get even close to double the performance with twice the GPUs, game devs and driver programmers (both AMD and Nvidia) almost always selected AFR as the default when collaborating on making multi-GPU work for their titles.

It's a crying shame.

This is one of the big reasons why I abandoned SLI and Crossfire way back, and decided to just take a "single fastest GPU I can afford" approach. It just wasn't worth the trouble, and there were too many disappointments.
 
Unfortunately, most titles were designed with just one render mode in mind, and that was AFR, baked into the game profiles. You could force the mode in settings if you preferred SFR, but it was unpredictable and could result in crashes, graphics bugs, and other problems, because the drivers had been optimized for each game with the profile's built-in mode in mind.
Which we saw plenty of at my last job, a freelance software QA company where we tested nVidia drivers and GPUs. nVidia never cared much for SFR.
 