Is Nvidia Working on mGPU Support?

Peter_Brosdahl


SLI is dead. Or is it? A lot of us have been saying that it is, and with good reason, for a number of years now. My first i7 build used a pair of GTX 560 Tis, followed by 970s, and then I finally stopped after a pair of 1080s. Along with many enthusiasts, I saw the rise and fall of multi-GPU support at the consumer level. At its best, we could buy two lesser cards for just under the price of the highest tier and actually get better performance. At its worst, it simply doesn't work.

3DCenter.org has reported that Nvidia hasn't quite let go and may actually have plans for the current generation, which now uses NVLink technology. If you're not familiar with 3DCenter and still use multiple cards, I recommend checking them out. They've been a great resource for custom SLI bits over the years, and I've had a lot of success with them in the past. If you don't use a browser that translates, you can also find more details on this story at TechPowerUp.

They’re reporting that rather than using the more traditional AFR (alternate frame rendering), Nvidia might be working with CFR (checkerboard frame rendering). The difference between the two is that AFR assigns entire frames to alternating cards, while CFR divides each frame into checkerboard tiles and splits those tiles among the cards. They say this is currently limited to DirectX and Turing cards only. Nvidia has not provided much documentation about it; the mode has only been discovered in drivers and can be seen in the popular tool Nvidia Profile Inspector.
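To picture the difference, here is a minimal sketch of how the two modes might divide work between two GPUs. The function names, tile grid, and simple round-robin scheme are purely hypothetical; Nvidia has not documented how its CFR implementation actually assigns tiles.

```cpp
#include <cstdio>

// AFR (hypothetical): each whole frame is rendered by one GPU, alternating frame by frame.
int afrGpuForFrame(int frameIndex, int numGpus) {
    return frameIndex % numGpus;
}

// CFR (hypothetical): each frame is cut into tiles, and the tiles alternate between
// GPUs in a checkerboard pattern, so every GPU works on every frame.
int cfrGpuForTile(int tileX, int tileY, int numGpus) {
    return (tileX + tileY) % numGpus;
}

int main() {
    // AFR: frames 0..3 simply alternate between the two GPUs.
    for (int f = 0; f < 4; ++f)
        printf("frame %d -> GPU%d\n", f, afrGpuForFrame(f, 2));

    // CFR: the checkerboard assignment for a 4x4 tile grid within a single frame.
    for (int y = 0; y < 4; ++y) {
        for (int x = 0; x < 4; ++x)
            printf("GPU%d ", cfrGpuForTile(x, y, 2));
        printf("\n");
    }
    return 0;
}
```

In principle, splitting each frame keeps both GPUs working on the same frame at the same time, which could help with the frame-pacing issues AFR is known for, though at the cost of more inter-GPU communication.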

At a time when both games and displays can easily demand more than even the most powerful consumer cards on the planet can provide, having multi-GPU could be beneficial for all. A big thing that has held it back is that it relies heavily on developers to implement, which in turn increases their overhead costs. Enthusiasts see pros and cons as well. On one hand, we have an alternative means of performance gains. On the other, it can complicate things such as cooling, power, space, cabling, and support. One can easily argue that, in many situations, you gain perhaps 20-30 percent by going to the next performance tier, or maybe 30-50 percent by running two lesser cards and paying slightly more than that higher-tier card. Many of us have walked away from multiple GPUs and bitten the bullet with the higher-tier card approach, but ultimately we all still hit the same performance limitations.
 
BTW, for what it's worth, the i7 I mention in the article is the 2600K/Z68 motherboard combo in my signature. The 1080s were put in the 4930K/X79 build. Both had their SLI counterparts replaced by single x80 Ti cards.
 
I'm certainly interested in multi-GPU making a comeback. My wallet, however, is not. Honestly, buying two cheaper cards almost always proved to be a worse solution for one reason or another. A single higher-end card has almost always been the way to go. However, sometimes we run into situations where the fastest card on the planet just isn't fast enough. At 4K, a single RTX 2080 Ti is **** near the minimum standard. Even then, you aren't going to be driving games at 144 FPS, etc., at that resolution. Multi-GPU would make that possible.

SLI still does work, but the cases where it does are fewer and further between each generation.
 
SLI still does work, but the cases where it does are fewer and further between each generation.

It's somewhat of a value play - consider my current rig as an example. I bought the first GTX 1080 in it when they were fairly new to market, spending around $500 or so. I bought the second one about a year ago for about $300, giving me a big boost in games that supported SLI. The incremental $300 bought me far more performance (in SLI-supported games) than selling the 1080 and buying a 1080 Ti, and far more value than selling the 1080 and buying a 2080 Ti.

Of course, that value is diminished now that SLI is being supported less and less - BF5, for example, requires the NVIDIA Profile Inspector tool to force it to work, which makes replacing the pair of GTX 1080s with a single 2080 Ti a fairly appealing swap, even though, with SLI working properly, they're fairly close in performance.
 
Every time I used SLI it was a painful experience. Not only dealing with profiles, but stuttering issues and sometimes degraded performance. I don't think I'll ever go back no matter how they implement it. The time and headaches weren't worth the couple hundred bucks saved.
 
Every time I used SLI it was a painful experience. Not only dealing with profiles, but stuttering issues and sometimes degraded performance. I don't think I'll ever go back no matter how they implement it. The time and headaches weren't worth the couple hundred bucks saved.

I've been using multi-GPU systems from the beginning. The machine I have now is literally the first one I've ever had without two video cards in it since multi-GPU became a thing. Most of my experiences have been good. Most of the bad ones were either ironed out early on or were on the ATi/AMD side. I never saved money with SLI, and frankly, that's not what it was for. I know people tend to have that impression in their heads because NVIDIA marketed the thing as "buy one card now, buy a second later." Really, SLI was always at its best when pushing the envelope on the high end and giving you next-generation performance right now. It allowed me to push game visuals and frame rates far ahead of what was possible on the high end using a single card.

I never experienced a lot of the stutter or other issues doing it this way. I think those issues were more apparent when chasing raw frame rates rather than pushing image quality and eking out 60 FPS on titles at higher resolutions. I had always pushed the resolution boundaries. I was at 2560x1600 back in 2007. A few years later, I was running three Dell 30" 3007WFP-HCs at 7680x1600. At the time I was using each of these, it simply wasn't possible to get a good experience out of a single card in the newest games. Hell, 7680x1600 is a tall order even with today's cards.

After that, I switched to 4K, and the RTX 2080 Ti is the first card that could generally do it passably by itself. Even then, I wanted more performance. I finally abandoned 4K because SLI and multi-GPU were DOA for this generation. I'm at 3440x1440, which is demanding now, but I'm getting 120 FPS+ most of the time, with dips into the 90s on a few titles from time to time. I'm a maximum-quality kind of guy and gladly take a frame rate hit for better visuals. At 4K, it takes a 9900K at 5.0GHz and an overclocked RTX 2080 Ti to maintain 60 FPS in Destiny 2. Sure, it jumps up to 90 FPS or higher at times, but it can also drop into the 30s and worse depending on your processor and GPU. I saw this repeatedly with my Threadripper 2920X.

Also, people talking about SLI profile issues are pretty much talking out of their *****. I'm not saying there weren't issues, but most games received SLI profiles in short order, with drivers taking a week or two at most to become available. You could also manually create an SLI profile, which wasn't hard to do. They weren't perfect, but they would get you through. It was even easier and more effective if the game was based on an engine that already had a profile. You could copy that profile and make use of it. That generally worked well. I had to do this for Mass Effect Andromeda's beta / demo. Even when I did it for the retail game, it was an issue for a week at most. These days game-ready drivers are available even faster, so it would be less of a problem now.

People claiming that there were a mountain of problems with SLI probably didn't run it for very long. When people claimed not to like it, they often stated that they sold their second card off or returned it to wherever they got it from almost immediately. I'm sure some people had problems over a longer period, or at every try, but I literally ran SLI for over a decade, and for the most part, the experience was good. I had many more problems with ATi/AMD's CrossFire, with probably half the configurations I ran being unusable. I can get into the technical details if anyone is interested, but the problems I had came down to my configuration being something ATi/AMD never accounted for, and something of an outlier as far as issues went.
 
I have to say that, for the most part, when I experienced stutter I would re-test on a single-GPU rig and usually see the same thing happen, which told me it was likely a driver/OS/engine-related issue and not so much SLI. I won't say that was always the case, but it probably was 98% of the time.

I still have one SLI machine in the house, and ironically it's a laptop with a pair of 980Ms. For chuckles, I'll run the benchmark for SOTTR, and even with max settings, including AA maxed, it can render mostly 60 FPS at 1080p. Funny to watch it use almost all of their 8GB of VRAM, though. I was able to use some custom SLI bits for Metro Exodus after watching someone's YouTube vid, and that gave favorable results, but that is probably the end of the road for it.

I'd be happy to see it make a comeback. Like Dan said, it's not always about saving money, even though, as David mentioned, there are scenarios where you can. At the time I got my 560 Tis, I don't think I spent more than $500 for both because they were on sale. My 970s ended up cheaper than a heavily OC'd 980 Ti, with occasionally better performance. Those were my best experiences, though. Both motherboards I have now were purchased with mGPU in mind, the X79 in particular, as it at first had 2x 970s and a 780 for PhysX. It was pretty wild to watch the 2-3 games that supported all that play at 45-55 FPS at 4K back in the day. When I got my 2080 Ti, I tried experimenting with a 1080 for PhysX in the same games, and unfortunately, even though the indicator shows GPU, I can verify little to nothing happened. It wasn't worth it, and I took it back out.

There are some insane videos out there of people using multiple RTX Titans or 2080 Tis for 8K. I say insane because the performance is usually crap, but it's funny to watch the VRAM on the Titans spike to 16GB or more. 8K is pretty much useless for us all, but it'd be neat if this CFR thing comes to fruition and we see 4K at 120-144Hz. If only NV could make it so the developers could implement it easily.
 
The first to invisible mGPU wins.
I keep wishfully thinking AMD has figured it out and we'll see it in Big Navi.
 
This announcement really makes me wonder if Nvidia is reaching the end of its ability to innovate / push graphics processing forward. Or is this just a stopgap measure to further leverage their current lineup and allow two of their lower/higher-end cards to crush anything AMD comes up with? My money is on the latter. That is... based on the assumption that Big Navi actually comes out and is a "2080 Ti killer."
 
This announcement really makes me wonder if Nvidia is reaching the end of its ability to innovate / push graphics processing forward. Or is this just a stopgap measure to further leverage their current lineup and allow two of their lower/higher-end cards to crush anything AMD comes up with? My money is on the latter. That is... based on the assumption that Big Navi actually comes out and is a "2080 Ti killer."
I think part of it is that both AMD and Nvidia are exploring chiplet designs, and along the way they have to examine some amount of mGPU behavior as part of the process, since some concepts will apply to both.
 
SLI is dead... LONG LIVE SLI!!!

I've never run an SLI setup, so I can't comment on how good/bad it is. I've always settled for lower graphics settings to stick with a single-card solution.
 
All of this could be fixed if they rolled out GX2 cards as part of the standard lineup.
 
SLI is dead... LONG LIVE SLI!!!

I've never run an SLI setup, so I can't comment on how good/bad it is. I've always settled for lower graphics settings to stick with a single-card solution.
When it works, it works well. The last SLI setup I had was a pair of GTX 970s. However, I switched to single-card solutions with the advent of the 1080 Ti. I am sick of reducing my graphics settings. I could survive it when I was younger because you really don't notice it as much when you're running your *** off to stay alive in FPS games (which had almost no detail back in the day). Now, as an old bastard, I like all the eye candy I can get, and I kick back and enjoy my turn-based old-dude games ;). Jedi: Fallen Order looked stunning with all settings maxed out.
 
The last mGPU setup I ran was a pair of Asus GTX 780s.
eVGA GeForce 7600 GT COs prior to those.
GeForce 6600 GTs before those.
...3Dfx Voodoo2 12MB SLI was my first mGPU, and I ran a single GPU up until the 6600s.

Thought about replacing my 780s with a pair of 970s, but I'm glad I opted for a single 980 Ti.

Don't think I will ever run an mGPU setup again.
 
I guess the question is: what is mGPU? I know one thinks of it as cards... But thinking about it, making a GPU modular would kind of be a form of invisible mGPU... Want more power, just add on modules in a big cluster with a big socket or whatever... Don't know, just talking out of me butt.
 
I guess the question is: what is mGPU? I know one thinks of it as cards... But thinking about it, making a GPU modular would kind of be a form of invisible mGPU... Want more power, just add on modules in a big cluster with a big socket or whatever... Don't know, just talking out of me butt.
There was a piece of tech that came out a number of months ago that was capable of seamlessly combining multiple video cards. I think the application was for high-level rendering. The tech is already available to combine rendering power (it's just damned expensive to do so at a hardware level). Seamless mGPU would definitely be something, if we ever see it.
 
I guess the question is: what is mGPU? I know one thinks of it as cards... But thinking about it, making a GPU modular would kind of be a form of invisible mGPU... Want more power, just add on modules in a big cluster with a big socket or whatever... Don't know, just talking out of me butt.
There was a piece of tech that came out a number of months ago that was capable of seamlessly combining multiple video cards. I think the application was for high-level rendering. The tech is already available to combine rendering power (it's just damned expensive to do so at a hardware level). Seamless mGPU would definitely be something, if we ever see it.
mGPU support at the commercial rendering level is quite different. I'm not going to say I know a lot about it, but generally it is supported much differently. From what I've read, the NVLink implemented for the RTX line is different from the NVLink used for Quadros. At the consumer/commercial level, it's supported simply through multiple cards of the same type on a motherboard, often connected to each other via a bridge. When DX12 came out, there were rumors of mGPU support that wouldn't need a bridge and would allow mixed cards. The problem is that it's a headache for devs to implement in games, and neither Nvidia, AMD, nor Microsoft really carried the ball far enough to make it easier for devs to finish the job.
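For context, the DX12 path being alluded to is what Microsoft calls explicit multi-adapter: the API can enumerate every GPU in the system, even mismatched ones, with no bridge required, and the engine itself decides how to split work. Here is a minimal sketch of just the enumeration step using standard DXGI/D3D12 calls; everything beyond device creation (how an engine actually divides the frame, copies resources, and composites results) is omitted and would be entirely up to the developer, which is exactly the burden being described above.

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>
// Link against d3d12.lib and dxgi.lib.

using Microsoft::WRL::ComPtr;

int main() {
    // Enumerate every adapter in the system; explicit multi-adapter has no
    // bridge requirement, and the cards don't have to match.
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip the WARP software adapter

        // One D3D12 device per physical GPU; the engine then has to schedule work
        // across them and composite the results itself.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"GPU %u: %s\n", i, desc.Description);
            devices.push_back(device);
        }
    }
    wprintf(L"%zu usable GPU(s) found\n", devices.size());
    return 0;
}
```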
 
mGPU support at the commercial rendering level is quite different. I'm not going to say I know a lot about it, but generally it is supported much differently. From what I've read, the NVLink implemented for the RTX line is different from the NVLink used for Quadros. At the consumer/commercial level, it's supported simply through multiple cards of the same type on a motherboard, often connected to each other via a bridge. When DX12 came out, there were rumors of mGPU support that wouldn't need a bridge and would allow mixed cards. The problem is that it's a headache for devs to implement in games, and neither Nvidia, AMD, nor Microsoft really carried the ball far enough to make it easier for devs to finish the job.
Yeah, the commercial implementation I was referring to essentially made all video cards in the array appear as if they were one "uber" video card. We can only hope that one of these days they figure out how to merge the rendering capabilities of video cards (I was under the impression that DX12 had something like this in its DNA). It would be nice if the cards could eventually be recognized as a rendering farm (not certain that is the correct term) / one **** video card, with a shared frame buffer.

That way, essentially, the devs of games only need to program for one unit.
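DX12 does have something close to this: when the driver links matching cards (SLI/NVLink style), they can appear as one logical device with multiple "nodes," and the application addresses each physical GPU through a node mask rather than through separate devices. A minimal sketch of querying that is below; the node-mask usage shown is illustrative only, and in practice the engine still has to distribute work per node, so it is not yet the "program for one unit" ideal.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
// Link against d3d12.lib.

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter. If the driver has linked multiple
    // matching GPUs, this single logical device exposes more than one node.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) return 1;

    UINT nodeCount = device->GetNodeCount();
    printf("Logical device exposes %u GPU node(s)\n", nodeCount);

    // Illustrative only: a command queue bound to the second physical GPU (node 1).
    // Real engines would also create per-node allocators, heaps, and resources.
    if (nodeCount > 1) {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << 1; // bit 0 = first GPU, bit 1 = second GPU
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    }
    return 0;
}
```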
 
The first to invisible mGPU wins.

This. If the devs don’t have to monkey with it specifically to at least get some baseline level of support, adoption will skyrocket.

The fact that it has always been opt-in, with required driver profiles, has kept it marginalized.
 
I never did get into SLI/CrossFire personally. However, I built my first PC with the plan of adding a second 6800 Ultra in SLI, but never did. Since then I've always just been a one-card show, buying the best single card I could afford. If SLI or CrossFire come back around, I'd like to explore them for all the good/bad. I always thought it'd be baller to have two discrete GPUs in one PC.
 