I've gotten used to using MFG, but in a really specific way. I use MFG x3 combined with either DLSS Quality or DLAA (I always smile a bit more when I can use this) at 4K and then limit FPS to 70-90 (depending on the game).
Doesn't this essentially force you into a native framerate of ~18-23fps (your cap divided by four)? It might look smooth due to frame gen, but the input lag ought to be atrocious.
We're talking something like 44-57ms (1000 / framerate), excluding the monitor? I'm not sure I could play my games with that kind of input lag.
Or maybe that doesn't matter in the titles you play?
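To make the arithmetic above concrete, here's a quick back-of-the-envelope sketch. It assumes, as the reply does, that "x3" means three generated frames per rendered frame (output divided by four); if you read NVIDIA's naming as 3x total output instead, you'd divide by three. It also treats a single native frame time as the latency floor, ignoring the render queue, frame gen overhead, and the monitor.

```python
# Back-of-the-envelope latency floor under frame generation.
# Assumption: "x3" = 3 generated frames per rendered frame (output / 4),
# per the reply above. Real input lag is higher: this ignores the
# render queue, frame gen processing, and monitor latency.

def native_fps(output_fps: float, generated_per_rendered: int) -> float:
    """Rendered (native) framerate hiding under a frame-generated output."""
    return output_fps / (generated_per_rendered + 1)

def frame_time_ms(fps: float) -> float:
    """One frame time in milliseconds -- the floor on input lag."""
    return 1000.0 / fps

for cap in (70, 90):
    native = native_fps(cap, generated_per_rendered=3)
    print(f"{cap}fps cap -> ~{native:.1f}fps native, "
          f"~{frame_time_ms(native):.1f}ms per native frame")

# 70fps cap -> ~17.5fps native, ~57.1ms per native frame
# 90fps cap -> ~22.5fps native, ~44.4ms per native frame
```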
Generally, while I don't like the concept of frame generation, I'll use it if I'm getting 60fps minimums without it. That way my worst-case input lag (excluding monitor) is 16.67ms.
I'm happy to play at native frames with a 60fps minimum; I'm OK with adding 1x frame gen in that case, though I usually don't.
The downside is that my LG C3's max refresh rate is 120Hz, so if I enable V-sync on top of G-sync (which I usually do), output won't exceed 120fps, and with 1x frame gen that pins the underlying native framerate at 60fps.
In most cases, I'd prefer a native framerate with good input lag that rises above 60fps and only dips to 60fps at the minimums, over one where the input lag is pretty much fixed at 60fps levels (16.67ms). That way I can play at 90fps most of the time, with an input lag of about 11.1ms.
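For comparison's sake, a rough sketch of the two setups just described, frame times only (actual lag adds the render queue, frame gen overhead, and the display on top):

```python
# Frame-time comparison of the two setups above (milliseconds).
# Frame time is only the floor on input lag, not the whole pipeline.

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

# Setup A: 1x frame gen under a 120Hz cap -> native pinned at 60fps.
pinned = frame_time_ms(60)

# Setup B: native rendering, typically 90fps, dipping to 60fps.
typical, worst = frame_time_ms(90), frame_time_ms(60)

print(f"frame gen, pinned:  {pinned:.2f}ms every frame")
print(f"native, typical:    {typical:.2f}ms (~{pinned - typical:.1f}ms saved)")
print(f"native, worst case: {worst:.2f}ms")

# frame gen, pinned:  16.67ms every frame
# native, typical:    11.11ms (~5.6ms saved)
# native, worst case: 16.67ms
```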
I'd much rather do a mild upscale (like DLSS 3 Quality on the old CNN model, or maybe even Performance on the new DLSS 4 Transformer model) than enable frame gen, because input lag is my top priority, not frame smoothness. On the whole I find the frame smoothness difference between 60fps and 120fps minimal, but the input lag difference between the two (16.67ms vs 8.33ms per frame) rather significant.