Cyberpunk 2077 Gets Tested at 8K

Peter_Brosdahl

Image: CD PROJEKT RED



Cyberpunk 2077 is a demanding game capable of bringing even the most powerful rig to its knees. PC users are hard at work exploring all that the game has to offer, including its plethora of settings. For those wondering what current tech can do at 8K, TweakTown has put it to the test. Here's the test rig, which was provided by MSI.



Test PC Specifications



Processor: Intel Core i9-10900K
CPU Cooler: MSI MAG CORELIQUID 360R AIO RGB
Motherboard: MSI MEG Z490 Unify
Memory: G.SKILL Trident Z Royal Gold DDR4 32 GB (2 x 16 GB) 4000 MHz (CL 19)
GPU: MSI GeForce RTX 3090 SUPRIM X
Storage: Sabrent Rocket Q 8 TB M.2 2280...

Continue reading...


 
But the lawsuits, surely they are evil and nobody should ever support them. *rolls eyes*

An amazing-looking game requires hardware beyond the best available today to be fully enjoyed at top-tier resolutions. Amazing.

I wonder if two 3090's in SLI could do it.
 
I wonder if two 3090's in SLI could do it.
Was looking into that yesterday, and unfortunately no. Seems they didn't get mGPU going in this version of REDengine. A shame; The Witcher 3 had great SLI support back in the day.
 
I wonder if two 3090's in SLI could do it.

SLI would have to be worth a **** for that to help. I seriously doubt it given that the game likely wasn't designed to take advantage of it.
 
SLI would have to be worth a **** for that to help. I seriously doubt it given that the game likely wasn't designed to take advantage of it.
It's a DX12 title, so the developers would have had to implement it all themselves. They may have cared had they not worried about releasing a console version.
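For anyone wondering what "implement it all themselves" actually entails: DX12 has no driver-managed SLI, so the engine has to detect and drive multiple GPUs explicitly. Just the detection step, in a minimal C++ sketch (assuming the stock D3D12/DXGI headers; nothing here is from REDengine), looks roughly like this:

// Minimal sketch: detecting linked GPU nodes in Direct3D 12.
// Under DX12 there is no driver-managed SLI; the engine must query the
// adapter itself before it can do any explicit multi-GPU work.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            continue;

        // GetNodeCount() > 1 means the adapter exposes multiple linked GPUs
        // (e.g. two cards joined by an SLI/NVLink bridge).
        UINT nodes = device->GetNodeCount();
        std::printf("Adapter %u exposes %u GPU node(s)\n", i, nodes);
    }
    return 0;
}

Everything past that point (duplicating resources per node, alternating frames, cross-node copies) is the work the developers would have had to take on themselves, which is presumably why it never made it in.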
 
It's a DX12 title, so the developers would have had to implement it all themselves. They may have cared had they not worried about releasing a console version.

As much as I like Cyberpunk 2077 (and I really do), even I wouldn't buy two RTX 3090's just to run it even if they had. It's just one game and an extra $1,500 expense for one game just doesn't make sense to me.
 
I've built servers that required dual 1200 watt power supplies before and 4 for redundancy.... so 3000 watts of power isn't a big shock. ;) I mean it IS but you know....
Let's hope they make some major power efficiency gains in the next 2 to 3 gens, because the way things are going, it is starting to look scary even for one high-end card.
 
I've built servers that required dual 1200 watt power supplies before and 4 for redundancy.... so 3000 watts of power isn't a big shock. ;) I mean it IS but you know....

Sure, but in servers those power supplies are redundant. They're only meant to spread the load, and can even power everything on one (or two). They don't combine the wattage into Voltron... er, I mean mega voltage. But I got what you meant.

Honestly I don't know why Nvidia left SLI intact on the GPUs least likely to use it.

All moot anyway.
 
Sure, but in servers those power supplies are redundant. They're only meant to spread the load, and can even power everything on one (or two). They don't combine the wattage into Voltron... er, I mean mega voltage. But I got what you meant.

Honestly I don't know why Nvidia left SLI intact on the GPUs least likely to use it.

All moot anyway.

Actually, our SQL servers required two 1,200-watt power supplies just to run. Older tech and a lot of RAM. RAM is surprisingly power hungry when you're doing hundreds of gigs' worth. Four would give us redundancy. But yeah, I know what you meant. Thankfully, with newer RAM tech we haven't had to do that.
 
I don’t get why 4 PSUs... it isn’t like RAID 1. I get two for total power, but at least in actual power generation, you would usually only look to N+1. Are you assuming that both of those go at the same time?
 
Each 3090 is, what, 350-375 watts? A 5950X is another 125. All told, you’re looking at “only” about 900 watts, which means you have some headroom for the rest of your system even on a 1 kW power supply.

It could be done, but it would really require the engine developers to put serious effort into SLI. That’s not gonna happen.
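Putting rough numbers on that (the wattages below are ballpark assumptions, not measurements), a quick sketch of the math:

// Rough power-budget math for the hypothetical dual-3090 build above.
// All figures are assumed ballpark draws, not measured numbers.
#include <cstdio>

int main()
{
    const int gpu_watts    = 375;   // assumed per RTX 3090
    const int gpu_count    = 2;
    const int cpu_watts    = 125;   // assumed for a 5950X under load
    const int psu_capacity = 1000;  // 1 kW unit

    int total = gpu_watts * gpu_count + cpu_watts;  // 875 W
    std::printf("GPUs + CPU: %d W, leaving %d W of a %d W PSU for the rest of the system\n",
                total, psu_capacity - total, psu_capacity);
    return 0;
}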
 
I don’t get why 4 PSUs... it isn’t like RAID 1. I get two for total power, but at least in actual power generation, you would usually only look to N+1. Are you assuming that both of those go at the same time?

The UCS-B blade chassis we use at work have 4. They can run at full capacity on 2, and in some form of reduced capacity on 1. Double redundancy for critical systems. I have never seen 2 fail at the same time, but I have seen 1 fail right after another one failed and was replaced.
 
The UCS-B blade chassis we use at work have 4. They can run at full capacity on 2, and in some form of reduced capacity on 1. Double redundancy for critical systems. I have never seen 2 fail at the same time, but I have seen 1 fail right after another one failed and was replaced.
At my job I once saw a backup fail within days of replacing one on a really old server. I was glad I kept them on hand, but it really felt like a facepalm, "you gotta be kidding me" kind of moment. I don't remember the model offhand, but it's pretty ancient.
 
The UCS-B blade chassis we use at work have 4. They can run at full capacity on 2, and in some form of reduced capacity on 1. Double redundancy for critical systems. I have never seen 2 fail at the same time, but I have seen 1 fail right after another one failed and was replaced.
Saw double redundancy (e.g., 4x power supplies) on blade chassis at a previous job across multiple vendors. You really don't want a whole chassis of blades going down due to a chassis failure. Of course, you also divide your high-availability zones for something like that into different DC zones as well, so even if you were to lose a whole chassis (or even a whole section of your data center), another server in a different chassis in a different zone could just keep chugging along.
 
At my job I once saw a backup fail within days of replacing one on a really old server. I was glad I kept them on hand, but it really felt like a facepalm, "you gotta be kidding me" kind of moment. I don't remember the model offhand, but it's pretty ancient.

I have had repeated bad luck with old servers just waiting for the opportune time to croak. Like when I walk up and touch it :)

More times than I would have liked, a routine part replacement leads to another, and another... or sometimes it's just an excuse for the whole thing to drop dead.

I get a little unhappy when I have to go and do something "simple" to an ancient POS Dell/HP/IBM etc. server... will today be the day it wants to die?

LOL, good times
 
I have had repeated bad luck with old servers just waiting for the opportune time to croak. Like when I walk up and touch it :)

More times than I would have liked, a routine part replacement leads to another, and another... or sometimes it's just an excuse for the whole thing to drop dead.

I get a little unhappy when I have to go and do something "simple" to an ancient POS Dell/HP/IBM etc. server... will today be the day it wants to die?

LOL, good times
I've got a fleet of old HP workstations I feel the same about. Around 30-50 between two offices. Great workhorses, but all pushing 10 years (USB 2.0/SATA II, no less, and dual-cores) that I've been nursing along for a while (PSUs, fans, memory, drive replacements). They've all been on 24/7, and I've let our board know that sooner or later... Plus our servers are about the same age, but we might be able to take care of that soon. I've told them that when a mechanic tells you your car is about to die, you at least have the choice of a bus or something. With this stuff, a day will come when there's no more putting it off or its consequences, and then what?
 
I don’t get why 4 PSUs... it isn’t like RAID 1. I get two for total power, but at least in actual power generation, you would usually only look to N+1. Are you assuming that both of those go at the same time?

Think power sources. The servers have pairs of PSUs and need 2 to run, so you can have 2 fail and still run. Only 2 power supplies are plugged into any single power source, so if you lose a power source you are still up. Yes, both are on UPS, but there are times when one side will go down for maintenance and such.
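A tiny sketch of that layout, with assumed numbers (4 PSUs, any 2 carry the load, one pair per feed), just to show why losing a whole feed doesn't take the box down:

// Sketch of the 2N power layout described above: 4 PSUs, any 2 can carry the
// load, and each pair sits on a different power feed (A and B). Assumed setup.
#include <cstdio>

int main()
{
    const int psus_per_feed = 2;
    const int psus_required = 2;  // assumed: two supplies carry the full load

    // Scenario: feed A is taken down for maintenance, feed B stays healthy.
    int alive_on_a = 0;
    int alive_on_b = psus_per_feed;
    int alive = alive_on_a + alive_on_b;

    std::printf("PSUs alive: %d (need %d) -> %s\n",
                alive, psus_required,
                alive >= psus_required ? "server stays up" : "server goes down");
    return 0;
}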
 
I've got a fleet of old HP workstations I feel the same about. Around 30-50 between two offices. Great workhorses, but all pushing 10 years (USB 2.0/SATA II, no less, and dual-cores) that I've been nursing along for a while (PSUs, fans, memory, drive replacements). They've all been on 24/7, and I've let our board know that sooner or later... Plus our servers are about the same age, but we might be able to take care of that soon. I've told them that when a mechanic tells you your car is about to die, you at least have the choice of a bus or something. With this stuff, a day will come when there's no more putting it off or its consequences, and then what?

Yeah, some places will blow money on all kinds of random dumb stuff... but OMG, if you even mention technology upgrades, everyone starts hissing at you. My wife's workplace is like this... everyone rocking 10+ year old laptops with dead batteries and terrible old 20" monitors. Some people still have CRT monitors. They have infinite money for logo-branded grocery bags, t-shirts, coffee cups, worthless Bluetooth speakers, and all kinds of other useless trinket junk. But new computers? Never! Raises? Never!

Mine is the opposite... I've had my laptop 3 years and they were trying to make me take a new one. Then Covid happened.... they ended up buying a TON of laptops and were just handing them to people as needed to WFH. I don't believe they are leased, either... at least that would make some sense.
 