Amazon’s New World MMO Is Still Bricking NVIDIA Graphics Cards, According to Some Users

Tsing

The FPS Review
Staff member
Image: Amazon Games



Amazon’s new open-world MMO, New World, officially launched this Tuesday, but reports continue to roll in from players who claim that the game is capable of bricking expensive NVIDIA graphics cards. These include various complaints on the r/newworldgame subreddit, as well as reports from German hardware sites such as WinFuture, which revealed in a recent article that New World bricked one of its GIGABYTE GeForce RTX 3090 GPUs, supposedly right after the author closed the game.



From WinFuture:



Apparently, however, there are still various triggers that put so much strain on the vulnerable EVGA and Gigabyte graphics cards that a hardware failure can result. From our immediate circle, we learned of a significant spike in load when changing the graphics settings, while in our case simply closing the game resulted in a...

Continue reading...


 
No, I'm sorry, this doesn't ONLY mean that people didn't play the beta. It also means that Amazon's MMO has crap code in it and needs to be stopped and fixed before another person loses a piece of hardware, or worse, to its dangerously bad code. That's what it means.
 
This game just doesn't seem right. I can't remember a game that would take out hardware like this one.
 
This game just doesn't seem right. I can't remember a game that would take out hardware like this one.
It's either code trying to exploit hardware in a harmful way, or hardware that simply isn't up to snuff and doesn't have proper chokes built in. Or even both. I can't imagine the coil whine in an old 1070 with this. :)
 
What a load of crap.

It is IMPOSSIBLE for software, no matter how heavy the load, to be the cause of bricking a GPU.

This reminds me of when Sid Meier's Civilization V launched in 2010 and people were blaming the game for killing their 9800 GTs, despite NVIDIA's well-publicized solder problem on that whole generation.

If something like this happens, it is ALWAYS the fault of the GPU.
 
It is IMPOSSIBLE for software, no matter how heavy the load, to be the cause of bricking a GPU.
Well, I would word it a bit differently, as software ~can~ brick hardware. I would agree and say it's the responsibility of the hardware to protect itself against any code or condition that might put it in a position to get damaged, though, and that as cases are discovered, it's on the hardware side to respond and mitigate, at the very least for future hardware revisions.
 
Well, I would word it a bit differently, as software ~can~ brick hardware. I would agree and say it's the responsibility of the hardware to protect itself against any code or condition that might put it in a position to get damaged, though, and that as cases are discovered, it's on the hardware side to respond and mitigate, at the very least for future hardware revisions.

All I am saying is that unless you do something that messes with the firmware or something like that, a GPU (or CPU) should take anything you throw at it in stride.

If it crashes (at stock settings) there is either a problem with the firmware that controls clocks/voltages or there is inadequate cooling, and either way it is inadequately tested.

If it is enough to actually damage the card then there is something seriously wrong.

Heck, even if you overclock, the most that should happen is a crash. Damage should only occur from extreme overvolting, which these days is prevented in the firmware.

So one way or another, if a GPU is damaged while running a title, it is the GPU and ONLY the GPU's fault.
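
To put that in concrete terms: the firmware safeguard described above behaves like a simple control loop that samples temperature and power draw and steps the clock (and with it the voltage) down before either crosses a limit. Here is a minimal sketch of that idea in C++; every name, threshold, and step size is made up for illustration, and this is not any vendor's actual firmware:

```cpp
#include <algorithm>
#include <chrono>
#include <thread>

// Illustrative limits only; real boards use vendor-tuned tables.
constexpr double kTempLimitC   = 83.0;   // thermal throttle point
constexpr double kPowerLimitW  = 350.0;  // board power limit
constexpr int    kClockStepMHz = 15;     // one boost step
constexpr int    kMinClockMHz  = 1200;
constexpr int    kMaxClockMHz  = 1900;

// Hypothetical telemetry stubs; real firmware polls hardware sensors.
double readTempC()  { return 80.0; }
double readPowerW() { return 320.0; }
void   setCoreClock(int /*mhz*/) { /* program the voltage/frequency point */ }

void boostLoop() {
    int clockMHz = kMaxClockMHz;
    for (;;) {
        if (readTempC() > kTempLimitC || readPowerW() > kPowerLimitW) {
            // Over a limit: step down no matter what the workload asks for.
            clockMHz = std::max(clockMHz - kClockStepMHz, kMinClockMHz);
        } else {
            // Headroom available: step back up toward the boost target.
            clockMHz = std::min(clockMHz + kClockStepMHz, kMaxClockMHz);
        }
        setCoreClock(clockMHz);
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() { boostLoop(); }
```

The ordering is the whole point: the limit check always wins over whatever the workload is asking for, so no game should be able to push the silicon outside its own envelope.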
 
So one way or another, if a GPU is damaged while running a title, it is the GPU and ONLY the GPU's fault.
I agree entirely with you. All I'm saying is there will be corner cases that crop up from time to time that ~could~ see software damaging hardware. I still think it's the fault of the hardware. You are right, it shouldn't happen, and I still think it's the responsibility of the hardware vendor to provide a permanent solution to keep that from happening.

I'm just accounting for the fact that it's impossible to account for ~everything~. Sometimes you build the best you got, and then if/when you see a problem crop up, you address it.

I mean, hardware has moved a long way from fixed clocks and fixed core voltages; that was the main thing that allowed a string of code to become a power vampire. But that isn't to say gaps don't exist in the boost algorithms that could be exploited, accidentally or otherwise.
 
I agree entirely with you. All I'm saying is there will be corner cases that crop up from time to time that ~could~ see software damaging hardware. I still think it's the fault of the hardware. You are right, it shouldn't happen, and I still think it's the responsibility of the hardware vendor to provide a permanent solution to keep that from happening.

I'm just accounting for the fact that it's impossible to account for ~everything~. Sometimes you build the best you got, and then if/when you see a problem crop up, you address it.

I mean, hardware has moved a long way from fixed clocks and fixed core voltages; that was the main thing that allowed a string of code to become a power vampire. But that isn't to say gaps don't exist in the boost algorithms that could be exploited, accidentally or otherwise.

Agree.

But while it is more complicated than it used to be in the fixed clock, fixed voltage era, video card manufacturers still have a pretty good idea how much voltage the chips can handle at what clocks and temps, and all the safeguards already exist to automatically dial things back if you get too close to the limit.

Even if you see a surprising load that pushes temps higher than you would have expected, for instance, the GPU should dial back clocks and voltages in response to save itself. If it doesn't, that is a design and/or validation flaw.

And I seriously hope this isn't the same old "bad low-temp solder" issue rearing its ugly head from the 7800-through-9800 era. They really should have learned that lesson by now.
 
Agree.

But while it is more complicated than it used to be in the fixed clock, fixed voltage era, video card manufacturers still have a pretty good idea how much voltage the chips can handle at what clocks and temps, and all the safeguards already exist to automatically dial things back if you get too close to the limit.

Even if you see a surprising load that pushes temps higher than you would have expected, for instance, the GPU should dial back clocks and voltages in response to save itself. If it doesn't, that is a design and/or validation flaw.

And I seriously hope this isn't the same old "bad low-temp solder" issue rearing its ugly head from the 7800-through-9800 era. They really should have learned that lesson by now.
I think it's more likely unknown and untested vendors for parts and assembly, as they have had to branch out to non-standard suppliers in the COVID-era market.
 
I think it's more likely unknown and untested vendors for parts and assembly, as they have had to branch out to non-standard suppliers in the COVID-era market.

That sounds very plausible.

I wonder if the public will ever know the root cause.

I mean, Kyle would have taken it on like he did the RTX Space Invaders issue, but there are few tech journalists left who would be interested in taking that sort of thing on, and who have the resources to do so if they wanted to.
 
That sounds very plausible.

I wonder if the public will ever know the root cause.

I mean, Kyle would have taken it on like he did the RTX Space Invaders issue, but there are few tech journalists left who would be interested in taking that sort of thing on, and who have the resources to do so if they wanted to.

Yeah, the problem is twofold: cost and availability.
 
A frame limiter was patched in after the first closed beta, where frames would spike on loading screens and menus; those areas are capped at 60 FPS now. So, outside of shoddy hardware or firmware, I'm not sure what else in the game could be causing it.
 
Sounds like if you play this game, you need to enable your own frame throttle. It will save your system from undue wear and tear regardless.
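
For anyone wondering what a cap actually buys you, a frame limiter is just a render loop that refuses to start the next frame early. A minimal sketch of the general technique in C++ (the 60 FPS figure matches the menu cap mentioned above; renderFrame() and the loop itself are hypothetical, not New World's code):

```cpp
#include <chrono>
#include <thread>

// Hypothetical stand-in for the game's actual per-frame work.
void renderFrame() { /* draw the scene, pump input, etc. */ }

int main() {
    using clock = std::chrono::steady_clock;
    // 60 FPS cap: one frame every ~16.7 ms.
    constexpr std::chrono::nanoseconds frameBudget{1'000'000'000 / 60};

    while (true) {
        const auto frameStart = clock::now();
        renderFrame();

        // If the frame finished early (say, a trivial menu scene), sleep off
        // the remainder instead of letting the GPU spin at uncapped FPS.
        const auto elapsed = clock::now() - frameStart;
        if (elapsed < frameBudget)
            std::this_thread::sleep_for(frameBudget - elapsed);
    }
}
```

Driver-level caps (the Max Frame Rate setting in the NVIDIA Control Panel) and tools like RTSS do roughly the same thing from outside the game, which is why they work as a stopgap even before a title gets patched.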
 