Linus Sebastian (LTT/LMG) Is Embroiled in Drama with Gamers Nexus over Ethics, Integrity, and Testing Practices

Oh no, you're talking about a different kind of automation here. Redundancy and failover are automated. I'm talking about maintenance and things that require software installs, patching, that sort of thing. We have a CRAZY high level of redundancy that lets us achieve the five nines we exceed regularly.

Between VMware clusters, redundant power, redundant network cards on redundant switches, on redundant UPSes, on redundant power sources... that stuff I agree with 100%.

Though I would presume that on the power delivery and balance-transmission lines (I worked for a company that did power management software for a short time in my youth), your level of tolerance is different, and you often don't have unused capacity for fault tolerance.

Though I will say, from the world of protected data... and let's not even talk about DOD and three-letter-agency stuff. Getting trusted management software, with the limited external connections and everything else... is difficult. Easier with the establishment of specific clouds and such... sorry, just being obtuse here because I don't want to overshare. Those who know, know. ;)
 
IF you manage critical systems and infrastructure, even at enterprise scale, automation should SCARE THE **** OUT OF YOU. Not because it makes things simpler, but because ONE mistake can have a VAST, DRASTIC impact on the health of your most critical environments. Or, in this case... review results.

I work for a specific group in a larger company that is involved in the safety, and in many cases the very lives, of our customers. We impact millions, if not tens of millions, of lives every day. We do NOT want automation. Our server farm, due to the data, is smaller than you might think: just over a couple hundred server entities (virtuals included). We do NOT automate the care and feeding, outside of moving specific data where it can be accessible for speedier application once approved. We have a team dedicated to making sure we take care of our servers one at a time, with human eyes. Because unlike with automation, one mistake impacts one server. Whereas if we do it with automation, one mistake takes out an entire class or role of server, and that is much more detrimental to the health of our environment and our customers.

Others have suggested we use more automation than we do, and we push back against it. Every time. Yes, it's more expensive to do it the way we do it. But we are a small team that handles the full lifecycle of our environment. Because of that, even when larger mistakes outside our control take down more of our environment than we would have thought possible, we are able to soldier on without mass interruption to our customers and their literal lives. It's worth it to do it right.

With automation being SO MUCH of a timesaver, it is always a battle. Yet when we see automation destroy less critical parts of a business and they have to scramble to recover, we just nod to ourselves and keep plugging away, doing it the way we feel is right.

As Mel Gibson said in the Patriot: Aim Small, Miss Small.
Yes, you do want automation, you want it big time, but well-managed automation, not automation for the sake of it. I'm an IT manager and deal with server issues (urgencies, tragedies, catastrophes, you know, the usual) on a daily basis; without automation it would be a living hell, and no amount of staff would be enough to cover everything that needs to be done. But you have to plan, and plan, and plan again, check and double-check and supervise. You just can't run "full auto" and expect everything to just work.

BTW, isn't that quote from that Bradley Cooper sniper film?
 
Back on the subject.

Ever since I started reading tech/review sites, I've never trusted just a single source, especially if I intend to put my money on something. I do my research and draw my own conclusions.

That said, I've always considered LTT more of an "entertainment" tech site, and I have questioned their info more than once; I don't really watch it that much anymore (will do now ;)). GN may be more "professional", but that guy's monotone voice is a cure for insomnia. Really, I don't think I've ever watched a full episode; it's THAT boring.
 
I just found this -

Philip DeFranco summarizes all the issues in 10 minutes, so if you want a quick summary of every single problem, watch that section.
 
Linus is a narcissistic chode. His ego won't allow him to be wrong, even when he is wrong. He deflects and places blame on others when he is the one in charge. Every time he talks, types, or responds to these allegations he buries himself deeper. Because he doesn't know how to accept responsibility. He refuses to accept responsibility. Coupled with the fact that everything they've put out has been a travesty in the tech community as far as factual data is concerned.

LMG is dying. And GN brought that to light. Good on Steve. He reminds me of the Kyle of old. Journalistic truth.
 
Linus is a narcissistic chode. His ego won't allow him to be wrong, even when he is wrong. He deflects and places blame on others when he is the one in charge. Every time he talks, types, or responds to these allegations he buries himself deeper. Because he doesn't know how to accept responsibility. He refuses to accept responsibility. Coupled with the fact that everything they've put out has been a travesty in the tech community as far as factual data is concerned.

LMG is dying. And GN brought that to light. Good on Steve. He reminds me of the Kyle of old. Journalistic truth.

I was just about to post that Kyle had a feud with the now-defunct FiringSquad, IIRC.
 
Is there anyone that takes LTT scientifically seriously? I never have; I always see their stuff as informed opinions at most, and typically I think of them as a rough idea of what a device does, its features, and roughly how it performs. Gamers Nexus... well, yeah, he is a maniac, in a positive way, but a maniac, so yes, I take them to be more scientific.
Wow, what a silly "controversy".
 
I would like to speak on the topic of automation of benchmarking.

Automation of benchmarking can naturally make things more efficient, speed up the process, and allow (at least in my opinion) the inclusion of more comparison data, which is always welcome. The current method of manually doing everything limits the comparisons one can do in each review, especially for launches.

That said, there are very serious inherent dangers in automation. The first is that settings can be set wrong by mistake. This very thing is what happened in a recent Linus review, where a wrong setting in Cyberpunk (because of a patch change) caused his data to simply be wrong. Game patches happen very frequently, and sometimes how settings are applied, and which settings are applied, changes; automation cannot keep up with this and DOES make mistakes.
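
This kind of patch-induced settings drift is exactly the sort of thing a pre-run check can catch. A minimal sketch of the idea (the settings keys here are made up for illustration, not taken from any real game or benchmarking tool): fingerprint a manually verified configuration once, then refuse to run the automated benchmark if the live config no longer matches.

```python
import hashlib
import json

def settings_fingerprint(settings: dict) -> str:
    """Hash a canonicalized settings dict so any silent change is detectable."""
    canonical = json.dumps(settings, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify_before_run(current: dict, expected_hash: str) -> bool:
    """Return False (abort the run) if the game's settings no longer match
    the manually validated baseline, e.g. after a patch reset them."""
    return settings_fingerprint(current) == expected_hash

# Baseline captured once from a human-verified configuration.
baseline = {"texture_quality": "ultra", "ray_tracing": True}
expected = settings_fingerprint(baseline)

# A patch silently flips a setting; the fingerprint check catches it.
patched = {"texture_quality": "ultra", "ray_tracing": False}
assert verify_before_run(baseline, expected)
assert not verify_before_run(patched, expected)
```

It doesn't make automation safe by itself, but it turns "silently wrong data" into "run aborted, human investigates".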

The other danger of automation is simply being removed from the process itself and missing problems that arise in actual gameplay. If you are not playing the game, you are missing issues such as texture, geometry, or detail load-in due to VRAM constraints. You are also missing the BIG picture of frametime and frame-pacing smoothness in games. By removing yourself from the data collection, you are inherently removing yourself from the gameplay experience, and you are no longer relating that experience to the end user. The question then becomes: how relevant and informative is your review, really?
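
The frame-pacing point is worth making concrete: an average FPS number can hide stutter completely, which is why metrics like the 1% low exist. A toy sketch of how those are computed from per-frame times (this is the general idea, not any outlet's actual pipeline):

```python
def fps_metrics(frametimes_ms):
    """Average FPS and 1% low from a list of per-frame times in milliseconds.
    The 1% low is the FPS implied by the slowest 1% of frames, which is
    where stutter shows up even when the average looks fine."""
    n = len(frametimes_ms)
    avg_ms = sum(frametimes_ms) / n
    worst = sorted(frametimes_ms, reverse=True)[: max(1, n // 100)]
    return 1000.0 / avg_ms, 1000.0 / (sum(worst) / len(worst))

# Two runs with the SAME average frame time (10.9 ms), but one stutters:
smooth = fps_metrics([10.9] * 100)           # avg and 1% low both ~91.7 FPS
spiky = fps_metrics([10.0] * 99 + [100.0])   # avg ~91.7 FPS, 1% low 10 FPS
```

Both runs report the same average, yet one has a 100 ms hitch a player would absolutely feel. Numbers alone only tell that story if someone chose to compute and report the right ones.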

I will emphasize: there are very big pros to automation, very big, but also very big cons. I think both need to be understood and acknowledged. Whatever method an outlet employs, it should be 100% transparent to the reader in how the data is collected.
I don't think automation is a problem. It is a similar situation to AI: as long as you know the limitations, you can implement the necessary checks and QA to make the result as good as manual work. Yes, it negates some of the time saved, but it will still be a lot faster than the fully manual option. You have to know what is worth automating and what isn't.
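
One cheap QA gate of the kind described here, sketched with an arbitrary threshold: compare each automated result against a manually verified baseline for that hardware, and route any outlier to a human instead of publishing it.

```python
def needs_manual_review(result_fps: float, baseline_fps: float,
                        tolerance: float = 0.05) -> bool:
    """Flag an automated result whose average FPS deviates from the stored
    baseline by more than `tolerance` (5% here, purely illustrative).
    Flagged runs go back to a human instead of straight into a chart."""
    deviation = abs(result_fps - baseline_fps) / baseline_fps
    return deviation > tolerance

assert not needs_manual_review(98.0, 100.0)  # within tolerance, publish
assert needs_manual_review(90.0, 100.0)      # 10% off, human checks it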

In my line of work full automation is the holy grail that many have claimed to have found, but in reality nobody came anywhere near it yet. I'm looking forward to when AI can have a crack at it.

As for being removed from the process I think that can even be beneficial for an objective review. When I'm looking at performance reviews I'm interested in raw numbers, not the opinion of the reviewer. I want to draw my own conclusions from the data. But I need to be able to trust the data.
 
Back on the subject.

Ever since I started reading tech/review sites, I've never trusted just a single source, especially if I intend to put my money on something. I do my research and draw my own conclusions.

That said, I've always considered LTT more of an "entertainment" tech site, and I have questioned their info more than once; I don't really watch it that much anymore (will do now ;)). GN may be more "professional", but that guy's monotone voice is a cure for insomnia. Really, I don't think I've ever watched a full episode; it's THAT boring.
I agree. I think most, if not all, YouTubers have nothing but dollar signs in their eyes when producing videos. Whether that means doing things the right way or not is up to them, but I personally do not fully trust them to be part of my buying process unless it's just a simple unboxing video. LTT got called out on their BS, so hopefully others will learn from this or get called out as well. Kind of like a weeding-out process. LTT is good for their $5,000 Intel/AMD upgrade videos just for the entertainment, which I normally skip through anyway, but that's the extent of my watching them. I turn to sites like here for reviews and whatnot to help with my decisions on hardware.
 
I turn to sites like here for reviews and whatnot to help with my decisions on hardware.
A well WRITTEN review is worth far more than any produced YouTube video. I really don't care WHO produces it.

If GamersNexus had a written review for every video, I'd gladly consume that. It FEELS like he's reading a written review anyway.
 
I totally agree, and you can often see Steve looking at print when he's doing the videos, but I too would prefer a printed review to go back to.
When I was a young man I had a PC Magazine subscription that went DEEP on CPUs, and really on anything it covered. I miss that. I'd spend a solid week digesting that information back in the day. Now... you're lucky to find someone talk a little bit about CPU design and layout before it's on to something fluffier in mere moments. The world has moved on from sharing that information with the masses; this was before there were advanced, large degree programs for computer engineering and design. I'm lucky I came up in the industry before that was a thing.
 
When I was a young man I had a PC Magazine subscription that went DEEP on CPUs, and really on anything it covered. I miss that. I'd spend a solid week digesting that information back in the day.
My favorites back in the day were Compute!, PC Magazine, Atari Age, and Science Digest. I too could spend hours reading them and returning to re-read them when I felt like it.
 
As for being removed from the process I think that can even be beneficial for an objective review. When I'm looking at performance reviews I'm interested in raw numbers, not the opinion of the reviewer. I want to draw my own conclusions from the data. But I need to be able to trust the data.

In response to this: the issue with being removed from the benchmarking or gameplay is missing when things like textures/LOD are not loading properly due to hardware limitations, and thus the visual quality is different between runs despite what the numbers show you. One game on a VRAM-limited card can look different at the same settings compared to another video card. You will not catch this when running automated benchmarking. This has come up a lot in Hardware Unboxed's testing; they've covered these issues, and they would be totally missed in automation.


The fact is, games today are made in such a way that the level of detail changes when limited by VRAM, rather than producing pure performance or stuttering differences. The game will look visually different across GPUs at the same settings because of this: textures will have lower quality, geometry will have less detail, and there will be shorter draw distance and more pop-in. This is tied directly to how games are made console-first now, and on consoles LOD is very dynamic. So that is what we are getting in new game releases on the PC.

Automation will not catch or show this, so anyone just running "numbers" will not see that the game looks very different at the same settings on two different GPUs.
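
For what it's worth, an automated pipeline could at least be taught to flag this, even if it can't judge it: capture a reference screenshot at a fixed point and compare frames between cards. A toy sketch with made-up grayscale data (real pipelines would compare actual captures, likely with perceptual metrics rather than raw pixel differences):

```python
def mean_pixel_diff(img_a, img_b):
    """Mean absolute per-pixel difference between two same-size grayscale
    frames (flat lists of 0-255 values). A large value at identical game
    settings suggests the engine quietly dropped texture/LOD quality."""
    assert len(img_a) == len(img_b), "frames must be the same size"
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

# Toy frames: identical except one region rendered at lower detail.
reference = [200] * 16
vram_limited = [200] * 12 + [120] * 4  # blurry patch where a texture never streamed in

if mean_pixel_diff(reference, vram_limited) > 5.0:   # threshold is arbitrary
    print("visual divergence detected: flag run for human inspection")
```

This only flags that something diverged; a human still has to look and decide whether it matters, which is rather the point being made above.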
 
IF you manage critical systems and infrastructure, even at enterprise scale, automation should SCARE THE **** OUT OF YOU. Not because it makes things simpler, but because ONE mistake can have a VAST, DRASTIC impact on the health of your most critical environments. Or, in this case... review results.
I work in the IT industry and have done so for over two decades. I've worked at some very big companies and high profile entities. I've seen the dangers of automation and human error. Some incidents of this have made the news. Thankfully, I've never been directly involved in any of it. Though I have had to pull long shifts fixing things that went down as a result of such outages.

In this case, I have my own opinion on the topic at hand; not the pissing contest between Linus and Gamers Nexus, but the subject of automation and benchmarking. Yes, automation can improve consistency. However, automation is also a time investment to create, test, deploy, and maintain. Furthermore, it's best suited to static environments that don't change much, and the games and benchmarks we use change all the time. Lastly, as Brent correctly pointed out, divorcing yourself from the process of doing it yourself means you may miss things that don't show up in benchmarks.

Not to rehash old stuff, but I found this out the hard way once when I discovered my CPU setup was truly awful for gaming despite the benchmarks saying otherwise. Automation would have completely masked this issue.

The benefit of automation is that you could theoretically keep the data fresh and more consistent for every review. Every datapoint would have the exact same version of Windows, the same updates, drivers, etc. You wouldn't have as much configuration drift doing that. The downside that goes with it is having to keep those hardware configurations around indefinitely. We do not have that luxury, given that a lot of our hardware isn't sampled; it's bought. It needs to be sold to recoup costs, and even when it doesn't, we often need it for another reviewer's test setup or whatever.
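
The configuration-drift half of this is cheap to at least make visible: stamp every datapoint with the environment it was measured in, so any two results can be checked for comparability later. A minimal sketch (the `gpu_driver` value would come from your own capture tooling; it's a placeholder here):

```python
import datetime
import json
import platform

def environment_record(gpu_driver: str) -> dict:
    """Snapshot of the test environment, stored alongside every benchmark
    datapoint so drift between runs is visible rather than silent."""
    return {
        "os": platform.platform(),            # OS name, version, build
        "python": platform.python_version(),  # stands in for tool versions
        "gpu_driver": gpu_driver,             # supplied by your own tooling
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = environment_record(gpu_driver="531.61")
print(json.dumps(record, indent=2))
```

Comparing two records before reusing old numbers is a one-liner, and it costs nothing even when you can't keep the hardware around.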

Even during the [H] days, we didn't have a lot of the luxuries and resources that Linus and Steve have today. I think there is validity to the approach of using automation to get fresh data every time, but it does come with some notable downsides as well.
 
A well WRITTEN review is worth far more than any produced YouTube video. I really don't care WHO produces it.

If GamersNexus had a written review for every video, I'd gladly consume that. It FEELS like he's reading a written review anyway.
I totally agree with you here. I have a hard time not believing that YouTubers have some sort of deal worked out with the hardware manufacturer before doing their "review".
 
The fact is, games today are made in such a way that the level of detail changes when limited by VRAM, rather than producing pure performance or stuttering differences. The game will look visually different across GPUs at the same settings because of this: textures will have lower quality, geometry will have less detail, and there will be shorter draw distance and more pop-in. This is tied directly to how games are made console-first now, and on consoles LOD is very dynamic. So that is what we are getting in new game releases on the PC.

Automation will not catch or show this, so anyone just running "numbers" will not see that the game looks very different at the same settings on two different GPUs.
The hard part is: this is a good thing for gamers. It would be better if there were a way to know this was happening, and to what extent, but on the playability side, having games that scale back resource usage to maintain consistent frametimes and responsiveness is what should have been done all along. No one wants to play a game with heavy stuttering episodes, right?

I don't envy anyone trying to detect it without developers exposing tools; as stated, it takes a tester visually inspecting for obvious changes, and one would expect a subjective manual tuning phase to find the failover point for any particular combination of hardware, drivers, and game settings.
 
I think the personal touch is especially important when trying to review things like DLSS / FSR - which will goose the numbers at the expense of things like latency or fidelity.
 
In response to this: the issue with being removed from the benchmarking or gameplay is missing when things like textures/LOD are not loading properly due to hardware limitations, and thus the visual quality is different between runs despite what the numbers show you.
To be frank, I have not seen a visual difference between different video cards since running a RIVA 128 25 years ago. And I have not seen textures or effects fail to load since using an S3 ViRGE even before that.
One game on a VRAM-limited card can look different at the same settings compared to another video card. You will not catch this when running automated benchmarking. This has come up a lot in Hardware Unboxed's testing; they've covered these issues, and they would be totally missed in automation.
Where is the visual difference using the same settings? I did not find anything while quickly skimming the video, and I'm not really interested in watching the whole 30 minutes.
Automation will not catch or show this, so anyone just running "numbers" will not see that the game looks very different at the same settings on two different GPUs.
IF this is true, then even if the reviewer catches it, it makes all benchmarking meaningless, as there is no way to make an apples-to-apples comparison. You must be able to lock LOD to a certain setting to even begin a meaningful test. I'd prefer this problem be killed in its infancy by forcing developers to add fixed LOD for PC. I sure as hell don't want dynamic LOD on my PC when playing a game.
 
To be frank, I have not seen a visual difference between different video cards since running a RIVA 128 25 years ago. And I have not seen textures or effects fail to load since using an S3 ViRGE even before that.

Where is the visual difference using the same settings? I did not find anything while quickly skimming the video, and I'm not really interested in watching the whole 30 minutes.

IF this is true, then even if the reviewer catches it, it makes all benchmarking meaningless, as there is no way to make an apples-to-apples comparison. You must be able to lock LOD to a certain setting to even begin a meaningful test. I'd prefer this problem be killed in its infancy by forcing developers to add fixed LOD for PC. I sure as hell don't want dynamic LOD on my PC when playing a game.

I provided the source material; the onus is on you to consume and research the information if you are interested in it, not on me to scrub through videos and find specific sections for you. In addition, there are written articles on the subject if you'd like to research, as well as plenty of other videos on YouTube, including podcasts with actual developers. I have done the research in all these areas, and therefore I am informed about the subject. I am not going to write a dissertation in a forum thread about it; there's plenty of material on the web.
 