Linus Sebastian (LTT/LMG) Is Embroiled in Drama with Gamers Nexus over Ethics, Integrity, and Testing Practices

Peter_Brosdahl

Moderator, Staff member
A somewhat bizarre and unexpected drama has been unfolding over the last couple of weeks between Linus Sebastian (LTT/LMG) and Steve Burke of Gamers Nexus. The two individuals and their respective sites have been well-known PC hardware reviewers for many years now. However, an LTT/LMG employee was recorded during a recent tour commenting on how the company's reviews differ from those of other outlets, such as Gamers Nexus and Hardware Unboxed, by using fresh benchmark data for every review instead of reusing older GPU data. This, in turn, prompted a response from Gamers Nexus, who posted a 44-minute video airing a number of grievances with LTT and also expanding into another, still ongoing, situation involving a prototype GPU cooling solution from Billet Labs.

See full article...
 
A lot of people, especially Linus apologists, are saying it's just drama for clicks, but I vehemently disagree. I'm really glad GN made that video, because it is great consumer advice: do not trust LTT.

I understand not wanting to pick sides, but it is clear who is in the wrong here. This is a systemic issue for LTT, not just a single mistake; it is a policy of negligence and arrogance. Any credibility LTT had has been utterly nullified.

Not because they made mistakes, everybody does, but because of how they handled those mistakes. Instead of apologizing and accepting blame, they doubled down on multiple occasions. And they did not remove videos with egregious mistakes that could mislead people, and possibly did, because they didn't want to lose any ad revenue.

There is no way back from here for Linus; he clearly stated he doesn't want to commit a few extra man-hours to making sure what they show is good consumer advice. So there is no reason to take anything LTT publishes seriously. And even if they vow to change, I wouldn't believe it; a zebra can't change its stripes.
 
There are a lot of tech tubers doing reaction videos to the news, so where's the "The FPS Review Reacts" video?? Gotta pump up those views!
 
Yeah, I have to agree. With all of the talk about LTT Labs enabling superior testing, this is the result, and Linus's kind-of-BS response was poorly delivered.

LTT needs to just own that mistakes were made and that, with the rollout of LTT Labs, their goal is to be a trusted source. To that end, they should review their policies and checks before publishing any data as they break new ground with both a labs division and an entertainment division.

This is something the new CEO needs to get in front of, not Linus. Linus is a technically minded personality, but he is not the most technically proficient resource and hasn't been for some time. I enjoy his enthusiasm and personality, but some of his technical presentations, especially as of late, are cringeworthy.
 
We have truly moved into a new age. Shall we now say that America, perhaps the world, is living in a Trumptopia, where information is a weapon and gaslighting is expected?

I'm old enough to remember the '60s and '70s, when media sources were formal, apologized, and worried about their image. Not that those were perfect days; they weren't. It'd be so nifty if we could have the best of every decade.
 
There are a lot of tech tubers doing reaction videos to the news, so where's the "The FPS Review Reacts" video?? Gotta pump up those views!

Lol, I really wouldn't have anything to add to the conversation that hasn't already been stated. I think the best way to handle it, actually, is to talk more about our own testing methods and ensure we are doing the best job we can with benchmarking. I will always strive to do so.
 
Lol, I really wouldn't have anything to add to the conversation that hasn't already been stated. I think the best way to handle it, actually, is to talk more about our own testing methods and ensure we are doing the best job we can with benchmarking. I will always strive to do so.
I think they were being sarcastic. I would prefer you guys stay out of that mess. Even though the name of the site is thefpsreview, I feel like things here are more professional and cordial, even when madmummy and I tend to get snarky.

I look forward to the content here. Your role as a provider of tech info is not to review other review sites; that's where I think Gamers Nexus went wrong. They should have just submitted their concerns and cross-posted them. But LTT threw down the gauntlet by calling out Gamers Nexus, and Gamers Nexus decided to pull a FAFO. Basically, this all boils down to those in glass houses throwing stones.

If you want to be a source of good or great data for your target audience, owning that is what you do. Getting embroiled in a pissing match between YouTube channels is stupid no matter who you are, even if it's entertaining to observe from the outside and commentate on the action on the field. It's a mess, it's distracting from what the channels are actually about, and it's doing little more than making LTT, for the most part, look like disingenuous fools. But it isn't being kind to Gamers Nexus either.
 
To add to the discussion, Hardware Unboxed has a lengthy podcast about this subject in which they mainly talk about benchmarking practices. It's a good watch; I watched the whole thing.

 
I think it's very important to comment on how things are handled by each outlet. LMG trying to be fully automated with what appears to be almost no oversight was a giant red flag already, and then there was Linus's attitude toward a product he had already decided was **** before 'reviewing' it, and then ****ing over that company and possibly giving that prototype to a competitor after promising for a month to return it.

LTT's response video was decent, but they could have done without the fake humor in it. This isn't a funny situation.
 
I would prefer you guys stay out of that mess. Even though the name of the site is thefpsreview, I feel like things here are more professional and cordial.
Yep, I was hard-pressed for time with this one and made a few more mistakes than I normally would, but I tried very hard to stay neutral in presenting the facts. I may not have been totally neutral, but I aimed for that. It's a mess to be sure, and I doubt it's over yet.
 
This SHOULD be a flash in the pan, something to say: this helped ALL review channels/sites re-examine their data and make sure they are properly testing and vetting what they deliver to consumers. AND the whole practice of super-rushed reviews should be MARKETED as super rushed, with potentially inaccurate data due to the lack of quality controls on the published data.

What I mean is... pick your channel: "Hi guys, gals, and others, the embargo has lifted on X company's new X. We have put together review data and promise we have done the best we can in the window of time we were allotted. We are doing a deeper dive on this product, with what we consider a more complete review coming! We will do this as long as we feel we can improve the quality of a review with a deeper dive after the embargo lift date. If you find any issues with the data, please let us know! ON TO THE REVIEW!"
 
I think it's very important to comment on how things are handled by each outlet. LMG trying to be fully automated with what appears to be almost no oversight was a giant red flag already, and then there was Linus's attitude toward a product he had already decided was **** before 'reviewing' it, and then ****ing over that company and possibly giving that prototype to a competitor after promising for a month to return it.

LTT's response video was decent, but they could have done without the fake humor in it. This isn't a funny situation.

I would like to speak on the topic of automation of benchmarking.

Automation of benchmarking can, naturally, make things more efficient, speed up the process, and allow one (at least in my opinion) to include more comparison data, which is always welcome. The current method of doing everything manually limits the comparisons one can make in each review, especially for launches.

That said, there are very serious inherent dangers in automation. The first is that settings can be set wrong by mistake; this very thing happened in a recent Linus review, where a wrong setting in Cyberpunk (changed by a patch) caused his data to simply be wrong. Game patches happen very frequently, and sometimes how settings are applied, and which settings are applied, changes. Automation cannot keep up with this and DOES make mistakes.
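
To illustrate (this is a generic sketch, not any outlet's actual pipeline), a guardrail like the following can catch settings drift before a run; the game, file path, and setting names are all hypothetical:

```
# Minimal sketch of a pre-run settings check, assuming a hypothetical game
# that stores its settings as JSON. Real games use INI files, registry keys,
# or proprietary formats, and a patch can change the schema itself, which
# is exactly the failure mode described above.
import json
import sys
from pathlib import Path

SETTINGS_FILE = Path("C:/Games/ExampleGame/settings.json")  # hypothetical path
EXPECTED = {
    "resolution": "3840x2160",
    "preset": "ultra",
    "ray_tracing": True,
    "upscaler": "off",  # a patch silently flipping this would skew every run
}

def verify_settings() -> bool:
    actual = json.loads(SETTINGS_FILE.read_text())
    mismatches = {
        key: (want, actual.get(key))
        for key, want in EXPECTED.items()
        if actual.get(key) != want
    }
    for key, (want, got) in mismatches.items():
        print(f"MISMATCH {key}: expected {want!r}, found {got!r}")
    return not mismatches

if __name__ == "__main__":
    if not verify_settings():
        sys.exit("Aborting: settings drifted, benchmark data would be invalid.")
    print("Settings verified, safe to start the benchmark pass.")
```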

The other danger of automation is simply being removed from the process itself and missing problems and issues that arise from actual gameplay. If you are not playing the game, you are missing issues such as texture, geometry, or detail load-in due to VRAM constraints. You are also missing the BIG picture of frametime and frame-pacing smoothness in games. By removing yourself from the data collection, you are inherently removing yourself from the gameplay experience, and you are no longer relating that experience to the end user. The question then becomes: how relevant and informative is your review, really?
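
To make the frametime point concrete, here is a toy example of why averages alone can hide stutter; the numbers are invented, and the "1% low" calculation is just one common definition:

```
# Minimal sketch of frametime analysis. The numbers are made up; real logs
# come from capture tools such as PresentMon or CapFrameX. "1% low" here
# uses one common definition: average FPS over the slowest 1% of frames.
from statistics import mean

frametimes_ms = [16.7, 16.5, 16.9, 17.1, 16.6, 45.2, 16.8, 16.7, 17.0, 16.6]

def fps(ft_ms: float) -> float:
    return 1000.0 / ft_ms

avg_fps = fps(mean(frametimes_ms))

# Slowest 1% of frames (just the single worst frame in this tiny sample).
worst = sorted(frametimes_ms, reverse=True)
slowest_count = max(1, len(worst) // 100)
one_percent_low = fps(mean(worst[:slowest_count]))

print(f"Average FPS: {avg_fps:.1f}")         # ~51.0
print(f"1% low FPS:  {one_percent_low:.1f}")  # ~22.1
# The single 45 ms spike barely moves the average but craters the 1% low,
# which is why a chart of averages alone can hide a stuttery experience.
```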

I will emphasize that there are very big pros to automation, very big, but also very big cons. I think both need to be understood and weighed. Whatever method an outlet employs should be 100% transparent to the reader in terms of how that data is collected.
 
This is the response that they should have led with:


I'm glad they got here in very short order.
 
That said, there are very serious inherent dangers in automation. The first is that settings can be set wrong by mistake; this very thing happened in a recent Linus review, where a wrong setting in Cyberpunk (changed by a patch) caused his data to simply be wrong. Game patches happen very frequently, and sometimes how settings are applied, and which settings are applied, changes. Automation cannot keep up with this and DOES make mistakes.

The other danger of automation is simply being removed from the process itself and missing problems and issues that arise from actual gameplay. If you are not playing the game, you are missing issues such as texture, geometry, or detail load-in due to VRAM constraints. You are also missing the BIG picture of frametime and frame-pacing smoothness in games. By removing yourself from the data collection, you are inherently removing yourself from the gameplay experience, and you are no longer relating that experience to the end user. The question then becomes: how relevant and informative is your review, really?
All this 1,000 f*cking percent.
 
This is the response that they should have led with:


I'm glad they got here in very short order.
I think it's fixed by now, but their apology video showed the unredacted dollar amount for the block that Billet Labs didn't want posted publicly. So they rushed that too. 🙃
 
I would like to speak on the topic of automation of benchmarking.

Automation of benchmarking can, naturally, make things more efficient, speed up the process, and allow one (at least in my opinion) to include more comparison data, which is always welcome. The current method of doing everything manually limits the comparisons one can make in each review, especially for launches.

That said, there are very serious inherent dangers in automation. The first is that settings can be set wrong by mistake; this very thing happened in a recent Linus review, where a wrong setting in Cyberpunk (changed by a patch) caused his data to simply be wrong. Game patches happen very frequently, and sometimes how settings are applied, and which settings are applied, changes. Automation cannot keep up with this and DOES make mistakes.

The other danger of automation is simply being removed from the process itself and missing problems and issues that arise from actual gameplay. If you are not playing the game, you are missing issues such as texture, geometry, or detail load-in due to VRAM constraints. You are also missing the BIG picture of frametime and frame-pacing smoothness in games. By removing yourself from the data collection, you are inherently removing yourself from the gameplay experience, and you are no longer relating that experience to the end user. The question then becomes: how relevant and informative is your review, really?

I will emphasize that there are very big pros to automation, very big, but also very big cons. I think both need to be understood and weighed. Whatever method an outlet employs should be 100% transparent to the reader in terms of how that data is collected.

If you manage critical systems and infrastructure, even at enterprise scale, automation should SCARE THE **** OUT OF YOU. Not because it makes things simpler, but because ONE mistake can have a VAST, DRASTIC impact on the health of your most critical environments. Or, in this case, review results.

I work for a specific group in a larger company that is involved in the safety, and in many cases the very lives, of our customers. We impact millions, if not tens of millions, of lives every day. We do NOT want automation. Our server farm, due to the nature of the data, is smaller than you might think: just over a couple hundred server entities (virtual included). We do NOT automate the care and feeding beyond moving specific data where it can be accessed for speedier application once approved. We have a team dedicated to taking care of our servers one at a time, with human eyes, because unlike with automation, one mistake impacts one server. Whereas if we did it with automation, one mistake could take out an entire class or role of server, and that is much more detrimental to the health of our environment and our customers.
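
For a sense of what "one at a time, with human eyes" means compared to fleet-wide automation, here is a deliberately simplified sketch; the hosts and helper functions are placeholders, not our actual tooling:

```
# Minimal, hypothetical sketch of a serial rollout: patch one host, verify
# it, and only then move to the next, so one mistake affects one machine
# instead of a whole class of servers. Host names and both helpers are
# placeholders, not real tooling.
import subprocess

HOSTS = ["app01", "app02", "app03"]  # hypothetical inventory

def apply_patch(host: str) -> bool:
    # Placeholder for the real work (package update, config push, etc.).
    return subprocess.run(["ssh", host, "true"], timeout=60).returncode == 0

def health_check(host: str) -> bool:
    # Placeholder: hit a service endpoint, check a process, and so on.
    return subprocess.run(["ssh", host, "uptime"], timeout=60).returncode == 0

for host in HOSTS:
    if not (apply_patch(host) and health_check(host)):
        print(f"{host} failed; halting rollout with a blast radius of one.")
        break
    print(f"{host} patched and healthy, moving on.")
else:
    print("All hosts updated, one at a time, with eyes on each step.")
```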

Others have suggested we use more automation than we do, and we push back against it every time. Yes, it's more expensive to do it the way we do it, but we are a small team that handles the full lifecycle of our environment. Because of that, even when larger mistakes outside our control take down more of our environment than we would have thought possible, we are able to soldier on without mass interruption to our customers and their literal lives. It's worth it to do it right.

With automation being SO MUCH of a timesaver, it is always a battle. Yet when we see automation destroy less critical parts of a business and those teams have to scramble to recover, we just nod to ourselves and keep plugging away, doing it the way we feel is right.

As Mel Gibson said in The Patriot: "Aim small, miss small."
 
I think it's fixed by now, but their apology video showed the unredacted dollar amount for the block that Billet Labs didn't want posted publicly. So they rushed that too. 🙃
Yeah, I saw the comment on that, and it was fixed in VERY short order. I would say that shouldn't be the focus; they had to get this video out ASAP.
 
We have put together review data and promise we have done the best we can in the window of time we were allotted. We are doing a deeper dive on this product, with what we consider a more complete review coming! We will do this as long as we feel we can improve the quality of a review with a deeper dive after the embargo lift date. If you find any issues with the data, please let us know! ON TO THE REVIEW!"

Quite frankly, it's very, very difficult to do it this way simply because of how search engine optimization works. Publishing two "reviews" of a single product creates duplicate content (especially on the written side of things) in the eyes of Google/Bing, and the two pieces will compete with each other and end up sinking both in the organic rankings. Alternatively, one could replace the initial review with the final review, but at that point you might as well just take the extra time to do it right the first time.

The other trend I see with many of the manufacturers is that their job is to get early press exposure for whatever gets launched. If you can't deliver reviews promptly, you end up getting deprioritized on the review sample list, which matters for getting pre-launch samples of products.

Ultimately, what ends up happening is that we look at the amount of time we have and the comparisons we can reasonably make, then decide the best way to present the launch content to our readers. You've probably noticed that sometimes our launch reviews are exhaustive, with a lot of data and comparisons, and other times they are closer to a drive-by; that's usually a function of the amount of time we had multiplied by competing priorities.
 
We do NOT want automation.
This is interesting.

Not saying you're wrong or that I disagree, so don't take this that way.

In the industry I work in, we ~want~ as much automation as possible, because the human factor is the most common source of error. When things go wrong on power infrastructure, they happen too fast for humans to react in real time. Ideally, you prevent a fault from happening in the first place, and if/when one does occur, you let all the automated safety and protection act as designed, then you go in, see where you are, and pick up the pieces to recover. It can be hours, heck, it can be months or years, of absolutely nothing happening at all, and human eyeballs and reactions on that time scale just get complacent. So we automate as much as we safely and economically can. We still have boots on the ground every day doing daily checks, but almost all of the operation is handled by an algorithm.

Not to say the algorithm can't make a mistake; it absolutely can. But it makes it once, then the problem gets identified and corrected so it won't happen again (at least not in the same manner). With people, that isn't the case, and that's assuming the people are telling the truth about what they did and what happened, which isn't a given. Between the time it takes to train someone to do the job, the potential for turnover, and the loss of all the accumulated knowledge of a previous operator plus the time and investment it takes to train a new one to an acceptable standard... it's crazy.

That said, I did a similar job in the military, and there absolutely ~nothing~ was automated except the most basic and coarse safety measures. And if a safety measure was actuated, people didn't just get in trouble; it would end entire careers. That created a pretty toxic work environment, but that was the way it had to be to ensure the highest level of personal accountability and reliability. The personnel turnover thing was a huge issue: even with very large bonuses and big benefits (relative to the rest of the military), you still couldn't fill every position, and it would take more than two years to train a new operator just to basic preliminary standards.

So I've seen both sides of the coin. Both have big pros and cons. Where I work now, we lean on automation. It may not take every variable or dynamic situation into account the way a human can, but once you identify a missing case, you only have to update your program once and that particular problem is taken care of permanently, and you don't have issues of complacency or honesty (I can't count how many days or weeks of troubleshooting and investigation we've done only to find out someone lied about what happened to try to save their butt, but in the end the truth always comes out in the wash). The turnover/training issue is probably the biggest driver there, though, because labor is so hard to come by and talent is hard to keep.
 