I would like to speak on the topic of automation of benchmarking.
Automation of benchmarking can, naturally, make things more efficient, speed up the process, and (at least in my opinion) allow outlets to include more comparison data, which is always welcome. The current method of doing everything manually limits the comparisons one can realistically fit into each review, especially for launches.
That said, there are very serious inherent dangers in automation. The first is that settings can be set wrong by mistake; this very thing happened in a recent Linus review, where a wrong setting in Cyberpunk (caused by a patch change) made his data simply wrong. Games are patched very frequently, and sometimes how settings are applied, and which settings exist, changes; automation cannot always keep up with this and DOES make mistakes.
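One partial mitigation is to have the automation verify the game's settings on disk before every run instead of trusting that nothing has changed since the last patch. Here is a minimal sketch of that idea in Python; the file path, JSON format, and setting names are hypothetical placeholders (not the actual Cyberpunk config), and a real harness would have to match whatever format the game actually uses, which, as noted above, can itself change with a patch.

```python
import json
import sys
from pathlib import Path

# Hypothetical locations and names: the real settings file, format, and
# keys depend on the game and can change between patches.
SETTINGS_FILE = Path("config/user_settings.json")
EXPECTED = {
    "TextureQuality": "High",
    "RayTracing": True,
    "Upscaling": "Quality",
}

def find_mismatches(path: Path, expected: dict) -> list[str]:
    """Compare the on-disk settings against the values this run expects."""
    actual = json.loads(path.read_text())
    return [
        f"{key}: expected {want!r}, found {actual.get(key)!r}"
        for key, want in expected.items()
        if actual.get(key) != want
    ]

if __name__ == "__main__":
    problems = find_mismatches(SETTINGS_FILE, EXPECTED)
    if problems:
        # Refuse to run rather than silently collect data with wrong settings.
        print("Settings mismatch, aborting benchmark:")
        print("\n".join(problems))
        sys.exit(1)
    print("Settings verified, starting benchmark run.")
```

A check like this would have flagged the Cyberpunk case the moment the patch reset the setting, rather than letting wrong data reach publication.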
The other danger of automation is simply being removed from the process itself, and missing problems that arise from actual gameplay. If you are not playing the game, you are missing issues such as texture load-in, geometry pop-in, or detail load-in due to VRAM constraints. You are also missing the BIG picture of frametime and frame pacing smoothness in games. By removing yourself from the data collection, you are inherently removing yourself from the gameplay experience, and you are no longer relating that experience to the end user. The question then becomes: how relevant and informative is your review, really?
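To be fair, some of the frame pacing picture can be recovered from the captured data itself, even if it never replaces hands-on time. As a rough illustration, here is a short Python sketch that summarizes a PresentMon-style frametime capture; the filename and the "MsBetweenPresents" column are assumptions about the capture tooling, and column names differ between tool versions. Poor 1% lows or high frametime variance flag stutter that an average FPS number hides, but they still will not tell you about texture pop-in or other visual issues only a player would notice.

```python
import csv
import statistics
from pathlib import Path

def frametime_summary(capture: Path) -> dict:
    """Summarize a frametime capture CSV.

    Assumes an 'MsBetweenPresents' column (PresentMon-style); adjust
    the column name for whatever your capture tool writes.
    """
    with capture.open(newline="") as f:
        frametimes = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

    frametimes.sort()
    n = len(frametimes)
    avg_ms = sum(frametimes) / n
    # 1% low: average FPS over the slowest 1% of frames (largest frametimes).
    worst = frametimes[int(n * 0.99):]
    return {
        "avg_fps": 1000.0 / avg_ms,
        "1pct_low_fps": 1000.0 / (sum(worst) / len(worst)),
        # High deviation relative to the mean signals uneven frame pacing
        # even when the average FPS looks healthy.
        "frametime_stdev_ms": statistics.stdev(frametimes),
    }

if __name__ == "__main__":
    print(frametime_summary(Path("capture.csv")))
```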
I will emphasize: there are very big pros to automation, very big, but also very big cons. I think both need to be understood and acknowledged. Whatever method an outlet employs should be 100% transparent to the reader in how that data is collected.