Artificial intelligence appears to be making headway in sanitizing the online multiplayer experience. Within its first weeks of operation, Minerva, an AI developed by FACEIT and Google, banned 20,000 Counter-Strike: Global Offensive players after analyzing more than 200 million chat messages and deeming 7 million of them "toxic." Toxic language has reportedly dropped by as much as 20% since Minerva's introduction.
“If a message is perceived as toxic in the context of the conversation,” FACEIT explains in a blog post, “Minerva issues a warning for verbal abuse, while similar messages in a chat are flagged as spam. Minerva is able to take a decision just a few seconds after a match has ended: if an abuse is detected it sends a notification containing a warning or ban to the abuser.”
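The workflow FACEIT describes — warn on a first toxic message, flag repeats as spam, then issue a warning or ban notification shortly after the match ends — can be sketched as a simple moderation state machine. This is purely illustrative: the keyword check, the `Moderator` class, and the escalation rules below are hypothetical stand-ins, not Minerva's actual model or policy.

```python
from dataclasses import dataclass, field

# Hypothetical keyword list standing in for a context-aware toxicity model.
TOXIC_WORDS = {"idiot", "trash"}

def is_toxic(message: str) -> bool:
    """Placeholder classifier; Minerva uses context, not keywords."""
    return any(word in message.lower() for word in TOXIC_WORDS)

@dataclass
class Moderator:
    warnings: dict = field(default_factory=dict)    # user -> warning count
    spam_flags: dict = field(default_factory=dict)  # user -> repeat offenses

    def handle(self, user: str, message: str):
        """Process one chat message; return the action taken, if any."""
        if not is_toxic(message):
            return None
        if user in self.warnings:
            # Similar messages after a warning are treated as spam.
            self.spam_flags[user] = self.spam_flags.get(user, 0) + 1
            return "spam"
        self.warnings[user] = 1
        return "verbal abuse warning"

    def end_of_match(self, user: str):
        """Seconds after the match ends, send a warning or ban notification."""
        if self.spam_flags.get(user, 0) > 0:
            return f"ban notification for {user}"
        if user in self.warnings:
            return f"warning notification for {user}"
        return None
```

A run through one match shows the escalation: a clean message passes, the first toxic message draws a warning, a repeat is flagged as spam, and the post-match check converts the repeat offense into a ban notification.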