In 2019, Blizzard announced that it had begun using machine learning to moderate in-game chat in Overwatch. Blizzard President J. Allen Brack revealed in a video from the company's Fireside Chat series that the technology has led to a dramatic reduction in toxic behavior.
According to the company, games should offer a positive atmosphere for everyone, regardless of identity or background. Machine learning lets the system review player reports of abusive behavior faster than human moderators could. The technology has been in use for some time in both Overwatch and Heroes of the Storm.
"AI allows us to impose penalties for bad behavior faster, and we're seeing incredible reductions not only in chat toxicity but in reoffending overall," Brack said. "A few months ago, we expanded this system to World of Warcraft's public channels; toxic behavior there has already been cut in half, and we're continuing to improve the system's speed and accuracy."
Blizzard also recently tightened penalties for toxic behavior in Overwatch and added more flexible profanity filters that offer three levels of "accepted language," each of which can be further customized.
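The article does not describe how these filter levels work internally, but the idea of tiered, configurable word filtering can be sketched in a few lines. The level names, word lists, and function below are purely hypothetical placeholders, not Blizzard's actual implementation:

```python
import re

# Hypothetical sketch of a three-level chat filter, loosely modeled on the
# "accepted language" settings described above. Level names and the words
# in each blocklist are invented for illustration only.
FILTER_LEVELS = {
    "mature": set(),                # most permissive: nothing is masked
    "moderate": {"darn"},           # masks stronger words (placeholder entry)
    "friendly": {"darn", "heck"},   # strictest: masks mild words as well
}

def filter_message(message: str, level: str = "moderate") -> str:
    """Replace each blocked word with asterisks of the same length."""
    blocked = FILTER_LEVELS[level]

    def mask(match: re.Match) -> str:
        word = match.group(0)
        return "*" * len(word) if word.lower() in blocked else word

    # Match whole words so substrings inside clean words are left alone.
    return re.sub(r"[A-Za-z']+", mask, message)
```

A player choosing the strictest level would see `filter_message("heck no", "friendly")` return `"**** no"`, while the same message passes through untouched at the `"mature"` level. The "further customized" option the article mentions would amount to letting players add or remove words from their active blocklist.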
"These are small steps, but they can lead to major changes," Brack added. "Fighting toxic behavior and promoting inclusivity in all of our games and our workplaces will always be important to us."
Overwatch 2 and Diablo IV are currently in development at Blizzard, and work is also underway on the Shadowlands expansion for World of Warcraft, which arrives on PC on November 24.