
Microsoft using AI to tackle community safety in voice chats, but there's still a long way to go

Reports improved behaviour over the last year.

Valorant artwork showing multiple player characters posing against a red background (Image credit: Riot)

Microsoft is utilising AI to help identify harmful content and keep the gaming community safe when playing games online.

In a blog post for its Xbox Transparency Report, the company reiterated its "advancements in promoting player safety" and "commitment to transparency", as well as its "progressive approach to leveraging AI".

There are two specific ways in which AI is being used: auto labelling and image pattern matching.


The former uses AI to identify harmful words or phrases in player reports, allowing community moderators to quickly filter out false reports and focus on the most critical cases.
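Microsoft hasn't published implementation details, but the triage idea is simple to sketch: a classifier assigns each report a harm score, and moderators review the highest-scoring cases first. The Python sketch below is purely illustrative; the scoring function, flagged terms and threshold are all hypothetical stand-ins for whatever Xbox actually runs.

from dataclasses import dataclass

@dataclass
class PlayerReport:
    report_id: str
    text: str

def harm_score(text: str) -> float:
    # Stand-in for a trained text classifier; returns a score in [0, 1].
    flagged_terms = {"example_slur", "example_threat"}  # hypothetical labels
    hits = sum(1 for word in text.lower().split() if word in flagged_terms)
    return min(1.0, hits / 3)

def triage(reports: list[PlayerReport], threshold: float = 0.5):
    # Score each report once, then split so moderators see the
    # likely-genuine cases first and false reports fall to the back.
    scored = [(harm_score(r.text), r) for r in reports]
    urgent = [r for score, r in scored if score >= threshold]
    low_priority = [r for score, r in scored if score < threshold]
    return urgent, low_priority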

The latter similarly uses image matching techniques to rapidly identify and remove harmful and toxic imagery.
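Again, Xbox hasn't said which technique underpins its image matching, but systems like this are commonly built on perceptual hashing: each known harmful image is hashed once, and new images are compared by bit distance, so near-duplicates (resized, recompressed, lightly edited) still match. Here's a minimal sketch assuming the Pillow imaging library; the hash size and distance threshold are illustrative only.

from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    # Downscale to a size x size greyscale thumbnail, then set one bit
    # per pixel depending on whether it's brighter than the mean.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def matches_known_bad(path: str, bad_hashes: set[int], max_distance: int = 5) -> bool:
    # A small hamming distance means the images are visually near-identical,
    # even if the file bytes differ entirely - which is why hash distance
    # beats exact file matching for this job.
    h = average_hash(path)
    return any(hamming_distance(h, bad) <= max_distance for bad in bad_hashes)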

The use of AI here follows last year's introduction of a new voice reporting feature, which allows Xbox players to capture video clips of in-game voice chat to submit to the Xbox Safety Team for review.

Xbox says it has captured 138,000 voice records since the feature launched. Where those reports resulted in an enforcement strike, 98 percent of players showed improved behaviour and received no further enforcement.

Last year Xbox also introduced an enforcement strike system, which matches bad behaviour with a strike of appropriate severity. Since launch, 88 percent of players who received a strike have not received another.

It's a positive step, then, with AI reducing the human cost of reviewing toxic content.

Still, toxicity and misogyny remain widespread in gaming voice chat. A post from Twitch streamer Taylor Morgan went viral this week due to the abuse she received while playing a match of Riot's Valorant (be warned: the clip contains strong language).

"The suspensions are not enough. Nothing will ever stop these men from acting this way until hardware bans go into play. They should never be able to play the game again," she posted on X, formerly Twitter.

In a follow-up post, she revealed she had herself received a penalty for leaving the game early.

Examples like this highlight that there is still a long way to go to make playing games online safer. And while Microsoft's use of AI here is seemingly positive, it's worth remembering the company last year laid off its entire AI ethics and society team as part of 10,000 job losses across the company (as highlighted at the time by The Verge). Its Office of Responsible AI does remain, however, and has published a Responsible AI Standard guide.

Microsoft is certainly pushing the benefits of AI elsewhere. In November last year it announced a "multi-year co-development partnership" with Inworld AI to provide Xbox's studios with AI tools such as a design copilot and a character runtime engine.

Such generative AI tools have previously been met with "reverence and rage", as Eurogamer reported from last year's GDC.

More recently, a report by Unity claimed that 62 percent of studios are already using AI, with animation the most common application.
