Twitch will act on “serious” off-platform crimes

Twitch is finally reckoning with its responsibility as a kingmaking micro-celebrity machine, not just a service or a platform. Today, the Amazon-owned company announced an official, public policy for investigating streamers’ serious indiscretions in real life, or on services such as Discord or Twitter.

In June of last year, dozens of women came forward with allegations of sexual misconduct against prominent Twitch video game streamers. On Twitter and other social media, they shared harrowing experiences of streamers leveraging their relative fame to push boundaries, resulting in serious personal and professional harm. Twitch eventually banned or suspended several accused streamers, some of whom were “partnered,” or able to receive money through Twitch subscriptions. At the same time, Twitch’s #MeToo movement raised larger questions about what responsibility the service bears for the actions of its most visible users, both on- and off-stream.

In the course of investigating those troublesome users, Twitch COO Sara Clemens tells WIRED, Twitch’s law enforcement response and moderation teams discovered how difficult it is to review and make decisions based on users’ behavior IRL or on other platforms such as Discord. “We realized that not having a policy to look at off-service behavior was creating a threat vector for our community that we had not addressed,” says Clemens. Today, Twitch is announcing its solution: an off-services policy. In partnership with a third-party law firm, Twitch will investigate reports of offenses such as sexual assault, extremist behavior, and threats of violence that occur off-stream.

“We’ve been working on this for a while,” says Clemens. “It’s definitely an unexplored space.”

The move puts Twitch at the forefront of efforts to ensure that not just the content, but also the people who create it, are safe for the community. (The policy applies to everybody: partners, affiliates, and even relatively unknown streamers.) For years, sites that support digital celebrities have banned users over indiscretions committed off-platform. In 2017, PayPal cut off a swath of white supremacists. In 2018, Patreon removed anti-feminist YouTuber Carl Benjamin, known as Sargon of Akkad, for racist speech on YouTube. Meanwhile, sites that directly cultivate or rely on digital celebrity tend not to rigorously police their most famous or influential users, especially when those users relegate their problematic behavior to Discord servers or industry gatherings.

Despite never publishing formal policies, services such as Twitch and YouTube have in the past deplatformed users they believed were harming their communities over things those users said or did elsewhere. In late 2020, YouTube announced it had temporarily demonetized the prank channel NELK after its creators threw parties at Illinois State University when the state’s social gathering limit was 10. Actions like these, and public statements about them, are the exception rather than the rule.

“Platforms sometimes have special mechanisms for escalating this,” says Kat Lo, moderation lead at the nonprofit technology literacy company Meedan, referring to the direct lines that high-profile users often have to company employees. She says off-service moderation has been happening on the largest platforms for at least five years. In general, though, companies do not advertise these processes, and rarely formalize them. “Investigating off-platform behavior requires a large investigative capacity, finding evidence that can be verifiable. It’s difficult to standardize.”

In the second half of 2020, Twitch received 7.4 million user reports for “all types of violations” and acted on reports 1.1 million times, according to its recent transparency report. During that period, Twitch acted on 61,200 instances of alleged hateful conduct, sexual harassment, and harassment. It’s a heavy lift. (Twitch also acted on 67 cases of terrorism and escalated 16 cases to law enforcement.) Although they account for a large portion of user reports, harassment and bullying are not included in the list of behaviors Twitch will begin investigating off-platform unless they also occur on Twitch. The off-services behaviors that will trigger investigations include what Twitch’s blog post calls “serious offenses that pose a substantial safety risk to the community”: deadly violence and violent extremism, explicit and credible threats of mass violence, membership in a hate group, and so on. Although harassment and bullying are not included for now, Twitch says its new policy is designed to scale.
