2016-10-25



In its early years, Twitter refused to address harassment concerns on free speech grounds. But the company changed its tune a few years ago, and now it insists it’s doing everything it can to fight online harassment.

Yet as many users of the platform know, little has come of it, and enforcement of the new policies has been terribly inconsistent.

The extreme levels of harassment on the site and the inconsistencies in policy enforcement gained national attention this July, when Ghostbusters star Leslie Jones threatened to leave Twitter after facing a barrage of racist and sexist tweets.

Twitter responded by permanently banning professional alt-right troll Milo Yiannopoulos, one of Jones’s worst harassers, and Twitter co-founder and CEO Jack Dorsey acknowledged that the site “needs to do better.” He announced a new formal process for anyone to get a blue check mark — a symbol that verifies that a Twitter account is authentic. In mid-August the platform announced “New Ways to Control Your Experience on Twitter,” which included changes to “notification” settings as well as a new “quality” filter.

Yet these steps don’t seem to have fixed the problem. In August, BuzzFeed published an exposé on the company’s “inaction and organizational disarray” when it came to dealing with abuse. And my conversations with Twitter users who have been subjected to abuse suggest the new tools are as clumsy as ever. When they do work, they stop abuse, but at the cost of cutting users off from the positive features of the site.

The fundamental problem, as one subject of harassment on Twitter told me, is that Twitter still doesn’t seem to see fighting abuse as a core part of the product.

“If you don’t build content moderation in from day one and you tack it on after the fact, you’re facing a losing battle,” says Susan, an active Twitter user who preferred to use a pseudonym because of the volume of online harassment she has received due to her political work. It’s a lesson Twitter still needs to learn.

Twitter’s current anti-abuse features are too clumsy

Twitter’s early refusal to deal with abuse derived from an attitude, common at the time, that viewed the internet — and Twitter’s platform in particular — as a forum for unfettered free speech. That remained the company’s position until July 2013, when the platform met with a wave of criticism after British feminists received rape threats and hate speech through the site and had no recourse to report or stop it.

The incident finally prompted Twitter to not only change its policy but also add a “Report Abuse” button alongside the “Report Spam” button. Previously, users wishing to “flag” offensive content were required to fill out an extensive web ticket for customer service.

As high-profile, and often sexist, abuse rose on Twitter, the platform continued to make changes to its policy in an effort “to reduce abuse, including prohibiting indirect threats and nonconsensual nude images.” But while Twitter talked a lot about its anti-harassment efforts, the company made few significant changes until the summer of 2016.

In mid-July, perhaps prompted by the Leslie Jones fiasco, Twitter announced an expansion of its blue check mark program. Previously, only celebrities and media professionals could get a blue check mark next to their name to verify that they were the people they purported to be. Now any Twitter user can apply to have their identity verified. Verification fights harassment by tying people’s real identities to their accounts, diminishing the risk of anonymous harassment and supposedly enhancing the quality of discussion.

Before July, verification was even more significant because only verified users had access to anti-abuse features. But about a month after announcing the “transparent” verification process, Twitter rolled out two powerful new features: a “quality” filter, which aimed to algorithmically hide spammy tweets, and a “notifications” filter, which allowed a user to disable notifications of mentions and replies that came from people they didn’t follow.
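
Mechanically, both features are simple gates on the notification stream. Here is a minimal sketch of that logic in Python, assuming hypothetical field names and a quality score whose real model Twitter has not published:

```python
from dataclasses import dataclass

@dataclass
class Notification:
    author_id: int        # who mentioned or replied to the user
    quality_score: float  # 0.0 (spammy) to 1.0 (high quality); scoring model unknown

def visible_notifications(notifications, followed_ids,
                          follows_only=False, quality_filter=False,
                          threshold=0.5):
    """Apply the two 2016 filters: follows-only notifications and the quality filter."""
    visible = []
    for n in notifications:
        # "Notifications" filter: drop mentions/replies from accounts the user doesn't follow.
        if follows_only and n.author_id not in followed_ids:
            continue
        # "Quality" filter: drop tweets the (hypothetical) model scores as low quality.
        if quality_filter and n.quality_score < threshold:
            continue
        visible.append(n)
    return visible
```

Note that nothing in a design like this tells users what was dropped, which is exactly the opacity Susan describes below.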

So how have those new changes to settings been working out? A little too well, perhaps.

“I turned on the blue check mark and turned off all of the notification settings, and all of a sudden things got very quiet,” says Susan. “Between notification settings turned on and the ‘low quality’ filter on, it’s impossible to tell what you’re seeing and not seeing.”

What Susan no longer sees is not just the deluge from her frequent harassers riling up other Twitter users to attack her. She (happily) no longer sees the expletives, the trolling, or the comments about her looks. But she also no longer sees the good things Twitter brought her: connections with like-minded people and colleagues, and positive reactions to her work.

Jessica Valenti, who left all social media a few months ago after threats were made against her daughter on Instagram, doesn’t quite know what to make of the new Twitter features. Valenti, who founded the pioneering feminist site Feministing, is sadly an old hand at online harassment.

“It’s just not the same platform,” she said in a phone call in late September, speaking of her diminished positive engagement on the site. “I’m glad not to see all the hate every day, but it’s sanitized.” Twitter used to be a fire hose. Today, if you engage all the filters, it’s more of a trickle.

The loss of Twitter’s usefulness as a way of connecting socially and professionally was one of the biggest complaints from the people I talked to. If you only get notifications from people you already follow, you can’t easily or naturally expand your network or hear smart new voices — which is a big part of what made Twitter appealing in the first place.

Twitter’s defenders resist further anti-harassment measures with the reasoning that abuse is simply an inevitable consequence of the openness of the platform. In contrast to more closed platforms like Facebook and Instagram, Twitter is designed to facilitate open-ended conversations among total strangers. Free speech purists insist that if you want to use a platform like that, you just have to take the good with the bad.

But it’s also possible that this apparent trade-off between harassment and censorship is actually a symptom of Twitter’s underinvestment in resources to create an anti-abuse infrastructure. Users face a choice between getting harassed and missing valuable interactions because Twitter has built clumsy tools that don’t give users better options. And there’s plenty of reason to think Twitter could be doing better.

Twitter needs to view fighting abuse as an essential feature

Facebook is a fundamentally different platform from Twitter in some ways, but it still offers a valuable example. Facebook has prioritized content moderation and anti-harassment measures from early in its history.

That means it has an enormous and well-developed set of policies and procedures, and a team of trained humans to keep it running smoothly. It’s not perfect, but by the standards of the internet, Facebook’s system is a biplane, while Twitter’s is a dirigible.

“As long as abuse is not core to a product, as long as it’s seen as a cost center instead of table stakes, then it’s always going to be a subpar experience,” Susan told me.

Thankfully, the vast majority of users aren’t victims of the kind of harassment faced by the women I talked to for this story. But harassment is still a big threat to the long-term health of Twitter’s platform. The issue isn’t just that Twitter will lose the harassment victims themselves as users. The users who enjoy those victims’ work — and some of the victims have significant followings — will also miss out. More importantly, when trolls succeed in driving prominent women off Twitter, it has a toxic effect on the culture of the platform as a whole, making it a little less welcoming for women and minorities in general.

Anita Sarkeesian, founder of Feminist Frequency, a nonprofit organization that looks at pop culture from a feminist perspective, thinks about this a lot. In the earlier days of online harassment, or when problems arose with her site, her only recourse was often to reach out to people she knew at these platforms, many of whom she had gotten to know precisely because she reported harassment so frequently.

But if you’re not best friends with someone at Twitter, a famous Hollywood actor, or the president of the United States, how good is the site at dealing with abuse complaints?

Building resources that can better respond to bystander reports, or reports from non-elite Twitter users, is essential to preserving the democratic function of the platform.

One easy — though expensive — step is simply to hire more people to respond to harassment reports, verify users, and focus on breaking up troll communities. But for more boots on the ground to be a practical solution, those humans need to be assisted by algorithms that sort and prioritize the flood of incoming reports. This is nothing new: Patented systems for this kind of machine-assisted moderation have existed for years. Right now, however, it’s unclear precisely what human or software resources Twitter is using to deal with the problem.
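
To make the idea concrete: a triage algorithm might score incoming abuse reports so that human reviewers see the most urgent ones first. This is a minimal sketch; every signal, field name, and weight is an assumption for illustration, not Twitter’s actual system.

```python
def triage_score(report: dict) -> float:
    """Rank an abuse report for human review. All heuristics here are hypothetical."""
    score = 0.0
    score += 10.0 * report["duplicate_reports"]    # dogpiles generate many reports on one tweet
    if report["reported_account_age_days"] < 7:
        score += 5.0                               # throwaway accounts are a common troll pattern
    if report["contains_threat_language"]:
        score += 20.0                              # possible direct threats get reviewed first
    score += 3.0 * report["reporter_accuracy"]     # weight reporters with a good track record
    return score

# Human reviewers work the queue from the highest score down.
reports = [
    {"duplicate_reports": 40, "reported_account_age_days": 2,
     "contains_threat_language": True, "reporter_accuracy": 0.9},
    {"duplicate_reports": 1, "reported_account_age_days": 900,
     "contains_threat_language": False, "reporter_accuracy": 0.5},
]
queue = sorted(reports, key=triage_score, reverse=True)
```

The point is not the particular weights but the division of labor: software orders the queue, humans make the judgment calls.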

Creating a better system to verify users for blue check marks is also essential — and it could help everyone, not just celebrities and the harassment “elite.” As of late this summer, only 0.061 percent of total daily active users carried the mark. And while that’s better than before, if the site is verifying users at a rate of fewer than 300 a day, the feature won’t reach the masses anytime soon: at that pace, verifying even 1 million accounts would take more than nine years.

Twitter could provide users with more powerful anti-harassment tools

Even if Twitter doesn’t beef up its behind-the-scenes resources, at the very least it could offer better anti-harassment features directly to users, according to the harassment victims I talked to.

“Right now there’s only one tier available for filtering,” says Susan, who dismissed the new “quality” filter as inadequate. “You can either have the deluge or you can have a sanitized little community.”

Sarkeesian, who also consults with Twitter as part of the company’s new Trust and Safety Council, thinks both user- and content-based tools are the way forward to combat online harassment. She suggests the possibility of more content-based tools that allow you to mute conversations or threads, working in conjunction with filters for the users themselves.

“The question of users versus content — I think you have to do both,” she says. “There are people who skirt the terms of service, who are harassing and creating discord, but because they’re inside the terms of service, no action can be taken on them.” Sarkeesian gives the example of one of her harassers, who might tweet simply in reply to a new video that she “looks stupid with makeup on.”

“That specific tweet is not against the terms of service,” she explains, “but he has an army of people who follow him that all hate me, so my Twitter feed gets flooded in reply, and some of them are mean, some of them are threats, and some of them are against terms of service. And there are hundreds in a row — so that [original] user can send mobs my way or sow disinformation but never directly threaten me or use slurs against me — but their followers can.”

She also points out that even if you block people, they can still tag you in a tweet, so their legion of followers can easily attack. Simply removing the ability to @-mention someone who has blocked you would create a natural barrier to mob harassment, without any censorship.
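
That change would be mechanically simple. A minimal sketch, assuming a hypothetical mapping from each user to the set of accounts they’ve blocked:

```python
def deliverable_mentions(author, mentioned_users, blocks):
    """Return only the mentions that should generate notifications.

    blocks maps each user to the set of users they have blocked.
    The data model and names are illustrative, not Twitter's.
    """
    return [user for user in mentioned_users
            if author not in blocks.get(user, set())]

# A troll tags a user who has blocked him; the tag is silently dropped,
# so his followers never see the target surfaced in the thread.
blocks = {"anita": {"troll"}}
print(deliverable_mentions("troll", ["anita", "some_friend"], blocks))  # ['some_friend']
```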

Renee Bracey Sherman, an abortion activist and co-author, with Sarkeesian and activist Jaclyn Friedman, of the online safety guide Speak Up & Stay Safe(r), points out that blocking users isn’t all a platform can do. “On Facebook, I have in my settings a list of words that are banned from my feed,” says Bracey Sherman. “So I can block ‘baby killer,’ or ‘you’re a murderer,’ and the post automatically won’t show up.”
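
The feature Bracey Sherman describes is, at bottom, a per-user mute list of phrases checked against each incoming post. A minimal sketch using her two example phrases (everything else is assumed; this is not Facebook’s implementation):

```python
BANNED_PHRASES = ["baby killer", "you're a murderer"]  # the user's personal mute list

def passes_keyword_filter(post_text, banned_phrases=BANNED_PHRASES):
    """Hide any post containing a muted phrase, matched case-insensitively."""
    text = post_text.lower()
    return not any(phrase in text for phrase in banned_phrases)

posts = ["Great talk yesterday!", "You're a murderer and everyone knows it"]
feed = [p for p in posts if passes_keyword_filter(p)]  # only the first post survives
```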

Many victims of harassment have turned to outside applications to do the heavy lifting between content and user blocking. Bracey Sherman suggests Block Together, a tool that lets you subscribe to other users’ block lists and set rules to automatically block brand-new accounts or accounts still using the anonymous egg avatar. “These are all stopgaps that we put in place,” she says. “They do help, but they’re not the end.”
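
Block Together’s rules are account-level heuristics rather than content filters. A rough sketch of the options described above, with assumed field names and a seven-day cutoff standing in for “new”:

```python
from datetime import datetime, timedelta

def should_block(account, shared_block_list,
                 block_new_accounts=True, block_default_avatars=True):
    """Approximate the rules described above; field names and thresholds are assumptions."""
    if account["id"] in shared_block_list:
        return True   # subscribed block list: block whomever others have blocked
    age = datetime.utcnow() - account["created_at"]
    if block_new_accounts and age < timedelta(days=7):
        return True   # brand-new accounts are disproportionately throwaways
    if block_default_avatars and account["has_default_avatar"]:
        return True   # the anonymous "egg" avatar
    return False

account = {"id": 12345, "created_at": datetime(2016, 10, 20), "has_default_avatar": True}
print(should_block(account, shared_block_list=set()))  # True: default 'egg' avatar
```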

The fact that users are turning to third parties for this kind of capability is a clear sign that Twitter is falling down on the job. Twitter has access to vast amounts of data about its users, which should allow the company to build a much more powerful version of what Block Together offers — and to make those tools prominent enough that users actually engage with them.

In its early form, Twitter was a remarkable and world-changing tool for communication. But failing to get its harassment problem under control could put its future in doubt.

Kate Klonick is a legal academic and resident fellow at Yale Law School’s Information Society Project. She writes about technology, psychology, and the law and is currently working on a project about content moderation.
