Readers of this blog know that I have experience dealing with trolls and spammers on social media. One of the most popular posts on this blog is my how-to guide to preemptively blocking spammers using various Twitter clients (which is sadly in need of updating). I’ve also written on how to report suspicious emails.
Most notably, I’ve successfully helped get one user who personally threatened hundreds of people (including me) arrested by the police in Montreal – not once, but twice. (That case is still ongoing, and I may have more work ahead of me – including potentially testifying in court.)
Back in the heyday of that person’s serial Twitter spamming, some of us would literally receive hundreds of tweets in a row from this man. Usually his account would be disabled, but he’d return with another one within minutes. Even simply hitting the block button to keep your mentions column clean became tedious because of his tenacity.
At one point Daniel Pope and I considered building an automated piece of software (often known as a bot) to automatically block his accounts on behalf of other users. That way the first person to notice each new account could block him, and then the new account would be preemptively blocked for anyone else who chose to opt in to this service. Ultimately we decided it would be too much effort aimed at just one person. I was also concerned it might run afoul of Twitter’s terms of service, in which case the work creating it would be for naught. Soon he was arrested and the point became moot.
Early this year, something very similar to that proposed bot was actually built and deployed by an atheist in the UK. This week it got major publicity in the UK news media, which trumpeted it as a potential solution to Twitter’s ongoing harassment problem. I assume this has resulted in a flood of new users for the service.
I believe there are serious flaws in how this service is designed and operated that make it a poor solution for most Twitter users. The media, focused on the larger problem of harassment, are not covering these operational issues. I will detail them below.
The Harassment Problem
You’d have to be living under a rock to be unaware that women are routinely harassed online. The undeniable fact is that many women online and in the media are subjected to ridiculous torrents of abuse and threats via the Internet on a regular basis, for no real reason other than they are women. The media has thankfully picked up on this and made it an issue for public discussion of late.
Within the skeptic or atheist community there has been Rebecca Watson’s 2011 “elevator incident” and its fallout. Within the gaming community, Anita Sarkeesian was also subjected to harassment because of a Kickstarter she launched in 2012. And most recently, a successful effort to get Jane Austen put onto the British 10-pound note resulted in petition organizer Caroline Criado-Perez becoming the target for harassment, and the subsequent arrest of the perpetrator. (Suffice it to say there are many other examples).
That last incident has recently catapulted the issue into the UK media, including New Scientist and the BBC web site. The author of the BBC article, Paul Mason, also filed this BBC TV report for the program Newsnight, which features the bot I’m discussing in this post:
The Block Bot
As you saw in the video above, the harassment of Rebecca Watson has inspired a technical solution similar to what Daniel Pope and I considered building during the Mabus affair. Early this year an atheist in the UK named James Billingham (known online as ool0n) built and published The Block Bot. It has a Twitter account and a web site that explains its operation. The computer source code that operates the bot is freely available for others to examine, modify or use.
The accounts being blocked are arranged into three tiers, which I will get into later. But only the “worst of the worst” (or “Level 1”) accounts are reported as spammers using the Twitter API; the other two levels are merely blocked. If you already follow someone on Twitter, they are never blocked for you.
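To make that behavior concrete, here is a minimal sketch in Python of the tier logic. To be clear, this is my own illustration consistent with the published description, not the bot’s actual source; the account names, function names and data structures are all hypothetical.

```python
# My own illustration of the tier logic described above -- NOT the
# Block Bot's actual source. Level 1 accounts are blocked AND reported
# as spam via the Twitter API; Levels 2 and 3 are only blocked; and
# accounts a subscriber already follows are always left alone.

LEVEL_1 = {"worst_offender"}                 # "worst of the worst"
LEVEL_2 = LEVEL_1 | {"unpleasant_person"}    # includes all of Level 1
LEVEL_3 = LEVEL_2 | {"tedious_person"}       # includes Levels 1 and 2

def actions_for(chosen_level, following):
    """Return (action, target) pairs the bot would perform for a
    subscriber who opted in at chosen_level."""
    targets = {1: LEVEL_1, 2: LEVEL_2, 3: LEVEL_3}[chosen_level]
    actions = []
    for account in targets:
        if account in following:          # never block someone you follow
            continue
        if account in LEVEL_1:
            actions.append(("report_spam", account))   # block + spam report
        else:
            actions.append(("block", account))         # block only
    return actions

print(actions_for(2, following={"unpleasant_person"}))
# [('report_spam', 'worst_offender')]
```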
The Block Bot’s website explains how it works, how you can opt in and opt out of the service, and even lists exactly which Twitter accounts are in the various levels of the block list. The associated Twitter account regularly tweets updates on the bot’s behavior and so on. Between that and the fact that it is open source, this seems like a good level of transparency.
I have examined the source code of the bot, and without actually testing it myself, I can report that it does operate as described. There are no hidden features or undocumented quirks that I can find.
But There Are Problems…
Strong technical measures like this demand strong procedures around them, to guard against abuse. They also demand a deep understanding, by all involved, of the full scope of the measures and how to deal with them. Despite the aforementioned transparency, I see a number of unclear policies and issues with how this bot is being operated.
The net effect is this bot could easily behave in ways new users don’t expect, and it could be abused. These problems are going to be exacerbated by any influx of new users via the media attention. I’ve been observing the operation of the bot for several months, and I’ve seen evidence that these things are in fact already happening.
Problem 1: What Are The Rules?
This is the entire answer to the Block Bot’s FAQ question, “Who will be added to the block list?”:
The short answer is anyone that a blocker defines as block list worthy. The general rule is if you are the type that would find yourself banned on a blog on Freethoughtblogs.com, Skepchick.org or from the A+ forum then you will likely end up in the list…
The first sentence is circular, and the rest defers to the moderation standards of three other sites without linking to them. It’s not clear there’s any enforceable standard here at all. It’s clear as mud.
The core problem here is that this tool was developed for the specific needs of a very specific community (namely, those who identify with “Atheism+”). The operators of the bot therefore assume knowledge and attitudes on the part of the user base that the average Twitter user may not hold. Essentially, if you are good friends with Billingham and agree with him on most issues, the bot may well operate exactly the way you expect it to. This specificity of the bot to a particular community is completely glossed over in the BBC TV report.
Those not so familiar may be left wondering who is running the bot and what they are doing. That leads to the next question:
Problem 2: Who Is In Control?
The answer to the FAQ question “How are people added to the blacklist?” describes some elaborate formatting for tweets that can be sent to the bot to make it add names to its lists. I’ve spoken with users who read this answer and took it to mean that they could configure the bot for their own use with these tweets. That’s not the case at all.
Those commands are specifically reserved for an “authorized user” or “blocker” – a special user who is allowed to do this. That’s not everyone; those users are the administrators of the service. (There is a special configuration file in the code that stores the names of these people.)
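As I read the published code, the gatekeeping amounts to checking the sender against that configuration file before honoring any command. Here is a rough sketch; the “+add” command format and the names are invented for illustration and are not the bot’s actual syntax:

```python
# A rough sketch of the command gatekeeping -- the "+add" format here
# is invented for illustration; only the authorized-user check mirrors
# the behavior described above.

AUTHORIZED_BLOCKERS = {"ool0n", "aratina"}   # loaded from a config file

def handle_command_tweet(sender, text):
    """Act on a command tweet only if it came from an authorized blocker."""
    if sender not in AUTHORIZED_BLOCKERS:
        return None                # everyone else's "commands" are ignored
    parts = text.split()           # hypothetical format: "+add @name level"
    if len(parts) == 3 and parts[0] == "+add":
        return ("add", parts[1].lstrip("@"), int(parts[2]))
    return None

print(handle_command_tweet("ool0n", "+add @some_troll 2"))    # ('add', 'some_troll', 2)
print(handle_command_tweet("random_user", "+add @victim 1"))  # None
```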
For the operating bot, I could find no documented list of authorized blockers on the site. It’s clear that Billingham himself and a pseudonymous person who goes by the handle “Aratina Cage” (“a rat in a cage”) do much of the administration, judging from the Twitter chatter around the bot and the comments on the website. But a Twitter search shows that some other people have been using the commands to add accounts to the list (and occasionally have been overruled by Billingham).
Wondering about this, I asked Billingham on Twitter, and he volunteered that the current list of blockers includes @aratina, @Hyperdeath, @ool0n, @SpokesGay, @VitaBrevi and @Xanthe_Cat. I suggested it would be better practice to list these somewhere on the site.
And remember: because of the way the Twitter API and services like this work, when the operators decide to block someone, the actual block happens using your account credentials – pretty much exactly as if you had pressed the button yourself. Suppose the operators of The Block Bot selected a series of accounts to block in a pattern that looked suspicious to Twitter HQ. Twitter might suspend the application (i.e. turn off the block bot), or it might take action against you, since it was your account that did the blocking.
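For the curious, this is roughly what any third-party app does once you authorize it: it signs requests with your OAuth tokens and calls the blocks/create and users/report_spam endpoints of Twitter’s REST API v1.1 (the endpoint names as documented at the time of writing). The snippet below is a sketch with placeholder credentials, not the bot’s code:

```python
# Sketch of how an authorized third-party app acts on YOUR behalf: it
# signs requests with your OAuth tokens, so the block or spam report
# is attributed to your account. Credentials below are placeholders.
import requests
from requests_oauthlib import OAuth1

auth = OAuth1("APP_KEY", "APP_SECRET",
              "YOUR_ACCESS_TOKEN", "YOUR_ACCESS_SECRET")

def block(screen_name):
    # Same effect as you pressing the Block button yourself.
    return requests.post("https://api.twitter.com/1.1/blocks/create.json",
                         auth=auth, data={"screen_name": screen_name})

def report_spam(screen_name):
    # Blocks AND files a spam report, both attributed to your account.
    return requests.post("https://api.twitter.com/1.1/users/report_spam.json",
                         auth=auth, data={"screen_name": screen_name})
```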
Bottom line: the average user can’t easily find out who has the authority to direct the block bot, so it’s not clear how well any policies are being enforced. That carries real risk.
Problem 3: There Is No Audit Trail
I do computer security work in my day job, much of which has to do with the actions of users on an online service. We live and die by our audit trail, as it is often the only way to determine what is going on.
The Block Bot has very little in the way of an audit trail. Nothing is recorded in the back-end database to indicate who told it to do something, when, or why. Commands sent via Twitter do leave some scraps of evidence behind, but those can’t be totally trusted: a user could send a command to the bot, wait for it to be acted upon, then delete the tweet.
The lack of auditing means that someone could end up on the block list and there might be no good way to figure out why they were put there. To further exacerbate this, in order to not run afoul of Twitter’s anti-spamming rules, the block bot never notifies the people it is blocking directly. So the person affected would have no way to know about the mistake so they could call it to anyone’s attention. This combination is a recipe for disastrous abuse of the service.
(UPDATE 9:33pm h/t Jim Lippard) And it gets worse. If someone is removed from the block list, the bot cannot go around and unblock that user across the board, because it doesn’t know if a given subscriber had that user blocked on their own, or via the bot – again due to the lack of an audit trail. As a result, even if an effective appeal procedure is put in place, it can’t undo some of the damage done by the bot.
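For comparison, here is a minimal sketch of the kind of audit record that would address both problems: it captures who ordered each action, when and why, and it lets a later removal unblock only where the bot itself created the block. The schema is my own suggestion, not anything in The Block Bot’s code.

```python
# My own suggested audit record -- not anything in the Block Bot's
# code. It records who ordered each action, when and why, and lets a
# later removal unblock only where the bot itself created the block.
import sqlite3, time

db = sqlite3.connect("blockbot_audit.db")
db.execute("""CREATE TABLE IF NOT EXISTS audit (
    ts REAL, subscriber TEXT, target TEXT,
    action TEXT, ordered_by TEXT, reason TEXT)""")

def log_action(subscriber, target, action, ordered_by, reason):
    """Record every bot action with its origin and justification."""
    db.execute("INSERT INTO audit VALUES (?, ?, ?, ?, ?, ?)",
               (time.time(), subscriber, target, action, ordered_by, reason))
    db.commit()

def safe_to_unblock(subscriber, target):
    """True only if the bot itself blocked target for this subscriber."""
    return db.execute(
        "SELECT 1 FROM audit WHERE subscriber=? AND target=? AND action='block'",
        (subscriber, target)).fetchone() is not None
```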
Problem 4: What Do These Levels Mean?
As I mentioned before, the accounts being blocked are arranged into three levels of severity, described on the website as:
Level 1 is sparsely populated with “worst of the worst” trolls, plus impersonators and stalkers. Level 2 (which we recommend for general use) includes those in Level 1, plus a wider selection of deeply unpleasant people. Level 3 goes beyond The Block Bot’s main purpose, and expands the list to include those who aren’t straight out haters, but can be tedious and obnoxious.
As with the previous issue of who gets blocked, this text is entirely unclear to me. Who defines “deeply unpleasant” or “tedious and obnoxious”?
There is a much more verbose description that appears only when you visit the page where you actually sign in to the service, that currently reads as follows:
→ Level 1 blocking: this blocks only the worst of the worst. These are the really nasty ones.
Both “sides” across the Deep Rifts™ will hopefully agree these need to be blocked.
Accounts that spam extremely abusive messages to people with the intent only of hurting them with not a hint of “disagreement”.
D0x’ers who want to drop information on fellow atheists in order to scare them off the internet or have real life effects on their well-being.
Stalkers that create sock-accounts to inject themselves into your time line to get a response from you or imposters pretending to be you.
→ Level 2 blocking: these are the abusive subset of anti-feminists, MRAs, or all-round assholes who think nothing of tweeting their much loved photoshopped pictures, memes and other wonderful media directly into your timeline to get attention (Listen to Meee!!1!).
This level also includes the “parody” accounts, if you have better things to do with your life than “disagree” on Twitter with a parody of yourself that seems to have suffered a frontal lobotomy.
Level 2 blocking includes all members of level 1.
→ Level 3 blocking: these are the merely annoying and irritating Twitterers who trot out the A+ arguments to avoid at a moment’s notice, and show no signs of giving them up until you pry them from their cold, dead hands.
Given that is not a practical option, how about blocking them and avoiding tedious exchanges?
This is the 100% frozen peach option… These from time to time leap to level 1/2 so why take the risk?
Level 3 blocking includes all members of levels 1 and 2.
This is a bit clearer, but still quite confusing. It uses a number of terms that are not defined here (such as “Deep Rifts”, D0x, MRA and “frozen peach”) and puts scare quotes on others, which further muddies the matter. (Yes, I’m fully aware of what those terms mean, but is everyone?) Here again the tool suffers from having been coded for a specific community, for whom this text probably makes more sense.
But I feel a general Twitter user will be confused here and probably make the wrong choice. I know I am not familiar with the norms of Atheism+, and I can’t fully interpret the above text. For instance, what do they mean by parody accounts? Some of the most entertaining accounts on Twitter are parodies – surely they don’t mean those? The jargon does not help.
Also, do note this comment near the bottom:
These from time to time leap to level 1/2 so why take the risk?
The implication is that users of the bot would be best served by blocking all three levels. (That will become important later).
Problem 5: Blocks Have Consequences
Blocking and reporting for spam on Twitter absolutely have consequences for the reported account including potential suspension. The Twitter help on spam clearly indicates that suspension can result from reporting. In our campaign against Mabus’s spams, we definitely saw an effect on his accounts from people hitting block or report.
Last year, when Twitter launched an aggressive anti-spam effort, a number of atheists whose accounts were erroneously suspended decided they had been targeted in some sort of malicious campaign. As I wrote in that blog post, Twitter had recently announced on its own blog new automated technical measures against spammers, which, as I explained at the time, were what had caught the affected atheists. Twitter also wrote in that post:
You can help out, too, by reporting and blocking spammers you encounter on Twitter.
This is further evidence that reporting and blocking feeds into Twitter’s automated algorithms. Twitter’s own response to the recent related #ReportAbuse campaign also mentioned automated algorithms:
While manually reviewing every Tweet is not possible due to Twitter’s global reach and level of activity, we use both automated and manual systems to evaluate reports of users potentially violating our Twitter Rules. These rules explicitly bar direct, specific threats of violence against others and use of our service for unlawful purposes, for which users may be suspended when reported.
In my day job, I work at a firm that sends millions of emails every week, and I work with anti-spam measures almost daily. Anti-spam algorithms are usually automated as much as possible, and never openly published, to avoid giving spammers a roadmap for evading them. This is just the way it is done; it is a constantly escalating battle of techniques with spammers.
And so, nobody outside Twitter knows precisely what factors are used to decide to suspend an abusive or spammy account and how they are weighted.
Billingham has repeatedly stated that he has carefully designed the block bot so that it will not result in Twitter automatically suspending an account that has been put in Level 2 or Level 3. People challenge him on this regularly on Twitter, and his answer is always the same. But because he does not work for Twitter, all he can be basing this on is his own limited experimentation.
The fact of the matter is Billingham cannot know for sure whether his efforts to avoid suspending anyone were successful. His bot may well be damaging the accounts of people reported in some internal Twitter scoring system, resulting in eventual action by Twitter.
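Purely to illustrate the concern – this is speculation on my part, not Twitter’s actual algorithm, whose factors and weights nobody outside Twitter knows – imagine an internal reputation score that weights report and block volume. Mass actions from a bot would move such a score exactly as organic complaints would:

```python
# Pure speculation to illustrate the concern -- NOT Twitter's actual
# algorithm, whose factors and weights are secret. The point: if some
# internal score weights report/block volume, a bot's mass actions
# would move it just like hundreds of independent complaints.
def hypothetical_suspension_score(reports, blocks, followers):
    raw = 5.0 * reports + 1.0 * blocks   # invented weights
    return raw / max(followers, 1)       # normalized by audience size

# 500 bot subscribers each blocking one account, with no spam reports:
print(hypothetical_suspension_score(reports=0, blocks=500, followers=2000))
# 0.25 -- indistinguishable, to such a system, from 500 organic blocks
```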
But these are just anonymous trolls that deserve it, right? Well, some are. That leads to my final point:
Problem 6: Inappropriate Blocks, Especially for Anyone Unfamiliar with Atheism+
The final and most important problem with the bot is the end result of all of the above, as implemented. The community needs of this very specific group (“Atheism+”), combined with the lack of auditing and the opacity of control, have resulted in some (in my opinion) very strange choices. I am familiar with many of the people in these communities; I know many of them in real life as well as online. Scanning the list of Level 2 and 3 blocks makes me repeatedly scratch my head in puzzlement.
I’m not going to get into exact names here, as I do not want to discuss the pros and cons of blocking particular people. That is not a productive line of discussion. The bottom line is that if the users of this bot (or any Twitter user) want to block these people, that is their right.
I will grant the point implied in the text quoted above: most people would probably agree that the accounts in Level 1 deserve that status. I scanned some of these accounts – some I had seen before – and they are pretty heinous offenders. No argument there.
However, just a casual scan down the list of Level 2 and Level 3 blocks reveals people, many of whom I know personally, who are deeply involved in the atheism, skepticism, secularism and humanism movements all around the world. They include:
A Research Fellow for a U.S. think-tank who is also deputy editor of a national magazine, and author of numerous books
A Consultant for Educational Programs for a U.S. national non-profit
A long-time volunteer for the same national non-profit
An organizer for a state-level skeptic group in the US
A past president of a state-level humanist group in the US
A former director of a state-level atheist group in the US
An Emmy and Golden Globe award winning comedian
A TED Fellow
Co-founder of a well known magazine of philosophy and author of several books
A philosopher, writer and critic who has authored several books
These are not anonymous trolls. They are not likely to be arrested anytime soon. Most of these people regularly speak at national conferences to audiences of several hundred to over a thousand people. Starting from the publicly available block list, you can click the names to go directly to their Twitter feeds. Doing so, I see little evidence that these people are attacking, threatening or spamming anyone.
Now, I’m not dumb; I know that many of these people have had very public disagreements with people allied with “Atheism+” who use this bot. And let me reiterate: if people want to block others they disagree with, that is their right.
But these well-respected people are being listed right alongside some vicious troll accounts, and not being clearly distinguished from them. And remember where I talked about consequences of blocking above? These people may suffer these consequences right alongside the vicious trolls.
None of the recent promotional items regarding The Block Bot (BBC, New Scientist) have made this distinction clear. In fact, Billingham smiles and agrees in the video (at 2:24 elapsed time) when the BBC journalist chooses only to block Level 1. Levels 2 and 3, although briefly seen on screen, are never described in the report. (Billingham says an explanation was filmed but cut).
Regardless, the report as run gives an impression that runs counter to the previously noted verbiage on the website which encourages users to go all the way to level 3. On Thursday, a short video update to the report ran on BBC where Gavin Esler and Paul Mason discuss the three levels and how some people feel they’ve been unfairly included in the level 3 list.
I would also point out that, as I write this, Level 1 contains 85 names while Levels 2 and 3 contain a total of 566 names. If vicious, harassing trolls are the true target of this bot, why are they a comparative minority of the names listed?
One consequence of blocking that I didn’t mention earlier is the loss of discovery through Twitter retweets. If I like some content and retweet it to my followers, they are exposed not only to that content but to the identity of the user I retweeted. It looks like this on the web:
Suppose you don’t yet follow @WhatsTheHarm but you do follow @VirtualSkeptics, and you see the above. That’s a great opportunity to learn about a useful Twitter feed you might want to follow. But if one of the bot operators had decided to Level 3 block @WhatsTheHarm because of some disagreement, you’d never get this opportunity. And it could be due to a disagreement of which there is no record and which you were not party to. That’s a loss of a powerful feature under very poor circumstances.
Conclusion
I cannot recommend this online tool for anyone who is not already very closely allied with the Atheism+ community and/or personal friends with Billingham, “Aratina Cage” or the other blockers. It is not suitable for general use, and I would recommend Twitter users avoid it.
If you have already used the service, because of the opportunity for abuse I would strongly recommend you go to the app settings page on Twitter and revoke its access to your account. If you must use it, only enable Level 1 blocking.
Problems that need attention include:
Improve the online documentation, remove the jargon and slang
Clearly differentiate the blocking levels on the site, and segregate the name lists so that people blocked on mere preference are not listed on the same page as vicious troll accounts
Consider removing the current Level 3 blocks entirely, or at least time-limiting them to reduce their effect
Document who the administrators of the bot are and provide ways to contact them (including ways outside Twitter)
Create clear procedures for adding/removing people from the bot and hold administrators to them
Have a clearly documented appeal procedure and procedures to deal with rogue administrators
Require administrators to supply a reason or piece of evidence (e.g. a tweet) for any add
Notify accounts that are being blocked so that they have an opportunity to appeal
Implement an audit log to support the above procedures
Because this code is open source, the opportunity exists for a third party to implement the above and create competing bots. There may be some value in doing so, particularly in doing a better job of targeting the “Level 1”-type trolls. (The Block Bot has currently found only 85 of these, which seems like a very low number when you consider the number of accounts on Twitter and the breadth of the harassment problem.)
Personal Comment
Just as I was eventually dubious of the value of an anti-Mabus bot, I’m dubious of the long-term prospects for The Block Bot (or derivatives thereof). I have three reasons for this pessimism.
First, Twitter has been notoriously fickle in changing policies related to various software which accesses their platform. This has clobbered skeptic projects before, like the anti-pseudoscience chatbot which got killed by spam rules. One of the reasons I’ve not written about this bot until now is that I’ve long been expecting Twitter to cut it off as a violation of their automation policies (specifically: mass unfollowing). It remains to be seen if the media attention causes Twitter to take action.
A second looming problem for The Block Bot is it may become a victim of its own success. If it attracts a large number of users based on this media coverage, it may not be able to keep up with the requests without running afoul of Twitter’s API rate limits. (Request: If you have read this far and plan to comment, incorporate the word “bananas” into your comment. Thank you.) Overcoming the rate limit problem would require a serious amount of engineering work to make the software more scalable. I’ve looked at the current code, and it runs with simple text files and a single thread. It is just not scalable to thousands of users and thousands of blocks without serious work.
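A back-of-envelope calculation shows the scale of the problem. Assuming, purely for illustration, a limit of 15 block calls per 15-minute window per user token (the real limits are Twitter’s and may differ), a single-threaded bot that serializes all its work faces numbers like these:

```python
# Back-of-envelope scaling sketch. The 15-calls-per-15-minutes limit
# is assumed purely for illustration; actual limits are Twitter's.
def hours_to_sync(subscribers, new_blocks,
                  calls_per_window=15, window_minutes=15):
    total_calls = subscribers * new_blocks    # one call per subscriber per block
    windows = total_calls / calls_per_window  # a single thread works serially,
    return windows * window_minutes / 60      # one rate-limit window at a time

# 1,000 new subscribers each needing 100 blocks applied:
print(f"{hours_to_sync(1000, 100):,.0f} hours")   # ~1,667 hours of serial work
```

Since the limits apply per user token, the work could in principle run in parallel across subscribers, but that is exactly the kind of engineering effort the current single-threaded, text-file design does not have.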
A third looming problem is the #ReportAbuse campaign itself. If Twitter is successful in responding to the requests to report abusive users, there soon may be no need for The Block Bot. I do hope Twitter continues to address this with real solutions – we could have used it during the Mabus battle.
Please Note – Unless you are a frequent commenter here, you are likely to be moderated at first. Off-topic and uncivil posts will not be released. Posts about specific people who are listed by The Block Bot will not be released. Posts in which it is clear you skipped to the bottom to comment without reading the post will not be released. Please play nice, we’re all adults.