There are many groups of people on Twitter whom I wish I could block en masse. Off the top of my head, I would block everyone who is wearing shades in their avatar, anyone who lists their blocking policy in their profile, and anyone whose profile includes the phrases ‘SPORTING TEAM / NATIONALITY till I die’, ‘I want my country back’, or ‘thought leader’. I do not wish to ban any of these people from Twitter; I would just like never to encounter them. This is not a matter of right or wrong, censorship or human rights. Everyone has a right to speak, but they have to earn and maintain an audience. And sometimes, you can spot a bellend on sight.

In recent days, controversy has raged in a corner of the internet about the Block Bot, a program that users can deploy to automatically block certain people on Twitter; their current webpage explains more. The Block Bot allows one group of people (who I will call the BlockBotters) to block another group of people (who I will call the Blocked) whose views they don’t like or agree with. The Blocked don’t like it. This is all I can say for certain. Many of the Blocked are active on blogs and on Twitter, angry at being blocked, at the labels apparently attached to them, or both, and they plainly want to do something about it. Two popular avenues appear to be libel / defamation actions, and Data Protection.

The libel angle is easy to understand. Richard Dawkins was apparently listed as ‘racist’, ‘gross’, ‘rapeapologist’, ‘childabuseapologism’, ‘transphobia’, and ‘youradick’. Although everyone can surely agree on the last one on the basis that his name is Richard and he is a monumental dick, Dawkins may or may not be justified in suing for libel over some of the other labels. The problem here is that there’s no point haggling about whether the Block Bot has defamed anyone – unless one or more of the Blocked sue for libel and win, the online discussion of it goes nowhere and doesn’t stop the BlockBotters from running it.

I haven’t exhaustively researched it (the whole mess is exhausting), but I think that the Matthew Hopkins blog was the first place to raise the alternative possibility that the Block Bot breaches Data Protection. I’m not saying the blog’s author is completely wrong, but when I hear someone citing Britain’s “very strong Data Protection laws”, I wonder which country they live in. First things first: the UK Data Protection Act does not apply to the Block Bot unless one or more of the people determining the purposes for which personal data is being used does so in the UK. If the BlockBotters are outside the UK but within the EEA, another country’s version of the EU Data Protection Directive will likely apply, but the UK one won’t. If none of the BlockBotters are based in the UK, game over: the UK Information Commissioner will not bite. Conversely, it doesn’t matter if the data is stored or processed outside the UK (or even outside Europe), as long as some or all of the decision-makers are in the UK. If they’re not, the DPA argument is dead. Move on.

There is the perennial issue that much / all of the material used by the Block Bot is in the public domain, but as I have said before, public data is still personal data if it identifies a living individual. If you disagree, show me the section in the Data Protection Act that says public domain data is exempt from the DPA. You won’t find it, because it isn’t there. More on that topic here.

Assuming there is a UK connection, there are two questions:

  1. Can you run an auto-blocking blacklist that blocks large swathes of Twitter users at a stroke without breaching the DPA?
  2. Have the BlockBotters done that, and if people complain to the ICO, will they do anything?

To the first question: yes, you can run a Twitter blocklist in the UK. Anyone who thinks you can’t is welcome to show me the section or DP principle that says so. Certain types of discrimination are illegal, and certain types of blacklists used for discriminatory purposes are therefore also illegal. The word ‘blacklist’ has been used by those criticising the Block Bot, and depending on what you mean by it, that activity can be very difficult to do legally. If by blacklisting you mean ‘a secret list used to unfairly discriminate against and disadvantage the people on it’, then I agree that blacklisting is almost certainly illegal in nearly every context. But if you mean ‘a list that prevents people from doing something that they want to do’, blacklisting isn’t illegal. What matters here is what the Block Bot does. The Block Bot is not secret, which automatically makes it less likely to breach the DPA in principle. Moreover, making a list of people you dislike or object to and then making decisions about them isn’t a DPA breach: it happens all the time.

Many newspapers last weekend reported the story of Albert Carter, an 80-year-old who has been banned from every Sainsbury’s in the UK, after he collided with a shopper on his mobility scooter. You may argue with the morality of exiling Mr Carter to another supermarket, but a Sainsbury’s store is private property, and if Sainsbury’s want to ban him, they can. He has been blacklisted, his personal data has been processed in order to effect this ban, and the Data Protection Act has not been breached in the process. Many councils and other organisations run warning marker or flag systems – significant actions are taken as a result. Many pubs and clubs exclude punters under Pubwatch or similar schemes, and not always because of hard factual information. Blacklists exist, and they can be made to work. Blacklist is an unattractive word for something that can be completely illegal, entirely justifiable, or something in between.

The Matthew Hopkins blog links the Block Bot to the ICO’s action on the Consulting Association construction blacklist. This is an unhelpful and misleading comparison. For one thing, the Block Bot lets people block you on Twitter; it doesn’t blight your life for decades by making you unemployable. More importantly, the ICO did not take action against the Consulting Association because it was a blacklist; they took action because it was an unjustifiably secret blacklist. Lack of transparency was far from the only problem – Phil Chamberlain and Dave Smith’s magnificent book Blacklisted (which I cannot recommend highly enough) describes a sordid, illegal process that could never have satisfied the DP principles. Much of the data – about individuals’ union or political beliefs – would be classed as sensitive data, and the Consulting Association could not identify an appropriate DP condition for using such data. Much of the data was excessive or irrelevant for determining whether people were suitable for work, especially as individuals were prevented from working purely because they were involved in union activities or had made complaints about health and safety. It was impossible to legally run the Consulting Association blacklist, but running a list that excludes or bans people from certain activities is not a breach of the DPA.

The organisers of any Twitter block-list are processing personal data (the names of the blocked and any associated characterisations), so they must notify the Information Commissioner that they are doing so. They must inform individuals that they are on the list (because there is no exemption from this), correct inaccuracies (by which I mean blocking me when they meant to block you), set out a retention policy, answer subject access requests and keep data secure. Arguably they also need an appeals process, and a review process to ensure that the reason for the block is still valid.

There is an argument that allowing people to block strangers en masse is unfair in the dictionary sense of the word, but Twitter already allows blocking, so if the Block Bot breaches the DPA because it facilitates blocking, so does Twitter. A blocklist that does not have some clear, coherent criteria for why people are blocked might be operating unfairly, but anyone who thinks the ICO is going to adjudicate on this part of the process doesn’t really know the ICO.

It’s possible that the Block Bot’s organisers have breached the DPA by how they set it up. They haven’t notified the Commissioner (as far as I can see) and they aren’t exempt from notification, so that’s a criminal breach if they’re based in the UK. I can’t be sure whether the Blocked receive a direct notification that they’re on the blocked list, but if they don’t, that’s also a breach of the first principle. However, neither of these breaches kills the Block Bot. The ICO’s prosecution record for non-notification is haphazard – MPs, MEPs, elected members and others haven’t notified and the ICO has done nothing. Even when non-notification is brought to the ICO’s attention, they often just write to the organisation concerned and tell them to notify – if, that is, the ICO can find the organisers.

The same is true for a lack of fair processing (i.e. not telling people they are blocked). The most likely outcome of a complaint about the lack of a fair processing notice is that the ICO will tell the BlockBotters to inform the Blocked that they have been blocked. I think publishing the list of the Blocked online is unfair and excessive, breaching the first and third principles. The Block Bot would be more DPA compliant if it did not include this element of public naming and shaming. However, the data is not sensitive (in the Data Protection sense of the word), and given that the ICO has a track record of enforcing almost exclusively on security and surveillance, a decision to take action here is so far from the ICO comfort zone, it’s inconceivable.

I’m certain that falsely or libellously labelling people (if that happened / is happening) would make the Block Bot unfair. If someone successfully sued the Block Bot organisers for libel, that would make it easier to argue that the Block Bot breached the first principle requirement for lawfulness. One could even argue that falsely accusing someone of rape apologism (if indeed the allegation can be proven to be false) is a breach of the fourth Data Protection principle on accuracy. However, there are two problems. Firstly, if the ICO sees successful libel actions, they will use that as an excuse not to act rather than a reason to act, because another remedy already exists. More importantly, the Data Protection Act explicitly recognises expressions of opinion as personal data that can be recorded and processed, rather than something that is forbidden.

It’s one thing to expect the ICO to decide on factual inaccuracy, and there are a handful of enforcement actions based on that. It’s quite another to ask the ICO to decide what is an accurate opinion. I don’t think that Richard Dawkins is a rape apologist, but equally, I think at the very least some of his statements on rape have been extremely moronic, bordering on unpleasant, and I understand why others might think he is. And that’s assuming that I properly understand what rape apology is. How can the Information Commissioner be expected to decide which opinion is right?

If I said that Richard Dawkins was a war criminal, this would plainly be untrue because he has never been involved in prosecuting a war. But if I said that Tony Blair was a war criminal, you might not agree, but could you say that it is factually inaccurate? More importantly, if I created a Twitter Blocklist of Notorious War Criminals and included Blair, George W. Bush, Dick Cheney, Donald Rumsfeld and so on, is it remotely likely that a case officer in Wilmslow is going to make a Compliance Unlikely Assessment because they’ve decided that these men aren’t notorious war criminals? And more importantly, is the ICO willing to enforce their decision using an Enforcement Notice, or issue me with a Civil Monetary Penalty?

Give me a break.

And this is where we are with Block Bot complaints about Data Protection. Readers who have made it this far are welcome to disagree with my views on whether Twitter blacklists breach the DPA. But on one thing, I know I am right: the ICO will not enforce on the Block Bot, even if the UK DPA applies. The ICO has ignored inconvenient decisions of the UK Court of Appeal (Durant) and the European Court of Justice (Lindqvist). They routinely – and wrongly – claim that blogs are exempt from Data Protection because of the domestic purposes exemption. It’s obviously open to the Blocked to litigate – on libel, or even on the damage / distress caused by being insulted or blocked under the DPA. On this, I have no view and make no predictions. That’s an argument between the BlockBotters and the Blocked, if it ever happens. The half-baked DP advice flying around may have had an effect on the text on the Block Bot website already, and who knows, maybe legal fears will have an effect in the future. But if anyone thinks that the ICO will close down the Block Bot, or even force the BlockBotters to take anyone off the list, I am convinced they’re in for a disappointment, both because that’s not how Data Protection works, and more importantly, the ICO isn’t that bold or imaginative.


  1. Great read. Is there a possibility that the blocks violate the data protection rules on profiling, though?

    • There are much more specific rules about profiling in the proposed regulation, and I wouldn’t like to make a concrete statement either way on how they might apply. However, at the moment, I’m not convinced anything currently in force applies specifically enough.

  2. Hi, I run the servers for the block bot, not the service itself, and I rang the ICO about this. The helpline said that because the servers are in the US the DPA does not apply, but this is not clear from your blog post.

    There are a couple of UK people who can add to the list, I think – people are mostly anonymous. I can log into the server(s) from the UK to maintain them. All they do is send a tweet, though; the data is entered into Twitter, not into @TheBlockBot systems in any way. The server in the US reads these tweets, in fact all tweets to the account, and decides to take some recommendations for blocking and apply them. Some it just saves in the Storifys, which are also in the US and not a Block Bot system at all – effectively a blog that links to the tweets.

    It seems ridiculous to think this could possibly come under the DPA – all the people are doing is sending tweets on Twitter. Is Twitter subject to the DPA? The same information is stored on there. Even people’s block lists – I could create a public list on Twitter now called something unpleasant and add lots of accounts to it; is that against the DPA? Other services allow you to share your block list too. Is this against the DPA if you live in the UK?

    It seems that, just on the basis that this would be a major pain for the ICO, they would not enforce anything. Regardless, we are fortunately going to move to a US team for managing it. This has been planned for a long while and is not related to the baseless “libel” claims 🙂

    A couple of other things: we cannot notify people – it would be against Twitter’s ToS. It is not shaming anyone; the Storifys are not even indexed by Google, by design. They are linked to as reasons for blocking, to be transparent and fair, and are used in the appeal process, which does exist.

    • If decisions about processing data are made in the UK, it doesn’t matter where you store the data. Ringing the ICO helpline is not the same as understanding the Act, and having read your comment, one thing I am now convinced of is that you don’t understand it.

      I haven’t said that the Block Bot is ‘against the DPA’, just that every organisation based in the UK that decides to process personal data is subject to the Act, so if the Block Bot decisions are made by UK people, the Act applies. The worst thing about the DPA is the refusal of some people to understand it.

      • “Organisation”? The block bot is an app I coded on a personal server. Am I part of an organisation? Also, it really isn’t based in the UK, if it is an organisation at all. The people who run the service are based in the US and Australia – the admins. They get to decide who has access to add people to the list. Although it sounds like the people in the UK adding to it might bring it within the DPA, I can’t work out who is responsible, if anyone. I manage the server for them as I wrote the original app, but I stepped down as a person running it about a year ago. It will be interesting to see how me hosting the app, but not having access to add/remove/run it, affects whether it applies to me personally or not. Interesting point you make there.

        I will admit to certainly not understanding it, which is why I rang the helpline. I have emailed them as well, hoping for a definitive view on whether I need to register as a data controller. But now I’m even more confused than I was before. The answer from the ICO will hopefully be easy to understand!

        Thanks for the post, I’ll let you know when I get something back from them.

      • Thanks so much for writing this. It’s nice to get a more thoughtful treatment of the subject than what Sam Smith is ranting at us.

        I do not understand the DPA, and hopefully I will never have to subject myself to reading it. Obviously we’re going to take direction from the ICO and any consul in that regard.

        However, the whole situation seems strange to me because the decisions to send the information to Storify are made by blockers – but *anyone* using Storify is making such a decision. This issue appears to have very little to do with being part of the Block Bot – but with entering information into Storify?

        Storify recently changed it’s notification requirements because notifications from it’s service were (due to a few notorious obsessive storify trolls) amounting to harassment of those constantly receiving notifications.

        Any decisions that Storify makes regarding information displayed on their service completely trumps our “decisions about processing data”. For example, they have taken down tweets documenting threats to Sarkeesian that I’ve personally Storified because of their objectionable content.

        If someone has a problem with what is displayed in their Storify they can either ask one of the admins to make adjustments or petition Storify for their tweets to be deleted.

        But – whatever – you’re point is well-taken that it would be completely and utterly impractical for the ICO to do what some people seem to think it should do. The internet landscape would simply *look different* if that was how the law was applied.

        This is where the real issue is:

        “The Block Bot is not secret, which automatically makes it less likely to breach the DPA in principle.”


        “The Block Bot would be more DPA compliant if it did not include this element of public naming and shaming.”

        We can’t notify accounts directly that they have been put on BB because of Twitter TOS (nor would we want to because the point is to ignore accounts not to piss them off or create a confrontation) and we need to be transparent so we need to have that information available. However, public display of the BB list seems to annoy people too.

        So – here we are.

        PS: I hope you like my new Block Bot page revisions – I put so much qualifying language in there that I can’t read through it without sort of giggling to myself. And yeah – it’s true – I was planning on making those revisions a long time ago, I just never got to it. This is the price we pay for procrastination. Not that I think it would have slowed down Smith that much anyhow – but hey – day ending in “y”.

      • There are two points I was hoping people would take away – that the ICO isn’t bold enough to take action, but more importantly that the DPA applies. Data controllers who piggy-back onto US-based services like Storify or Twitter are responsible for squaring the legal difficulties their choices create.

      • Also – please excuse my cringe-worthy homophone replacements.

      • Understood – thanks again.
