Low Profile

The use of personal data to advance political causes has never had as high a profile as it does now, thanks mainly to Brexit and the lurid tales of data manipulation usually bundled under the vague heading of the ‘Cambridge Analytica scandal’. Through the efforts of certain journalists, the narrative is now fixed: Cambridge Analytica stole personal data from Facebook and used it to manipulate credulous voters to win the Brexit vote. It doesn’t matter that this didn’t happen (if you don’t believe me, read the ICO’s final monetary penalty on Facebook and their report into the political analytics investigation); this is what most people believe. When I ask people what they think Cambridge Analytica did, they usually don’t know or point to allegations that nobody has been able to prove, and when I tell them that CA didn’t work on the Brexit referendum, they often tell me to read something (Brittany Kaiser’s supposedly revelatory emails, for example) that they clearly haven’t read themselves. One of the most depressing things about all this is the number of supposedly intelligent people who rail against fake news, when they are as guilty of spreading it as anyone.

Nevertheless, if there is a good thing to come out of all this nonsense, it could be better scrutiny of how political parties and campaigns use personal data. The ICO says it has carried out audits of the major parties, though so far, nothing has come to light about what they’ve found. In the meantime, journalists have definitely started to look at political processing in more detail. An interesting example emerged today with Rowland Manthorpe’s story on Sky News of the Liberal Democrats’ use of profiling to understand voters. Using subject access, Manthorpe saw the wide range of different factors gathered and used by the LibDems to predict his likely voting intentions, and therefore inform whether and how they might approach him.

It’s very tempting to say ‘so what?’ Any party that claims that they don’t do this, using data gleaned from Experian and other data brokers, is almost certainly lying. To make out that the LibDems are doing something weird and creepy when it’s standard political practice is perhaps unfair. I did a subject access request to the Conservative Party earlier in the year, and I found an equally large amount of information – the Tories think that I have kids, read the Independent and was aged between 26 and 35 in 2017, but have now moved up to the 36 – 45 age bracket. If you’ve seen me recently, you may wish to pause until you stop laughing. They’ve estimated my personal and household income and when I finished full-time education, and classify my household as “forward-thinking younger families who sought affordable homes in good suburbs which they may now be out-growing”. They know every time I have voted since 2014, although not who for.

What’s interesting about all of this is whether any of it is lawful. First off, it’s not transparent. The political parties have privacy policies that allude to some of this profiling but if you don’t support or vote for a party or a campaign, what reason would you ever have to read that policy? I am never going to vote Tory, so why would I look at the bit of their privacy policy that says that they’re going to buy my data from Experian in order to profile me, even if that section exists? And what of Experian, who have happily sold my data to the Tories – what transparency from them? Long story short, I think the transparency aspect of political profiling is fatal to its lawfulness. We don’t know this is happening, and the parties do very little proactively to communicate to voters that it’s going on.

Parking that, it’s worth considering the other aspects of GDPR and the Data Protection Act 2018 which are relevant to this question. To process any personal data, an organisation must have a lawful basis from Article 6 of the GDPR to do so. Several are automatically off the table for this kind of profiling – consent (because they haven’t asked), contract (there isn’t one), vital interests (nobody will die if the Tories don’t incorrectly guess that I have kids) and legal obligation are all gone. This leaves two – necessary for a task carried out in the public interest or necessary for a legitimate interest. Neither of these is automatically available. A task carried out in the public interest has to have some kind of statutory underpinning, which is apparently available via Section 8 of the DPA 2018, which specifies ‘an activity that supports or promotes democratic engagement‘ as a task carried out in the public interest. The explanatory notes to the DPA flesh this out:

The term “democratic engagement” is intended to cover a wide range of political activities inside and outside election periods, including but not limited to: democratic representation; communicating with electors and interested parties; surveying and opinion gathering, campaigning activities; activities to increase voter turnout; supporting the work of elected representatives, prospective candidates and official candidates; and fundraising to support any of these activities

In order to rely on what many people call ‘public task’, political parties have to satisfy themselves (and potentially the ICO or the courts) that their profiling fits this definition, and that the best way to, for example, communicate with electors is first to profile them. I’m not saying that it’s impossible to clear that hurdle – necessary doesn’t mean the only way, just the most appropriate and proportionate way, but it’s for the LibDems (and every other party) to show that they have thought about this and considered the alternatives. Because this processing is likely to have been carried out automatically (I presume that they don’t have crowds of artisan psephologists doing it by candlelight), this could mean that a Data Protection Impact Assessment is required. I’m not certain of this because I’m not sure whether the profiling would have a significant legal or other effect on the person, but if you read the ICO’s code of practice on political campaigning, they bend over backwards to argue the case for political advertising having that effect. In any case, there are other criteria in the European Data Protection Board’s guidance which might well lead to a mandatory DPIA (for example, large scale innovative techniques, or depending on the data used, large scale processing of special categories).

Of course, they may choose to rely on legitimate interests, which again requires work. They have to demonstrate that they have balanced their legitimate interest in understanding voters against the rights and freedoms of those voters. This must be *necessary*, and in my opinion, it is exceptionally difficult to make the case for legitimate interests where a person has not been informed of the processing.

Manthorpe’s story lays out another potential problem. The LibDems are creating special categories data (political opinion) and it’s not unknown for politicos to use profiling to infer other characteristics, like Zac Goldsmith’s apparent attempts to infer ethnicity from surnames in the 2016 London Mayoral Election. The use of special categories is technically prohibited, but one of the exemptions is the substantial public interest. The LibDems would have to demonstrate that it is in the substantial public interest for them to process the data, and as before, that it is necessary for them to process data in this way.

That isn’t enough on its own. The use of substantial public interest has to be underpinned by a specific legal authorisation, which can be found in the Schedules of the DPA 2018. The only one that political parties can rely on is paragraph 22, which allows parties to process political opinions where necessary (that word again) for the purposes of the organisation’s political activities. The GDPR’s demand for accountability means that all of this decision-making will need to be documented, and every party will have to show that they considered the proportionality and necessity of their actions. At this point, I think the DPIA question is clearly answered – because the process leads to the creation by inference of political opinions, the party is processing sensitive data on a large scale, hitting two of the criteria set out by the EDPB guidance. Two criteria means that processing is high risk and requires a DPIA; the processing is unlawful if they cannot demonstrate having carried out one.
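Purely as an illustration of that screening step (mine, not anything published by the ICO, the EDPB or any party), here is a minimal sketch of the ‘two or more criteria means likely high risk’ rule of thumb from the EDPB’s DPIA guidance. The criteria names are paraphrased from that guidance, and the flags I’ve ticked for a hypothetical party’s voter profiling are my own assumptions based on the scenario described above.

```python
# Toy sketch of the EDPB DPIA screening rule of thumb: if processing meets two
# or more of the nine criteria in the guidance, treat it as likely high risk
# and carry out a DPIA first. Criteria names are paraphrased; the example flags
# are assumptions about a hypothetical party's voter profiling, not findings.

EDPB_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decisions_with_significant_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_technology",
    "prevents_exercise_of_rights_or_access_to_services",
}

def dpia_likely_required(criteria_met: set) -> bool:
    """Two or more EDPB criteria -> the processing is likely high risk."""
    return len(criteria_met & EDPB_CRITERIA) >= 2

# Hypothetical voter profiling: scoring individuals, inferring political
# opinions (special category data) and doing it across a whole electorate.
voter_profiling = {
    "evaluation_or_scoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
}

print(dpia_likely_required(voter_profiling))  # True -> do a DPIA before processing
```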

Of course, all of this only applies to the processing, and both the GDPR and DPA make clear that they have to stop processing the data if the person requests it, even if they’ve done all of the work I’ve described above. There are no exceptions to this. Moreover, if the party wants to send a text or an email to any person, none of this helps; GDPR and DPA may allow the profiling (I don’t believe any party will have implemented the above rigorously enough to satisfy the law), but they do nothing about the rules for direct marketing in PECR. Even if they satisfy the GDPR requirements for processing special categories, that doesn’t help at all with PECR’s flat demand for GDPR-style consent when emailing individual subscribers (i.e. people using their own email addresses).

The LibDems claimed to Manthorpe that their privacy policy cures all ills:

The party complies with all relevant UK and European data protection legislation. We take the GDPR principle of transparency very seriously and state the ways we may use personal data clearly within the privacy policy on our website.

I don’t accept this for a moment. I’m a Data Protection nerd and I don’t go on random organisations’ websites to read their privacy policies just in case they might apply to me. The fact that contacting millions of people to tell them that they’re being profiled would be punishingly expensive isn’t GDPR’s problem – the sense of entitlement that political parties feel about data and how they use it should be secondary to the law. But even if you accept their argument, the fact that all parties are likely to have a file on every voter isn’t in our interests, it’s in theirs. They should be under pressure to show that platitudes like the statement above are backed up by the rigour and evidence demanded by the legislation. This should not be a story about the LibDems; this should be seen as a window into what all political parties do, and feel entitled to do. I have no faith in the ICO to sort this out, but scrutiny of what’s going on is in all of our interests.

 

ADVERT: I’m running GDPR courses across the UK until the end of 2019. In 2020, I’ll be running new courses on the DPA, Law Enforcement and Data Protection and Data Protection by Design. Take a look at my website for more: www.2040training.co.uk 

Mistaken Identity

Over the past week, numerous excited stories have covered a talk given by James Pavur, an Oxford University researcher and Rhodes Scholar, at the Blackhat Convention in Las Vegas. With his girlfriend’s consent, Pavur made 150 subject access requests in her name. In what the BBC called a ‘privacy hack’ until they were shamed into changing the headline, some of those that replied failed to carry out any kind of ID check. Pavur’s pitch is that GDPR is inherently flawed, allowing easy access for identity thieves. This idea has already got the IT vendors circling, and outraged GDPR-denier Roslyn Layton used the story to describe GDPR as a “cybersecurity/identity theft nightmare”. Pavur’s slides are available on the Blackhat website, but so is a more detailed whitepaper written by himself and his girlfriend Casey Knerr, and anyone who has pontificated about the pair’s revelations should really take a look at it.

Much has been made of Pavur’s credentials as an Oxford man, but that doesn’t stop the 10-page document from containing errors and misconceptions. The authors claim that Marriott and British Airways have already been fined (they haven’t), and that there are only two reasons to refuse a subject access request (ignoring the existence of exemptions in the Data Protection Act 2018). They use ‘information commissioners’ as a term to describe regulators across Europe, and believe that the likely outcome of a controller rejecting a SAR from a suspiciously acting applicant would be ‘prosecution’. In the UK and most if not all EU countries, this is legally impossible. At the end, their standard SAR letter cites the Data Protection Act 1998, despite the fact that in context, any DPA is irrelevant and that particular one was repealed more than a year ago.

Such a list of clangers would be bad (though not necessarily unexpected) in a Register article, but despite presenting their case with a sheen of academic seriousness, Pavur and Knerr have some serious misconceptions about how GDPR works. It supposedly offers “unprecedented control” to the applicant, despite their experiment utilising a right that has existed in the UK since 1984. They claim GDPR represents a “sea change” in the way EU residents can control, restrict and understand the use of their personal information, even though most rights are limited in some way and are rooted firmly in what went before. They claim that “little attention has been paid to the possibility of request abuse”. I’ve been working on Data Protection since the authors were schoolchildren, and I can say for certain that this claim is completely false. SARs being made by third parties, especially with malicious intent, has been a routine concern in the public and private sector for decades. Checking ID is instinctive and routine in many organisations, to the point of being restrictive in some places.

Other assertions suggest a lack of experience of how SARs actually work. Because of the perceived danger of twitchy regulators fining organisations for not immediately answering SARs, the authors conclude that “it is therefore fairly risky to fail to provide data in response to a SAR, even for a valid purpose”. This year, the ICO has had to enforce on high-profile organisations for failing to answer SARs (it didn’t fine any of them), and is itself happy to refuse SARs it receives from elderly troublemakers. SARs are routinely ignored and refused, but the authors imagine that nobody ever wants to say no for fear of entirely imaginary consequences.

Pavur and Knerr think that panicking controllers will make a mess of the ID check: “we hypothesized that organisations may be tempted to take shortcuts or be distracted by the scope and complexity of the request”. This ignores three factors. First, for many organisations, a SAR is nothing new, and the people dealing with it will have seen hundreds of SARs before. Second, the power advantage is with the controller, often a large organisation ranged against a single applicant (and in the UK, facing a regulator unlikely to act on the basis of one SAR complaint). Third, and most important, they don’t factor in the reality that the ID check takes place *outside* the month. The ICO says that until the ID check is made, the request is not valid and the clock is not ticking. A sense of panic when the request arrives – necessary for the authors’ scenario to work – will only be present in those with little experience, and if you’re telling me that people who don’t understand Data Protection tend to cock it up, I have breaking news about where bears shit.
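To make that timing point concrete, here is a throwaway sketch (my own, not the ICO’s) of how the clock works when ID is checked: the one-calendar-month response period is counted from the day identity is verified, not the day the request first lands. The dates and the simple ‘same day next month’ helper are illustrative assumptions only.

```python
# Illustrative only: the response period runs from the day the controller has
# verified the applicant's identity, not the day the request first arrives.
from datetime import date
import calendar

def one_month_later(d: date) -> date:
    """Same day next month, clamped to that month's last day (e.g. 31 Jan -> 28/29 Feb)."""
    year = d.year + d.month // 12
    month = d.month % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

request_received = date(2019, 8, 1)   # SAR arrives with no proof of identity
id_verified = date(2019, 8, 19)       # satisfactory ID finally provided

print(one_month_later(request_received))  # 2019-09-01 -- NOT the operative deadline
print(one_month_later(id_verified))       # 2019-09-19 -- clock runs from verification
```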

Another unrealistic idea is that by asking whether data has been inadvertently exposed in a breach (a notion written into the template request), the authors make the organisation afraid that the applicant has knowledge of some actual breach. “We hypothesised that such a belief might cause organisations to overlook identity verification abnormalities”. I can’t speak for every organisation, but in my experience, a breach heightens awareness of DP issues. Making the organisation think that the applicant has inside knowledge of a breach will make most people dot every ‘I’ and cross every ‘T’. Equally, by suggesting that ID be checked through the unlikely option of a secure online portal, the authors hope to make the organisation feel they’re running out of options, especially because they think the portal would have to be sourced within a month. Once again, this is the wrong way around. An applicant who wants to have their ID checked via such a method would either get a flat no, or the controller could sort it out first and then have the month to process the request.

A crucial part of the white paper is this statement: “No particularly rigorous methodology was employed to select organisations for this study”. Pavur and Knerr say that the 150 businesses operate mainly in the UK and US, the two countries they’re most familiar with. I’m going to stick my neck out and bet that the majority of the businesses who handed over the data without checking are US-based. Only two of the examples in the paper are definitely UK – a rail operator and a “major UK hotel chain”. Many of the examples are plainly US businesses (they cite them as Fortune 100 companies), and one of the most specific examples of sensitive data that they obtain is a Social Security Number, which must have come from a US institution of some kind.

If you tell me that a significant number of UK businesses, who have been dealing with SARs since 1984, don’t do proper ID checks, that’s a real concern. If you tell me that it’s mainly US companies, so what? Many US companies reject the application of GDPR out of hand, and I have some sympathy for their position, but it’s ridiculous to expect them to be applying unwelcome foreign legislation effectively. This is the risk that you take when you give your data to a US company that isn’t represented in the UK or EU. Pavur and Knerr haven’t released the names of the organisations that failed to check ID, and until they do, there’s not much in the paper to show that this is a problem in the UK, and a lot to suggest that it’s not.

The potential solutions they come up with are flawed. They say regulators should reassure organisations that they will not be prosecuted if they reject requests without ID, despite no evidence that any regulator says anything different (or indeed, has enforced in such circumstances). Their main recommendation for legislators is that government ID verification schemes should be used by all controllers to check the ID of SAR applicants. It’s true that there is no standardised ID check and controllers will act on a case-by-case basis, but that’s infinitely preferable to Dominic Cummings’ government knowing every time you exercise your data protection rights.

I have never run a training course covering SARs that doesn’t mention checking ID. At least in the UK, a request isn’t seen to be valid unless some form of ID has been presented. In the last month, two different data controllers (the Conservative Party and Trilateral Research) have insisted on seeing a driving licence or equivalent before processing my SAR, despite me applying from the email address they have on file. A few US controllers handling SARs in a sloppy manner isn’t a cause for great concern. It certainly doesn’t suggest significant flaws in the way GDPR is drafted.

For all my criticisms of the pair’s approach, they do admit that the white paper was “a cursory assessment”.  I don’t doubt their expertise in security, their good intentions or the truth of their ultimate message: checking ID is essential when dealing with SARs. The problem with the experiment is that it reads like what two clever people reckon subject access is like, rather than how it works in the real world. I’d strongly suggest that if they follow up on this first attempt with a more robust piece of research (which is hinted at in the white paper), they approach the subject with a more realistic and detailed understanding of how Data Protection actually works, and maybe get some advice from people with real SAR experience.

Mates’ Rates

A while ago, I noticed an FOI request sent to the Information Commissioner’s Office on the website What Do They Know? I always keep an eye on requests made to Wilmslow, but this one was especially intriguing. It asked about payments made by ICO to external suppliers and consultants where the work had not been put out to tender. Even the progress of the request became notable because the Information Commissioner was seemingly incapable of answering it. The original request was made on February 26th 2019, and the ICO didn’t answer it until July 9th, more than three months after the legal time limit. Twice, the ICO set itself a deadline by which it would definitely answer the request, and twice it failed to do so. Remember friends, the Information Commissioner is the regulator for FOI. They’re supposed to ensure that other public sector bodies answer their FOI requests, but they’re terrible at answering their own. It’s worth noting Liz Denham was almost certainly aware of the request, as there is specific mention of her private office being involved during the glacial march towards a reply.

SIDENOTE: I made an FOI about enforcement and monetary penalties to the ICO that was due on July 16th. The Senior Information Access Officer handling my request told me that he hoped to provide me with a reply by Friday, and an answer came there none. I wonder if they’ll sit on it like they did here.

Anyway, due to the busiest July I have ever had (take that, haters), I missed the fact that the payments request had finally been answered, apparently in full, with no use of exemptions. At first glance, it’s nothing to get excited about. The total amount is less than £240,000, and though the highest payment made to anyone is £58000, the explanation of why this work was not put out to tender doesn’t sound outrageous:

A project brief was developed and three suppliers were approached for quotes. The requirements were in two parts, the first part was research and the second part was delivery. The second part of the brief was tendered to the supplier who completed the first after the proposed next steps were evaluated by the Board and it was agreed to implement their proposals – meaning they were uniquely placed to deliver the second part.

A couple of items did leap out at me. David Smith is not a unique or distinctive name, but it’s hard to believe that the David Smith behind David Smith (DP) Ltd, which was paid £5152 for international engagement, isn’t the same David Smith who was until recently Deputy Information Commissioner. Apparently, Smith is “uniquely placed” to deliver international engagement on behalf of the ICO. One would think that the Commissioner’s pan-continental roadshow would provide all the engagement the office could require, but I suppose throwing a few grand in an old friend’s direction isn’t the worst thing the ICO has ever done.

Equally, I doubt the Simon Entwistle who received £5791 is a different Simon Entwistle to the Simon Entwistle who was until recently Deputy Information Commissioner. Apparently, he is uniquely placed to carry out ‘Executive Coaching’. Granted, Entwistle is an old ICO hand, originally appointed by Richard Thomas, but it’s rather odd that having retired, the organisation has to pay him to coach the ICO’s senior people. Elizabeth Denham was paid around £180,000 in 2018 – 2019, and every member of the executive team is within spitting distance of 100K. These are well-paid, experienced people – if they’re in these jobs, that should be because they already have the skills to do these jobs. If they don’t, why were they appointed?

The sum paid to Entwistle for coaching isn’t massive, but it’s not the only one. Two different amounts, totalling £17,968, were paid to a ‘Philip Halkett’ for executive coaching, a role which he was once again “uniquely placed” to carry out. I cannot say for certain who Halkett is, and I am happy to be corrected if I have got it wrong. However, I believe he is a former Deputy Minister in Canada’s Ministry of Forests, and is based in British Columbia, where he describes himself on LinkedIn as ‘semi retired’. There are literally hundreds if not thousands of people offering coaching in the UK, but since Denham became Commissioner, the ICO has paid just shy of £18000 to a man who has no website or company that I can find, whose sole contribution to the internet is a single retweet about Denham, and whose main qualification for the job appears to be that he comes from the same remote corner of Canada as she does.

Halkett isn’t the only Canadian to feel the benefit of the ICO’s munificence. The former Information Commissioner of Canada, Suzanne Legault, is “uniquely placed” to deliver the secretariat for the International Commissioner’s Conference that Denham has been nominated to organise. It’s probably a complete coincidence that Legault is Canadian, and it’s not like organising an international conference isn’t a thing that loads of organisations do all of the time all over the world.

Most intriguing is the work carried out by a British customer service guru, Mark Colgate. Colgate has been paid £20000 to deliver “advice on development of service excellence programme and delivery of training to 500 staff”. If you visit Colgate’s website, you’ll find an uncharismatic man plugging some basic customer service ideas using gratingly clunky acronyms. Colgate sums up his philosophy as ‘Tofu’, which means that he is selling a fundamentally unappetising and artificial concept. I’m joking, ‘Tofu’ means ‘Take Ownership and Follow Up’. Another element of the Colgate Method is FAME, a concept that is so vapid and forced I can’t bear to reproduce it here.

Most regulators do not have customers. In particular, the ICO is not an ombudsman. It is not their job to give complainants redress or resolution. The ICO’s role is to ensure that controllers comply with the law – Article 57 of the GDPR states that the first task of the supervisory authority is to ‘monitor and enforce‘ the regulation. Handling complaints from members of the public is in there, but the public are not the ICO’s customers, any more than controllers are. The aim should not be to give either side what they want, but to ensure that controllers do those things that they are obliged to do, and that individuals’ rights are respected. A ‘customer service’ mentality is at best a distraction, and at worst, risks creating expectations that cannot be met. Many controllers have experience of ICO case officers who keep pushing a dead-end complaint because they clearly don’t want to give an angry or unreasonable complainant an answer they don’t like. If you swap it and make the controllers the customer, it’s at least as bad. The ICO has a long, shabby history of bending over backwards to appease ‘stakeholders’; the aforementioned David Smith was a big fan of describing the ICO as ‘enablers’ of business and innovation, rather than an organisation with a clear mission to enforce some specific laws.

But let’s assume that I’m wrong. Let’s assume that the ICO does need to spend thousands of pounds training its staff on customer service. What exactly is it about Mr Colgate’s brand of bargain basement Dale Carnegie that meant he had to be awarded this work without a tender process? Why, given the plethora of genuine customer service experts in the UK, was Mr Colgate “uniquely placed in respect of experience and expertise” to deliver this work, especially when what he offers is so similar to what so many others provide? Is there any clue in his CV? Mr Colgate’s current berth is as “Professor of Service Excellence” at the Peter B. Gustavson School of Business, which you can find in the fine city of Victoria. In Canada. Specifically, in British Columbia. About a ten minute drive from the Office of the Information and Privacy Commissioner for British Columbia, the last incumbent of which was a certain Elizabeth Denham. Fans of funny coincidences may also care to note that the current incumbent of that office is Mr Michael McEvoy, last seen running the ICO’s investigation into data analytics, a role that I do not believe was advertised externally.

At this point, you might be saying ‘so what?’ Denham has thrown some work to her mates both at home and abroad: is this so terrible? In my opinion, it really is. A good chunk of my work comprises a single day’s training, and in many cases, the client gets multiple quotes before giving me the work. I simply don’t believe the ICO’s claims that these people are the only possible candidates, especially as there was no competition or objective test. Public money should not be spent on a whim, especially not with the particular flavour of favouritism and self-indulgence that appears to be on show here. Bringing in your Canadian friends to provide luxury services that thousands of people in the UK are well-placed to provide shows lamentably poor judgement.

This is not the first time I have blogged about Denham’s terrible decisions. She did an advert for a commercial company. She enthusiastically endorsed a book she hadn’t read, making claims about the author which were not true. She made misleading claims in the media to get headlines and bragged on TV that she had used powers that actually don’t exist. She announced the Facebook fine prematurely and now faces accusations of bias and procedural unfairness. We haven’t had a decent Commissioner since Elizabeth France, but despite Richard Thomas’ over-caution and Chris Graham’s superficiality, both of them seemed able to do the job without the growing list of howlers for which Denham is responsible. Paradoxically, she is the most respected and popular of all the Commissioner’s incarnations and my complete lack of faith in her judgement makes me the bad guy, as usual. Nevertheless, questions need to be asked about what exactly is going on in Wilmslow, how decisions are being made, and how money is spent. There are a number of well-paid non-executive directors on the ICO board; I would be keen to know what they think of all this.

Quis custodiet ipsos custodes, and all that.

A Boy’s Best Friend is his Data

Just over a month ago, I enjoyed a series of bad-tempered Twitter exchanges with Benjamin Falk, Founder and “Chief Talker” of the personal data outfit Yo-Da. Falk has an interesting perspective on Data Protection. Instead of coming to DP through the traditional routes of information management, security, governance or the law, Falk is an ‘information economist’. He doesn’t see the subject as an issue of human rights, instead looking at it through the prism of economics. Because Data Protection is concerned with information, and there are other contexts where information is a commodity traded in a market, Falk has had the revelation that the processing of personal data is just another market, and this is the only way to understand it. Falk perceives this market as a ‘dumpster fire‘, and he alone has the solution. He has founded what he calls the “world’s first Personal Data agency” and hopes to lure people into signing up for an ill-defined service that he asserts will put them in control of their information. Somewhere along the way, money will be made.

Falk has some eye-catching ways to explain the ‘market’ he seeks to disrupt:

“personal data is best understood as a newspaper that we publish about ourselves, whether we like it or not”.

Sometimes, he thinks personal data is “a really really boring autobiography, it’s just information about yourself written down somewhere”.

Falk’s view of data subjects is that they are “an author with an information rights management problem”.

I can imagine that if a person had, say, an AI program and they had to persuade gullible investors to buy into a wheeze that hadn’t really been worked out properly, this kind of eye-catching guff might get them going. However, it’s nonsense. Most personal data isn’t published or created for public consumption like a newspaper (indeed, many people have laboured for years under the misapprehension that personal data in the public domain isn’t personal data at all). Equally, a lot of personal data doesn’t fit Falk’s favourite analogy of a ‘robo-biography‘ because it is generated by people and not machines. You can’t simplify a million different controllers doing things for themselves in a million different ways. It’s complicated.

As Yo-Da’s website says, users will be able to “discover, fetch, control and erase” personal data from “any company operating in Europe”. However, the first thing you see on Yo-Da’s homepage is the following: “who earns from your personal data? everyone but you”. Falk also wants people to monetise their data. There’s not much detail, however, making me wonder whether Falk has got this far by saying ‘AI’ a lot without a clear idea of how that will translate to the power he claims to put in subjects’ hands. After all, in order to work, Yo-Da needs to be able to successfully obtain and amalgamate data held on millions of different systems, in thousands of formats, processed for a host of different reasons by a multitude of businesses as varied as Apple, Tesco, 2040 Training and the Friendly Furry Shop. I’d like to see this in action.

The idea of individuals monetising their data is common to survey platforms like YouGov and CitizenMe, while Paul-Olivier Dehaye has been touting the automation of SARs and other data rights for years. A mock-up of the Yo-Da app shows data obtained from Starbucks (including how many coffees the user has drunk) with a suggestion at the bottom that this data be combined with information from Transport for London or NHS England. Rather than selling data at scale like most data brokers, Yo-Da seems to encourage subjects to obtain vast quantities of data about themselves (the app shows a user having obtained data from 1200 companies) to create a “rich personal database” which presumably the user will then sell with Yo-Da’s assistance.

Falk’s ambitions are not limited to data monetisation. Yo-Da, he claims, will stop subjects’ rights from being infringed. The ‘dumpster fire’ of poor data protection practice in the UK is the fault of greedy consultants like me who ensure that our clients don’t actually comply with the law so we can keep charging them. Like Hercules diverting rivers to sluice the Augean stables, Falk’s tweets demonstrated a belief that Yo-Da will wipe Data Protection clean. Solving DP’s many problems is “easy to do”, he says; it’s just that nobody has actually tried (take that, Liz Denham). I don’t see how, but even if you believe that Yo-Da’s data jumble sale could change the face of DP forever, it can surely only do so if millions of people participate. Even if thousands of people sign up, Yo-Da will barely scratch the surface of how much personal data is processed across the UK and the EU. People will still be obliged to provide their data to pay their taxes, claim their benefits, use the NHS, set up a bank account or a mortgage, or be employed. The ability to get a slice of your data (it won’t be all of it) and possibly hawk it to dodgy data brokers (about the only people who I can imagine might buy it) won’t change that, and would do nothing to stop DeepMind, lost discs, the Met Police’s use of facial recognition or hospitals letting TV companies film vulnerable women without consent.

Moreover, just imagine how Yo-Da could go wrong if it actually works. At the moment, the fact that the different aspects of your life are often held in silos is wholly to your advantage from a data protection perspective. Capitalism is trying to connect the various loose ends of your life, but there are limits. I’m a middle-aged man with middling health, and the NHS doesn’t know how often I drink coffee at Starbucks, or how regularly I get the Tube in London rather than using a TfL bike (I would like to confirm to my GP that I never drink coffee in Starbucks and I have only used the Tube once this year on my many visits to the capital). But what else could be added? Could Yo-Da include how many orders from Beers of Europe I make? How often I go to SoLita for a burger? Yo-Da is selling a seductive idea – one might almost paraphrase it as ‘take back control’, but it probably contains the same risk of unintended consequences as that rancid propaganda. Falk positions his company as the saviour of privacy rights, but he’s encouraging people to conspire in their own exploitation by creating an intrusive and potentially prejudicial data cocktail and then flogging it to the highest bidder.

I’m ignoring the practical problem that the key to driving his plan is subject access requests, and SARs rarely provide a seamless, rich repository of information, ready to be amalgamated and exploited. SAR disclosures are often messy and incomplete, a patchwork left behind by the removal of third parties and exempt data, and often delivered in PDFs. Only data supplied direct to the controller by the subject or obtained under observation has to be supplied in a portable form. There are legitimate reasons to refuse requests altogether. Falk has asserted repeatedly that “ownership and rights mean the same thing”, and so subjects own their data, but this won’t be any help to his business model. Subjects own the copy of the data that they receive from their SAR, but that doesn’t give them automatic access to any and all data held. They don’t own the data held by the controller. The promises of control and erasure made on the Yo-Da website are embarrassingly simplistic – you can’t object to a controller processing your data under contract or legal obligation, or ask them to erase the data. They can resist an erasure request if they need to establish, exercise or defend a legal claim. Only someone who doesn’t understand how limited the GDPR rights of objection are would make the grandiose claim that “Yo-Da… lets you control who processes your information”. No, it doesn’t. It never will, because the GDPR doesn’t do that.

I think Falk’s claims are hype and his understanding of data protection is fundamentally flawed. Moreover, I don’t trust him. During the period that I spent arguing with the Yo-Da Twitter account, it became clear that I wasn’t just dealing with one person. There were two distinct personalities, inverted versions of the dual identities in Hitchcock’s Psycho. The Norman Bates character – relentlessly polite no matter what the provocation, endlessly ingratiating – is fake, a bot unleashed by Falk to fool people into thinking they’re dealing with a real person. Mrs Bates – the bitter, angry and resentful persona that occasionally lashes out – is real, presumably Falk himself, unable to let the upbeat-to-the-point-of-being-deranged program do all of the talking. Falk called me a jerk for accusing him of being a bot when actually, he was just being “unswervingly polite”. In the end, he had to admit that I was right and that he was using a bot. Ethics is Data Protection’s flavour of the month, and I’m not sure that such duplicitous behaviour will fit in.

Despite the fact that Yo-Da hasn’t launched yet, the website mysteriously features testimonials from happy users, while one of the three case studies showing how the service works for happy Yo-Da customers is actually just Falk himself. Falk wants to charge people to use their DP rights. Somewhere in our bickering, either Falk or the bot told me that Yo-Da would be a monthly subscription based on what users can afford, but there’s no hint of that on the website. It’s the same model that Dehaye originally proposed for PersonalData.Io – just as GDPR makes personal data rights free in most cases, in come some chancers hoping to charge you for using them. And I have one last piece of evidence that when it comes to upholding data protection, in giving people “transparency into this secretive ecosystem”, Falk isn’t the champion of data rights he purports to be.

After five days of arguing and provoking whoever / whatever was running the Yo-Da account, on June 4th, I made a subject access request to the company via the Data Protection Officer’s email address on the Yo-Da website (i.e. the specific address they direct you to make SARs to). I explicitly ruled out any personal data processed on the public Twitter account – that is available to me already and besides, I’ve already seen it. I wanted to see any direct messages, emails or other correspondence generated by my spat with Falk and his bot. Of course, there may not be any data at all. It’s quite possible that Falk didn’t talk to anyone about me or what I was saying, but he could have done. Several times, I questioned the fundamentals of Falk’s interpretation and I also asked whether Trilateral Research, the consultancy he has engaged to be Yo-Da’s DPO, agreed with his views. I wouldn’t be surprised if Falk contacted them about what I was saying, or just complained to his colleagues about what a jerk I was.

However you slice it, the deadline for compliance has passed, and Yo-Da has not responded to my request. I have received no data, no confirmation that data is not held, no request for ID, not even an acknowledgement. Nothing, nada, zip. Benjamin Falk proclaims that he seeks to land a knock-out blow for data subjects through the use of the GDPR rights, but the vehicle for this glorious revolution can’t even be arsed to answer a simple SAR. I wondered before why Trilateral wanted to be associated with Falk’s hyperbolic nonsense, but now that he has coupled it with contempt for the law he claims to defend, I wonder if they’ll think again? In any case, everyone who receives one of Yo-Da’s SARs when the service launches knows what they can do.

Ignore it, you can.

 

A cure for blindness

The first time I read the GDPR properly, something leapt out at me. For years, the received wisdom about the subject access and other rights provided by the legislation was that they were ‘applicant blind’. You could ask the person for assistance in locating their data, but you could not ask them why they were asking. Even if you knew that the person wanted to wind you up, you had to ignore that. When I got to the GDPR articles about subject rights, it struck me that this was no longer the case.

The relevant text in the final version (Article 12.5) is as follows:

Where requests from a data subject are manifestly unfounded or excessive, in particular because of their repetitive character, the controller may either:

(a)  charge a reasonable fee taking into account the administrative costs of providing the information or communication or taking the action requested; or

(b)  refuse to act on the request

Looking at the foundation, the basis on which the request has been made, opens the door to the applicant’s motive. An unfounded request is one for which there is no legitimate basis, a request which is unwarranted. In many cases, you cannot come to a conclusion that a request is either ‘unfounded’ or ‘excessive’ without looking at the person, why they have asked and what they intend to do with the data. The word ‘manifestly’ sets a high threshold – it must very obviously be the case that the request is unfounded, but nevertheless, the words are there, and they must be there to allow the controller to refuse in some circumstances. If I’m wrong, tell me what those words are there for.

Believing that GDPR allows controllers to refuse requests because of the motives of the applicant often gets me into disagreements with other DP professionals. Perhaps because the ‘applicant blind’ idea is so basic to some people’s understanding of how Data Protection works, or because they disapprove of the idea, a lot of people disagree. Last year, a controversy started when anti-abortion campaigners in Dublin filmed pro-choice demonstrators, and someone on Twitter provided a template SAR for pro-choice people to use. The idea was to (in one Tweeter’s words) ‘swamp’ the anti-abortion campaign with SARs, even to show up and get yourself filmed solely so that you could make a SAR. More recently, pro-Remain campaigners, angry that they are receiving entirely legal election literature from the Brexit Party, suggested making SARs to the party to find out where their data had been sourced from. Virtually every time I pointed out that the data would have come from the electoral register, rendering the SAR pointless, they said they would do it anyway to annoy the Brexit Party and waste their time.

I support the idea of abortion without any hesitation, and I commend those who campaign in favour of the right to abortion. I am also what you might call a Hard Remainer – I wish we weren’t leaving the EU, and when we do, I would support a campaign to go back in on a Full Schengen, Join the Euro platform, partly because I think these things are good on balance, and partly because it would annoy people who voted Leave. Nevertheless, I think the anti-abortion campaign were perfectly within their rights to refuse SARs where they could identify a person’s Twitter comments saying that they intended to do a SAR to waste their time, and if the Brexit Party do the same now, I believe that this would be justified. I think GDPR allows for refusals of requests that are made for reasons other than concerns about personal data.

And if you don’t agree with me, you don’t agree with the Information Commissioner either.

For years, the failed FOI campaigner Alan Dransfield has been sending angry emails and complaints to various people at the Information Commissioner’s Office, usually late at night. I know this because as well as copying in various journalists, news organisations, and politicians, he also includes me. It’s hard to know what Dransfield hopes to achieve with these screeds, which blend an aggressive misreading of how the law works, defamatory accusations against ICO staff and RANDOM words in CAPITALS. Usually these emails come out of nowhere, but his most recent missive was in response to an email from the Information Commissioner, refusing to answer a subject access request he had made to them.

If you ever wanted an extreme case to test the limits of what is acceptable, it’s Dransfield. The ICO’s refusal says that since April 2016, Dransfield has sent them over 120 requests for information under the Data Protection Act 2018 (DPA 2018), the Freedom of Information Act 2000 (FOIA) and Environmental Information Regulations 2004 (EIR). In addition, the email contains this remarkable statement:

since May 2018 we have received in excess of 290 items of correspondence from you. Many of these communications have included unsubstantiated accusations of the ICO’s complicity in various crimes and have targeted members of ICO staff with the intention of causing distress

The ICO refusal points out that having previously refused his FOI and EIR requests as vexatious, they are now no longer even acknowledging them because they are about matters which have been dealt with (something which FOI plainly allows). They then go on to say this:

Your requests for information under Article 15 of the GDPR appear to be similarly motivated. We consider that these requests are not made to legitimately establish what information we hold and how we are handling your personal data, but part of a campaign to challenge the decisions that have already been concluded within due process

As well as copying me into his legally illiterate complaints, Dransfield sometimes emails me direct to call me a dickhead or spew out misogynistic and homophobic abuse, but it’s clear that ICO staff have it much worse than me. He’s a toxic character who thrives on causing discomfort and outrage. You might say that if ‘unfounded’ works on him, it’s only because he’s such an extreme case. But Dransfield is not alone. There are other vexatious, unpleasant people whose SARs will be made in the same vein of perpetuating a complaint or a campaign. Most importantly, look at the basis of the ICO’s refusal: we’re saying no because we don’t think you’re making this request for the right reasons. The ICO believes that an unfounded request is one made for the ‘wrong’ reasons.

Assuming this is correct (and obviously this is a rare case where I think the ICO has got it right), the next question is how far this goes. For years, the UK courts argued that using SARs to pursue litigation was an abuse of process – is that use of a SAR unfounded? I think that weaponised political SARs are unfounded, and even if you disagree, I don’t think you can tell me that it’s impossible. The net result of Dransfield’s adventures in FOI was establishing a principle that has been used to refuse many requests as vexatious – exactly the opposite of what he wanted. His campaign against the Commissioner may, ironically, have the same effect in GDPR.

The ICO rejects SARs they believe have been made for the wrong reasons. If they do this for themselves, there have to be circumstances where they will agree when other controllers do this. Pandora’s Box has been opened. Controllers who are dealing with vexatious applicants or orchestrated campaigns should think very seriously about whether denying a person their subject access right is an acceptable thing to do, but they should do so in the knowledge that the UK’s Data Protection regulator has already done it.