Low Profile

The use of personal data to advance political causes has never had as high a profile as it does now, thanks mainly to Brexit and the lurid tales of data manipulation usually bundled under the vague heading of the ‘Cambridge Analytica scandal’. Thanks to the efforts of certain journalists, the narrative is now fixed. Cambridge Analytica stole personal data from Facebook and used it to manipulate credulous voters to win the Brexit vote. It doesn’t matter that this didn’t happen (if you don’t believe me, read the ICO’s final monetary penalty on Facebook and their report into the political analytics investigation), this is what most people believe. When I ask people what they think Cambridge Analytica did, they usually don’t know or point to allegations that nobody has been able to prove, and when I tell them that CA didn’t work on the Brexit referendum, they often tell me to read something (Brittany Kaiser’s supposedly revelatory emails, for example) that they clearly haven’t read themselves. One of the most depressing things about all this is the number of supposedly intelligent people who rail against fake news, when they are as guilty of spreading it as anyone.

Nevertheless, if there is a good thing to come out of all this nonsense, it could be better scrutiny of how political parties and campaigns use personal data. The ICO says it has carried out audits of the major parties, though so far, nothing has come to light about what they’ve found. In the meantime, journalists have definitely started to look at political processing in more detail. An interesting example emerged today with Rowland Manthorpe’s story on Sky News of the Liberal Democrats’ use of profiling to understand voters. Using subject access, Manthorpe saw the wide range of different factors gathered and used by the LibDems to predict his likely voting intentions, and therefore inform whether and how they might approach him.

It’s very tempting to say ‘so what?’. Any party that claims that they don’t do this, using data gleaned from Experian and other data brokers, is almost certainly lying. To make out that the LibDems are doing something weird and creepy when it’s standard political practice is perhaps unfair. I did a subject access request to the Conservative Party earlier in the year, and I found an equally large amount of information – the Tories think that I have kids, read the Independent and was aged between 26 and 35 in 2017, but have now moved up to the 36 – 45 age bracket. If you’ve seen me recently, you may wish to pause until you stop laughing. They’ve estimated my personal and household income and when I finished full-time education, and classify my household as “forward-thinking younger families who sought affordable homes in good suburbs which they may now be out-growing”. They know every time I have voted since 2014, although not who for.

What’s interesting about all of this is whether any of it is lawful. First off, it’s not transparent. The political parties have privacy policies that allude to some of this profiling but if you don’t support or vote for a party or a campaign, what reason would you ever have to read that policy? I am never going to vote Tory, so why would I look at the bit of their privacy policy that says that they’re going to buy my data from Experian in order to profile me, even if that section exists? And what of Experian, who have happily sold my data to the Tories – what transparency from them? Long story short, I think the transparency aspect of political profiling is fatal to its lawfulness. We don’t know this is happening, and the parties do very little proactively to communicate to voters that it’s going on.

Parking that, it’s worth considering the other aspects of GDPR and the Data Protection Act 2018 which are relevant to this question. To process any personal data, an organisation must have a lawful basis from Article 6 of the GDPR to do so. Several are automatically off the table for this kind of profiling – consent (because they haven’t asked), contract (there isn’t one), vital interests (nobody will die if the Tories don’t incorrectly guess that I have kids) and legal obligation are all gone. This leaves two – necessary for a task carried out in the public interest or necessary for a legitimate interest. Neither of these is automatically available. A task carried out in the public interest has to have some kind of statutory underpinning; this is apparently provided by Section 8 of the DPA 2018, which specifies ‘an activity that supports or promotes democratic engagement’ as a task carried out in the public interest. The explanatory notes to the DPA flesh this out:

The term “democratic engagement” is intended to cover a wide range of political activities inside and outside election periods, including but not limited to: democratic representation; communicating with electors and interested parties; surveying and opinion gathering, campaigning activities; activities to increase voter turnout; supporting the work of elected representatives, prospective candidates and official candidates; and fundraising to support any of these activities

In order to rely on what many people call ‘public task’, political parties have to satisfy themselves (and potentially the ICO or the courts) that their profiling fits this definition, and that the best way to, for example, communicate with electors is first to profile them. I’m not saying that it’s impossible to clear that hurdle – necessary doesn’t mean the only way, just the most appropriate and proportionate way – but it’s for the LibDems (and every other party) to show that they have thought about this and considered the alternatives. Because this processing is likely to have been carried out automatically (I presume that they don’t have crowds of artisan psephologists doing it by candlelight), this could mean that a Data Protection Impact Assessment is required. I’m not certain of this because I’m not sure whether the profiling would have a significant legal or other effect on the person, but if you read the ICO’s code of practice on political campaigning, they bend over backwards to argue the case for political advertising having that effect. In any case, there are other criteria in the European Data Protection Board’s guidance which might well lead to a mandatory DPIA (for example, large scale innovative techniques, or depending on the data used, large scale processing of special categories).

Of course, they may choose to rely on legitimate interests, which again requires work. They have to demonstrate that they have balanced their legitimate interest in understanding voters against the rights and freedoms of those voters. This must be *necessary*, and in my opinion, it is exceptionally difficult to make the case for legitimate interests where a person has not been informed of the processing.

Manthorpe’s story lays out another potential problem. The LibDems are creating special category data (political opinions), and it’s not unknown for politicos to use profiling to infer other characteristics, like Zac Goldsmith’s apparent attempts to infer ethnicity from surnames in the 2016 London Mayoral Election. The use of special category data is technically prohibited, but one of the exemptions is the substantial public interest. The LibDems would have to demonstrate that it is in the substantial public interest for them to process the data, and as before, that it is necessary for them to process data in this way.

That isn’t enough on its own. The use of substantial public interest has to be underpinned by a specific legal authorisation, which can be found in the Schedules of the DPA 2018. The only one that political parties can rely on is paragraph 22 of Schedule 1, which allows parties to process political opinions where necessary (that word again) for the purposes of the organisation’s political activities. The GDPR’s demand for accountability means that all of this decision-making will need to be documented, and every party will have to show that they considered the proportionality and necessity of their actions. At this point, I think the DPIA question is clearly answered – because the process leads to the creation by inference of political opinions, the party is processing sensitive data on a large scale, hitting two of the criteria set out by the EDPB guidance. Meeting two criteria means the processing is high risk and requires a DPIA; the processing is unlawful if they cannot demonstrate having carried out one.

Of course, all of this only applies to the processing, and both the GDPR and the DPA make clear that they have to stop processing the data if the person requests it, even if they’ve done all of the work I’ve described above. There are no exceptions to this. Moreover, if the party wants to send a text or an email to any person, none of this helps; the GDPR and DPA may allow the profiling (I don’t believe any party will have implemented the above rigorously enough to satisfy the law), but they do nothing about the rules for direct marketing in PECR. Even if they satisfy the GDPR requirements for processing special categories, that doesn’t help at all with PECR’s flat demand for GDPR-style consent when emailing individual subscribers (i.e. people using their own email addresses).

The LibDems claimed to Manthorpe that their privacy policy cures all ills:

The party complies with all relevant UK and European data protection legislation. We take the GDPR principle of transparency very seriously and state the ways we may use personal data clearly within the privacy policy on our website.

I don’t accept this for a moment. I’m a Data Protection nerd and I don’t go on random organisations’ websites to read their privacy policies just in case they might apply to me. The fact that contacting millions of people to tell them that they’re being profiled would be punishingly expensive isn’t GDPR’s problem – the sense of entitlement that political parties feel about data and how they use it should be secondary to the law. But even if you accept their argument, the fact that all parties are likely to have a file on every voter isn’t in our interests, it’s in theirs. They should be under pressure to show that platitudes like the statement above are backed up by the rigour and evidence demanded by the legislation. This should not be a story about the LibDems; this should be seen as a window into what all political parties do, and feel entitled to do. I have no faith in the ICO to sort this out, but scrutiny of what’s going on is in all of our interests.

 

ADVERT: I’m running GDPR courses across the UK until the end of 2019. In 2020, I’ll be running new courses on the DPA, Law Enforcement and Data Protection and Data Protection by Design. Take a look at my website for more: www.2040training.co.uk 

Going Unnoticed

Last week, I came across an interview with Elizabeth Denham on a Canadian website called The Walrus that was published in April. There are some interesting nuggets – Denham seems to out herself as a Remainer in the third paragraph (a tad awkward given that she has only enforced on the other side) and also it turns out that the Commissioner has framed pictures of herself taking on Facebook in her office. More important is the comparison she draws between her Canadian jobs and her current role: “That’s why I like being where I am now,” she says, settling herself at a boardroom table. “To actually see people prosecuted.”

Denham probably wasn’t thinking of the run of legitimate but low-key prosecutions of nosy admin staff and practice managers which her office has carried out in recent months, which means she was up to her old tricks of inaccurately using the language of crime and prosecution to describe powers that are civil (or more properly, administrative). Since GDPR came in, she’s even less likely to prosecute than before, given that she no longer has the power to do so for an ignored enforcement or information notice. I don’t know whether she genuinely doesn’t understand how her powers work or is just using the wrong words because she thinks it makes for a better quote.

Publicity certainly plays a far greater part in the ICO’s enforcement approach than it should. A few months back, I made an FOI request to the ICO asking about a variety of enforcement issues and the information I received was fascinating. The response was late (because of course it was), but it was very thorough and detailed, and what it reveals is significant.

ICO enforcement breaks down into two main types. Enforcement notices are used where the ICO wants to stop unlawful practices or otherwise put things right. Monetary penalties are a punishment for serious breaches. Occasionally, they are used together, but often the bruised organisation is willing to go along with whatever the ICO wants, or has already put things right, so an enforcement notice is superfluous. The ICO is obliged to serve a notice of intent (NOI) in advance of a final penalty notice, giving the controller the opportunity to make representations. There is no equivalent requirement for preliminary enforcement notices, but in virtually every case, the ICO serves a preliminary notice anyway, also allowing for representations.

According to my FOI response, in 2017, the ICO issued 8 preliminary enforcement notices (PENs), but only 4 were followed up by a final enforcement notice; in 2018, 5 PENs were issued, and only 3 resulted in a final notice. The ratio of NOIs to final penalties is much closer; in 2017, there were 19 NOIs, and only one was not followed up with a penalty. In 2018, 21 NOIs were issued, 20 of which resulted in a penalty. Nevertheless, the PEN / NOI stage is clearly meaningful. In multiple cases, whatever the controller said stopped the intended enforcement in its tracks. Given the confusion of many GDPR ‘experts’ about when fines are real and when they are merely proposed, the fact that not every NOI results in a penalty is worth noting.

The response shows the risks of neglecting to issue a PEN. In July 2018, the ICO issued AggregateIQ (AKA AIQ) with the first GDPR enforcement notice (indeed, it was the first GDPR enforcement action altogether). My FOI reveals that it was one of only a few cases where a preliminary notice was not issued. The AIQ EN was unenforceable, ordering them to cease processing any personal data about any UK or EU “citizens” obtained from UK political organisations “or otherwise for the purposes of data analytics, political campaigning or any other advertising purposes”. AIQ was forbidden from ever holding personal data about any EU citizen for any advertising purpose, even if that purpose was entirely lawful, and despite the fact that the GDPR applies to residents, not citizens. AIQ appealed, but before that appeal could be heard, the ICO capitulated and replaced the notice with one that required AIQ to delete a specific dataset, and only after the conclusion of an investigation in Canada. It cannot be a coincidence that this badly written notice was published as part of the launch of the ICO’s first report into Data Analytics. It seems that ICO rushed it, ignoring the normal procedure, so that the Commissioner had things to announce.

The ICO confirmed to me that it hasn’t served a penalty without an NOI, which is as it should be, but the importance of the NOI stage is underlined by another case announced with the first AIQ EN. The ICO issued a £500,000 penalty against Facebook, except that what was announced in July 2018 was the NOI, rather than the final penalty. Between July and October, the ICO would have received representations from Facebook, and as a result, the story in the final penalty was changed. The NOI claims that a million UK Facebook users’ data was passed to Cambridge Analytica and SCL among others for political purposes, but the final notice acknowledges that the ICO has no evidence that any UK users’ data was used for campaigning. As an aside, this means that ICO has no evidence Cambridge Analytica used Facebook data in the Brexit referendum. The final notice is based on a hypothetical yarn about the risk of a US visitor’s data being processed while passing through the UK, and an assertion that even though UK Facebook users’ data wasn’t abused for political purposes (the risk did not “eventuate”), it could have been, so there. I’ve spent years emphasising that an incident isn’t the same as a breach, but going for the maximum penalty on something that didn’t happen, having said previously that it did, is perhaps the wrong time to listen to me.

If you haven’t read the final Facebook notice, you really should. ICO’s argument is that UK users’ data could have been abused for political purposes even though it wasn’t, and the mere possibility would cause people substantial distress. I find this hard to swallow. I suspect ICO felt they had effectively announced the £500,000 penalty; most journalists reported the NOI as such. Despite Facebook’s representations pulling the rug out from under the NOI, I guess that the ICO couldn’t back down. There had to be a £500,000 penalty, so they worked backwards from there. The Commissioner now faces an appeal on a thin premise, as well as accusations from Facebook that Denham was biased when making her decision.

Had the NOI not been published (like virtually every other NOI for the past ten years), the pressure of headlines would have been absent. Facebook have already made the not unreasonable point in the Tribunal that as the final penalty has a different premise from the NOI, the process is unfair. Without a public NOI, Facebook could have put this to the ICO behind closed doors, and an amended NOI could have been issued with no loss of face. If Facebook’s representations were sufficiently robust, the case could have been dropped altogether, as happened in other cases in both 2017 and 2018. But for the sake of a few days’ headlines, Denham would not now be facing the possibility of a career-defining humiliation at the hands of Facebook of all people, maybe even having to pay their costs. It’s not like there aren’t a dozen legitimate cases to be made against Facebook’s handling of personal data, but this is the hill the ICO has chosen to die on. Maybe I’m wrong and Facebook will lose their appeal, but imagine if they win and this farrago helps them to get there.

The other revelation in my FOI response is an area of enforcement that the ICO does not want to publicise at all. In 2016, the ICO issued a penalty on an unnamed historical society, and in 2017, another was served on an unnamed barrister. I know this because the ICO published the details, publicly confirming the nature of the breach, the amount of the penalty and the type of organisation. One might argue that they set a precedent in doing so. What I didn’t know until this FOI request is that there have been a further 3 secret monetary penalties, 1 in 2017 and 2 in 2018. The details have not been published, and the ICO has refused to give me any information about them now.

The exemptions the ICO relied on set out its concerns. They claim that it might be possible for me to identify individual data subjects, even though both the barrister and historical society breaches involved very limited numbers of people but were still published. They also claim that disclosure would prejudice their ability to enforce Data Protection law, using this justification:

“We are relying on this exemption to withhold information from you where the disclosure of that information is held for an ongoing regulatory process (so, we are yet to complete our regulatory process and our intentions could still be affected by the actions of a data controller) or the information is held in relation to sensitive matters and its disclosure would adversely affect relationships which we need to maintain with the organisations involved. It is essential that organisations continue to engage with us in a constructive and collaborative way without fear that the information they provide to us will be made public prematurely, or at a later date, if it is inappropriate to do so. Disclosure of the withheld information at this time would therefore be likely to prejudice our ability to effectively carry out our regulatory function”

The Commissioner routinely releases the names of data controllers she has served monetary penalties and enforcement notices on without any fears about the damage to their relationship. Just last week, she was expressing how “deeply concerned” she is about the use of facial recognition by the private sector, despite being at the very beginning of her enquiries into one such company. And if maintaining working relationships at the expense of transparency is such a vital principle, how can they justify the publication of the Facebook NOI for no more lofty reason than to sex up the release of the analytics report? They say “It is essential that organisations continue to engage with us in a constructive and collaborative way without fear that the information they provide to us will be made public prematurely”, and yet the Facebook NOI was published prematurely despite the fact that it was a dud. What will that have done to the ICO’s relationship with a controller as influential and significant as Facebook? What incentive do FB have to work with Wilmslow in a constructive and collaborative way now? And if identifying the subjects is an issue, what is to stop the ICO from saying ‘we fined X organisation £100,000’ but refusing to say why, or alternatively, describing the incident but anonymising the controller?

It doesn’t make sense to publicise enforcement when it’s not finished, and it doesn’t make sense to keep it secret when it’s done. Every controller that has been named and shamed by the ICO should be demanding to know why these penalties have been kept secret, while Facebook have every right to demand that the Commissioner account for the perverse and ill-judged way in which she took action against them. Meanwhile, we should all ask why the information rights regulator is in such a mess.

And one final question: did she bring the framed pictures with her or did we pay to get them done?

Mistaken Identity

Over the past week, numerous excited stories have covered a talk given by James Pavur, an Oxford University researcher and Rhodes Scholar, at the Black Hat conference in Las Vegas. With his girlfriend’s consent, Pavur made 150 subject access requests in her name. In what the BBC called a ‘privacy hack’ until they were shamed into changing the headline, some of those that replied failed to carry out any kind of ID check. Pavur’s pitch is that GDPR is inherently flawed, allowing easy access for identity thieves. This idea has already got the IT vendors circling, and outraged GDPR-denier Roslyn Layton used the story to describe GDPR as a “cybersecurity/identity theft nightmare”. Pavur’s slides are available on the Black Hat website, but so is a more detailed whitepaper written by him and his girlfriend Casey Knerr, and anyone who has pontificated about the pair’s revelations should really take a look at it.

Much has been made of Pavur’s credentials as an Oxford man, but that doesn’t stop the 10-page document from containing errors and misconceptions. The authors claim that Marriott and British Airways have already been fined (they haven’t), and that there are only two reasons to refuse a subject access request (ignoring the existence of exemptions in the Data Protection Act 2018). They use ‘information commissioners’ as a term to describe regulators across Europe, and believe that the likely outcome of a controller rejecting a SAR from an applicant acting suspiciously would be ‘prosecution’. In the UK and most if not all EU countries, this is legally impossible. At the end, their standard SAR letter cites the Data Protection Act 1998, despite the fact that in context, any DPA is irrelevant and that particular one was repealed more than a year ago.

Such a list of clangers would be bad (though not necessarily unexpected) in a Register article, but despite presenting their case with a sheen of academic seriousness, Pavur and Knerr have some serious misconceptions about how GDPR works. It supposedly offers “unprecedented control” to the applicant, despite their experiment utilising a right that has existed in the UK since 1984. They claim GDPR represents a “sea change” in the way EU residents can control, restrict and understand the use of their personal information, even though most rights are limited in some way and are rooted firmly in what went before. They claim that “little attention has been paid to the possibility of request abuse”. I’ve been working on Data Protection since the authors were schoolchildren, and I can say for certain that this claim is completely false. SARs being made by third parties, especially with malicious intent, has been a routine concern in the public and private sector for decades. Checking ID is instinctive and routine in many organisations, to the point of being restrictive in some places.

Other assertions suggest a lack of experience of how SARs actually work. Because of the perceived danger of twitchy regulators fining organisations for not immediately answering SARs, “it is therefore fairly risky to fail to provide data in response to a SAR, even for a valid purpose”. This year, the ICO has had to enforce on high profile organisations for failing to answer SARs (it didn’t fine any of them), and is itself happy to refuse SARs it receives from elderly troublemakers. SARs are routinely ignored and refused, but the authors imagine that nobody ever wants to say no for fear of entirely imaginary consequences.

Pavur and Knerr think that panicking controllers will make a mess of the ID check: “we hypothesized that organisations may be tempted to take shortcuts or be distracted by the scope and complexity of the request”. This ignores three factors. First, for many organisations, a SAR is nothing new, and the people dealing with it will have seen hundreds of SARs before. Second, the power advantage is with the controller, often a large organisation ranged against a single applicant (and in the UK, facing a regulator unlikely to act on the basis of one SAR complaint). Third, and most important, they don’t factor in the reality that the ID check takes place *outside* the month. ICO says that until the ID check is made, the request is not valid and the clock is not ticking. A sense of panic when the request arrives – necessary for the authors’ scenario to work – will only be present in those with little experience, and if you’re telling me that people who don’t understand Data Protection tend to cock it up, I have breaking news about where bears shit.

Another unrealistic idea is that by asking whether data has been inadvertently exposed in a breach (a notion written into the template request), the authors make the organisation afraid that the applicant has knowledge of some actual breach. “We hypothesised that such a belief might cause organisations to overlook identity verification abnormalities”. I can’t speak for every organisation, but in my experience, a breach heightens awareness of DP issues. Making the organisation think that the applicant has inside knowledge of a breach will make most people dot every ‘I’ and cross every ‘T’. Equally, by suggesting that ID be checked through the unlikely option of a secure online portal, the authors hope to make the organisation feel they’re running out of options, especially because they think the portal would have to be sourced within a month. Once again, this is the wrong way around. An applicant who wants to have their ID checked via such a method would either get a flat no, or the controller could sort it out first and then have the month to process the request.

A crucial part of the white paper is this statement: “No particularly rigorous methodology was employed to select organisations for this study”. Pavur and Knerr say that the 150 businesses operate mainly in the UK and US, the two countries they’re most familiar with. I’m going to stick my neck out and bet that the majority of the businesses who handed over the data without checking are US-based. Only two of the examples in the paper are definitely UK – a rail operator and a “major UK hotel chain”. Many of the examples are plainly US businesses (they cite them as Fortune 100 companies), and one of the most specific examples of sensitive data that they obtained is a Social Security Number, which must have come from a US institution of some kind.

If you tell me that a significant number of UK businesses, who have been dealing with SARs since 1984, don’t do proper ID checks, that’s a real concern. If you tell me that it’s mainly US companies, so what? Many US companies reject the application of GDPR out of hand, and I have some sympathy for their position, but it’s ridiculous to expect them to be applying unwelcome foreign legislation effectively. This is the risk that you take when you give your data to a US company that isn’t represented in the UK or EU. Pavur and Knerr haven’t released the names of the organisations that failed to check ID, and until they do, there’s not much in the paper to show that this is a problem in the UK, and a lot to suggest that it’s not.

The potential solutions they come up with are flawed. They say regulators should reassure organisations that they will not be prosecuted if they reject requests without ID, despite there being no evidence that any regulator says anything different (or indeed, has enforced in such circumstances). Their main recommendation for legislators is that government ID verification schemes should be used by all controllers to check the ID of SAR applicants. It’s true that there is no standardised ID check and controllers will act on a case-by-case basis, but that’s infinitely preferable to Dominic Cummings’ government knowing every time you exercise your data protection rights.

I have never run a training course that mentions SARs that doesn’t mention checking ID. At least in the UK, a request isn’t seen to be valid unless some form of ID has been presented. In the last month, two different data controllers (the Conservative Party and Trilateral Research) have insisted on seeing a driving licence or equivalent before processing my SAR, despite me applying from the email address they have on file. A few US controllers handling SARs in a sloppy manner isn’t a cause for great concern. It certainly doesn’t suggest significant flaws in the way GDPR is drafted.

For all my criticisms of the pair’s approach, they do admit that the white paper was “a cursory assessment”.  I don’t doubt their expertise in security, their good intentions or the truth of their ultimate message: checking ID is essential when dealing with SARs. The problem with the experiment is that it reads like what two clever people reckon subject access is like, rather than how it works in the real world. I’d strongly suggest that if they follow up on this first attempt with a more robust piece of research (which is hinted at in the white paper), they approach the subject with a more realistic and detailed understanding of how Data Protection actually works, and maybe get some advice from people with real SAR experience.

Mates’ Rates

A while ago, I noticed an FOI request sent to the Information Commissioner’s Office on the website WhatDoTheyKnow. I always keep an eye on requests made to Wilmslow, but this one was especially intriguing. It asked about payments made by the ICO to external suppliers and consultants where the work had not been put out to tender. Even the progress of the request became notable because the Information Commissioner was seemingly incapable of answering it. The original request was made on February 26th 2019, and the ICO didn’t answer it until July 9th, more than three months after the legal time limit. Twice, the ICO set itself a deadline by which it would definitely answer the request, and twice it failed to do so. Remember, friends, the Information Commissioner is the regulator for FOI. They’re supposed to ensure that other public sector bodies answer their FOI requests, but they’re terrible at answering their own. It’s worth noting that Liz Denham was almost certainly aware of the request, as there is specific mention of her private office being involved during the glacial march towards a reply.

SIDENOTE: I made an FOI about enforcement and monetary penalties to the ICO that was due on July 16th. The Senior Information Access Officer handling my request told me that he hoped to provide me with a reply by Friday, and an answer came there none. I wonder if they’ll sit on it like they did here.

Anyway, due to the busiest July I have ever had (take that, haters), I missed the fact that the payments request had finally been answered, apparently in full, with no use of exemptions. At first glance, it’s nothing to get excited about. The total amount is less than £240,000, and though the highest payment made to anyone is £58,000, the explanation of why this work was not put out to tender doesn’t sound outrageous:

A project brief was developed and three suppliers were approached for quotes. The requirements were in two parts, the first part was research and the second part was delivery. The second part of the brief was tendered to the supplier who completed the first after the proposed next steps were evaluated by the Board and it was agreed to implement their proposals – meaning they were uniquely placed to deliver the second part.

A couple of items did leap out at me. David Smith is not a unique or distinctive name, but it’s hard to believe that the David Smith behind David Smith (DP) Ltd, paid £5,152 for international engagement, isn’t the same David Smith who was until recently Deputy Information Commissioner. Apparently, Smith is “uniquely placed” to deliver international engagement on behalf of the ICO. One would think that the Commissioner’s pan-continental roadshow would provide all the engagement the office could require, but I suppose throwing a few grand in an old friend’s direction isn’t the worst thing the ICO has ever done.

Equally, I doubt the Simon Entwistle who received £5,791 is a different Simon Entwistle to the Simon Entwistle who was until recently Deputy Information Commissioner. Apparently, he is uniquely placed to carry out ‘Executive Coaching’. Granted, Entwistle is an old ICO hand, originally appointed by Richard Thomas, but it’s rather odd that, now he has retired, the organisation has to pay him to coach the ICO’s senior people. Elizabeth Denham was paid around £180,000 in 2018 – 2019, and every member of the executive team is within spitting distance of £100,000. These are well-paid, experienced people – if they’re in these jobs, that should be because they already have the skills to do these jobs. If they don’t, why were they appointed?

The sum paid to Entwistle for coaching isn’t massive, but it’s not the only one. Two different amounts, totalling £17,968, were paid to a ‘Philip Halkett’ for executive coaching, a role which he was once again “uniquely placed” to carry out. I cannot say for certain who Halkett is, and I am happy to be corrected if I have got it wrong. However, I believe he is a former Deputy Minister in Canada’s Ministry of Forests, and is based in British Columbia, where he describes himself on LinkedIn as ‘semi retired’. There are literally hundreds if not thousands of people offering coaching in the UK, but since Denham became Commissioner, the ICO has paid just shy of £18,000 to a man who has no website or company that I can find, whose sole contribution to the internet is a single retweet about Denham, and whose main qualification for the job appears to be that he comes from the same remote corner of Canada as she does.

Halkett isn’t the only Canadian to feel the benefit of the ICO’s munificence. The former Information Commissioner of Canada, Suzanne Legault, is “uniquely placed” to deliver the secretariat for the International Commissioner’s Conference that Denham has been nominated to organise. It’s probably a complete coincidence that Legault is Canadian, and it’s not like organising an international conference isn’t a thing that loads of organisations do all of the time all over the world.

Most intriguing is the work carried out by a British customer service guru, Mark Colgate. Colgate has been paid £20,000 to deliver “advice on development of service excellence programme and delivery of training to 500 staff”. If you visit Colgate’s website, you’ll find an uncharismatic man plugging some basic customer service ideas using gratingly clunky acronyms. Colgate sums up his philosophy as ‘Tofu’, which means that he is selling a fundamentally unappetising and artificial concept. I’m joking, ‘Tofu’ means ‘Take Ownership and Follow Up’. Another element of the Colgate Method is FAME, a concept that is so vapid and forced I can’t bear to reproduce it here.

Most regulators do not have customers. In particular, the ICO is not an ombudsman. It is not their job to give complainants redress or resolution. The ICO’s role is to ensure that controllers comply with the law – Article 57 of the GDPR states that the first task of the supervisory authority is to ‘monitor and enforce‘ the regulation. Handling complaints from members of the public is in there, but the public are not the ICO’s customers, any more than controllers are. The aim should not be to give either side what they want, but to ensure that controllers do those things that they are obliged to do, and that individuals’ rights are respected. A ‘customer service’ mentality is at best a distraction, and at worst, risks creating expectations that cannot be met. Many controllers have experience of ICO case officers who keep pushing a dead-end complaint because they clearly don’t want to give an angry or unreasonable complainant an answer they don’t like. If you swap it and make the controllers the customer, it’s at least as bad. The ICO has a long, shabby history of bending over backwards to appease ‘stakeholders’; the aforementioned David Smith was a big fan of describing the ICO as ‘enablers’ of business and innovation, rather than an organisation with a clear mission to enforce some specific laws.

But let’s assume that I’m wrong. Let’s assume that the ICO does need to spend thousands of pounds training its staff on customer service. What exactly is it about Mr Colgate’s brand of bargain basement Dale Carnegie that meant he had to be awarded this work without a tender process? Why, given the plethora of genuine customer service experts in the UK, was Mr Colgate “uniquely placed in respect of experience and expertise” to deliver this work, especially when what he offers is so similar to what so many other people offer? Is there any clue in his CV? Mr Colgate’s current berth is as “Professor of Service Excellence” at the Peter B. Gustavson School of Business, which you can find in the fine city of Victoria. In Canada. Specifically, in British Columbia. About a ten-minute drive from the Office of the Information and Privacy Commissioner for British Columbia, the last incumbent of which was a certain Elizabeth Denham. Fans of funny coincidences may also care to note that the current incumbent of that office is Mr Michael McEvoy, last seen running the ICO’s investigation into data analytics, a role that I do not believe was advertised externally.

At this point, you might be saying ‘so what?’. Denham has thrown some work to her mates both at home and abroad: is this so terrible? In my opinion, it really is. A good chunk of my work comprises a single day’s training, and in many cases, the client gets multiple quotes before giving me the work. I simply don’t believe the ICO’s claims that these people are the only possible candidates, especially as there was no competition or objective test. Public money should not be spent on a whim, especially not with the particular flavour of favouritism and self-indulgence that appears to be on show here. Bringing in your Canadian friends to provide luxury services that thousands of people in the UK are well-placed to provide shows lamentably poor judgement.

This is not the first time I have blogged about Denham’s terrible decisions. She did an advert for a commercial company. She enthusiastically endorsed a book she hadn’t read, making claims about the author which were not true. She made misleading claims in the media to get headlines and bragged on TV that she had used powers that actually don’t exist. She announced the Facebook fine prematurely and now faces accusations of bias and procedural unfairness. We haven’t had a decent Commissioner since Elizabeth France, but despite Richard Thomas’ over-caution and Chris Graham’s superficiality, both of them seemed able to do the job without the growing list of howlers for which Denham is responsible. Paradoxically, she is the most respected and popular of all the Commissioner’s incarnations and my complete lack of faith in her judgement makes me the bad guy, as usual. Nevertheless, questions need to be asked about what exactly is going on in Wilmslow, how decisions are being made, and how money is spent. There are a number of well-paid non-executive directors on the ICO board; I would be keen to know what they think of all this.

Quis custodiet ipsos custodes, and all that.

Lateral Thinking

Last week, I wrote a blog about the ‘personal data agency’ Yo-Da, outlining my concerns about their grandiose claims, the lack of detail about how their service works and their hypocritical decision to ignore a subject access request I made to them. Predictably, this led to further online tussles between myself and Benjamin Falk, the company’s founder and ‘chief talker’. As a result of our final conversation, Yo-Da has effectively disappeared from the internet. Clearly, I touched a nerve.

Yo-Da’s website made concrete claims about what their service did, and in fact had done. There were testimonials from satisfied users, and three case studies. Although it was clear that the service wasn’t operating yet, the testimonials were unambiguous: here is what Yo-Da has done for me. There was no hint that they were fictional, nothing to suggest that the service couldn’t do what the site said.

Yo-Da systematically and automatically exercises your data rights


Use Yo-Da to ask any company in Europe to delete your personal information

User ‘Samuel’ claimed “Now I go to Yo-Da, search for the company whose (sic) been breached, and with 1-click find out what is happening with my personal information”, while ‘Nathan’ said “Yo-Da was simple to use and helped me understand just how many businesses in Europe have my data”.

None of this is true. Yo-Da do not have a working product that does these things. As Falk put it to me, “Our technology is still under development” and “We have some ideas that are working. They aren’t perfect.” I am not saying that Yo-Da aren’t developing an automated data rights service; I’m certain that they are. I’m not saying a product will never launch; I expect that it will and I am looking forward to it, though perhaps not for the same reason as Samuel and Nathan. The point is, it doesn’t exist now and the website said that it did.

Originally, Falk claimed that he had deliberately ignored my subject access request because it was unfounded. ‘Unpleasant’ people like me don’t have data rights, he claimed. This didn’t sound right, especially as after I published my blog, Yo-Da’s DPO (Trilateral Research) suddenly woke up and tried to process my request, as if this was the first they’d heard of it. During our correspondence, they made it clear that they agreed with Falk’s decision that my request was unfounded, but were silent on the decision to ignore it.

But in my argument with Falk, he admitted the truth: “We have an outsourced DPO for a reason; we can’t afford a full time one. That’s why the SAR went ignored; our service isn’t live yet and so we didn’t expect to receive any requests, because we aren’t collecting any personal data on anyone”.

In a single tweet, Falk said a lot. He was admitting that all of the testimonials and case studies were fake (he ultimately said to me that they were “obviously fake”). At the same time, he was also not telling the truth. Falk said that the website was a “dummy” to “gauge interest”. In other words, the site exists as an advert for a theoretical service, but its other purpose is to persuade people to sign up to Yo-Da’s mailing list. It was designed to collect personal data. Yo-Da were saying ‘sign up with us to use this service that actually works’. I believe that this is a direct breach of the first GDPR principle on fairness and transparency. I want to know why Trilateral Research acted as a DPO for an organisation that did this.

Falk said that he was joking when he said that he ignored my request on purpose, but Trilateral didn’t acknowledge that. They wrote of a ‘delay’ in acknowledging my request, but concurred with Falk’s decision that it was unfounded. That decision was never made; my SAR was just missed. Nobody was checking the ‘dpo@yo-da.co’ email account – Falk wasn’t, and neither were they, despite being the putative DPO. Either they didn’t know what had happened, or they didn’t care. They definitely backed up their client rather than digging into why a SAR had been received and ignored on spurious grounds without their involvement. Let’s be generous and assume that they didn’t know that Falk was bullshitting. Their client had taken a controversial and disputable decision in a SAR case, and he hadn’t consulted them before he did it, but they didn’t acknowledge that. They backed the refusal of my request as ‘unfounded’.

Even if Yo-Da one day launches a product that successfully facilitates automated data rights requests to every company in Europe (prediction: this will never happen), they definitely don’t have that product now, and their website claimed that they did. Either Trilateral didn’t know that this was the case, which means that they failed to do basic due diligence on their client, or they knew that the Yo-Da website was soliciting personal data on the basis of false claims.

When I pointed out to Falk that all of the sign-up data had been collected unlawfully (it’s not fair and transparent to gather data on the back of claims about a service that doesn’t exist), the conversation ended. The Yo-Da website instantly vanished, and their Twitter account was deactivated minutes later. I’m certain that Falk will be back, his little spat with me considered to be no more than a bump in the road to world domination. But forget him; what does this say about Trilateral? The best defence I can think of is that they took Falk’s money to be an in-name-only DPO but didn’t scrutinise the company or their claims. This is bad. If they had any idea that Yo-Da couldn’t do what the website claimed, it’s worse.

According to the European Data Protection Board, the professional qualities that must be demonstrated by a Data Protection Officer include “integrity and high professional ethics”. I seriously question whether Trilateral have demonstrated integrity and high professional ethics in this case. It’s plainly unethical to be named as DPO for an organisation, and then ignore what comes into the DPO email address. Article 38(4) of the GDPR states “Data subjects may contact the data protection officer with regard to all issues related to processing of their personal data and to the exercise of their rights under this Regulation” but Trilateral weren’t even listening. It’s unethical to take on a client without knowing in detail how their services work (or even whether their services work), and that’s the only defence I can see in this case. It’s unethical to be DPO for an organisation that is making false or exaggerated claims to obtain personal data.

I regularly get asked by clients if I can recommend an outsourced DPO or a company who can do the kind of sustained consultancy work that a solo operator like me doesn’t have the capacity for. There are a few names I’m happy to give. I have no hesitation in saying that on the basis of this shoddy episode, I wouldn’t touch Trilateral Research with a bargepole.