Whoops!

Yesterday, after at least a year of pondering it, the Information Commissioner asked the Universities and Colleges Admissions Service (UCAS) to sign an undertaking, agreeing to change the way in which they obtain consent to use students’ data. The data is obtained as part of the application process and subsequently used for marketing a variety of products and services, and UCAS has agreed to change its approach. It’s important to note that this is an undertaking, so UCAS has not been ordered to do anything, nor are there any direct consequences if they fail to do what is stated in the undertaking. An undertaking is a voluntary exercise – it is not served, it does not order or require, it simply documents an agreement by a Data Controller to do something.

Aspects of the story concern me. The ICO’s head of enforcement is quoted as saying: “By failing to give these applicants a clear option to avoid marketing, they were being unfairly faced with the default option of having their details used for commercial purposes” but given that the marketing was sent by text and email, the opportunity to “avoid” marketing is not what should have been in place. If UCAS wanted to sell access to university and college applicants, they needed consent – which means opt-in, not opt-out. As the undertaking itself points out, consent is defined in the EU Data Protection Directive as freely given – an opt-out cannot constitute this in my opinion. If you think that an opt-out does constitute consent, try transposing that thinking into any other situation where consent is required, and see how creepy your thinking has suddenly become. Consent should be a free choice, made actively. We should not have to stop commercial companies from texting and emailing us – the onus should be on them to make an attractive offer we want to take up, not on consumers to bat away their unwanted attentions.
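For what it’s worth, the distinction is easy to express in code. A minimal sketch in Python, with a hypothetical form field name; the point is simply that, absent any action by the applicant, the answer must be no:

def may_send_marketing(form: dict) -> bool:
    # Opt-in: marketing is lawful only if the applicant actively ticked the box.
    # A missing or unticked box is not consent; silence is not agreement.
    return form.get("marketing_opt_in") is True

print(may_send_marketing({}))                          # False: no choice made
print(may_send_marketing({"marketing_opt_in": True}))  # True: active consent

An opt-out system inverts that default and treats inaction as agreement, which is precisely what “freely given” cannot mean.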

It’s entirely possible that the ICO’s position on consent is better expressed in the undertaking itself, but here we have a little problem. At least when it was published yesterday, half of the undertaking was missing. Only the odd-numbered pages were published, so presumably the person who scanned the document had a double-sided original and didn’t notice that they had scanned it single-sided. The published document also included one page of UCAS’ covering letter and the final signed page of the undertaking, which the ICO never normally publishes. This mistake reveals some interesting nuggets that we wouldn’t normally know, from the trivial (the Chief Executive of UCAS signed the undertaking with a fountain pen, something of which I wholeheartedly approve) to the potentially significant (the covering letter sets out when UCAS might divert resources away from complying with the undertaking).

But that’s not the point. The point is that the ICO uploaded the wrong document to the internet, and this is not the first time it has happened. I know this because on a previous occasion, I contacted the ICO to tell them that they had done it, and many people on my training courses have also seen un-redacted enforcement and FOI notices on the ICO website. The data revealed in the UCAS case is not sensitive (although I don’t know how the UCAS Chief would feel about her signature being published on the internet), but that’s not the point either. The ICO has spent the last ten years taking noisy, self-righteous action against a variety of mainly public bodies for security slip-ups, and the past five issuing monetary penalties for the same, including several following the accidental publication of personal data on the internet.

The issue here is simple: does the ICO’s accidental publication of this undertaking constitute a breach of the 7th Data Protection Principle? They know about the risk because they’ve done it before. Have they taken appropriate technical and organisational measures to prevent this from happening? Is there a clear process to ensure that the right documents are published? Are documents checked before they are uploaded? Does someone senior check whether the process is being followed? Is everyone involved in the process properly trained in the handling of personal data, and in the technology required to publish documents onto the web? And even if all of these measures are in place, is action taken when such incidents are identified? If the ICO can give positive answers to all of these questions, then it is not a breach. Stuff happens. But if they cannot, it is a breach.

No matter how hilarious it would be, there is no realistic prospect that the ICO will issue a civil monetary penalty (CMP) on itself following this incident, although it is technically possible. What should happen is that the ICO quickly and effectively takes steps to prevent this from happening again. However, if the Information Commissioner’s Office does not ask the Information Commissioner Christopher Graham to sign an undertaking, publicly stating what these measures will be, they cannot possibly speak and act with authority the next time they ask someone else to do the same. Whether they redact Mr Graham’s signature is entirely a matter for them.

UPDATE: without acknowledging their mistake, the Information Commissioner’s Office has now changed the undertaking to be the version they clearly intended to publish. One wonders if anything has been done internally, or if they are simply hoping that only smartarses like me noticed in the first place.

Tales from the Crypt

If you don’t work in local government, you may never have encountered the Local Government Ombudsman, an organisation devoted to giving nutcases somewhere to grind their axes (officially, investigating possible maladministration in councils). The scope of the LGO’s work includes everything that councils do, but inevitably many complaints are about the most sensitive areas: child protection, looked after children, adoption, and adult social care. In dealing with complaints from the public, the LGO gets access to genuinely and (in Data Protection terms) legally sensitive information. Inevitably, given that councils have been the target of more ICO civil monetary penalties than any other sector, largely because councils are dumb enough to keep dobbing themselves in to Wilmslow, many councils are keen to use the most secure way of sending this confidential data to the Ombudsman.

It may seem odd, therefore, that the LGO sent an email to councils last month, containing the following message:

Encrypt or not to encrypt – that is the question …..

We’ve had a number of issues accessing encrypted emails which have been sent to us by councils. Whilst we appreciate that your information security policy may dictate how you send information to us, if there is any discretion please only send encrypted emails when it’s absolutely necessary.

Someone mentioned the gist of it to me, but I made an FOI request to the LGO to be certain that they really were sending out such a daft message. The LGO’s Information and Records Manager rather sweetly explained in their response to me that “our intention in sending this request was [to] discourage councils encrypting emails that contain no sensitive personal or confidential data. Of course, if councils are sending sensitive personal data we would expect them to encrypt it – as we would do ourselves”. This is a useful piece of context for someone asking for the information under the auspices of FOI. However, this isn’t what they said to the numerous council link officers who received the email, and who were expected to act upon its contents. It’s almost the opposite.

Encrypting devices within an organisation is an easier proposition, as all the devices and connecting software are already part of the same system. The problem with encrypting email is undoubtedly that it involves different systems and protocols butting heads in the attempt to make a connection. The LGO pointed out to me that their case management system contains its own email system which can make receipt of an encrypted email difficult. But this is the LGO’s problem and nobody else’s. Councils have no choice about whether to supply data – one of the ‘key facts’ about the LGO on their website is that “We have the same powers as the High Court to obtain information and documents“. Given the ICO’s historic fondness for fining the sector for data security lapses, if councils opt for encryption by default, they should be applauded, especially by the organisation set up to investigate their conduct.
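To be clear about what councils are actually being asked to do, the encryption itself is the easy bit. Here is a minimal sketch in Python, using the third-party cryptography library; the file names are illustrative, and this shows the principle rather than any particular council’s setup:

# Symmetric encryption of a case file before it goes anywhere near email.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the key must reach the recipient out of band
fernet = Fernet(key)

with open("case_file.pdf", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("case_file.pdf.enc", "wb") as f:
    f.write(ciphertext)

# The recipient reverses the process with the same key:
# Fernet(key).decrypt(open("case_file.pdf.enc", "rb").read())

The hard parts are exchanging keys and persuading the receiving system to cope, which is exactly why the burden belongs on one Ombudsman rather than on hundreds of councils told to stop encrypting.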

This will inevitably pose problems for the LGO internally, but the solution is not to encourage councils to reverse sensible changes in behaviour that another regulator has been pushing them into. They are a regulator whose job it is to deal with a diverse and multilayered sector with widely disparate cultures and practices, and they have to be capable of swallowing the inconvenient implications of this. However difficult it might be to cope with, especially without the clarification provided to me in my FOI response (and as far as I know, to no-one else), the LGO’s current advice is damaging and unsafe. Councils should ignore it, and the LGO should withdraw it.

Crazy Naked Girls

There’s little to like about the voyeuristic coverage of the theft of images of famous women. Whether it is the feverish frottage of the mainstream press (which largely boils down to LOOK AT ‘EM ALL, IMAGINE ‘EM ALL NAKED, NNNNNNNNGGGGGG!!!!!) or the inevitably crass victim blaming (thank you, Ricky Gervais, for The Office and for absolutely nothing else), it’s all depressing.

The data protection strand in all this hasn’t been much better. Mobile devices are not a safe place to store sensitive data (true). The cloud is – in Graham Cluley’s immaculate phrase – just someone else’s computer (true). But too many security commentators have, perhaps unwittingly, aligned themselves with a ‘They asked for it’ line of thinking. A popular analogy is the one about burglary or car theft (this is an example from 2011). Apparently, you can’t complain if you leave your valuables on the front seat of your car and somebody steals them, and the same goes for pictures of your bits on the internet. In other words, the thinking is more or less that if Jennifer Lawrence is silly enough to take pictures of herself naked, she was basically asking for them to be stolen. For me, this is too close to the mentality that blames rape victims for being drunk, rather than rapists for being rapists. Friends, I blame the rapists.

Taking pictures of oneself is normal for most people, not just actresses – I am odd because I don’t do it, but if I were good looking, I probably would, all the time. It must be great to be extraordinary, and to enjoy being extraordinary. It’s too easy to be holier-than-thou and say that the violated only have themselves to blame. The victims made these images for themselves, or for one specific other person. They did not make the images for the media, or for the voyeurs who stole, sold or searched for them. Anyone who handles these images or seeks them out violates the subject’s privacy, is a criminal and should be treated as such. The victims did nothing remotely scandalous or reprehensible – indeed, they did nothing that is anyone else’s business but their own. They probably didn’t do a privacy impact assessment before taking the pics, but that’s because they’re human beings and not data controllers.

The car analogy doesn’t work because mobile phones and the internet are not immediately understandable physical objects and spaces. When you leave your laptop on the passenger seat of your car, you can turn around and see the laptop sitting there. The risk is apparent and obvious. There’s a striking moment in Luc Besson’s current film ‘Lucy’ where Scarlett Johansson can see data streams soaring out of mobile phones across Paris, and navigates her way through them. We don’t see data like this. Few understand how the internet actually works (I’ve met a lot of people who think cloud storage means that data is floating in the air like a gas). We don’t see the data flowing or spot the footprint it leaves behind. We don’t know where the data ends up and the companies we use don’t tell us. We use unhelpful misnomers like ‘the cloud’ when we mean ‘server in a foreign land’. Many people don’t know how their phones work, where their data is stored, how it is copied or protected, or who can get access to it. This should be the problem that the photo hack alerts us to.

It’s possible that some people would change the way they used technology if they fully understood how it works, but that should be their choice, based on clear information provided by the manufacturers. At least one of those affected has confirmed that the images of her are quite old, so we can’t even judge the situation on what we know now. If taking the pics was a mistake (and I don’t think I’m entitled to say it was), it was a mistake made possibly years ago.

I don’t think people understand where their data is or how it is stored. Rather than wagging our fingers at the victims of a sex crime, anyone involved in data protection and security should concentrate on educating the world about the risks. I think the big tech companies like Google, Apple and Facebook would be uncomfortable with this idea, which is why security and sharing are presented as such tedious, impenetrable topics. They don’t want more informed use of their services, they just want the data like everyone else. The defaults for sharing and online storage, for location and tracking, for a whole variety of privacy invasive settings should be set to OFF. Activities involving risk should be a conscious choice, not an accidental side effect of living in the 21st century.

Concerns

At the end of July, the Information Commissioner issued a Civil Monetary Penalty on Think W3, an online travel company. Think W3 had flawed security and audit processes, and when a hacker gained access to Think W3’s customer data via a subsidiary company, the ICO (I think reasonably) concluded that the flawed framework was to blame. Think W3 received a Civil Monetary Penalty of £150,000.

When the ICO published the notice on their website, a sentence or two on page 3 was tantalisingly redacted. My friend and fellow blogger Jon Baines wrote about the case at the time, noting in particular that Think W3 were not a random small travel company, but a wholly owned subsidiary of Thomas Cook. Thomas Cook bought the company in 2010 and sold it in January this year. The ICO made no mention of Thomas Cook, but Jon made short work of identifying the connection. He suggested to me that perhaps the missing sentence in the CMP was a reference to the parent company, and so I decided to make an FOI request to the Commissioner to find out whether he was right.

The ICO responded (by remarkable coincidence, on the last of the available 20 working days) by providing me with the redacted information:

Both companies were part of the Thomas Cook Group at the time of the below mentioned incident until they were sold on 24 January 2014.

As always, the ICO was unable to leave it at a bald answer (hint to FOI officers: less is often more). They explained the redaction as follows:

“The information was redacted following concerns raised by Thomas Cook, about its inclusion. The concerns focused on the fact that Thomas Cook considered it to be irrelevant and potentially prejudicial. They have said that Think W3 Ltd operated independently of other companies in the Thomas Cook Group and the system that was the subject of the security breach was in no way connected to the systems used in any other part of the Thomas Cook Group. Further, that the Essential Travel computer system that was the subject of the security breach was a legacy system that was used by Think W3 Ltd/Essential Travel before those companies became part of the Thomas Cook Group in 2010 and that system has at no time been connected to the systems used by any other part of the Thomas Cook Group.

As these concerns were only raised at a time when the civil monetary penalty notice was final and could not be altered the information could not be removed, but had to be redacted”

My request was made on the same day that the notice was published, and the response was provided to me within a calendar month. If disclosure is not prejudicial now, it was not prejudicial then. As I said above, it took Mr Baines minutes to make the connection between Think W3 and Thomas Cook, so any notion of prejudice is fanciful. Moreover, Thomas Cook’s claim that their ownership of the company at the time of the breach is “irrelevant” is twaddle. For one thing, Thomas Cook owned the errant company at the time of the incident and, more importantly, during the period when their security was inadequate. They also paid the CMP, which makes their claim of irrelevance an insult to our collective intelligence.

Crucially, no matter how independently Thomas Cook allowed Think W3 to operate, what happened in Think W3 reflects on Thomas Cook. The public – providing their data to the range of companies owned by the group – are entitled to know that Thomas Cook does not check whether proper controls are in place in the companies it owns. The ICO should have rejected these wholly spurious claims out of hand; instead, they meekly complied: the information “had to be redacted“.

There are two important reasons why these redactions run entirely counter to what the ICO should be about. Firstly, there are quite a few of us who believe that the ICO’s enforcement of the Data Protection Act is unfairly skewed against the public sector. Out of dozens of Data Protection CMPs since 2010, only a handful have been against private sector companies. Nevertheless, senior figures in the ICO cling to the idea that ‘market forces’ play a part in deterring organisations from misusing our data. Personally, I don’t believe them, but editing the notice prevents the ICO’s own pet theory from being tested. Market forces cannot operate as the ICO wishes if the ICO itself hides the information.

The other problem is that the ICO is not just the regulator of Data Protection, but also of Freedom of Information. Instead of championing openness and transparency, the ICO cravenly removed the Thomas Cook reference when there was no reason to do so other than Thomas Cook’s (pointless) sensitivities. There was no exemption under FOI (as my request demonstrated), just a regulator all too keen to accommodate big data controllers. Indeed, although they have told me what they removed, the redacted notice is, at the time of writing, still on the website.

This is far from the first time the ICO has issued a redacted CMP notice, and it probably won’t be the last. But this one demonstrates that the reasoning behind such censorship is flawed, and we should be quick to ask questions when they do it again.

What’s the damage?

BTO Solicitors recently marked the publication of the Information Commissioner’s annual report with a blog by two of their solicitor advocates about the Commissioner’s recent enforcement activity. BTO enjoyed a notable coup in 2013 by overturning the ICO’s £250,000 civil monetary penalty against Scottish Borders Council. I agree with the blog’s authors, Laura Irvine and Paul Motion, that the Borders case was hopeless; it is the low point in the ICO’s obsessive pursuit of “data breaches”. For several years, Wilmslow seemed to believe that [incident = breach] was a winning formula, and when tested in the Borders case, they were found wanting. The blog asserts that in several other cases, the ICO would equally have found it difficult to defend their CMPs, and again, I agree. Borders is not the only flawed CMP, and others could probably have been overturned.

Having said that, I think their review of recent action is eccentric, even myopic. They assert that the Commissioner “has not changed his approach to ‘likelihood’ since the Scottish Borders appeal”, selecting two examples (Jala Transport and Bank of Scotland) to support their contention. I don’t know whether these two CMPs are sustainable, but they exemplify the difference between a one-off incident and an ongoing breach. I am certain that both are the latter. Jala’s director routinely carried the sole copy of his customer database on an unencrypted hard drive which he placed on the passenger seat of his car, while the Bank of Scotland proved incapable of preventing staff from sending faxes to the wrong destination even after the ICO started to investigate them. I think it’s instructive that neither organisation appealed.

Moreover, the argument that the ICO is on the same track is a lot easier to make if you stick rigidly to action taken in 2013, so that’s what Irvine and Motion’s blog does. There have only been three CMPs for Data Protection in 2014, and I believe that each would survive Tribunal scrutiny. As always, the incidents are eye-catching – an anti-abortion hacker gets access to the identity of women potentially seeking abortion, a police station is sold with evidence tapes identifying suspects, victims and witnesses, and a filing cabinet is sold despite containing personal data about compensation payments paid to victims of terror attacks. However, it was always likely that if BPAS did not properly maintain their website, it would come under attack from anti-abortion campaigners. It was equally likely that if Kent Police did not properly organise and monitor the clearance of their buildings, evidence would be left behind – and the same goes for the Department of Justice. In each case, the data was sensitive personal data, and to steal a word from BTO’s own blog, to argue that the loss of such data would not be likely to cause damage is frankly bizarre. The 2014 decisions may not be perfect, but they must have been made with the outcome of the Borders case in mind, and I think these three cases show a more robust and defensible process at work.

The blog ends by considering Christopher Niebel’s successful appeal against the ICO’s £300,000 CMP for his industrial-scale spamming. It’s unlikely that anyone will mount a campaign larger than Niebel’s, which Judge Wikeley described as “a considerable public nuisance“, so the outcome of his appeal may effectively make the UK’s current regime under the Privacy and Electronic Communications Regulations (PECR) unenforceable. Wikeley suggested that had the bar been set lower (nuisance, rather than damage or distress), the outcome of the appeal might have been different. In response, the Government is currently consulting on whether to make precisely that change. BTO’s blog opposes this, fitting the Niebel case into the narrative of a wayward, overreaching Commissioner:

The likelihood of damage must be based on more than conjecture and distress has to be more than mere irritation. If evidential thresholds are getting in the way of monetary penalties the answer is to provide the requisite evidence, not to call for the lowering of the threshold and potentially criminalising conduct that is undeserving of such categorisation.

The ICO’s use of conjecture is flawed, and it’s what lost them the Borders case. But the above statement takes a seemingly ideological position that PECR breaches must go unpunished unless substantial damage can be established, without explaining why the law should not be used to protect the public from intrusion and irritation. It’s not clear why Irvine and Motion are keen to keep a regime that lets spam go unpunished, and I’m convinced that leaving the threshold as it is will have that effect. Wikeley did not argue that the ICO should have done a better job, but that the evidence wasn’t there to hit the target. By implication, with the test as it is, it won’t ever be. More importantly, neither the ICO nor the DCMS (the department responsible for PECR) has suggested ‘criminalising’ any conduct. To claim otherwise is a red herring.

The sending of text messages, emails or automated calls without clear consent is already unlawful; the only debate is what the penalty should be for doing so. In wanting to keep the current threshold, Irvine and Motion seem more keen to protect the rights of spammers than those of the public. There’s a difference between criticising a poor case (Borders) and defending a target that no-one can hit. Damage and distress is not a concept that comes from the Directive – as Wikeley says, setting the bar there was a UK decision. The Directive demands ‘an effective, proportionate and dissuasive penalty‘ and Niebel shows that we don’t have one. Leaving the substantial damage threshold in place is not (as Irvine and Motion put it) “a realistic approach to assessment of the human consequences of data breaches and PECR breaches“; to do so ignores those consequences and, by default, protects the illegal spam business model.

Like Irvine and Motion, I think the ICO’s approach is flawed and inconsistent. However, I support civil monetary penalties for breaches of both Data Protection and PECR, and I think they should be maintained and improved. Evidence of the ineffectiveness of the criminal regime abounds. A few weeks ago, the Information Commissioner announced that they had successfully prosecuted Stephen Siddell, manager of an Enterprise car rental outlet in Southport, who had been selling data about the branch’s customers to a claims management company. Given that the private sector is sometimes less forthcoming about its security problems than the public sector, Enterprise should be praised for calling the ICO rather than sacking their errant manager and keeping a lid on the problem. Mr Siddell was fined £500 (plus £300 in costs and victim surcharges). The claims management firm remains under investigation and so for the moment is not being named. Meanwhile, the Mail on Sunday reports today that Jayesh Shah, a man who boasted to an undercover reporter that he sent 500,000 spam text messages a day, has been fined £4000 for non-notification (plus around £3000 in costs and surcharges) by magistrates in North London.

Mr Siddell’s future employment prospects are probably bleak, but with such small penalties, someone else will take his place. Police officers are treated fairly mercilessly when caught for data theft, but there is still a queue of cops willing to raid the PNC. Meanwhile, though the comments about his weight and dress sense in the Mail’s comment section will have been unwelcome, Mr Shah can treat the £7000 outcome as an acceptable business expense. The criminal portion of the DPA provides scant punishment for data thieves (small fines and no criminal record, as the offences are not recordable). It is possible for the ICO to issue enforcement notices against spammers and those who breach DP, but the only punishment for breaching an enforcement notice is the same paltry fines. A company prosecuted for breaching an enforcement notice can be closed down and replaced by a clean twin in next to no time.

I enjoy kicking the ICO as much as the next person, and their mishandling of CMP enforcement in recent years is a matter of concern. However, across the UK, Data Protection and privacy are still more honoured in the breach than the observance. There is big money to be made out of exploiting data, and as with health and safety, too many are willing to cut corners, regardless of the harm and distress that might be caused. Indeed, I think CMPs should be broken out of the security stranglehold and applied to damaging inaccuracy and unfairness as well. Rather than keeping the PECR threshold at an unattainable level, I think we should replace it with a straightforward tariff, with a flat-rate penalty for every unlawful contact (say £1 per email, £5 per text and £10 per phone call). Post-Niebel, private sector organisations that comply with the law will be priced out of the market by those who don’t unless there is a change. Without effective penalties, public sector organisations without a functioning privacy culture will continue to make decisions that put data – and in some cases, the public – at risk.
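To make the arithmetic concrete, here is the proposed tariff as a few lines of Python. The rates are my suggestion and nothing more; the 500,000 texts a day is the figure Mr Shah reportedly boasted of:

# Illustrative only: a flat-rate tariff for unlawful direct marketing.
RATE = {"email": 1, "text": 5, "call": 10}  # pounds per unlawful contact

def tariff(emails: int = 0, texts: int = 0, calls: int = 0) -> int:
    return emails * RATE["email"] + texts * RATE["text"] + calls * RATE["call"]

print(f"One day of Shah-scale texting: £{tariff(texts=500_000):,}")
# One day of Shah-scale texting: £2,500,000 (compare the £7000 he actually paid)

A penalty that scales with the volume of spam would be dissuasive in exactly the way the Directive demands, and would require no evidence of damage or distress at all.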

In their understandable enthusiasm to knock the ICO, I fear Irvine and Motion have lost sight of the purpose of the legislation. It is there to protect the public and to facilitate lawful, legitimate business activities. Personal data should be respected and handled with care. People have a right to a private and a home life without being pestered by spivs. The law and its implementation should penalise and deter misuse, intrusion and abuse. Some organisations will comply without sanction, but we need a strong, effective regime for those who won’t.