Lincolnshire poachers

Dark times in the Fens, as Lincolnshire County Council finds itself in the grip of diabolical cyber-blackmailers who demand £1,000,000 to release the local authority from a terrifying new strain of virus that has locked up all their files. As ever, it’s unwise to judge the outcome before all of the details are in, but Lincolnshire’s story has some interesting aspects. One element seems to go in Lincolnshire’s favour: this is “zero-day malware”, the first time that the particular infection has been detected. This obviously would make it harder to defend against, and in any case the Council is “confident it had appropriate security measures in place”.

The Council’s chief information officer Judith Hetherington-Smith reassured residents with the claim that they were “absolutely looking after their data and it hasn’t been compromised”. This implies that no personal data has been compromised, but this can’t be entirely squared with some of Hetherington-Smith’s other comments. For example: “Once we identified it we shut the network down, but some damage is always done before you get to that point – and some files have been locked by the software”. Right, so there’s some damage, then? “A lot of the files will be available for us to restore from the back-up.” A lot of them, but not all of them? What about the ones that aren’t available?

That back-up is interesting, in the light of the fact that “People can only use pens and paper, we’ve gone back a few years.” An inherent part of information security is business continuity, ensuring that even if something falls over, the place can keep running. I’m running a course this week for people responsible for risk-managing big information systems, and the client has specifically asked me to emphasise the need for business continuity to be built in. The whole point of this is not to be knocked back to the pen and paper age – I heard a report on Radio 4 that Lincolnshire’s social workers had not had access to systems for several days, which means those charged with protecting the most vulnerable in Lincolnshire don’t have access to information they need to do their job. If this information isn’t “compromised”, then I don’t know how else you would define it. It’s a catastrophe. Rather than attempting to reassure (I’m amazed that no-one has said that they take Data Protection very seriously), the council needs to explain why they are offline for days without a back-up that allows essential systems to keep running.

But the most interesting part of the story, and the element that is most crucial for deciding whether Lincolnshire has breached the Data Protection Act, is how the infection got into their systems in the first place. Forget the eye-catching ransom demand, the terrifying challenge of the previously unseen virus, forget even the question of why the Council has no alternative option when attacked than blindness and pens & paper. How did it happen, you cry? How did these cunning cyber-ninjas drip their deadly poison despite all of Lincolnshire’s “appropriate security measures”?

Somebody opened an email. 

I don’t know how good Lincolnshire’s technical security is: however sceptical I might be, there may be good reasons why they could not mirror their systems or back them up in such a way that they could be restored more quickly. Nevertheless, everything that the Council has said or done since the incident, even if their claim that no data has been compromised is true (I don’t believe them, but OK), is irrelevant. The fundamental question is why their staff are capable of falling victim to the dumbest, most basic security attack known to humankind. I just hope they don’t get any emails about the frozen bank accounts of the late Dr Hastings Kamuzu Banda. The Lincolnshire incident was entirely, wholly preventable, and they have to explain both to the Information Commissioner and to the fine folk of Lincolnshire why they allowed this to happen.

I have said it a thousand times, and here I am saying it again. An incident is not a breach. In order to have complied, Lincolnshire’s “appropriate security measures” have to include regular training and reminders, specifically warning about threats like malware in emails. Managers have to regularly check how their staff are working and whether they are following the clear, widely disseminated procedures and policies that would be necessary in order to comply. Audits would have to be in place, and the individual systems that Lincolnshire has had to switch off should have been assigned to named asset owners, who are responsible for actively assessing risks exactly like this one, and putting measures in place to keep them running even in the face of attacks.

If the person who opened the email has not been trained, reminded and appropriately supervised, this whole incident is Lincolnshire County Council’s fault and they should be taken to task for it. It doesn’t matter how sophisticated the software was, how unexpected Lincolnshire might be as a target: THEY LET THE BURGLARS IN. All the warm words about what happened after that, even if they’re all true, make no difference to this basic fact. You may say that an organisation can’t prevent human error, but that’s nonsense. Training, reminders, appropriate supervision and picking the right people in the first place massively reduce human error. Everything that happens afterwards is damage limitation: either Lincolnshire did what was required beforehand, or it’s a breach.


A handy guide for data protection regulators.

1) You are being asked about an eye-catching incident that is making the headlines, but which you have not investigated in any way. Is this:

A) An incident
B) A breach

2) You have investigated an incident, and identified a specific principle that has not been properly complied with by the Data Controller. Is this:

A) A breach
B) An incident

If you answered

Mainly As: You’re correct
Mainly Bs: You work at the Information Commissioner’s Office



The British Pregnancy Advisory Service has just received a Civil Monetary Penalty of £200,000 for breaching the seventh principle of the Data Protection Act. A hacker, intent on vandalising the BPAS website, discovered a vulnerability in its coding. The details of thousands of women who had requested a call back about BPAS’ various abortion and contraception services were stored on the site, and the hacker was able to steal them.

The hacker, James Jeffery, threatened to reveal the names of the individuals, and has subsequently been convicted for offences under the Computer Misuse Act. There is no question that Jeffery’s threats to invade the privacy of innocent women were disgraceful, and he has rightly been punished. BPAS has announced that it intends to challenge the ICO’s CMP, and I don’t argue with that. The Information Commissioner’s recent interview with the Independent suggests that he doesn’t properly understand how his powers work, and the loss of the Scottish Borders CMP appeal (a CMP I don’t believe should ever have been issued) suggests he is not alone. The ICO’s use of its CMP powers is disproportionately focused on security and the public sector. The absence of an enforcement strategy for inaccuracy, which can be at least as harmful as poor security, is a disgrace.

However, whatever you think of the narrow issues of the size and nature of the BPAS CMP, the organisation’s approach to the case is a matter of real concern. I’ve written in the past about the annoying habit of data controllers of claiming, in the face of some obvious and avoidable cock-up, that they take data protection very seriously when all of the evidence suggests that they don’t. Inevitably, BPAS joined in: “bpas takes any data breach immensely seriously and we were appalled that any information we hold had been compromised”.

Jeffery’s criminal actions are not a shield for BPAS’ failings. I agree with the ICO’s characterisation of them as ‘unforgivable’. As the ICO CMP notice explains – and BPAS does not dispute – BPAS did not even know that a copy of all requests for a callback was retained on their website, making a series of assumptions about the way their website worked without actually finding out. In retaining callback requests for many years, BPAS breached the fifth data protection principle by keeping information for longer than they needed it. By storing sensitive (in the dictionary sense of the word) personal data insecurely, they breached the seventh principle, which requires organisations to take appropriate technical steps to prevent both ‘unauthorised’ and ‘unlawful’ processing. This means that data controllers have to try to prevent criminal breaches as well as accidents and cock-ups – the greater the risk of a criminal attack, the stronger the security needs to be.

Every organisation is potentially at risk from a hacker and so needs to take basic steps. BPAS routinely handle medical information, and describe themselves as the UK’s leading abortion provider. The likelihood of BPAS being hacked is much greater than it would be for other organisations, and the consequences for their clients of data being hacked are more damaging. The security that is ‘appropriate’ for BPAS is therefore much greater than the norm, and yet their approach had all the competence and planning of a parish council. They deserve to be criticised and perhaps punished, as they have betrayed the trust of every woman who has contacted them. Whatever your view of abortion rights, women should be able to contact an abortion provider in complete confidence. For several years, BPAS has failed to deliver on this. Jeffery was only able to access the data because BPAS left it there.

In the light of this, BPAS’ public approach to the CMP causes me great concern. Most of the statement on their website is about Jeffery’s actions, trying to create the impression that the fault is largely with him. A quote from the Chief Executive, Ann Furedi, makes this explicit. She says: “bpas was a victim of a serious crime by someone opposed to what we do”. BPAS is not the victim here; the victims of Jeffery’s actions were the people who contacted the organisation. BPAS is at pains to play down the significance of the information that was stolen: “These were not personal medical records of women who had undergone treatment at bpas and such records were never at risk”. Given that the BPAS website makes it clear that their main activity is abortion, were the records to be revealed (something made possible because of BPAS’ poor security), they would have been data about women who were likely to be seeking an abortion. No amount of sophistry can reduce the sensitivity of this information. As the ICO points out: “Some of the call back details were from individuals whose ethnicity and social background could have led to physical harm or even death if the information had been disclosed by the attacker”. It isn’t good enough for BPAS to claim that the risk to these women was entirely down to Jeffery; they put their clients in this position, especially given that hacking and criminal attack are regrettably but obviously part of the landscape in which they work. A statement made in 2012 at the time of the incident was even worse, as it claimed “the confidentiality of women receiving treatment was never in danger”, neglecting to say that the confidentiality of many women who contacted them, possibly seeking treatment, was unprotected.

Behind the scenes, BPAS may well be putting their house in order diligently and enthusiastically. Their public statements paint the organisation as a victim, but they are also guilty of significant failings, and it may be that they realise that and simply don’t want to admit it publicly. That doesn’t give me confidence that they’re going to improve their security; a more transparent admission of what went wrong would be better. The worst thing about their attempt to manage the bad news and spin their way out of the headlines, however, has nothing to do with security or their position or the ICO fine. In none of BPAS’ public statements, or the interviews I have heard Furedi give, is there an apology to the women. They see the ICO’s actions as “appalling” and are horrified by what has happened to them, but for the women, there isn’t even regret.

Everyone thinks Data Protection is about computers and policies and dry, tedious sections of the law. It’s not. Data Protection is about people. It is about protecting their data, communicating with them, and it’s about the actions of people who handle data. It’s a uniquely human topic. The important issue here is not BPAS’ reputation. It is the protection of the identities of the people who BPAS exist to serve. BPAS let them down and should apologise to them now.

Mother! Eat the Cookie! Eat It!

My favourite part of the Information Commissioner’s website is the blog, where a succession of ICO notables talk about how marvellous their particular corner of the business is. The enterprise appears to be modelled on the Opinion section of The Onion, and I look forward to each new instalment with childlike enthusiasm. I’m really hoping they let the Internal Compliance people do one about people who make subject access requests in green ink. They have my permission to publish the mugshot from my driving licence.

In the meantime, the one entitled ‘Education key to cookie law success’ by Dave Evans is certainly worth a read. Evans opens his post with the startling claim that “One area where I’ve seen most progress is cookie guidance”, a statement that makes sense only if he’s talking about the document produced by the International Chamber of Commerce, but the rest of the blog is definitely about the apparently marvellous work the ICO has been doing on cookies. I’ve been running – with a growing sense of futility – online courses on the cookie law for more than a year, and in the context of the ICO, “success” and “cookies” are words that repel each other like the matching poles of a magnet. Cookies affect the private sector at least as much as the public sector, and often, much more so. This perhaps explains why the ICO has found it so challenging. Consider some of the landmarks:

  • The ICO published guidance called ‘Changes to the rules on using cookies and similar technologies for storing information’ on 9th May 2011 that stated: “The new legislation comes into force on 26 May 2011. You need to take steps now to prepare and ensure you are ready to comply.” The Commissioner himself ‘urged’ website owners to get to work in an associated press release.
  • Two weeks later, the day before the regulations came into force, the ICO suddenly decided not to enforce this same law for a year.
  • Even though the Commissioner’s slightly patronising school-themed ‘Half-Term Report’ of December 2011 included the comment that “if you are struggling with this part of the rule you are seriously lagging behind”, six months later, Dave Evans was reported by The Register to have said “We don’t expect all organisations not compliant on the 27th to have some evidence of taking action to be compliant.”.
  • On 13th December 2011, the ICO stated that consent – the vital disputed issue at the centre of all the cookie confusion – “must involve some form of communication where an individual knowingly indicates their acceptance”. They deliberately picked this quote out on their website. Two days before the ICO ended its self-imposed cookie enforcement abstinence in May 2012, they issued guidance that stated, “while explicit consent might allow for regulatory certainty and might be the most appropriate way to comply in some circumstances this does not mean that implied consent cannot be compliant”.

In other words, anything to avoid going after the private sector. This unwillingness to take action was underlined by an interview Evans gave to a website in April, in which he said that the ICO might not enforce against someone breaching the cookie law purely because the website might lose money: “if a company’s revenue would drop if it went for a strict opt-in, then we could look at different ways of educating users and gaining consent”. Every cookie case has already been pre-judged as not meeting the threshold for a civil monetary penalty.

Even though the ICO’s current position seems to be ‘whatever it is you’re doing about cookies is fine’, some in the web industry are so frustrated that they have taken to goading the Commissioner to take action against them. The ICO’s response to this criticism probably reveals what lies behind the problem. A spokesman said: “It’s worth noting that this website criticises those regulations, but the ICO is responsible only for regulating those who must comply with the law, and not for how it was drafted”.

The ICO’s response raises the question of why the change happened in the first place. The argument about whether consent needs to be active or can be inferred from some specific action is a bit sterile – the intention of the change was clearly to shift the onus from users opting-out to websites getting evidence of users’ preferences. In the old version of the Regulations, users of the internet were to be given “the opportunity to refuse the storage of or access to” a cookie; in the new version, users must have “given his or her consent”. Few of the EU’s citizens spend fretful nights over the lurking menace of cookies on their computers, even those who are concerned over their privacy. Subtly dropped onto your machine by unseen electronic tentacles, the cookie is more insidious than the noisy spam text, but it’s equally easy to get rid of. Most web browsers include an option to reject them outright or purge them at the click of a mouse. So why make the change?
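The practical effect of the change in wording is easy to sketch. Below is a minimal Python sketch – the cookie names and the `regime` switch are entirely hypothetical illustrations, not anything the Regulations or the ICO prescribe – of how a site’s logic differs when the legal test moves from “the opportunity to refuse” to “has given his or her consent”:

```python
def may_set_tracking_cookie(request_cookies, regime="consent"):
    """Decide whether a (hypothetical) tracking cookie may be set.

    request_cookies: dict of cookie names to values sent by the browser.
    regime: "opt_out" models the old rule (track unless the user refused);
            "consent" models the amended rule (track only with a positive signal).
    """
    if regime == "opt_out":
        # Old rule: silence permits tracking; only an active refusal stops it.
        return request_cookies.get("tracking_refused") != "true"
    elif regime == "consent":
        # New rule: silence blocks tracking; only an active agreement allows it.
        return request_cookies.get("tracking_consent") == "true"
    raise ValueError(f"unknown regime: {regime}")


if __name__ == "__main__":
    silent_user = {}  # a visitor who has never clicked anything
    print(may_set_tracking_cookie(silent_user, "opt_out"))   # True
    print(may_set_tracking_cookie(silent_user, "consent"))   # False
```

The same silent visitor is tracked under the first regime and not under the second – which is precisely the shift of onus from users opting out to websites gathering evidence of preference.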

My answer to this question is simple, and it goes some way to explaining the ICO’s clod-hopping reluctance to engage with the cookie changes. The cookie changes are their fault. Though the story is a familiar one to many, I’m surprised that it hasn’t been revisited more often in recent months. Some years ago, a company called Phorm started to hit the headlines. The Phorm product (WebWise) worked like this: ISPs provide data to Phorm about the browsing habits of their customers using a cookie. Websites access the cookie and, knowing what sites have been browsed, can display not just random adverts, but ones tailored to the interests indicated by the recent browsing. Everyone makes money (except the user whose web browsing has been monetised).

Less ambitious / troubling versions of this idea are alive and well on the internet right now, but the idea of the ISP tracking your every move and selling the results to others didn’t go down very well with Joe Punter. The alleged KGB past of the company’s saturnine CEO Kent Ertugrul probably didn’t help public perception much, but what really lit a fire under Phorm was the revelation that the system had been tested by BT and none of the customers involved knew about it. I should probably put Phorm and BT’s case: that what they did wasn’t a breach of anything, that no personal data was gathered etc. etc. But their interpretation doesn’t convince me and, more importantly, there was no reason to do the trial in secret. BT deserves opprobrium on that point alone. As the fury over the secret trial and the implications of the product itself increased, customers on all sides melted away, and Phorm pulled out of Europe altogether.

The ICO took no action against either Phorm or BT for the secret trial, and a perfect way to understand their approach is to track down a document entitled “Phorm: The ICO View”, published in April 2008, but no longer on their website (thanks to WhatDoTheyKnow for reminding me of it, and to @blepharon for this link). “Whether or not the deployment of the Phorm products raise matters of concern to the Commissioner will depend on the extent to which the assurances Phorm has provided so far are true. The Commissioner has no reason to doubt the information provided by Phorm but some technical experts have publicly expressed concerns.” The instinct when dealing with big organisations, ‘stakeholders’ or the private sector is to believe what you’re told, and to accommodate and ameliorate rather than act. It’s hard to imagine a council or NHS trust being given the same generous benefit of the doubt.

Look at Google. When dealing with the allegation that Google had secretly slurped Wi-Fi data from thousands of UK citizens, former Assistant Commissioner Phil Jones and Dave Evans (remember him?) met with Google, resulting in a decision to delete all the inconvenient and potentially incriminating data, with no further questions. Google was a valued stakeholder needing only a friendly meeting, rather than a data controller that might have breached the law. Evans’ blog states: “In my experience of working as the ICO’s industry strategic liaison manager, the vast majority of businesses want to operate within the law”. But Evans’ experience ought to show that the Streetview data turned out to be more personal than previously advertised, resulting in the ICO having to ask Google to sign an undertaking. Their press release at the time said that Google had been ‘instructed’ to sign, but the whole point of an undertaking is that it is voluntary. Only now that this undertaking has apparently been breached has Google Streetview finally been passed to the Head of Enforcement. Altogether, it’s not quite a ringing endorsement of strategic liaising.

The softly-softly approach was the hallmark of the Phorm affair: believe what you’re told, take no action against the big player. To take action on the secret trial would have been to take on BT, a challenge for which the ICO showed no appetite. As a consequence, I suspect that as well as provoking infraction proceedings against the UK, the ICO’s decision that Phorm’s use of cookies did not breach privacy, data protection or surveillance law in the UK made a change to EU cookie law seem much more necessary. Monitoring and exploitation of web-browsing data is precisely the kind of thing that makes a shift in the balance necessary – had the ICO attempted to argue that the legal status quo did have something to say about Phorm, I doubt we’d be where we are now.

To misquote The Dark Knight, I believe in Chris Graham, the current commissioner. He clearly has more guts than his predecessor, he sorted out the shameful FOI backlog, he has taken more enforcement action than any of the three previous Wilmslow incumbents put together, and his public persona is polite but increasingly pugnacious, precisely the kind of attitude to persuade recalcitrant organisations to take Data Protection seriously. But the cookie debacle is evidence of the Old ICO alive and well: vague, deferential, ineffectual, and embarrassing. In other words, nobody’s definition of success.

NB: The tradition in writing about cookies is to use one of a limited number of obvious cookies puns or references in the title. I have chosen the most obscure I can think of, and if you recognise it, you should be as ashamed of yourself as I am.

Where did I put that thing?

There was an unusually frank admission in the Register’s recent report on their home-grown email SNAFU. Roughly 3,000 people received an email containing the names and email addresses of roughly 45,000 people because, in the Register’s words, “The two-stage send process that is the norm for all of our mailers was over-looked because someone was in a hurry”.
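We don’t know what The Register’s two-stage process actually looked like, but the underlying idea – the dangerous action refuses to run until a separate confirmation step has happened – can be sketched in a few lines of Python. All names here are hypothetical; this is not The Register’s code:

```python
class TwoStageMailer:
    """A sketch of a 'two-stage send': stage one forces a review of a
    preview; stage two refuses outright if stage one was skipped."""

    def __init__(self, recipients, body):
        self.recipients = list(recipients)
        self.body = body
        self.confirmed = False

    def preview(self):
        # Stage one: show what is about to happen. In a real system the
        # flag would be set only after an explicit human sign-off, not
        # automatically as it is here.
        self.confirmed = True
        return f"{len(self.recipients)} recipients; body: {self.body[:40]!r}"

    def send(self):
        # Stage two: the one-click mistake is taken away from the person
        # in a hurry - skipping the preview makes the task fail loudly
        # rather than quietly succeed.
        if not self.confirmed:
            raise RuntimeError("send blocked: preview/confirm step was skipped")
        return f"sent to {len(self.recipients)} recipients"
```

The shape is the point: the control is enforced by the system, so being in a hurry isn’t enough to bypass it.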
Shortly after I took delivery of my iPhone, I checked my emails at the end of a training course. A delegate, hanging around to ask me the obligatory ‘how do I sort out my credit’ question, noticed it and immediately told me what to do when I dropped it down the toilet. Leave it submerged in a jar of sugar for a week, apparently – not salt, which supposedly does not absorb the moisture, but replaces it with crystals instead, killing the little guy once and for all.
I have been given variations on this tip four times since – and always with the certainty that this is a ‘when’ not ‘if’ scenario. One of my punters recently explained that the iPhone in the toilet is an occupational hazard for some women, as they secure their phones in their bra-strap when out on the razz, and when they lean forward to flush, it slips out into the bowl. I once saw a man in the Gents at my local cinema conducting an animated conversation, one hand on his phone, the other dealing with urinal business. The admirable comedian Michael Legge (@michaellegge) has a similar anecdote, except his story ends with the person on the phone complaining when Legge uses the hand drier (“I’m on the phone,” the caller remonstrates).
I have many things to say about information security, but what I am getting at here is the foundation of all data loss prevention, security and risk management: people can be idiots. Give them enough space, and there is no end to the daft things that they will do. It doesn’t matter how clever they are: witness former Assistant Commissioner Bob Quick unwittingly parading anti-terror documents in front of the photographers at Downing Street, Oliver Letwin’s recent escapades in St James’ Park, or Chris Huhne telling a colleague that he doesn’t want his fingerprints on a leak, but doing it via a public tweet instead of a direct message. All clever and talented men, all channelling Frank Spencer (ask your Dad). It doesn’t matter how obvious the risk is – a friend rang me a while back to ask me what to do about the fact that his employer had just discovered that one of their unencrypted laptops had been left on a bus. I asked him why they hadn’t encrypted their laptops, and apparently a senior officer had insisted that they could rely on the professionalism of their staff. Which is rather like the captain of the Titanic saying: ‘No, on second thoughts, let’s just ram it. What’s the worst that could happen?’
I’m helping an organisation to update their information security policy at the moment. And the problem is that for all the practical and intelligent elements that we’re going to include, I just keep thinking “and what are the idiots going to do?”. Most people, most of the time, don’t really need to be told what to do with data – they’ll guard it with their lives, keep it accurate and up-to-date and make it accessible only to those who need it, because they’ve got common sense. But even the best member of staff goes idiot when they’re in a hurry, or they’re under pressure, or they don’t understand the technology. And then there are a very small number who are idiots all day, every day. Look around the table, and if you don’t see the idiot, it’s you. And it’s also the man walking behind you using a closed laptop as a tray for four cups of coffee.

In these days of £120,000 fines, idiot identification and prevention is vital. If you can take steps like encryption, access control and the ritual burning of fax machines, the idiot’s room for manoeuvre is restricted and the risk is greatly reduced. If you can show that you had the policies, did the training, and have the idiot’s signature proving they knew what not to do and did it anyway, then even when things go wrong, the organisation can at least demonstrate that it acted correctly and can probably dodge a bullet. If you’re smart, the vast majority who are sensible and reliable may not even feel the constraints, even as they work within them.
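Restricting the idiot’s room for manoeuvre often amounts to nothing more exotic than this kind of check. A sketch in Python, with invented role and record names – not any real council’s system – refusing a record to anyone outside an authorised set and logging the refusal so the audit trail exists when it’s needed:

```python
# Hypothetical role names for illustration only.
AUTHORISED_ROLES = {"social_worker", "team_manager"}

def fetch_case_record(user_role, case_id, audit_log):
    """Hand over a case record only to an authorised role.

    Every decision, grant or refusal, is appended to audit_log so that
    there is later evidence of who asked for what.
    """
    if user_role not in AUTHORISED_ROLES:
        audit_log.append(f"REFUSED: role {user_role!r} asked for case {case_id}")
        return None
    audit_log.append(f"GRANTED: role {user_role!r} read case {case_id}")
    return f"record for case {case_id}"


if __name__ == "__main__":
    log = []
    print(fetch_case_record("press_officer", 101, log))  # None - not authorised
    print(fetch_case_record("social_worker", 101, log))  # the record itself
```

The unauthorised request fails and leaves a trace; the sensible majority never notice the check is there.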
Dealing with security breaches is increasingly going to be an exercise in idiot-hunting. The Register’s apparent breach is not that serious (they haven’t lost sensitive or financial data, for one thing), but the ‘someone’ cited in their announcement is pivotal. If The Register can prove that they had proper policies and had clearly communicated them, they haven’t breached the Data Protection Act (even if the ICO gets them to sign a completely unnecessary undertaking, which it probably will). However, if ‘someone’ can argue that they hadn’t been informed of the right practices, there’s a possibility for legitimate ICO action. As a trainer, you’d expect me to tell you that you need to train people. You need to take away options. You need to communicate your messages clearly and to everyone.
But fundamentally, every organisation will continue to employ a few people with a deep capacity for foolishness, and you need to put your mind to work to spot what they’ll come up with next. It’ll probably involve social media, but that’s another blog post.