The Curse of the Padlock

One of the dangers of working in Data Protection is the risk of becoming a pedant. Precision matters; court cases have turned on the meaning of individual words like ‘likely’ and ‘distress’. The legislation is a maze of definitions and concepts that the competent practitioner needs to get to grips with. Because lazy thinking is often revealed by an inability to get the details right, it’s possible to become obsessed with the detail. Even the BCS Data Protection exam has a question which requires you to list the elements of the definition of consent in the right order. It’s easy to lapse into pedantry, to point out every wrongly quoted article, every jumbled phrase.

Nevertheless, getting a simple thing right is often important. GDPR does not cover ‘personally identifiable information’; it covers ‘personal data’, and the two definitions are not the same. A person who talks about PII in the context of European Data Protection is starting in the wrong place (the US), and can make mistakes as a result. Another error that seems to be creeping in all over the place is more profound, and risks entrenching one of the biggest misconceptions about how data protection works, a misconception many of us have spent years trying to break down.

The problem is the phrase ‘data privacy’.

I see it everywhere – on LinkedIn naturally, in news coverage of the sector, and predictably, the ICO has fallen for it. They describe themselves as “The UK’s independent authority set up to uphold information rights in the public interest, promoting openness by public bodies and data privacy for individuals.” Look at the Data Privacy Advisory Service, who summarise their services as “At DPAS we help organisations safeguard the fundamental human right to have data kept private by putting in place the best possible protection to keep it secure. This is delivered in line with the General Data Protection Regulation (GDPR) and The Data Protection Act 2018.”

The idea is nonsense. It doesn’t exist. There is no right to data privacy – there is certainly no fundamental right ‘to have data kept private’. This isn’t a snide dig at someone quoting the wrong article. The concept of ‘data privacy’ is a complete misunderstanding of what Data Protection is for, and everyone who promotes it is actively thwarting the efforts of the rest of us to implement data protection in a practical way.

Article 8 of the European Convention on Human Rights says: “Everyone has the right to respect for his private and family life, his home and his correspondence”. This right is not absolute; it can be interfered with (only when necessary) in the interests of “national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others”. The right is not just about data – it certainly can be, as the cases where celebrities and others use it to prevent the publication of intrusive images demonstrate. But the right to privacy doesn’t have to be about data at all – you can breach a person’s right to privacy simply by observing them, by being in a place where they expect privacy, or by denying them the opportunity to do something privately. Data doesn’t have to come into it.

Clearly, if you did a Venn diagram, there would be circumstances where privacy and data protection overlap. By following the Data Protection principles when you handle a person’s private correspondence for example, you probably also do what’s necessary to protect their privacy. The same is true for confidentiality – not all confidential data is personal data, but a decent stab at the principles will probably respect both. There is, however, a significant portion of the Venn diagram where Data Protection and Privacy do not meet, and the DP part of that is important.

The notion of ‘Data Privacy’ obscures two vital elements of Data Protection. First, data protection is not only about private data. It covers all personal data – private, secret and public. For years, I have been banging my head against the brick wall of ‘it’s not personal data, it’s in the public domain’. Trying to explain to people that photographs, email addresses and other publicly available data are still personal data – just more readily available and easier to use than some other data – has long been a struggle. There was a chink of light in Article 14 of the GDPR, which clearly states that a person should be informed even when their data is accessed from ‘publicly accessible sources’. This explicit recognition that public data is still personal data is very helpful, but the notion that ‘data protection’ and ‘data privacy’ are interchangeable muddies the waters again.

Second, in related news, GDPR is not about keeping data private; it is about ensuring that personal data processing is properly regulated. For years, Data Protection has been plagued by the padlock. The Information Commissioner used it as a logo (‘but the padlock is unlocked’ is a defence that umpteen different ICO folk have used when I complained about it), and when I did a Google image search for ‘Data Protection’ today, this is the top set of results:

[Screenshot, 26 May 2019: Google image search results for ‘Data Protection’ – row after row of padlocks]

The problem with the Data Protection Padlock is that it presents the legislation as something that locks data up, keeps it away from people. This understanding of data protection leads directly to the belief that disclosure of personal data is inherently problematic and exceptional, and that belief is toxic. I’m not persuaded that Victoria Climbie or Peter Connelly died solely because data about them wasn’t shared, but the pervasive fear of data sharing didn’t help. The GDPR says that “the protection of natural persons in relation to the processing of personal data is a fundamental right”. The word “privacy” isn’t mentioned anywhere beyond a reference in a footnote to the ePrivacy Directive, and the processing of personal data is firmly put in the context of operating the EU’s internal market: “This regulation is intended to contribute to the accomplishment of an area of freedom, security and justice, and of an economic union”.

You can’t achieve the economic union by locking all the data away, by keeping it private. To characterise data protection law as being about ‘data privacy’ is to misrepresent its purpose completely. European Data Protection is a compromise – trade is underpinned by the use, even the exploitation, of personal data, but people have rights, they have control over their data in some (but not all) circumstances, and the legislation is built on foundations of transparency and fairness, not privacy. Arguably, the GDPR tries to even up the power imbalance in some circumstances, but it is not designed to lock up data and keep it private.

Of course, some people might be using ‘privacy’ as a synonym for ‘secure’ – the DPAS statement above seems to elide the two. Only a fool would want to play down the importance of security in the context of using any personal data, but the reduction of Data Protection solely to security is as destructive to a proper understanding of it as the privacy / protection mess. We’ve managed to drag Data Protection out of the IT department, and we need to stamp on this idea that security is the exemplar of good DP practice. Your data can be private and secure, but kept for no good reason, for too long, in an inaccurate state, and there could be too much of it.

Some personal data is private and should remain so. In many situations, the processing of personal data without an eye on people’s legitimate expectations of privacy, especially when monitoring, watching or listening to them, is likely to be unfair and so unlawful. There is a strong link between Data Protection and Privacy, and any attempt to divorce them would be stupid. But the use of ‘data privacy’ as a synonym for data protection is misleading and dangerous – it perpetuates a fundamental misreading of what the legislation is for, and makes the lives of everyone trying to make GDPR work effectively a thousand times harder. It’s time to take this nonsense, lock it up and throw away the key.

Brand new key

Parents at schools in Suffolk recently received an interesting piece of correspondence about an exciting initiative called ‘Suffolk SAFEKey’, offered by Suffolk Police. For as little as £1 a month, subscribers to the service receive a special key fob with a reference number on it. If registered keys are then lost, whoever finds them can use the reference number to contact Suffolk Police’s commercial partner (Keycare Limited) and have keys and owner reunited, incentivised by a £10 reward.

Alerted to this by a concerned citizen, I made an FOI request to Suffolk Police to find out more about the scheme, the arrangement with Keycare Limited, and how the email came to be sent. Suffolk Police told me that they contacted all 18 secondary schools in the county (by phone, so I don’t know how the request was couched), and of those, 8 forwarded the invitation to join SAFEKey to all parents. The force were unhelpfully vague about who else had been approached. I asked who they had contacted, and their answer conflated those they approached and those they claim had approached them. This means I know that those involved are charities (Suffolk Community Foundation / Age UK), “advocacy groups” (whatever that means), Neighbourhood Watch, the University of Suffolk and “lunch clubs and other such groups”, but I don’t know who contacted who.

On one issue, Suffolk Police were admirably clear. I asked them how they had obtained consent to send the email. This was their reply:

The parentmail service is not controlled by the Constabulary and the information provided is not personal data and as such, there is no requirement for us to obtain consent from those third party recipients.

Regulation 22 of the Privacy and Electronic Communications Regulations 2003 (AKA PECR) applies to emails and texts, and it is remarkably unambiguous, despite all the dodgy marketers and list brokers who purport not to understand it.

a person shall neither transmit, nor instigate the transmission of, unsolicited communications for the purposes of direct marketing by means of electronic mail unless the recipient of the electronic mail has previously notified the sender that he consents for the time being to such communications being sent by, or at the instigation of, the sender

Suffolk Police instigated the sending of the email to parents by making an unsolicited approach to schools, asking them to send it. The email would not have been sent unless they had asked for it to be sent. Regulation 22 does not require them to be the sender. Should there be any doubt about this, the ICO asked Better Together to sign an undertaking following their misbegotten texts during the Scottish Independence campaign. Better Together used an agency – they never held the data and they didn’t send the texts. This is exactly the same situation. There are only two ways that marketing emails could lawfully be sent in this way: either parents would have to give consent directly to Suffolk Police, or give the school consent to send them marketing from the force. This second possibility is one the ICO is keen to play down, as their Direct Marketing Guidance makes clear:

Indirect consent may therefore be valid if that organisation was specifically named. But if the consent was more general (eg marketing ‘from selected third parties’) this will not demonstrate valid consent to marketing calls, texts or emails.

Of course, as the senders of the emails, the schools have also breached PECR. And taking it one stage further, you could argue that Suffolk Police have also breached the Data Protection Act by processing personal data unfairly and unlawfully. If they don’t have a data processor contract with the schools, they may even have breached the seventh principle.

Many public bodies and charities struggle with PECR because they perceive ‘marketing’ as a purely commercial activity. This means that they think the messages they send are somehow not marketing, and are surprised when PECR bites. Suffolk Police can be under no such illusion. SAFEKey is not a policing activity, it is a wholly commercial venture, with the income split 50/50 between the force and Keycare Ltd. Moreover, there is an argument that the force is exploiting its position as a law enforcement body to promote its commercial activities – it’s unlikely that secondary schools would forward information about double glazing or PPI. The force might want this to seem like an aspect of their crime prevention work, but it isn’t – it’s a purely commercial venture. No public body, but especially not the police, should exploit their position as partners with other, smaller public bodies to plug their commercial activities.

There are other concerns. The force didn’t carry out a Privacy Impact Assessment before launching the SAFEKey scheme, which is surprising, as the project involves the force gathering personal data it does not need to carry out its legal functions, purely for the purpose of a commercial venture, using a variety of unrelated bodies as a conduit for the data and transmitting it to a commercial partner. At the very least, you would expect them to consider the risks. Moreover, although the extract I received from the contract between Keycare and Suffolk Police does make it clear that Keycare cannot use or share the personal data they receive for their own purposes, the security demands made by the police are relentlessly generic.

I don’t think the police should exploit the significant position of trust they enjoy to flog commercial services at all. But even if you disagree, there can be no question that when they do, the police should at all times obey the law. They haven’t done so here, and the ICO should investigate. As I did not receive one of the emails, they would ignore any complaint that I made, but they should intervene to make clear to all public bodies how PECR works.


Less than ideal

Last week, Stephen Lee, an academic and former fundraiser, was reported to have attacked the Information Commissioner’s Office at a fundraising conference over their interpretation of direct marketing. It was, he said, “outrageous” that the Commissioner’s direct marketing guidance stated that any advertising or marketing material that promoted the aims and ideals of a not-for-profit organisation was covered by Data Protection. According to Lee, only fundraising activities should be considered to be marketing.

[NB: Third Sector articles are sometimes open to all and sometimes limited to subscribers. If the links don’t work, please accept my apologies!]

He is quoted as saying “Who says that’s right? Just the ICO. Who did it consult? No one.” and went on to say “Why and how and in what way should we be compelled to comply with that proposition?”

Who says that’s right? Who did the ICO consult? Well, let me see now.

1) The Council of Europe

In 1985, the Council of Europe issued a Recommendation on the protection of personal data used for the purposes of direct marketing. The definition of direct marketing includes both the offer of goods or services and “any other messages” to a segment of the population. The recommendation predates the guidance Mr Lee disparages by more than 30 years.

2) The 1995 Data Protection Directive

The Directive makes clear that direct marketing rules apply equally to charitable organisations and political parties as they do to commercial organisations, and emphasises the need for people to be able to opt-out of direct marketing. By redrawing the definition, Mr Lee would contradict this fundamental right.

3) The Data Protection Act 1998

Given that Mr Lee feels qualified to make bold statements about the interpretation of the Data Protection Act, it’s odd that he doesn’t seem to have taken the time to read it. Section 11 of the Act defines Direct Marketing as “the communication (by whatever means) of any advertising or marketing material which is directed at particular individuals”. The important word there is “any” – organisations do not get to pick and choose which of their promotional messages are covered and which are not.

4) The Privacy and Electronic Communications Regulations 2003

PECR sets up the rules for consent over electronic direct marketing (consent for automated calls, opt-out and TPS for live calls, consent for emails and texts). It does not define direct marketing, but instead says this “Expressions used in these Regulations that are not defined in paragraph (1) and are defined in the Data Protection Act 1998 shall have the same meaning as in that Act”. Therefore, the DPA definition applies to PECR.

5) The Information Tribunal (now the First Tier Tribunal)

In 2005, the Information Commissioner served an Enforcement Notice on the Scottish National Party after they repeatedly and unrepentantly used automated calls featuring Sean Connery to promote the party in the General Election. The SNP appealed, and in 2006, the Information Tribunal considered the issue. One of the main elements of the SNP appeal was against the ICO’s definition of direct marketing. Although the case is about a political party, the ICO’s submissions are based on the proposition that charities as well as political parties are covered by the definition of direct marketing, and that the definition cannot be restricted to fundraising alone. The Tribunal accepted the ICO’s view in full, and dismissed the appeal.

6) The charity sector and anyone else who wanted to be consulted

The ICO may have issued guidance in the 1980s or 1990s on the definition of direct marketing, but the idea that promoting aims and ideals is part of it has been their view since at least 1999. In guidance issued on the precursor to PECR, the ICO stated clearly that direct marketing applies “not just to the offer for sale of goods or services, but also the promotion of an organisation’s aims and ideals”. They specifically mentioned charities, as they have ever since. Virtually every iteration of the ICO’s guidance on PECR and direct marketing has been subject to public consultation – indeed, the very guidance Lee is talking about was subject to a public consultation.

Here’s the problem. Lee is an Honorary Fellow of the Institute of Fundraising, and has a long association with it. The IoF has been the most consistently pernicious influence on the charity sector’s compliance with data protection and privacy law in the past ten years. Their guidance and public utterances on data protection are often misleading, and they recently had to change their own Code of Practice because it was legally incorrect. At best, they haven’t noticed the ICO position on charities and direct marketing for more than 15 years. At worst, they deliberately ignored it in favour of an interpretation that largely suits fundraisers. Lee complained at the conference about the “appalling” communication between the ICO and charity umbrella bodies, but Richard Marbrow of the ICO summed the problem up all too well:

“One of the things the sector asked for was clarity, and I will try and bring you that. The trouble is, if you then say ‘we don’t like that clarity, could we have some different clarity please?’, we’re not going to get on very well.”

The most important thing about Lee’s outburst is the subtext – if any form of communication is not covered by the definition of direct marketing, then your consent is not required in the first place and you have no right to stop receiving it. His interpretation is nonsense, but it is also ethically unsound. At its most basic level, privacy means the right to be left alone, the right to have an area of your life which is yours, which others can’t intrude into. Lee seems to want to erode that right. If his view was correct (it’s not), charities could bombard people with phone calls, texts or emails to tell them how marvellous they are, how important their work is, how vital they are for society. As long as they don’t ask for money, the logic of his argument is that people wouldn’t be able to stop them.

Lee’s other question (“Why and how and in what way should we be compelled to comply with that proposition?”) has an easy answer. Ignore it. Carry on breaching the law, ignoring the rules. I went to the cinema last night and saw adverts for two different charities that plainly breached PECR, so that seems to be the plan. Given that the furore over charities began with an innocent person bombarded with unwanted correspondence, it’s remarkable that senior figures in the charity sector are ready for another go, but if Mr Lee wants to drag charities’ reputations deeper into a swamp that they share with PPI scammers and payday loan merchants, he’s welcome.

But the ICO should not listen to their concerns, or open friendly channels of communication with the sector. They should apply the law firmly and regularly until the charities get the message. If this results in more enforcement against charities than other sectors, that will be only because the big charities are among the worst offenders and they haven’t put their houses in order. If charity giving suffers as a result, even amongst the many charities that have not transgressed, they should stop blaming others and look to their fundraisers, their colleagues and themselves.

The Bad Samaritan

The Samaritans have launched a new tool for the persecution of the vulnerable… Sorry, a nannyish attempt to spy on your friends… No, I mean, they’re trying to use technology to do what real friends would be doing anyway… I’ll try this again. There’s this app they have. You’ve probably heard of it; it runs in the background, monitoring the tweets of those you follow on Twitter, and analyses them to look for indications that a person may be in need of support. The Samaritans are convinced it’s marvellous and has no Data Protection or privacy implications.
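The Samaritans haven’t published how the app’s analysis actually works, so as a purely illustrative sketch – the phrase list and matching logic below are my assumptions, not their method – here is roughly what automated keyword triage of tweets might look like in Python, and why the accuracy questions under Principle 4 below are not academic:

    # Hypothetical sketch only: the Samaritans' real algorithm is not public.
    # Flags a tweet if it contains any phrase from an assumed watchlist.

    PHRASES = [
        "help me",
        "hate myself",
        "tired of being alone",
        "want to give up",
    ]

    def flags_tweet(text: str) -> bool:
        """Return True if the tweet contains any trigger phrase (case-insensitive)."""
        lowered = text.lower()
        return any(phrase in lowered for phrase in PHRASES)

    # A plausible signal, idle grumbling and a Beach Boys lyric all trigger
    # the same alert under naive keyword matching.
    tweets = [
        "I just want to give up on all of this",
        "so tired of being alone in the office every Friday lol",
        "Help me, Rhonda, help, help me, Rhonda",
    ]

    for tweet in tweets:
        print(flags_tweet(tweet), "-", tweet)

Even this toy version makes the point: all three tweets are flagged, and the third party receiving the alert has no way of telling a genuine cry for help from a song lyric.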

The Data Protection Act 1998 applies to the processing of any personal data, anywhere by any person. Certain areas are carved out – the use of personal data for national security purposes is inevitably and depressingly exempt, as is the use of data for purely personal, domestic reasons, and to an extent, the use of data for journalism. Beyond that, although the Data Protection principles are flexible, they apply to all uses of personal data.

At no point in the text of the Data Protection Act does it say that personal data that is public or published is exempt from the Act’s provisions. There is no section that says that, and no section that can be interpreted as meaning that. Moreover, I can use the same quote from the Information Commissioner’s Code of Practice on online data that I used in my last blog about the monitoring of blogs:

“If you collect information from the internet and use it in a way that’s unfair or breaches the other data protection principles, you could still be subject to enforcement action under the DPA even though the information was obtained from a publicly available source.”

And “You should only use their information in a way they are likely to expect and to be comfortable with.”

As the Samaritans have claimed that their app is entirely legal and has no Data Protection implications, I am certain that they will have no problem answering the following questions:

Principle 1:

  • No consent is being obtained; which data protection conditions allow the Samaritans to monitor and – crucially – to analyse and interpret the state of mind of Twitter users without consent?
  • How are data subjects to be informed that their tweets are being monitored and – crucially – analysed, with the results notified to any third party who chooses to register?
  • The first principle requires the processing of data to be ‘fair’: what steps have the Samaritans taken to ensure that those registering to receive notifications via the app have no malicious intentions towards the subject and will not use the notification for malicious purposes?

Principle 2:

  • What assessment has been carried out to ensure that the processing (i.e. attempting to identify the subject’s state of mind in order to secretly notify a third party of it) is compatible with the subject’s original purpose in publication? How is that original purpose identified?

Principle 3:

  • How have the Samaritans established that their gathering of data and analysis of Twitter users’ state of mind is relevant and not excessive?

Principle 4:

  • Principle 4 states that personal data ‘shall’ be accurate for the purpose – there is no qualification to this. How have the Samaritans ensured that the analysis of a Twitter user’s state of mind is accurate when alerting a third party to it?

Principle 6:

  • What provisions have the Samaritans in place to provide the following:
  • Subject Access: data subjects are entitled to know what data is held about them, and who has received it. Will data subjects be told who has received alerts about them if they ask? If not, which exemption applies?
  • Section 10: Right to object to damaging or distressing processing: data subjects have a right to object to damaging processing – will such requests be honoured? If not, why not?
  • Section 12: Data subjects have a right to require that decisions about them are not taken solely by automated means. Will Section 12 requests be honoured and, if not, why not? How many members of Samaritans staff are available to carry out the analysis?

Principle 7:

  • What technical and organisational security measures are in place to protect the analysis of Twitter users’ state of mind (potentially sensitive personal data relating to health, as defined by the Act)?

Principle 8:

  • How have the Samaritans ensured that the sharing of personal data about Twitter users’ state of mind is restricted to the European Economic Area? If it has not been, how is the sharing of information about Twitter users’ state of mind outside the EEA justified under Principle 8?

For the record, I think the 30-day retention period for the data (principle 5) may be OK.

Crazy Naked Girls

There’s little to like about the voyeuristic coverage of the theft of images of famous women. Whether it is the feverish frottage of the mainstream press (which largely boils down to LOOK AT ‘EM ALL, IMAGINE ‘EM ALL NAKED, NNNNNNNNGGGGGG!!!!!) or the inevitably crass victim blaming (thank you, Ricky Gervais, for The Office and for absolutely nothing else), it’s all depressing.

The data protection strand in all this hasn’t been much better. Mobile devices are not a safe place to store sensitive data (true). The cloud is – in Graham Cluley’s immaculate phrase – just someone else’s computer (true). But too many security commentators have, perhaps unwittingly, aligned themselves with a ‘They asked for it’ line of thinking. A popular analogy is the one about burglary or car theft (this is an example from 2011). Apparently, you can’t complain if you leave your valuables on the front seat of your car and somebody steals them, and the same goes for pictures of your bits and the internet. In other words, the thinking is more or less that if Jennifer Lawrence is silly enough to take pictures of herself naked, she was basically asking for them to be stolen. For me, this is too close to the mentality that blames rape victims for being drunk, rather than rapists for being rapists. Friends, I blame the rapists.

Taking pictures of oneself is normal for most people, not just actresses – I am odd because I don’t do it, but if I was good looking, I probably would, all the time. It must be great to be extraordinary, and to enjoy being extraordinary. It’s too easy to be holier-than-thou and say that the violated only have themselves to blame. The victims made these images for themselves or they made them for someone else specific. They did not make the images for the media, or for the voyeurs who stole, sold or search for them. Anyone who handles or seeks them out violates the subject’s privacy, is a criminal and should be treated as such. The victims did nothing remotely scandalous or reprehensible – indeed, they did nothing that is anyone else’s business but their own. They probably didn’t do a privacy impact assessment before taking the pics, but that’s because they’re human beings and not data controllers.

The car analogy doesn’t work because mobile phones and the internet are not immediately understandable physical objects and spaces. When you leave your laptop on the passenger seat of your car, you can turn around and see the laptop sitting there. The risk is apparent and obvious. There’s a striking moment in Luc Besson’s current film ‘Lucy’ where Scarlett Johansson can see data streams soaring out of mobile phones across Paris, and navigates her way through them. We don’t see data like this. Few understand how the internet actually works (I’ve met a lot of people who think cloud storage means that data is floating in the air like a gas). We don’t see the data flowing or spot the footprint it leaves behind. We don’t know where the data ends up and the companies we use don’t tell us. We use unhelpful misnomers like ‘the cloud’ when we mean ‘server in a foreign land’. Many people don’t know how their phones work, where their data is stored, how it is copied or protected, or who can get access to it. This should be the problem that the photo hack alerts us to.

It’s possible that some people would change the way they used technology if they fully understood how it works, but that should be their choice, based on clear information provided by the manufacturers. At least one of those affected has confirmed that the images of her are quite old, so we can’t even judge the situation on what we know now. If taking the pics was a mistake (and I don’t think I’m entitled to say it was), it was a mistake made possibly years ago.

I don’t think people understand where their data is or how it is stored. Rather than wagging our fingers at the victims of a sex crime, anyone involved in data protection and security should concentrate on educating the world about the risks. I think the big tech companies like Google, Apple and Facebook would be uncomfortable with this idea, which is why security and sharing are presented as such tedious, impenetrable topics. They don’t want more informed use of their services, they just want the data like everyone else. The defaults for sharing and online storage, for location and tracking, for a whole variety of privacy invasive settings should be set to OFF. Activities involving risk should be a conscious choice, not an accidental side effect of living in the 21st century.