Crazy Naked Girls

There’s little to like about the voyeuristic coverage of the theft of images of famous women. Whether it is the feverish frottage of the mainstream press (which largely boils down to LOOK AT ‘EM ALL, IMAGINE ‘EM ALL NAKED, NNNNNNNNGGGGGG!!!!!) or the inevitably crass victim blaming (thank you, Ricky Gervais, for The Office and for absolutely nothing else), it’s all depressing.

The data protection strand in all this hasn’t been much better. Mobile devices are not a safe place to store sensitive data (true). The cloud is – in Graham Cluley’s immaculate phrase – just someone else’s computer (true). But too many security commentators have, perhaps unwittingly, aligned themselves with a ‘They asked for it’ line of thinking. A popular analogy is the one about burglary or car theft (this is an example from 2011). Apparently, you can’t complain if you leave your valuables on the front seat of your car and somebody steals them, and the same goes for pictures of your bits and the internet. In other words, the thinking is more or less that if Jennifer Lawrence is silly enough to take pictures of herself naked, she was basically asking for them to be stolen. For me, this is too close to the mentality that blames rape victims for being drunk, rather than rapists for being rapists. Friends, I blame the rapists.

Taking pictures of oneself is normal for most people, not just actresses – I am odd because I don’t do it, but if I were good-looking, I probably would, all the time. It must be great to be extraordinary, and to enjoy being extraordinary. It’s too easy to be holier-than-thou and say that the violated only have themselves to blame. The victims made these images for themselves, or they made them for someone else specific. They did not make the images for the media, or for the voyeurs who stole, sold or searched for them. Anyone who handles or seeks them out violates the subject’s privacy, is a criminal and should be treated as such. The victims did nothing remotely scandalous or reprehensible – indeed, they did nothing that is anyone else’s business but their own. They probably didn’t do a privacy impact assessment before taking the pics, but that’s because they’re human beings and not data controllers.

The car analogy doesn’t work because mobile phones and the internet are not immediately understandable physical objects and spaces. When you leave your laptop on the passenger seat of your car, you can turn around and see the laptop sitting there. The risk is apparent and obvious. There’s a striking moment in Luc Besson’s current film ‘Lucy’ where Scarlett Johansson can see data streams soaring out of mobile phones across Paris, and navigates her way through them. We don’t see data like this. Few understand how the internet actually works (I’ve met a lot of people who think cloud storage means that data is floating in the air like a gas). We don’t see the data flowing or spot the footprint it leaves behind. We don’t know where the data ends up and the companies we use don’t tell us. We use unhelpful misnomers like ‘the cloud’ when we mean ‘server in a foreign land’. Many people don’t know how their phones work, where their data is stored, how it is copied or protected, or who can get access to it. This should be the problem that the photo hack alerts us to.

It’s possible that some people would change the way they used technology if they fully understood how it works, but that should be their choice, based on clear information provided by the manufacturers. At least one of those affected has confirmed that the images of her are quite old, so we can’t even judge the situation on what we know now. If taking the pics was a mistake (and I don’t think I’m entitled to say it was), it was a mistake made possibly years ago.

I don’t think people understand where their data is or how it is stored. Rather than wagging our fingers at the victims of a sex crime, anyone involved in data protection and security should concentrate on educating the world about the risks. I think the big tech companies like Google, Apple and Facebook would be uncomfortable with this idea, which is why security and sharing are presented as such tedious, impenetrable topics. They don’t want more informed use of their services, they just want the data like everyone else. The defaults for sharing and online storage, for location and tracking, for a whole variety of privacy invasive settings should be set to OFF. Activities involving risk should be a conscious choice, not an accidental side effect of living in the 21st century.

Peeping Tom

There have already been excellent articles about the antics of contributors to the Facebook page ‘Women Who Eat on Tubes’, including one by Sophie Wilkinson, one of the women who has been targeted, another by the Irish Data Protection lawyer Fergal Crehan, and a surprisingly sympathetic interview in the Telegraph with the page’s progenitor, Tom Burke, in which he insisted he is not a weird deviant. Burke made an unsympathetic and discourteous defence of his activities on Radio 4’s Today programme this morning, claiming that his was an artistic project for the cognoscenti and, in any case, that there is no right to privacy in a public place. There has been some measure of privacy in public places at least since Mr Peck won his Human Rights case in 2003, so perhaps Mr Burke hasn’t been keeping up with recent events.

If you haven’t seen the Facebook or Tumblr pages (I understand the Facebook page is gone), they comprised pictures of women eating on the Tube, taken without their consent by men. These women put things in their mouths, know worra mean, eh? The nudge-nudge-wink-wink voyeurism of WWEOT isn’t hard to spot, but I have another point. WWEOT breached the law.

Long-standing readers of this blog (and DP anoraks) will know that Data Protection offers an exemption in Section 36 for ‘personal, family and household’ uses of personal data. Private citizens are entitled to take photographs in public places, have CCTV on their houses, keep personal diaries about their neighbours, indeed do anything they feel like doing. The DPA is not intended to regulate the ordinary person’s activities. However, the exemption has limits. In 2003, the EU Court of Justice found that a Swedish citizen, Mrs Lindqvist, who wrote admittedly trivial details about her fellow churchgoers on a church website, had breached Swedish data protection law. Mrs Lindqvist’s publication of the information on the internet robbed her of the Swedish equivalent of our Section 36 domestic exemption. As I have written before, the UK Information Commissioner has always studiously pretended that the Lindqvist decision didn’t happen, but the UK Courts have on at least one occasion described this position as ‘absurd’.

The effect of Lindqvist – properly implemented – would be significant, and it would have a substantial effect on social media. Any person who wished to publish an image of another person (or other personal data) on the internet would have to comply with the Data Protection Act. They would probably have to use such data fairly, they would need the consent of the individual (or some other condition like legitimate interest), and they would need to respond to subject access requests and Section 10 notices (which prevent processing that causes damage in some circumstances). There’s no question that, for people who want to put images of other people onto the internet, it would be a huge inconvenience. Lindqvist would also drag the Information Commissioner into an endless cycle of domestic disputes.

Two things here: first, the law is the law. The ICO has no serious argument that Lindqvist is not an accurate reflection of European DP law, and should do its job properly. Second, in my opinion, a person should have the right to sit on a tube eating a banana, dance badly in a nightclub, fall asleep on a train or wear red trousers and be posh without their fellow citizens taking their photos and sneering at them online. If you want to put your own data onto the internet, a proper implementation of Lindqvist would not be any hindrance. But if you can explain to me why other people (anyone, female or male) should lose their rights to be left alone because you have a smartphone and you want to use it, feel free to drop me a comment.

And so back to WWEOT. As I understand Lindqvist, Mr Burke as the creator (or – no doubt – curator) of the group lost S36 merely because of the publication, as have all of his contributors. At the very least, he should be doing a Data Protection notification and paying his annual £35 to the Information Commissioner. Of course, this would mean that every Facebook user and blogger who publishes data about a third party should do the same. Let me be clear: I think they should. However, there is a more concrete reason that WWEOT fails the domestic purposes test. Mr Burke is clear that he sees Women Who Eat on Tubes as an artistic endeavour. So that’s domestic purposes thrown out of the window and bouncing down the road in the rear view mirror. He’s covered by the DPA.
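The chain of reasoning here can be sketched as a simple decision function. This is my own illustration of the argument, not legal advice, and the function and parameter names are hypothetical:

```python
def domestic_exemption_applies(published_to_internet: bool,
                               special_purpose: bool) -> bool:
    """Sketch of the Section 36 'domestic purposes' test as argued above.

    Following Lindqvist, publication on the internet takes processing
    outside the personal/household exemption; a self-declared artistic
    (or journalistic) purpose is, by definition, not domestic either.
    """
    if published_to_internet:
        return False  # Lindqvist: internet publication loses the exemption
    if special_purpose:
        return False  # artistic/journalistic purposes are not domestic
    return True
```

On either limb, WWEOT – published online and described by its own creator as art – falls outside Section 36.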

There is another exemption that Burke and his compadres may be able to use. Section 32 of the Data Protection Act allows a fairly broad exemption from much of Data Protection if the data (i.e. the photographs) is being processed for the special purposes – journalism, literature and art. Using Section 32 requires the Data Controller (Mr Burke) to ‘reasonably believe’ that compliance with any of the DP principles is incompatible with the special purposes. It’s possible that Burke might be able to argue that the public interest in the publication of his non-consensual images means the principles don’t apply, but I think he should be made to make that argument.

By applying the DPA to his images, every woman featured on the site would be and should be able to test his arguments, and force him to justify the overriding public / artistic interest either to the Information Commissioner (who might run a mile) or the Courts (who would probably make a sensible decision). This should not just be a matter of public debate on whether WWEOT is artistic or reportage. There is a legal method to test Burke’s assertions that the women could have used – and, if an incarnation of WWEOT still exists, still can use.

There is a much wider point to be made about WWEOT and the attitudes that lie behind it that is probably another blog entirely. Nevertheless, the casual intrusion into people’s ordinary daily lives that the internet and smartphones have made possible allows all manner of bullying, stranger-shaming and plain old-fashioned voyeurism. Until we stop looking at what we can do on the internet, and start deciding what we should do, I think clunky tools like the DPA should be employed far more aggressively by the people who find themselves unwillingly in the camera’s lens.

Doctor knows best

Dr Clare Gerada, who was until recently chair of the Royal College of General Practitioners, has written an article for The Times about care.data, stoutly defending the scheme and its benefits for the public. The Times doesn’t give its stories away for free (a stance that they’re perfectly entitled to adopt), so if you want to read the article itself, you’ll either have to subscribe online or buy the newspaper like I did. Accompanying the comment piece is a short article in which she is quoted, perhaps less formally.

The article itself is familiar stuff. “We have nothing to fear” from care.data. Our data will be safe, secure, and used only for “proper and appropriate purposes”. Dr Gerada deserves credit for making clear that identifiable data will be shared outside the Health and Social Care Information Centre: she acknowledges that information will “not be anonymised at all times” because anonymised data only works in a limited number of circumstances. This frankness is refreshing, especially given the fevered Twitter commentary from NHS England’s apparently bewildered National Director for Patients and Information, Tim Kelsey, who still won’t admit that the exchange of a commodity for money is ‘selling’, or that pseudonymised data is identifiable. Only one statement in the comment piece really jars. Gerada describes the care.data leaflet as “asking if we would like to share our data”: we’re being offered an opt-out, and it’s unreasonable to finesse it as being an active choice.

I am also wary of the notion that “Part of the compact to get a universal, free health service is to allow data to be used to monitor diseases, plan services, and look at trends in old and new diseases”. The NHS is not free; it’s just free at the point of delivery. We pay for the NHS with our taxes. Even the poorest pay tax on their weekly shop, and the idea that we also have to pay for the NHS with our data is not part of any deal I have ever seen. A much wider debate is necessary on that before we can let that remark slide. Nevertheless, if you want to see the case in favour, Gerada’s comment piece is a well-informed and persuasive rehearsal of the NHS England position. It’s interesting that nobody directly involved in care.data has been able to put the case as fluently and I have no hesitation in recommending it to you.

However, if you do read it, permit me to suggest that you read the separate article, and compare what Dr Gerada says when commenting in the Times with what she says on Twitter. She opens her article with the mournful statement that we live in an “Age of Mistrust”. Perhaps one of the reasons is that those we need to trust turn out to have such clunking feet of clay.

Even the comment piece is misleading when put into context. Gerada states that those who do wish to avoid the “very low risk” of re-identification “should be allowed” to opt out. That’s very generous, except Gerada doesn’t really believe it. On February 3rd, she said on Twitter “I dont think we should be able to opt out – but hey-ho”. She made similar comments on 25th and 26th January, and there are other statements along the same lines. I can’t find any evidence of a Damascene conversion in advance of her appearance in The Times. Gerada’s comment piece is designed to be reasonable and soothing, but her views are actually much less sympathetic to any notion of choice. Should I trust someone who isn’t straight with people about what they really think?

This is bad enough on its own terms, but when you move to the comments in the accompanying article, it gets worse. Gerada is quoted as describing GPs who are opting their patients out unless they choose to opt in as ‘patronising’. She goes on to say that “It is not right for GP practices to make this decision on their patients’ behalf”. Gerada doesn’t think we should have a choice, but describes those who do as ‘patronising’. It’s an interesting choice of word, as when I used it on Twitter to describe Gerada’s approach to care.data, she responded that she was “just opening up a debate. Will not continue now as clearly wrong”, and later observed that calling people patronising was evidence of “how easy it is to then become personal in the debate- hence squashing further debate.” I shouldn’t call her patronising, but it’s fine for her to smear her fellow GPs with the same word.

Perhaps I overstep the mark if I say that Dr Gerada has a patronising attitude towards her fellow citizens. It may be too much to assert that her article for the Times was hypocritical. It won’t help the ‘debate’ very much if I do. However, how helpful, how constructive is it for Gerada to summarise her opponents in this way? The Times quotes her as saying that the act of opting out is ‘selfish, a bit like people who don’t give their kids MMR for herd immunity’. Perhaps you can think of a comment more precisely designed to squash a debate, but I’m dry for now.

Those of us who say no are not simply concerned for our privacy and keen to be given a choice. We’re not even “conspiracy theorists” (which is what she called us earlier this week). We who say no are dangerous. Our decision to opt out actively puts our fellow citizens at risk. Like Tim Kelsey’s loaded statement on the Today programme earlier this week that those who “do not trust the NHS” to protect their data can opt out, Gerada’s comments on Twitter and to the Times journalists show where we’ve got to: Us Versus Them, NHS Fundamentalists versus paranoid heretics. We’re through the looking glass, as one wise person put it to me, and now all that matters is faith. Do you believe in the NHS, or are you against it? All I need to do is finish my blog with a hysterical word like totalitarian or fascist – with due respect to Mike Godwin – and it just gets worse.

Like everything I have written on this subject both here and on Twitter, I doubt it will have any effect on your view of care.data. Either you already agree with me, in which case you will be even more convinced, or you don’t, and you will complain that I am making a personal attack on a respectable, dedicated public figure (needless to say, I have no doubt that Dr Gerada is a respectable, dedicated public figure, which is why I find her view of people like me so depressing). I cannot think of a single issue in my professional life that I have found more dispiriting than looking at this one. It’s become toxic and divisive. They don’t respect or trust Us, and We don’t respect or trust Them. There’s no hope of a resolution.

Soylent Green Is Data

Recently I encountered two good examples of the private sector’s attitude to privacy and consent, both of which vexed me. Firstly, the social media expert Mat Morrison (@mediaczar) highlighted on Twitter a thoughtful blog by @Drdrang about free internet services with the statement that it was the “definitive response to ‘if you aren’t paying for it, you’re the product’ nonsense”. Morrison and I had a short disagreement on Twitter – his view is that it’s unreasonable to pick on the internet’s tendency to feed off its users (my phrase) when so many other sectors like banking and government show the same thirst. He also made the reasonable point that despite being free of the European legal privacy controls, Facebook and Google have commercial imperatives to protect personal data. Morrison talks sense, especially about the way that other sectors are as keen to syphon data as the internet is.

However, I do disagree with him. I don’t think market forces are enough to keep big corporations in line, and I don’t think it’s right to dismiss the ‘you’re the product’ line, even if it’s a bit hysterical. Morrison said that the statement is akin to the revelation in ‘Soylent Green’ that the much-desired food that is saving an overcrowded world turns out to be made out of people. But the free internet is not the world of Soylent Green – Facebook, Google and other ‘free’ services do not consume humans and feed them back to other humans. The free internet corporations feed off live hosts so that they can feed other corporations. This is not Soylent Green; the free internet is a parasite. I was tempted to call them vampires, but vampires inevitably kill the host, and the free internet wants us all alive.

Many products are sold on a fantasy – nobody’s life was ever made more exciting because they drank a bottle of Fanta, while a Lynx-sheathed manboy does not automatically become a magnet for the ladies. That Facebook exists to bring people together and make connections is actually less of a lie than most – it’s just that the purpose of that network is so that Facebook can monetise the resultant data. Anyone who doesn’t care about their privacy can laugh off ‘You’re the product’ and go all Anthony Weiner to their heart’s content. But that should be an informed choice. If free internet providers aren’t transparent about their business models, we need scary memes to warn the unwary even if this just means that they are better informed when they swim with the leeches. I use some of these internet services, but I don’t think they are free.

A pronounced rejection of choice inflected the other thing that vexed me. While Morrison has a different, but perfectly respectable perspective from mine, I found the blog written by Phil Lee of the law firm Field Fisher Waterhouse and hosted by the International Association of Privacy Professionals to be pernicious. Headlined ‘A Brave New World Demands Brave New Thinking’, it could just as easily have been called ‘The Internet Knows Best’. As far as the net-enabled future is concerned, I think it’s safe to say that Lee is not a sceptic:

“All of these connected devices – this internet of things – collect an enormous volume of information about us, and in general, as consumers we want them. They simplify, organise and enhance our lives” (my emphasis)

Given that Lee’s blog trails the exciting prospect of internet-enabled shoes, I can confidently say that I don’t want them because they’re stupid. But this is only the beginning. In the face of a welter of products with the capability to monitor our every waking moment, Lee identifies the likely “knee-jerk insistence on ever-strengthened consent requirements and standards” that ‘we’ in the privacy community will demand, even though the purpose of these new devices is “ultimately to provide services we want”. Given that we want these services, Lee says that the wrong thing to do is to ask us whether we want these services. Explicit consent is “lazy”. It will drive “poor compliance that delivers little real protection for individuals”.

“Why?” he asks. Yes, Phil, tell us why.

The problem – according to Lee – is that asking people to give consent requires ever-longer consent notices (I’m trying to think who writes these long, tedious, legalistic consent notices, but at the moment I’m coming up with nothing). Consent becomes more about legal compliance and takes the emphasis off privacy by design. Because they have to get consent, Lee argues that technology designers won’t pay any attention to privacy as they build their products, relying solely on a take-it-or-leave-it, impossibly complex consent question at the end.

There are several things in Lee’s blog that got my goat. Aldous Huxley’s novel ‘Brave New World’ is about a dystopia, so abandoning consent in the face of it is – at least metaphorically – surrendering to totalitarianism. The assumption that people want a self-aware fridge is questionable: I know lots of people who don’t even use Facebook. However, my biggest problem is with the premise that these wonderful new products are driven solely by the desire to simplify my life and so nobody needs to ask me whether I want them. I love Apple products and I have no doubt that if Apple produces a watch, as is the current gossip, I will be tempted to buy one. It will not simplify or organise my life – it will be a toy that I will probably struggle to justify buying. I have only now, after two years, found a more constructive use for my iPad than using it to play Osmos. And I have absolutely no doubt that Apple’s intention in launching such a device will be to suck data out of me as I wear it.

The launch of Google Glass – endlessly and breathlessly reported on by numerous commentators, including in Lee’s blog – does not strike me as a wonderful development that will improve lives. It is an audacious attempt to place an electronic corporate filter between people and their perception of the real world. The 1990s paranoia about virtual reality offering a dangerous alternative to reality could always be countered by the fact that even if it worked, it was effectively an experience with a natural end. Google Glass goes with you everywhere. The data host can close a laptop or switch a phone to silent, so to ensure that the flow of data keeps coming, Google escapes the restrictions of those devices and seeks to mediate the user’s experience of the world instead. It’s ‘They Live’ in reverse.

If you want to submit yourself to this, be my guest. If you want a US corporation to mediate your senses, go for it. But Lee’s idea that the complexities of this technology mean that we must find alternatives to giving people a choice is the most counter-intuitive response imaginable. It’s absolutely true that the developers of devices and apps should be thinking of privacy at the design stage, but the watchword for this should be choice. If Google cared about privacy, they would not only be launching Google Glass, they would be offering Google BrickWall, a device which would render the wearer invisible to any of Google Glass’s recording and analytical abilities. Instead, non-users will simply have to point and laugh at Google Glass wearers.

Privacy by Design and consent are not mutually exclusive – there is simply no reason to say they are and no reason to let designers off the hook. There is no need for consent agreements to be complicated or technical. I don’t know if Lee is implying that he and his legal colleagues are not up to the task of explaining new technology to the public and offering them a clear choice, but if so, one can only wonder at his pessimism. The premise of the blog is that the public already want these services, so surely they already have some understanding of what they offer and how they work. The key job is for companies to spell out the surprises and then give people a choice. For example:

  • If you buy these internet shoes and use them as advertised, we’ll know where you are, what you’re doing, who you’re talking to, and what you’re talking about
  • We will sell this data to anyone willing to pay for it
  • They will market shoes to you
  • You can’t turn this feature off
  • You will not be able to moonwalk just because you wear these shoes, even though Justin Bieber did so in the advert
  • If you don’t like this stuff, don’t buy the shoes

The brave new thinking that we need is for internet companies to offer adults choices about what using their services involves. This can be a choice within the service, or simply a clear statement of what the service entails and then the choice of whether to participate. Instead of plugging social media as California’s humble gift to the world, they should explain that the price of using the service is data, and explain how this works. They could go further, offering cheaper versions of gadgets that hoover up personal data and charging more for the versions that just cost money. This would be brave (if divisive), whereas Lee’s version of brave sounds a bit Big Brother Knows Best.

When the individual deals with the private sector, especially the big IT behemoths like Apple or Google, the relationship is already impossibly asymmetric. Therefore, as much as possible should be about consumer choice and consent. Consumers should not be infantilised while companies make decisions about how to give them what the companies decide they want. We’re not a homogenous mass, each eager to surrender our selves for a pair of X-ray specs. And most people care about privacy to some extent, because most people shut the door when on the toilet and close the curtains when getting changed.

Beyond sensible legal restrictions to prevent criminality and exploitation, the internet and technology sector is free to sell whatever it likes, and free to ask for payment in data rather than money (we should draw the line at blood and firstborns). The only principle worth hanging onto in the face of this Brave New World is the right to say Yes, or if you want to, No.

Keep your PECR up (I know, I’m sorry)

The BBC reports that Bournemouth and Poole NHS PCT have got themselves into hot water by using an external company to call a member of the public and offer him health screening because he was in an at-risk group. The PCT were, it seems, attempting to meet a target imposed on them by the Department of Health. The Trust felt that it was not “practical” for them to get consent in this case.

Given that my only source is the BBC news website, I cannot make any definitive judgement about what went on, although it’s clear that the person concerned managed to convince the Information Commissioner’s Office that the use of his data was unfair. The ICO is quoted as follows: “Individuals should have been informed by the trust that they would be receiving a call inviting them to attend a risk assessment, and that this letter should ideally give them some method for asking not to be contacted”.

It’s at this point, however, that I feel entitled to mount my hobby horse and ride it up and down the public highway.

The Information Commissioner’s own definition of direct marketing, found in his guidance on the subject, is ‘the offer for sale of goods or services, or the promotion of an organisation’s aims and ideals’. The rules covering any form of electronic direct marketing (i.e. phone, email, and text) come from the Privacy and Electronic Communications Regulations (usually pronounced ‘Pecker’), not from the Data Protection Act. PECR does not contain any discussion of harm, benefit or legitimate interest – its rules are simple and relatively easy to explain.

Direct marketing cold-calling by phone is legal – unless the person is on the Telephone Preference Service or has told the organisation not to call. Therefore, to make a marketing call, the organisation (in PECR terms, the ‘person’) must screen the numbers they are using against the TPS lists (which they must rent or buy from the TPS itself or a marketing company who has done so). Direct marketing emails and texts are opt-in – you cannot text or email someone without their permission, and the same is true of automated marketing phone calls. There are some wrinkles – business and personal emails are treated differently – but for direct marketing, that’s about it.
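The rules are simple enough to express as a short sketch. This is my own illustration of the screening logic, not any real compliance tool – the register contents and function names are invented:

```python
# Hypothetical TPS register, for illustration only; a real organisation
# would screen against the licensed TPS list.
TPS_REGISTER = {"01202555001", "01202555002"}

def may_cold_call(number, opt_outs):
    """A live marketing call is lawful unless the number is TPS-registered
    or the person has told the organisation not to call."""
    return number not in TPS_REGISTER and number not in opt_outs

def may_send_electronic_marketing(address, opt_ins):
    """Email, text and automated calls are opt-in: no prior permission,
    no message - however beneficial the content might be."""
    return address in opt_ins
```

Note that neither function asks what the message says, only whether it is marketing: PECR has no ‘worthy cause’ exception.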

As described in the BBC story, the PCT’s call was a marketing call. They were not calling the person to tell him results, to arrange an appointment for treatment that had already been consented to, to discuss something that was already happening. The PCT’s aims include the hitting of a target for screening of a specific group, and without previous consent, the only possible interpretation of the call is that recruiting people to join the screening is a form of direct marketing. Having worked – briefly and without particular distinction – in the NHS and having had this argument several times, I know that few health staff would agree with me. Indeed, when looking at this issue many in the public sector have the same problem – if a message is clearly of benefit to the recipient, how can we not be allowed to do it?

Although some in the private sector find ways around PECR or ignore it altogether, I have never spoken to a private sector person who didn’t see how the regulations applied to what they do. Public sector, voluntary and charity organisations are obsessed with the value or justification of their message. Labour, the Lib-Dems, the Conservatives and the Scottish Nationalists have all received enforcement notices under PECR for their use of automated marketing calls – the Scottish Nationalists perhaps personified the wider misunderstanding of how PECR works by claiming that being prevented from using automated calls voiced by Sir Sean Connery was a breach of their human rights. It’s not. I have a right not to be bothered by what you think I should be interested in, whoever you are. And PECR gives me that.

PECR is a single-minded law in this respect, caring only about the content of the message. If your call, your email, your text is designed to sell, promote, persuade or influence – it’s direct marketing. If you want to change behaviour, get people to make better choices, or even tell them something that will change or save their lives, PECR doesn’t care. Even if you don’t know who the recipient is, that’s irrelevant – this isn’t Data Protection.

Of course, the BBC coverage doesn’t mention PECR and screening against the TPS, which implies that some people in the ICO don’t know what their own position on PECR and direct marketing is, but that’s not a surprise. The point is, the next time someone has a smart idea for a communication campaign, whether it’s health promotion, news of how you’re dealing with anti-social behaviour, or the benefits of recycling, just remember to think about PECR.

Which is a bit funnier if you say it out loud.