The Curse of the Padlock

One of the dangers of working in Data Protection is the risk of becoming a pedant. Precision matters; court cases have turned on the meaning of individual words like ‘likely’ and ‘distress’. The legislation is a maze of definitions and concepts that the competent practitioner needs to get to grips with. Lazy thinking can be revealed by an inability to get the details right, so it’s possible to become obsessed with the detail. Even the BCS Data Protection exam has a question which requires you to list the elements of the definition of consent in the right order. It’s easy to lapse into pedantry, to point out every wrongly quoted article, every jumbled phrase.

Nevertheless, getting a simple thing right is often important. GDPR does not cover ‘personal identifiable information’; it covers ‘personal data’, and the definitions of the two are not the same. A person who talks about PII in the context of European Data Protection is starting in the wrong place (the US), and can make mistakes as a result. Another error that seems to be creeping in all over the place is more profound, and risks entrenching one of the biggest misconceptions about how data protection works, a misconception many of us have spent years trying to break down.

The problem is the phrase ‘data privacy’.

I see it everywhere – on LinkedIn naturally, in news coverage of the sector, and predictably, the ICO has fallen for it. They describe themselves as “The UK’s independent authority set up to uphold information rights in the public interest, promoting openness by public bodies and data privacy for individuals.” Look at the Data Privacy Advisory Service, who summarise their services as “At DPAS we help organisations safeguard the fundamental human right to have data kept private by putting in place the best possible protection to keep it secure. This is delivered in line with the General Data Protection Regulation (GDPR) and The Data Protection Act 2018.”

The idea is nonsense. It doesn’t exist. There is no right to data privacy – there is certainly no fundamental right ‘to have data kept private’. This isn’t a snide dig at someone quoting the wrong article. The concept of ‘data privacy’ is a complete misunderstanding of what Data Protection is for, and everyone who promotes it is actively thwarting the efforts of the rest of us to implement data protection in a practical way.

Article 8 of the European Convention on Human Rights says: ‘Everyone has the right to respect for his private and family life, his home and his correspondence’. This right is not absolute; it can be interfered with (only when necessary) in the interests of “national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others”. The right is not just about data – it certainly can be, as is evidenced by cases where celebrities and others use the privacy right to prevent the use of images that breach their right to privacy. But the right to privacy doesn’t have to be about data at all – you can breach a person’s right to privacy by simply observing them, by being in a place where they expect privacy, or by denying them the opportunity to do something privately. Data doesn’t have to come into it.

Clearly, if you did a Venn diagram, there would be circumstances where privacy and data protection overlap. By following the Data Protection principles when you handle a person’s private correspondence for example, you probably also do what’s necessary to protect their privacy. The same is true for confidentiality – not all confidential data is personal data, but a decent stab at the principles will probably respect both. There is, however, a significant portion of the Venn diagram where Data Protection and Privacy do not meet, and the DP part of that is important.

The notion of ‘Data Privacy’ obscures two vital elements of Data Protection. First, data protection is not only about private data. It covers all personal data: private, secret, and public. For years, I have been banging my head against the brick wall of ‘it’s not personal data, it’s in the public domain’. It has long been a struggle to explain that data like photographs, email addresses and other publicly available data is still personal data, just more available and easier to use than some other data. There was a chink of light in Article 14 of the GDPR, which clearly states that a person should be informed even when their data is obtained from ‘publicly accessible sources’. This explicit recognition that public data is still personal data is very helpful, but the notion that ‘data protection’ and ‘data privacy’ are interchangeable muddies the waters again.

Second, in related news, GDPR is not about keeping data private; it is about ensuring that personal data processing is properly regulated. For years, Data Protection has been plagued by the padlock. The Information Commissioner used it as a logo (‘but the padlock is unlocked’ is a defence that umpteen different ICO folk have used when I complained about it), and when I did a Google image search for ‘Data Protection’ today, this is the top set of results:

[Screenshot: Google image search results for ‘Data Protection’, 26 May 2019]

The problem with the Data Protection Padlock is that it presents the legislation as something that locks data up, keeps it away from people. This understanding of data protection leads directly to the belief that disclosure of personal data is inherently problematic and exceptional, and that belief is toxic. I’m not persuaded that Victoria Climbié or Peter Connelly died solely because data about them wasn’t shared, but the pervasive fear of data sharing didn’t help. The GDPR says that ‘the protection of natural persons in relation to the processing of personal data is a fundamental right’. The word ‘privacy’ isn’t mentioned anywhere beyond a reference in a footnote to the ePrivacy Directive, and the processing of personal data is firmly put in the context of operating the EU’s internal market: “This Regulation is intended to contribute to the accomplishment of an area of freedom, security and justice, and of an economic union”.

You can’t achieve the economic union by locking all the data away, by keeping it private. To characterise data protection law as being about ‘data privacy’ is to misrepresent its purpose completely. European Data Protection is a compromise – trade is underpinned by the use, even the exploitation, of personal data, but people have rights, they have control over their data in some (but not all) circumstances, and the legislation is built on foundations of transparency and fairness, not privacy. Arguably, the GDPR tries to even up the power imbalance in some circumstances, but it is not designed to lock up data and keep it private.

Of course, some people might be using ‘privacy’ as a synonym for ‘secure’ – the DPAS statement above seems to elide the two. Only a fool would want to play down the importance of security in the context of using any personal data, but the reduction of Data Protection solely to security is as destructive to a proper understanding of it as the privacy / protection mess. We’ve managed to drag Data Protection out of the IT department, and we need to stamp on this idea that security is the exemplar of good DP practice. Your data can be private and secure, but kept for no good reason, for too long, in an inaccurate state, and there could be too much of it.

Some personal data is private and should remain so. In many situations, the processing of personal data without an eye on people’s legitimate expectations of privacy, especially when monitoring, watching or listening to them, is likely to be unfair and so unlawful. There is a strong link between Data Protection and Privacy, and any attempt to divorce them would be stupid. But the use of ‘data privacy’ as a synonym for data protection is misleading and dangerous – it perpetuates a fundamental misreading of what the legislation is for, and makes the lives of everyone trying to make GDPR work effectively a thousand times harder. It’s time to take this nonsense, lock it up and throw away the key.

Crazy Naked Girls

There’s little to like about the voyeuristic coverage of the theft of images of famous women. Whether it is the feverish frottage of the mainstream press (which largely boils down to LOOK AT ‘EM ALL, IMAGINE ‘EM ALL NAKED, NNNNNNNNGGGGGG!!!!!) or the inevitably crass victim blaming (thank you, Ricky Gervais, for The Office and for absolutely nothing else), it’s all depressing.

The data protection strand in all this hasn’t been much better. Mobile devices are not a safe place to store sensitive data (true). The cloud is – in Graham Cluley’s immaculate phrase – just someone else’s computer (true). But too many security commentators have, perhaps unwittingly, aligned themselves with a ‘They asked for it’ line of thinking. A popular analogy is the one about burglary or car theft (this is an example from 2011). Apparently, you can’t complain if you leave your valuables on the front seat of your car and somebody steals them, and the same goes for pictures of your bits and the internet. In other words, the thinking is more or less that if Jennifer Lawrence is silly enough to take pictures of herself naked, she was basically asking for them to be stolen. For me, this is too close to the mentality that blames rape victims for being drunk, rather than rapists for being rapists. Friends, I blame the rapists.

Taking pictures of oneself is normal for most people, not just actresses – I am odd because I don’t do it, but if I was good looking, I probably would, all the time. It must be great to be extraordinary, and to enjoy being extraordinary. It’s too easy to be holier-than-thou and say that the violated only have themselves to blame. The victims made these images for themselves, or for someone specific. They did not make the images for the media, or for the voyeurs who stole, sold or searched for them. Anyone who handles or seeks them out violates the subject’s privacy, is a criminal and should be treated as such. The victims did nothing remotely scandalous or reprehensible – indeed, they did nothing that is anyone else’s business but their own. They probably didn’t do a privacy impact assessment before taking the pics, but that’s because they’re human beings and not data controllers.

The car analogy doesn’t work because mobile phones and the internet are not immediately understandable physical objects and spaces. When you leave your laptop on the passenger seat of your car, you can turn around and see the laptop sitting there. The risk is apparent and obvious. There’s a striking moment in Luc Besson’s current film ‘Lucy’ where Scarlett Johansson can see data streams soaring out of mobile phones across Paris, and navigates her way through them. We don’t see data like this. Few understand how the internet actually works (I’ve met a lot of people who think cloud storage means that data is floating in the air like a gas). We don’t see the data flowing or spot the footprint it leaves behind. We don’t know where the data ends up and the companies we use don’t tell us. We use unhelpful misnomers like ‘the cloud’ when we mean ‘server in a foreign land’. Many people don’t know how their phones work, where their data is stored, how it is copied or protected, or who can get access to it. This should be the problem that the photo hack alerts us to.

It’s possible that some people would change the way they used technology if they fully understood how it works, but that should be their choice, based on clear information provided by the manufacturers. At least one of those affected has confirmed that the images of her are quite old, so we can’t even judge the situation on what we know now. If taking the pics was a mistake (and I don’t think I’m entitled to say it was), it was a mistake made possibly years ago.

I don’t think people understand where their data is or how it is stored. Rather than wagging our fingers at the victims of a sex crime, anyone involved in data protection and security should concentrate on educating the world about the risks. I think the big tech companies like Google, Apple and Facebook would be uncomfortable with this idea, which is why security and sharing are presented as such tedious, impenetrable topics. They don’t want more informed use of their services, they just want the data like everyone else. The defaults for sharing and online storage, for location and tracking, for a whole variety of privacy invasive settings should be set to OFF. Activities involving risk should be a conscious choice, not an accidental side effect of living in the 21st century.

Peeping Tom

There have already been excellent articles about the antics of contributors to the Facebook page ‘Women Who Eat on Tubes’, including one by Sophie Wilkinson, one of the women who has been targeted, another by the Irish Data Protection lawyer Fergal Crehan, and a surprisingly sympathetic interview in the Telegraph with the page’s progenitor, Tom Burke, in which he insisted he is not a weird deviant. Burke made an unsympathetic and discourteous defence of his activities on Radio 4’s Today programme this morning, claiming that his was an artistic project for the cognoscenti and, in any case, that there is no right to privacy in a public place. There has been some measure of privacy in public places at least since Mr Peck won his Human Rights case in 2003, so perhaps Mr Burke hasn’t been keeping up with recent events.

If you haven’t seen the Facebook or Tumblr pages (I understand the Facebook page is gone), they comprised pictures of women eating on the Tube, taken without their consent by men. These women put things in their mouths, know worra mean, eh? The nudge-nudge-wink-wink voyeurism of WWEOT isn’t hard to spot, but I have another point. WWEOT breached the law.

Long-standing readers of this blog (and DP anoraks) will know that Data Protection offers an exemption in Section 36 for ‘personal, family and household’ uses of personal data. Private citizens are entitled to take photographs in public places, have CCTV on their houses, keep personal diaries about their neighbours, indeed anything they feel like doing. The DPA is not intended to regulate the ordinary person’s activities. However, the exemption has limits. In 2003, the EU Court of Justice found that a Swedish citizen, Mrs Lindqvist, who posted admittedly trivial details about her fellow churchgoers on a church website, could not rely on the Swedish equivalent of our Section 36. Mrs Lindqvist’s publication of information on the internet robbed her of the domestic exemption. As I have written before, the UK Information Commissioner has always studiously pretended that the Lindqvist decision didn’t happen, but the UK Courts have on at least one occasion described this position as ‘absurd’.

The effect of Lindqvist – properly implemented – would be significant, and it would have a substantial effect on social media. Any person who wished to publish an image of another person (or other personal data) on the internet would have to comply with the Data Protection Act. They would probably have to use such data fairly, they would need the consent of the individual (or some other condition like legitimate interest), and they would need to respond to subject access requests and Section 10 notices (which prevent processing that causes damage in some circumstances). There’s no question, for those people who want to put images of other people onto the internet, it would be a huge inconvenience. Lindqvist would also drag the Information Commissioner into an endless cycle of domestic disputes.

Two things here: first, the law is the law. The ICO has no serious argument that Lindqvist is not an accurate reflection of European DP law, and should do its job properly. Second, in my opinion, a person should have the right to sit on a tube eating a banana, dance badly in a nightclub, fall asleep on a train or wear red trousers and be posh without their fellow citizens taking their photos and sneering at them online. If you want to put your own data onto the internet, a proper implementation of Lindqvist would not be any hindrance. But if you can explain to me why other people (anyone, female or male) should lose their rights to be left alone because you have a smartphone and you want to use it, feel free to drop me a comment.

And so back to WWEOT. As I understand Lindqvist, Mr Burke as the creator (or – no doubt – curator) of the group lost S36 merely because of the publication, as have all of his contributors. At the very least, he should be doing a Data Protection notification and paying his annual £35 to the Information Commissioner. Of course, this would mean that every Facebook user and blogger that published data about a third party should do the same. Let me be clear, I think they should. However, there is a more concrete reason that WWEOT fails its domestic purposes test. Mr Burke is clear that he sees Women Who Eat on Tubes as an artistic endeavour. So that’s domestic purposes thrown out of the window and bouncing down the road in the rear view mirror. He’s covered by the DPA.

There is another exemption that Burke and his compadres may be able to use. Section 32 of the Data Protection Act allows a fairly broad exemption from much of Data Protection if the data (i.e. the photographs) is being processed for the special purposes – journalism, literature and art. Using Section 32 requires the Data Controller (Mr Burke) to ‘reasonably believe’ that compliance with any of the DP principles is incompatible with the special purposes. It’s possible that Burke might be able to argue that the public interest in the publication of his non-consensual images meant the principles didn’t apply, but I think he should be made to do so.

But with the DPA applied to his images, every woman featured on the site would be and should be able to test his arguments, and force him to justify the overriding public / artistic interest either to the Information Commissioner (who might run a mile) or the Courts (who would probably make a sensible decision). This should not just be seen as a matter of public debate on whether WWEOT is artistic, or reportage. There is a legal method to test Burke’s assertions that the women could have used – and if an incarnation of WWEOT still exists, still can use.

There is a much wider point to be made about WWEOT and the attitudes that lie behind it that is probably another blog entirely. Nevertheless, the casual intrusion into people’s ordinary daily lives that the internet and smartphones have made possible allows all manner of bullying, stranger-shaming and plain old-fashioned voyeurism. Until we stop looking at what we can do on the internet, and start deciding what we should do, I think clunky tools like the DPA should be employed far more aggressively by the people who find themselves unwillingly in the camera’s lens.

Doctor knows best

Dr Clare Gerada, who was until recently chair of the Royal College of General Practitioners, has written an article for The Times about care.data, stoutly defending the scheme and its benefits for the public. The Times doesn’t give its stories away for free (a stance that they’re perfectly entitled to adopt), so if you want to read the article itself, you’ll either have to subscribe online or buy the newspaper like I did. Accompanying the comment piece is a short article in which she is quoted, perhaps less formally.

The article itself is familiar stuff. “We have nothing to fear” from care.data. Our data will be safe, secure, and used only for “proper and appropriate purposes”. Dr Gerada deserves credit for making clear that identifiable data will be shared outside the Health and Social Care Information Centre: she acknowledges that information will “not be anonymised at all times” because anonymised data only works in a limited number of circumstances. This frankness is refreshing, especially given the fevered Twitter commentary from NHS England’s apparently bewildered National Director for Patients and Information, Tim Kelsey, who still won’t admit that the exchange of a commodity for money is ‘selling’, or that pseudonymised data is identifiable. Only one statement in the comment piece really jars. Gerada describes the leaflet as “asking if we would like to share our data”: we’re being offered an opt-out, and it’s unreasonable to finesse that as an active choice.

I am also wary of the notion that “Part of the compact to get a universal, free health service is to allow data to be used to monitor diseases, plan services, and look at trends in old and new diseases”. The NHS is not free; it’s just free at the point of delivery. We pay for the NHS with our taxes. Even the poorest pay tax on their weekly shop, and the idea that we also have to pay for the NHS with our data is not part of any deal I have ever seen. A much wider debate is necessary before we can let that remark slide. Nevertheless, if you want to see the case in favour, Gerada’s comment piece is a well-informed and persuasive rehearsal of the NHS England position. It’s interesting that nobody directly involved in care.data has been able to put the case as fluently, and I have no hesitation in recommending it to you.

However, if you do read it, permit me to suggest that you also read the separate article, and compare what Dr Gerada says when commenting in The Times with what she says on Twitter. She opens her article with the mournful statement that we live in an “Age of Mistrust”. Perhaps one of the reasons is that those we need to trust turn out to have such clunking feet of clay.

Even the comment piece is misleading when put into context. Gerada states that those who do wish to avoid the “very low risk” of re-identification “should be allowed” to opt out. That’s very generous, except Gerada doesn’t really believe it. On February 3rd, she said on Twitter “I dont think we should be able to opt out – but hey-ho”. She made similar statements on 25th and 26th January, and there are others besides. I can’t find any evidence of a Damascene conversion in advance of her appearance in The Times. Gerada’s comment piece is designed to be reasonable and soothing, but her views are actually much less sympathetic to any notion of choice. Should I trust someone who isn’t straight with people about what they really think?

This is bad enough on its own terms, but when you move to the comments in the accompanying article, it gets worse. Gerada is quoted as describing GPs who are opting their patients out unless they choose to opt in as ‘patronising’. She goes on to say that “It is not right for GP practices to make this decision on their patients’ behalf”. Gerada doesn’t think we should have a choice, but describes those who do as ‘patronising’. It’s an interesting choice of word: when I used it on Twitter to describe Gerada’s approach to care.data, she responded that she was “just opening up a debate. Will not continue now as clearly wrong”, and later observed that calling people patronising was evidence of “how easy it is to then become personal in the debate- hence squashing further debate.” I shouldn’t call her patronising, but it’s fine for her to smear her fellow GPs with the same word.

Perhaps I overstep the mark if I say that Dr Gerada has a patronising attitude towards her fellow citizens. It may be too much to assert that her article for The Times was hypocritical. It won’t help the ‘debate’ very much if I do. However, how helpful, how constructive is it for Gerada to summarise her opponents in this way: The Times quotes her as saying that the act of opting out is ‘selfish, a bit like people who don’t give their kids MMR for herd immunity’. Perhaps you can think of a comment more precisely designed to squash a debate, but I’m dry for now.

Those of us who say no are not simply concerned for our privacy and keen to be given a choice. We’re not even “conspiracy theorists” (which is what she called us earlier this week). We who say no are dangerous. Our decision to opt out actively puts our fellow citizens at risk. Like Tim Kelsey’s loaded statement on the Today programme earlier this week that those who “do not trust the NHS” to protect their data can opt out, Gerada’s comments on Twitter and to the Times journalists show where we’ve got to: Us Versus Them, NHS Fundamentalists versus paranoid heretics. We’re through the looking glass, as one wise person put it to me, and now all that matters is faith. Do you believe in the NHS, or are you against it? All I need to do is finish my blog with a hysterical word like totalitarian or fascist – with due respect to Mike Godwin – and it just gets worse.

Like everything I have written on this subject both here and on Twitter, I doubt it will have any effect on your view of care.data. Either you already agree with me, in which case you will be even more convinced, or you don’t, and you will complain that I am making a personal attack on a respectable, dedicated public figure (needless to say, I have no doubt that Dr Gerada is a respectable, dedicated public figure, which is why I find her view of people like me so depressing). I cannot think of a single issue in my professional life that I have found more dispiriting than this one. It’s become toxic and divisive. They don’t respect or trust Us, and We don’t respect or trust Them. There’s no hope of a resolution.

Soylent Green Is Data

Recently I encountered two good examples of the private sector’s attitude to privacy and consent, both of which vexed me. Firstly, the social media expert Mat Morrison (@mediaczar) highlighted on Twitter a thoughtful blog by @Drdrang about free internet services with the statement that it was the “definitive response to ‘if you aren’t paying for it, you’re the product’ nonsense”. Morrison and I had a short disagreement on Twitter – his view is that it’s unreasonable to pick on the internet’s tendency to feed off its users (my phrase) when so many other sectors like banking and government show the same thirst. He also made the reasonable point that despite being free of the European legal privacy controls, Facebook and Google have commercial imperatives to protect personal data. Morrison talks sense, especially about the way that other sectors are as keen to syphon data as the internet is.

However, I do disagree with him. I don’t think market forces are enough to keep big corporations in line, and I don’t think it’s right to dismiss the ‘you’re the product’ argument, even if it’s a bit hysterical. Morrison said that the statement is akin to the revelation in ‘Soylent Green’ that the much-desired food that is saving an overcrowded world turns out to be made out of people. But the free internet is not the world of Soylent Green – Facebook, Google and other ‘free’ services do not consume humans and feed them back to other humans. The free internet corporations feed off live hosts so that they can feed other corporations. This is not Soylent Green; the free internet is a parasite. I was tempted to call them vampires, but vampires inevitably kill the host, and the free internet wants us all alive.

Many products are sold on a fantasy – nobody’s life was ever made more exciting because they drank a bottle of Fanta, while a Lynx-sheathed manboy does not automatically become a magnet for the ladies. That Facebook exists to bring people together and make connections is actually less of a lie than most – it’s just that the purpose of that network is so that Facebook can monetise the resultant data. Anyone who doesn’t care about their privacy can laugh off ‘You’re the product’ and go all Anthony Weiner to their heart’s content. But that should be an informed choice. If free internet providers aren’t transparent about their business models, we need scary memes to warn the unwary even if this just means that they are better informed when they swim with the leeches. I use some of these internet services, but I don’t think they are free.

A pronounced rejection of choice inflected the other thing that vexed me. While Morrison has a different, but perfectly respectable perspective from mine, I found the blog written by Phil Lee of the law firm Field Fisher Waterhouse and hosted by the International Association of Privacy Professionals to be pernicious. Headlined ‘A Brave New World Demands Brave New Thinking’, it could just as easily have been called ‘The Internet Knows Best’. As far as the net-enabled future is concerned, I think it’s safe to say that Lee is not a sceptic:

“All of these connected devices – this internet of things – collect an enormous volume of information about us, and in general, as consumers we want them. They simplify, organise and enhance our lives” (my emphasis)

Given that Lee’s blog trails the exciting prospect of internet-enabled shoes, I can confidently say that I don’t want them because they’re stupid. But this is only the beginning. In the face of a welter of products with the capability to monitor our every waking moment, Lee identifies the likely “knee-jerk insistence on ever-strengthened consent requirements and standards” that ‘we’ in the privacy community will demand, even though the purpose of these new devices is “ultimately to provide services we want”. Given that we want these services, Lee says that the wrong thing to do is to ask us whether we want these services. Explicit consent is “lazy”. It will drive “poor compliance that delivers little real protection for individuals”.

“Why?” he asks. Yes, Phil, tell us why.

The problem – according to Lee – is that asking people to give consent requires ever-longer consent notices (I’m trying to think who writes these long, tedious, legalistic consent notices, but at the moment I’m coming up with nothing). Consent becomes more about legal compliance and takes the emphasis off privacy by design. Because they have to get consent, Lee argues that technology designers won’t pay any attention to privacy as they build their products, relying solely on a take-it-or-leave-it, impossibly complex consent question at the end.

There are several things in Lee’s blog that got my goat. Aldous Huxley’s novel ‘Brave New World’ is about a dystopia, so abandoning consent in the face of it is – at least metaphorically – surrendering to totalitarianism. The assumption that people want a self-aware fridge is questionable: I know lots of people who don’t even use Facebook. However, my biggest problem is with the premise that these wonderful new products are driven solely by the desire to simplify my life and so nobody needs to ask me whether I want them. I love Apple products and I have no doubt that if Apple produces a watch, as is the current gossip, I will be tempted to buy one. It will not simplify or organise my life – it will be a toy that I will probably struggle to justify buying. I have only now, after two years, found a more constructive use for my iPad than using it to play Osmos. And I have absolutely no doubt that Apple’s intention in launching such a device will be to suck data out of me as I wear it.

The launch of Google Glass – endlessly and breathlessly reported on by numerous commentators including in Lee’s blog – does not strike me as a wonderful development that will improve lives. It is an audacious attempt to place an electronic corporate filter between people and their perception of the real world. The 1990s paranoia about virtual reality offering a dangerous alternative to reality could always be countered by the fact that even if it worked, it was effectively an experience with a natural end. Google Glass goes with you everywhere – the data host can close its laptop or switch its phone to silent, so to ensure that the flow of data keeps coming, Google escapes the restrictions of devices and seeks to mediate the user’s experience of the world. It’s ‘They Live’ in reverse.

If you want to submit yourself to this, be my guest. If you want a US corporation to mediate your senses, go for it. But Lee’s idea that the complexities of this technology mean that we must find alternatives to giving people a choice is the most counter-intuitive response imaginable. It’s absolutely true that the developers of devices and apps should be thinking of privacy at the design stage, but the watchword for this should be choice. If Google cared about privacy, they would not only be launching Google Glass, they would be offering Google BrickWall, a device which would render the wearer invisible to any of Google Glass’s recording and analytical abilities. Instead, non-users will simply have to point and laugh at Google Glass wearers.

Privacy by Design and consent are not mutually exclusive – there is simply no reason to say they are and no reason to let designers off the hook. There is no need for consent agreements to be complicated or technical. I don’t know if Lee is implying that he and his legal colleagues are not up to the task of explaining new technology to the public and offering them a clear choice, but if so, one can only wonder at his pessimism. The premise of the blog is that the public already want these services, so surely they already have some understanding of what they offer and how they work. The key job is for companies to spell out the surprises and then give people a choice. For example:

  • If you buy these internet shoes and use them as advertised, we’ll know where you are, what you’re doing, who you’re talking to, and what you’re talking about
  • We will sell this data to anyone willing to pay for it
  • They will market shoes to you
  • You can’t turn this feature off
  • You will not be able to moonwalk just because you wear these shoes, even though Justin Bieber did so in the advert
  • If you don’t like this stuff, don’t buy the shoes

The brave new thinking that we need is for internet companies to offer adults choices about what using their services involves. This can be a choice within the service, or simply a clear statement of what the service entails and then the choice of whether to participate. Instead of plugging social media as California’s humble gift to the world, they should explain that the price of using the service is data, and explain how this works. They could go further, offering cheaper versions of gadgets that hoover up personal data and charging more for the versions that just cost money. This would be brave (if divisive), whereas Lee’s version of brave sounds a bit Big Brother Knows Best.

When the individual deals with the private sector, especially the big IT behemoths like Apple or Google, the relationship is already impossibly asymmetric. Therefore, as much as possible should be about consumer choice and consent. Consumers should not be infantilised while companies make decisions about how to give them what the companies decide they want. We’re not a homogenous mass, each eager to surrender our selves for a pair of X-ray specs. And most people care about privacy to some extent, because most people shut the door when on the toilet and close the curtains when getting changed.

Beyond sensible legal restrictions to prevent criminality and exploitation, the internet and technology sector is free to sell whatever it likes, and free to ask for payment in data rather than money (we should draw the line at blood and firstborns). The only principle worth hanging onto in the face of this Brave New World is the right to say Yes, or if you want to, No.