Privacy notice: human rights for sale

The forthcoming GDPR (General Data Protection Regulation) changes in Europe will provide a little relief from all those email lists that you seem to belong to without ever remembering where or when you signed up. As the uproar over Facebook and its dealings with Cambridge Analytica shows, the regulations might also begin to address some fundamental questions facing our technological society. The wider problem stems from the fact that neither the tech companies, nor governments, nor individual members of the public have any real clue how to deal with the issues that technology throws up. In this chaotic context, it almost began to seem reasonable for companies to sell people's data without gaining prior consent. But is privacy really for sale?
 
In relation to companies and privacy, two fundamental issues arise:
• Where does privacy end?
• Where should transparency begin?
 
Although the issues are novel, we do not have to make it up as we go along. Privacy is a basic human right, enshrined in Article 12 of the Universal Declaration of Human Rights. It is therefore both appropriate and possible to use the UN framework designed for defining the relationship between companies and human rights to set the expectations for companies on privacy.
 
Since privacy is a basic human right, it should be respected by companies. On the other hand, people can choose to share information about themselves. The key questions are: share with whom? And why? It was interesting to see Facebook's Founder & CEO Zuckerberg underline the point in his Senate committee testimony that his company only collected data that was necessary to deliver services to customers. This seems like a good rule, but of course it presumes the service was what the individual actually wanted (and we should beware of the implications should the service promise the world). In line with a human rights treatment, the key test here is whether the individual gave their free, prior and informed consent to the use of their data by the company. As the US Senators interviewing Zuckerberg enjoyed pointing out, an online 80-page set of terms and conditions doesn't really count. And neither does a survey filled out by someone else.
 
Just like any other new technology, information technology should be developed and deployed responsibly. From a human rights perspective, doing so requires companies to carry out due diligence to examine the likely risks involved in deploying the technology; act to reduce the most salient risks identified; put in place procedures to deal with breaches of rights where they occur; and finally, provide remedy to the individuals affected by any breaches. Companies like Facebook are only now scrambling to catch up with a process that has been clearly set out for some 10 years.
 
Yet another part of the UN Framework on companies and human rights is transparency. All the processes just listed should be reported on by the company. This rarely happens in relation to any human right. But here we need to remember that transparency is not in itself a human right. Yet to approach human rights responsibly, it is necessary for a company to be transparent.
 
It is important to point out, as I did in my book 'Corporate Truth', that privacy and transparency apply in opposite ways to companies and to individuals. Ethically, and despite the legal position, companies should be transparent about themselves while respecting individual privacy. The usual position is the reverse: companies sell data about individuals, but won't reveal anything about themselves (unless they can get something out of it).
 
So, to answer the two fundamental questions posed above:
• For people, privacy does not end
• For companies, transparency begins at the beginning.
 
Professor Adrian Henriques is a consultant and an Associate of C&E Advisory. He blogs at www.henriques.info