Some Remarks on Surveillance today

Stefano Rodotà [1]

Cite as: Rodotà, S., "Some Remarks on Surveillance today", in European Journal of Law and Technology, Vol. 4, No. 2, 2013.

What does surveillance mean today? I say today because this problem has been on the political, social and legal agenda for many decades. We have an enormous bibliography on this issue, with titles like technologies of control against technologies of freedom, Orwell in Athens, the surveillance society, and so on. And since 1971 people have been talking about the assault on privacy, the death of privacy. I make reference to the privacy issue because it is quite apparent that the widening of surveillance implies a continuous erosion of personal data protection. The technological trend towards total disclosure can change our society into a society of surveillance, control and social sorting. In the title of a book on the transparent society there was a question: will technology force us to choose between privacy and freedom? In other words: what degree of surveillance is compatible with a democratic system?

In 1999 Scott McNealy, at that time CEO of Sun Microsystems, said bluntly: "You have zero privacy anyway. Get over it". More recently Mark Zuckerberg described a world where privacy has disappeared as a social norm, as a "natural" effect of the passage from Web 1.0 to Web 2.0 and the social networks. I do not want to discuss this general issue here, but I think that there is legitimate doubt about that conclusion. Are we facing a technological drift we cannot resist? Or is the proclaimed vanishing of data protection a useful tool for security agencies and the business community to free themselves from any legal constraint, rather than a realistic registration of what is happening? We must remember the reaction of the Facebook community - a billion people, the third most populous "nation" in the world after China and India - to Zuckerberg's statement. New rules on data protection, even if controversial and insufficient, have been introduced. Surveillance is not a destiny.

But it is true that the passage to Web 2.0 implies a fresh approach to surveillance issues. And this is all the more true if we look at Web 3.0, the Internet of Things. First of all, surveillance is not only the result of a deliberate and specific activity, but also a by-product of the behaviour of individuals themselves, who voluntarily leave a great deal of information about themselves. Second: surveillance is not only the result of conscious data processing, but in an increasing number of cases the result of the role given to algorithms. Third: surveillance is not aimed at controlling single individuals, particular activities or specific segments of society, but is becoming a universal procedure, involving people at large. Fourth: the risks of surveillance do not arise especially from the activities of public security agencies, but from the unrelenting collection carried out by private, commercial bodies. Let me give you some examples.

We are about to experience what an EU research group termed a "digital tsunami", which might ultimately overthrow the legal tools that safeguard not only privacy, but identity itself and the very freedom of individuals. We are faced with an in-depth change in societal organisation, whereby the public security criterion is liable to become the sole benchmark.

This objective is stated openly. A 2008 document by the EU Council Presidency reads as follows: "Every object the individual uses, every transaction they make and almost everywhere they go will create a detailed digital record. This will generate a wealth of information for public security organisations, and create huge opportunities for more effective and productive public security efforts". Furthermore, "in the near future most objects will generate streams of digital data (…) revealing patterns and social behaviours which public security professionals can use to prevent or investigate incidents". A Statewatch report, The Shape of Things to Come (the same title as a 1933 novel by H. G. Wells), shows how the EU has abandoned the principle that data relating to EU citizens should in principle be kept private from State agencies, in favour of the principle that the State should have access to every detail of our private lives. In this scenario, data protection and judicial scrutiny of police surveillance are perceived inside the EU as "obstacles" to efficient law enforcement cooperation. This implies that European governments and EU policy-makers are pursuing unfettered powers to access and gather masses of personal data on the everyday life of everyone, on the ground that we can all thereby be kept safe and secure from perceived "threats".

The criticisms are levelled against a specific feature of the digital tsunami - the growing use of the public security argument to downsize freedoms and rights and to turn our societal organisation from a society of free individuals into a "nation of suspects". This is unquestionably a key issue, because it has to do with the change in the relationship between citizens and the State; more specifically, it is a violation of the undertaking made by the State vis-à-vis every individual that their data will be used selectively, in compliance with such principles as data minimization, purpose specification, proportionality, and relevance. In this manner, some of the principles underlying the system of personal data protection are being slowly eroded. This applies, first and foremost, to the purpose specification principle and to the principle of separation between the data processed by public bodies and those processed by private entities. The only principle referred to becomes the principle of availability, with a view to improving the exchange and use of data by law enforcement agencies. The multi-functionality criterion is increasingly applied, at times under the pressure exerted by institutional agencies. Data collected for a given purpose are made available for different purposes, which are considered to be as important as those for which the collection had been undertaken. Data processed by a given agency are made available to different agencies. It means that individuals are more and more transparent, and that public bodies are more and more beyond any political and legal control. It implies a redistribution of political and social powers.

This means that the so-called digital tsunami should also be evaluated from different standpoints - starting exactly from the identity perspective. The full availability of all personal data to public agencies brings about a veritable transfer of identity into the hands of such agencies, which can actually rely on information that is unknown to the data subject concerned. This phenomenon is bound to take on increasing importance in view of the growing amount of object-generated information. One comes face to face, in this manner, with one of the key features of data protection - the right of access, meaning everyone's unconditional power to know who holds what data concerning her/him and how such data are used.

The breach of these fundamental principles is apparent in the private sector too. We have concrete examples before us, and they are growing in number by the day. We are all aware of the cases of employees required to carry a small "wearable computer", which allows the employer to guide their activities via satellite, direct them to the goods to collect, specify the routes to be followed or the work to be done, monitor all their movements and thereby locate them at all times. In a report published in 2005 by Professor Michael Blakemore from Durham University, commissioned by the English GMB trade union, it was pointed out that this system already concerned thousands of people and had transformed workplaces into "battery farms" by setting the stage for "prison surveillance". We are facing a small-scale Panopticon, the harbinger of the possibility for these types of social surveillance to become increasingly widespread. Similar results, although concerning only location at the workplace, are already possible by means of the insertion of an RFID chip in employees' badges.

What would become of a society in which a growing number of individuals were tagged and tracked? Social surveillance is entrusted to a sort of electronic leash. The human body is equated with any other moving object, which can be monitored remotely via satellite technologies or radiofrequency devices. If the body can become a password, location technologies are bringing about the creation of the networked person.

We are confronted with changes that have to do with the anthropological features of individuals. We are confronted with a stepwise progression: from being "scrutinised" by means of video surveillance and biometric technologies, individuals can be "modified" via the insertion, direct or indirect, of chips or "smart" tags, in a context that is increasingly turning us precisely into "networked persons" - persons who are permanently on the net, configured little by little in order to transmit and receive signals that allow the tracking and profiling of movements, habits and contacts, and thereby modify the meaning and content of individual autonomy.

In this context, surveillance can easily become the management of that very autonomy. Personal data mining, big databanks, the continuous production of personal and group profiles, the use of algorithms and experiments in autonomic computing make possible new forms of surveillance through new ways of building up personal identity. The mechanisms of societal control are deeply changing. Identity is becoming objectified via mechanisms that do not rely on awareness of one's own self; it is turning into a functional replacement for autonomy - at least to the extent that an adaptive scheme is built out of an identity that was "captured" at a given time, with all the respective features and needs, whereupon that identity is committed to self-management systems that can provide responses and meet the requirements arising out of the given circumstances. The construction of this "adaptive" identity might be regarded as a process that originates exactly from the freezing of that identity and keeps on adapting it to the relevant environment without any decisions being made and/or any awareness being developed by the individual. This is made possible by the unrelenting collection of information yielding statistical and probabilistic estimates that can accordingly anticipate/implement what the individual data subjects would have decided in the given circumstances. The possibility of conscious intervention by the individuals could be totally excluded, making even their participation by default impossible. The construction of identity comes to depend on algorithms, and we must be aware of the role played by mathematical models and algorithms as key elements of an economic organisation that has produced the financial crisis.
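To make this mechanism concrete, the following is a minimal, purely illustrative sketch of how a "frozen" identity can anticipate decisions on a person's behalf; the class name, the sample data and the decision rule are hypothetical assumptions and do not describe any real profiling system.

```python
# Illustrative sketch only: a toy "adaptive identity" built from past behaviour.
# All names and data are hypothetical; no real profiling system is described.
from collections import Counter

class FrozenProfile:
    """Captures a person's past choices and then 'decides' on their behalf."""

    def __init__(self, past_choices):
        # The identity is "frozen" as a statistical summary of past behaviour.
        self.frequencies = Counter(past_choices)

    def decide(self, options):
        # Pick the option the person chose most often in the past.
        # The individual is never consulted: the profile anticipates the decision.
        return max(options, key=lambda option: self.frequencies[option])

profile = FrozenProfile(["news", "news", "sport", "news", "music"])
print(profile.decide(["sport", "news", "music"]))  # -> "news": a projection from the past
```

The point of the sketch is that every "decision" is a projection from past data; nothing in the loop asks the individual what she or he wants now.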

In fact, the environment can act upon the user's behalf without conscious mediation. It can extrapolate behavioural characteristics and generate pro-active responses. One might argue that we are faced with the separation between identity and intentionality, which may give rise to unaccountability, discourage the propensity to change, and jeopardise a vigilant approach to the governance of one's self. Indeed, one should wonder whether the activities performed via self-management systems are a projection from the past rather than the anticipation of future events. We are risking not only a final expropriation of identity as a product of the person's autonomy. Surveillance can be converted into the production of objective, imposed identities, which become the reference against which people will be evaluated.

How can we react? Which legal, social and political paths should be followed? One can start from Article 15 of EU Directive 95/46 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. This article provides, in paragraph 1, that "Member States shall grant the right to every person not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, etc.". For the sake of simplification, one might argue that this is a general provision on the allocation of decision-making powers in the digital world.

However, the symbolic as well as practical import of this provision is markedly reduced by the restrictive interpretation espoused by several domestic laws and, in particular, by the increasingly widespread and sophisticated application of profiling techniques - which have changed the very meaning of "decision". The studies on data mining and profiling have highlighted the basically regulatory importance attached to categorization, which is often socially more binding than legally binding decisions. This point is actually made in the EU directive, which refers to decisions "significantly affecting" persons. Profiling results in social sorting and accordingly brings about the risks of social stigma and exclusion. It is no accident that the definition of privacy had already emphasized the risk of social stigma well in advance of the coming of the age of profiling.

A second approach could look at new forms of management of identity by the individuals themselves. Identity in the cloud has suggested a new approach to identity itself in the social context, moving toward a "user-centric open identity network". The idea is "an identity system that is scalable (so it works everywhere), user-centric (serving your interest, instead of something done to you by outside interests) and, importantly, customizable. This new system would recognize that each of us has multiple identities. We will be able to spoon out bits and pieces of our identity, depending on the social or business context we find ourselves in (…) You could separate your identity into discrete units and assign different access permission depending on your role in a given situation. You could create a business profile, a health care profile, a friend profile, a mom or singles profile, a virtual profile and so on". If you are not obliged to disclose your whole identity, the possibilities for surveillance diminish.
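As a purely illustrative sketch - with hypothetical class, facet and attribute names, not a description of any existing identity system - such selective disclosure can be pictured as identity facets carrying their own access permissions:

```python
# Illustrative sketch only: separating identity into discrete facets with
# per-context access permissions. All names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class IdentityFacet:
    name: str                                   # e.g. "business", "health care", "friend"
    attributes: dict                            # the data belonging to this facet
    allowed_contexts: set = field(default_factory=set)

class UserCentricIdentity:
    def __init__(self):
        self.facets = {}

    def add_facet(self, facet):
        self.facets[facet.name] = facet

    def disclose(self, context):
        """Return only the facets the user has authorised for this context."""
        return {
            name: facet.attributes
            for name, facet in self.facets.items()
            if context in facet.allowed_contexts
        }

identity = UserCentricIdentity()
identity.add_facet(IdentityFacet("business", {"employer": "ACME"}, {"work"}))
identity.add_facet(IdentityFacet("health", {"blood_type": "0+"}, {"hospital"}))
print(identity.disclose("work"))  # only the business facet is revealed
```

Each context receives only the facet the user has authorised, so no single observer ever sees the whole identity.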

A third, defensive approach relies on a multiple set of instruments: the right to be forgotten, envisaged by some new EU rules; the right to silence the chip, giving individuals the power to interrupt the communication of data relating to them, directly or indirectly; the "do not track" function, made possible through technical applications; the limits on access to Facebook for the collection of personal data, whether by employers or for producing commercial profiles; the widening of opt-in instead of opt-out; privacy by design, which can make effective principles like minimization, purpose specification and limitation of data retention (as pointed out, for instance, by the Bundesverfassungsgericht in its decision on the EU directive on data retention).
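By way of illustration only, the following sketch shows how some of these instruments - data minimization, limited retention, opt-in and "do not track" - can be embedded as defaults in code; the field names, the retention period and the helper functions are hypothetical assumptions, not any prescribed implementation.

```python
# Illustrative sketch only: "privacy by design" expressed as code defaults.
# Field names, the retention period and the functions are hypothetical.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"user_id", "timestamp"}   # minimization: keep only what the purpose needs
RETENTION = timedelta(days=30)               # limited data retention
TRACKING_ON_BY_DEFAULT = False               # opt-in instead of opt-out

def minimise(record: dict) -> dict:
    """Drop every field not strictly required for the declared purpose."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

def purge_expired(records: list) -> list:
    """Delete records older than the retention period (timestamps assumed timezone-aware)."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in records if r["timestamp"] >= cutoff]

def may_track(request_headers: dict, user_opted_in: bool) -> bool:
    """Honour a 'do not track' signal and require an explicit opt-in."""
    if request_headers.get("DNT") == "1":    # the user has asked not to be tracked
        return False
    return user_opted_in or TRACKING_ON_BY_DEFAULT
```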

But individual empowerment alone cannot satisfy the conditions for defending democracy in this new, continuously changing environment. Direct control by States and international organizations is needed. People cannot be left alone in front of Big Data and the power of the public and private bodies governing the digital world.



[1] Emeritus Professor, Università La Sapienza di Roma, and previously President of the Italian Data Protection Authority.