Privacy Enhancing of Smart CCTV and its Ethical and Legal Problems

Heiner Koch, [1] Tobias Matzner, [2] Julia Krumm [3]

Cite as: Koch, H., Matzner, T. & Krumm, J., "Privacy Enhancing of Smart CCTV and its Ethical and Legal Problems", in European Journal of Law and Technology, Vol. 4, No. 2, 2013.

Abstract

To reflect the technical progress resulting from the invention of "smart" cameras, five joint research partners at German universities have been working on "smart closed circuit television" (smart CCTV) as part of the research project MuViT (German acronym for Pattern Recognition and Video Tracking) since 2010. The project deliberately focuses on research and development of smart CCTV systems. Consequently, technological possibilities and modes of application are considered that are not yet feasible but seem to be promising research goals or political desiderata (e.g. through public funding). In this paper we present results of the project MuViT concerning the legal requirements and the social and ethical consequences of the new technology.

Smart CCTV provides information at a new level of quality when it identifies people and analyzes their movements. It is claimed to be less expensive and more effective than ordinary CCTV. If this is true, a considerably higher number of (smart) CCTV systems could be installed. But people are often critical of privacy intrusion in the context of surveillance. Therefore, such an expansion of (smart) CCTV could be justified by arguing that it will not have a high impact on privacy. It could even be argued that smart CCTV is privacy enhancing and therefore ethically justified, since a lot of information is only processed by algorithms and need not be available to human operators. We want to show in which ways this argument might fail.

1. Privacy enhancing

Prima facie it might seem that smart CCTV intrudes on privacy even more than ordinary CCTV. Being "smart" is an additional feature which generates more data. But whether this has an impact on privacy depends on the use of smart CCTV. One main privacy problem with ordinary CCTV is the possibility of identifying persons. If someone watches the CCTV monitors, he or she might be able to identify everyone being watched. If the videos are stored, every person on the video is - in principle - identifiable (at least if the videos have a high enough resolution). In order to identify persons who are suspected to be in danger or to pose a risk to others, uninvolved persons need to be watched as well, and may also be recorded and identified. This is a privacy problem which might be solved by smart CCTV.

At least three different kinds of use of smart CCTV can be distinguished (following Macnish 2012):

  1. Full automation (computer filters information and computer decides).
  2. Partial automation unblinkered (computer and operator filter information and operator decides).
  3. Partial automation blinkered (computer filters information and operator decides).

In the full automation case, the smart system not only recognizes suspicious behavior but also decides whether intervention is necessary (and maybe which kind of intervention is appropriate). This entails a lot of ethical and legal problems. Therefore this is not a realistic implementation scenario in Germany at this point in time.

Still, the scenario can show the privacy advantages of smart CCTV: as long as the smart system is not capable of face recognition and/or other methods of identifying persons, it does not intrude on privacy (in the sense of making people identifiable). Nevertheless, this is only true as long as it does not store video images that can later be accessed.

Realistic scenarios will be partially automated. In the unblinkered case, the smart CCTV system works only in addition to ordinary CCTV monitoring. In security areas where every false negative of the smart CCTV could be a serious problem, this kind of smart CCTV support for the operators is conceivable. If the system fails, the human operator may succeed. At the present stage of technological development (and depending on the scenario in which smart CCTV will be used) the system will produce too many false negatives and false positives, and therefore the unblinkered dual human/machine information filter scenario is likely. However, this use of smart CCTV poses a greater threat to privacy than ordinary CCTV. This is quite obvious, because none of the problems ordinary CCTV poses to privacy have been solved yet. Instead, further problems might arise because of the additional use of smart CCTV (more on that in section 3.2).

Therefore, the only realistic privacy enhancing scenario is blinkered partial automation. Only suspicious behavior will be reported to the operator or will lead to recording the video. Thus the human observer only sees the video in the case of unusual or noteworthy events. All the other data is deleted immediately. Hence only the privacy of "suspicious" people will be intruded upon, as they are the only ones who become identifiable. Yet, no such system will perform without error. Setting the system to a higher sensitivity could reduce false negatives, but this would on the other hand lead to a higher number of false positives and thus to a higher degree of privacy intrusion: in such a blinkered scenario, being tagged as suspicious means one's data becoming available to humans.
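
The underlying logic can be sketched roughly as follows (a minimal, purely illustrative Python sketch; the scoring function, threshold and data structures are our assumptions, not part of any deployed system). It shows why the sensitivity setting directly determines how many people become visible, and thus identifiable, to the operator:

```python
# Minimal sketch of blinkered partial automation (illustrative only; the
# scoring function and threshold are assumptions, not an actual system).
from typing import Callable, List

def blinkered_filter(frames: List[object],
                     score: Callable[[object], float],
                     threshold: float) -> List[object]:
    """Forward only frames the algorithm scores as suspicious; all other
    frames are discarded immediately and never shown to a human operator."""
    forwarded = []
    for frame in frames:
        if score(frame) >= threshold:
            forwarded.append(frame)  # persons in this frame become identifiable
        # else: the frame is deleted right away
    return forwarded

# Raising the sensitivity means lowering the threshold: fewer real incidents
# are missed (fewer false negatives), but more harmless scenes are forwarded
# (more false positives), i.e. more uninvolved people become identifiable.
```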

These considerations show that smart CCTV could be used in a privacy enhancing way (blinkered partial automation) if false negatives are not a serious problem or if the smart CCTV does not produce more false negatives than human operators. However, using the technology in this way could nevertheless counteract its own privacy enhancing purpose. First, this might be due to the expansion of CCTV (see section 2) and second, this might be because privacy is not only about being identifiable. Privacy is supposed to protect autonomy (Van den Hoven 2009, Rössler 2001, Nagenborg 2005). [4] Even if you are not identifiable, surveillance can and is intended to influence your behavior. This applies not only to criminal behavior but also to suspicious behavior, or to what people think suspicious behavior might be.

2. Expansion of (smart) CCTV

The main argument for the claim that smart CCTV counteracts its own privacy enhancing purpose is its expansion, based on its efficiency and on its ethical justification. Therefore, in this part, we try to show in which way and why smart CCTV might lead to an expansion of surveillance in comparison to ordinary CCTV, in order to argue in section 3 that smart CCTV raises several ethical problems, including privacy intrusion.

2.1 Expansion because of efficiency

Ordinary CCTV is very inefficient. Human operators are able to monitor only a few camera images at the same time. Furthermore, they can only monitor for a short period of time without losing concentration. Additionally, human operators are quite expensive. Here smart CCTV might offer more efficient solutions for the same price. Yet, because of the high expenses for human operators, a lot of CCTV systems just record the images for a certain period of time. They are only consulted after something has happened, in order to identify someone or to reconstruct events. Replacing these non-monitored CCTV systems with smart CCTV would be more expensive. Therefore, smart CCTV would not be a cost-effective way to expand CCTV in this case.

In most cases, camera images are poorly monitored. One human operator has to monitor more cameras than he or she can handle (Gouaillier 2009, 17ff). Computers are limited only by their processing power but can work 24/7 without interruptions and without losing concentration. In this way a significantly higher number of camera images can be monitored for the same amount of money. For example, INDECT argues: "There will also be economic benefits, in terms of the reduced staffing requirements." (http://www.indect-project.eu/) In this case, supporting the operator through smart CCTV can be a cost-effective alternative. However, this is only true if one operator can monitor more cameras than before while producing (together with the smart support system) the same number of or fewer false positives and false negatives. If such a setup works effectively, a higher number of cameras can be monitored for the same price, which can lead to an expansion of smart CCTV.

In Germany, the police monitor the cameras they have installed themselves very closely, which is highly expensive. Provided that smart CCTV works reliably, a significantly higher number of cameras could be monitored for the same price. From an economic point of view this could lead to an expansion of smart CCTV installed by the police and even by private persons, too. Taking all of this into consideration, new fields of application for smart CCTV are emerging. Until now, large-area surveillance by drones (as in INDECT) would have been either too expensive with ordinary CCTV or pointless, because the cameras could not be properly monitored. But drones equipped with smart CCTV can be used effectively.

2.2 Expansion because of reduced privacy intrusion

Even if smart CCTV is in some cases less expensive and more efficient than ordinary CCTV, ethical considerations could still speak against its introduction. One of the main ethical concerns in the area of surveillance is privacy. Therefore, if smart CCTV enhances privacy, it might count as ethically justified. For the protection of privacy it might even be demanded that smart CCTV be used instead of ordinary CCTV.

In German and European research funding, privacy is an important issue. With regard to legal compliance and ethical considerations, privacy protection is one of the main points (Kroener & Neyland 2011).

ADDPRIV (funded by the European Seventh Framework Programme, FP7) is an example of an attempt to improve the privacy protection of ordinary CCTV by introducing smart elements: "The ADDPRIV project (Automatic Data relevancy Discrimination for a PRIVacy-sensitive video surveillance) seeks to improve public safety by ensuring the individuals' privacy right, enriching the current video surveillance systems through an automatic discrimination of relevant data recorded. The project addresses the challenge of determining through an automatic, accurate and reliable manner which information obtained from a distributed system of surveillance cameras is relevant from the security perspective and which is not, and can be safely deleted. This will limit unnecessary data storage and will protect the citizens' privacy right." (http://www.addpriv.eu/)

The Ethics Board of INDECT argues quite similarly: "The value that will be added by deployment of INDECT research outcomes is that existing systems would operate with less human intervention, which will lower the level of subjective assessment and the number of human mistakes. This means less staff will be required for supervision of surveillance activities (e.g. monitoring of CCTV camera networks). This will resulting (sic!) in less opportunities for illegitimate use of such information, or for human error to result in violations of the rights of the individual." (http://www.indect-project.eu/)

Private industry also tries to justify smart CCTV by referring to privacy enhancement: "Established in 2003 Smart CCTV Ltd. is a value added reseller and systems integrator, specialising in the newly emerging market of video analytics. Our aim is to use this technology to improve peoples lives by making them safer, reducing crime and anti social behavior. Historically CCTV has been seen as an intrusive technology watching people going about their daily lives, Video Analytics ensures that unless someone is acting a predetermined threat manner the fact that they are in the view of a camera is never available to a human operator." (http://www.hellotrade.com/smart-cctv/profile.html)

In the EU Parliament, Paweł Robert Kowal justifies INDECT by referring to the report of the Ethics Board: "The ethical review of the INDECT project had a positive outcome, and no infringements relating to the project's ethical aspects were identified", and he also mentions the privacy aspect: "The INDECT system is intended to identify threats by means of monitoring, including, in particular, pornography, arms trading and trafficking in drugs and human organs, and it is also intended to protect data and privacy." (EP Debates) [5]

In summary, reduced privacy intrusion is considered one of the important positive features of smart CCTV. So apart from the increased functionality, smart CCTV is also considered to provide ethical benefits.

3. Why is the expansion described in section 2 a problem?

Smart CCTV seems to be more cost-effective than ordinary CCTV and even to protect privacy. But we want to show that:

  1. smart CCTV counteracts privacy enhancement because it could lead to an expansion of CCTV and thus to a higher degree of privacy intrusion in total.
  2. smart CCTV poses new risks to privacy.
  3. smart CCTV leads to serious ethical problems other than privacy.

3.1 In total: more privacy intrusion

The number of persons made identifiable by smart CCTV depends on the number of persons classified as suspicious. This in turn depends on what smart CCTV is supposed to detect and on the number of false positives. If only very specific and easily identifiable behavior has to be detected by smart CCTV, the behavior of just a small number of persons will lead to an alarm or to the highlighting of a scene. An example might be using smart CCTV to prevent people from entering a high security area. In this case an expansion of CCTV would be unlikely to lead to an expansion of privacy intrusion. But if behavior that is quite common and hard to identify is to be detected, we might see a significantly higher number of possible person identifications if (smart) CCTV expands. This might be the case if anti-social behavior or very vague criminal intentions are to be identified. In such applications more false positives will occur.
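
A rough back-of-the-envelope calculation can illustrate this point (all numbers below are assumptions chosen for illustration, not empirical values): when the target behavior is rare, even a seemingly low false positive rate makes far more uninvolved than involved persons identifiable.

```python
# Illustrative base-rate calculation with assumed numbers (not measurements).
persons_per_day = 100_000        # people passing the monitored area per day
prevalence = 1 / 10_000          # share actually showing the target behavior
sensitivity = 0.9                # assumed true positive rate of the detector
false_positive_rate = 0.01       # assumed share of ordinary behavior flagged

true_events = persons_per_day * prevalence                               # 10 persons
flagged_correctly = true_events * sensitivity                            # ~9 persons
flagged_wrongly = (persons_per_day - true_events) * false_positive_rate  # ~1000 persons

print(f"correctly flagged: {flagged_correctly:.0f}")
print(f"wrongly flagged:   {flagged_wrongly:.0f}")
# Under these assumptions, roughly 99% of the people made identifiable
# by the system are uninvolved.
```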

The privacy problem of (ordinary and smart) surveillance is not only that people are identifiable. People might also feel controlled. This could constrain their decisions and behavior. Successful surveillance not only helps the operators to see unwanted behavior but also stops people from acting "suspiciously". Even if no one is highlighted by the smart CCTV system and thus no one is identifiable, people might want to be left alone by the surveillance system in order to decide and act without having to bear in mind that they are being watched and controlled (this kind of privacy is strongly connected with normalization). This effect is a lot stronger if cameras watch every step you take. It makes a huge difference whether single spots are monitored or almost every public spot. Surveillance can gain a new quality through quantity. This kind of CCTV expansion is only likely if all these cameras can be monitored (by computers in smart CCTV) and justified to the citizens (by emphasizing that almost only "bad people" will be identifiable).

Another problem of smart CCTV is the danger of future privacy infringements. Even if introduced as blinkered smart CCTV, the danger remains that in the future or in other areas of application data might be stored and analyzed. This is of course a greater risk in authoritarian regimes, but it is also a problem of the private use of smart CCTV - for both security and commercial purposes. The more (smart) CCTV systems are in use, the higher the risk.

3.2 New risks to privacy

By using smart CCTV, new information about specific persons can be obtained that a human operator could not have gained. Gathering this information includes motion tracking and movement analysis, identification of behavior, object detection, gesture and facial expression analysis, and analysis of interactions. Human operators are limited in their attention. Computers can direct their attention at several targets and features at the same time. No stumble, no smile, and no handshake has to be missed or left unanalyzed anymore. Therefore smart CCTV might lead to a situation in which fewer people are identifiable, but once a person is at the focal point of the smart system and thus identifiable, a lot more personal information is collected. Furthermore, even if one passes uncontrolled, this will be based on a lot of personal information: even the classification as unsuspicious requires the generation and processing of a lot of data, the use of which has to be carefully controlled.

Combining data allows new information to be obtained, which is a threat to privacy. In projects like Tapewire or INDECT the combination of data is intended. Translation problems might occur (see section 3.3.1). In order to reduce false negatives and false positives, the smart system has to know as much as possible about the monitored persons and their context. This is the main reason why many projects (like INDECT) try to combine visual data with previously obtained data or data from social networks. The best working smart CCTV knows everything. Therefore the technology has an inherent drive towards privacy intrusion.

Many risks to privacy might be controlled by law. However, this cannot address the huge potential of and interest in smart CCTV for "dual use" in military applications and its proliferation in authoritarian regimes (see section 3.3.4 below).

3.3 Other ethical problems

3.3.1 Translation

The cognitive systems used in smart CCTV need information about the area and the persons under surveillance. This information is either provided by experts or generated during a "training" step for the statistical models. Obviously, different information leads to different output of the system. Thus, the data generated by smart systems can only be understood in the context of the particular system used. Especially when people are identified, the reuse of this data in other contexts (by security employees or other cognitive systems) can be problematic, since it might be hard to tell what "being in the database" in general, or more specifically "being in the database with property x", means exactly. This can lead to rash judgments.
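
The following sketch (with a hypothetical feature, baselines and threshold chosen only for illustration) shows how the same observed behavior can receive opposite labels from two systems configured for different contexts, which is why a label like "suspicious" cannot simply be transferred from one system to another:

```python
# Hypothetical illustration of the translation problem: the meaning of a
# label depends on the context-specific baseline each system was set up with.

def label(loitering_seconds: float, baseline_seconds: float) -> str:
    """Flag behavior that deviates strongly from the locally 'normal' duration."""
    return "suspicious" if loitering_seconds > 3 * baseline_seconds else "unsuspicious"

observation = 240  # the same person lingering for four minutes

# System A was configured for a train platform, where waiting is normal.
print(label(observation, baseline_seconds=120))  # -> "unsuspicious"

# System B was configured for a bank entrance, where lingering is rare.
print(label(observation, baseline_seconds=30))   # -> "suspicious"

# A stored record saying "flagged as suspicious" is therefore only meaningful
# relative to the system (and its baseline) that produced it.
```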

3.3.2 Transparency

The monitored people do not know what behavior the cognitive system expects in order not to classify them as suspicious. The technology's reaction to filmed behavior is not fully known and, even if it were, it would be hard for the monitored people to understand. They would have to know a lot of technical details to grasp how the cognitive system interprets data. For most people a smart CCTV system is like a complicated black box that nevertheless has a potentially strong impact on their lives. This could lead to a fear of behaving suspiciously and to chilling effects on public behavior and movement. In that way, liberty might suffer even more than with standard CCTV, where a common understanding of what is considered suspicious can be presupposed between personnel and the people under surveillance.

The translation problem (section 3.3.1) might foster the transparency problem, because people do not know what will happen with their data and how it will be interpreted in different systems.

A lack of transparency is also a problem for democratic decision processes. Stakeholders can hardly participate in the decision-making process or come to informed decisions if they do not know how smart CCTV works and what effects it will have. Because of the complexity of smart CCTV, this can even be true for professional politicians.

3.3.3 Normalization

Smart CCTV recognizes suspicious behavior either by statistical deviance from normality or by explicitly defined patterns. The normality thus defined is the basis for bringing diverging behavior to the attention of security personnel. This attention can lead to further actions. In this way a technologically implemented normality of behavior is expected and enforced. This normalization can be very suppressive. This is especially the case in contexts with a plurality of social groups and highly dynamic behavior. Smart CCTV systems need to be highly adaptive and regularly monitored so as not to suppress free action in such contexts. Privacy could be understood as a protection from this kind of social pressure.
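
As a minimal illustration of the first approach (the feature and threshold below are our assumptions; real systems use far richer models), anything rare in the training data becomes "suspicious" by construction, regardless of whether it is harmful:

```python
import statistics

# Illustrative "deviance from normality" detector: walking speeds (in m/s)
# observed during a training phase define what the system treats as normal.
training_speeds = [1.2, 1.3, 1.1, 1.4, 1.25, 1.35, 1.15, 1.3]

mean = statistics.mean(training_speeds)
stdev = statistics.stdev(training_speeds)

def is_deviant(speed: float, threshold: float = 3.0) -> bool:
    """Flag anything more than `threshold` standard deviations from the mean."""
    return abs(speed - mean) / stdev > threshold

print(is_deviant(1.3))  # walking at the learned pace -> False
print(is_deviant(3.5))  # running through the scene   -> True
print(is_deviant(0.0))  # standing still              -> True

# Whatever was rare during training is flagged, whether or not it is harmful:
# the system enforces the statistical normality it has learned.
```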

3.3.4 Authoritarian regimes/dual use

Smart CCTV is perfectly suited for military use and for repression in authoritarian regimes. Since it is a precondition for good functionality (as argued above) that the system can be adapted to the context it is used in, it cannot by design be restricted to civil use.

The "smart" component of a Smart CCTV system is usually just a software program that runs on standard PC hardware. Together with widely available digital cameras - also as consumer products - this forms a complete Smart CCTV system. This makes the control of proliferation hardly possible.

Literature:

Amelung, U (2002), Der Schutz der Privatheit im Zivilrecht (Tübingen: Mohr Siebeck).

Gouaillier, Valérie (2009), Intelligent Video Surveillance: Promises and Challenges. Technological and Commercial Intelligence Report (Montréal: Centre de recherche informatique de Montréal).

Kroener, K, Neyland, D (2011), D 1.3 - Preliminary Scoreboard for Evaluation of System Compliance with Privacy (online). Retrieved August 10, 2012, from http://www.addpriv.eu/uploads/public%20_deliverables/146--ADDPRIV_20113107_WP1_LANCASTER_P_scoreb_R1.pdf.

Nagenborg, Michael (2005), Das Private unter den Rahmenbedingungen der IuK-Technologie (Wiesbaden: VS Verlag).

Rössler, Beate (2001), Der Wert des Privaten (Frankfurt a. M.: Suhrkamp).

Van den Hoven, Jeroen (2009), Information Technology and Moral Philosophy. Philosophical Explorations in Computer Ethics (Cambridge: Cambridge University Press).

Macnish, K (2012), 'Unblinking Eyes: The Ethics of Automating Surveillance', Ethics and Information Technology 14(2).



[1] IZEW, Germany.

[2] IZEW, Germany.

[3] IZEW, Germany.

[4] For the shift in German law towards the protection of autonomy see Amelung (2002).

[5] Retrieved August 10, 2012, http://www.europarl.europa.eu/sides/getDoc.do?type=CRE&reference=20110608&secondRef=ITEM-007&format=XML&language=EN