Face Recognition and Privacy

A brief review of the various face recognition technologies, links to recent examples of how they are used, some thoughts on the privacy issues they raise, and some of the privacy-enhancing technologies that can be applied. If systems are not designed for privacy and security, information providers (the people) will try to autonomously balance information asymmetry.

Face recognition¹ is a term that describes many different operations applied to digital still pictures or video streams. Some of them may raise privacy issues, while others may be used to enhance people's privacy.

Face detection


Face detection is used to automatically detect or isolate faces from the rest of the picture and, for videos, to track a given face or person across the flow of video frames.

Most smartphones and digital cameras use face detection, often complemented with smile detection, to help you take nicer pictures.

These algorithms do not in themselves represent a privacy issue, since they only spot a face in a photo or video. Face detection algorithms do not really recognize anybody; they only notice that there is a face somewhere in the scene. They may even be used to enhance privacy, for instance by detecting faces in photos taken in public so that they can be blurred before publication. That's what Google Street View does: it automatically detects the faces of passers-by (and license plates) and blurs them. There's also a special app for activists (SecureSmartCam) that performs automatic obfuscation (i.e. automated face blurring) on photos taken at public protests, to protect the identity of the protesters.
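As a minimal sketch of this privacy-enhancing use, the following Python snippet detects faces with OpenCV's bundled Haar-cascade detector and blurs them before a photo is published. The file names and detector parameters are illustrative assumptions, not taken from any of the tools mentioned above.

```python
# Minimal sketch: detect faces and blur them before publishing a photo.
# Assumes OpenCV (cv2) is installed; "street.jpg" is a hypothetical input file.
import cv2

img = cv2.imread("street.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Load the Haar-cascade face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Detect faces; scaleFactor/minNeighbors are typical, not tuned, values.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Replace each face region with a heavily blurred copy of itself.
    img[y:y + h, x:x + w] = cv2.GaussianBlur(img[y:y + h, x:x + w], (51, 51), 0)

cv2.imwrite("street_blurred.jpg", img)
```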

However, this technique is used not only to detect faces, but also to gather additional information, such as age and sex. These data are used in digital signage (video billboards) to display targeted ads: ads appropriate to the age and sex of the people watching, or to their mood (whether they look happy or sad). The worrying part of face recognition applied to digital signage comes when billboards not only gather but also retain information, remembering faces in order to recognize returning visitors and engage in interaction with them. But this is no longer simple face detection: it is face matching.

Face matching


Face matching automatically compares a given face with other images in some archive and selects those where the same person is present.

This technology is based on several sophisticated biometric techniques, some of them three-dimensional: it can match, quite reliably, any face detected in a still picture or a video stream against a database of already known faces.
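To make the mechanics concrete, here is a minimal sketch of matching one probe face against a small archive, using the open-source face_recognition library. The file names, the tiny archive, and the 0.6 distance threshold are illustrative assumptions.

```python
# Minimal sketch: match one probe face against a small archive of photos.
# Assumes the open-source `face_recognition` library (built on dlib);
# file names and the 0.6 distance threshold are illustrative.
import face_recognition

# Compute a 128-dimensional template ("encoding") for the probe face.
probe = face_recognition.load_image_file("probe.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

archive = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]  # hypothetical archive
matches = []

for path in archive:
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        # Smaller distance = more similar faces; 0.6 is the library's default cut-off.
        if face_recognition.face_distance([encoding], probe_encoding)[0] < 0.6:
            matches.append(path)
            break

print("Pictures that seem to contain the same person:", matches)
```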

Face matching is often used by surveillance services to monitor private and public places. It is now easy to build a database of visitors to courthouses, stadiums, malls, transport infrastructure or airports (where this is called "border processing"), collecting pictures from surveillance cameras and automatically building an entry for each visitor at the gates, sometimes combined with other biometric information such as an iris scan. It may also track the presence of people in various parts of a building. The record of a person's visits may or may not be associated with personal data (the places visited in the building) or with her real identity, but every time that person comes back, she will be recognized as a former visitor.

There are several issues with face matching. Privacy concerns are twofold. On the one hand, face matching can be used beyond its intended purpose. If face matching is absolutely necessary for security reasons, it has to be used only for security purposes, not for marketing or for gathering information on personal habits. Moreover, collected data has to be kept only for as long as strictly necessary, users have to be informed, and great attention must be paid to data security, so that the data cannot be stolen or leak outside.

On the other hand, the wealth of publicly available pictures on the web poses a second issue: starting from a single picture and using face matching software, it is possible to build a collection of images belonging to a single person. A "face matching search engine" is not available to the public, but it is now absolutely feasible to apply face detection and matching to huge picture and video repositories (such as Flickr, Picasa and YouTube) or to social networks. This is what was demonstrated in an unreleased prototype smartphone application called "Augmented ID" or "Recognizr" from the Swedish company The Astonishing Tribe. Other companies are developing similar products that are planned for release. To preserve privacy, such apps must be designed to allow tagging only of consenting people who are already part of your social network.
The privacy issues with such products, if they are not designed for privacy, are huge: indiscriminate face matching would allow anyone to take a picture of you with a cellphone and then match it against the wealth of pictures they can find online: it would be a stalker's paradise. The "creepiness" of such a service has been acknowledged by Google's Executive Chairman Eric Schmidt.

Another issue is not privacy-related but is very serious. It has to do with matching errors called "false positives", in which a match is found between pictures of different people as if they were the same person. What happens if your face is mistakenly associated with someone else's? What happens if you are unlucky enough to closely resemble some fugitive criminal and try to enter a courthouse, or get spotted by one of the many law-enforcement cameras? Or enter a casino while looking like a recognized problem gambler? Your life could quickly become a nightmare, being denied entrance to buildings or even risking arrest.

 To summarize, while face matching could be useful in restricted contexts, especially where security is a major issue, widespread use of face matching can become a real threat to privacy and personal security.

Face identification

Face identification allows someone to be identified, manually or automatically, by linking pictorial personal data (e.g. a face) with textual data (e.g. a name).

Automatic identification requires that the matched face is already linked in a database with some personal identity data, such as the person's name, personal website, social network homepage, social security number, driving license or other data that uniquely identifies that person.

Manual identification happens when identification data is either provided by the person herself (for instance through voluntary enrollment via a registration form in front of a camera), or is provided by someone else who knows that person. This is precisely what happens with "tagging": by (manually) tagging someone's face with her name in Facebook or Picasa, you make the automatic identification of that person in other pictures possible. Facebook and Picasa already apply automatic face matching to faces that have been manually identified (tagged). In this context, tagging someone and sharing the tagged picture without consent is a major privacy violation.
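A rough sketch of why a single manual tag is enough to enable automatic identification elsewhere: the names, file names and threshold below are purely illustrative, and the face_recognition library merely stands in for whatever matcher a real service uses.

```python
# Minimal sketch: one manual tag turns face matching into identification.
# Assumes `face_recognition`; names, file names and threshold are illustrative.
import face_recognition

# Manual step: someone tags "Alice" in one photo, storing her name + face template.
tagged = face_recognition.load_image_file("party_tagged.jpg")
known_people = {"Alice": face_recognition.face_encodings(tagged)[0]}

# Automatic step: any new photo can now be searched for the tagged person.
new_photo = face_recognition.load_image_file("school_event.jpg")
for encoding in face_recognition.face_encodings(new_photo):
    for name, known in known_people.items():
        if face_recognition.face_distance([known], encoding)[0] < 0.6:
            print(f"{name} appears to be in school_event.jpg")
```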

Privacy issues can also emerge from unintended use of identified faces: pictures linked to personal pages in social networks can be used to identify people in other contexts, where anonymity is preferred. You may like it when your friends on Facebook tag your face in party pictures while you are at school, but you may not appreciate it if that information leaks outside, to be later matched against the picture you sent to a potential employer along with your resume.

It's important that firms running social networks and their users understand that aspect of pictorial data privacy.

 

Identity verification

Identity verification makes it possible to automatically match and identify any new image of a face that has previously been identified.

For instance, certain computer operating systems allow biometric identity verification ("face recognition") instead of username/password authentication using pictures taken by the computer's webcam.
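Such a login scheme essentially reduces to comparing a freshly captured image against a numeric template stored at enrollment. A minimal sketch, again assuming the face_recognition library and illustrative file names and threshold:

```python
# Minimal sketch: verify a webcam capture against a template stored at enrollment.
# Assumes `face_recognition` and NumPy; file names and threshold are illustrative.
import numpy as np
import face_recognition

# Enrollment (done once): store a numeric template, not the picture itself.
enrolled = face_recognition.load_image_file("enrollment.jpg")
np.save("user_template.npy", face_recognition.face_encodings(enrolled)[0])

# Verification (at login): capture a new image and compare it to the template.
template = np.load("user_template.npy")
capture = face_recognition.load_image_file("webcam_capture.jpg")
encodings = face_recognition.face_encodings(capture)

if encodings and face_recognition.face_distance([template], encodings[0])[0] < 0.6:
    print("Identity verified: access granted")
else:
    print("Verification failed: falling back to password")
```

Note that, as discussed below, the stored artifact is a face template rather than a photograph, which is exactly why leaked biometric data remains usable for identification.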

Some firms already use face recognition in their time and attendance systems, keeping track of employees with high-definition surveillance cameras. Schools already use it for attendance and timekeeping among pupils. Once again, this poses no serious threat to privacy if biometric data is used only within the original scope and intent, with people's consent. But any leakage of biometric identification data could be very dangerous, especially since many of these systems are interoperable: they do not have to keep any actual picture of you, only some appropriate "signature" or template associated with your face, and such standardized facial biometric data is sufficient to perform identification on new facial images. It is even conceivable to build a global biometric face recognition database. A face recognition search engine would be a massive weapon of privacy destruction.

Privacy issues

Major privacy issues linked to pictorial data and face recognition can be summarized as follows: 

 (1) unintended use: data collected for some purpose and in a given scope is used for some other purpose in a different scope, for instance surveillance cameras in malls used for marketing purposes; 

(2) data retention: the retention period of pictures (or of information derived from matched faces) should be appropriate to the purpose for which they were collected, and any information has to be deleted once it has expired. For instance, digital signage systems should have a very short retention span, while time attendance systems or security systems have different needs to reach their intended goal.

(3) context leakage: images taken in one social context of life (affective, family, workplace, in public) should not leak outside that domain. Images from social networks should not be matched with those sent to a potential employer as part of a resume, or with the one on your driving license. Following this principle, images taken in public places or at public events should never be matched (without explicit consent), since the public social context (as when I walk along the road) assumes complete anonymity. This is especially true if the public context is likely to be associated with religious, sexual or political orientation, as may happen at political or religious gatherings. In public places, privacy has to be the default.

(4) information asymmetry: pictorial data may be used without the explicit consent of the person depicted, or even without her knowing that the information has been collected for some purpose. For instance, I may have no hint that there are pictures of me taken in public places and uploaded to repositories (such as Flickr or Picasa). As long as the pictures remain anonymous my privacy is largely preserved, but if face matching is applied, privacy contexts break down, and as a consequence someone may easily hold information about me that I do not know myself and that I do not expect others to know.

Pictorial privacy enhancing techniques and technologies

Academic research on privacy-enhancing techniques for face recognition concentrates on identification. One possible approach is to split the matching and identification tasks: one party provides a face image, while another party has access to a database of facial templates [Erkin et al., 2009]; another approach uses partial de-identification of faces [Newton, Sweeney and Malin, 2005]; others try to implement a revocation capability [Boult, 2006]. These techniques aim to design biometric systems for privacy and security, reinforcing people's trust.
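The partial de-identification idea can be illustrated very roughly: pixel-level variants of this approach replace a face with the average of k similar faces, so that the result cannot be linked back to any single individual. The following toy sketch assumes pre-aligned, same-size face crops and illustrative file names; it is a simplification for intuition only, not the algorithm from the cited paper.

```python
# Toy sketch of partial face de-identification: replace a face with the
# average of k aligned faces so that no single identity survives.
# Assumes NumPy and OpenCV; file names are illustrative and the crops are
# pre-aligned and of identical size.
import cv2
import numpy as np

k_faces = ["face1.png", "face2.png", "face3.png"]  # k aligned face crops
stack = np.stack([cv2.imread(p).astype(np.float32) for p in k_faces])

# The averaged face stands in for each of the k originals.
deidentified = stack.mean(axis=0).astype(np.uint8)
cv2.imwrite("deidentified.png", deidentified)
```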

Some attempts have been made to develop opt-out techniques to protect privacy in public places. A quite extreme (and uncertain) technique is the temporary blinding of CCTV cameras. Considering that pixelation is becoming fashionable, one option could be wearing a pixelated hood. Special face-recognition camouflage make-up can be effective, too; a collection of such techniques is available online. These and other obfuscation techniques [Brunton and Nissenbaum, 2011], like posting "wrong" faces online, aim at re-balancing information asymmetry. A rather extreme, but effective, approach is cosmetic surgery.

Against the privacy issues of wearable computing, Stop The Cyborgs proposes a Google Glass ban sign.

In semi-public contexts, such as conferences, parties and gatherings, TagMeNot can be an effective opt-out technique for face recognition.

As a general rule, it seems that whenever systems are not designed for privacy and security and raise issues like unintended use, data retention and context leakage, information providers (the people) will try to autonomously balance information asymmetry.

References

  1. Harry Wechsler, Reliable face recognition methods: system design, implementation and evaluation (Springer, 2007).
  2. Zekeriya Erkin et al., “Privacy-Preserving Face Recognition,” in Privacy Enhancing Technologies, ed. Ian Goldberg and Mikhail J. Atallah, vol. 5672 (Berlin, Heidelberg: Springer Berlin Heidelberg, 2009), 235-253, http://www.springerlink.com/content/wk623747g141r063/.
  3. Elaine M. Newton, Latanya Sweeney, and Bradley Malin, “Preserving Privacy by De-Identifying Face Images,” IEEE Transactions on Knowledge and Data Engineering 17, no. 2 (2005): 232-243.
  4. T. Boult, “Robust Distance Measures for Face-Recognition Supporting Revocable Biometric Tokens,” in Automatic Face and Gesture Recognition, IEEE International Conference on, vol. 0 (Los Alamitos, CA, USA: IEEE Computer Society, 2006), 560-566.
  5. Finn Brunton and Helen Nissenbaum, “Vernacular resistance to data collection and analysis: A political theory of obfuscation,” First Monday, May 2, 2011, http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3493/....

Notes

¹ Please note that the terms used are not always consistent across standards and the scientific literature. Here I loosely follow the terms used in [Wechsler 2007].