The Case for Public Surveillance
We need more cameras with facial-recognition software operating in public spaces
To understand how confused our intuitions about facial-recognition technology are, start with popular culture; start with Batman.
In Christopher Nolan’s 2008 film, The Dark Knight, the Joker is loose in Gotham City. He has committed a string of atrocities and is poised to commit more. Batman, determined to apprehend him, deploys a surveillance system across Gotham. By scanning the entire city, the system will identify the Joker’s face and voice, and pinpoint his location. Batman’s ally Lucius Fox is impressed but appalled. “Beautiful. Unethical. Dangerous... This is wrong.”
Batman curtly defends his actions. “I’ve got to find this man, Lucius.”
Fox snaps, “But at what cost?”
Screencaps from The Dark Knight, Warner Brothers, courtesy of Stephenie Magister
Fox’s morals on this point seem to be flexible, since it was Fox himself who developed the technology and used it earlier in the film to help catch a money launderer. He’s willing to use it personally to help catch the Joker too. “I'll help you this one time. But consider this my resignation. As long as this machine is at Wayne Enterprises, I won't be” [emphasis mine]. And indeed, one of the final shots of the film is Fox, having ensured the Joker’s capture, smiling as he destroys the surveillance system.
One of the themes of the film is how exigent circumstances can encourage people to betray their principles. Most of the movie’s characters face a personal test, and this is Fox’s: he believes that facial-recognition surveillance technology is wrong, but he does the wrong thing because it will lead to good outcomes. Using this tech is impermissible… unless it’s to catch a money launderer. Or a notorious terrorist. Then, it seems, it’s okay. But under any other circumstance, it wouldn’t be; which is why the system must be destroyed, to prevent the temptation of its use.
The Dark Knight illustrates how confused we are about public surveillance and facial-recognition technology. It’s obviously useful, but it’s illegitimate, and it should never be used, except in this instance when it’s okay.
In the real world, this confusion appears wherever you care to look:
Earlier this year, the state of New York passed, without debate, a law forbidding New York City’s MTA from using facial-recognition technology to combat fare evasion, a problem that cost the MTA $700 million the year before. One lawmaker justified the ban as a stand against “the criminalization of just existing within the public sphere.”
To protect attendees of Taylor Swift’s 2024 Paris concerts, the police deployed AI-powered video surveillance tech to La Défense-area metro stations to identify terrorists, as was also done later that year to protect the Paris Olympics. This was justified on the grounds that it did not employ facial-recognition technology, merely observation, classification, and reporting of behaviours, but critics were unmollified.
Speaking of Taylor Swift, at the 2018 Rose Bowl where she performed, kiosks displayed exclusive rehearsal footage; those who interacted with the kiosks were scanned with facial-recognition tech and the results sent to the police, to identify known stalkers. Ticketmaster’s chief product officer said “it’s hard to argue with the value proposition.”
These confusions are real. Lucius Fox isn’t, but if he were, I’d tell him that his instincts are right: surveillance of public spaces, empowered with AI and/or facial-recognition technology, is a problem, but only because we haven’t made it clear how, and under what circumstances, it should be done. We should set those rules. Having done that, we should go ahead and deploy this technology widely, because his instincts are right about this too: it’s a valuable and useful tool that would, on the margin, make us all better off.
You’ve Been on Candid Camera for Years
The first thing to note is that we already live in a surveillance society.
The world passed its one-billionth CCTV camera sometime in 2021. Put another way, that’s one camera for every eight human beings. They aren’t deployed evenly, of course: the concentration is even higher in developed nations, with China having one camera per 4.1 people and the US one per 4.6 people. Cities are particularly surveilled: while Toronto has 10 cameras per square kilometre and New York has 26, London has a staggering 399. Many cities in Asia have even higher camera densities.
Many of these sensors monitor private property, but public spaces are well covered too. Airports, border crossings, transit hubs, and government buildings all feature cameras. So too do prominent gathering spaces: both Washington, D.C.’s National Mall and Ottawa’s Parliament Hill precincts are monitored by surveillance cameras.
Those are just the cameras that are permanently installed. Mobile cameras are everywhere as well. I drive a Tesla, and whenever I park it away from home, it records everything that happens nearby. Most Tesla vehicles do the same. And of course, everywhere I go, I carry an easy-to-use, high-fidelity video camera in my pocket; I imagine you do too—as of last year, more than half of the global population owns a smartphone.
So cameras are everywhere. Between Google Image Search, AI, and other internet tools, identifying strangers from video is relatively easy, and getting easier all the time. The Vancouver police service has been publishing images of rioters and crowd-sourcing their identities for decades; for that matter, in a more rudimentary fashion, the FBI has been doing it for seventy-five years, through their ‘Most Wanted’ lists.
Facial-recognition technology circa 1999
So, as a society, we seem to accept that cameras may be recording us wherever we go, particularly in public spaces. We seem to accept that the footage may be used by computers to identify us. And we seem to accept that the prospect of being recorded deters crime and disorder: if that weren’t true, Amazon wouldn’t sell hundreds of models of non-functioning cameras purely as security theatre.
Given all of this, it seems strange to worry about tech-enabled surveillance in public. It’s a tool we already use, inconsistently and in piecemeal fashion, to deter terrorism and crime; if we used it consistently and openly, we’d get more of those benefits.
And we need them, given the high tide of disorder currently affecting our urban public spaces. Many American pundits have opined recently that the willingness of progressive cities to tolerate abuse of public spaces like parks and subway cars has tested residents’ patience to the limit, to the extent of swinging national elections: here are Matt Yglesias, Josh Barro, and Noah Smith, to cite only three. Yglesias, for his part, notes not only that liberalism is, and should be, compatible with maintaining public order, but also that, if you oppose the carceral state, you should be open to deterrence measures: “Tools like surveillance cameras, DNA evidence, and facial recognition software that make it less likely people will get away with crimes reduce the amount of crime that happens, which ultimately is the sustainable route to less incarceration.”
Put another way, who is it that relies on public spaces and services? It’s vulnerable populations. When we fail to make these spaces safe, the well-off can retreat into comfortable private ones: personal vehicles and shopping malls. Everyone else must endure. If we can help prevent abuse of public spaces by using the tools at our disposal, we will help everyone, but especially the least-well-off. I suppose that John Rawls would approve.
Having said that, I am painfully aware that arguments against all this are ready to hand; The Dark Knight is only one in a long line of cultural works to voice concerns about computer-enabled surveillance. Let’s address those arguments one by one and see how much weight to give them.
Privacy, or “The Panopticon Was a Prison”
The privacy argument against surveillance, especially when linked to tech-enabled facial recognition, is that people have an inherent right to anonymity in public spaces. Yes, others may see us in public, but even so we shouldn't be identified, recorded, or tracked without either our consent, or an expectation that we will engage in wrongdoing… an expectation that must be established, to a high standard and in advance.
This argument against facial-recognition technology conflates two distinct concepts: the right to privacy and the expectation of privacy. We have strong rights to privacy in our homes and personal communications, but no such right in public, and our expectation of privacy in public spaces has always been limited. When we step onto a public street, we implicitly consent to being seen—and potentially recorded—by others.
A privacy advocate might say that there’s a difference between being seen and being identified. But as I mentioned, the Vancouver police have been using photos to crowdsource the identities of rioters since the 1990s. And indeed, police forces around the world have gear on their cars that proactively scans the license plates of passing cars, identifies the vehicles, and checks for outstanding warrants. The new age of AI and facial-recognition tech doesn’t change the principles at work here; it only makes them more efficient. It automates a process we've long accepted as legitimate.
It is true that, in the past, the constraints of human effort and attention meant there were limits, both to who could be identified and to how long that data was kept. Automation relaxes those constraints. The solution isn't to ban the technology but to recreate those limitations through policy (about which more in a moment). We need guardrails, not roadblocks.
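To make that concrete, here is a minimal sketch, in Python, of what encoding those guardrails might look like. Everything in it is illustrative: the face embeddings are assumed to come from some upstream recognition model, and names like RETENTION_WINDOW and MATCH_THRESHOLD are hypothetical placeholders for limits a regulator would actually set.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical policy limits, encoded as reviewable data rather than
# left to operator discretion. The actual values would be set by law.
RETENTION_WINDOW = timedelta(days=30)  # detections older than this are purged
MATCH_THRESHOLD = 0.92                 # similarity required to flag a match

@dataclass
class Detection:
    embedding: list[float]  # face embedding from an upstream model (assumed)
    captured_at: datetime
    camera_id: str

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def purge_expired(detections: list[Detection], now: datetime) -> list[Detection]:
    """Enforce the retention limit: the automated equivalent of a human
    analyst's limited memory and attention."""
    return [d for d in detections if now - d.captured_at <= RETENTION_WINDOW]

def match_watchlist(d: Detection, watchlist: dict[str, list[float]]) -> str | None:
    """Flag a detection only if it clears the mandated confidence bar."""
    for person_id, reference in watchlist.items():
        if cosine_similarity(d.embedding, reference) >= MATCH_THRESHOLD:
            return person_id
    return None
```

The point of the sketch is that the limits live in reviewable, auditable configuration rather than in any operator's discretion: exactly the property that human-scale surveillance never had.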
Mass Surveillance, or “Big Brother Is Watching You”
The Orwellian argument against mass surveillance and facial recognition is simple: constant surveillance changes how people behave, making them less free and spontaneous even when they're doing nothing wrong. The fear that widespread surveillance will create a chilling effect on public life is understandable but not supported by evidence; we have a case study to hand. In a real-life echo of 1984, London is riddled with cameras, at nearly 400 per square kilometre, and yet despite (or perhaps because of) this, it remains a vibrant city of public gatherings, political protests, and street life.
The key difference between 1984 and 2024 is that the surveillance we’re considering has a clearly stated purpose and is, or should be, constrained by law and regular audits. It’s a tool for public safety rather than social control; again, appropriate use of policy is the critical feature. I tend to think that Orwell would have approved.
Accuracy and Bias, or “Brazil Was a Cautionary Tale”
In Terry Gilliam’s Brazil, an insect falls into a machine—literally, a ‘bug in the system’—which leads to an arrest warrant being misprinted and issued for the wrong person. Tragicomedy ensues. The bug in our systems today is algorithmic bias: insufficient, corrupt, or inherently biased training data leads to error, such that some populations, often women and people of color, suffer persistently higher error rates. This is a serious issue that requires attention.
But if bias in machine decision-making concerns you, wait till you hear about bias in human decision-making.
As the quip has it, don’t compare these systems to the Almighty, compare them to the alternative. Both human recognition generally and traditional policing specifically exhibit documented patterns of bias. And human bias, unlike machine bias, is very hard to spot or fix.
Consider New York State’s decision to proactively ban the use of facial recognition to sanction fare evasion. In its absence, transit officers will have to rely on their judgment and attention span to spot violations. That judgment and attention will be influenced by their own biases and limitations. How could it not? An automated system might be imperfect initially, but it can be tested, its results measured, its outputs improved. Banning these systems doesn’t eliminate bias: it just keeps it human and less accountable.
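What does “tested and measured” mean in practice? Here is a hypothetical sketch in Python: given a labelled evaluation set, it computes each demographic group’s false-match rate, the metric most often cited in studies of facial-recognition bias. The data format is invented for illustration, but the audit itself is routine; no comparable measurement exists for an individual officer’s judgment.

```python
from collections import defaultdict

def false_match_rates(results):
    """Compute the false-match rate per demographic group.

    `results` is an iterable of (group, predicted_match, actually_matched)
    tuples from a labelled evaluation set (format assumed for illustration).
    A persistent gap between groups is measurable bias, and because it is
    measurable, it can be tracked, reported, and corrected over time.
    """
    false_matches = defaultdict(int)
    non_matches = defaultdict(int)
    for group, predicted, actual in results:
        if not actual:             # ground truth: not the person in question
            non_matches[group] += 1
            if predicted:          # but the system said it was: a false match
                false_matches[group] += 1
    return {g: false_matches[g] / non_matches[g] for g in non_matches}

# Example: an audit of four trials reveals a gap worth investigating.
sample = [("A", True, False), ("A", False, False),
          ("B", False, False), ("B", False, False)]
print(false_match_rates(sample))  # {'A': 0.5, 'B': 0.0}
```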
Oversight, or “Who Watches the Watchmen?”
Give the security state power, and it will abuse it. For many, that’s a truism. And it follows that if we give the security state the power to surveil us in public, and link that surveillance to efficient, automated facial recognition, that power will be abused, to harass or punish those who haven’t committed a crime. And therefore, the argument goes, this power should be banned in advance. That’s how I make sense of New York’s flat ban on the MTA’s use of such technology.
As a liberal, I have some sympathy for this position. I believe in the power of the state to make everyone better off, and I believe that state power should be thoroughly constrained by respect for civil liberties: which is to say law, regulation, and policy, agreed upon and enforced by the people’s representatives.
But as a techno-optimist, I note that the preference for stasis, and the refusal to use the capabilities technology offers, has led us to where we are: poor in housing, poor in transit service, unable to meet the challenges of our day.
That’s why the current careless, incoherent stance on the use of facial-recognition tech annoys me. Our deployments are piecemeal, our bans are intermittent and inconsistent with widespread private use, and our governance is lacklustre. What we need is policy that makes it clear what the limits are, and then gets out of the way.
The Privacy Commissioner of Canada's (PCC) guidelines for video surveillance in public are almost twenty years old, but they offer a strong foundation. They emphasize the need for clear policies governing use, regular audits by independent bodies, and transparent public reporting. More importantly, they recognize that surveillance technology requires ongoing oversight.
In The Dark Knight, Batman gave control of his surveillance system to Lucius Fox precisely because Fox feared its indiscriminate use. In the real world, we also need institutional checks and balances that will govern its use over time. We should establish independent oversight bodies that can audit systems, investigate complaints, and mandate change when needed. These bodies should include technical experts who understand the technology, civil rights advocates who understand the risks, and public-safety officials who understand the operational needs. Their proceedings should be public, their findings binding, and their authority backed by law.
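What might auditable access look like mechanically? Below is one hypothetical approach, sketched in Python: every time footage is viewed, a record is appended to a hash-chained log, so an oversight body can later detect deletions or edits. The schema and file format are invented for illustration, not drawn from any real system.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_access(log_path: str, officer_id: str, footage_id: str,
               justification: str) -> None:
    """Append a tamper-evident record of a footage access.

    Each entry embeds a hash of everything logged before it, so any
    after-the-fact deletion or alteration breaks the chain and is
    visible to an auditor.
    """
    try:
        with open(log_path, "rb") as f:
            prev_hash = hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        prev_hash = "genesis"  # first entry in a new log
    entry = {
        "at": datetime.now(timezone.utc).isoformat(),
        "officer": officer_id,
        "footage": footage_id,
        "justification": justification,  # required: no silent lookups
        "prev": prev_hash,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: every lookup leaves a trail an independent body can inspect.
log_access("access.log", "officer-1234", "cam-07/2024-12-04",
           "homicide case 88-213")
```

Combined with mandatory public reporting of aggregate access statistics, this is the kind of guardrail the PCC guidelines gesture at.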
The PCC also suggests that the public should be clearly informed that an area is being monitored. I would go further and suggest that this information should include what data is being collected; why; and how to access it, if you choose. I once worked for a firm that tried to develop public standards that would achieve just this; that effort has since taken on a life of its own as Digital Trust for Places and Routines (DTPR), “an open-source communication standard to increase the transparency, legibility and accountability of digital technology in the built environment”. DTPR is another excellent foundation on which to build our governance of public surveillance.
The proliferation of cameras and facial recognition technology isn’t an idle future; it’s the present day. Late in my drafting process for this piece, there was an assassination in New York City, and the subsequent police investigation demonstrated both the power and limitations of our current surveillance capabilities. I recommend reading this article in full, as it demonstrates precisely where the surveillance state is today. Thanks to the plethora of cameras in Manhattan, investigators were able to trace a killer’s movements across a city of millions. But they relied on a patchwork of public and private footage; faced crucial gaps in coverage that meant the trail was lost; and employed facial-recognition software that can only identify people who’ve been previously arrested. Ultimately, the alleged assassin was apprehended in a Pennsylvania restaurant, thanks to another patron who recognized his image from publicized surveillance imagery and called the police.
What's striking about this case is how it exemplifies both the concerns and the benefits we’ve considered. The investigation’s integration of public and private cameras, its methodical reconstruction of movements, its careful preservation of evidence, and its institutional oversight all suggest that many of the guardrails I’ve proposed are already in place; or rather, that what has emerged in an ad hoc way could be formally instituted.
Yet the investigation also reveals how we are living in a worst-case world: we have, simultaneously, too much surveillance for a society committed to privacy and liberty, and too little for a society committed to public safety and order.
Widespread surveillance, empowered by AI and facial recognition, is our reality today. I hope that we recognize its value and its risks, and build it out further, even as we build policy frameworks that ensure it serves the public good. This means clear rules about usage, regular audits, transparent reporting, and independent oversight. In The Dark Knight, Lucius Fox destroys a surveillance system rather than oversee its use. We can do better. With appropriate governance, we can harness these tools to protect public spaces while preserving civil liberties. As recent events have shown, the need is pressing.