Do No Evil, See No Evil, Speak No Evil: Face-to-Face with Ourselves in an AI Battleground


Jessica Poon, International Master's in Security, Intelligence, and Strategic Studies (Erasmus Mundus), University of Glasgow.

Even prior to COVID-19, longstanding dialogues around race and artificial intelligence (AI) have shaped both the development of this technology and the implementation of sophisticated but experimental AI systems. These technologies have already demonstrated their ability to disrupt our daily lives through extensive monitoring. With the Black Lives Matter (BLM) movement and a world transitioning back to a semblance of normality in the wake of a global health crisis, open dialogue around surveillance technologies, their regulation, and their racial biases is now more pertinent than ever. This essay explores the ethical limits of implementing AI in surveillance, supported by evidence from several case studies of consumer-facing technologies from the UK, US, and China.

Keywords: AI, policing, surveillance, technology, bias, race, China, UK, US


Introduction

“Move fast and break things” may have been the favoured position of Big Tech since the early 2000s, but in light of revelations around the use of personal data in recent years, questions have been raised about whether policies surrounding the use of artificial intelligence (AI) in surveillance have gone too far. Interrogating this idea means tracing policy developments amidst the rise of what Jonathan Crary describes as an “appropriation of public spaces and resources into the logic of the marketplace.”1 Continuing in the vein of AI being absorbed into the “logic of the marketplace,” computer scientist Arvind Narayanan positions AI as a suite of technologies: “AI is an umbrella term for a set of loosely related technologies…companies exploit public confusion by slapping the ‘AI’ label on whatever they’re selling.”2 Given the range of AI technologies currently being deployed simultaneously within law enforcement, it would be simplistic to talk about these technologies in isolation. To illustrate the range of forces at play, this commentary, which explores the ethical limits of implementing AI in surveillance, will be supported by evidence from several case studies across a broad spectrum of consumer-facing AI technologies.

The Scope of AI Technologies: Issues and Optimism

Automated collection methods and profiling via facial recognition tools are widely believed to be not only time-effective but more accurate, leading to more streamlined processes of criminal justice.3 The greater accuracy afforded by these automated tools is hoped to pave the way for a crime-free world.4 It is contended here, however, that the implementation of automation within law enforcement and policy initiatives only serves to heighten the hostility of the UK’s surveillance environment. This layer of automation within national CCTV systems is touted as alleviating the burden on current policing efforts, often at the expense of human rights.

The days of spying as manual work are long past. Governments now have an array of consumer apps and technologies open to potential abuse.

Extreme examples can be evidenced in non-democratic states where such technology has been used in law enforcement for a number of years already. Russia’s use of facial recognition to manage its citizens in light of the COVID-19 outbreak, and China’s use of biometric data in policing its population of Uyghur Muslims stand as stark reminders of how AI can be used as a repressive force for state control. Facial recognition automates an already-flawed profiling system, reflecting the inherent biases of its designers and users. In this regard, its application within law enforcement and policing in a democratic context is highly contentious. Without adequate consumer protections, civil liberties are left in a vulnerable position, and if allowed to escalate, rights to protest and other democratic freedoms may be compromised, as we have seen in Hong Kong. 

In this sense, it can be argued that current techno-optimist narratives are advocated by policymakers prematurely. Surveillance policies require substantial stress testing, especially where burgeoning technologies such as facial recognition are concerned, rather than a blind belief by governments that such technologies will inherently deliver liberation within a structurally imperfect justice system. It is difficult to overstate the consequences of promoting such technology in its early iterations, given that policies surrounding civil rights are relatively ill-equipped to accommodate ever-changing digital identities. AI-based profiling relies on its inputs: crime databases record statistically disproportionate numbers of Black, Asian and minority ethnic (BAME) people on account of existing human bias within policing, while computer vision has repeatedly been shown to be far less capable of recognising and matching non-white faces to their live counterparts.5
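To see how such disparities are typically quantified, the following minimal sketch, which uses entirely invented match records rather than any real benchmark, computes a per-group false non-match rate, i.e. the share of genuine same-person pairs that a system fails to match for each demographic group.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, system_said_match, actually_same_person)
# These values are invented for illustration; no real benchmark data is implied.
records = [
    ("group_a", True,  True), ("group_a", True,  True), ("group_a", False, True),
    ("group_b", False, True), ("group_b", False, True), ("group_b", True,  True),
]

def false_non_match_rate(records):
    """Share of genuine same-person pairs the system failed to match, per group."""
    misses, totals = defaultdict(int), defaultdict(int)
    for group, predicted_match, same_person in records:
        if same_person:                 # only genuine pairs count towards this rate
            totals[group] += 1
            if not predicted_match:
                misses[group] += 1
    return {group: misses[group] / totals[group] for group in totals}

print(false_non_match_rate(records))    # e.g. {'group_a': 0.33..., 'group_b': 0.66...}
```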

Whilst AI exhibits some advantages over traditional collection practices, it also has the potential to be even more of a black box than previous surveillance technologies. Not only does it amplify human bias; the credibility afforded to these mechanical processes means that in-built human biases are replicated. Governmental bodies regulating these policies are answerable to their citizens, but without appropriate oversight measures and accessible education on data hygiene, a critical component of democracy is lost. Though the question of human bias in AI has been widely documented in several contexts, the increasing volume and availability of data in the world today supposedly creates a counterpoint to such biases. There is an optimistic hope of eradicating these biases whilst reconciling otherwise discrete pieces of data; a model which promises to deliver a richer scope of analysis than ever before. However, Narayanan brings a technical point to the fore, describing AI as “snake oil” in reference to its analytical capabilities: “many dubious applications of AI involve predicting social outcomes… We can’t predict the future — that should be common sense. But we seem to have decided to suspend common sense when ‘AI’ is involved…machine learning [that uses] hundreds of features is only slightly more accurate than random…basically a manual scoring rule.”6 Implementing faster processing speeds within legal decision-making processes, such as sexual assault cases, presents ethical issues for the parties involved, not least in reducing sensitive and nuanced moral issues to a limited scope of outcomes.7
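Narayanan's phrase “basically a manual scoring rule” can be made concrete with a toy example. The sketch below is not drawn from his analysis; the feature names and weights are invented purely to show what a hand-built additive scoring rule looks like, in contrast to a learned model.

```python
# A toy "manual scoring rule": a few hand-picked features with hand-picked weights.
# Feature names and weights are invented; they do not reflect any real system.
WEIGHTS = {
    "prior_arrests": 2.0,
    "age_under_25": 1.5,
    "failed_to_appear_before": 1.0,
}

def manual_risk_score(person: dict) -> float:
    """Add up weighted features; a higher score means higher predicted 'risk'."""
    return sum(weight * float(person.get(feature, 0)) for feature, weight in WEIGHTS.items())

example = {"prior_arrests": 3, "age_under_25": 1, "failed_to_appear_before": 0}
print(manual_risk_score(example))   # 7.5 - a crude additive rule, no machine learning involved
```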

These global incidents prompt us to question how far the average civilian is able to respond to these invasive policy measures. The shift towards normalising AI-powered surveillance in the UK begins in less public and, crucially, less regulated environments than its implementation in public policy initiatives might suggest. The UK’s implementation of AI technologies within policing is made all the more concerning when seen in light of concurrent domestic developments, namely the use of bulk personal datasets (BPDs). The use of BPDs within the UK’s intelligence services came into formal legislation between 2015 and 2016 with the Investigatory Powers (IP) Bill. Because it passed at the height of the 2016 US Presidential election, the implications the IP Bill would have for civil liberties went somewhat unnoticed. The IP Bill permits the targeted and bulk interception of communications data, inclusive of Internet connection records.8 The corresponding Code of Practice released by the Home Office in 2018 states that “automated systems should, where possible, be used to effect the selection for examination.”9 The use of automation in forming key judgements for both the producers and consumers of intelligence disrupts the intelligence cycle through the faster processing of greater volumes of data.
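What automated “selection for examination” amounts to in practice can be pictured as selector-based filtering of bulk records before any human analyst reviews them. The sketch below is a hypothetical illustration of that step only; the record fields and selector values are invented and are not drawn from the Code of Practice.

```python
# A minimal sketch of selector-based filtering over a bulk dataset.
# Field names and selector values are invented; this is not any agency's actual pipeline.
bulk_records = [
    {"record_id": 1, "sender": "alice@example.com", "destination_ip": "203.0.113.5"},
    {"record_id": 2, "sender": "bob@example.com",   "destination_ip": "198.51.100.7"},
    {"record_id": 3, "sender": "carol@example.com", "destination_ip": "192.0.2.44"},
]

selectors = {"sender": {"alice@example.com"}, "destination_ip": {"198.51.100.7"}}

def select_for_examination(records, selectors):
    """Return only records matching at least one selector - the automated 'selection' step."""
    return [record for record in records
            if any(record.get(field) in values for field, values in selectors.items())]

print(select_for_examination(bulk_records, selectors))   # records 1 and 2 match a selector
```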

The United Kingdom

The announcement made earlier this year by London’s Metropolitan Police of plans to deploy live facial recognition AI across its CCTV systems brings to light the ever-relevant issue of personal rights in the digital age: are current law enforcement surveillance policies doing enough to protect groups and individuals? Measures of appropriate oversight, privacy as a civil liberty and the consequences of automation all fall under this remit of adequate protection. The announcement has since been overshadowed by subsequent surveillance issues arising from COVID-19, such as the widespread implementation of track and trace apps.

An instructive precedent for understanding how AI-driven policing tools could affect the UK’s surveillance environment comes from localised trials, such as Durham Constabulary’s Harm Assessment Risk Tool (HART). HART was trained on 104,000 custody events between 2008 and 2012 in order to provide the means for more “consistent, evidence-based decision-making.”10 HART’s implementation included a decision-making guidance framework dubbed ALGO-CARE, which aims to provide a model of algorithmic accountability. But despite the use of proportionality measures, the researchers conceded that these technologies could only be “exploratory” in nature.11 The academic uncertainty surrounding these decisions would ideally be reflected in wider policymaking; however, early iterations of AI in law enforcement have been subject to narratives which are emancipatory, sometimes to the extent of being prematurely utopian.
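HART is described in the literature as a random-forest classifier built over custody-event features. The sketch below is emphatically not the Durham model: it is a generic illustration of that class of tool, built with scikit-learn on invented feature names and randomly generated labels, to show how such a classifier can only ever reflect whatever structure, or bias, exists in its training data.

```python
# A generic sketch of a custody-event risk classifier of the kind discussed above.
# It is NOT the Durham HART model: features, labels, and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Invented features loosely analogous to custody-event inputs.
X = np.column_stack([
    rng.integers(18, 70, n),     # age at custody event
    rng.poisson(2, n),           # number of prior offences
    rng.integers(0, 2, n),       # flag: offence involved violence
])
y = rng.choice(["low", "moderate", "high"], size=n)   # synthetic risk labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# On purely random labels, accuracy hovers around chance - a reminder that the model
# can only ever reflect whatever structure (or bias) exists in its training data.
print(model.score(X_test, y_test))
```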

The development of AI has promised a global revolution in the way we live, positing that a utopian future can be attained through better knowledge of users in a system, even if that user data is not necessarily obtained through explicit prior consent. This uncritical exchange of personal data is often seen as a sufficient trade-off for better or faster access to services.12 However, a willingness to surrender personal data proves troubling when contextualised by lessons from the US, namely that data can harm society as well as benefit it. Though the impact of automated policing and facial recognition AI in non-democratic societies has been explored here only in brief, it should be said that democratic societies are not immune to the more scalable pitfalls of prematurely implementing AI in policy. Primed by Cambridge Analytica’s role in the 2016 US elections and the revelations in the wake of Edward Snowden’s 2013 disclosures, it is evident that appropriate public education around personal data and substantial oversight measures are needed.

The current turn in UK policy, which seeks to implement facial recognition technologies in public surveillance, leaves the majority of civilians vulnerable. A lack of both appropriate oversight and education on how to mitigate this biometric turn sees a shift towards policing methods adopted by non-democratic states, which poses concerns for the shape of digital democracy in the UK. AI, like any other technology, is not inherently instilled with the potential for liberation. The extent to which these tools can do good in society is determined by the level of oversight and the extent to which civilians can consent to how their data is used. The latter is determined in part by wider education, in particular how well consumers are informed about data hygiene. If civilians are not offered the means to negotiate how their digital identities are managed within a democratic state, what guarantee is there that the worst excesses of the justice system can be mitigated? The question of how far these technologies can impinge on civil liberties looms ominously in the distance for as long as they remain unchecked.

The People’s Republic of China

Artificial intelligence has become as much of a buzzword as its counterpart, big data, though it spans a complex scope of applications and subfields, including, but not limited to: automation, machine perception, and natural language processing (NLP).13 These applications are increasingly situated in the realm of consumer technologies such as gaming consoles, home appliances and, not least, mobile phones. Though mass data collection is nothing novel, as evidenced in the policies surrounding bulk datasets in the UK, non-democratic contexts host a far more opaque version of this process of collection and automation in the name of societal improvement.14 While the nature of oversight in a liberal democracy like the UK means that the deployment of automated methods of intelligence collection and analysis is accountable to the public at some level, not to mention less widespread, the use of machine perception, otherwise known as computer vision, has become a contentious issue within mainland China and its territories.

Mainland China has seen an uptake in policing methods which rely on algorithmic probability and metadata from both open and secret sources, as well as exporting sophisticated surveillance technologies abroad, most notably to countries in Africa.15 One of the biggest exporters of consumer technologies is Huawei, whose ambitions for “building a fully connected, intelligent world” make it one of the main proponents of connected and mobile devices with integrated AI capabilities, such as facial recognition features.16 A wealth of personal information, as well as metadata, can be mined from mobile apps and fed into wider uses of facial recognition AI in public spaces. For instance, the popular social app WeChat mines user profiles to form “heat maps” that show crowd density and calculate foot traffic in public, enabling targeted searches to be drawn from user profile photos and mapped in turn onto other databases.17 In Xinjiang, the authorities’ Integrated Joint Operations Platform (IJOP) combines CCTV footage equipped with facial recognition technologies with “wifi sniffers” that gather data from connected electronic devices, cross-referencing these with licence plates, ID cards, and health, banking, and legal records.18
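The kind of cross-referencing described above can be pictured as simple record linkage across databases. The sketch below is a hypothetical illustration only, not the IJOP itself; the record structures, identifiers, and field names are invented for the purpose of showing how records from separate sources can be pulled into a single profile.

```python
# A hypothetical sketch of cross-referencing records from separate surveillance sources.
# It is not the IJOP; the record structures and identifiers are invented for illustration.
face_matches = [{"person_id": "P1", "camera": "cam_03", "time": "09:14"}]
wifi_sniffer = [{"device_mac": "AA:BB", "owner_id": "P1", "location": "market", "time": "09:20"}]
plate_reads  = [{"plate": "XYZ123", "owner_id": "P1", "location": "checkpoint_7", "time": "09:40"}]

def fuse_by_person(person_id, *sources):
    """Pull every record tied to one identifier into a single combined profile."""
    profile = []
    for source in sources:
        profile.extend(record for record in source
                       if record.get("owner_id") == person_id or record.get("person_id") == person_id)
    return profile

print(fuse_by_person("P1", face_matches, wifi_sniffer, plate_reads))
```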

People may be mappable as data, but it is difficult to reconcile bits and pixels with real humans and their actions. The use of data to create prediction models for the inherently unpredictable is in danger of creating a deterministic worldview within the intelligence community and could lead to skewed analysis. Deploying AI on a national level risks making big-picture judgements mapped out across two-dimensional variables such as physiognomy and statistics. In relying on mechanised judgements, an artificially intelligent act of looking risks being treated as an objective set of facts.

The United States of America 

AI’s promise of a better future sees it hailed as a force for governance in the new Wild West of the internet and an increasingly networked world.19 While the US’ implementation of predictive policing and facial recognition software is not as prevalent as China’s, the evolving threat environment is being met with the rise of NLP technologies in intelligence practices. While much has been written about voice assistants, translation apps, and similar consumer technologies, NLP has also been used in intelligence practices to monitor a broad scope of activities, ranging from terrorism to trafficking. 2004 saw the advent of RHINEHART, an automatic speech recognition (ASR) system used by the National Security Agency and Central Security Service for counterterrorism operations such as Operation Iraqi Freedom, where analysts deployed voice-matching technology in an attempt to verify the voiceprint of Saddam Hussein.20 In a world faced with deepfakes assuming the shape of audio and visual footage, this has wider implications for intelligence practices in terms of maintaining vigilance over the integrity of datasets. It is also important to note that automation, facial recognition and NLP applications within the intelligence cycle are not to be seen in isolation, but as part of a bigger network. AI should also be viewed in line with the development of other technologies influencing intelligence practices, such as social media intelligence (SOCMINT), drone and imagery intelligence (IMINT) technologies, given that AI is both a hardware and a software issue.
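Voiceprint verification of the kind invoked here typically reduces to comparing fixed-length speaker embeddings. The sketch below is a generic illustration under that assumption, not a description of RHINEHART or any agency system; the embedding values and decision threshold are invented.

```python
# A generic illustration of voiceprint comparison, not any agency's actual system.
# Real systems derive fixed-length speaker embeddings from audio; the vectors here are invented.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

enrolled_voiceprint = [0.12, 0.80, -0.35, 0.41]   # embedding of a known speaker (hypothetical)
intercepted_sample  = [0.10, 0.75, -0.30, 0.45]   # embedding of an intercepted recording (hypothetical)

THRESHOLD = 0.9   # an assumed decision threshold; real systems tune this against error rates
score = cosine_similarity(enrolled_voiceprint, intercepted_sample)
print(round(score, 3), "match" if score >= THRESHOLD else "no match")
```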

Within liberal democracies, this picture accounts for the fundamental hallmarks of oversight, accountability, and the duty to preserve civil liberties. At its worst, AI could erode the ecosystem of trust in liberal democracies, given the fragile nature of trust in the face of big data; a revolution for the better should not seek to gain knowledge at the expense of measured decisions and ethical judgements.21 Compared with democracies like the US and UK, China’s “one country, two systems” policy arguably sees AI having a greater impact on intelligence practices than it has for its liberal counterparts. More generally, AI provokes a broader issue of securitisation and policy beyond the realm of intelligence practices, given its accelerated growth within the globalised infrastructure of security.

Endnotes

1.  Jonathan Crary. 24/7: Late Capitalism and the Ends of Sleep (London: Verso, 2014), 12-17; and Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (London: Profile Books, 2019), 3-5.

2.  Arvind Narayanan. Twitter post, 9 November 2019. [Accessed 13 November 2019]. Available from: https://twitter.com/random_walker/status/1196870349574623232?s=20

3.  “Data-Driven Policing.” The Alan Turing Institute, n.d. https://www.turing.ac.uk/research/research-projects/data-driven-policing. 

4.  Chris Baraniuk. “Exclusive: UK Police Wants AI to Stop Violent Crime before It Happens.” New Scientist, 26 November 2018. [Accessed 13 November 2019]. Available from: https://www.newscientist.com/article/2186512-exclusive-uk-police-wants-ai-to-stop-violent-crime-before-it-happens/.

5.  Steve Lohr. “Facial Recognition Is Accurate, If You’re a White Guy.” The New York Times, 9 February 2018. [Accessed 13 November 2019]. Available from: https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html.

6.  Arvind Narayanan. Twitter post, 9 November 2019. [Accessed 13 November 2019]. Available from: https://twitter.com/random_walker/status/1196870349574623232?s=20

7.  The idea that justice is lent a certain arbitrariness is also reflected in a parallel debate in lethal autonomous weapons (LAWs), which often borrows frameworks around proportionality, success, intention, and authority from the Just War tradition. 

8.  Intelligence and Security Committee of the British Parliament. Report on the Draft Investigatory Powers Bill, (London: 2016). 

9.   Home Office. Intelligence services’ retention and use of bulk personal datasets: Code of Practice, Pursuant to Schedule 7 to the Investigatory Powers Act 2016, (London: 2018) 50.

10.  Marion Oswald, Jamie Grace, Sheena Urwin, and Geoffrey C. Barnes. Algorithmic Risk Assessment Policing Models: Lessons from the Durham HART Model and ‘Experimental’ Proportionality. Information & Communications Technology Law 27 (2), 2018, 227-228.

11.  Oswald et al. Algorithmic Risk Assessment Policing Models: Lessons from the Durham HART Model and ‘Experimental’ Proportionality. 250.

12.  Caroline Cakebread. “You’re Not Alone, No One Reads Terms of Service Agreements.” Business Insider, November 15, 2017. [Accessed 13 November 2019]. Available from: https://www.businessinsider.com/deloitte-study-91-percent-agree-terms-of-service-without-reading-2017-11?r=US&IR=T.

13.  Damien Van Puyvelde, Stephen Coulthart, and M. Shahriar Hossain. Beyond the Buzzword: Big Data and National Security Decision-Making. International Affairs 93 (6) 1397. An interesting discussion which brings out the dimension of hype attributed to technologies and the effect this subsequently has. The Gartner Hype Cycle is also an important reference point here in terms of looking at the nature of inflated expectations from the technology. 

14.  Nicole Kobie “The Complicated Truth about China’s Social Credit System.” WIRED UK, June 7, 2019. [Accessed 13 November 2019]. Available from: https://www.wired.co.uk/article/china-social-credit-system-explained. 

15.  Xiao Qiang. The Road to Digital Unfreedom: President Xi’s Surveillance State. Journal of Democracy 30 (1) 2019: 56.

16.  “Huawei United Kingdom – Building a Fully Connected, Intelligent World.” Huawei, November 14, 2018. https://www.huawei.com/uk/. Though there is not space to discuss the nuances of the issue here, it is worth noting that the links between Huawei and the Chinese government have been much contested by the British press, and that this adds another dimension to the context of collection and surveillance, in terms of whose data is being exploited and to what ends. 

17.  Steven Feldstein. The Road to Digital Unfreedom: How Artificial Intelligence is Reshaping Repression. Journal of Democracy 30 (1) 2019. 44. 

18.  Feldstein. The Road to Digital Unfreedom: How Artificial Intelligence is Reshaping Repression. 44.

19.  Ava Kofman. “Forget About Siri and Alexa — When It Comes to Voice Identification, the ‘NSA Reigns Supreme’.” The Intercept, 19 January 2018. [Accessed November 2019]. Available from: https://theintercept.com/2018/01/19/voice-recognition-technology-nsa/.

20.  Dan Froomkin. “How the NSA Converts Spoken Words Into Searchable Text.” The Intercept, 5 May 2015. [Accessed November 2019]. Available from: https://theintercept.com/2015/05/05/nsa-speech-recognition-snowden-searchable-text/.

21.  For a further discussion on this, Cambridge Analytica’s involvement in the 2016 US election is a good starting point. 
