Use of Artificial Intelligence in the UK Police Force

First published 2022; revised 2023

Artificial Intelligence (AI) has emerged as a groundbreaking technology with immense potential to transform various sectors, including law enforcement. In the United Kingdom, the integration of AI into the police force has garnered both attention and scrutiny. Despite one report calling for police forces in the UK to end entirely their use of predictive mapping programs and individual risk assessment programs, AI in policing is growing and shows no signs of letting up; according to Deloitte, more than half of UK police forces had planned to invest in AI by 2020. The use of AI in the UK police force brings significant benefits, but also challenges and ethical considerations.

The adoption of AI technologies in the UK police force offers several tangible advantages. One of the primary benefits is enhanced predictive policing. AI algorithms can analyse vast amounts of historical crime data to identify patterns, trends, and potential crime hotspots. This predictive capability allows law enforcement agencies to allocate resources more effectively and proactively prevent criminal activities. Moreover, AI-powered facial recognition technology has been employed to aid in identifying suspects or missing persons. This technology can scan through large databases of images and match them with real-time surveillance footage, assisting officers in locating individuals quickly and efficiently.
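To make the hotspot idea concrete, the following is a deliberately minimal sketch of the grid-counting approach on synthetic incident coordinates; it is not any force's actual system, and all numbers are invented for illustration. It bins historical incident locations into a grid and flags the busiest cells:

```python
import numpy as np

# Illustrative only: synthetic incident coordinates standing in for
# historical crime records (easting/northing in arbitrary units).
rng = np.random.default_rng(0)
incidents = np.vstack([
    rng.normal(loc=(2.0, 3.0), scale=0.3, size=(200, 2)),  # one dense cluster
    rng.uniform(low=0.0, high=10.0, size=(300, 2)),        # background noise
])

# Bin incidents into a coarse grid and flag the busiest cells as "hotspots".
counts, x_edges, y_edges = np.histogram2d(
    incidents[:, 0], incidents[:, 1], bins=10, range=[[0, 10], [0, 10]]
)
threshold = np.percentile(counts, 95)  # top 5% of cells
hot_x, hot_y = np.where(counts >= threshold)

for i, j in zip(hot_x, hot_y):
    print(f"hotspot cell x=[{x_edges[i]:.1f},{x_edges[i+1]:.1f}) "
          f"y=[{y_edges[j]:.1f},{y_edges[j+1]:.1f}) incidents={int(counts[i, j])}")
```

Production systems use far richer models, but the core mechanism is the same: past incident density drives where future resources are sent, which is precisely what makes the bias concerns discussed below so consequential.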

However, the integration of AI in policing is not without its challenges. One of the major concerns is the potential for bias in AI algorithms. If the training data used to develop these algorithms is biased, the technology can inadvertently perpetuate and amplify existing biases, leading to discriminatory outcomes, particularly against minority groups. Ensuring fairness and equity in AI-driven law enforcement practices remains a significant hurdle.
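One simple check an auditor can run is to compare how often a model flags members of different groups. The sketch below, on entirely hypothetical outputs (the group labels and predictions are invented), computes that per-group flag rate:

```python
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Share of cases flagged 'high risk' within each demographic group."""
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        flagged[group] += int(pred)
    return {g: flagged[g] / totals[g] for g in totals}

# Hypothetical model outputs: 1 = flagged high risk, 0 = not flagged.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

print(positive_rate_by_group(preds, groups))
# -> A is flagged about 67% of the time, B about 17%: a gap that size
#    would warrant investigation before deployment.
```

A gap in flag rates is not proof of unfairness on its own, but it is the kind of measurable signal that fairness reviews of policing algorithms start from.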

Another issue is privacy infringement. The use of facial recognition technology and other surveillance methods raises concerns about citizens’ right to privacy. Striking a balance between public safety and individual rights is crucial, as unchecked AI implementation could erode civil liberties.

Ethical considerations surrounding AI implementation in the UK police force are paramount. Transparency in how AI algorithms operate and make decisions is essential to maintaining public trust. Citizens have the right to understand how these technologies are used and what safeguards are in place to prevent misuse.

Additionally, accountability is essential. While AI can aid decision-making, final judgments should remain under human control. Police officers should not blindly follow AI recommendations but should use them as tools to support their expertise. Challenges such as bias, privacy concerns, and ethical considerations must be carefully addressed to ensure that AI is a force for positive change and does not infringe upon citizens’ rights or exacerbate societal inequalities. As the technology continues to evolve, it is imperative that the UK police force strike a balance between harnessing AI’s capabilities and upholding fundamental principles of justice and fairness.

Links

https://committees.parliament.uk/event/18021/formal-meeting-oral-evidence-session/

https://www.nesta.org.uk/blog/making-case-ai-policing/

Issues Surrounding Black Box Algorithms in Surveillance

First published 2021; revised 2023

The rapid advancement of technology has transformed the landscape of surveillance, enabling the collection and analysis of vast amounts of data for various purposes, including security and law enforcement. Black box algorithms, also known as opaque or inscrutable algorithms, are complex computational processes that generate outputs without offering clear insights into their decision-making mechanisms. While these algorithms have demonstrated impressive capabilities, their use in surveillance systems raises significant concerns around transparency, accountability, bias, and potential infringements on civil liberties.

One of the primary problems with black box algorithms is their lack of transparency. These algorithms make decisions based on intricate patterns and correlations within data, which makes it difficult for even their developers to fully comprehend their decision-making processes. This opacity prevents people under surveillance from understanding why certain actions or decisions are taken against them, and it raises questions about the legitimacy of the surveillance system, as people have a right to know the basis on which they are monitored.
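Post-hoc probes offer only partial relief. The hedged sketch below uses synthetic data and a generic scikit-learn model (not any deployed surveillance system) to show a common workaround, permutation importance: it reveals which inputs matter in aggregate, but still cannot explain why a specific individual decision was made.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for surveillance-derived features; only feature 0
# actually drives the label, the rest are noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] > 0).astype(int)

# An ensemble of hundreds of trees: accurate, but no single readable rule.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Post-hoc probing: shuffle one feature at a time and measure how much
# accuracy drops. This hints at WHICH inputs matter in aggregate, but
# cannot explain WHY any individual decision was taken.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```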

The complexity of black box algorithms also creates challenges in attributing responsibility for errors or unjust actions. If a surveillance system built on black box algorithms produces erroneous outcomes or infringes a person’s rights, it becomes difficult to hold anyone accountable. This accountability gap undermines the principles of justice and fairness and leaves people without recourse in case of harm.

Black box algorithms can inherit biases present in the data they are trained on. Surveillance systems using biased data can perpetuate and even amplify existing societal biases, leading to discriminatory outcomes. For example, if historical data reflects biased policing practices, a black box algorithm trained on such data might disproportionately target certain demographic groups, exacerbating social inequalities and eroding trust in law enforcement agencies.
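A toy simulation makes the mechanism concrete. Under assumptions chosen purely for illustration (two districts with identical true offence rates, one patrolled five times as heavily), the recorded figures diverge sharply, and a model trained on the recorded data would "learn" that the heavily patrolled district is riskier:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumption for illustration: two districts with the SAME true offence
# rate, but district B is patrolled five times as heavily.
true_offence_rate = 0.05
patrol_intensity = {"A": 0.1, "B": 0.5}   # probability an offence is observed
population = 10_000

for district, detect_p in patrol_intensity.items():
    offences = rng.random(population) < true_offence_rate
    recorded = offences & (rng.random(population) < detect_p)
    print(f"district {district}: true rate {offences.mean():.3f}, "
          f"recorded rate {recorded.mean():.3f}")

# Recorded rates differ roughly fivefold despite identical underlying
# behaviour, so a model trained on recorded data inherits the patrol bias.
```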

The use of black box algorithms in surveillance also raises concerns about privacy and civil liberties. When these algorithms analyse and interpret personal information without clear guidelines, they may invade people’s privacy rights. As surveillance becomes more pervasive and intrusive, people might feel that their fundamental rights are being violated, which could provoke societal unrest and resistance to surveillance systems built on black box algorithms.

The implementation of black box algorithms in surveillance often happens without enough public oversight or informed consent. This lack of transparency can lead to public mistrust because people are left in the dark about the extent and nature of the surveillance practices employed by authorities. Effective governance and democratic control over surveillance are compromised when decisions are made behind a shroud of complexity.

To address these issues, it is essential to strike a balance between technological innovation and safeguarding individual rights. Policymakers, technologists, and civil society must collaborate to develop comprehensive regulations and frameworks that ensure transparency, accountability, and the protection of civil liberties in the ever-evolving landscape of surveillance technology.

Links

https://www.eff.org/deeplinks/2023/01/open-data-and-ai-black-box

https://towardsdatascience.com/black-box-theres-no-way-to-determine-how-the-algorithm-came-to-your-decision-19c9ee185a8

https://policyreview.info/articles/analysis/black-box-algorithms-and-rights-individuals-no-easy-solution-explainability

The Role of Artificial Intelligence in Predictive Policing

First published 2021; revised 2023

The advent of the 21st century has brought with it technological innovations that are rapidly changing the face of policing. One such groundbreaking technology is artificial intelligence (AI).

In the United Kingdom, the complexities of implementing AI-driven predictive policing models are evident in the experience of Durham Constabulary. Its ambition was to craft an algorithm that could more accurately gauge the risk a potential offender might pose, guiding the force in its bail decisions. However, the algorithm turned out to have a potentially discriminatory bias against impoverished individuals.

The core of the issue lies in the data points chosen. One might believe these data-driven approaches are neutral, using pure, objective information to make decisions. But Durham Constabulary’s inclusion of postcodes as a data determinant raised eyebrows. Using postcodes was found to reinforce negative stereotypes associated with certain neighbourhoods, with indirect repercussions such as higher house insurance premiums and lower property values for everyone in those areas. This revelation prompted the removal of postcode data from the algorithm.
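Removing the postcode column may not remove its influence, because other features can encode it. The following sketch is entirely synthetic and hypothetical (it describes a general phenomenon, not Durham's actual model): a classifier recovers the dropped postcode group from correlated proxy features, so the "deleted" attribute leaks back in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000

# Hypothetical setup: postcode group 0/1, plus deprivation-linked features
# (e.g. an insurance-band proxy, an amenity-access proxy) that correlate
# with it. All values are simulated.
postcode_group = rng.integers(0, 2, size=n)
features = np.column_stack([
    postcode_group + rng.normal(scale=0.5, size=n),   # insurance band proxy
    postcode_group + rng.normal(scale=0.7, size=n),   # amenity-access proxy
])

# Even with the postcode column itself removed, the retained features
# predict it well -- the sensitive attribute is still implicitly present.
X_train, X_test, y_train, y_test = train_test_split(
    features, postcode_group, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"postcode recoverable from proxies: {clf.score(X_test, y_test):.0%} accuracy")
```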

Yet, despite these modifications, inherent issues with data-driven justice persist. Bernard Harcourt, a prominent US law professor, describes a phenomenon termed “the ratchet effect.” As police increasingly rely on AI predictions, individuals and communities that have previously been on their radar continue to be heavily scrutinised. These individuals are then profiled even more, leading to the identification of more offences within these groups. On the flip side, those who haven’t been heavily surveilled by the police continue their offences undetected, escaping this “ratchet effect.” A clear example of this is the disparity between a street drug user and a middle-class professional procuring drugs online: the former becomes ever more ensnared in the feedback loop, while the latter largely escapes scrutiny.
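A small simulation, again under invented assumptions (two groups with identical true offending rates, with next round's police attention allocated in proportion to this round's detections), illustrates why the ratchet is self-confirming: the group that starts under heavier scrutiny keeps generating more recorded offences, so the recorded data never reveals that the underlying rates are equal.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two groups with an IDENTICAL true offending rate; group 0 starts out
# receiving more police attention (a historical artefact).
true_rate = 0.05
attention = np.array([0.30, 0.20])   # share of each group that gets checked

for round_ in range(1, 6):
    detected = np.empty(2)
    for g in range(2):
        offends = rng.random(10_000) < true_rate          # who actually offends
        checked = rng.random(10_000) < attention[g]       # who police look at
        detected[g] = (offends & checked).sum()
    # Next round's attention follows this round's detections, so the
    # initial skew reproduces itself: recorded figures mirror police
    # attention, not underlying behaviour.
    attention = 0.5 * detected / detected.sum()
    print(f"round {round_}: detections={detected.astype(int)}, "
          f"next attention={np.round(attention, 3)}")
```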

The allure of “big data” in policing is undeniable, and the potential benefits are substantial. Police resources, increasingly limited, can be deployed more strategically. Bail decisions can be streamlined so that only high-risk individuals are incarcerated before trial. The appeal rests in proactive rather than reactive policing: addressing crimes before they even occur, thereby saving both resources and the immeasurable societal costs associated with offences.

With advancements in technology, police forces worldwide are leaning heavily into this predictive model. In the US, intricate datasets, considering factors ranging from local weather patterns to social media activity, are used to anticipate potential crime hotspots. Some cities employ acoustic sensors, hidden throughout urban landscapes, to detect and predict gunfire based on background noises.

Recently, New Zealand police have integrated AI tools like SearchX to enhance their tactics. This tool, developed in light of rising gun violence and tragic events such as the death of police constable Matthew Hunt, is pivotal in instantly drawing connections between suspects, locations, and other risk-related factors, emphasising officer safety. However, the application of such tools is raising serious questions about individual privacy, technological bias, and the adequacy of existing legal safeguards.

Given the clandestine nature of many of these AI programmes, New Zealanders possess only a fragmented understanding of the extent to which the police are leveraging these technologies. Cellebrite, a prominent tool that extracts personal data from smartphones and accesses a broad range of social media platforms, and BriefCam, a tool that synthesises video footage, including facial recognition and vehicle licence plates, are both known to be in use. With tools like BriefCam, the police have managed to exponentially speed up the process of analysing CCTV footage. Still, the deployment of certain tools, like Clearview AI, without the necessary approvals underscores the overarching concerns around transparency.

The major allure of AI in policing is its touted capability to foresee and forestall criminal activities. Yet the utilisation of tools like Cellebrite and BriefCam comes with a palpable erosion of personal privacy. Current legislation, such as the Privacy Act 2020, permits police to gather and scrutinise personal data, sometimes without the knowledge or consent of the individuals in question.

Moreover, AI tools are not exempt from flaws. Their decisions are often influenced by the data they’re trained on, which can contain biases from historical practices and societal prejudices. There is also a predisposition to place unwarranted trust in AI-driven decisions: even when they might be less accurate than human judgement, the allure of seemingly objective technology can override caution. For instance, in some US cities, AI-driven tools have mistakenly predicted higher crime rates in predominantly African-American neighbourhoods, reflecting historical biases rather than real-time threats. This over-reliance can inadvertently spotlight certain individuals, such as an innocent person repeatedly misidentified due to algorithmic errors, while sidelining other potential suspects.

Furthermore, biased algorithms have been observed to disproportionately affect the economically disadvantaged and ethnic minorities. A study from the MIT Media Lab revealed that certain facial recognition software had higher error rates for darker-skinned individuals, leading to potential misidentifications. The use of AI in predictive policing, if informed by data from heavily surveilled neighbourhoods, can perpetuate existing prejudices. For example, if a neighbourhood with a high immigrant population is over-policed due to historical biases, AI might predict higher crime rates there purely on the basis of past data rather than the current situation. Such skewed data directs more police attention to these areas, intensifying the disparities and creating a vicious cycle in which over-policing produces more recorded incidents, which in turn invites even more policing.
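Audits of this kind typically compare error rates across groups. The sketch below mirrors the shape of such an audit on invented numbers (it does not reproduce the MIT study's data), computing the false positive rate, that is, the share of innocent faces wrongly "matched", for each group:

```python
def false_positive_rate(preds, labels):
    """Share of truly negative cases wrongly flagged as positive."""
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives if negatives else 0.0

# Hypothetical face-match outputs (1 = claimed match) against ground truth
# where no one is actually a match, split by demographic group.
group_a = ([0, 0, 1, 0, 0, 0, 0, 0, 0, 0], [0] * 10)   # 1 false match in 10
group_b = ([1, 0, 1, 0, 1, 0, 0, 1, 0, 0], [0] * 10)   # 4 false matches in 10

for name, (preds, labels) in {"group A": group_a, "group B": group_b}.items():
    print(f"{name}: false positive rate {false_positive_rate(preds, labels):.0%}")
```

An error-rate gap of this shape is exactly what turns a technical flaw into a civil-rights problem once the tool is wired into stops, arrests, or watchlists.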

Locally, concerns around transparency, privacy, and the handling of “dirty data”—information already tinged with human biases—have been raised in the context of the New Zealand government’s AI usage. Unfortunately, New Zealand has no legal structure tailored to the policing applications of AI. While voluntary codes like the Australia New Zealand Police Artificial Intelligence Principles and the Algorithm Charter for Aotearoa New Zealand lay down ethical and operational guidelines, they fall short of establishing a robust, enforceable framework.

Despite the promise of constant AI system oversight and public channels for inquiry and challenges, there are evident lapses. The police’s nonchalance is apparent from the absence of dedicated avenues for AI-related queries or concerns on their official website. With the police essentially overseeing themselves, and no independent body to scrutinise their actions, citizens are left in a precarious position.

As AI continues to gain momentum in the realm of governance, New Zealand, akin to its European counterparts, is confronted with the pressing need to introduce regulations. Such legal frameworks are paramount to ensuring that the police’s deployment of AI contributes constructively to society and doesn’t inadvertently exacerbate existing issues.

In conclusion, while the integration of AI into predictive policing offers an exciting frontier for enhancing law enforcement capabilities and efficiency, it is not without its challenges and ethical dilemmas. From the experiences of Durham Constabulary in the UK to the evolving landscape in New Zealand, the complexities of ensuring fairness, transparency, and privacy become evident. The juxtaposition of AI’s promise with its potential pitfalls underscores the imperative for stringent oversight, comprehensive legislation, and public engagement. As technology’s role in policing evolves, so must our approach to its governance, ensuring that in our quest for a safer society, we don’t inadvertently compromise the very values we aim to uphold.

Links

https://link.springer.com/article/10.1007/s00146-023-01751-9

https://www.deloitte.com/global/en/Industries/government-public/perspectives/urban-future-with-a-purpose/surveillance-and-predictive-policing-through-ai.html

https://daily.jstor.org/what-happens-when-police-use-ai-to-predict-and-prevent-crime/

https://www.npr.org/2023/02/23/1159084476/know-it-all-ai-and-police-surveillance