AI and Policing: Trust, Identity, Privacy, and Security
25 September 2025

Principal Investigators: SPRITE+ and N8PRP
Co-investigators: N/A
Event attendees: 28
Introduction
This innovation forum brought together researchers working on policing and members of the N8 police forces to discuss the emerging importance of AI in policing and the impact it might have on the relationship between the police and the communities they serve, with specific reference to SPRITE+’s remit of trust, identity, privacy and security.
The intended outcome was to set, or at least to explore options for, a research agenda in this space for possible funding through N8PRP and/or SPRITE+.
After a brief opening talk by SPRITE+ director Prof. Mark Elliot, ‘A Brief History of AI’, a series of four provocation talks followed, interspersed with group and plenary discussions. The final session was a discussion of research priorities.
The Role of AI in Modern/Future Policing
Speaker: Prof. Joe Burton, Lancaster University
Professor Joe Burton’s presentation explored the transformative potential of AI in serious and organised crime, emphasising its risks. He highlighted how AI is already being exploited by organised crime groups (OCGs) for automation, scalability, and innovation, ranging from financial fraud using deepfakes to industrialised romance scams. The talk underscored the vulnerability of AI systems to adversarial attacks and the urgent need for proactive disruption strategies, given that criminal actors operate outside regulatory frameworks.
The discussion that followed focused on the challenges that criminal use of AI poses for policing, and reflected a mix of optimism and concern. Participants noted AI’s potential to alleviate resource constraints in policing, particularly in recruitment, training, and time management, but also raised alarms about fractured institutional structures and siloed working practices. There was a strong emphasis on the need for transparency to build trust, especially in light of the “black box” nature of many AI systems. Questions were raised about whether AI is being adopted reactively rather than strategically, and whether its use in policing is being driven by assumptions rather than evidence.
Participants also debated the tension between local and global policing needs, the environmental and infrastructural implications of AI deployment, and the ethical dilemma of using AI to combat AI-enabled crime. The session concluded with reflections on the need for coordinated responses, cross-sector collaboration, and a more nuanced understanding of AI’s role—not just as a tool, but as a force reshaping the very fabric of policing.
View Prof. Burton's presentation here.
Trust in AI Systems for Policing
Speaker: Dr Richard Jones, University of Edinburgh
Dr Richard Jones focused on the complex dynamics of trust in AI-powered policing. He outlined how AI is currently used or piloted in administrative tasks such as emergency call handling, transcription, and case file management, noting both the efficiency gains and the risks of bias, hallucination, and lack of explicability. Drawing on sociological theories, he argued that trust in AI derives not just from technical reliability but also from procedural justice, public perception, and continual improvement.
The discussion revealed deep concerns about institutional inertia and the fragmented nature of UK policing across 43 forces. Participants emphasised the need for proactive communication and citizen-centred AI, suggesting that small, low-risk applications could help build public trust incrementally. There was a call for robust governance structures, risk assessment frameworks, and transparency in both procedures and outcomes.
Participants also explored the temporal dimension of trust—how speed and efficiency might undermine thoughtful decision-making—and the importance of public engagement in shaping AI adoption. Concerns were raised about the erosion of operational discretion and the lack of AI literacy among officers, which could lead to blind acceptance of flawed systems. The session highlighted the need for courts to modernise and for policing organisations to embrace oversight and accountability in their use of AI.
Identity, Surveillance, and Privacy
Speaker: Prof. Mark Levine, Lancaster University
Professor Mark Levine’s presentation examined the psychological and social implications of AI-driven surveillance, particularly facial recognition. Drawing on social psychological research, he questioned whether surveillance is always a ‘good’, noting that AI facial recognition treats everyone as a potential suspect and risks changing the dynamic of the relationship between the police and the public for the worse. He warned against the “snake oil” problem, in which AI is mis-sold as a panacea, and stressed the importance of strategic deployment over blanket adoption. Prof. Levine argued that surveillance technologies can undermine public trust and partnership, which are essential assets in effective policing.
The discussion focused on the tension between technological capability and community engagement. Participants noted that surveillance using ANPR and telecommunications data is already in use, but that there is a lack of public awareness and consultation. Concerns were raised about how facial recognition changes the dynamic between police and the public, potentially treating everyone as a suspect and eroding shared identity and cooperation.
There was a strong call for frameworks that safeguard privacy while enabling effective law enforcement. Participants suggested citizen assemblies and public-divisional policing models as possible ways to guide ethical surveillance practices. The session concluded with reflections on the need to balance technological innovation against psychological and social costs, emphasising that humans, not just machines, are vital surveillance assets.
View Prof. Levine's presentation here.
Speech and Audio Forensics: Benefits, Challenges, and Ethics
Speaker: Dr Jess Wormald, University of York
Dr Jess Wormald provided a comprehensive overview of speech and audio forensics, focusing on the evidential use of automatic speaker recognition (ASR) and transcription technologies. She highlighted the challenges of training, testing, and validating AI systems, especially in forensic contexts where accuracy and reliability are paramount. The presentation emphasised the importance of understanding errors and context in order to make informed decisions about AI deployment.
Discussion centred on the lack of standards and oversight in AI adoption. Participants stressed the need for a context-driven framework that distinguishes between high-stakes and low-stakes applications. There was concern about the speed of adoption outpacing governance, and the risk of turning oversight into a mere tick-box exercise. The potential threat of deepfake voice recordings was also raised.
Participants also discussed the role of academia in providing objective, independent evaluation, and the importance of using actual police data to improve model accuracy. The session underscored the need for collaboration across forces, increased AI literacy, and practical processes that ensure evidential salience is properly defined and checked. The potential for AI to assist in legal disclosure and case building was seen as transformative, but only if implemented responsibly.
View Dr Wormald's presentation here.
Developing a Research Agenda
The final session focused on identifying actionable research priorities for the next 3–5 years. Participants emphasised the need for cross-sector collaboration, particularly with emergency services, and the importance of interoperability in AI systems. Key questions included how to ensure biometric AI is accurate and secure, how trust in AI is built or eroded, and what the implications are for human skills and agency in policing.
There was a strong emphasis on developing governance frameworks that define evidential salience and support effective oversight. Participants called for the use of actual police data in model training, the development of practical processes, and the recognition of AI’s limitations. The need for continuing professional development (CPD) to improve AI literacy among police officers was also highlighted.
Concerns about overreliance on AI, the risk of delegating accuracy checks, and the potential erosion of empathy and intuition in policing were discussed. The session concluded with a call to recognise and avoid “snake oil” solutions, to establish definitive UK-wide standards, and to focus on making large datasets navigable without sacrificing accuracy or oversight. The proposed research agenda aims to balance innovation with ethical responsibility, ensuring AI serves the public good.
Recommendations
1. Strengthen Governance, Standards and Oversight
Develop robust governance frameworks that define evidential salience and ensure effective oversight of AI systems, particularly in high-stakes environments such as forensic analysis and surveillance.
Establish clear standards for training, testing, and validating AI technologies used in policing, with a focus on transparency, accountability, and procedural justice.
Encourage the adoption of regulatory mechanisms that require AI systems to evidence both their procedures and outcomes, supporting public trust and legal defensibility.
2. Enhance Public Engagement and Trust
Implement citizen-centred approaches to AI deployment, such as public consultations and citizen assemblies, to ensure alignment with community values and expectations.
Promote proactive communication strategies that explain the purpose, scope, and limitations of AI tools in policing, helping to mitigate distrust and misinformation.
Prioritise low-risk, high-transparency applications of AI to build incremental trust and demonstrate value before expanding to more sensitive domains.
3. Support Cross-Sector Collaboration and Data Sharing
Facilitate collaboration between police forces, national organisations, academic institutions, and technology providers to co-develop AI solutions and standards that are contextually relevant and ethically sound.
Encourage interoperability across emergency services and justice system agencies to ensure cohesive and efficient use of AI technologies.
Promote the use of actual police data in model training to improve feasibility, accuracy, and relevance, while safeguarding privacy and data protection standards.
4. Invest in AI Literacy and Capacity Building
Introduce basic CPD requirements to improve AI literacy among police officers, legal professionals, and decision-makers.
Provide training on the limitations, risks, and ethical considerations of AI, enabling informed use and critical evaluation of technology in operational contexts.
Support interdisciplinary research and knowledge exchange to bridge gaps between technical development and practical policing needs.
5. Mitigate Risks and Avoid “Snake Oil” Solutions
Develop mechanisms to identify and avoid untested or misrepresented AI products, ensuring procurement decisions are evidence-based and aligned with operational goals.
Encourage slow, deliberate adoption of AI technologies, particularly in areas with significant ethical or legal implications, such as facial recognition and deepfake detection.
Promote the development of tools that make large datasets navigable and actionable without compromising oversight or accuracy.