AI Accountability for Policing and Security


AIPAS Vision

Law Enforcement Agencies (LEAs) must balance the significant opportunities AI presents for safeguarding society with society’s concerns and expectations about its responsible use. AIPAS will design practical mechanisms and a software tool for UK LEAs to assess and implement AI applications accountably.


OBJECTIVES

Operational Objective

Policy Objective

Societal Objective

LATEST UPDATES

AIPAS at AI UK

On 19-20 March, an AIPAS team of Babak Akhgar OBE, P. Saskia Bayerl and Rowan Dennis were at AI UK to share updates on both the AP4AI project and AIPAS. With more than 1,500 in-person attendees, the team were located in the Defence and Security zone, sharing with delegates the trajectory of the project …

Watch AIPAS Webinar

Back in November, AIPAS Lead Professor Babak Akhgar OBE delivered a webinar introducing the AIPAS project to the Responsible AI community. The webinar gives an overview of the AIPAS project, its aim, objectives and intended outcomes. We’re grateful to RAI for the opportunity to share our vision for the project. Watch the webinar on …

AIPAS Kick Off Meeting

On 16 January, AIPAS partners held a kick-off meeting for the project at CENTRIC. We were grateful to host representatives from Innovate UK and Innovate UK Business Connect, as well as from the Metropolitan Police and the Business Resilience Centre for the North East, Yorkshire and Humber. This was the first chance for all …

AI AND ACCOUNTABILITY: What Does It Mean to Me?

Fraser Sampson – Professor of Governance and National Security at CENTRIC

Accountability is about people with power using it properly. Where those people are the police, the consequences can be profound and the requirements must reflect that. 

Accountability for police use of technology means answering key questions: ‘How does it work?’ ‘What are you going to do with it, and why do you need it?’ ‘Who says it’s OK to use it for that purpose, and why aren’t you using it for other things?’ ‘What if it goes wrong?’ ‘Where do I find out more about it?’

If you’re buying AI with public money, you shouldn’t just expect these questions; you should have already asked (and answered) them yourself, and you should encourage further questions from the people in whose name you are deploying it. This is particularly true for the police, because our model of policing is based on consent. The burden of proof for AI accountability rests with the police, and they must clearly demonstrate how their use of technology is justified, legitimate and proportionate. Published standards and principles applicable across all forces will help achieve this. Forces should also have external verification and assurance mechanisms, along with effective mechanisms for timely intervention, improvement and indemnification.

Accountability is not just about having the answers, it’s also about having to answer for using (and not using) available technology.  

Rebecca Chapman LLB, Dip IOD – CEO/Director, North East Business Resilience Centre

As a private company aligned with policing to prevent cybercrime in small and medium-sized enterprises, we believe AI accountability must include a way for even the smallest organisations to feel safe in AI’s use, both by them and towards them. AI can be incredibly useful in helping businesses work more efficiently and profitably.

To achieve AI accountability, it is essential to communicate effectively and transparently with stakeholders about what the AI system does. Owners should provide information about the rationale and logic of the AI system, such as the inputs, outputs and processes used to make decisions or recommendations. In addition, the subject should be allowed recourse if AI is found to have been used or applied incorrectly, with a defined party ‘owning’ the issue from whom restitution can be sought. Policing in general must be reassured that whatever AI it uses can, in the worst case, be presented in court in an open and transparent manner, with a clear rationale for the conclusions reached, so that no reasonable doubt remains and the public maintains confidence in any process where AI is used.

PARTNERS

