AIPAS Vision
Law Enforcement Agencies (LEAs) must balance the significant opportunities AI presents for safeguarding society with society’s concerns and expectations about its responsible use. AIPAS will design practical mechanisms and a software tool for UK LEAs to assess and implement AI applications accountably.
OBJECTIVES
Operational Objective
Improve the knowledge and capabilities of UK LEAs and other actors in the Policing and Security (P&S) domain to assess and integrate AI Accountability into design, procurement, and deployment decision-making for specific AI capabilities.
Policy Objective
Support policy-making and governance bodies with a mature, tested, and expert- and citizen-validated definition of AI Accountability for the nationally critical P&S area.
Societal Objective
Improve society’s participation in discussions and decision-making about AI use for P&S purposes as an integral part of AI Accountability procedures.
LATEST UPDATES
New Biometric Update blog post
RAi UK blog post
New Biometric Update article: Explainable AI: A Question of Evolution?
AI AND ACCOUNTABILITY: What Does It Mean to Me?
Fraser Sampson – Professor of Governance and National Security at CENTRIC
Accountability is about people with power using it properly. Where those people are the police, the consequences can be profound and the requirements must reflect that.
Accountability for police use of technology means answering key questions: ‘How does it work?’ ‘What are you going to do with it, and why do you need it?’ ‘Who says it’s OK to use it for that purpose, and why aren’t you using it for other things?’ ‘What if it goes wrong?’ ‘Where do I find out more about it?’
If you’re buying AI with public money you shouldn’t just expect these questions, you should have already asked (and answered) them yourself, and you should encourage further questions from the people in whose name you are deploying it. This is particularly true for the police, because our model of policing is based on consent. The burden of proof for AI accountability rests with the police, and they must clearly demonstrate how their use of technology is justified, legitimate and proportionate. Published standards and principles that apply across all forces will help achieve this. Forces should also have external verification and assurance, along with effective mechanisms for timely intervention, improvement, and indemnification.
Accountability is not just about having the answers, it’s also about having to answer for using (and not using) available technology.
Rebecca Chapman LLB, Dip IoD – CEO/Director, North East Business Resilience Centre
As a private company aligned with policing for the prevention of cybercrime in small and medium-sized enterprises, we believe AI accountability must include a way for even the smallest of organisations to feel safe in its use, both for them and to them. AI can be incredibly useful in helping businesses work more efficiently and profitably.
To achieve AI accountability, it is essential to communicate effectively and transparently with stakeholders about what the AI system does. Owners should provide information about the rationale and logic of the AI system, such as the inputs, outputs, and processes used to make decisions or recommendations. In addition, the subject should be allowed recourse if AI is found to have been used or applied incorrectly, with a defined party ‘owning’ the issue so that restitution can be sought from them. Policing in general must be reassured that whatever AI it uses can, in the worst case, be presented in court in an open and transparent manner, with a clear rationale for the conclusions reached, so that no reasonable doubt remains and the public maintains confidence in any process where AI is used.
P. Saskia Bayerl – Professor of Digital Communication and Security at CENTRIC
AI Accountability in Policing and Security (AIPAS) – Whose accountability is it anyway?
AIPAS’s main objective is to develop practical mechanisms for police forces to assess and ensure that their AI deployments are accountable – and can be held to account – with the aim of maintaining public trust in AI-based policing and security measures. As the saying goes, “policing is accountability” (Markham & Punch, 2007). The tricky part (probably familiar to anyone who has attempted to implement accountability) is the question: accountable for what, and to whom?
Throughout our conversations in AIPAS, we found that police explicitly and consistently understand the public as one of their core stakeholders for accountability. Yet the public is not a uniform entity. Public trust in AI and public trust in policing are fragmented and varied, which means that disparate publics also have different needs when it comes to accountability: what they need to know, which information they trust, how it is communicated and by whom, and how they can and want to be involved in the accountability process. In designing practical mechanisms, this diversity of legitimate perspectives, needs and expectations towards accountable AI must become part of the guidance. This is a complex process, which requires frequent touchpoints and validations, and more diverse conversations than we had probably originally envisioned.
Our consideration of Equality, Diversity and Inclusion practice for AIPAS has motivated us to anchor diversity in all our activities, from research (e.g., 40% of participants were women, compared to the 35% currently in UK police forces) to our outcomes (e.g., ensuring that tools are also accessible to colour-blind users).
And while our project aims to improve accountability in policing, we have also come to acknowledge the degree of accountability needed from our side – in particular in the way we engage with our main stakeholders, i.e., police as research partners and the public. We have therefore instituted frequent ‘coord’ meetings between academic and policing partners and started blogs and reflections by project members to share our thoughts and understandings of AI accountability. At the same time, sharing specifics can be challenging, as many of the conversations and findings are security sensitive and thus restricted. Finding the right balance between transparency and safeguarding operations is difficult and fraught with ethical considerations (touching again on the opening question of ‘accountable for what and to whom’).
Setting ourselves accountability goals and implementing them turned out to be as complex as developing AI Accountability mechanisms, and it certainly gave us new respect for what we ask of police in terms of AI accountability.
Reference: Markham, G. & Punch, M. (2007). Embracing Accountability: The Way Forward – Part One. Policing: A Journal of Policy and Practice, 1(3), 300–308. https://doi.org/10.1093/police/pam049
RAi UK, the funding body for the AIPAS project, hosts the blog post. Read it here: AI Accountability in Policing and Security – Whose Accountability Is It Anyway?