Let's talk about artificial intelligence and professional regulation – a Q&A

Artificial intelligence (AI) is rapidly becoming part of everyday health and care – from supporting clinical decision-making to reducing administrative workloads. This Q&A explores what that shift means for professional regulation: how regulators can help the workforce use AI safely, how accountability and standards may need to evolve, and why collaboration across the regulatory system matters. Melanie Venables, the PSA’s Director of Policy and Communications, shares the PSA’s perspective on the opportunities, the risks, and the practical steps now needed.

Q. There’s lots of talk of using AI in health and care, but what does it have to do with professional regulation?

A. That’s a good question. AI is a technology, and where it is used as a medical device it is regulated by the Medicines and Healthcare products Regulatory Agency (MHRA), so it naturally sits within their remit. However, the way AI is being developed, rolled out and used – often very quickly and sometimes informally – means that no single regulator can address all of the issues on its own. AI has the potential to bring significant benefits, including improving access to care and the quality of that care. But, like any new way of working, it also introduces new risks, which need to be identified and managed effectively. Professional regulation has an important role to play here. It can help build confidence in the use of AI by supporting the workforce to use it safely and well, and by addressing practical questions such as how best to equip registrants to work alongside AI in their day‑to‑day practice.

Q. Is professional regulation already a bit late to the party?

A. Timing really matters here. Professional regulation needs to strike the right balance between acting at the right moment and being proportionate, so that it doesn’t become an unnecessary barrier to innovation that could ultimately enhance patient safety. It’s also important to recognise that many of the expectations relevant to working with AI – such as transparency, consent and professional accountability – are already embedded within regulators’ existing standards. The key question, when it comes to standards, is therefore whether anything additional is needed to address genuinely new ethical scenarios that may arise through the use of AI. Work by professional regulators to address this needs to be situated within the wider regulatory framework for health and care that the MHRA‑led UK National AI Commission has been tasked with developing. Moving ahead of that work would be counterproductive, which is why continued collaboration across regulators will be essential. Research published by Social Work England in January into the emerging use of AI in social work education and practice in England also supports a collaborative regulatory response.

Q. What is the PSA doing to help make progress in this area and why is it well-placed?

A. Professional regulation is a fragmented system, with many different regulators and Accredited Registers responsible for setting standards for professionals. All of these bodies need to consider what the use of AI means for their regulatory approach. We know that a consistent approach is likely to be reassuring for the public, for professionals, and for others such as employers. Because of our oversight role, the PSA is well placed to bring these bodies together to think about how best to achieve that consistency. That’s why we set up the PSA Regulatory Data and AI Group. We’re using our engagement with the Group, alongside research such as the work we recently commissioned from the University of Bristol, to inform our input into the MHRA‑led Commission’s work. This is an area where collaboration really matters, and we’re keen to continue building on this work – so watch this space for further developments.  

Q. Are there other key considerations for the future of AI in health and care?

A. AI also has the potential to strengthen regulatory practice itself. For example, could it be used to help address some of the long‑standing challenges in professional regulation, such as fitness to practise backlogs? Or to generate better insights that support a more preventative, risk‑based approach to regulation? As we explore these possibilities, it will be essential to keep equality considerations firmly in view. Used well, AI could help regulators better understand where groups with shared protected characteristics experience poorer outcomes in regulatory processes – and support action to address those disparities – rather than risk entrenching them. This is still early‑stage work, but it is a clear area of focus for the PSA’s new Strategic Plan 2026–29.