PSA Regulatory Data and AI Group
To support collective leadership on these issues, the PSA has established the Regulatory Data and AI Group – a forum bringing together professional regulators and Accredited Registers to share learning, identify risks, and explore how AI and regulatory data can be used to strengthen public protection.
The Group meets regularly and provides space for:
- Sharing best practice – what is working well across different regulators as they consider or adopt AI technologies.
- Identifying risks and barriers – from data quality and interoperability to ethical questions and gaps in existing standards.
- Collaborative problem‑solving – exploring common challenges and shaping coordinated responses to cross‑cutting issues, including governance, liability, and workforce readiness.
- Supporting preventative regulation – using data and AI to identify risks earlier and act before harm occurs.
- Connecting with wider system partners – including researchers, government, and other regulatory bodies working on the future regulatory framework for AI in healthcare.
Through this Group, we also work closely with others contributing to the development of national approaches, including the MHRA‑led National Commission into the Regulation of AI in Healthcare.
Read our submission to the Commission's evidence review
Guiding Safe Use of AI by Professionals
AI is increasingly being used by health and care professionals themselves. This raises important questions:
- How should professionals exercise judgement when working alongside AI tools?
- What training, guidance and safeguards are needed?
- How should responsibility and accountability be assigned when technology influences care?
Working with researchers and stakeholders, we are exploring what good guidance looks like and how it can support safe, ethical and transparent use of AI in practice.
On 26 February, we ran a workshop with the University of Bristol that brought together patients, members of the public, and many of the regulators and Accredited Registers we oversee to discuss the challenges and opportunities in regulating AI technologies. The session focused on identifying areas where regulatory clarity is needed and on sharing best practice for ensuring patient safety and the ethical deployment of AI. We are using the findings to inform our contributions to the MHRA‑led National Commission into the Regulation of AI in Healthcare, which has been tasked with developing the overall regulatory framework. We will also publish the findings.
Our Commitment
The PSA is committed to helping ensure that the benefits of AI are realised safely and responsibly. We will continue to:
- convene regulators to promote coordinated, consistent approaches
- support the development of evidence‑based principles for the use of AI in healthcare
- explore how regulatory data and AI can strengthen public protection
- work collaboratively across the system to address shared challenges.