‘If clinicians could be held liable for AI errors, shouldn't there be guidance on how they use it?’
14 May 2026
We recently hosted a webinar, in collaboration with Dr Helen Smith RN and Professor Jonathan Ives from the University of Bristol, exploring the potential development of guidance on the use of artificial intelligence (AI) in health and social care. In this guest blog, Dr Smith and Professor Ives recap the discussions, highlight the key points from the workshop, and look ahead to future goals.
At the start of the year, we were commissioned by the Professional Standards Authority (PSA) to run a workshop exploring how professional regulators and Accredited Registers can support professionals who use AI in health and social care.
The workshop took place online in February 2026 and was attended by a mix of service-users, health and social care regulators, representatives from the Accredited Registers (registers of health and care practitioners not regulated by law), and other interested parties.
The workshop asked four main questions:
- What should be in place to enable responsible AI adoption?
- Who should be accountable or responsible for any harms caused by AI use?
- Who should be responsible for detecting and addressing problems in AI prior to deployment?
- When a harm does occur, should we focus on finding someone to blame and punish, or on preventing it from happening again?
It was a lively day of constructive discussion, and we were incredibly grateful for the enthusiastic engagement and input offered by our attendees. The report of the workshop has been published and offers a full round-up of the detailed discussions of the day, but we’d like to highlight a few of the points raised.
Key points raised at the workshop
Firstly, we need to embrace patient and public involvement (PPI) from all communities affected by the use of AI in health and social care. This involvement is vital, and it needs to be fully inclusive to ensure that everyone can access the benefits of AI use in their care, without creating or perpetuating inequalities. PPI can be meaningfully undertaken throughout AI development, testing and use, enabling people to raise any concerns they have (for example, about the presence of bias in AI and the negative impact this can have on people and communities). PPI must not be performative: its outputs should be carefully appraised and acted upon by those with the power to do so.
Secondly, attendees were worried that employers might put pressure on health and social care professionals to use AI to make efficiencies and save time. They were apprehensive that the time saved by using AI would not be used to improve service quality but simply to see more service-users, leaving insufficient time for quality assurance of the AI. This underlines the important role of employers in setting the tone for how AI is used. Not only must they ensure that their workforce is fully trained and prepared to use AI, but they must also ensure there is sufficient time in the working day to check that AI outputs are safe and appropriate.
Thirdly, because AI technology changes so rapidly, regulation and guidance need regular review to ensure they stay up to date with both the technology and the way it is being used.
Building on a decade of work
We were delighted to support this workshop as it was an opportunity to further develop the work we've undertaken over the past decade. When we first started looking at this area, we realised that AI has the potential to significantly change the way health and social care professionals work, and that this change carries serious risks. We became particularly interested in how we should respond (both ethically and legally) should the use of AI lead to a service-user being harmed.
Our legal analysis found that a clinician risks being held responsible for the consequences of acting on an incorrect AI output, to the point of being held criminally liable, because the clinician made the choice to follow the AI's recommendation. Our ethical analysis argued that this could be unfair to the clinician: the AI developer could also bear some responsibility, having developed the AI to influence clinical decisions. The same would be true of an employer who directs a practitioner to use the AI.
We argued in another paper that there is a lack of guidance for health and social care professionals on using AI, and we later set out some initial ideas for what a package of professional ethical guidance could look like.
We aim to continue working with regulators and other authorities to develop guidance that helps all health and social care professionals practise safely with AI. Unified guidance would harmonise and underpin ethical AI use, offering consistency in care regardless of whether the AI user is a doctor working in cardiology or a social worker in the community. We also feel strongly that professionals need guidance on the appropriate limits of AI in their practice.
Future goal
Our goal is that all health and social care providers will deliver, and patients can expect to receive, care that is ethically consistent across all services. This is important so that professionals know, and can deliver, the standard expected of them when using AI, which will empower both them and service-users to challenge inappropriate or unsafe uses.
We aim to keep this work moving forward and would welcome contact from those with similar interests: for example, regulators, healthcare professionals, service-users, or anyone who would be affected by the use of AI in health and social care.