Artificial Intelligence in healthcare: who holds power?

Ruth Ajayi | PSA Associate Board Member

23 Apr 2026

Ruth Ajayi is a patient advocate and portfolio Non-Executive Director with a focus on patient and public involvement, health inequalities and improving care. She is currently an Associate Board Member at the Professional Standards Authority (PSA), completing her term in May 2026.

Find out more about Ruth and the PSA's Board

I stood outside the GP practice feeling a mix of emotions I could not quite place. Relief, frustration, hope and disbelief all at once. After 30 years of living with worsening symptoms, I had finally been referred to a gynaecology specialist. What made that moment different was not just the referral itself, but how I got there. AI, specifically Gemini, had played a role in helping me finally be heard and advocate effectively for a referral after years of this health issue being overlooked. That experience raises a broader question at the heart of this blog: if AI is already shaping access to care, who is shaping how it is designed and used?

In this blog, I share what patients are saying about AI, how it is already being used in real-world care, and why involving patients in its design is essential for safe and equitable healthcare.

What patients are telling us about AI

Last year, I gave a presentation on patient-centred AI at the Digital Leadership Forum Healthcare Leadership session, alongside speakers from Roche, GSK, Amazon Web Services and King's College London. Although our perspectives were different, the message was consistent. AI is here to stay. The fears, concerns, expectations and hopes of patients mirror those of healthcare practitioners and providers. This is why involving patients in how AI is designed, implemented and used is not optional.

Ahead of that session, I carried out a short survey within my networks to understand perceptions of AI in healthcare, and I shared these key findings at the event:

  • most people had no direct experience of AI being used in their care
  • those who had encountered it tended to do so indirectly, often through professional settings rather than as patients
  • there was cautious openness, but only with clear conditions
  • people wanted human oversight, strong safety measures, and transparency around how their data is used
  • there was a strong concern about equality, diversity and inclusion
  • participants were clear that if AI is trained on biased or incomplete data, it will reinforce existing inequalities and lead to poorer outcomes for already marginalised groups.

These findings point to clear patient priorities for how AI should be regulated and used in healthcare:

  • AI should support human judgement, not replace it, with every output verified by a qualified professional
  • patients should be told when AI is being used in their care and given a genuine option to opt out
  • organisations must demonstrate that any AI tool will not widen health inequalities before it is adopted
  • there must be clear processes for investigating errors and accountability when harm occurs
  • clinicians need the confidence and training to question or override AI when it does not align with their professional judgement.

Taken together, these priorities make one thing clear: patients are not resisting AI; they are asking for it to be safe, transparent and equitable.

From Dr Google to “Dr AI”

I am observing a shift in patient behaviour: those with basic digital literacy are moving from search engines to AI tools. People are using AI to ask better questions, explore patterns in their health and prepare for clinical conversations. This shift from “Dr Google” to what I would describe as “Dr AI” brings both benefits and challenges.

In my own case, I used AI to analyse nearly 20 years of my GP records, cross-reference them with NICE guidance, and review peer-reviewed research. I went into my appointment with a clear, structured understanding of my symptoms and concerns. That preparation changed the conversation. It did not replace clinical judgement, but it enabled a more focused discussion and led to a referral to a specialist service.

AI in practice: opportunity and risk

These experiences are not isolated. A recent UK story described a young woman who used AI to explore her symptoms after feeling dismissed, and the suggested condition was later confirmed. In the United States, a case reported by The New York Times showed how clinicians used AI to identify an unexpected treatment option for a patient with a rare and life-threatening condition. These examples show how AI is already influencing both patient behaviour and clinical decision-making.

We are also seeing AI used in areas such as radiography, where it supports clinicians to interpret scans and detect cancers earlier. This has the potential to reduce delays and improve outcomes. However, the same technology that can improve care can also create harm if it is not designed, governed and tested with a strong commitment to equality, diversity, inclusion and ethical standards. Without this, AI risks embedding bias into clinical decision-making and worsening outcomes for the very groups the health system should be protecting.

From artificial to augmented intelligence

The focus should not be on artificial intelligence in isolation, but on augmented intelligence. AI can process large amounts of information quickly, but it does not understand context, lived experience or nuance. It does not apply common sense in the way humans do, and we have already seen examples where AI produces confident but incorrect outputs, often referred to as hallucinations. This is not because AI is flawed in the way a human might be, but because it is not human.

This is exactly why human involvement and oversight are essential. Clinicians bring judgement, empathy and accountability. Patients bring lived experience and insight into what matters in their care. AI can support both by surfacing patterns and information that might otherwise be missed, but it should never be left to operate in isolation. The challenge is not to choose between humans and AI, but to strike the right balance between them. When these elements work together, decision-making improves. When they do not, the risks increase.

Designing AI with patients, not for patients

To developers and tech teams: patient involvement is not optional, it is essential. Building AI for healthcare without patients is unsafe and unacceptable. It is not enough to consult patients once AI tools have been designed and their functionality is set, when changes are difficult or too late. Patients need to be involved as equal partners from the earliest stages of problem definition and scoping, through development, and into testing phases such as alpha and beta, where tools are tried and refined in real-world settings.

For those procuring AI tools in healthcare, there is a responsibility to look beyond new and emerging technologies and avoid the “shiny object” syndrome. Decisions should not be driven by novelty or promise alone, but by evidence, patient experience data and a clear understanding of real-world impact. This includes involving patient experience experts in procurement and decision-making. Any tool introduced into healthcare must be fit for purpose, safe, equitable and aligned with population needs.

If diversity and inclusion are not built in, AI will embed and amplify existing inequalities. AI in healthcare should close health inequality gaps, support clear and accountable decisions, and work for patients, not simply be used on them.

About the author

Ruth Ajayi is a patient advocate and portfolio Non-Executive Director with a focus on patient and public involvement, health inequalities and improving care. She is currently an Associate Board Member at the Professional Standards Authority, completing her term in May 2026. 

Ruth is a Patient and Public Voice Partner on several programmes at NHS England, including the Responsible Adoption of AI committee. She is also a member of the NICE Prioritisation Board and the Diagnostic Advisory Committee.

Ruth serves on multiple advisory groups and committees across the health and research system. She chairs the NHS Genomic Medicine Service People and Communities Forum and is a co-investigator on NIHR-funded research, including projects focused on women’s health and the use of electronic patient records in the NHS.

Coming soon: look out for our new publication

How to guide and regulate for health and social care professionals who use AI is a report produced following a webinar we hosted in collaboration with Dr Helen Smith RN and Professor Jonathan Ives from the University of Bristol, in which we explored the potential development of guidance on the use of artificial intelligence (AI) in healthcare.