

Artificial Intelligence – what is it and what impact will it have on professional regulation?


The new year (and new decade) had barely got underway when a flurry of news stories reported that artificial intelligence had outperformed experts in spotting breast cancer. It is likely that, as the 2020s progress, news stories like this will become commonplace. In this blog, we look at what AI is, how it is used and what the future might hold, especially in relation to professional regulation.


Technology is advancing at a significant rate, reshaping existing industries and creating new ones. So much so that we are said to have entered a fourth Industrial Revolution: one that takes what was started in the third, with the development of computers and the digital age, and extends it with machine learning and autonomous systems. The design of regulatory models has historically followed the challenges posed by technological change. With the emergence of artificial intelligence in an ever-growing number of sectors, including health and social care, what does the future regulatory landscape hold?

What is artificial intelligence?

There is no single definition of what is, and is not, artificial intelligence (AI for short); to discuss this fully would require a blog of its own. For our purposes here, we have adopted the definition of artificial intelligence accepted by the House of Lords Select Committee on AI:

'Technologies and systems with the ability to perform tasks that would otherwise require human intelligence and have the capacity to learn or adapt to new experiences or stimuli.' (House of Lords Select Committee on Artificial Intelligence report, AI in the UK: ready, willing and able, April 2018)

How is it used?

Artificial intelligence may sound like science fiction, but the reality is coming into sharper focus. The Guardian recently reported that an AI system producing predictive analytics was being used to help social workers assess the probability of a child being placed on the 'at risk' register. Computer systems have been developed that enable councils to analyse vast amounts of data from a variety of sources, such as police records, housing benefit files, social services and education records, and determine the level of risk posed to a child. With developments like this in mind, are machines making more accurate decisions?
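
To make the idea concrete, here is a minimal sketch (in Python, using scikit-learn) of the kind of risk-scoring model described above. Everything in it, the feature names, the data and the weightings, is hypothetical; real systems draw on far richer data and are subject to extensive validation and governance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features drawn from different council data sources:
# [police contacts, housing benefit flags, social services referrals, school absence rate]
X = rng.random((500, 4))

# Hypothetical historical outcomes: 1 = child was placed on the at-risk register.
y = (X @ np.array([0.9, 0.3, 1.2, 0.8]) + rng.normal(0, 0.3, 500) > 1.8).astype(int)

model = LogisticRegression().fit(X, y)

# For a new case the model outputs a probability, not a decision; a social
# worker would weigh this alongside their own professional judgement.
new_case = np.array([[0.7, 0.2, 0.9, 0.5]])
print(f"Estimated risk: {model.predict_proba(new_case)[0, 1]:.2f}")
```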

To find answers, a study was conducted to compare artificial intelligence with healthcare professionals at making medical diagnoses based on images. The use of AI to interpret medical images is a developing field; algorithms trained to interpret and classify images have diagnosed diseases ranging from eye conditions to cancers. The study reviewed research carried out between January 2012 and July 2019 and assessed AI performance against expert human analysis. It concluded that the diagnostic performance of artificial intelligence was equivalent to that of healthcare professionals. It is envisaged that AI diagnostic systems could act as a tool to help tackle the backlog of scans and images, and be deployed in places that lack the experts to carry out this work. However, the study also found that AI systems made errors at roughly the same rate as humans. With this in mind, it is important to recognise these risks and identify what measures can be put in place to reduce future risk and better protect patients.
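
By way of illustration, the pipeline such studies evaluate looks roughly like the toy sketch below. The real systems are deep neural networks trained on large labelled datasets of scans; this stand-in uses synthetic 8x8 'images' and a random forest purely to show the shape of the workflow: labelled images in, a trained classifier, and predictions measured against a reference standard.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic dataset: "abnormal" images (label 1) get a brighter centre patch.
images = rng.random((400, 8, 8))
labels = rng.integers(0, 2, 400)
images[labels == 1, 3:5, 3:5] += 0.5

# Flatten pixels into feature vectors and hold out a test set, which stands
# in for the reference-standard comparison used in the study.
X = images.reshape(len(images), -1)
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```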

What is the current landscape?

The Government put the development of AI in the UK firmly on the map with the publication of its Industrial Strategy, Building a Britain fit for the future, in 2017, earmarking 'AI and the Data Economy' as one of four Grand Challenges 'to put the United Kingdom at the forefront of the industries of the future'.

The UK is currently subject to European Union legislation concerning AI and big data, including the provisions of the General Data Protection Regulation. In April 2018, the UK was one of 25 European countries to sign the Declaration of Cooperation on Artificial Intelligence. When the UK leaves the EU, whatever form Brexit takes, it may decide to remain aligned with EU legislation in this area.

There is plenty of appetite and agreement to engage in dialogue surrounding artificial intelligence, harnessing expertise to expand its use. Yet, at present, there is no general agreement on whether there should be an overarching statutory framework or a dedicated regulatory model to oversee the development of AI. What has been established is the Centre for Data Ethics and Innovation (CDEI). Its remit includes reviewing existing regulatory frameworks and identifying any gaps; identifying best practice for the responsible use of data and AI; identifying mechanisms to ensure that law, regulation and guidance keep pace with developments; and advising and making recommendations to government.

The Government has also created the Regulators’ Pioneer Fund, investing £10 million to assist regulators 'to help unlock the potential of emerging technologies'. From this fund, the Care Quality Commission has been awarded money for a project exploring how it can work with providers to encourage good models of innovation.

A look to the future

In providing evidence to the House of Lords Select Committee on AI, a number of witnesses gave examples of areas in healthcare that could benefit from artificial intelligence, including AI-assisted analysis of x-rays, MRI scans and breast imaging. The Select Committee was told that the emergence of such technology would dramatically reduce the cost of analysing scans and alleviate the strain that staff shortages currently place on the health service. (House of Lords Select Committee on Artificial Intelligence report, AI in the UK: ready, willing and able, April 2018)

Healthcare professionals will need to know what technology is available, how to use it, and what its capabilities and limitations are. Those embedding new technology in clinical settings will also need to engage with healthcare professionals and patients to help establish where artificial intelligence could be of most benefit.

The emergence of artificial intelligence also raises challenges around legal liability: who should be held accountable for decisions made or informed by an algorithm that has affected someone’s health?

There are further challenges to address in developing artificial intelligence within a health and social care landscape, from the handling of personal data to maintaining public trust and mitigating potential risks. AI is creating new challenges for regulation, and it is important for regulators to be part of the discussion on its impacts, rather than being reactive and only called upon when something goes wrong.


Disclaimer

Please note the views expressed in these blogs are those of the individual bloggers and do not necessarily reflect those of the Professional Standards Authority.