NHS creates blueprint for testing bias in AI models

Researchers used COVID-19 chest imaging data to assess the accuracy of health and care algorithms.
By Tammy Lovell


The NHS AI Lab has created a blueprint for testing the robustness of artificial intelligence (AI) models, paving the way for safer adoption in health and care.

Working with a research group, the imaging team ran a proof-of-concept validation process on five AI models using data from the National COVID-19 Chest Imaging Database (NCCID), a database of chest scans and supporting medical information from NHS trusts.

They tested how accurately the models detected positive and negative COVID-19 cases from medical images, and how the models performed across different patient subgroups, for example by age, ethnicity and sex.
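As an illustration of what a subgroup check like this involves, here is a minimal Python sketch that computes sensitivity and specificity per subgroup; the column names, toy data and choice of metrics are assumptions for illustration, not details of the NHS AI Lab's actual validation pipeline.

```python
import pandas as pd
from sklearn.metrics import recall_score

# Hypothetical evaluation data: one row per scan, with the model's
# prediction, the ground-truth COVID-19 label and a patient attribute.
results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 0, 1, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "sex":    ["F", "F", "M", "M", "F", "M", "F", "M"],
})

# Sensitivity (true positive rate) and specificity (true negative rate)
# per subgroup: large gaps between groups can signal bias.
for group, subset in results.groupby("sex"):
    sensitivity = recall_score(subset["y_true"], subset["y_pred"])
    specificity = recall_score(subset["y_true"], subset["y_pred"], pos_label=0)
    print(f"{group}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```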

This validation process will support work at NHS Digital to develop new AI assurance processes and bespoke training, and will help clarify a structure for AI regulation.

The research group was formed by the British Society of Thoracic Imaging, AI firm Faculty, Queen Mary University of London, Royal Surrey Foundation Trust, and the University of Dundee.

WHY IT MATTERS

There has been increasing interest in using AI to support disease diagnosis, improve system efficiency and enable at-home care.

During the pandemic, many new AI radiology products were developed to help identify COVID-19 from scans. However, there are concerns about the effectiveness and reliability of these tools. Bias in AI models can lead to harmful outcomes such as discrimination, and the problem of race-blind data has been flagged as a particular concern.

Assessing bias through a validation process of this kind is key to ensuring that the AI technologies adopted in the NHS are safe and ethical.

The NCCID includes more than 60,000 images from 27 hospital trusts, making it well placed to provide separate training and testing data, as well as a process through which to validate the robustness of an AI tool and detect potential issues around performance and bias.
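One way a multi-site database like this supports separate training and testing data is by holding out entire trusts, so a model is tested on hospitals it never saw during training. The sketch below assumes hypothetical scan and trust identifiers; it is not the NCCID's actual data-access mechanism.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical: each scan is tagged with the hospital trust it came from.
scan_ids = np.arange(12)
trusts = np.array(["A", "A", "B", "B", "C", "C",
                   "D", "D", "E", "E", "F", "F"])

# Holding out whole trusts for testing probes whether a model
# generalises beyond the sites it was trained on.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=0)
train_idx, test_idx = next(splitter.split(scan_ids, groups=trusts))
print("train trusts:", sorted(set(trusts[train_idx])))
print("test trusts: ", sorted(set(trusts[test_idx])))
```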

THE LARGER CONTEXT

This project goes hand in hand with the NHS AI Lab’s AI ethics initiative, which is working to ensure that AI is developed in a fair and patient-centred way.

Last year the UK government launched a review into the impact of potential bias in medical devices, which will consider the heightened risk of bias in algorithm-based data and AI tools.

The government also announced a £36 million AI research boost for projects to help the NHS transform the quality of care and the speed of diagnosis for conditions such as lung cancer, as part of the NHS AI Lab's £140 million AI in Health and Care Award.

ON THE RECORD

Dominic Cushnan, head of AI imaging, NHS AI Lab, said: “Unfair and biased models can lead to inconsistent levels of care, a serious problem in these critical circumstances. Outside of the NHS, our validation process has helped guide the use of AI in medical diagnosis and inform new approaches to the international governance of AI in healthcare.”
