
The World Health Organization (WHO) has called for caution in the use of artificial intelligence (AI)-generated large language model (LLM) tools, adding that the risks associated with them should be carefully assessed.
In a statement issued yesterday, the United Nations (UN) agency said that while it was ‘enthusiastic about the appropriate use of technologies, including LLMs,’ to support health-care professionals, patients, researchers and scientists, it had concerns that the cautious approach that is normally adopted when a new technology is taken up is ‘not being exercised consistently with LLMs’.
As the statement explains, LLMs include some of the most rapidly expanding platforms, such as ChatGPT, Bard and BERT, which imitate the understanding, processing and production of human communication.
The WHO adds that there has been a ‘meteoric public diffusion’ and ‘growing experimental use [of LLMs] for health-related purposes’, which has generated huge excitement because the new AI technology has great potential to support health needs.
However, it urges careful consideration of the risks associated with their use ‘to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people’s health and reduce inequity’.
To improve safeguards, the WHO is calling for widespread adherence to a number of key values: transparency, inclusion, public engagement, expert supervision and rigorous evaluation.
‘Precipitous adoption of untested systems could lead to errors by health-care workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world,’ warns the UN agency.
The WHO has raised a number of specific concerns:
- The data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness.
- LLMs generate responses that can appear authoritative and plausible to an end user. However, these responses may be completely incorrect or contain serious errors, especially where health is concerned.
- LLMs may be trained on data for which consent was never given for such use, and they may not protect sensitive data (including health data) that a user provides to an application to generate a response.
- LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content.
While it is committed to harnessing new technologies, including AI and digital health, to improve human health, the WHO recommends that policy-makers ensure patient safety and protection while technology firms work to commercialise LLMs.
The UN agency adds that clear evidence of benefit should be demonstrated before these new technologies see widespread use in routine health care and medicine, whether by individuals, care providers, or health system administrators and policy-makers.
The WHO has published guidance on the ethics and governance of AI for health and reiterates the importance of applying its principles when designing, developing and deploying AI for health.