Scientists from the Oxford Internet Institute have issued a critical warning about the potential dangers posed by artificial intelligence (AI) in scientific research.
Their concerns, presented in a recently published essay in Nature Human Behaviour, stem from the inherent limitations of AI and the human tendency to over-rely on technology, which could significantly compromise the accuracy and integrity of scientific findings.
The crux of their argument lies in the nature of large language models (LLMs), such as AI-driven chatbots, and their usage in scientific endeavours.
These tools, they contend, have a propensity to generate fabricated information.
This flaw, coupled with the human inclination to imbue machine outputs with undue authority, could escalate the spread of misinformation, thereby endangering the fabric of scientific integrity.
NEW: Large Language Models (LLMs) pose a risk to science with false answers, according to researchers from @oiioxford. One reason for this is the data the tech uses to answer questions doesn't always come from factually correct sources.
⬇️ #OxfordAI https://t.co/uFP7GWAflH
— University of Oxford (@UniofOxford) November 22, 2023
Does not inherently prioritise truth
The researchers emphasise that the design ethos of LLMs does not inherently prioritise truth.
Instead, these systems are judged against criteria such as utility, harmlessness, efficiency and customer adoption.
"Utility and persuasiveness trump accuracy," they argue, stressing the lack of an intrinsic mechanism within these AI systems to ensure factual correctness.
Furthermore, the essay sheds light on the 'Eliza effect' – a phenomenon where humans attribute undue significance to AI outputs that appear human-like.
This tendency, combined with the AI's often assertive tone, sets the stage for widespread misinformation.
More reliable in 'zero-shot translation'
Interestingly, the scientists acknowledge certain contexts, such as 'zero-shot translation', where AI's reliability may be higher.
Brent Mittelstadt, an Oxford professor specialising in AI ethics, describes this as involving a limited, trustworthy data set, rather than the AI acting as a vast, all-knowing repository.
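To illustrate the distinction Mittelstadt draws (this sketch is not taken from the essay), the snippet below contrasts asking a model an open question from memory with a 'zero-shot translation'-style use, in which the model only restructures a trusted source supplied in the prompt. The call_llm helper and the prompt wording are hypothetical placeholders for whatever LLM interface a researcher actually uses.

```python
# Illustrative sketch only: call_llm is a hypothetical stand-in for any
# chat-style LLM API; it is not part of the Oxford essay.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat completion request)."""
    raise NotImplementedError("Wire this up to the LLM provider of your choice.")

# Risky pattern: the model answers from whatever it absorbed in training,
# so fluency and persuasiveness can outrun factual accuracy.
def ask_from_memory(question: str) -> str:
    return call_llm(question)

# 'Zero-shot translation' pattern: the model is only asked to restructure
# a limited, trusted source that the scientist supplies and has verified.
def translate_trusted_source(trusted_text: str, target_format: str) -> str:
    prompt = (
        "Using ONLY the source text below, rewrite it as "
        f"{target_format}. Do not add any information that is not in the source.\n\n"
        f"Source:\n{trusted_text}"
    )
    return call_llm(prompt)
```

In the second pattern the output can still be checked line by line against the supplied source, which is why such uses are framed as lower risk than treating the model as an all-knowing repository.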
At the heart of their caution lies an ideological concern: the potential erosion of the human element in science.
They question the prudence of delegating tasks intrinsic to the spirit of scientific inquiry – such as hypothesis formulation and theoretical exploration – to AI systems.
While acknowledging the impressive capabilities of modern machines, the researchers emphasise their inability to differentiate fact from fiction.