In a striking demonstration of the potential misuse of artificial intelligence, researchers from Flinders University in Australia have generated more than 100 fake health-related blog articles using OpenAI’s GPT Playground.
This peer-reviewed study, published in JAMA Internal Medicine, reveals the alarming ease with which AI can be used to create disinformation, particularly concerning vaccines and vaping.
The researchers also produced 20 realistic images in under two minutes to accompany these blogs, using generative AI tools.
They attempted similar tests with Google’s Bard and Microsoft’s Bing Chat but did not achieve the same results.
The study underscores the urgent need for regulatory measures to mitigate the risks posed by such technology in spreading misinformation.
Researchers express deep concern
Funded by various grants, including a postgraduate scholarship from the National Health and Medical Research Council (NHMRC) and a Beat Cancer Research Fellowship from the Cancer Council South Australia, this research emphasises the threat posed by the unchecked use of generative AI in public health domains.
Bradley Menz, a registered pharmacist and researcher at Flinders University, expressed deep concerns about the findings.
He pointed to the widespread fear and confusion caused by disinformation during previous pandemics, stressing the urgency of government regulation to curtail malicious uses of AI.
Call for collaboration
The study's senior author, Dr Ashley Hopkins, emphasised the need for AI developers to work in tandem with healthcare professionals.
This collaboration aims to ensure AI vigilance frameworks prioritise public safety and well-being.
The research demonstrates how easily AI can generate large volumes of misleading content, underscoring the immediate need for effective strategies to manage these risks and safeguard public health as AI technologies continue to advance.