A recent study by pharmacists at Long Island University has raised concerns about the reliability of the free version of ChatGPT, OpenAI's widely popular chatbot, in providing accurate drug information.
The research, highlighting potential risks to patients, suggests a cautious approach to using ChatGPT for medication-related queries.
The study involved posing 39 medication-related questions to ChatGPT, of which only 10 responses were judged satisfactory by the researchers. The remaining 29 responses were off-topic, inaccurate, incomplete, or a combination thereof.
These findings underline the importance of verifying ChatGPT's drug information with trusted sources like healthcare professionals or government websites.
OpenAI's stance and usage policy
Responding to the study, an OpenAI spokesperson emphasised that ChatGPT was not a replacement for professional medical advice.
OpenAI's usage policy specifically mentions that its models are not fine-tuned for medical information and should not be used for serious medical diagnostic or treatment purposes.
Limitations and scope
A significant limitation of the free ChatGPT version is its training-data cutoff of September 2021, which means it can miss critical updates in the fast-evolving medical field.
This gap raises concerns about the chatbot's ability to provide up-to-date information on newly authorised medications.
The study, conducted by Sara Grossman, an Associate Professor of Pharmacy Practice at LIU, focused on the free version of ChatGPT, as it is the version most accessible to the general population.
Grossman acknowledges the study's limitations, noting that it represents a snapshot of the chatbot's capabilities as of early this year. Given the rapid pace of development in AI, she suggests the chatbot's current performance may differ.