OpenAI's ChatGPT is reportedly deteriorating in capability, and researchers are yet to determine the cause, according to a recent study conducted by Stanford and UC Berkeley.
The study found that newer versions of ChatGPT gave significantly less accurate answers to the same set of questions than versions from just a few months earlier, a decline the researchers were unable to explain.
To gauge the reliability of the different versions, researchers Lingjiao Chen, Matei Zaharia and James Zou put the GPT-3.5 and GPT-4 models behind ChatGPT through a series of tasks: solving math problems, answering sensitive questions, generating code and performing visual reasoning from prompts.
Highlighting the potential for substantial change in LLM behaviour over relatively short periods, the researchers stressed the importance of continuous monitoring of AI model quality.
They recommend that users and companies relying on LLM services in their workflows implement some form of ongoing monitoring to ensure performance stays consistent, along the lines of the sketch below.
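What such monitoring could look like in practice: a minimal, provider-agnostic sketch, in which `ask` stands in for whatever function sends a prompt to your LLM service and returns its text reply. The probe set, pass criterion and alert margin are illustrative assumptions, not details from the study.

```python
# Minimal regression check for an LLM service: replay a fixed probe set on
# a schedule and alert when accuracy drops against a recorded baseline.
# `ask` is a hypothetical prompt-in, text-out helper for your provider.
from typing import Callable, List, Tuple

def run_probes(ask: Callable[[str], str],
               probes: List[Tuple[str, str]]) -> float:
    """Replay a fixed probe set and return the fraction answered correctly."""
    correct = sum(
        1 for prompt, expected in probes
        if expected.lower() in ask(prompt).lower()
    )
    return correct / len(probes)

def check_for_drift(ask: Callable[[str], str],
                    probes: List[Tuple[str, str]],
                    baseline: float) -> float:
    """Compare today's accuracy against the baseline and flag regressions."""
    accuracy = run_probes(ask, probes)
    if accuracy < baseline - 0.05:  # tolerate small run-to-run noise
        print(f"ALERT: accuracy {accuracy:.1%} is below baseline {baseline:.1%}")
    return accuracy
```

Because the same questions are replayed over time, the kind of behaviour shift the study describes shows up as a drop against the recorded baseline rather than going unnoticed in production.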
"We evaluated #ChatGPT's behavior over time and found substantial diffs in its responses to the *same questions* between the June version of GPT4 and GPT3.5 and the March versions. The newer versions got worse on some tasks. w/ Lingjiao Chen @matei_zaharia" — James Zou (@james_y_zou), July 19, 2023
Shift in models
The researchers also observed a shift in how the models handled sensitive questions, particularly those relating to ethnicity and gender, with responses becoming more concise and avoidant.
While earlier versions offered extensive reasoning for refusing to answer certain sensitive queries, the June versions simply issued an apology and declined to respond.
In the tests, the March version of GPT-4 could identify prime numbers with an impressive 97.6% accuracy.
By June, however, the same model's accuracy had collapsed to just 2.4%.
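For a sense of how such a figure is measured: a primality benchmark can grade itself, since primality is cheap to verify locally, leaving only the model's yes/no answer as the variable. The prompt wording and the `ask` helper below are assumptions for illustration, not the study's exact harness.

```python
# Illustrative scoring loop for a primality benchmark. Ground truth comes
# from local trial division, so only the model's yes/no reply is graded.
# `ask` is the same hypothetical prompt-in, text-out helper as above.
from typing import Callable, Iterable

def is_prime(n: int) -> bool:
    """Deterministic trial division; fast enough for benchmark-sized integers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def score_primality(ask: Callable[[str], str], numbers: Iterable[int]) -> float:
    """Return the fraction of numbers the model classifies correctly."""
    numbers = list(numbers)
    correct = 0
    for n in numbers:
        reply = ask(f"Is {n} a prime number? Answer yes or no.").strip().lower()
        model_says_prime = reply.startswith("yes")
        if model_says_prime == is_prime(n):
            correct += 1
    return correct / len(numbers)
```

Run against the same number list each month, a score sliding from 97.6% toward 2.4% is exactly the regression this kind of harness exists to catch.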
This study follows OpenAI's July 5 announcement of plans to create a team dedicated to managing the potential risks of superintelligent AI systems, which the organisation anticipates could emerge within the decade.