OpenAI has quietly discontinued its AI Classifier, a tool created to help users distinguish between human-written and AI-generated text, citing its low accuracy.
The organisation had warned from the outset that the tool could be inaccurate, particularly on text samples under 1,000 characters, and that it could falsely flag human-written text as AI-generated.
OpenAI also stated its commitment to improving AI-generated content detection methods, adding that it was researching more effective provenance techniques for text.
The website that housed the classification tool is currently inactive.
“Investigating more effective methodologies”
In a recent blog post, OpenAI said: "As of July 20, 2023, the AI Classifier is no longer available due to its low rate of accuracy."
The tool's webpage is now defunct, and the brief announcement offered little explanation for the service's withdrawal.
However, the firm clarified that it was investigating more effective methodologies for recognising AI-generated content.
"We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated."
Worrying trend
From its inception, OpenAI maintained that the detection software was not immune to errors and could not be regarded as "fully reliable".
The discontinuation of the AI Classifier is the latest in a series of challenges faced by OpenAI's offerings.
On July 18, a study by researchers from Stanford and UC Berkeley suggested that the performance of OpenAI's flagship models behind ChatGPT was declining over time.
Their findings showed that GPT-4's accuracy at identifying prime numbers had dropped from 97.6% to just 2.4% over a few months.
Moreover, both GPT-3.5 and GPT-4 showed significant declines in their ability to generate working code.