Accounting and professional services firm KPMG has issued a complaint in Australia about inaccurate information generated by artificial intelligence (AI) that falsely implicated the consultancy in scandals, according to a report in the Guardian.
Andrew Yates, chief executive officer of KPMG Australia, submitted the complaint to the parliamentary joint committee on corporations and financial services, which had processed and approved the AI-generated material.
In his written complaint, Yates said that the livelihoods of more than 10,000 people working at KPMG could be affected “when obviously incorrect information is put on the public record – protected by parliamentary privilege – and reported as fact”.
“We are deeply concerned and disappointed that AI has been relied upon, without comprehensive factchecking,” he wrote.
Academics admitted that, as part of an inquiry, they had submitted information about the firm that was aggregated by AI and had not been checked for factual accuracy.
That submission reportedly wrongly accused the firm of playing a role in a “KPMG 7-Eleven wage-theft scandal” and claimed KPMG had audited the Commonwealth Bank during another scandal, though the firm denied ever auditing the bank.
According to one academic, they had used Google (NASDAQ:GOOGL) Bard to help create two submissions to the parliamentary inquiry; the tool allegedly generated case studies that had never actually happened to illustrate the need for structural reforms.
James Guthrie, an emeritus professor, said in a letter of apology to the Senate that “the use of AI has largely led to these inaccuracies”, adding that parts of two submissions had been produced using the Google Bard large language model.
The AI tool allegedly created fictional case studies and implicated KPMG in scandals that the firm was not involved in, falsely naming partners as having been sacked by companies they had never worked for.
The parliamentary committee has reported the case to the Senate clerk, saying in a statement that AI “may present serious risks to the integrity and accuracy of work” if it is not rigorously fact-checked.
The committee warned that Google Bard could seriously undermine “the integrity of submissions” and the committee process itself.