In a pivotal move, seven foremost US-based AI companies have agreed during a meeting with President Joe Biden at the White House to voluntarily adopt a series of safeguards designed to mitigate AI-related risks.
The key players in the agreement include Amazon (NASDAQ:AMZN), Anthropic, Google (NASDAQ:GOOGL), Inflection, Meta, Microsoft (NASDAQ:MSFT) and OpenAI.
As part of the agreement, the companies will ensure easy identification of AI-generated content through the use of watermarks and will publicly share their AI capabilities and limitations regularly.
The safeguards include rigorous security testing of AI systems by internal and external experts before launch.
This progressive step arrives amidst heightened concerns over the rapid advancement of AI technologies, particularly with the threat of disinformation and its potential impact on the forthcoming 2024 US presidential election.
"OpenAI and other leading AI labs are making a set of voluntary commitments to reinforce the safety, security and trustworthiness of AI technology and our services. An important step in advancing meaningful and effective AI governance around the world." — OpenAI (@OpenAI), 21 July 2023 (https://t.co/DaHpBLA7rz)
Threats to democracy and values
President Biden emphasised the need for vigilance, addressing concerns that AI could be used for disruptive purposes and the "fundamental obligation" companies have to ensure their products are safe.
"We must be clear-eyed and vigilant about the threats emerging technologies can pose," he said.
"Social media has shown us the harm that powerful technology can do without the right safeguards in place.
"These commitments are a promising step, but we have a lot more work to do together.
"We must be clear-eyed about the threats that these emerging technologies can pose to our democracy and our values."
Implementing watermarks
The seven companies have pledged to implement a 'watermark' system for various forms of content including text, images, audio and videos produced by artificial intelligence.
This endeavour is designed to provide users with a clear indication whenever AI technology has been deployed.
Some observers have voiced concerns about the potential pitfalls of AI, even invoking the spectre of catastrophic deception enabled by the technology.
The proposed watermark would be embedded technically within the content itself.
Theoretically, it should aid users in identifying deep-fake materials, such as manipulated images or audio that might depict non-existent violent acts, enhance deceptive scams or misrepresent a politician through unflattering distortions.
The specifics of how this watermark will be discernible when the information is shared, however, remain uncertain.
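To make the idea concrete, here is a minimal toy sketch of one possible approach: the generating system appends a keyed cryptographic tag to its output, which anyone holding the verification key can later check. This is purely illustrative; the companies have not published their schemes, the key name and tag format below are invented, and real text watermarks are more likely to use statistical token-level signals that survive copying and editing.

```python
import hmac
import hashlib

# Hypothetical key held by the AI provider -- not any real company's scheme.
SECRET_KEY = b"provider-signing-key"

def watermark(text: str) -> str:
    """Append an HMAC tag marking the text as AI-generated."""
    tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return f"{text}\n[ai-watermark:{tag}]"

def is_ai_generated(marked: str) -> bool:
    """Check whether the trailing tag matches the body of the text."""
    body, sep, tail = marked.rpartition("\n[ai-watermark:")
    if not sep or not tail.endswith("]"):
        return False
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tail[:-1], expected)

sample = watermark("This paragraph was produced by a language model.")
print(is_ai_generated(sample))                        # True
print(is_ai_generated("An ordinary human sentence."))  # False
```

The sketch also shows why the open question in the article matters: a visible tag like this is trivially stripped when content is copied or reshared, which is exactly the durability problem any practical watermark must solve.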
Substantial move
These voluntary measures signify a substantial move towards comprehensive regulation of AI in the US.
The Biden administration has also revealed plans for an executive order addressing AI, alongside intentions to collaborate with international allies to establish a framework for the global development and usage of AI.
Though some leading computer scientists view warnings about the existential risk of AI as exaggerated, the consensus underscores the importance of safeguards and transparency in responsibly steering the future of AI.