Amid rising concerns about deepfakes and the misuse of artificial intelligence (AI), global technology giants including Microsoft (NASDAQ:MSFT) and Adobe (NASDAQ:ADBE) are calling for self-regulation in the absence of robust government legislation.
Last week, Microsoft launched Copilot, a virtual assistant integrated into Windows 11, giving millions of customers access to AI technology. Adobe also recently introduced its AI platform, Firefly, across its software suite, following successful beta trials that generated over two billion images.
Colette Stallbaumer, general manager for Microsoft 365 and Future of Work, emphasised the importance of human control over AI. "We strongly believe in regulation," Stallbaumer said.
Microsoft is collaborating with governments globally to facilitate responsible AI use. Similarly, Adobe has worked to make its Firefly platform "commercially safe," according to Ely Greenfield, chief technology officer for Digital Media at Adobe.
The surge in AI's capabilities has prompted governments to consider regulation more seriously. The European Union is emerging as a leader in AI regulation, working on legislation that addresses different levels of technology-associated risk.
In contrast, the US government favours a market-driven, self-regulatory approach, exemplified this month when tech companies including Adobe, IBM (NYSE:IBM) and Nvidia joined Google (NASDAQ:GOOGL) and Microsoft in voluntary commitments to govern AI use, including watermarking AI-generated content.
These initiatives come as AI applications remain a point of contention amongst Australians: a Dye & Durham survey found that about half of the population is uncomfortable with the technology being used in professional sectors such as finance and healthcare.
The discourse around AI regulation is expected to continue evolving as the technology increasingly permeates various sectors, from healthcare to finance.