
Tech Bytes: Deepfakes will soon be undetectable to the average person

Published 05/07/2024, 11:59 am
Updated 05/07/2024, 12:30 pm

If you haven’t seen a deepfaked video of someone famous yet, it’s probably only a matter of time. Taylor Swift, Barack Obama, Hillary Clinton, Simon Cowell, Morgan Freeman – it will soon be easier to list who hasn’t had a fake video of their likeness created by artificial intelligence (AI).

Lucky for most of us, these early deepfakes were fairly easy to spot: something weird in the hands or the hair, rendering errors around the eyes or mouth, a voice or cadence that didn’t sound quite right.

This deepfake of Robert Downey Jr and Tom Holland as the leading actors in Back to the Future was published in 2020.

They have only improved since then.

Tools needed to detect deepfakes

AI’s single greatest strength is its ability to learn.

While AI-generated images may once have had wonky, alien-looking hands, they’re getting a lot better.

Left: Midjourney 4’s render of a frankly nightmare-inducing human hand. Right: Midjourney 5’s significantly less uncanny effort.

Of 1,000 people surveyed, 60% thought this video was real:

Alas, it’s not a real place; it was rendered by OpenAI’s Sora program. Could you tell?

Cybersecurity and AI expert Professor Sanjay Jha says telling what’s real and what isn’t is fast moving beyond the reach of the average person.

“We couldn't foresee what was coming out of AI 10 years back. If you interviewed anyone like me, they wouldn't be able to tell you,” Professor Jha said.

According to him, there’s little point relying on the naked eye to determine whether something is a deepfake anymore.

“We need tools and techniques to detect that rather than relying on people.”

As always, the tools to regulate and control technology lag behind its advancement; AI’s capability is accelerating at an eye-watering pace.

The Australian government is designing laws to punish the publication of AI-generated non-consensual sexual images, but at present deepfake laws only exist in Victoria.

Platforms and non-government organisations (NGOs) are also moving to address the problem of AI misuse, but progress is slow.

One potential solution is Content Credentials, a watermarking system designed by the Content Authenticity Initiative that records how a piece of media was made along with its edit history.

TikTok has opted to use these new watermarks on its platform to label AI content. Instagram has made similar requests of its user base, but asking content creators to apply the watermark themselves leaves the platform open to deception or negligence.
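To make the idea concrete, here is a rough Python sketch of the kind of provenance record such a system carries and how a platform might read it. It is a simplified assumption for illustration – the field names (claim_generator, created_with_ai, asset_sha256) and the helper should_label_as_ai are invented here and are not the real Content Credentials (C2PA) schema, which is also cryptographically signed.

```python
import hashlib
import json

# Hypothetical sketch only: these field names are simplified illustrations,
# not the real Content Credentials (C2PA) manifest schema.
manifest = {
    "claim_generator": "ExampleCamera/1.0",   # tool that produced the media
    "created_with_ai": True,                  # provenance flag platforms can read
    "edit_history": [
        {"action": "created", "tool": "ExampleImageGenerator"},
        {"action": "cropped", "tool": "ExampleEditor"},
    ],
    # A hash of the media bytes binds the manifest to this exact file,
    # so later tampering breaks the credential.
    "asset_sha256": hashlib.sha256(b"<media file bytes>").hexdigest(),
}

def should_label_as_ai(manifest: dict) -> bool:
    """Platform-side check: label media whose credentials declare AI involvement."""
    return bool(manifest.get("created_with_ai"))

print(json.dumps(manifest, indent=2))
print("Apply AI label:", should_label_as_ai(manifest))
```

The design point worth noticing is the hash binding: because the credential commits to the exact bytes of the file, any edit that isn’t recorded in the manifest breaks the credential rather than slipping through unnoticed.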

Audio deepfakes present huge challenge

It’s not only your average person falling for deepfakes.

UK engineering company Arup recently, and very publicly, lost £20 million to a deepfake scam in which scammers simulated a video call from senior officers of the company, prompting an employee to transfer the money as directed.

“Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing and deepfakes,” Arup global chief information officer Rob Greig said.

“What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months.”

There are few tools to detect audio deepfakes – voice calls almost always use compressed audio, which distorts voices and introduces digital artefacts of its own, masking the subtle clues that might give a fake away.
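As a back-of-the-envelope illustration of why compression muddies the water, the Python sketch below stands in for a lossy voice codec with crude quantisation – an assumption for illustration, as real telephony codecs are far more sophisticated – and measures how much distortion the “compression” itself adds, noise that sits on top of whatever subtle artefacts a detector would be hunting for.

```python
import numpy as np

# Toy illustration, not a real voice codec: crude quantisation stands in
# for the lossy compression applied to phone-call audio.
sr = 8000                                   # telephone-grade sample rate (Hz)
t = np.linspace(0, 1, sr, endpoint=False)
clean = np.sin(2 * np.pi * 220 * t)         # a clean 220 Hz tone

bits = 4                                    # aggressively low bit depth
step = 2.0 / (2 ** bits)                    # quantisation step over [-1, 1]
compressed = np.round(clean / step) * step  # the "codec" output

noise = compressed - clean                  # distortion the compression added
snr_db = 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))
print(f"Signal-to-noise ratio after quantisation: {snr_db:.1f} dB")
```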

Voice-based scams tend to rely on inducing a strong feeling of panic and urgency in the victim.

Last year, a woman in Arizona almost paid $1 million to scammers who had cloned her daughter’s voice and claimed the girl had been kidnapped and would be harmed if the ransom wasn’t paid.

Her daughter had been on a ski trip far from home at the time, which made the call all too believable.

“I never doubted for one second it was her,” Jennifer DeStefano told US masthead WKYT. “That’s the freaky part that really got me to my core.”

Microsoft’s text-to-speech AI model VALL-E requires just three seconds of recorded audio to create a deepfaked clone of a voice, and even free AI services need as little as 30 seconds.

“I pick up the phone, and I hear my daughter’s voice, and it says, ‘Mom!’ and she’s sobbing,” DeStefano recalled. “I said, ‘What happened?’ And she said, ‘Mom, I messed up,’ and she’s sobbing and crying.”

“This man gets on the phone, and he’s like, ‘Listen here. I’ve got your daughter.’”

Lucky for DeStefano, one of the women in the room with her at the time also had a daughter on the trip, and they confirmed the call was faked before contacting the police.

Others have not been so lucky.

TikToker Beth Royce paid $1,000 to a scammer claiming he would kill her sister, while Alabama resident Chelsie Gates paid a similar amount to someone threatening her mother.

How can you protect yourself?

There’s no straightforward answer to how to protect yourself from AI-generated fakes.

“We are in an era of active research in this kind of area and it's a cat and mouse game as usual,” Professor Jha said.

A healthy dose of scepticism seems to be the best defence – any situation driving you to move quickly without thinking things through is worth being suspicious of.

“Say if your boss is calling you and asking you to transfer $200,000 into some account and you are an accountant in charge of the money…ask some questions and so forth to make sure that you get more context for it,” Professor Jha suggests.

“Be vigilant. I would never ask people not to pay attention, always be suspicious, and if you have any doubts, do due diligence.

“Like any powerful tool, AI can be used for construction or destruction. The excitement of innovation must be paired with critical thinking.”

Read more on Proactive Investors AU
