Stealing Voices and Personalities: The Dangers Posed by Generative AI
AI has advanced to the point where people have little control over how their own voices and likenesses are used.
The evolution of artificial intelligence has reached a stage where it can steal your voice and even your personality. The worst part? There's not much you can do about it. Voice cloning programs and video generators currently present little or no barrier to impersonating individuals, a new report finds.
Steady Growth
In recent years, voice cloning AI has made remarkable strides, with many services now able to convincingly mimic a person's cadence from only a few seconds of sample audio. A striking example came during the Democratic primaries last year, when robocalls of a fake Joe Biden spammed voters' phones telling them not to vote. The political consultant who masterminded the scheme was hit with a $6 million fine, and the Federal Communications Commission has since banned AI-generated robocalls.
A Consumer Reports survey of six leading publicly available AI voice cloning tools revealed that five of them have easily bypassable safeguards, making it simple to clone a person's voice without their consent. More frightening still, deepfake audio detection software often struggles to tell real voices from synthetic ones.
Dangers Ahead
A few days ago, an AI-generated video of President Trump was released, depicting him ruling over the affairs of Gaza. The video was eventually pulled down, but not before the president himself had posted it. Millions of people believed the video to be real, showing how thin the line between authentic footage and generative artificial intelligence has become.
Generative AI mimics human qualities such as appearance, writing and voice, yet the industry faces few regulations in the United States or elsewhere; most ethical and safety checks across the industry are self-imposed. President Biden included some safety requirements in the executive order on AI he signed in 2023, which many AI leaders adopted, but President Trump revoked that order when he took office.
Voice cloning technology works by taking an audio sample of a person speaking and extrapolating that person's voice into a synthetic audio model. Without safeguards in place, anyone who registers an account can simply upload audio of an individual speaking, pulled from platforms such as TikTok or YouTube, and have the service imitate them.
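To make the mechanics concrete, here is a minimal sketch of that few-shot pipeline using the open-source Coqui TTS library and its XTTS v2 model. This is an illustration of the general technique, not the pipeline of any service named in the report, and the file names are hypothetical.

```python
# Minimal few-shot voice cloning sketch using the open-source Coqui TTS
# library (pip install TTS). Illustrative only; file paths are hypothetical.
from TTS.api import TTS

# Load a multilingual model that supports zero-shot speaker cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of reference audio are enough to condition the model
# on the target speaker's timbre and cadence.
tts.tts_to_file(
    text="A sentence the speaker never actually said.",
    speaker_wav="reference_sample.wav",  # hypothetical short clip of the target
    language="en",
    file_path="cloned_output.wav",
)
```

That the entire process reduces to one reference clip and one function call is exactly why the report's authors consider unguarded access so dangerous.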
Simple Access
Of the services surveyed, four — ElevenLabs, Speechify, PlayHT and Lovo — require only checking a box attesting that the person whose voice is being cloned has given authorization. A fifth, Resemble AI, requires recording audio in real time rather than allowing a person to simply upload a recording. Still, Consumer Reports was able to circumvent that restriction easily by playing an audio recording from a computer.
Only the sixth service, Descript, could be said to have a relatively effective safeguard: it requires a would-be cloner to record a specific consent statement, which is difficult to falsify, though it can still be faked by cloning the statement through another service. All six services are available to the public through their websites, and only ElevenLabs and Resemble AI charge for creating a custom voice clone ($5 and $1, respectively); the others are free.
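Descript's safeguard amounts to a challenge-response check: the submitted audio must contain a specific consent statement. Below is a minimal sketch of how such a check might work, assuming the open-source openai-whisper library for transcription; the consent phrase and file name are hypothetical, and this is not Descript's actual implementation.

```python
# Sketch of a consent-statement check: transcribe the submitted audio and
# verify it contains a required phrase. Uses the open-source openai-whisper
# library (pip install openai-whisper). Not Descript's actual implementation;
# the phrase and file name below are hypothetical.
import whisper

REQUIRED_PHRASE = "i consent to having my voice cloned"  # hypothetical phrase

def contains_consent(audio_path: str) -> bool:
    model = whisper.load_model("base")
    transcript = model.transcribe(audio_path)["text"].lower()
    # Strip punctuation so minor transcription differences don't fail the check.
    cleaned = "".join(ch for ch in transcript if ch.isalnum() or ch.isspace())
    return REQUIRED_PHRASE in " ".join(cleaned.split())

if __name__ == "__main__":
    print(contains_consent("submitted_statement.wav"))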
Between Safety and Positive Uses
Some of the companies have issued statements acknowledging the potential for abuse of their tools and its serious negative consequences.
“We recognize the potential for misuse of this powerful tool and have implemented robust safeguards to prevent the creation of deepfakes and protect against voice impersonation,” a spokesperson for Resemble AI stated to NBC News.
AI voice cloning has legitimate uses, including helping people with disabilities and creating audio translations of people speaking in different languages. But the potential for harm is glaring, as noted by Sarah Myers West, co-executive director of the AI Now Institute, a think tank that focuses on the consequences of AI policy.
“This could be used for fraud, scams and disinformation, for example impersonating institutional figures,” West told NBC News.
The public must remain vigilant as AI is increasingly used in audio-based scams. Music is not exempt: cloned voices have been used to create songs without the depicted artist's permission, as with a viral 2023 track that falsely appeared to be by Drake and the Weeknd. The safeguards in place are inadequate, and many fear that recent generative AI scams are just the beginning.

