The most recent criminal case involving artificial intelligence emerged last week at a Maryland high school, where police say a fake recording of the principal’s voice was used to make him appear racist.
This case is another reason why everyone – not just politicians and celebrities – should be worried about this increasingly powerful and increasingly fake technology, experts say.
“Everyone is vulnerable to attacks, and anyone can attack,” said Hany Farid, a professor at the University of California, Berkeley, who focuses on digital forensics and disinformation.
Here’s what you need to know about some of the latest uses of AI for harmful purposes:
AI HAS BECOME VERY ACCESSIBLE
Manipulating recorded sounds and images is not new. But the ease with which anyone can now alter them is a recent development, as is the ability to spread the results quickly on social media.
The fake audio clip impersonating the principal is an example of a subset of artificial intelligence known as generative AI, which can create hyper-realistic new images, videos and audio clips. The technology has become cheaper and easier to use in recent years, lowering the barrier for anyone with an Internet connection.
“Especially in the last year, anyone — and I mean anyone — can access an online service,” said Farid, the Berkeley professor. “And either for free or for a few dollars a month, they can download 30 seconds of someone’s voice.”
Those seconds could come from a voicemail, a social media post or a clandestine recording, Farid said. Machine learning algorithms capture what a person sounds like, and the cloned speech is then generated from words typed on a keyboard.
The technology will only become more powerful and easier to use, including for video manipulation, he said.
WHAT HAPPENED IN MARYLAND?
Baltimore County authorities said Dazhon Darien, the athletic director at Pikesville High, cloned the voice of principal Eric Eiswert.
The fake recording contained racist and anti-Semitic comments, police said. The sound file appeared in an email in some teachers’ inboxes before spreading to social media.
The recording surfaced after Eiswert raised concerns about Darien’s job performance and alleged misuse of school funds, police said.
The fake recording forced Eiswert to take time off work while police guarded his home, authorities said. Angry phone calls flooded the school, and hate messages piled up on social media.
Detectives asked outside experts to analyze the recording. One said it “contained traces of AI-generated content with human editing after the fact,” according to court records.
A second opinion from Farid, the Berkeley professor, revealed that “multiple recordings had been assembled,” according to the records.
Farid told The Associated Press that questions remain about how exactly this recording was created, and he did not confirm that it was entirely generated by AI.
But given AI’s growing capabilities, Farid said Maryland’s case still serves as a “canary in the coal mine” for the need to better regulate the technology.
WHY IS AUDIO SO CONCERNING?
Many recent cases of AI-generated misinformation have involved audio.
Part of the reason is that technology has improved so quickly. Human ears also can’t always identify telltale signs of manipulation, while discrepancies in videos and images are easier to spot.
Some people have cloned the voices of allegedly kidnapped children over the phone to extract ransom from their parents, experts say. Another posed as the managing director of a company in urgent need of funds.
During this year’s New Hampshire primary, AI-generated robocalls impersonated President Joe Biden’s voice and attempted to dissuade Democratic voters from voting. Experts warn of a surge in AI-generated disinformation targeting elections this year.
But worrying trends go beyond audio, like programs that create fake nude images of clothed people without their consent, including minors, experts warn. Singer Taylor Swift was recently targeted.
WHAT CAN BE DONE?
Most providers of AI voice-generation technology say they prohibit harmful use of their tools. But enforcement varies.
Some providers require some sort of voice signature or ask users to recite a unique set of phrases before they can clone a voice.
Large tech companies, such as Meta, parent company of Facebook, and OpenAI, creator of ChatGPT, only allow a small group of trusted users to experiment with the technology due to the risk of abuse.
Farid said more needed to be done. For example, all companies should require users to provide phone numbers and credit cards so they can trace files back to those misusing the technology.
Another idea is to require recordings and images to be digitally watermarked.
“You’re changing the audio in a way that’s imperceptible to the human hearing system, but in a way that can be identified by downstream software,” Farid said.
Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, said the most effective intervention is law enforcement action against the criminal use of AI. Greater consumer education is also needed.
Another priority should be encouraging responsible conduct among AI companies and social media platforms. But it’s not as simple as banning generative AI.
“It can be complicated to add legal liability because in many cases there can be positive or beneficial uses of the technology,” Givens said, citing translation and book-reading programs.
Another challenge is finding international agreement on ethics and guidelines, said Christian Mattmann, director of the Information Retrieval & Data Science group at the University of Southern California.
“People use AI differently depending on which country they are in,” Mattmann said. “And it’s not just about governments, but also about the people. Culture therefore matters.”
___
Associated Press journalists Ali Swenson and Matt O’Brien contributed to this report.