People are using AI to make hateful songs

Malicious actors are exploiting generative AI music tools to create homophobic, racist, and propagandistic songs — and are even publishing guides to instruct others on how to do so.

According to ActiveFence, a company that helps online platforms manage trust and safety operations, discussion of how to misuse AI music creation tools to produce offensive songs targeting minority groups has been increasing in “hate speech-related” communities since March. These AI-generated songs, shared on forums and discussion boards, are intended to incite hatred against ethnic, gender, racial, and religious groups, while also glorifying martyrdom, self-harm, and terrorism, according to ActiveFence researchers.

Hateful and harmful songs are not a new phenomenon. But with free, accessible music-generating tools, the concern is that they can now be produced at scale by people who previously lacked the resources or expertise, much as image, voice, video, and text generators have hastened the spread of misinformation, disinformation, and hate speech.

“These trends are intensifying as more users learn to generate these songs and share them with others,” an ActiveFence spokesperson told Truth Voices. “Threat actors are rapidly identifying specific vulnerabilities to abuse these platforms and generate malicious content.”

Creating “hate” songs

Generative AI music tools like Udio and Suno allow users to add custom lyrics to generated songs. Although these platforms have safeguards to filter out common slurs and derogatory terms, users have discovered workarounds, as noted by ActiveFence.

In one instance cited in the report, users on white supremacist forums shared phonetic respellings of terms for minority groups and other offensive words, such as “jooz” instead of “Jews” and “say tan” instead of “Satan,” to bypass content filters. Some even suggested altering the spacing and spelling of references to violent acts, like changing “my rape” to “mire ape.”
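To see why respellings like these defeat simple moderation, consider a toy example. The sketch below is purely illustrative: the blocklist, the matching logic, and the similarity threshold are all hypothetical, and none of it reflects how Udio, Suno, or any real platform filters lyrics. It just demonstrates that an exact-substring check misses a spaced-out homophone, while collapsing whitespace and fuzzy-matching can catch it.

```python
# Illustrative only: a toy blocklist filter and why phonetic respellings
# slip past it. The blocked term and matching logic are hypothetical.
import difflib
import re

BLOCKLIST = {"satan"}  # hypothetical blocked term, taken from the example above

def naive_filter(lyrics: str) -> bool:
    """Exact-substring check: the kind of filter a respelling defeats."""
    text = lyrics.lower()
    return any(term in text for term in BLOCKLIST)

def fuzzy_filter(lyrics: str, threshold: float = 0.8) -> bool:
    """Collapse spacing, then compare sliding windows of the text to each
    blocked term by similarity ratio, so 'say tan' still resembles 'satan'."""
    collapsed = re.sub(r"[\s\-]+", "", lyrics.lower())
    for term in BLOCKLIST:
        for i in range(len(collapsed) - len(term) + 1):
            window = collapsed[i : i + len(term)]
            if difflib.SequenceMatcher(None, window, term).ratio() >= threshold:
                return True
    return False

print(naive_filter("say tan"))  # False: exact matching misses the respelling
print(fuzzy_filter("say tan"))  # True: normalization plus fuzzy matching catches it
```

Even this normalization step is easy to defeat with new variants, which is part of the point: keyword filtering alone is a weak defense against motivated users.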

Truth Voices tested several of these workarounds on Udio and Suno, two popular platforms for creating and sharing AI-generated music. Suno allowed all the attempts, whereas Udio blocked some but not all of the offensive homophones.

When contacted via email, a Udio spokesperson stated that the company prohibits the use of its platform for hate speech. Suno did not respond to the request for comment.

ActiveFence’s research found links to AI-generated songs spreading conspiracy theories about Jewish people and advocating for their mass murder, songs featuring slogans associated with terrorist groups like ISIS and Al-Qaeda, and songs glorifying sexual violence against women.

The impact of songs

ActiveFence argues that songs, as opposed to text, carry emotional weight, making them especially powerful tools for hate groups and political warfare. For instance, Rock Against Communism, a series of white power rock concerts in the U.K. in the late ’70s and early ’80s, spawned subgenres of antisemitic and racist “hatecore” music.

“AI makes harmful content more appealing — imagine a harmful narrative about a population expressed through a rhyming song that’s easy to remember,” the ActiveFence spokesperson said. “These songs reinforce group solidarity, indoctrinate peripheral group members, and are also used to shock and offend unaffiliated internet users.”

ActiveFence urges music generation platforms to implement prevention tools and conduct more extensive safety evaluations. “Red teaming can potentially surface some of these vulnerabilities by simulating threat actor behavior,” said the spokesperson. “Better moderation of the input and output might also be useful, as it allows blocking content before it’s shared with users.”
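As a rough, self-contained sketch of that input-and-output moderation idea, the following shows a two-gate flow: screen the lyrics before generation, then screen a transcript of the rendered audio afterward. Everything here is assumed for illustration; generate_song() and transcribe() are placeholders, not any platform’s actual API, and the policy check is a toy.

```python
# Hedged sketch of two-gate moderation: check the prompt before generation
# and a transcript of the output after. All functions are hypothetical
# stand-ins, not a real music-generation API.
import re

def is_blocked(text: str) -> bool:
    """Toy policy check on space-collapsed, lowercased text."""
    collapsed = re.sub(r"\s+", "", text.lower())
    return any(term in collapsed for term in {"satan"})  # hypothetical term

def generate_song(lyrics: str) -> str:
    return f"<audio rendered from: {lyrics}>"  # stand-in for the model call

def transcribe(audio: str) -> str:
    return audio  # stand-in for a speech-to-text pass over the output

def moderated_generate(lyrics: str) -> str:
    if is_blocked(lyrics):  # input gate: refuse before spending a generation
        raise ValueError("lyrics rejected by input moderation")
    audio = generate_song(lyrics)
    if is_blocked(transcribe(audio)):  # output gate: block before it is shared
        raise ValueError("output rejected by post-generation moderation")
    return audio
```

The second gate matters because it is the last chance to block content before it reaches other users, the point the spokesperson makes about moderating output as well as input.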

However, these fixes could be short-lived as users discover new ways to bypass moderation. Some AI-generated terrorist propaganda songs identified by ActiveFence were created using Arabic-language euphemisms and transliterations, which the music generators failed to detect, likely due to weaker filters for Arabic.

If AI-generated hateful music follows other AI-generated media trends, its spread could be extensive. Earlier this year, Wired documented how an AI-manipulated clip of Adolf Hitler amassed more than 15 million views on X after being shared by a far-right conspiracy influencer.

A UN advisory body, among other experts, has voiced concerns that racist, antisemitic, Islamophobic, and xenophobic content could be amplified by generative AI.

“Generative AI services enable users without resources or skills to create engaging content and spread ideas that can compete for attention globally,” the spokesperson added. “Threat actors, recognizing the creative potential of these services, are finding ways to bypass moderation and avoid detection — and they have been successful.”

Kyle Wiggers
Kyle Wiggers is a senior reporter with a special interest in AI. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Brooklyn with his partner, a piano educator, and dabbles in piano himself.
