97% of Listeners Can’t Distinguish AI Music, Deezer–Ipsos Study Finds



A new global study from Deezer and Ipsos reveals just how convincingly artificial intelligence is blending into the music landscape. According to the survey, 97% of listeners could not reliably distinguish AI-generated songs from tracks written and performed by human artists.

A Global Survey Across Eight Countries


The study, which included responses from 9,000 participants across eight countries — including the United States, the United Kingdom, and France — highlights a dramatic shift in how listeners perceive (or fail to perceive) the boundaries between human creativity and synthetic output.

AI models used for music generation have become sophisticated enough that most participants were unable to identify which songs were composed or performed by machines. Even listeners who considered themselves “above-average” at identifying production techniques struggled to pick out AI-generated tracks with any consistency.


Listeners Want Transparency, Not Guessing Games


Despite this inability to differentiate between creators, the survey found strong demand for transparency. Seventy-three percent of respondents said they want platforms and labels to clearly indicate when AI was involved in the songwriting, production, or performance process.

For listeners, the issue is less about rejecting synthetic music and more about wanting to know how it was created. Transparency labels — similar to food ingredient disclosures or “made with AI” tags — are becoming a widely supported concept as generative audio tools proliferate.
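No standard disclosure format exists yet, and the article does not describe any specific implementation. Purely as a hypothetical sketch of what a "made with AI" tag might look like in a platform's track metadata, consider something like the following (all names, fields, and labels are invented for illustration):

```python
from dataclasses import dataclass, field
from enum import Enum

class AIInvolvement(Enum):
    """Hypothetical tiers of AI involvement in a track."""
    NONE = "none"            # written and performed by humans
    ASSISTED = "assisted"    # AI used as a tool, e.g. in production or mastering
    GENERATED = "generated"  # composition or performance produced by AI

@dataclass
class TrackDisclosure:
    """Invented metadata record attaching an AI-disclosure label to a track."""
    title: str
    artist: str
    ai_involvement: AIInvolvement
    details: list[str] = field(default_factory=list)  # e.g. ["vocals", "mixing"]

    def label(self) -> str:
        """Map the involvement tier to a listener-facing label."""
        if self.ai_involvement is AIInvolvement.NONE:
            return "Human-created"
        if self.ai_involvement is AIInvolvement.ASSISTED:
            return "Made with AI assistance"
        return "AI-generated"
```

A real scheme would need industry-wide agreement on where each tier's boundaries lie; the sketch only shows that the disclosure itself is a small amount of structured data.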


A Growing Sense of Surprise Among Listeners


In one of the more striking findings, 71% of participants said they were surprised at how difficult it was to distinguish human tracks from AI-composed songs. Many respondents reported confidence going into the test — only to discover afterward that they had misidentified most of the clips.

The results mirror a phenomenon already seen in visual and text-based generative AI: once synthetic work achieves a baseline of realism, human pattern recognition struggles to keep up. Music, with its emotional signals and stylistic nuances, appears to be crossing that threshold rapidly.


Implications for Artists, Labels, and Streaming Platforms


The study raises pressing questions for the music industry. If listeners cannot distinguish creators, how should royalties be allocated? Should AI tracks be labeled in playlists? And what responsibilities do platforms have to inform audiences when they are listening to synthetic performances?

For Deezer — which has already launched tools to detect AI-generated content and deepfake vocals — the findings reinforce the company’s stated goal of promoting transparent and ethical AI-driven music ecosystems.


Conclusion


The Deezer–Ipsos survey highlights a transformative moment for the music world. As AI becomes indistinguishable from human creativity for most listeners, the industry must confront questions about disclosure, ethics, and the future role of synthetic artists. With nearly all participants unable to tell the difference, the call for transparent labeling is likely to grow louder in the months ahead.



Editorial Team — CoinBotLab
