Press release. Music has always been shaped by technology. From multitrack recording and synthesizers to digital audio workstations and Auto-Tune, each generation of artists and producers has used new tools to take sound and storytelling to another level.
However, the pace of recent advances in generative artificial intelligence has been fast and, at times, disconcerting, especially for creators. At its best, AI is opening new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and "content farms" to confuse or deceive listeners, flood the ecosystem with low-quality content, and get in the way of authentic artists who are building their careers. This type of harmful AI-generated content degrades the user experience and, often, attempts to divert royalties toward bad actors.
The future of the music industry is being written, and we believe that aggressively protecting against the worst uses of generative AI is essential to unlocking its true potential for artists and producers.
We envision a future in which artists and producers are in control of how, or whether, they incorporate AI into their creative processes. As always, we leave those decisions in their hands, while we keep working to protect them from spam, impersonation, and deception, and to offer listeners greater transparency about the music they listen to.
This path is not new to us. Over the last decade, we have invested significantly in fighting spam. In fact, in the last 12 months alone, a period marked by the explosion of generative AI tools, we have removed more than 75 million spammy tracks from Spotify. AI technology is evolving quickly, and we will continue to roll out new policies. Today, our efforts in this area focus on the following:
Stricter rules against impersonation
The problem: we have always had a policy against deceptive content, but AI tools have made it easier than ever to create vocal deepfakes of your favorite artists.
What we are announcing: we have introduced a new impersonation policy that clarifies how we handle cases involving AI-generated voice clones (and other forms of unauthorized vocal imitation), giving artists stronger protection and clearer paths to recourse. Vocal imitation will only be allowed in music available on Spotify when the impersonated artist has expressly authorized its use.
We are also increasing our investment to combat another impersonation tactic: uploaders fraudulently delivering music to another artist's profile on streaming services, whether that music is AI-generated or not. We are testing new prevention measures with leading music distributors so that these attacks can be stopped at the source. For our part, we will also dedicate more resources to our content mismatch review process, in order to reduce wait times and allow artists to report mismatches even before a track's official release.
Why it matters: the unauthorized use of AI to clone an artist's voice exploits their identity, undermines their art, and threatens the fundamental integrity of their work. Some artists may choose to license their voice for AI projects, and that is a decision that belongs to them alone. Our job is to make sure that choice is always in their hands.
Music spam filter
The problem: total music payouts on Spotify have grown from $1 billion in 2014 to $10 billion in 2024. But high payouts attract bad actors. Spam tactics, such as mass uploads, duplicates, SEO tricks, abuse of artificially short tracks, and other forms of low-quality content, have become easier to exploit as AI tools make it possible for anyone to generate large volumes of music.
What we are announcing: this fall we will roll out a new music spam filter, a system that will identify uploaders and tracks that engage in these tactics, tag them, and stop recommending them. We want to be careful not to penalize the wrong uploaders, so the rollout will be gradual over the coming months, adding new signals to the system as new fraudulent practices emerge.
Why it matters: left unchecked, this kind of behavior can dilute the royalty pool and affect the attention received by artists who play by the rules. Our new music spam filter will protect against these behaviors and help prevent those who engage in them from generating royalties that should be distributed to professional artists and songwriters.
AI disclosures in music credits under an industry standard
The problem: many listeners want more information about what they are listening to and the role AI technology played in the music they stream. And for artists who use AI tools responsibly in their creative process, there is no way on streaming services to share whether and how they are using them. We know that AI use is increasingly a spectrum rather than an all-or-nothing choice, since some artists and producers use it in certain parts of their productions but not in others. The industry needs a nuanced approach to transparency around AI use, without forcing every song to be classified as either "is AI" or "is not AI."
What we are announcing: we are helping to develop, and will support, the new industry standard for AI disclosures in music credits, developed through DDEX. As this information is submitted by labels, distributors, and music partners, we will begin displaying it in the app. This standard gives artists and rights holders a clear way to indicate where and how AI was involved in creating a track, whether in AI-generated vocals, instrumentation, or post-production. The change is meant to strengthen trust across the platform. It is not about punishing artists who use AI responsibly, nor will it affect how content is prioritized or promoted on Spotify.
This is an effort that will require broad consensus across the industry, and we are proud to be working on this standard alongside a wide range of partners in the sector, including: Amuse, AudioSalad, Believe, CD Baby, DistroKid, Downtown Artist & Label Services, EMPIRE, Encoding Management Service – EMS GmbH, FUGA, IDOL, Kontor New Media, Labelcamp, NueMeta, Revelator, SonoSuite, Soundrop and Supply Chain.
Why it matters: by supporting an industry standard and contributing to its widespread adoption, we can ensure that listeners see the same information regardless of which streaming service they use. And, ultimately, this preserves trust across the entire music ecosystem, since users understand what is behind the music they listen to. We see this as a key first step, one that will continue to evolve.
Although AI is changing how some music is created, our priorities remain firm. We are investing in tools to protect artists' identities, improve the platform, and give listeners greater transparency. We support artists' freedom to use AI creatively, while actively fighting its misuse by content farms and bad actors. Spotify does not create or own music; it is a platform for licensed music where royalties are paid based on listener engagement, and all music is treated equally regardless of the tools used to create it.
These updates are the latest in a series of changes we are making to support a more trustworthy music ecosystem for artists, rights holders, and listeners. We will continue to introduce improvements as technology evolves.
Via | Spotify