When Taylor Swift protects her digital double
Taylor Swift is no longer protecting only songs, albums, tours, logos, or slogans.
She has moved the issue somewhere else.
According to Reuters and AP, her company TAS Rights Management filed three trademark applications with the USPTO: two sound marks covering the phrases “Hey, it’s Taylor Swift” and “Hey, it’s Taylor,” and one visual mark describing a stage image with a pink guitar, stage outfit, and identifiable setting. The applications are still pending review. (Reuters, AP)
This legal detail could look anecdotal.
It is not.
Because this is no longer only about defending a body of work. It is about defending a presence. A voice. An image. A silhouette. A way of appearing in the world.
In other words: an exploitable identity.
The new piracy does not steal a song. It manufactures a person.
For decades, the main threat to an artist was easy to understand: someone copied a file, pirated an album, distributed a concert recording, or sold fake merchandise.
The problem was economic.
With generative AI, the problem becomes existential.
Tomorrow, the risk will not only be that a Taylor Swift song gets copied. The risk will be that a parallel Taylor Swift gets produced, distributed, monetized, manipulated, exploited, and above all manufactured without consent.
A synthetic voice can sell a product.
A synthetic image can carry a political message.
A synthetic face can appear in advertising.
A synthetic silhouette can perform in a virtual world.
A synthetic style can be industrialized endlessly.
The U.S. Copyright Office has also addressed “digital replicas” of voices and likenesses in its work on AI. (U.S. Copyright Office)
What Taylor Swift is filing today is therefore not only celebrity protection.
It is a weak signal that has become audible.
Intellectual property is no longer enough
Copyright protects a recorded song. Trademark protects distinctive signs. Image rights or personality rights protect certain identity uses depending on the jurisdiction.
But AI blurs the borders.
AI can produce a voice that resembles someone without reusing the original recording exactly. It can produce an image that evokes someone without copying a specific photograph. It can generate a style close enough to create confusion, yet different enough to complicate enforcement.
That is where the case becomes fascinating.
Taylor Swift is not only trying to say: “this is my work.” She seems to be trying to say: “this is my identifiable presence.”
[Inference] This is probably the core of the coming battle: the shift from protecting content to protecting synthesizable identities.
In my book, I explain that protecting an invention or an asset is not enough to block copying; it mainly helps you defend yourself when conflict arrives (chapter 3). That same chapter discusses the limits of patents, copyright, and trade secrets in a protection strategy.
With AI, this logic becomes personal.
Your voice becomes an asset.
Your face becomes an interface.
Your style becomes exploitable data.
Your way of speaking becomes a model.
Your reputation becomes an attack surface.
The artist becomes raw material
We long believed generative AI would compete with creators on production.
It will also compete with them on identity.
That is deeper.
An artist does not only sell songs. An artist sells a relationship. A collective memory. A recognizable voice. A fragility. An attitude. A story.
AI can extract fragments of that presence and recombine them into new forms.
It is useful.
It is powerful.
It is disturbing.
Because a synthetic artist does not need sleep.
It does not ask for royalties if no one represents it.
It does not refuse an advertising campaign.
It does not contradict a sponsor.
It does not get sick.
It does not age.
It does not protest.
The perfect synthetic artist is available, profitable, and obedient.
The problem is that it can be built on the back of a real artist.
Consent becomes the infrastructure of creation
The debate should not collapse into a crude opposition between AI and human creation.
AI can serve artists. It can translate, restore, augment, experiment, accelerate, and open new formats.
Matthew McConaughey and Michael Caine, for instance, entered agreements with ElevenLabs for authorized uses of their voices. (The Guardian)
The issue is not: should AI be used?
The issue is: who decides?
An authorized voice clone is innovation.
An imposed voice clone is predation.
A consented synthetic image is brand extension.
An imposed synthetic image is dispossession.
A voice used under contract is an asset.
A voice captured without agreement is extraction.
Consent becomes the new infrastructure of creation.
The law is chasing technology
Lawmakers are starting to understand the scale of the problem.
In the United States, the NO FAKES Act was reintroduced in 2025 to address digital replicas of voices and likenesses. The bill aims to create a federal right allowing individuals to control the use of their image and voice in digital replicas. (Senator Chris Coons)
YouTube also announced its support for the NO FAKES Act in 2025, framing it as a way to protect creators against unauthorized AI uses. (YouTube Blog)
But law walks in dress shoes while technology sprints in carbon-plated sneakers.
Models improve.
Costs fall.
Tools spread.
Platforms arbitrate.
Uses explode.
And courts will have to distinguish imitation, inspiration, parody, transformation, impersonation, confusion, exploitation, and harm.
That will not be simple.
This issue goes beyond artists
It would be tempting to treat Taylor Swift as a case reserved for celebrities.
That would be a mistake.
Artists are simply the first visible ones.
The CEO who speaks on video every week is concerned.
The keynote speaker who goes on stage is concerned.
The expert who publishes regularly is concerned.
The journalist with an identifiable voice is concerned.
The trainer selling content is concerned.
The content creator with a community is concerned.
The salesperson known by clients is concerned.
The professor whose classes are recorded is concerned.
Anyone whose voice, image, or style creates value becomes copyable.
And anyone copyable becomes exploitable.
This is no longer only about global fame. It is about digital footprint.
The more you publish, the more you may feed the machine.
The more visible you are, the more modelable you become.
The stronger your signature, the easier it becomes to imitate.
Protecting identity becomes strategic
Companies have learned to protect brand names, product names, patents, trade secrets, databases, and software.
They will have to learn to protect the human identities that create their value.
The CEO’s voice.
The founder’s image.
The spokesperson’s face.
The internal keynote speaker’s style.
The experts’ signature.
The sales team’s credibility.
The thought leaders’ presence.
[Inference] In the coming years, some organizations will probably need to include voice, likeness, and style rights in contracts, communication policies, AI charters, and cybersecurity systems.
Because a fake video of a leader can move a market.
Because a fake voice message can trigger fraud.
Because fake expert content can damage a reputation.
Because a fake advertisement can deceive customers.
Because a fake political endorsement can create a scandal.
Identity becomes an infrastructure of trust.
Innovation enters through technology and exits through lawyers
Innovation loves arriving in disguise.
It often starts with an impressive demo.
A cloned voice.
A convincing video.
A generated song.
A speaking avatar.
A tool promising time savings.
Then come the questions no one wanted to address at the beginning.
Who owns the voice?
Who authorizes the use?
Who benefits from the value created?
Who carries the responsibility?
Who controls distribution?
Who removes abusive content?
Who compensates the victim?
Who proves origin?
Technology opens the door.
Law installs the locks.
Taylor Swift has just filed her voice at the border of the synthetic era.
She is not closing the door to innovation.
She is reminding us that innovation without consent becomes a machine for confiscating identities.
The next frontier will be personal
We protected brands.
We protected works.
We protected inventions.
We protected databases.
We protected software.
We will have to protect presences.
The next innovation will not only be technological. It will be legal, cultural, and existential.
Because in a world where everything can be generated, value will move toward what can be authenticated.
The real voice.
The real consent.
The real presence.
The real relationship.
The real trust.
Taylor Swift has not only filed trademarks.
She has asked the entire creator economy a question:
Who, tomorrow, will need to protect their voice, image, or style before AI does it for them?
References
- (Reuters) = https://www.reuters.com/legal/litigation/taylor-swift-files-trademark-her-voice-likeness-ward-off-ai-deepfakes-2026-04-27/
- (AP) = https://apnews.com/article/7f56fbafb269d4959009f3ad34e28fc1
- (U.S. Copyright Office) = https://www.copyright.gov/ai/
- (Senator Chris Coons) = https://www.coons.senate.gov/news/press-releases/senators-coons-blackburn-reps-salazar-dean-colleagues-reintroduce-no-fakes-act-to-protect-individuals-and-creators-from-digital-replicas/
- (YouTube Blog) = https://blog.youtube/news-and-events/youtube-supports-the-no-fakes-act/
- (The Guardian) = https://www.theguardian.com/culture/2025/nov/11/matthew-mcconaughey-michael-caine-ai-voice