In an era where artificial intelligence can generate hyper-realistic deepfakes, companies monetize biometric data, and athletes fight for their rights under name, image and likeness, or NIL, contracts, a fundamental question emerges: Do we truly own our own faces?
We all see our personal identity as sacrosanct, but technology is evolving faster than the legal system can catch up, and that gap challenges our assumption.
While intellectual property laws, privacy regulations and NIL agreements attempt to address these issues, they often lag behind innovation, leaving individuals vulnerable to exploitation. The intersection of AI, NIL and biometric data collection raises profound concerns about whether existing legal frameworks adequately protect personal property rights while fostering innovation.
Deepfake technology has progressed to the point where AI-generated images, videos and audio can be nearly indistinguishable from reality. This advancement raises serious concerns about ownership and consent. If an AI-generated deepfake replicates a person’s likeness without their permission, do they have legal recourse? The answer depends largely on jurisdiction and existing legal frameworks. Some U.S. states have enacted laws criminalizing certain uses of deepfakes, particularly in cases of nonconsensual pornography or election interference.
For instance, California’s AB 602 provides a private right of action for individuals whose likeness is used in deepfake pornography without consent. Similarly, Virginia criminalized the unauthorized distribution of deepfake pornography. But these laws focus on specific harms rather than broader issues of likeness ownership.
No easy answers
A huge problem is that federal law lacks a comprehensive approach here. The First Amendment complicates efforts to regulate deepfakes because courts must balance an individual’s right to control their likeness against free speech protections. For example, if an AI-generated deepfake is used for satire, parody or commentary, it may be legally protected even if it causes reputational harm.
The advent of NIL rights in college athletics represents a significant shift in how individuals can monetize their personal brand. The NCAA’s decision in 2021 to allow athletes to profit from NIL deals was heralded as a win for personal property rights. Yet these agreements also introduce complex legal challenges.
One primary concern is the potential for coercion and unfair contracts. Many young athletes, particularly those without legal representation, may sign NIL contracts that severely limit their ability to control their image.
Some deals include perpetuity clauses, meaning an athlete could unknowingly sign away lifelong rights to their image. In essence, rather than securing ownership over their face, some athletes may end up losing it to corporate interests.
The growing threat of AI in NIL exploitation
Third-party AI-generated NIL exploitation poses a growing threat. If an athlete refuses to sign an NIL deal, what prevents companies from using AI to create deepfake versions of them? While some NIL contracts include exclusivity provisions, they rarely address unauthorized AI-generated likenesses, leaving a loophole for exploitation.
Beyond deepfakes and NIL, biometric data collection presents another critical challenge to personal ownership. From facial recognition technology in airports to social media platforms collecting facial data, corporations and governments have amassed vast databases of personal identifiers. But who owns this data, and what rights do individuals have over it?
Some states have taken legislative action. Illinois’ Biometric Information Privacy Act (BIPA) is one of the strongest laws in the U.S., requiring companies to obtain explicit consent before collecting and storing biometric data. BIPA has led to significant legal battles, including a $650 million settlement from Facebook over its facial recognition practices. But the reality is that federal law offers little protection.
The Fourth Amendment, which protects against unreasonable searches and seizures, does not extend to private companies. As a result, corporations can collect and use biometric data with minimal oversight unless state laws like BIPA apply. Moreover, many companies bury consent clauses deep within terms of service agreements, effectively stripping users of ownership without their full understanding.
Despite these emerging legal battles, current laws fail to comprehensively address the question of whether individuals truly own their own faces. Several key areas require reform. The U.S. lacks a nationwide legal standard for NIL and likeness rights. Establishing a federal right to publicity could help individuals maintain control over their name, image and likeness across all industries.
Lawmakers must also develop clear guidelines on how AI-generated likenesses can be used. While some argue for an outright ban on unauthorized deepfakes, others suggest a licensing model where individuals can opt into or out of AI-generated representations. Laws like BIPA should also serve as a model for national legislation.
Individuals should have the right to opt out of biometric data collection and demand the deletion of their data upon request. Athletes and other public figures need stronger protections against exploitative NIL contracts. Transparency requirements, mandatory legal review periods, and caps on contract duration could prevent individuals from unintentionally signing away their rights.
Striking a balance
While stronger protections are necessary, it is also important to recognize the role of innovation. AI has tremendous potential in industries such as film, advertising and gaming, where digital likenesses can be used for creative purposes. Rather than stifling progress, legal frameworks should strike a balance between protecting individual rights and fostering technological advancements. A possible solution is compensatory licensing models, where companies using AI-generated likenesses must pay royalties to the individuals they replicate. Such a system would preserve personal ownership while allowing businesses to continue innovating.
As technology evolves, so too must our understanding of personal identity and ownership. The rapid rise of AI-generated deepfakes, NIL contracts and biometric data collection presents both opportunities and risks. Without stronger legal protections, individuals risk losing control over their most personal asset — their own face. However, any legal reforms must balance the need for privacy and autonomy with the benefits of innovation.
The future of personal identity in the digital age depends on finding this equilibrium, ensuring that no one is forced to surrender ownership of their likeness to corporations or AI algorithms without consent. Until then, the question remains: Can we ever truly own our own face?
Aron Solomon is the chief strategy officer for Amplify. He holds a law degree and has taught entrepreneurship at McGill University and the University of Pennsylvania, and was elected to Fastcase 50, recognizing the top 50 legal innovators in the world. His writing has been featured in Newsweek, The Hill, Fast Company, Fortune, CBS News, CNBC, USA Today and many other publications. He was nominated for a Pulitzer Prize for his op-ed in The Independent exposing the NFL’s “race-norming” policies.
Illustration: Dom Guzman