How Google’s Veo 3 sparked UGC creativity and redefined the boundaries between reality and imagination—my thoughts on the future of the content industry.
“Renaissance” was first coined by French historian Jules Michelet to describe the 16th-century period as “the discovery of the world and of humanity” — Wikipedia
For You Page’s New Favorite Kid
On May 20, 2025, Google unveiled Veo 3 at its annual conference, Google I/O. Within weeks, over 500,000 videos tagged with #veo3 appeared on TikTok, with viral content being shared across all major social media platforms. The content ranged from time-travel clips and satirical interviews to videos of planets being sliced with a knife and AI characters breaking the fourth wall to ask viewers for help.
So many viral videos were created and watched that people started curating daily Veo 3 video rankings, much like the NBA’s *Top 5 Plays of the Night*, sharing the most creative viral videos every day. Videos with millions, even hundreds of millions, of views have given global users a taste of the creative revolution brought by AI-generated videos.
What sparked this UGC creativity boom were the Bigfoot vlogs. It’s hard to trace the “patient zero” who started the trend, but within days of it going viral, different Bigfoot characters with various accents, furs, and pets emerged everywhere. I personally followed one with a uniquely comedic “personality,” by the name of “Speedilla.”
The story of Bigfoot Speedilla begins in Hamburg, Germany, where his creator, Mo, a Syrian immigrant, discovered that AI-generated Bigfoot vlogs could serve as both an artistic outlet and a potential source of income in his spare time.
“Speedilla, he is part of my inner self,” Mo explains when describing the character. “I’m trying to create some kind of dark, satirical comedy.” In Speedilla’s virtual world, comedy is the surface of the content, while satirical reflection forms its core. In one 20-second piece, Bigfoot is rejected by a woman he is courting and turns her into steak in revenge; it received hundreds of thousands of likes, a collective “middle finger” to mainstream cultural taboos from a group that empathizes with Mo’s creation.
It took only a couple of viral videos for Speedilla to build tens of thousands of followers and attract brand collaboration offers. But the good times didn’t last long. Mo soon discovered his content being stolen and replicated by others. He asked his followers to report the “infringement” to the platform, but TikTok did not respond to the claims.
As Lifehacker’s Stephen Johnson observed, driven by the profit motives of overnight viral success, countless creators are “putting Bigfoot in all kinds of ridiculous scenarios.” This extends far beyond Bigfoot—TikTok has seen an explosion of AI-generated content featuring Star Wars, Harry Potter, and other film franchises, creating a vast but copyright-ambiguous UGC content ecosystem.
The Copyright Conundrum
These vast copyright gray areas haven’t gone unnoticed. Entertainment giants Disney and Universal recently sued AI company Midjourney, calling its AI image generator a “bottomless pit of plagiarism” that blatantly infringes on their intellectual property libraries. In the 143-page lawsuit, the studios catalogued Midjourney’s blatant copying of their signature characters, including Stormtroopers and Darth Vader from Star Wars, Elsa from Frozen, and the Minions from Despicable Me, to name just a few.
The lawsuit not only accuses the AI model of directly copying character appearances but also charges that its outputs constitute “infringing derivative works.” Disney argues that these images generated from simple prompts are not users’ original works, but unauthorized adaptations of their copyrighted characters, directly violating copyright holders’ exclusive rights to derivative works.
This accusation speaks directly to the core problem creators face: how is the originality of AI-generated work defined? If simple AI outputs are “infringing derivative works,” what must creators do for their works to be legally recognized as copyrightable original creations?
While this case is still pending without a final ruling, multiple AI copyright cases already serve as precedents, including works whose AI-generated content was successfully registered with the U.S. Copyright Office in 2025. The key lies in proving that human creators exercised “creative control” over the AI generation process, contributing “sufficient human creative input.”
Creators can:
- Provide original input instructions: As in the previously successful “Rose Enigma” case, the creator uploaded an original hand-drawn rose artwork as a foundation, then used text prompts to have AI visually process it. The composition and creative conception of this hand-drawn draft were considered key evidence of human creative control over the final product.
- Extensively modify and arrange AI-generated content: In another successfully registered case, “A Single Piece of American Cheese,” the creator performed 35 image detail redraws on the AI-generated initial image, adding elements like a third eye and melting cheese, and recomposing the overall picture. The U.S. Copyright Office recognized this active “selection, combination, and arrangement” of AI-generated materials as demonstrating sufficient human originality.
These precedents offer some guidance for AI creators: While the legal framework is still forming, it’s clear that copyright is meant to protect human creativity, not AI’s computational results. For individual creators like Mo, this means simply generating a character is far from enough.
When his generic Bigfoot character was replicated by others, Mo was virtually helpless because he couldn’t prove he had invested sufficient, legally protectable “human creativity” in its generation.
The Misinformation Minefield
Veo 3 faces challenges similar to Midjourney’s, and not only on copyright: its realism and physical accuracy also create information risks that cannot be ignored. Testing by TIME magazine found that Veo 3 can generate convincingly realistic “fake news,” including Pakistani crowds burning Hindu temples, Chinese researchers handling bats in virus laboratories, and election workers shredding ballots, all extremely inflammatory and controversial content.
The ease of manufacturing such content has pushed both creation and distribution platforms to the forefront, turning transparency and misinformation prevention into an urgent “labeling war” that requires deep involvement from regulators and tech companies worldwide.
The Labeling War: A Three-Way Game Between Creators, Platforms, and Regulators
Facing escalating misinformation risks, platforms have implemented inconsistent and often inadequate strategies. As TIME magazine reported, only after TIME contacted Google about misleading videos generated by Veo 3 did Google, as the creation platform, take a reactive measure: adding a tiny, easily cropped visible watermark to videos.
On the distribution side, where content is watched by billions of people, TikTok, the short video platform famous for viral content, has since September 2023 required creators to actively label AI-generated content to avoid misleading audiences. In reality, though, scrolling through the TikTok feed today reveals very limited coverage of the “AI-generated” label; only a small portion of content displays it, because it depends on creators’ voluntary disclosure.
The reason behind this is simple: creators all want their content to appear as realistic as possible. Having a prominent label telling viewers “this is AI-generated” would presumably impact video performance heavily, so relying solely on creators’ voluntary reporting isn’t adequate.
Globally, regulatory requirements for AI content labeling are inconsistent. China and the EU take the lead in this space, each legislating mandates that require content platforms to implement metadata for machines and visible watermarks for humans to identify AI content. The U.S. hasn’t legislated at the federal level, but states like California have taken the lead, with multiple bills in the works. Other regions, including Australia, ASEAN, and the Middle East, have yet to legislate; see Table 1 for details.
With incomplete regulatory frameworks, social media platforms are proactively becoming de facto rule-makers. China’s Douyin, for example, strictly enforces the national mandate, applying both standardized metadata and fixed-position visible watermarks to all AI-generated content. Meta’s Instagram and Facebook, Douyin’s sibling TikTok, YouTube, and Snapchat are also advancing dual strategies combining metadata (like the C2PA standard) and visible labels (like “AI Info” and “Imagined with AI”), but implementation varies.
In contrast, some platforms like X (formerly Twitter) have been slower to respond, not yet introducing AI content labeling policies that promote information transparency. This inconsistent status across platforms reflects tech companies’ attempts to balance user growth, commercial success, and content compliance in the AI race.
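To make the metadata-plus-visible-label dual strategy concrete, here is a minimal sketch in Python. It is not a C2PA implementation (real provenance manifests are cryptographically signed and embedded per the C2PA specification); it simply stamps a visible “AI-generated” label on an image frame with Pillow and writes a machine-readable disclosure into the PNG’s text metadata. The file names and metadata keys are illustrative assumptions, not any platform’s actual scheme.

```python
# Minimal illustration of "dual labeling": a visible watermark for humans plus
# machine-readable metadata for platforms. NOT a C2PA implementation; real
# provenance manifests are cryptographically signed. Paths and keys are hypothetical.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo


def label_ai_frame(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")

    # 1) Visible label for human viewers, drawn at a fixed corner position.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated", fill=(255, 255, 255))

    # 2) Machine-readable disclosure stored as PNG text chunks so that
    #    distribution platforms can detect it programmatically.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")        # hypothetical key
    meta.add_text("generator", "example-model")  # hypothetical value

    img.save(dst_path, format="PNG", pnginfo=meta)


def is_labeled(path: str) -> bool:
    # Consumption-side check: read the disclosure back from the metadata.
    return Image.open(path).info.get("ai_generated") == "true"


if __name__ == "__main__":
    label_ai_frame("frame.png", "frame_labeled.png")
    print(is_labeled("frame_labeled.png"))  # True if the disclosure survived
```

The obvious weakness of a naive scheme like this is also why C2PA-style signed manifests matter: a plain text chunk disappears the moment the file is re-encoded or screenshotted, whereas signed provenance data is at least verifiable whenever it is present.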
Table 1: AI Content Labeling Requirements by Country/Region
| Country/Region | Metadata Marking Requirements | Visible Watermark Requirements | Effective Date | Information Source |
|---|---|---|---|---|
| China | Mandatory: All AI-generated content must include metadata for tracking, classification, and platform information. | Mandatory: All publicly released AI-generated content must carry clear (visible) watermarks. | September 1, 2025 | China Law Translate, Douyin Regulations |
| EU | Mandatory: Under the AI Act, AI-generated content must carry machine-readable metadata (e.g., digital watermarks, C2PA). | Not explicitly mandated, but future guidelines or industry practices may require it. | August 2026 | EU AI Act |
| USA | No federal legal requirement; multiple bills have proposed mandates, and some states (e.g., California) have passed legislation. | No federal legal requirement; California law and proposed federal bills require visible watermarks. | California: January 1, 2026 | FPF, PBS |
| Australia | No legal requirement; best practice is using metadata and provenance standards (C2PA). | No legal requirement; visible labels are best practice but not mandatory. | – | Legal123 |
| ASEAN | Strongly recommended; regional guidelines support digital watermarks and encrypted provenance (C2PA). | Recommended as best practice, not legally mandatory. | – | ASEAN Guide |
| UAE/Middle East | No AI-specific legal requirements; ethical guidelines encourage transparency and data provenance. | No legal requirements; visible labels encouraged under ethical principles. | – | Thomson Reuters |
Beyond Viral Content
However, despite the complexities of copyright, misinformation, and platform regulation, Veo 3’s potential is also being explored by professional directors. While countless content creators chase the next “Bigfoot”-style viral moment, self-made Turkish director Öner S. Biberkökü chose a completely different path. His goal wasn’t a quick viral moment that would spread on social media, but to use this new technology to tell a story that could move people deeply.
Öner’s collaborator is a household name in Turkey: “Queen of Pop” Sezen Aksu, a legendary figure whose career spans five decades in music and who released a new song, Doğrucu. “I want people to remember their childhood,” Öner told me in our interview. “I listened to her songs as a child; her voice has special meaning for Turkish people.”
This project cost tens of thousands of dollars in Veo 3 credits alone, plus substantial human effort, and was completed by Öner’s team at Pepperroot Studio working around the clock. During creation, Öner and his team probed Veo 3’s technical limitations in depth. He frankly admitted that achieving character consistency with this tool alone was a “nightmare,” forcing them to combine multiple generation methods through careful planning and repeated experimentation to finally achieve the desired outcome.
This approach differs from “Uchigatana,” a music video Öner created four months earlier (which I often refer to as “Reborn as a Samurai in Edo”) that pursued spectacle, style transfer, and lip sync by an AI character to showcase AI’s capabilities. The new music video with Sezen Aksu deliberately maintained restraint and simplicity. “I wanted a simple music video with only one magical moment: the instant when sparrows lift the woman from a fall,” Öner said.
He prioritized emotional delivery over technical showmanship. Ultimately, that “magical moment” served the song’s narrative, giving concrete expression to the work’s aspiration toward hope.
The same creative tool yields very different results in different hands: some use it to produce comedic viral content, some to fabricate misinformation, and some to summon a nation’s emotional response.
Looking Forward: The AI Renaissance
In addition to Google’s Veo 3, the AI video model field is highly competitive, with numerous participants. There are platforms like Kling (by Kuaishou) and Dreamina (by ByteDance) that draw on vast video training data from their parent short video platforms, as well as companies like Runway, Higgsfield, and Minimax continuously iterating products in their respective niches.
New technological advances are announced almost every few days. Rumors are already circulating on Reddit that Veo 4 will be released in December 2025, though no one can accurately predict what capabilities these models will have by then.
However, as Google CEO Sundar Pichai said in a podcast interview, the trend from these advanced models is clear: they will enable a dramatic increase in content creation, empowering a broader range of creators and democratizing video creation. He predicts that using AI video tools will become as commonplace as “using Google Docs is today.”
But I believe the core of content creation has never been, and should never be, determined by how advanced our tools are, but by the human spirit with which we wield these tools.
The Veo 3 viral moment represents more than technological progress; it’s a mirror reflecting the essence of human creativity. When a Syrian immigrant in Germany creates artistic expression with an AI Bigfoot, when a Turkish team produces emotionally complete stories at minimal cost, when the boundaries between real and synthetic blur, and when you and I can use AI tools to manifest “What You Think Is What You See,” we’re not just witnessing a tool’s evolution. We’re observing a transformation across the entire content industry, what some call the AI Renaissance: an era in which AI is accelerating the rediscovery and redefinition of humanity’s place in the world.
This emerging AI Renaissance brings us back to our fundamental principles. The question we need to answer isn’t whether AI will change content creation—because it already has. The question is whether we can establish a framework that embraces these tools’ democratizing potential while protecting the value of human creativity.
In that future of AI Renaissance, the stories we choose to tell—and how we tell them—will matter far more than the technology used to bring those stories to life.
References
I. Interview Materials
- Öner S. Biberkökü and Cansın Çetin Kuşluvan (Co-founders of Turkish creative studio Pepperroot Studio). Video conference recordings with the author, June 2025.
- Mo a.k.a. “Speedilla” (AI content creator). Video conference recordings with the author, June 2025.
- Lex Fridman. (June 5, 2025). Transcript for Sundar Pichai: CEO of Google and Alphabet | Lex Fridman Podcast #471. Retrieved June 29, 2025, from https://lexfridman.com/sundar-pichai-transcript/
II. Legal Documents and Reports
- ASEAN. (2025). Expanded ASEAN Guide on AI Governance and Ethics for Generative AI. Retrieved June 28, 2025, from https://asean.org/wp-content/uploads/2025/01/Expanded-ASEAN-Guide-on-AI-Governance-and-Ethics-Generative-AI.pdf
- Disney Enterprises, Inc., et al. v. Midjourney, Inc. (2025). Complaint for Direct Copyright Infringement and Secondary Copyright Infringement. U.S. District Court for the Central District of California.
- European Parliament. (June 8, 2023, updated February 19, 2025). EU AI Act: first regulation on artificial intelligence. Retrieved June 29, 2025, from https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- Future of Privacy Forum. (2024). U.S. Legislative Trends in AI-Generated Content: 2024 and Beyond. Retrieved June 28, 2025, from https://fpf.org/blog/u-s-legislative-trends-in-ai-generated-content-2024-and-beyond/
- U.S. Congress. (2023). S.2691 – AI Labeling Act of 2023. Retrieved June 28, 2025, from https://www.congress.gov/bill/118th-congress/senate-bill/2691/text
- King & Wood Mallesons. (March 10, 2025). Clearing the Fog: Analysis of AIGC Copyrightability in the US and Case Studies. Retrieved June 28, 2025, from https://www.kwm.com/cn/zh/insights/latest-thinking/copyrightability-of-ai-generated-works-in-the-us-and-typical-cases.html
- Digital Policy Alert (2025). China: Mandatory National Standard Labeling method for content generated by artificial intelligence (GB 45438-2025). Standard enters into force on September 1, 2025. Retrieved June 29, 2025, from https://digitalpolicyalert.org/change/10930
III. News Reports and Analysis
- Chow, A. R., & Perrigo, B. (June 3, 2025). Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud. TIME. Retrieved June 27, 2025, from https://time.com/7290050/veo-3-google-misinformation-deepfake/
- Johnson, S. (June 23, 2025). The Out-of-Touch Adults’ Guide to Kid Culture: Bigfoot Vlogs. Lifehacker. Retrieved June 27, 2025, from https://lifehacker.com/entertainment/the-out-of-touch-adults-guide-to-kid-culture-bigfoot-vlogs
- Montgomery, B. (June 11, 2025). Disney and Universal sue AI image creator Midjourney, alleging copyright infringement. The Guardian. Retrieved June 28, 2025, from https://www.theguardian.com/technology/2025/jun/11/disney-universal-ai-lawsuit
- Odilov, S. (December 22, 2024). 2025 Is Poised To Usher In The AI Renaissance—Are You Prepared? Forbes. Retrieved June 28, 2025, from https://www.forbes.com/sites/sherzododilov/2024/12/22/2025-is-poised-to-usher-in-the-ai-renaissanceare-you-prepared/
- Ismail, K. (April 17, 2024). Snapchat to Watermark AI-Generated Images. PetaPixel. Retrieved June 28, 2025, from https://petapixel.com/2024/04/17/snapchat-to-watermark-ai-generated-images/
- Tolentino, J. C. (September 21, 2024). TikTok is watermarking AI-generated content. Here’s what that means. Mashable. Retrieved June 28, 2025, from https://mashable.com/article/tiktok-watermarking-ai-generated-content
IV. Platform and Policy Documents
- China Law Translate. (2024). A Glimpse of the Future? Why China’s Labeling Law May Signal a Global Shift in AI Governance. Retrieved June 28, 2025, from https://www.linkedin.com/pulse/glimpse-future-why-chinas-labeling-law-may-signal-swaner-jd-aigp-homqc
- Douyin. (2024). 抖音关于人工智能生成内容的标识的水印与元数据规范 (Douyin’s Watermark and Metadata Specifications for Labeling AI-Generated Content). Retrieved June 28, 2025, from https://www.chinalawtranslate.com/en/抖音关于人工智能生成内容标识的水印与元数据规/
- Legal123. (n.d.). How to Guide: Legal Issues with AI & ChatGPT in Australia. Retrieved June 28, 2025, from https://legal123.com.au/how-to-guide/legal-issues-ai-chatgpt/
- Meta. (February 2024). Labeling AI-Generated Images on Facebook, Instagram, and Threads. Retrieved June 28, 2025, from https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/
- PBS NewsHour. (June 13, 2023). New bipartisan bill would require labeling of AI-generated videos and audio. Retrieved June 28, 2025, from https://www.pbs.org/newshour/politics/new-bipartisan-bill-would-require-labeling-of-ai-generated-videos-and-audio
- TikTok Newsroom. (September 21, 2023). New labels for disclosing AI-generated content. Retrieved June 28, 2025, from https://newsroom.tiktok.com/en-us/new-labels-for-disclosing-ai-generated-content
- Thomson Reuters. (n.d.). How is AI regulated in the UAE? What lawyers need to know. Retrieved June 28, 2025, from https://insight.thomsonreuters.com/mena/legal/posts/how-is-ai-regulated-in-the-uae-what-lawyers-need-to-know