Michael Vasilkovsky didn’t start his career aspiring to transform how millions of people interact with cutting-edge technology. Yet today, as a Technical Lead specializing in generative AI at Snap, he has established himself at the forefront of a rapidly expanding field that aims to redefine digital experiences—from the creation of intricate 3D assets to the real-time generation of video. “I’m fascinated by how technology can evolve from a purely research-driven endeavor into products that genuinely reshape human creativity,” Vasilkovsky says. His journey spans top-tier research, hands-on product development, and leadership roles, culminating in innovations that serve an expansive user base and push the boundaries of what generative AI can accomplish.
Vasilkovsky’s trajectory began in academic research, where he first encountered computer vision challenges that demanded more than conventional machine learning solutions. He published at premier venues such as CVPR and SIGGRAPH, earning early recognition among the global research community. Over time, he noticed that while many researchers were producing impressive proofs of concept, only a fraction made it into actual products that people could use every day. “Translating research into something consumers find intuitive and valuable is a real challenge,” he reflects. “It’s not enough to create a brilliant model; you need to optimize its performance, build infrastructure around it, and then ensure it can withstand the real-world demands of millions, if not hundreds of millions, of users.”
His professional leap came when he joined Snap in 2021, just as augmented reality was accelerating in popularity. Instead of focusing exclusively on AR filters or lenses, Vasilkovsky set his sights on foundational 3D models, exploring how generative AI could revolutionize asset creation. “A typical 3D model for a game or AR application can take over 40 hours to develop from scratch,” he explains. “We are now cutting that time down to minutes, unlocking entirely new levels of creative freedom.” Indeed, one of his hallmark achievements is leading the development of a suite of generative AI tools that allow both novices and professionals to produce high-grade 3D assets using simple text prompts or images. This dovetails with the growing market appetite for AI-powered content creation; according to the IBM Global AI Adoption Index 2022, 35 percent of businesses reported using AI in their operations (Source: IBM).
While Snap’s platform is home to hundreds of millions of daily users, Vasilkovsky’s focus extends beyond the specifics of the company’s ecosystem. He emphasizes the broader implications of generative AI for anyone looking to streamline content production or develop immersive experiences. “We’re seeing a paradigm shift. An artist without extensive technical knowledge can now create a 3D asset or short animated clip by describing what they want in a single sentence,” he points out. “That empowerment fosters new types of creativity and drives economic opportunities for both individual creators and businesses.”
Statistics highlight the accelerating demand for generative AI solutions. A 2022 report by Omdia predicts the generative AI market will surge from an estimated $1.2 billion in 2022 to $10.8 billion by 2027 (Source: Omdia). In part, that explosive growth reflects an increasing awareness across industries—from gaming to e-commerce—of AI’s potential to lower production costs and shorten go-to-market timelines. “If a fashion retailer wants to showcase a new clothing line in a virtual runway setting, historically they’d need specialized 3D artists, hours of rendering time, and complicated workflows,” Vasilkovsky notes. “Now, they can generate and animate digital garments in a fraction of the time and cost.”
His unique vantage point allows him to see how generative AI can be applied to user-driven experiences, particularly those involving augmented or virtual reality. One of his notable projects involves integrating text-to-3D and image-to-3D pipelines into creation platforms, enabling creators to build more compelling visuals that can be rigged to follow human movement. “Anyone can place these 3D objects into an interactive environment,” he says. “We’ve proven that generative 3D assets can be effectively integrated into consumer-facing experiences with minimal friction. That’s the real breakthrough: bridging research with real-world application.”
A hallmark of Vasilkovsky’s career is his holistic approach to development. While many AI specialists are accustomed to working within the silo of a research lab, he manages projects through the entire lifecycle—from paper submissions to deploying a final product. “It’s not just about building the model or the algorithm. You have to think about performance optimization, server loads, user interface design, and the moral or ethical questions around AI-generated content,” he says. His track record of building infrastructure that can handle large-scale use is rooted in practical necessity: Snap’s augmented reality features alone engage a massive audience every day. “When you have a platform that sees hundreds of millions of daily active users, your innovations need to be robust and responsive,” Vasilkovsky adds.
Part of that robustness comes from actively collaborating with cross-functional teams, including data scientists, software engineers, and product managers. He considers these collaborations critical not just for scaling AI models, but also for ensuring they align with human behavior and creative needs. “A specialized model might perform flawlessly in the lab, but if the average user finds it unintuitive or slow, it’s essentially useless,” he remarks. “We discovered that giving creators a text-prompt-based interface significantly lowered the entry barrier. That’s when generative AI starts to truly democratize creation.”
His work has garnered attention in media circles, with outlets like Reuters, TechCrunch, and Maginative covering the launch of advanced generative AI tools and video generation models that incorporate camera control. While these achievements often reference Snap’s ecosystem, Vasilkovsky is cautious not to oversell the company itself. “I don’t view this purely as a Snap story,” he explains. “Yes, we have a large user base and a thriving augmented reality community, but the real narrative is about how generative AI can disrupt and improve content creation across industries.”
Looking ahead, Vasilkovsky sees video as the next major frontier. Existing models often require significant time—sometimes minutes per second of content—to render passable outputs, which is hardly practical in an age of instant media. “Waiting five or six minutes for a short video clip might be acceptable in certain production environments, but it’s not going to spark mainstream adoption,” he asserts. His goal is to dramatically reduce these generation times while preserving and even improving quality. “We want to allow people to create longer, more engaging videos, but that means tackling a host of new challenges. Video generation is inherently more complex than still-image generation: motion dynamics, camera angles, transitions, and even user interactivity come into play.”
To address these complexities, Vasilkovsky envisions new editing tools that go beyond typed prompts. “It’s one thing to describe a scene with words; it’s another to precisely control camera movement or the timing of certain actions,” he says. “I’m looking to merge generative AI with more intuitive user interfaces, so individuals can direct their video content much like a film director. That involves advanced rigging techniques, timeline editing, and real-time previews.”
In discussing these aspirations, he underscores the economic impact. According to a 2022 McKinsey study, 44 percent of organizations surveyed reported cost savings from AI adoption in various business units (Source: McKinsey & Company). As AI-based video generation becomes more efficient, it stands to slash production budgets for marketing agencies, film studios, and freelancers. “The tools we’re building aren’t just flashy demos; they could shift how entire industries operate,” he notes. “That’s a compelling reason to invest heavily in both research and productization. We’re creating new efficiencies, new workflows, and ultimately new opportunities.”
Part of Vasilkovsky’s vision includes democratizing these tools so that smaller businesses and individual creators also benefit. “Too often, innovations get locked behind large enterprises. I’d like to see generative AI-driven video creation become as commonplace as smartphone photography,” he says. “Widespread access is where you see real leaps in creativity and economic development. I want budding entrepreneurs, educators, and everyday social media users to tap into this potential.”
When asked about the human element—inevitable discussions around AI ethics, data usage, and potential displacement of artists—he is candid that these conversations are meaningful and require ongoing attention. “AI doesn’t replace human creativity; it augments it,” he says. “But there’s a learning curve. We need guidelines and best practices for how people use these generative tools. My hope is that it remains a catalyst for collaboration, not a cause for fear.” He adds that transparency and user education will be crucial. By making the mechanics of AI clearer and more user-friendly, he believes that people will better embrace the technology and use it responsibly.
Ultimately, Vasilkovsky measures success by how seamlessly generative AI integrates into people’s creative processes. “We’re at a point where AI can remove technical barriers and open up new forms of expression,” he says. “If our tools inspire someone who’s never touched 3D design or video production to create something original—and even share it with a global audience—then we’ve succeeded.” That democratizing ethos underpins his entire career. From publishing top-tier research to launching real-world products, he has consistently advocated for bridging the gap between cutting-edge technology and everyday experiences. In doing so, he has become one of the leading voices in generative AI, shaping the immediate future of content creation and carving out a blueprint for its long-term societal and economic impact.
“Every new tool or model we build can reshape the creative landscape,” he says. “I want to keep pushing those boundaries—faster video generation, more immersive 3D content, easier editing tools. The end goal is to empower as many people as possible to tell their stories. That’s what excites me most about generative AI: it’s not just for big corporations or specialized studios. It’s for all of us.”