Figma has integrated AI across its design platform, from small tools like auto-naming layers to Figma Make, which can turn a text prompt, image, or design frame into working code that teams can edit together in real time. The result: prototypes that non-technical staff can build in hours, and in some cases code precise enough for engineers to move straight into production, all while the designer stays in control of the final output.
In a conversation with OpenAI, David Kossnick, Head of AI Products at Figma, explained how these features grew from earlier design-to-code infrastructure and a philosophy of AI as copilot: every AI-generated element, whether text, image, or code, remains fully editable so designers retain control.
Figma’s AI features build on infrastructure developed before AI was on the organization’s roadmap.
Two components were key. Dev Mode gives developers structured data from design files: CSS snippets, design tokens, and component details — reducing the friction of turning designs into working interfaces.
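As a rough illustration, here is a minimal TypeScript sketch of what that structured data can look like once it reaches a developer. The `DesignTokens` shape, token names, and values are invented for this example; they are not Figma's actual export schema.

```ts
// Hypothetical sketch of the structured data Dev Mode surfaces for a
// selected component: semantic design tokens plus a derived CSS snippet.
// All names and values here are invented for illustration.

interface DesignTokens {
  color: Record<string, string>;   // semantic color tokens
  spacing: Record<string, string>; // spacing scale
  radius: Record<string, string>;  // corner radii
}

const cardTokens: DesignTokens = {
  color: { "surface-primary": "#0d99ff", "text-default": "#1e1e1e" },
  spacing: { sm: "8px", md: "16px" },
  radius: { card: "12px" },
};

// Flatten the tokens into the CSS custom properties a developer would
// paste into a stylesheet.
function toCssVariables(tokens: DesignTokens): string {
  const entries = Object.entries({
    ...tokens.color,
    ...tokens.spacing,
    ...tokens.radius,
  });
  const lines = entries.map(([name, value]) => `  --${name}: ${value};`);
  return `:root {\n${lines.join("\n")}\n}`;
}

console.log(toCssVariables(cardTokens));
```

Handing developers named tokens rather than raw pixel values is what keeps generated code consistent with the design system, and it gives an AI agent the same structured context a human developer would use.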
The Model Context Protocol (MCP) server extends this by letting developers invoke a coding agent with full design context to generate production-ready frontend code, eliminating manual hand-off steps.
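To ground this, here is a minimal sketch of a client invoking such a server through the official MCP TypeScript SDK. The local URL, the `get_code` tool name, and the `nodeId` argument are assumptions for illustration, not Figma's documented interface; consult Figma's docs for the real server address and tool schemas.

```ts
// Minimal sketch: connect to a local design-context MCP server and ask
// a tool to generate frontend code for a design node. The endpoint,
// tool name, and argument shape below are assumptions.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

async function main() {
  const client = new Client({ name: "design-to-code-demo", version: "0.1.0" });

  // Assumed local SSE endpoint for a Dev Mode MCP server.
  await client.connect(
    new SSEClientTransport(new URL("http://127.0.0.1:3845/sse")),
  );

  // Discover what the server offers (code generation, tokens, images, ...).
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Ask for generated code for a specific node; "get_code" and the
  // nodeId argument are hypothetical here.
  const result = await client.callTool({
    name: "get_code",
    arguments: { nodeId: "1:23" },
  });
  console.log(result);

  await client.close();
}

main().catch(console.error);
```

Because MCP is an open protocol, any coding agent that speaks it can pull design context this way, which is what removes the manual copy-and-paste hand-off between design and code.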
Together, these systems formed a bridge between design and code that AI could readily use.
When Figma introduced Figma Make, the existing design-to-code pipeline meant it could turn a prompt, image, or frame into an interactive application without new infrastructure. Though Make was built for rapid prototyping, some teams now use it to generate code accurate enough for production.
Its accessibility has also led to unexpected uses. One HR staffer with no coding background built a “Who’s Who” game in two hours using data from the company’s HR system; the game has since become part of Figma’s onboarding process for new recruits. Kossnick said these kinds of outcomes reflect a broader principle guiding Figma’s AI work:
“AI is going to help humans explore much faster, go much further in their ideation, but I think all the human judgement, empathy, craft, taste, is what it means to be the pilot, not the copilot.”
In practice, Figma applies this “pilot, not copilot” philosophy by keeping every AI-generated element, whether text, image, or code, fully editable. Users can refine outputs to match their intent, whether they begin with a prompt, a visual design, or a code snippet. This avoids the locked, uneditable outputs common in other tools and ensures that AI accelerates the work without limiting the craft.
That control extends to how people work together. Figma has adapted its multiplayer design model to its AI features, so designers, developers, and other stakeholders can work in the same file at the same time, see each other’s changes, and prompt the AI together while it generates or updates content.
AI is also part of shared rituals. In FigJam and Slides, teams use image generation together: creating customised anniversary cards by remixing colleagues’ avatars, for example.
These capabilities are also used to test product ideas or assemble internal tools that might not otherwise be built.
Figma’s approach shows how embedding AI into an existing collaborative platform can lower the barriers to making functional software.
For Kossnick, the value lies in keeping AI as an assistant that speeds the work, while leaving the craft, and the final call, in human hands.