02. Your Design System Has a New Job: How Atlassian and Figma Are Building for AI
Design systems have always mattered, but they have rarely felt urgent. Everyone agrees they need one, and that it’s worth the effort in the long run. But there’s always a product deadline, a launch, or something more immediate that has to ship before anyone gets back to the foundational design system that actually holds everything together.
That tradeoff used to make sense. It doesn’t anymore.
Not because someone finally made the case to prioritize it but because AI changed everything. If your design system only talks to humans, it’s only doing half the job. You’re ignoring its new power users: the AI agents that can spit out more code in a minute than your entire engineering team could write in a month.
If the AI doesn’t have your design system to guide it, it’s just going to make it up. On the surface, everything looks fine. But underneath, it’s gradually breaking apart everything your team spent years building.
What the Design System Actually Does Now
For years, design systems were basically just instruction manuals for people. We used them to make sure everyone was using the same buttons and didn’t mess up the spacing. It was mostly about keeping designers on the same page. The audience was designers, plus the occasional engineer who actually bothered to check what we intended.
Back then, the whole point was just writing things down. It was about documentation, nothing more.
Today, the job has changed. These AI coding agents look at your existing context to figure out which components to use and how things should actually work. If the design system is available for the agent to find, it uses it. If not? The agent just fills in the blanks based on general training data. It’s a fast track back to 2013, when every site looked like a generic Bootstrap clone because every tool was pulling from the same basic bucket. Please, let’s not go back to that.
AI doesn’t know about your product. It doesn’t know which component you deprecated last quarter, or that your button radius is 6px not 8px, or that you never use that particular pattern because it failed user testing eighteen months ago. To a stranger, the code it spits out looks fine. But to you, it’s obviously wrong. It’s ‘off’ in that annoying way that’s hard to put into words, and it creates a mountain of technical debt that you’ll be stuck cleaning up later.
The design system’s job has changed: not just telling humans what to do, but giving AI enough context to get it right.
The Teams Building This Now
Atlassian is one of the clearest examples of what it looks like to actually solve this problem rather than talk about solving it.
When their design system team started watching AI coding tools generate interfaces, they noticed the quality gap immediately. The AI had access to their component library, technically. But access isn’t context. It could see that a component called Button existed. It couldn’t understand the full design intent behind it: when to use the primary variant versus the subtle one, what patterns the design system team had deliberately excluded, what accessibility requirements were baked into the component’s behavior.
Their solution was to build what they call “agentic content”: structured documentation written specifically for AI to read and reason from, not for humans to skim. Not a README. Instructions, with enough specificity that an AI agent could make the right decision without escalating to a designer.
They also built a direct connection between their design system and the AI coding tools their engineers use. (In technical terms, this is an MCP server.) The idea is straightforward: when an engineer’s AI agent generates code, it can see the design system the same way it sees the codebase. Component names, token values, usage rules, anti-patterns: all of it available at the moment code gets generated, not discovered later in a design review.
What came out of that work was fewer hallucinated components, fewer design review cycles spent catching AI-generated inconsistencies, and engineers who could move faster without constantly checking in with design.
Figma is tackling this from a platform perspective. Their MCP server, announced at Schema 2025, brings design file context directly into developer environments. The design system stops being a static document that someone has to translate and becomes something the AI can read directly.
Three Things That Have to Change
Building this layer isn’t just a technical project. It requires a different way of thinking about what the design system is for.
The first thing is writing documentation for AI, not just for humans. This means going back through your component library and asking a different question for each entry: if an AI agent only had this documentation to work from, would it make the right decision? Not a reasonable decision; the right one. The one your design team would make. Most design systems fail this test badly, not because the documentation is poor, but because it was never written with this reader in mind.
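To make the difference concrete, here is a minimal sketch of what a documentation entry written for that reader could look like: the same knowledge a human-facing component page carries, restructured so an agent can act on it directly. Every name, token value, and rule below is hypothetical, not Atlassian’s actual schema or yours.

```ts
// Hypothetical "agentic" documentation entry for a Button component.
// Structured so an AI agent can reason from it rather than skim prose.
export const buttonDoc = {
  component: "Button",
  package: "@acme/design-system", // hypothetical package name
  variants: {
    primary: "One per screen; reserved for the single most important action.",
    subtle: "Secondary actions, toolbars, and dense layouts.",
  },
  rules: [
    "Border radius comes from the radius token (6px); never hard-code 8px.",
    "Never place two primary buttons in the same dialog footer.",
  ],
  deprecated: ["DropdownButton"], // removed last quarter; do not generate it
  accessibility:
    "Render as a native <button> with a visible label; never a styled <div>.",
} as const;
```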
Docs alone won’t get you there, though. You need a shared prompt library: a set of pre-built prompts that designers and engineers can use to interact with your design system through AI. These aren’t general prompts; they’re specific to your product, your patterns, and your conventions. “Create a notification component following our design system guidelines” is a different prompt from “create a notification component.” The first gets you something reviewable. The second gets you a guess. A prompt library means you don’t have to reinvent the wheel every time you want the AI to behave. It makes the interaction repeatable and teachable, which saves you from fixing the same mistakes every other day.
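A shared prompt library can be as simple as a handful of named, parameterized prompts that bake the conventions in so nobody retypes them. This is a minimal sketch; the prompt names, wording, and package name are illustrative, not a prescribed format.

```ts
// Hypothetical shared prompt library: named prompts that encode the design
// system's conventions so every request to the AI starts from the same rules.
export const dsPrompts = {
  notification: (severity: "info" | "warning" | "error") =>
    [
      "Create a notification component following our design system guidelines.",
      `Use the existing Banner component from @acme/design-system with the "${severity}" appearance.`,
      "Use spacing tokens only; never hard-code pixel values.",
      "Do not invent new components or colors; if nothing fits, stop and ask.",
    ].join("\n"),

  emptyState: (context: string) =>
    [
      `Create an empty state for ${context} using our EmptyState pattern.`,
      "Read the EmptyState documentation entry before writing any code.",
    ].join("\n"),
} as const;

// Usage: paste dsPrompts.notification("warning") into the coding agent, or
// wire the library into whatever tool your team uses to drive generation.
```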
But prompts still depend on a human remembering to use them. The harder problem is the technical bridge that gives AI direct, automatic access to your design system’s context, without anyone having to copy-paste a prompt or point the agent to the right doc. That’s what Atlassian built. The technical side might look different depending on your tools, but the core idea doesn’t change: put the design system in the room where the code actually gets generated, rather than having it show up after the fact in a design review.
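For the bridge itself, here is a minimal sketch of what a design-system MCP server could look like, assuming the official MCP TypeScript SDK (@modelcontextprotocol/sdk). The server name, tool name, and doc lookup are hypothetical placeholders; Atlassian’s and Figma’s actual implementations will differ.

```ts
// Minimal sketch of a design-system MCP server (assumes @modelcontextprotocol/sdk).
// It exposes one tool a coding agent can call at generation time to pull
// component guidance instead of guessing.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { buttonDoc } from "./agentic-docs.js"; // hypothetical entries like the Button sketch above

const server = new McpServer({ name: "acme-design-system", version: "0.1.0" });

server.tool(
  "get_component_guidance",
  { component: z.string().describe("Component name, e.g. Button") },
  async ({ component }) => ({
    content: [
      {
        type: "text",
        text:
          component === buttonDoc.component
            ? JSON.stringify(buttonDoc, null, 2)
            : `No guidance found for "${component}". Do not invent a component; ask the design system team.`,
      },
    ],
  })
);

// Once connected, any MCP-capable coding agent can read the design system
// the same way it reads the codebase.
await server.connect(new StdioServerTransport());
```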
The Strategic Shift This Creates
Look, you could frame this as a technical project. Build some tools, clean up the docs, the AI works better. Fine.
But here’s what’s actually happening. The design team that owns this infrastructure is writing the rules that every AI coding tool in the organization has to follow. Not guidelines that engineers might read. Rules that get enforced automatically, every time code gets generated.
Think about what that means. Engineering teams running AI tools are operating inside the design system’s constraints whether they realize it or not. Product managers who only care about shipping speed are getting outputs shaped by decisions the design team already made. For years, designers had to fight for a seat at the table, beg people to follow the specs, chase down inconsistencies after they’d already shipped. This is different. The influence moves upstream. It’s baked into the generation layer, not applied after the damage is already done.
This is the reframe that took me a while to see clearly. Design systems have always been how we help our teams do more with less. That was true when the audience was fifty designers who needed to stay in sync. Once every AI tool in the company starts pulling from your design system, you’re not helping fifty people stay consistent. You’re shaping every piece of UI the organization produces.
The Window Is Open Right Now
Not every team is going to see this coming. Building this now isn’t just about being “first”; it’s about writing the AI playbook for your company before a generic model writes a bad one for you.
Designers already know how to build playbooks. We’ve been doing it for years to keep humans in sync. Instead of a PDF that a human might glance at once, you’re building a living set of rules that an AI actually follows at scale, automatically, every single time an engineer hits ‘generate.’ And unlike your human colleagues, the AI will actually read it.
Remember the tradeoff from the top of this article? The one where the design system kept getting pushed to the bottom of the list because there was always something more urgent to ship? That tradeoff used to make sense because the cost accrued slowly. A little drift here, a few inconsistencies there. You could catch up later. Now the AI is generating that drift at a speed no team can clean up after. The tradeoff doesn’t work anymore. It hasn’t for a while.
Leslie Sultani is a design leader writing about AI, design practice, and organizational change.
Further Reading
Turning Handoffs into Handshakes: Integrating Design Systems for AI Prototyping at Scale — Lewis-Ethan Healey & Kylor Hall, Atlassian. The source behind the Atlassian case study in this article: how their design system team built agentic content and MCP infrastructure for AI-generated code.
Design Systems and AI: Why MCP Servers Are the Unlock — Ana Boyer, Figma. The case for MCP servers as the technical bridge that makes design system context available to AI coding agents at generation time.
Schema 2025: Design Systems for a New Era — Figma’s recap of Schema 2025, including the Dev Mode MCP server announcement.
Designers’ Workflow for Shipping Code — Eduardo Sonnino, Atlassian. A practitioner’s account of how the designer-to-engineer workflow changes when AI is generating the code.
Storybook MCP for React — Kyle Gach, Storybook. Storybook’s MCP server gives AI agents direct access to component metadata, usage patterns, and test guardrails, the same bridge Atlassian and Figma built, from the component documentation side.


