The Design System Is the Same for Humans and AI. Until It Isn’t.
This is the first piece in this series where I'm writing from something I built, not something I observed.
Partway through building the design system for my own portfolio site, I noticed something I hadn’t expected. The choices I kept making were shifting based on a question I wasn’t consciously asking: who’s going to read this?
When I pictured a designer opening the Figma file, I wanted tokens named for legibility, specimen pages showing components in context, usage notes written in plain language with visual examples. When I pictured an AI agent reading the same system to generate a chart or a deck slide in my voice, something different happened. I went heavier on the naming structure, added aliases that encoded intent rather than value, wrote descriptions on every component, and built a separate rules document for the things that must never change.
The two sets of choices didn’t just feel different. They pulled in opposite directions. And sitting with that long enough made something visible: AI agents and human designers don’t just have different preferences for how a system is organized. They have different cognitive modes entirely, and optimizing a design system for one doesn’t give you the other for free.
Two Readers, Two Cognitive Modes
A human designer reading a design system brings context. Years of pattern recognition, product knowledge, the ability to look at a component and infer from experience when to use it and when not to. They skim. They interpret. They work from examples and fill gaps from what they already know about design conventions and your specific product.
An AI agent does none of that. It queries against what’s explicitly in the system. If a token is named gray-900, the agent knows it’s a dark gray. What it doesn’t know is that gray-900 is your foreground color for primary text, that you never use it for disabled states, that your visual identity depends on using it only at full opacity and never tinted. That context lives in the heads of the designers who built the system, in decisions made two product cycles ago, in the implicit rules everyone follows without ever writing down. The agent doesn’t have access to any of it.
This is the actual problem. Not that AI can’t use a design system. It’s that most design systems were built for readers who could infer. AI can’t infer. It queries. And when it queries against sparse, implicit documentation, it fills the gaps from general training data, which doesn’t know your product, your voice, or the specific token your team deprecated last quarter.
What Changes When You Build for Both
Building my own system with both audiences in mind, I made four choices I wouldn’t have made if I were only designing for a human team.
Token aliasing - Instead of a flat list of presentational values, the system has two layers: primitives that hold the raw numbers, and semantic tokens that encode the intent. color/semantic/foreground. color/semantic/border. motion/duration/slow. The semantic layer is what components consume and what the agent reads. The name carries the meaning. An agent parsing color/semantic/foreground knows something about where that token belongs that it couldn’t infer from a hex value alone. The rule in the system is explicit: components consume semantic tokens, never primitives. If something needs a color the semantic set doesn’t cover, you add a semantic token. You don’t reach for a primitive and hope the agent figures it out.
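To make this concrete, here is a minimal sketch of the two-layer structure expressed as TypeScript. The semantic paths (color/semantic/foreground, color/semantic/border, motion/duration/slow) and the gray-900 primitive come from the system described above; the hex values, the other primitives, and the resolve helper are hypothetical stand-ins, not the real implementation.

```typescript
// Minimal sketch of two-layer token aliasing. Values are hypothetical.

// Layer 1: primitives hold raw values and carry no intent.
const primitives = {
  "color/gray/900": "#1a1a1a", // hypothetical hex, stands in for gray-900
  "color/gray/300": "#d4d4d4",
  "duration/600ms": "600ms",
} as const;

// Layer 2: semantic tokens alias primitives; the name carries the meaning.
const semantic = {
  "color/semantic/foreground": primitives["color/gray/900"],
  "color/semantic/border": primitives["color/gray/300"],
  "motion/duration/slow": primitives["duration/600ms"],
} as const;

// Components consume only the semantic layer. An agent that reads
// "color/semantic/foreground" learns where the token belongs from the
// name alone, which a bare hex value could never tell it.
type SemanticToken = keyof typeof semantic;

function resolve(token: SemanticToken): string {
  return semantic[token];
}
```

The constraint the sketch encodes is the one stated above: every lookup goes through a name that carries intent, and covering a new need means adding a semantic token rather than reaching past the layer to a primitive.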
Component descriptions - Every component in the library has a written description in plain text. Not a visual example, not a usage diagram, a sentence that makes the intent explicit. Designers read these as intent documentation. Agents treat them as decision rules. The same text serves both readers, but I wouldn’t have written it at all if I weren’t building for an audience that can’t rely on visual context to fill in what I didn’t say.
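As a rough illustration, assuming a simple metadata shape rather than any particular Figma or library API, and with wording that is illustrative rather than quoted from the actual system:

```typescript
// Hypothetical component metadata. The description is one plain-text
// sentence: a designer reads it as intent documentation, an agent
// treats it as a decision rule. Shape and wording are illustrative.
interface ComponentMeta {
  name: string;
  description: string;
}

const statBlock: ComponentMeta = {
  name: "StatBlock",
  description:
    "Displays a single headline metric with a short label; " +
    "never used for body copy or for more than one number at a time.",
};
```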
Flat component tree - Deep nesting hides structure from agents that parse by traversal. I kept the tree shallow, which also turned out to make the system easier for human designers to navigate. That pattern kept showing up: the discipline that makes a system legible to AI tends to make it more legible to people too. The gaps agents can’t bridge are usually the same gaps that junior designers quietly paper over by asking someone who’s been on the team longer.
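To show why depth matters to a reader that parses by traversal, here is a hedged before-and-after sketch; the layer and component names are invented for illustration.

```typescript
// Deep: the structure records the order decisions were made in, not
// intent. An agent has to reconstruct meaning through four levels of
// anonymous grouping.
const deepTree = {
  "Group 12": {
    Frame: {
      "Group 3": {
        Card: { Title: {}, Body: {} },
      },
    },
  },
};

// Flat: every meaningful unit sits one hop from the root under a name
// that carries intent, so traversal and naming tell the same story.
const flatTree = {
  "Card/Default": { Title: {}, Body: {} },
  "Card/Compact": { Title: {} },
};
```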
Rules file - A separate Markdown document at the root of the repository, labeled explicitly as an AI consumption file. It encodes the system’s invariants: what never changes, what’s forbidden without justification, what each semantic token means and why. For a designer, it’s a reference for edge cases and onboarding. For an agent, it’s the operating context that makes everything else parseable. The first line reads: “Read this before generating any visual, component, page, or marketing artifact in Leslie’s voice.”
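A sketch of what the top of such a file might look like. Only the first line is quoted from the actual document; the invariants below are reconstructed from rules stated elsewhere in this article, and the exact structure is hypothetical.

```markdown
Read this before generating any visual, component, page, or marketing
artifact in Leslie's voice.

## Invariants
- Components consume semantic tokens, never primitives. If the semantic
  set doesn't cover a need, add a semantic token; don't reach for a
  primitive.
- gray-900 (color/semantic/foreground) is for primary text only: never
  for disabled states, always at full opacity, never tinted.
```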
There’s something the four choices above don’t say: during active iteration, the file looked nothing like any of this. Layers unnamed. Components nested three and four levels deep because that’s how the thinking happened, in stages, one decision stacked on the last. That’s not sloppiness. That’s design work. You’re moving fast, testing ideas, and the file structure reflects the mess of actually figuring something out. The problem is that mess is exactly what breaks AI consumption. An agent parsing a deeply nested, unnamed component tree isn’t going to infer your intent from the chaos. It’s going to produce outputs that drift from the system in ways that are hard to trace back to anything specific. What I hadn’t expected is that AI is also what solves this. You can ask it to do the cleanup pass: rename the layers, flatten the nesting, surface the implicit decisions and make them explicit. The extra step is real, but it doesn’t have to be a human step.
The Discipline That Transfers
Semantic aliases, written descriptions, shallow trees, and a companion rules file: none of these turned out to be concessions to AI. They’re just good systems practice that most teams skip because human designers can compensate for the gaps. When someone who built the system is always nearby to answer questions, you don’t feel the cost of leaving things implicit. The cost only becomes visible when the reader can’t ask.
AI doesn’t ask. It works from what’s there. And the teams finding this out the hard way are the ones whose AI-generated output keeps drifting from the system: right colors, wrong token. Close component, wrong variant. Technically not broken, but obviously off to anyone who knows the product.
The discipline of building for a reader that can’t infer is also the discipline of building for anyone who wasn’t in the room when the decisions were made. That’s not a new design systems problem. It’s been invisible until now because the readers who could compensate for it were always humans.
The Gap Most Systems Have
Most design systems I’ve encountered will fail AI consumption not because they’re poorly designed but because they were never designed with this reader in mind. The documentation is sparse because everyone assumed the reader could infer. Token names are presentational rather than semantic because the team was moving fast and everyone already knew what the tokens meant. Component descriptions are visual rather than verbal because designers communicate in images.
None of that is negligence. It’s rational given the original audience. The problem is the audience has changed, and most systems haven’t caught up.
Teams at Atlassian and Figma are building this layer intentionally now, and if you read Your Design System Has a New Job earlier in this series, this is the structural change that makes that argument possible in practice. Not a different system for AI: the same system, built with more explicit intention about what it’s actually communicating and to whom.
A design system that only talks to humans is doing half the job. The half that’s missing is becoming more expensive every month as more of what ships gets generated rather than drawn.
The system I built is live at library.lesliesultani.com, and the rules file that governs AI consumption is at the root of the repository. You can see the Figma design system here.
When I wrote the first line of that rules file, I was talking to Claude. But I was also clarifying something for myself. Making a system legible to an agent that can’t infer forces you to be explicit about things you’ve been leaving implicit for years. The token names have to carry meaning. The component descriptions have to be verbal, not just visual. The invariants have to be written down somewhere a reader can actually find them.
That clarity, it turns out, helps everyone who reads the system. Not just the agents. The humans too.
Leslie Sultani is a design leader writing about the intersection of AI, design practice, and organizational change.
Further Reading
Design Systems and AI: Why MCP Servers Are the Unlock — Ana Boyer, Figma. The technical foundation for making design system context available to AI agents at generation time — the infrastructure argument that pairs with this article’s structural one.
Turning Handoffs into Handshakes: Integrating Design Systems for AI Prototyping at Scale — Lewis-Ethan Healey & Kylor Hall, Atlassian. How Atlassian built agentic content and structured documentation for AI-generated code — the enterprise version of the approach described here.
Why Your Design System Is the Most Important Asset in the AI Era — Romina Kavcic, The Design System Guide. The economics: 41% of new code in 2025 was AI-generated. When code is cheap and understanding is expensive, the design system is the architecture that matters.
Agentic AI, Design Systems & Figma: A Practical Guide — Christine Vallaure, UX Collective. Practical implementation of agentic design system thinking — semantic tokens, consistent naming, and complete component states as AI infrastructure.

