The AI-Native Design Series: Articles, Resources, and the Reading List That Shaped Them
I’ve been researching and writing about what happens when design organizations take AI seriously at a structural level. Not the tools. The harder question underneath: what changes about how teams work, how decisions get made, who does what, and where human judgment actually matters when AI is embedded in the practice.
So far the research has taken me eight articles deep, and the series will keep unfolding as I uncover more. Each article started with something I noticed in real organizations; I followed the thread and tried to make it useful for other design leaders sitting with the same questions.
This page is the home base. The full table of contents for the series, updated as new articles publish, plus the annotated reading list of every source that shaped my thinking along the way. Bookmark it if you want one URL for all of it.
The AI-Native Design Series
Article 0: What Does Design Look Like When AI Changes Everything? The question most designers aren’t asking out loud. When AI handles execution, what are designers actually for? The answer is clearer than the anxiety around it suggests.
Article 1: What “AI-Native Design” Actually Means (And Why Most Teams Are Getting It Wrong). Three stages: AI-aware, AI-augmented, AI-native. Most teams think they’re at stage three because they bought the right licenses. The distance between buying a camera and knowing how to see.
Article 2: Your Design System Has a New Job. Design systems used to talk to humans. Now their most important audience is AI agents generating code at scale. If your system isn’t legible to machines, the machines are making it up. Atlassian and Figma are already building for this.
Article 3: The Framework for Where Human Judgment Still Lives. A Stakes × Novelty matrix that maps any piece of design work into four zones, each with a different relationship between human judgment and AI. Plus four checkpoints that don’t move regardless of how the work is categorized.
Article 4: Why AI Fluency Isn’t a Training Problem. 96% of designers using AI today taught themselves. No curriculum, no certification, no one measuring their completion rate. So why does every organization default to training when teams aren't adopting AI? The problem isn't what most design leaders think it is.
Article 5: The Design System Is the Same for Humans and AI. Until It Isn’t. The idea sounds clean: one system for humans and machines. In practice, it doesn’t hold the way you expect. I tested this by building my own portfolio using the same principles.
Article 6: What AI Did to the Design Process. AI didn't kill the design process. It didn't just compress it either. It redistributed it. Three specific shifts are happening at once in the teams moving fastest, and most teams are only seeing one of them. That's why their AI investment keeps producing output without producing progress.
Article 7: When AI Decides and Humans Sign Off. There's a design principle underneath every high-stakes AI product: AI is the decision support. The human is the decision maker. Most products violate that contract by design. Four documented cases across law, healthcare, autonomous systems, and criminal justice show what it costs when the design didn't account for that. The gap between "human in the loop" and human judgment is a design problem. Most AI companies aren't treating it as one.
[Coming Soon] AI-Native Design at Every Stage: What Changes and What Goes Wrong. Where do you start with AI? The answer depends almost entirely on the size and shape of the organization asking. A four-person startup and a two-thousand-person enterprise aren’t solving the same problem when they say they want to be AI-native. What actually changes across startups, mid-size companies, and large enterprises, and what’s the specific failure mode at each stage?
[Coming Soon] The Trust Problem AI Creates That Nobody in Design Is Talking About. What happens to users when AI is inside the product making decisions about what they see, what they're offered, and what they can do? Is trust architecture becoming the most consequential design challenge of this moment, and why aren't more teams treating it that way?
[Coming Soon] What a Design Sprint Looks Like Now. The five-day design sprint was codified in 2016. What does it look like when AI is embedded in the process, and how does the process change?
[Coming Soon] What AI Does to the Cost of a Pivot. Pivots have always been the right strategic move made at the wrong time. What happens to that calculus when AI changes the economics underneath the decision?
… more unfolding
The Reading List
These are the sources that kept showing up in my research. Not a generic collection. Every one of these shaped how I think about AI-native design, and most of them informed specific arguments in the series. I’ve organized them by theme and added my notes on what each one actually says and why it matters.
The Research
State of AI in Design 2025 — Foundation Capital & Designer Fund. The closest thing to a census of where design teams actually stand with AI. The finding that 96% of designers using AI are self-taught reframed my entire thinking about the fluency problem. The 84%/39% gap between exploration and delivery adoption shaped the adoption argument that runs through the series.
State of the Designer 2026 — Figma. Figma's annual survey of 906 designers across five regions. The headline numbers: 72% now use generative AI tools, 98% increased their usage in the past year, and 91% say AI improves the quality of their outputs. But the finding I keep coming back to is this: designers who are increasing their AI usage are 25% more likely to report rising job satisfaction than those who aren't. The flip side is just as telling: 40% of designers whose AI usage stayed flat say their job is getting worse. The craft data is equally important. At companies that increased their emphasis on craft, 67% of designers reported higher satisfaction; where leadership paid visible attention to design work, 60% did. If you're making the argument that judgment and craft become more valuable when AI handles execution, this is your evidence.
State of UX 2026: Design Deeper to Differentiate — Nielsen Norman Group. NN/g’s annual assessment calls 2025 the year of “post-hype AI” and names trust as the defining design challenge of 2026. If you read one industry report this year for strategic planning, this is the one.
2025 Design in Tech Report: Autodesigners on Autopilot — John Maeda. The eleventh annual report identifies the shift from UX to AX (Agent Experience). Maeda’s argument that designers must now design for AI agents, not just human users, runs parallel to the design systems argument.
Four Shifts Designers Can’t Ignore in the Age of AI — Ben Blumenrose, Designer Fund. Companion piece to the State of AI in Design report, distilling four operational and cultural shifts design leaders are facing right now. Written from an investor and design-leader lens.
Design Systems as AI Infrastructure
Agents, Meet the Figma Canvas — Matt Colyer, Figma. Figma opens its canvas to AI agents via MCP server beta, letting agents create and modify real design assets using existing components, variables, and tokens. Introduces "Skills," which are Markdown files that encode team design decisions so agents produce on-brand outputs consistently. Works with Claude Code, Codex, Cursor, and Copilot. The most significant design systems infrastructure announcement since MCP servers launched.
Agentic Design Systems in 2026 — Brad Frost, bradfrost.com. Brad Frost, creator of Atomic Design, coins “DS+AI” and argues agents should assemble UIs using the same components human teams use. Defines two dimensions every agentic design system needs: coverage (clear examples, states, and constraints for agents) and validation (tests and human sign-off before anything ships). Foundational piece from the person who defined modern design system thinking.
Agentic AI, Design Systems & Figma: A Practical Guide — Christine Vallaure, UX Collective. Practical implementation guide following Brad Frost’s Agentic Design Systems demo. Makes a useful distinction: “This is the opposite of vibe coding. The agent is not inventing; it is following.” Covers what it takes for design system basics (semantic tokens, consistent naming, complete states) to actually function as agentic infrastructure.
AI Design Systems Conference 2026 — Into Design Systems. Sold-out conference with 1,000-plus attendees and 21 experts from WhatsApp, GitHub, Figma, Adobe, Miro, and Atlassian. Sessions covered agentic design systems, machine-readable systems for MCP and LLMs, design systems as AI infrastructure, and encoding governance in agentic systems. Recordings are available and represent the most concentrated body of thinking on this topic from a single event.
Uber Automates Design Documentation with Agentic Systems — InfoQ. Case study of Uber’s uSpec system, which uses AI agents and the Figma Console MCP to automate component design specifications at scale. The agent crawls Figma component trees, extracts tokens and variants, and auto-generates platform-specific accessibility specs across seven stacks and three accessibility frameworks. The strongest enterprise-scale case study available for design systems combined with AI agents in production.
Vibe Design Is Real: Inside Google Stitch’s March 2026 Update — Tommaso Nervegna, Sorted Pixels. Deep analysis of Google Stitch’s introduction of DESIGN.md, a Markdown-based design system format built to be readable by AI agents. The core argument: “The design system of 2026 isn’t a Figma library with documentation. It’s a DESIGN.md file that travels between your design agent, your coding agent, and your prototyping environment. If your system can’t be read by a machine, it’s already legacy.” Also covers MCP integration and what agent-interoperability looks like in practice.
Storybook MCP for React — Kyle Gach, Storybook. Storybook’s MCP addon gives AI coding agents structured component metadata, so they build with your system instead of inventing their own version of it. The benchmarks tell the story: 12.8% better code quality, 2.76x faster generation, 27% fewer tokens. If you want to see what “design systems as AI infrastructure” looks like in production, start here.
Towards an Agentic Design System — Cristian Morales Achiardi, Design Systems Collective. Benchmarked six agent configurations and showed that structured, machine-readable design system infrastructure delivers 2x speed and 54% more accuracy at the same token cost. The most rigorous evidence I’ve seen that this investment pays off.
Why Your Design System Is the Most Important Asset in the AI Era — Cristian Morales Achiardi, The Design System Guide. Covers MCP versus CLI approaches, practical server setup, and the five context layers agents need to reason about components. His core point is hard to argue with: a design system is infrastructure, not a side project.
Design Systems and AI: Why MCP Servers Are the Unlock — Ana Boyer, Figma. The technical foundation for making design system context available to AI agents at generation time. Boyer’s argument shaped how I think about the difference between access and context.
Turning Handoffs into Handshakes — Lewis-Ethan Healey & Kylor Hall, Atlassian. How Atlassian’s design system team built agentic content and MCP infrastructure so AI-generated code actually reflects design intent.
Schema 2025: Design Systems for a New Era — Figma. Figma’s recap of the conference where they announced the Dev Mode MCP server.
Designers’ Workflow for Shipping Code — Eduardo Sonnino, Atlassian. A practitioner’s account of how the designer-to-engineer workflow changes when AI is generating the code. Useful for anyone trying to understand what the day-to-day actually looks like.
From Products to Systems: The Agentic AI Shift — John Moriarty, UX Collective. A design leader at DataRobot on designing for AI agents as a new user type alongside humans. Discusses agent-aware design system documentation in production and the tension between agent autonomy and user control.
The Role Shift
The Design Process Is Dead. Here’s What’s Replacing It. — Jenny Wen, Lenny’s Newsletter. Jenny Wen, Head of Design for Claude at Anthropic, argues the classic discover, diverge, converge design process has broken down. When engineers spin up AI coding agents and ship working versions before designers finish exploring, traditional workflows can’t keep pace. Design has split into two modes: supporting rapid implementation and upstream direction-setting on what to build. One of the most-discussed design and AI pieces of early 2026.
Operating as an AI-Native Product Designer in 2026 — Tom Scott & Vitor Amaral, Verified Insider. First-person account of AI-native design practice at Intercom. The designer’s role shifts from making to deciding: AI produces solid starting points, so the value moves to evaluation and judgment. Details a concrete daily workflow including a coded design system as the vocabulary for AI agents, Figma Code Connect mapping components to code, and a dedicated route for prototyping in the real app. One of the more grounded accounts of what this actually looks like day to day.
Outcome-Oriented Design: Designing in the Era of AI — Kate Moran & Sarah Gibbons, Nielsen Norman Group. Introduces outcome-oriented design as a structural replacement for traditional single-interface design. Designers now define adaptive frameworks that respond to individual user goals rather than optimizing for the average user. The shift moves design from prescribing a fixed UI to building systems that flex based on user context and desired outcome.
The Evolution of UX Design in the Age of AI Platforms: From Creator to Choreographer — Ken Olewiler, UXmatters. The source behind the creator-to-choreographer framing I reference in Article 0: What Does Design Look Like When AI Changes Everything. Olewiler’s argument that editorial judgment and taste are the skills that survive automation has held up well.
How Figma Integrates AI to Transform Design and Empower Creatives — David Kossnick, via OpenAI. Where the “vision carriers” concept comes from. Kossnick describes designers who understand where the product needs to go and can hold the long arc of user experience while everyone else is deep in the immediate problem.
5 Design Skills to Sharpen in the AI Era — Figma. Research-backed breakdown of which design skills compound in value as AI handles more execution. Based on the State of the Designer 2026 research.
The Death of Product Development as We Know It — Julie Zhuo. Co-written with Henry Modisett, head of design at Perplexity. Zhuo argues the traditional engineering/product/design team structure is dying. When building is cheap, everyone becomes a "builder" and role boundaries dissolve. Taste becomes the differentiator. I think she's right that the lines are blurring, but a designer, a PM, and an engineer will always look at the same problem through different lenses. That friction is what keeps teams from shipping the wrong thing faster. Remove the distinct perspectives and you just get a room full of people who think the same way. Worth reading and arguing with.
Shifting from Craft to Judgment in the Age of AI — Ravi Mehta, Atlassian. Captures the ratio shift better than almost anything else I’ve read. It used to be fifty-fifty between building well and deciding what to build. Now it’s closer to ninety-ten. The building got easier. The deciding didn’t.
The Future-Proof Designer — Nielsen Norman Group. Seven experts, 150-plus years of combined experience, all converging on the same theme: designers offer judgment built on expertise that AI can’t replicate. If you’re trying to articulate your value to a skeptical executive, the framing here will help.
Taste Is the New Bottleneck — Ivan Googol Medeiros, Designative. Goes beyond individual skill to explore taste as governance: who trains the systems that shape what good looks like? The most intellectually ambitious piece I’ve found on why taste and judgment are organizational concerns, not just personal ones.
Navigating the Next Decade as a Product Designer in Tech — Siva Sabaretnam, Atlassian. A practitioner’s perspective on how design’s advocacy role for users becomes more important, not less, as AI gets embedded in products.
How Service Design Will Evolve with AI Agents — Nielsen Norman Group. AI agents become new actors in service ecosystems. Previews “outcome-oriented design” where users specify results rather than performing steps. Forward-looking on how design practice must structurally evolve.
Trust Architecture
Identifying Necessary Transparency Moments in Agentic AI (Part 1) — Victor Yocco, Smashing Magazine. Introduces the Decision Node Audit, a structured method for mapping backend AI logic to the user interface. Uses an insurance company case study to identify which moments in an agent workflow require active transparency versus a simple log entry. The design challenge it names is finding the balance between the black box and the data dump. A companion to Yocco’s two earlier pieces, and the most methodologically specific of the three.
How to Design for Trust in the Age of AI Agents — World Economic Forum. Proposes a layered trust stack for AI agent autonomy: legible reasoning paths, bounded agency, goal transparency, contestability and override, and governance by design. Argues trust should be earned rather than engineered, through cognitive resonance rather than emotional persuasion. One of the cleaner frameworks available for thinking about trust as a designed architecture rather than a communication problem.
How to Get Your Customers to Trust AI — Ashley Reichheld, Sebastian Goodwin & Courtney Sherman, Harvard Business Review. Addresses the tension at the center of most AI product decisions: transparency is supposed to build trust, but companies regularly say too much and too little at the same time. Proposes embedding transparency within a broader trust framework, customizing disclosures for different audiences, and treating transparency as an ongoing process rather than a one-time disclosure.
The Trust Problem: Why Designing for AI Agents Is the Hardest UX Challenge of 2026 — Pedro del Rio, Medium. Traditional UX was built for passive systems; AI agents are proactive. They anticipate, decide, and act. Argues every agentic system needs a credible stop button and that designers must define clear thresholds between “act and report” and “ask before acting.” Makes the case that trust is a design decision, not a product or engineering one.
Dark Patterns in AI: How 2026 Made Them Harder to See — Stuti Mazumdar, Think Design. Documents the shift from static UI dark patterns to AI-systemic manipulation: sycophantic assistants, endless “helpful” loops, and AI subtly altering user intent during rewriting. Key point: “Two people could look at what seems like the same interface and be experiencing very different levels of persuasion” because of AI personalization. Useful for understanding the newer category of trust harm that isn’t visible in traditional dark pattern audits.
Dark Patterns UX: Manipulation Psychology 2026 — AgileSoftLabs. Covers the regulatory enforcement context: EU Digital Services Act Article 25 enforcement began in Q1 2026, Amazon’s $2.5 billion dark pattern settlement, and the finding that 97% of popular EU apps contain at least one dark pattern. Identifies AI-powered dark patterns as “an entirely new enforcement challenge, personalizing manipulation at machine speed and scale.”
Trust by Design: UX, AI, and Transparency Politics — Critical Playground. Theoretical treatment of trust as a design problem, not a branding or disclosure problem. References Google PAIR’s work and argues transparency is relational: what counts as clear varies across cultural contexts. “The politics of AI are increasingly negotiated at the UX layer.”
The Psychology of Trust in AI: A Guide to Measuring and Designing for User Confidence — Victor Yocco, Smashing Magazine. The most actionable trust framework I’ve found for practitioners. Defines the calibrated trust spectrum (from Active Distrust to Automation Bias) and gives you concrete measurement methods organized around four pillars: Ability, Benevolence, Integrity, and Predictability.
Beyond the Black Box: Practical XAI for UX Practitioners — Victor Yocco, Smashing Magazine. The companion piece, with mockup-level design patterns for explainability: “Because” statements, interactive factor sliders, source attribution, confidence visualizations. Introduces AI journey mapping for identifying where trust is most at risk.
When Should We Trust AI? Magic-8-Ball Thinking and AI Hallucinations — Nielsen Norman Group. Coins “magic-8-ball thinking,” the tendency to accept AI outputs without questioning them. Cites research showing AI legal tools report inaccurate information 17-33% of the time. Good ammunition for the conversation about why trust architecture isn’t optional.
Designing for Autonomy: UX Principles for Agentic AI — UXmatters. Reframes UX responsibility for agentic systems from “Is this usable?” to “Is this system behaving in alignment with human goals even when no one is watching?” Provides concrete questions for when to act versus wait, intervene versus observe.
Building AI Fluency
How Do Workers Develop Good Judgment in the AI Era? — David S. Duncan, Harvard Business Review. Research finding: AI helped experienced practitioners more than less-experienced ones, because judgment is the bottleneck, not information access. Building judgment requires clarifying who makes decisions, exposing people to consequences, restoring stretch experiences, and using simulations and case-based learning. Directly relevant to why training programs fail to build fluency and what to do instead.
AI’s Blind Spot: The Human Judgment Reckoning — Workday. Survey data on the gap between what leaders think is happening and what employees experience: 66% of leaders believe they are prioritizing skills training; only 36% of employees agree. Only 1 in 6 leaders is ready to use AI as a partner for complex decisions rather than a content generator. Useful data for making the case that the fluency problem is a culture problem, not a curriculum problem.
AI Product Builders Week: How hands-on experimentation is shaping Atlassian’s future — Atlassian. Over a thousand employees building with AI tools together for a week. The tool was never the point. The shared experience of trying together was. This shaped how I think about fluency-building.
A Design Technologist’s Take on AI Builders Week — Atlassian. The practitioner view from inside Atlassian’s program. What actually changes when teams build together with AI rather than train in isolation.
From Memo to Movement: Shopify’s Cultural Adoption of AI — First Round Capital. Goes past Tobi Lütke’s viral memo to show what actually happened inside the company. Three things that surprised me: they gave everyone access to the best AI tools (not just technical teams), they removed friction and cost barriers before expecting adoption, and they started with legal as a partner rather than a blocker. The biggest surprise: support and revenue teams adopted AI faster than engineering.
Creating Psychological Safety in the AI Era — MIT Technology Review × Infosys. 83% of business leaders say psychological safety directly impacts the success of AI initiatives. 22% of leaders have hesitated to lead AI projects because they’re afraid of being blamed if things go wrong. If you’re wondering why your team isn’t experimenting, this might be why.
AI Fluency: The New Product Superpower — Atlassian. Five frameworks for raising AI fluency across teams, plus named anti-patterns to avoid: tool tourism, automation theater, and prompt gatekeeping. Practical and specific.
Process Compression and Strategic Agility
Design Process Isn’t Dead, It’s Compressed — Sarah Gibbons, Nielsen Norman Group. The definitive response to the “throw out the design process” crowd. What looks like skipping steps is experienced designers running compressed versions. Exploring, making, learning, and refining can happen in a single afternoon now. The process is still there. It just moves faster.
Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts — Nielsen Norman Group. Rigorous evaluation of AI prototyping tools using real design scenarios. The key finding: AI gets you to about 60% fast. The last 40%, the part that requires judgment, remains human. Essential reading for setting realistic expectations about AI-accelerated workflows.
How to Get Your Entire Team Prototyping with AI — Colin Matthews, Lenny’s Newsletter. Practical playbook for making AI prototyping work across a team, not just for the one person who figured out the tools. Covers component libraries, team workflows, and handoff processes.
Why Your AI Product Needs a Different Development Lifecycle — Aishwarya Reganti & Kiriti Badam, Lenny’s Newsletter. Based on 50-plus AI implementations at OpenAI, Google, Amazon, and Databricks. Argues that AI products fundamentally break the assumptions of traditional sprint cadences. If you’re a design leader wondering why your team’s planning process feels off, this might explain it.
Why Product Judgment Matters More Than Velocity in the AI Era — Productboard. Captures the central paradox: as the cost of building drops, the responsibility to choose well increases. Velocity stops being the differentiator. Judgment is.
You Don’t Have an AI Strategy Problem. You Have 6 Design Decisions to Make. — MC Dean. Reframes AI strategy as a design problem: agency boundaries, failure tolerance, human overrides, feedback loops, what to refuse to automate. Sharp and specific.
Organizational Transformation (Beyond Design)
Building Next-Horizon AI-Native Experiences — McKinsey & Company. McKinsey argues the AI adoption problem is experiential, not technical. Proposes AI-native design patterns for embedding human judgment into AI interaction models. Organizations need to create with clarity, bring depth to workflows, and build for cocreation. References McKinsey’s 2018 Business Value of Design report and argues those principles have to evolve for the AI era.
The Design of AI in 2026: Strategy, Power Shifts, and the Cost of Pretending You Understand AI — Arpy Dragffy Guerrero & Brittany Hobbs, Product Impact Pod. Synthesis of 49 podcast episodes on AI and design strategy. The central observation: the gap in 2026 is between organizations that restructured how they create value and those that simply layered AI onto existing products. Includes a warning worth sitting with: organizations are “deleting the pipeline that produces senior judgment” in their pursuit of short-term efficiency.
Strategy Summit 2026: Why AI Means Radical Change — Tsedal Neeley, HBR IdeaCast. Harvard Business School professor Tsedal Neeley presents the “30% rule”: every worker needs baseline AI fluency, not just technical teams, and not expertise but baseline capability. AI requires radical organizational change, not just tool adoption. Covers three vectors of AI value and treats fluency as a culture problem requiring minimum technology and change capability thresholds.
Strategy Summit 2026: Inventive Strategy and the ‘Unbossed’ Organization — Rita McGrath, HBR IdeaCast. Columbia Business School professor Rita McGrath argues organizational design is increasingly enmeshed with strategy. Hierarchical structures built for mass production are becoming obsolete. Advocates for “unbossed” organizations where people experiment freely with AI, using the electricity analogy: you’d give people tools to experiment with electricity, not create an “electricity strategy.”
Strategy Summit 2026: Why AI Transformation Needs a Human Touch — Nigel Vaz, HBR IdeaCast. Publicis Sapient CEO Nigel Vaz on why enterprise AI initiatives fail: incentives, talent strategies, and trust aren’t factored in. The argument: the very processes organizations optimized for past success are what limit their ability to get value from AI, particularly around people, context, and how goals are set.
Is the Org Chart Dead in the Age of AI? — Fortune. LinkedIn’s Aneesh Raman argues org charts are holding back innovation and that companies need worker-led AI experimentation cutting across departments. LinkedIn replaced its Associate PM program with an “Associate Product Builder” role that merges coding, design, and PM skills. A concrete case study of role boundaries dissolving in practice at scale.
Top Leadership Experts Sound the Alarm: Bosses Are Choosing Tech Over People — Fortune. Deloitte data: 93% of AI budgets go to IT, only 7% to designing how humans and AI work together. Lara Abrash, Chair of Deloitte U.S., says that ratio “is not the right level of effort.” Harvard’s Linda Hill argues AI demands a whole new style of leadership. Strong data point for making the case that the human side of AI transformation is structurally underfunded.
The Last Mile Problem Slowing AI Transformation — Karim R. Lakhani, Jared Spataro & Jen Stave, Harvard Business Review. Identifies seven organizational frictions slowing AI ROI: proliferation of pilots, productivity gaps, process debt, identity and tribal knowledge problems, agentic governance, architectural complexity, and the efficiency trap. Frames this as the point where technical capability has to meet organizational design. The seven frictions map directly to challenges design organizations face during AI transformation.
Six Shifts to Build the Agentic Organization of the Future — McKinsey. Not design-specific, but directly transferable. Maps the organizational shifts needed for AI-native transformation: leaner structures, human-plus-agent teams, new roles. If you’re a design leader making the case for restructuring, McKinsey’s framing gives you executive-level language.
Rebuilding Intercom for the AI Era — Des Traynor, Pigment Perspectives Podcast. Intercom scrapped their roadmap within 72 hours of ChatGPT launching and completely rebuilt their product. Traynor’s framework of “delay versus dilution” names the two failure modes companies fall into during a pivot. A masterclass in strategic agility.
The Capability Maturity Model for AI in Design — Jakob Nielsen, UX Tigers. Six levels of AI design maturity, from basic tool use to autonomous UI generation. Level 6 envisions designers as architects of systems, defining constraints and guardrails while AI generates interfaces. A useful diagnostic for figuring out where your organization actually stands.
From Managing People to Managing AI — Julie Zhuo, Lenny’s Podcast. Zhuo argues the three core skills of managing people translate directly to managing AI agents. Key listening for any design leader thinking about what their role becomes as AI takes on more of the team’s execution.
Applying This in Practice
[Coming soon]. The sources above are mostly about what’s changing and why. This section will focus on the what-to-do-about-it: practical resources for IC designers and design leaders looking to apply AI-native thinking in their day-to-day work.
How I Use This List
I don’t expect anyone to read all of this. If you’re a design leader trying to figure out where to start, here’s how I’d prioritize:
For the big picture, read the NN/g State of UX 2026 and the State of AI in Design 2025. Those two together cover the whole landscape.
If your immediate problem is design systems, start with the Storybook MCP piece and the Morales Achiardi benchmarks. They’ll give you the evidence you need to make the investment case.
If your immediate problem is team adoption, read the Shopify piece from First Round and the MIT Technology Review research on psychological safety. Culture first, tools second.
If you’re trying to articulate why design matters more (not less) in this moment, the NN/g Future-Proof Designer piece and Mehta’s craft-to-judgment argument are the strongest ammunition I’ve found.
This list will grow as the series continues. I’ll update it with each new article.
Leslie Sultani is a design leader writing about the intersection of AI, design practice, and organizational change. Former CPO, UX engineer, and founder of a FinTech AI platform. The AI-Native Design Series is published on Substack, LinkedIn, and Medium.

