<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AI Native Design Series - Leslie Sultani]]></title><description><![CDATA[How design and organizations change when AI becomes part of how product gets built.]]></description><link>https://www.ainativedesignseries.com</link><image><url>https://substackcdn.com/image/fetch/$s_!WGwJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F047240ad-410e-429c-a18f-944345dd91a3_600x600.png</url><title>AI Native Design Series - Leslie Sultani</title><link>https://www.ainativedesignseries.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 13 May 2026 18:39:19 GMT</lastBuildDate><atom:link href="https://www.ainativedesignseries.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Leslie Sultani]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[lesliesultani@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[lesliesultani@substack.com]]></itunes:email><itunes:name><![CDATA[Leslie Sultani]]></itunes:name></itunes:owner><itunes:author><![CDATA[Leslie Sultani]]></itunes:author><googleplay:owner><![CDATA[lesliesultani@substack.com]]></googleplay:owner><googleplay:email><![CDATA[lesliesultani@substack.com]]></googleplay:email><googleplay:author><![CDATA[Leslie Sultani]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[When AI Decides and Human Signs Off]]></title><description><![CDATA[The design problem most AI companies aren't solving]]></description><link>https://www.ainativedesignseries.com/p/when-ai-decides-and-human-signs-off</link><guid isPermaLink="false">https://www.ainativedesignseries.com/p/when-ai-decides-and-human-signs-off</guid><dc:creator><![CDATA[Leslie Sultani]]></dc:creator><pubDate>Thu, 30 Apr 2026 05:32:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rNXn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rNXn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rNXn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png 424w, https://substackcdn.com/image/fetch/$s_!rNXn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png 848w, https://substackcdn.com/image/fetch/$s_!rNXn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png 
1272w, https://substackcdn.com/image/fetch/$s_!rNXn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rNXn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png" width="1200" height="628" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:628,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:36346,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://lesliesultani.substack.com/i/195957178?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rNXn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png 424w, https://substackcdn.com/image/fetch/$s_!rNXn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png 848w, https://substackcdn.com/image/fetch/$s_!rNXn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png 1272w, https://substackcdn.com/image/fetch/$s_!rNXn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb41c87d8-e4a7-4817-9bc7-0c694fc82593_1200x628.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There's a design principle underneath every high-stakes AI product: AI is the decision support. The human is the decision maker. Those are different jobs. The AI surfaces information, surfaces risk, surfaces patterns a person couldn&#8217;t find alone. The human takes that and decides what to do. That is the contract: the AI provides the evidence, but the human owns the decision.</p><p>Most AI products in legal, healthcare, and criminal justice contexts are violating that contract by design. Not maliciously. They just weren&#8217;t built around it. They were built around making AI output look good and feel fast, and then a human approval step was added. The result is a product that looks like decision support but functions like liability transfer. The AI gets the credit. The human gets the exposure.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ainativedesignseries.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The gap between those two things is where people get hurt, sanctioned, and sued.</p><div><hr></div><h3><strong>What the Contract Actually Requires</strong></h3><p>When I say AI is decision support and the human is the decision maker, I mean something specific.</p><p>Decision support means the AI&#8217;s job is to make the human&#8217;s judgment better: surfacing what they couldn&#8217;t see alone, flagging what they might miss, organizing what would otherwise take days into something they can actually work with. The human&#8217;s job is to take all of that, add the context and judgment and experience the AI doesn&#8217;t have, and make a call they can stand behind.</p><p>For that to work, the human has to be in a real position to evaluate what they&#8217;re looking at. Not technically present. Not nominally responsible. Actually equipped to engage with the evidence, form a view, and own the outcome.</p><p>That&#8217;s not what most products are designed to produce. I&#8217;ve been watching how AI-assisted decision workflows get built, and the pattern is consistent: the product team builds a good model, the UX team builds a clean interface for the output, somewhere in the spec there&#8217;s a review step, usually a button, sometimes a confirmation modal. Legal signs off because a human is technically approving each action. 
The product ships.</p><p>What doesn&#8217;t get designed: what the human actually needs to evaluate what they&#8217;re looking at.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!YEVD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80152816-30b8-4387-8416-6ef89cb595a7_2400x996.png" width="1456" height="604" alt="Diagram with two columns separated by a vertical line. Left column: AI &#8212; Decision Support. Three items: surfaces what the human couldn't find alone, flags what they might miss, organizes what would take days into something workable. Right column: Human &#8212; Decision Maker. Three items: applies context and judgment the AI doesn't have, owns the outcome, signs the record. Headline reads: Two jobs. Most AI products treat them as one."><figcaption class="image-caption">Most high-stakes AI products have a design principle buried inside them: AI surfaces information, a human decides what to do with it. Those are different jobs. Most products are designed as if they're the same one.</figcaption></figure></div><div><hr></div><h3><strong>How the Contract Gets Broken</strong></h3><p>The research on what happens next is not ambiguous.</p><p>When an authoritative-looking recommendation is on the screen, people defer to it, especially under time pressure and mental strain. Kate Goddard and colleagues identified this in clinical decision support in 2012 and the finding has replicated across domains ever since. The phenomenon has a name: <strong>automation bias</strong>. It&#8217;s the tendency to accept a system&#8217;s output rather than interrogate it, not out of carelessness but because the design routes around judgment rather than engaging it.</p><p>The 2023 JAMA study put this directly to the test. Randomized, 457 clinicians across 13 states.
Standard AI predictions improved diagnostic accuracy by 4.4 percentage points. Systematically biased AI predictions reduced it. And the explanations, image-based saliency maps (visual heatmaps) showing why the AI flagged what it flagged, didn&#8217;t help. When the model was wrong, showing clinicians the reasoning behind that wrong answer didn&#8217;t protect them from agreeing with it anyway.</p><p>The explanation failed because the human was no longer engaged in evaluating the case. They were evaluating whether to trust the system. Those are different cognitive tasks, and the design had already answered the second one by the time the explanation appeared.</p><p>A 2023 study titled &#8220;<strong><a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0298037">Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy</a></strong>&#8221; found something worse: introducing a human reviewer actually increased how often people followed the AI&#8217;s recommendation, because the presence of a human made participants feel the decision had already been vetted. The human wasn&#8217;t catching errors. The human was providing cover.</p><p>Zana Bu&#231;inca and Krzysztof Gajos at Harvard tested whether forcing users to form their own view before seeing the AI&#8217;s answer would change this. It did. Requiring a prior commitment reduced how often people followed the AI even when it was wrong. The participants hated those designs. They gave them worse ratings even as they made better decisions with them. People do not enjoy being made to think, particularly when the screen is offering them a shortcut. Frictionless feels like good UX. In high-stakes decisions, frictionless is often the failure mode.</p>
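<p>What that looks like in an interface can be sketched directly. The TypeScript below is a hypothetical illustration: the names and shapes are invented, not drawn from any product above, but the mechanism is the one Bu&#231;inca and Gajos tested, the reviewer&#8217;s own view gets recorded before the recommendation is allowed to render.</p><pre><code>// Hypothetical commit-first review flow. Illustrative only: these names
// and shapes are not any product's actual API.
type Assessment = "approve" | "reject" | "escalate";

interface ReviewState {
  humanView: Assessment | null; // committed before the AI output is shown
  aiView: Assessment | null;    // stays hidden until the human commits
  disagreement: boolean;
}

// Recording the human's view first is the cognitive forcing function:
// the recommendation cannot anchor a judgment that already exists.
function commitHumanView(state: ReviewState, view: Assessment): ReviewState {
  return { ...state, humanView: view };
}

function revealAiView(state: ReviewState, aiView: Assessment): ReviewState {
  if (state.humanView === null) {
    // Refusing to render the recommendation is the design choice.
    throw new Error("AI output stays hidden until the reviewer commits a view");
  }
  return { ...state, aiView, disagreement: aiView !== state.humanView };
}</code></pre><p>The <code>disagreement</code> flag is the useful byproduct: it records exactly where the human and the model diverged, which is the part of the trail a later reviewer actually wants.</p>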
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/efe8f4fe-cb55-4757-890e-1c9492950ca9_744x343.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:343,&quot;width&quot;:744,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Two-row comparison diagram on a dark background titled 'How the contract gets broken.' Top row labeled 'WHAT THE PRODUCT RECORDS' shows three steps across three columns: 01 AI Output (recommendation surfaced, confidence score displayed), 02 Human Approves (click recorded, timestamp logged), 03 Official Record ('a human reviewed and approved this'). A red divider label reads 'VS. WHAT ACTUALLY HAPPENED.' Bottom row shows: 01 AI Output (recommendation on screen, human under time pressure), 02 Automation Bias (human defers, design offered a shortcut), 03 Liability Transfers in red text (human is named, not equipped to decide).&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Two-row comparison diagram on a dark background titled 'How the contract gets broken.' Top row labeled 'WHAT THE PRODUCT RECORDS' shows three steps across three columns: 01 AI Output (recommendation surfaced, confidence score displayed), 02 Human Approves (click recorded, timestamp logged), 03 Official Record ('a human reviewed and approved this'). A red divider label reads 'VS. WHAT ACTUALLY HAPPENED.' Bottom row shows: 01 AI Output (recommendation on screen, human under time pressure), 02 Automation Bias (human defers, design offered a shortcut), 03 Liability Transfers in red text (human is named, not equipped to decide)." title="Two-row comparison diagram on a dark background titled 'How the contract gets broken.' Top row labeled 'WHAT THE PRODUCT RECORDS' shows three steps across three columns: 01 AI Output (recommendation surfaced, confidence score displayed), 02 Human Approves (click recorded, timestamp logged), 03 Official Record ('a human reviewed and approved this'). A red divider label reads 'VS. WHAT ACTUALLY HAPPENED.' Bottom row shows: 01 AI Output (recommendation on screen, human under time pressure), 02 Automation Bias (human defers, design offered a shortcut), 03 Liability Transfers in red text (human is named, not equipped to decide)." 
srcset="https://substackcdn.com/image/fetch/$s_!gsdx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe8f4fe-cb55-4757-890e-1c9492950ca9_744x343.png 424w, https://substackcdn.com/image/fetch/$s_!gsdx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe8f4fe-cb55-4757-890e-1c9492950ca9_744x343.png 848w, https://substackcdn.com/image/fetch/$s_!gsdx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe8f4fe-cb55-4757-890e-1c9492950ca9_744x343.png 1272w, https://substackcdn.com/image/fetch/$s_!gsdx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe8f4fe-cb55-4757-890e-1c9492950ca9_744x343.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Two versions of every AI-assisted decision in a high-stakes product: what the official record shows, and what actually happened. The gap between them is where accountability gets transferred without judgment.</figcaption></figure></div><div><hr></div><h3><strong>Four Cases Where the Contract Broke</strong></h3><p>The broken contract isn&#8217;t theoretical. Each of the following cases involves a domain where someone was designated the decision maker. Each shows what it costs when the product wasn&#8217;t built around what that role actually required.</p><p><strong>Legal.</strong> In 2023, a New York attorney named Steven Schwartz submitted a brief citing six cases that ChatGPT had fabricated. &#8220;Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)&#8221; does not exist. Schwartz later said he had been operating under the belief that the tool &#8220;could not possibly be fabricating cases on its own.&#8221; The court imposed $5,000 in sanctions and required him to mail copies of the order to every real judge whose name appeared on a fake opinion. That was the beginning. 
By late 2025, researcher Damien Charlotin had tracked 1,356 documented incidents of AI hallucinations in legal filings, with sanctions in individual cases reaching $30,000. In California, at least one court has begun suggesting that opposing counsel may have a duty to detect the other side&#8217;s AI-generated fakes. The lawyer was the decision maker. The design gave them no way to verify the input they were deciding on.</p><p><strong>Healthcare.</strong> MD Anderson spent three years and more than $62 million on IBM&#8217;s Watson for Oncology before an internal audit revealed the product couldn&#8217;t sync with Epic, was running on outdated drug protocols, and was producing treatment recommendations that weren&#8217;t based on current evidence. A physician at Jupiter Hospital described the product to IBM leadership in terms that don&#8217;t belong in a published article. Watson Health was sold to private equity in 2022. Separately: Epic&#8217;s sepsis prediction tool was deployed across hundreds of US hospitals on vendor-claimed performance numbers that looked credible. When Michigan Medicine researchers ran their own external validation, they found the model missed 67% of sepsis cases at the recommended threshold, generated alerts on 18% of all hospitalized patients, and correctly flagged only 7% of the cases clinicians had missed. Hundreds of clinicians were nominally the decision makers on sepsis while relying on a tool whose real-world performance they had no way to interrogate.</p><p><strong>Autonomous systems.</strong> In August 2025, a Florida jury found Tesla one-third liable for a fatal crash involving a Model S in Autopilot mode. The driver admitted fault. Tesla argued no system in existence could have prevented this crash. The jury disagreed, finding that the way Autopilot had been marketed and designed shaped how drivers actually used it, and that designing a system in a way that trains users to trust it more than the technology warrants is itself a contributing factor. For years the standard answer to &#8220;who is responsible when Autopilot is engaged?&#8221; was &#8220;the driver; the system requires constant supervision.&#8221; The jury pushed back. When a product&#8217;s design trains the human to hand over judgment they were supposed to keep, the product shares the outcome.</p><p><strong>Criminal justice.</strong> Eric Loomis was sentenced in Wisconsin in 2013 partly on the basis of a high-risk score from COMPAS, a proprietary recidivism prediction tool. The algorithm&#8217;s inputs and logic were not disclosed. Loomis couldn&#8217;t examine what the model had used to produce the score or challenge it on those grounds. The Wisconsin Supreme Court ruled in <em>State v. Loomis</em> that judges could continue using COMPAS so long as it wasn&#8217;t the &#8220;sole basis&#8221; for sentencing. The problem with that standard is a well-documented bias called anchoring: once an authoritative-looking number is in the room, it shapes decisions even when the decision maker believes they&#8217;re reasoning independently of it. ProPublica&#8217;s 2016 investigation found COMPAS was nearly twice as likely to falsely flag Black defendants as future offenders compared to white defendants. The judge was the decision maker. The design put an unexaminable number on the page and then assumed the human could reason around it.</p><p>The domains are different.
The pattern is the same: someone was designated the decision maker, and the product wasn&#8217;t designed around what that role actually required.</p><div><hr></div><h3><strong>Why the Contract Keeps Breaking</strong></h3><p>Automation bias explains the immediate failure. There&#8217;s a slower one that compounds it.</p><p>In aviation, the FAA spent years documenting what happened to pilots who relied heavily on autopilot: their ability to fly manually deteriorated. The 2013 Asiana 214 crash at San Francisco International, where investigators found the crew overly reliant on automation and lacking proficiency in manual flight at low speed, led the FAA to issue guidance directing pilots to hand-fly more often during low-workload phases. The goal was to preserve the skills they would need precisely when the automation failed.</p><p>Medicine is running the same experiment now. A multicenter randomized trial of colonoscopy with AI polyp detection found that when endoscopists returned to non-AI procedures after sustained AI use, their adenoma detection rate dropped from 28.4% to 22.4%. The tool was making them better at colonoscopy while they used it and measurably worse when they didn&#8217;t. In radiology, giving radiologists incorrect AI suggestions increased their false-positive recalls by up to 12%, even when they were explicitly told the AI might be wrong.</p><p>A product that removes the hard parts of a job, the reading, the independent reasoning, the forming of a view before consulting any reference, is also removing the experience that builds the judgment the decision maker is supposed to bring. The contract assumes a human with the expertise to evaluate AI support. Over time, a badly designed product dulls the expertise it depends on. That&#8217;s not a side effect. It&#8217;s a design choice, made by default.</p><div><hr></div><h3><strong>What the Regulation Is Encoding</strong></h3><p>Regulators have started writing the contract into law.</p><p>The EU AI Act&#8217;s Article 14 on human oversight requires that high-risk AI systems be designed so that overseeing humans can &#8220;remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system.&#8221; The regulation uses the term &#8220;automation bias&#8221; explicitly. That&#8217;s a legislative acknowledgment that putting a human in front of an approval button is not the same as designing a product that enables a human to actually make a decision. High-risk system obligations under Article 14 take effect August 2, 2026, meaning both providers and deployers must demonstrate that oversight is substantive: that the human assigned to approve is actually in a position to decide, not just named on the audit trail.</p><p>The DoD Directive on autonomous weapons made a related move. The directive doesn&#8217;t require a human at every engagement decision. What it requires is &#8220;appropriate levels of human judgment,&#8221; phrasing chosen specifically to distinguish substantive decision-making from a person whose presence is technically logged.</p><p>In financial services, audit trail requirements from NIST, the federal standards body, the SEC, and banking regulators are moving toward the same standard: every AI-influenced decision must be reconstructable. Not just &#8220;a human approved this,&#8221; but what data the model used, what was missing, what the human considered, and what their stated rationale was. The record has to answer the question a regulator will ask in an enforcement action two years from now.</p>
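<p>It is worth being concrete about what &#8220;reconstructable&#8221; implies. The sketch below is a hypothetical TypeScript shape, the field names are mine and not any regulator&#8217;s schema, but each field maps to one of the questions that standard is converging on.</p><pre><code>// Hypothetical shape for a reconstructable AI-assisted decision record.
// Field names are illustrative; no regulator mandates this exact schema.
interface DecisionRecord {
  decisionId: string;
  modelVersion: string;          // which model produced the recommendation
  inputsUsed: string[];          // what data the model actually saw
  inputsMissing: string[];       // what was unavailable or excluded
  aiRecommendation: string;
  aiUncertainty: number;         // calibrated probability, 0 to 1
  evidenceViewed: string[];      // source documents the human opened
  humanPriorView: string | null; // view committed before seeing the AI
  humanDecision: string;
  humanRationale: string;        // stated reasoning, required and non-empty
  decidedBy: string;
  decidedAt: string;             // ISO 8601 timestamp
}</code></pre>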
<p>Most products aren&#8217;t built to produce that record. They&#8217;re built to produce a timestamp and a user ID. That&#8217;s not a decision trail. That&#8217;s evidence that someone was present.</p><div><hr></div><h3><strong>What Honoring the Contract Actually Requires</strong></h3><p>Here&#8217;s the question that follows from the design principle. If AI is decision support and the human is the decision maker, what does the human actually need to do that job?</p><p>It&#8217;s not a confidence score. It&#8217;s not a saliency map. It&#8217;s not a summary paragraph the AI generated about the same analysis it just ran.</p><p>When I founded and built an AI-powered fintech platform for early-stage investment due diligence, this was the central design question. The product surfaced financials, risk signals, and thesis fit for investors making real capital allocation decisions. We had the technical capability to let the AI recommend whether to invest. We chose not to. The AI surfaced insights, showed investors what the numbers actually meant, flagged the risks, and gave them everything they needed to form a view. It never said &#8220;this is a good investment.&#8221; That was deliberate. In high-stakes financial decisions, if the investor doesn&#8217;t understand why they&#8217;re making a call, the tool has failed even if the answer is correct. I delayed certain AI features specifically to ensure outputs were explainable and trust-building before shipping, not because we couldn&#8217;t build them faster, but because the human&#8217;s judgment had to stay in the room.</p><p>That principle generalizes. Here&#8217;s what it actually looks like in product decisions.</p><p>The human needs the underlying evidence, not just the conclusion. The contract clause they can click into, the source document passage they can read, the comparable transaction they can verify. Harvey AI&#8217;s Vault, Kira Systems&#8217; source-anchored smart fields, Spellbook&#8217;s redline review where every AI suggestion requires an explicit accept or reject: these products put the source one click from the finding. The summary is faster. The source is what lets the human actually stand behind the call.</p><p>The human needs to form a view before the AI shows its answer, at least in cases that warrant it. Committing to a position first is what preserves independent reasoning. That&#8217;s an uncomfortable design choice. It makes the workflow slower and the experience harder. It also produces better decisions, and for products in high-stakes domains, that tradeoff needs to be named explicitly rather than defaulted away because usability testing rewards frictionless.</p><p>The human needs uncertainty expressed in a form they can reason with. Most products display abstract confidence scores like &#8220;12% confidence&#8221; or &#8220;high reliability,&#8221; which the brain often processes as a static grade to be ignored. Research by Cao, Liu, and Huang found that calibrated uncertainty only improves reliance behavior when expressed as a frequency. Telling a clinician that &#8220;in 100 patients like this one, 12 would have this condition&#8221; forces a mental simulation of real-world outcomes. It transforms a percentage into a scenario that actually demands human judgment.</p>
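<p>The transformation is small enough to show directly. This is a hypothetical helper, the function name and wording are invented, but it is the move the research describes: the same calibrated probability, rendered as a natural frequency over a reference class instead of an abstract score.</p><pre><code>// Hypothetical helper: render a calibrated probability as a natural
// frequency rather than a percentage. Wording is illustrative.
function asFrequency(probability: number, referenceClass: string): string {
  const outOf100 = Math.round(probability * 100);
  return "In 100 " + referenceClass + " like this one, about " +
    outOf100 + " would have this condition.";
}

// "12% confidence" becomes a scenario the reviewer has to simulate:
console.log(asFrequency(0.12, "patients"));
// -> In 100 patients like this one, about 12 would have this condition.</code></pre>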
<p>The human needs their reasoning on the record, not just their approval. &#8220;Approved&#8221; is not a decision trail. &#8220;Approved because the risk clause in section 4.2 is standard for this counterparty type, flagged for legal review&#8221; is. In domains where the decision will be reviewed later, the product needs to capture what the human actually thought at the moment of decision, as part of the UX itself, not as a compliance mechanism bolted on afterward.</p><p>The human needs to keep the underlying skill. If the workflow removes the work that builds judgment, the reading, the independent assessment, the forming of a view from evidence, it also removes what makes the decision maker&#8217;s role meaningful. The contract requires an expert. A product that deskills its users is quietly canceling the contract it depends on.</p><p>The fastest test for whether a product is built this way is simple. Find the approval step. Ask what the user would need to know if they had to explain that decision in a deposition a year from now. If the answer isn&#8217;t already on the screen, it&#8217;s not decision support. It&#8217;s a paper trail.</p><p>While courtroom sanctions and medical audits leave a visible trail of a broken contract, the highest stakes show up where there is no trail at all. Consider an offline navigation app used by hunters or search and rescue teams. You are deep in remote terrain with no signal, limited battery, and changing conditions, following a route the system suggested twenty minutes ago that you cannot verify. At that point, you are not clicking approve; you are betting your safety on it. There is no source document to inspect, no second system to cross-check, no fallback. The product is not supporting the decision; it is the only evidence you have. That is where the contract is most fragile, because if the system is wrong, the user does not just lose confidence, they lose time, options, and in some cases their safety. Trust here is not a UX layer; it is a survival mechanism, and the design either acts as a lifeline or quietly becomes the trap.</p><div><hr></div><h3><strong>The AI-Native Reframe</strong></h3><p>AI-augmented product design asks: how do we display the model&#8217;s output clearly?</p><p>AI-Native design asks: what does this specific human need, in this specific moment, to make this decision and defend it later?</p><p>Those aren&#8217;t the same question. The first produces good output display. The second produces something that actually honors the contract.</p><p>The products that have gotten this right share a few characteristics. They&#8217;re slower in ways that feel purposeful rather than broken. They put the source one click from every claim. They require the human to engage with the evidence rather than routing around them with a summary. They record what the human actually decided and why, not just whether they clicked. They were designed around the moment someone has to answer for a decision, not just the moment they make one.</p><p>The broken-contract pattern holds until it doesn&#8217;t. It breaks when a sanctions order names the lawyer who approved something they couldn&#8217;t verify. It breaks when a malpractice case establishes that a clinician who processed 200 AI alerts in an hour wasn&#8217;t meaningfully deciding anything. It breaks when a board inquiry asks for the decision trail and finds timestamps with no rationale attached.</p><p>The teams building in high-stakes domains right now are operating as if the appearance of oversight is enough, or as if the law will move slowly enough to allow a course correction later.
The empirical record, the regulatory calendar, and the case law are all moving in the same direction. What they&#8217;re moving toward is a standard that asks a simple question: was the human actually in a position to decide?</p><p>Most current products aren&#8217;t designed to answer yes.</p><div><hr></div><p><em>Leslie Sultani is a design leader and player-coach writing about the intersection of AI, design practice, and organizational change. Former CPO, UX engineer, and founder of a FinTech AI platform. Read the full AI-Native Design Series at <strong><a href="https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7451839239010287616">LinkedIn</a></strong>, <strong><a href="https://lesliesultani.substack.com/">Substack</a></strong> or <strong><a href="https://medium.com/@lesliesultani">Medium</a></strong>.</em></p><div><hr></div><h3><strong>Further Reading</strong></h3><ul><li><p><strong><a href="https://arxiv.org/abs/2102.09692?utm_source=leslie_sultani">&#8220;To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-Assisted Decision-Making&#8221;</a></strong> &#8212; Zana Bu&#231;inca, Maja Barbara Malaya, and Krzysztof Z. Gajos, ACM CHI 2021. The foundational experiment on cognitive forcing functions and why people rate worst the designs that produce their best decisions.</p></li><li><p><strong><a href="https://jamanetwork.com/journals/jama/fullarticle/2812908/?utm_source=leslie_sultani">&#8220;Measuring the Impact of AI in the Diagnosis of Hospitalized Patients&#8221;</a></strong> &#8212; Sarah Jabbour et al., JAMA 2023. Randomized trial showing that AI explanations don&#8217;t protect clinicians from systematically biased models. The accompanying editorial by Khera, Simon, and Ross is worth reading alongside it.</p></li><li><p><strong><a href="https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/?utm_source=leslie_sultani">Mata v. Avianca, Inc., S.D.N.Y. 2023</a></strong> &#8212; The sanctions order in the ChatGPT hallucination case that started a wave of legal AI scrutiny.</p></li><li><p><strong><a href="https://artificialintelligenceact.eu/article/14/?utm_source=leslie_sultani">&#8220;Article 14: Human Oversight&#8221;</a></strong> &#8212; EU Artificial Intelligence Act. The regulatory text that names automation bias explicitly and requires products to be designed against it.</p></li><li><p><strong><a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing?utm_source=leslie_sultani">&#8220;Machine Bias&#8221;</a></strong> &#8212; Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica 2016. The COMPAS investigation. The question it raised about who a system is actually serving hasn&#8217;t been resolved.</p></li><li><p><strong><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC3240751/?utm_source=leslie_sultani">&#8220;Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators&#8221;</a></strong> &#8212; Kate Goddard, Abdul Roudsari, and Jeremy Wyatt, JAMIA 2012. The foundational systematic review. Still the clearest account of how the phenomenon works and what design moves actually mitigate it.</p></li><li><p><strong><a href="https://www.damiencharlotin.com/hallucinations/?utm_source=leslie_sultani">AI Hallucination Cases Database</a></strong> &#8212; Damien Charlotin. A live, continuously updated tracker of documented incidents where courts have found parties relied on AI-hallucinated content in legal filings. 
The most comprehensive public record of how the legal case count is growing.</p></li><li><p><strong><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC8218233/?utm_source=leslie_sultani">&#8220;External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients&#8221;</a></strong> &#8212; Andrew Wong, Karandeep Singh et al., JAMA Internal Medicine 2021. The Michigan Medicine external validation of Epic&#8217;s sepsis model. The source for the figures cited in this article: the 67% miss rate, the 18% alert burden, and the 7% of clinician-missed cases flagged.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[What AI Did to the Design Process]]></title><description><![CDATA[Every few weeks, someone declares the design process dead.]]></description><link>https://www.ainativedesignseries.com/p/what-ai-did-to-the-design-process</link><guid isPermaLink="false">https://www.ainativedesignseries.com/p/what-ai-did-to-the-design-process</guid><dc:creator><![CDATA[Leslie Sultani]]></dc:creator><pubDate>Mon, 20 Apr 2026 13:08:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!h7YV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9908-9934-4a94-868e-56f07899bec6_1600x1000.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!h7YV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9908-9934-4a94-868e-56f07899bec6_1600x1000.png" width="1456" height="910" alt=""><figcaption class="image-caption">The three shifts rebuilding the design process, happening at once in the teams moving
fastest.</figcaption></figure></div><p>Every few weeks, someone declares the design process dead.</p><p>The most visible recent version came from Jenny Wen, head of design for Claude at Anthropic. She wrote that the classic discover, diverge, converge sequence cannot survive a world where engineers spin up multiple AI coding agents and ship working versions before designers finish exploring. A week later, Sarah Gibbons at NN/g published the counter: the process isn&#8217;t dead, it&#8217;s compressed. The same discovery, distillation, and refinement still happens, just in an afternoon instead of a month.</p><p>They&#8217;re both right. And they&#8217;re both missing something.</p><p>The design process didn&#8217;t die, and it didn&#8217;t just get faster. It got rebuilt around what AI is actually good at and what it isn&#8217;t. Three specific things have changed, and all three are happening at once in the teams that are moving fastest. Most teams aren&#8217;t noticing all three. That&#8217;s why their AI investment keeps producing output without producing progress.</p><div><hr></div><h3><strong>The First Shift: Compression</strong></h3>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6c2468e-0fa0-4bc4-9e7c-a02b8be1cd6b_1488x930.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:910,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Three rows of horizontal bars. Row A, the old process, four equal black segments labeled research, design, prototype, test. Row B, compressed only, shows the same four phases shrunken into the first quarter of the bar with empty space for the rest. Row C, compressed and reinvested, shows the compressed phases plus three red segments labeled problem framing, user observation, and judgment on AI output.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Three rows of horizontal bars. Row A, the old process, four equal black segments labeled research, design, prototype, test. Row B, compressed only, shows the same four phases shrunken into the first quarter of the bar with empty space for the rest. Row C, compressed and reinvested, shows the compressed phases plus three red segments labeled problem framing, user observation, and judgment on AI output." title="Three rows of horizontal bars. Row A, the old process, four equal black segments labeled research, design, prototype, test. Row B, compressed only, shows the same four phases shrunken into the first quarter of the bar with empty space for the rest. Row C, compressed and reinvested, shows the compressed phases plus three red segments labeled problem framing, user observation, and judgment on AI output." 
srcset="https://substackcdn.com/image/fetch/$s_!RXY1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c2468e-0fa0-4bc4-9e7c-a02b8be1cd6b_1488x930.png 424w, https://substackcdn.com/image/fetch/$s_!RXY1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c2468e-0fa0-4bc4-9e7c-a02b8be1cd6b_1488x930.png 848w, https://substackcdn.com/image/fetch/$s_!RXY1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c2468e-0fa0-4bc4-9e7c-a02b8be1cd6b_1488x930.png 1272w, https://substackcdn.com/image/fetch/$s_!RXY1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c2468e-0fa0-4bc4-9e7c-a02b8be1cd6b_1488x930.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The gap between Row B and Row C is where most AI investments stop returning.</figcaption></figure></div><p>The most obvious change is that the phases of the process have collapsed in time.</p><p>Processing a week of research used to take another week. It now takes an afternoon. Competitive analysis used to take two weeks. A well-constructed prompt can produce it in a morning. Wireframe exploration, first-draft copy, visual directions, information architecture: every phase that was primarily about generating or processing information has compressed by roughly an order of magnitude.</p><p>The temptation at this point is to run the old process faster. Do what you used to do, just compress the timeline. Most teams do this. It&#8217;s the first mistake.</p><p>What actually happens in organizations moving well is that the time compression gets reinvested. The hours you used to spend making sense of research become hours you spend on problem framing. The days you used to spend on prototyping become days you spend watching users interact with the prototype. The teams getting the most out of compression aren&#8217;t the ones running a two-week sprint in three days. 
They&#8217;re the ones running a two-week sprint in three days and using the time left over to do the parts that were always rushed.</p><p>That&#8217;s the thing most playbooks miss. Compression without reinvestment is just a speed-up. Teams that treat AI as a way to finish earlier end up producing the same quality of work in less time. Teams that treat AI as a way to spend more time on the parts that matter produce work of a different quality entirely.</p><div><hr></div><h3><strong>The Second Shift: Inversion</strong></h3><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Cy7U!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4475d644-2ca6-47f0-bce4-c5d404a3c14f_1488x930.png" width="1456" height="910" alt="Two rows of five connected boxes. Top row labeled traditional sequence: user, problem, solution, prototype, tech. The tech box on the right is red. Bottom row labeled AI-native sequence: capability, prototype, user, experience, refine. The capability box on the left is red."><figcaption class="image-caption">In AI-heavy products, the position of &#8216;technology&#8217; moves from last to first.</figcaption></figure></div><p>In certain kinds of products, especially AI-heavy ones, the sequence itself has flipped.</p><p>Henry Modisett, VP of Design at Perplexity, has described this directly. His team starts with capability exploration, not with user research. They build rough prototypes first, sometimes just command-line implementations, to see what the underlying model can actually do. Only then do they start thinking about the experience layer.</p><p>This reverses something that&#8217;s been structural to product design for decades. The sequence used to be: understand the user, define the problem, generate solutions, prototype, test. Technology came last, as a constraint or an enabler for the solution the team had designed. In AI-heavy products, this order doesn&#8217;t work, because the capability space itself is changing faster than any user research can track. You can&#8217;t design an experience around what an AI can do without first knowing what an AI can do, and that knowledge is only available through prototyping.</p>
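<p>To make that concrete, here is a minimal sketch of what a &#8220;command-line implementation&#8221; for capability probing might look like. It is an illustration, not anything Perplexity has published: the endpoint, the request shape, and the probe tasks are all assumptions, standing in for whatever model and tasks a team is actually exploring.</p><pre><code>// capability-probe.ts -- a throwaway command-line probe, in the spirit of
// "rough prototypes first". Requires Node 18+ for built-in fetch.
// MODEL_URL and the request/response shape assume a generic
// chat-completions-style HTTP API; both are hypothetical stand-ins.
const MODEL_URL = process.env.MODEL_URL ?? "http://localhost:8080/v1/chat/completions";

// A few tasks that probe the same capability from different angles.
const probeTasks = [
  "Summarize this support thread in two sentences: ...",
  "Extract every date in this text as ISO 8601: ...",
  "Rewrite this error message for a non-technical reader: ...",
];

async function probe(task: string) {
  const res = await fetch(MODEL_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: task }] }),
  });
  const data = await res.json();
  // Chat-completions-style response; adjust for the provider being tested.
  return data.choices?.[0]?.message?.content ?? JSON.stringify(data);
}

async function main() {
  // Print raw outputs side by side; the point is to read them, not ship them.
  for (const task of probeTasks) {
    console.log("\n== " + task);
    console.log(await probe(task));
  }
}
main();
</code></pre><p>The output of a probe like this is not a prototype to ship. It is evidence about the capability space, read by a human who decides whether there is an experience worth designing on top of it.</p>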
<p>Karri Saarinen at Linear has made the same argument from a different angle. In &#8220;Design for the AI Age,&#8221; he writes that designers have to work forward to understand capabilities rather than backwards from user needs. He calls it &#8220;design is search.&#8221; The whole process is exploratory, not a waterfall. You&#8217;re searching through capability space and user need space at the same time, looking for the overlap. The prototype isn&#8217;t the end of the process. It&#8217;s the instrument you use to conduct the search.</p><p>The inversion doesn&#8217;t apply to every product. A team designing a checkout flow still starts with user needs. But any team building with AI at the core of the experience has to know what the AI can do before they can design how users will relate to it. Teams that don&#8217;t invert this sequence end up designing experiences for capabilities that don&#8217;t exist yet, or missing capabilities that would have changed the whole frame.</p><div><hr></div><h3><strong>The Third Shift: Parallelization</strong></h3><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!9gpm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F944c7490-0329-441e-ab58-56f125578e36_1488x930.png" width="1456" height="910" alt="Left side shows sequential process as five stacked boxes connected vertically: research, design, build, test, ship. Right side shows parallel process as five horizontal lanes running across a timeline from T0 to T1. Lanes for research, design, engineering, and testing show overlapping work segments. A fifth AI agent lane in red runs nearly continuously across the full timeline."><figcaption class="image-caption">The AI agent lane is active the entire time.
That changes what &#8216;handoff&#8217; means.</figcaption></figure></div><p>The old process was a sequence. Research, then design, then build, then test, then ship. Each phase gated the next.</p><p>What&#8217;s happening now in the most advanced teams is that these phases are running concurrently, across different people and different AI agents. A designer is prototyping while research is still coming in. An engineer is building a working version while the designer is still iterating on direction. Testing is happening continuously, not at the end. The handoff between phases, which used to be the critical moment of coordination, is disappearing in teams that have gone furthest with AI adoption.</p><p>Intercom is the clearest public example of what this looks like at scale. Their senior design leader Thom Rimmer has said that every designer at Intercom now has a development environment and ships to production. There&#8217;s no designer-to-engineer handoff in the old sense, because the designer is doing the work that used to be handed off. Tom Scott&#8217;s reporting on AI-native product designers in 2026 describes the full stack: an AI-first editor like Cursor, an AI coding agent, a coded design system, Figma Code Connect mapping components to code, and a GitHub review workflow shared with engineering. The process isn&#8217;t sequential anymore. It&#8217;s one continuous workflow with AI agents and humans operating on the same artifact at the same time.</p><p>The organizational implication of parallelization is bigger than people realize. If the sequential process is gone, then the traditional team structure that was built around it is also gone. The separate design, product, and engineering functions, each with their own artifacts and handoffs, were a coordination mechanism for a sequential process. When the sequence collapses, the separation does too. That is the part of the conversation that gets the most emotional and politically charged inside organizations, because it touches how people think about their roles. The operational evidence is clear in the teams that are furthest along.</p><div><hr></div><h3><strong>What Hasn&#8217;t Changed</strong></h3><p>The three shifts get most of the attention. The things that haven&#8217;t changed deserve more than they usually get.</p><p>Problem framing hasn&#8217;t changed. The moment where a team looks at everything they know and says &#8220;this is the right question to solve&#8221; is still a human moment. AI can produce research, generate options, surface patterns, and compare hypotheses. It cannot tell a team which problem is worth their organizational attention. The framing question requires strategic judgment that isn&#8217;t in any prompt, and it&#8217;s the part of the process that compresses least of all. If anything, good framing has become more important, because the downstream process can now execute faster on the wrong problem.</p><p>Real user empathy hasn&#8217;t changed either. Watching a person struggle with something you thought was obvious, hearing the words they use to describe what they&#8217;re seeing, noticing the moment they hesitate: these are things AI cannot do for the team. It can distill what users said in a research session. It cannot replace the embodied experience of having been in the room.
The teams that still put their designers in front of real users, not in front of AI summaries of real users, notice things the others miss.</p><p>Judgment about AI output is new enough that teams are still discovering how important it is. An AI-generated prototype can look polished and test poorly, because something about the interaction model is subtly wrong in a way that only a designer with deep product knowledge can catch. The skill of recognizing what looks right but isn&#8217;t is a human skill, and it gets harder as AI output gets more polished. Teams that over-trust AI output at this step are the ones producing what critics have started calling &#8220;AI slop&#8221;: work that looks fine at a glance and fails in use.</p><p>Responsibility for shipping hasn&#8217;t changed. Someone has to decide that the work is ready and own the consequences if it isn&#8217;t. No AI agent owns the consequences of a decision. The person shipping is still answerable for the quality of what ships, and in teams where the designer now ships directly to production, that responsibility has expanded rather than contracted.</p><div><hr></div><h3><strong>The Pattern Underneath</strong></h3><p>The design process didn&#8217;t die. It didn&#8217;t just get compressed either. What happened is more interesting than either framing.</p><p>The work got redistributed. Phases that used to be sequential now run concurrently. Activities that were once rare have become routine. The judgment calls designers made quietly in the past are now the central, visible work of the practice. The teams that have internalized this are structuring their practice around it. The teams that haven&#8217;t are running an old process faster and wondering why the speed isn&#8217;t turning into better outcomes.</p><p>Here&#8217;s the diagnostic question for any team that wants to know where they actually stand. Look at what your team spent time on last quarter. If the distribution looks similar to what it would have looked like three years ago, just with fewer hours in each bucket, you&#8217;re running the old process compressed. If it looks different, if more time has shifted toward problem framing and user observation and judgment on AI output, and less toward the production work that AI now does well, you&#8217;ve started the rebuild.</p>
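<p>The comparison can even be made mechanical. Below is a toy sketch of the same diagnostic as arithmetic; the buckets, hours, and threshold are hypothetical numbers, there only to show that &#8220;same mix, fewer hours&#8221; and &#8220;different mix&#8221; are distinguishable claims rather than impressions.</p><pre><code>// time-shift.ts -- toy version of the diagnostic. Buckets, hours, and the
// 0.15 threshold are hypothetical numbers for illustration.
type Hours = { [bucket: string]: number };

const threeYearsAgo: Hours = { research: 80, design: 120, prototype: 100, test: 60 };
const lastQuarter: Hours = {
  research: 30, design: 40, prototype: 30, test: 20,
  framing: 60, observation: 50, aiJudgment: 40,
};

// Convert hours to shares of the total, so fewer hours overall cancel out.
function shares(hours: Hours): Hours {
  let total = 0;
  for (const v of Object.values(hours)) total += v;
  const out: Hours = {};
  for (const [k, v] of Object.entries(hours)) out[k] = v / total;
  return out;
}

// Total variation distance between the two mixes:
// 0 means the identical mix, 1 means completely disjoint buckets.
function distributionShift(before: Hours, after: Hours): number {
  const a = shares(before);
  const b = shares(after);
  const keys = new Set(Object.keys(a).concat(Object.keys(b)));
  let d = 0;
  for (const k of keys) d += Math.abs((a[k] ?? 0) - (b[k] ?? 0));
  return d / 2;
}

// With these numbers the shift is about 0.56: the mix itself has changed.
const shift = distributionShift(threeYearsAgo, lastQuarter);
console.log(shift &gt; 0.15 ? "the rebuild has started" : "old process, compressed");
</code></pre>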
<p>Most teams are still running the old process compressed. A smaller number have started rebuilding. An even smaller number have rebuilt enough that the shape of their work would be unrecognizable to someone from the pre-AI era.</p><p>The design process didn&#8217;t die. Jenny Wen was right about that. It didn&#8217;t just get compressed, either. Sarah Gibbons was right about that. What happened is that the process stopped being a sequence of steps and became something closer to a set of simultaneous practices, distributed across humans and AI agents, with the weight of the work concentrated in the moments that actually require human judgment.</p><p>That&#8217;s the rebuild. Most organizations haven&#8217;t finished it yet. The ones that do will have a different kind of design practice than the one the industry inherited.</p><div><hr></div><p>Leslie Sultani is a design leader and player-coach writing about the intersection of AI, design practice, and organizational change. Former CPO, UX engineer, and founder of a FinTech AI platform. Read the full AI-Native Design Series at <strong><a href="https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7451839239010287616">LinkedIn</a></strong>, <strong><a href="https://lesliesultani.substack.com/">Substack</a></strong>, or <strong><a href="https://medium.com/@lesliesultani">Medium</a></strong>.</p><div><hr></div><h3><strong>Further Reading</strong></h3><ul><li><p><strong><a href="https://www.lennysnewsletter.com/p/the-design-process-is-dead?utm_source=leslie_sultani">The Design Process Is Dead. Here&#8217;s What&#8217;s Replacing It.</a></strong> &#8212; Lenny Rachitsky interviewing Jenny Wen, head of design for Claude at Anthropic. The most widely shared argument for why the traditional discover-diverge-converge process can&#8217;t survive in AI-native product teams. The starting point for this article.</p></li><li><p><strong><a href="https://www.nngroup.com/articles/design-process-isnt-dead/?utm_source=leslie_sultani">Design Process Isn&#8217;t Dead, It&#8217;s Compressed</a></strong> &#8212; Sarah Gibbons and Huei-Hsin Wang, NN/g. The definitive response to the &#8220;throw out the design process&#8221; crowd. What looks like skipping steps is experienced designers running compressed versions. The compression argument in this article.</p></li><li><p><strong><a href="https://linear.app/now/design-for-the-ai-age?utm_source=leslie_sultani">Design for the AI Age</a></strong> &#8212; Karri Saarinen, Linear. Saarinen&#8217;s case for why designers need to reinvent their processes by working forward from capability rather than backward from user needs. Source for the &#8220;design is search&#8221; framing.</p></li><li><p><strong><a href="https://linear.app/quality/04?utm_source=leslie_sultani">Henry Modisett on Quality</a></strong> &#8212; Linear&#8217;s Conversations on Quality series featuring Henry Modisett, VP of Design at Perplexity, on fast iteration and the capability-first approach at Perplexity.</p></li><li><p><strong><a href="https://www.nngroup.com/articles/ai-prototyping/?utm_source=leslie_sultani">Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts</a></strong> &#8212; Huei-Hsin Wang and Megan Brown, NN/g. Rigorous evaluation of what AI prototyping tools actually produce when they meet real design scenarios.</p></li><li><p><strong><a href="https://verifiedinsider.substack.com/p/operating-as-an-ai-native-product?utm_source=leslie_sultani">Operating as an AI-Native Product Designer in 2026</a></strong> &#8212; Tom Scott, Verified Insider. The practitioner account behind the parallelization section, with specific details on the AI-native product designer&#8217;s toolkit and workflow.</p></li><li><p><strong><a href="https://verifiedinsider.substack.com/p/design-at-intercom?utm_source=leslie_sultani">Design at Intercom</a></strong> &#8212; Tom Scott interviewing Thom Rimmer, Verified Insider. Intercom&#8217;s senior design leader on why every designer at the company now ships to production, and how the handoff collapsed.</p></li><li><p><strong><a href="https://www.nngroup.com/articles/generative-ui/?utm_source=leslie_sultani">Generative UI and Outcome-Oriented Design</a></strong> &#8212; Kate Moran and Sarah Gibbons, NN/g.
The bigger shift behind this article: from designing interfaces to designing adaptive systems that respond to user goals.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[The Design System Is the Same for Humans and AI. Until It Isn’t.]]></title><description><![CDATA[This is the first piece in this series where I'm writing from something I built, not something I observed.]]></description><link>https://www.ainativedesignseries.com/p/the-design-system-is-the-same-for</link><guid isPermaLink="false">https://www.ainativedesignseries.com/p/the-design-system-is-the-same-for</guid><dc:creator><![CDATA[Leslie Sultani]]></dc:creator><pubDate>Wed, 15 Apr 2026 02:38:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WGwJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F047240ad-410e-429c-a18f-944345dd91a3_600x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Partway through building the design system for <a href="https://lesliesultani.com/">my own portfolio site</a>, I noticed something I hadn&#8217;t expected. The choices I kept making were shifting based on a question I wasn&#8217;t consciously asking: who&#8217;s going to read this?</p><p>When I pictured a designer opening the Figma file, I wanted tokens named for legibility, specimen pages showing components in context, usage notes written in plain language with visual examples. When I pictured an AI agent reading the same system to generate a chart or a deck slide in my voice, something different happened. I went heavier on the naming structure, added aliases that encoded intent rather than value, wrote descriptions on every component, and built a separate rules document for the things that must never change.</p><p>The two sets of choices didn&#8217;t just feel different. They pulled in opposite directions. And sitting with that long enough made something visible: AI agents and human designers don&#8217;t just have different preferences for how a system is organized. They have different cognitive modes entirely, and optimizing a design system for one doesn&#8217;t give you the other for free.</p><div><hr></div><h3><strong>Two Readers, Two Cognitive Modes</strong></h3><p>A human designer reading a design system brings context. Years of pattern recognition, product knowledge, the ability to look at a component and infer from experience when to use it and when not to. They skim. They interpret.
They work from examples and fill gaps from what they already know about design conventions and your specific product.</p><p>An AI agent does none of that. It queries against what&#8217;s explicitly in the system. If a token is named <code>gray-900</code>, the agent knows it&#8217;s a dark gray. What it doesn&#8217;t know is that <code>gray-900</code> is your foreground color for primary text, that you never use it for disabled states, that your visual identity depends on using it only at full opacity and never tinted. That context lives in the heads of the designers who built the system, in decisions made two product cycles ago, in the implicit rules everyone follows without ever writing down. The agent doesn&#8217;t have access to any of it.</p><p>This is the actual problem. Not that AI can&#8217;t use a design system. It&#8217;s that most design systems were built for readers who could infer. AI can&#8217;t infer. It queries. And when it queries against sparse, implicit documentation, it fills the gaps from general training data, which doesn&#8217;t know your product, your voice, or the specific token your team deprecated last quarter.</p><div><hr></div><h3><strong>What Changes When You Build for Both</strong></h3><p>Building my own system with both audiences in mind, I made four choices I wouldn&#8217;t have made if I were only designing for a human team.</p><p><strong>Token aliasing</strong> - Instead of a flat list of presentational values, the system has two layers: primitives that hold the raw numbers, and semantic tokens that encode the intent. <code>color/semantic/foreground</code>. <code>color/semantic/border</code>. <code>motion/duration/slow</code>. The semantic layer is what components consume and what the agent reads. The name carries the meaning. An agent parsing <code>color/semantic/foreground</code> knows something about where that token belongs that it couldn&#8217;t infer from a hex value alone. The rule in the system is explicit: components consume semantic tokens, never primitives. If something needs a color the semantic set doesn&#8217;t cover, you add a semantic token. You don&#8217;t reach for a primitive and hope the agent figures it out.</p>
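<p>A minimal sketch of what that two-layer structure could look like if the tokens were expressed in code. The token names mirror the ones above; the hex values and the TypeScript representation are illustrative, not the actual implementation.</p><pre><code>// tokens.ts -- two-layer token structure. Primitives hold raw values;
// semantic tokens alias them and carry the intent in the name.
// Token names mirror the article; hex values and this TypeScript
// representation are illustrative only.

// Layer 1: primitives. Presentational names, never consumed by components.
const primitive = {
  "gray-900": "#1a1a1a",
  "gray-100": "#f2f2f2",
  "duration-600": "600ms",
} as const;

// Layer 2: semantic tokens. The name encodes where the value belongs,
// which is exactly the context an agent cannot infer from a hex value.
const semantic = {
  "color/semantic/foreground": primitive["gray-900"],
  "color/semantic/surface": primitive["gray-100"],
  "motion/duration/slow": primitive["duration-600"],
} as const;

// Components consume the semantic layer only. A missing color means a new
// semantic token gets added; nothing reaches for a primitive directly.
const buttonStyle = {
  color: semantic["color/semantic/foreground"],
  transitionDuration: semantic["motion/duration/slow"],
};
console.log(buttonStyle);
</code></pre>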
<p><strong>Component descriptions</strong> - Every component in the library has a written description in plain text. Not a visual example, not a usage diagram, a sentence that makes the intent explicit. Designers read these as intent documentation. Agents treat them as decision rules. The same text serves both readers, but I wouldn&#8217;t have written it at all if I weren&#8217;t building for an audience that can&#8217;t rely on visual context to fill in what I didn&#8217;t say.</p><p><strong>Flat component tree</strong> - Deep nesting hides structure from agents that parse by traversal. I kept the tree shallow, which also turned out to make the system easier for human designers to navigate. That pattern kept showing up: the discipline that makes a system legible to AI tends to make it more legible to people too. The gaps agents can&#8217;t bridge are usually the same gaps that junior designers quietly paper over by asking someone who&#8217;s been on the team longer.</p><p>There&#8217;s something this list of choices doesn&#8217;t say: during active iteration, the file looked nothing like any of this. Layers unnamed. Components nested three and four levels deep because that&#8217;s how the thinking happened, in stages, one decision stacked on the last. That&#8217;s not sloppiness. That&#8217;s design work. You&#8217;re moving fast, testing ideas, and the file structure reflects the mess of actually figuring something out. The problem is that mess is exactly what breaks AI consumption. An agent parsing a deeply nested, unnamed component tree isn&#8217;t going to infer your intent from the chaos. It&#8217;s going to produce outputs that drift from the system in ways that are hard to trace back to anything specific. What I hadn&#8217;t expected is that AI is also what solves this. You can ask it to do the cleanup pass: rename the layers, flatten the nesting, surface the implicit decisions and make them explicit. The extra step is real, but it doesn&#8217;t have to be a human step.</p><p><strong>Rules file</strong> - A separate Markdown document at the root of the repository, labeled explicitly as an AI consumption file. It encodes the system&#8217;s invariants: what never changes, what&#8217;s forbidden without justification, what each semantic token means and why. For a designer, it&#8217;s a reference for edge cases and onboarding. For an agent, it&#8217;s the operating context that makes everything else parseable. The first line reads: &#8220;Read this before generating any visual, component, page, or marketing artifact in Leslie&#8217;s voice.&#8221;</p>
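<p>Invariants like these only pay off if something can act on them. One hypothetical extension, not part of the system described here, is to make the most mechanical rule, that components consume semantic tokens and never primitives, checkable in code:</p><pre><code>// check-tokens.ts -- toy invariant check. All component names, token
// references, and prefixes here are illustrative, not from the real system.
const componentTokenRefs: { [component: string]: string[] } = {
  Button: ["color/semantic/foreground", "motion/duration/slow"],
  Card: ["gray-900"], // violation: reaches straight for a primitive
};

// The invariant from the rules file: only semantic namespaces are consumable.
const SEMANTIC_PREFIXES = ["color/semantic/", "motion/duration/"];

function violations(refs: { [component: string]: string[] }): string[] {
  const out: string[] = [];
  for (const [component, tokens] of Object.entries(refs)) {
    for (const token of tokens) {
      let semantic = false;
      for (const prefix of SEMANTIC_PREFIXES) {
        if (token.startsWith(prefix)) semantic = true;
      }
      if (!semantic) out.push(component + " consumes non-semantic token: " + token);
    }
  }
  return out;
}

// Prints: "Card consumes non-semantic token: gray-900"
console.log(violations(componentTokenRefs).join("\n"));
</code></pre>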
<div><hr></div><h3><strong>The Discipline That Transfers</strong></h3><p>Semantic aliases, written descriptions, shallow trees, and a companion rules file: none of these turned out to be concessions to AI. They&#8217;re just good systems practice that most teams skip because human designers can compensate for the gaps. When someone who built the system is always nearby to answer questions, you don&#8217;t feel the cost of leaving things implicit. The cost only becomes visible when the reader can&#8217;t ask.</p><p>AI doesn&#8217;t ask. It works from what&#8217;s there. And the teams finding this out the hard way are the ones whose AI-generated output keeps drifting from the system: right colors, wrong token. Close component, wrong variant. Technically not broken, but obviously off to anyone who knows the product.</p><p>The discipline of building for a reader that can&#8217;t infer is also the discipline of building for anyone who wasn&#8217;t in the room when the decisions were made. That&#8217;s not a new design systems problem. It&#8217;s been invisible until now because the readers who could compensate for it were always humans.</p><div><hr></div><h3><strong>The Gap Most Systems Have</strong></h3><p>Most design systems I&#8217;ve encountered will fail AI consumption not because they&#8217;re poorly designed but because they were never designed with this reader in mind. The documentation is sparse because everyone assumed the reader could infer. Token names are presentational rather than semantic because the team was moving fast and everyone already knew what the tokens meant. Component descriptions are visual rather than verbal because designers communicate in images.</p><p>None of that is negligence. It&#8217;s rational given the original audience. The problem is the audience has changed, and most systems haven&#8217;t caught up.</p><p>Teams at Atlassian and Figma are building this layer intentionally now, and if you read <em><strong><a href="https://open.substack.com/pub/lesliesultani/p/your-design-system-has-a-new-job?utm_campaign=post-expanded-share&amp;utm_medium=web">Your Design System Has a New Job</a></strong></em> earlier in this series, this is the structural change that makes that argument possible in practice. Not a different system for AI; the same system, built with more explicit intention about what it&#8217;s actually communicating and to whom.</p><p>A design system that only talks to humans is doing half the job. The half that&#8217;s missing is becoming more expensive every month as more of what ships gets generated rather than drawn.</p><div><hr></div><p>The system I built is live at <strong><a href="http://library.lesliesultani.com/">library.lesliesultani.com</a></strong>, and the rules file that governs AI consumption is at the root of the repository. You can see the Figma design system <strong><a href="https://www.figma.com/design/zxniWcF88D1f9o57nl1xjR">here</a></strong>.</p><p>When I wrote the first line of that rules file, I was talking to Claude. But I was also clarifying something for myself. Making a system legible to an agent that can&#8217;t infer forces you to be explicit about things you&#8217;ve been leaving implicit for years. The token names have to carry meaning. The component descriptions have to be verbal, not just visual. The invariants have to be written down somewhere a reader can actually find them.</p><p>That clarity, it turns out, helps everyone who reads the system. Not just the agents. The humans too.</p><div><hr></div><p><em>Leslie Sultani is a design leader writing about the intersection of AI, design practice, and organizational change.</em></p><div><hr></div><h3><strong>Further Reading</strong></h3><ul><li><p><strong><a href="https://www.figma.com/blog/design-systems-ai-mcp/?utm_source=leslie_sultani">Design Systems and AI: Why MCP Servers Are the Unlock</a></strong> &#8212; Ana Boyer, Figma. The technical foundation for making design system context available to AI agents at generation time &#8212; the infrastructure argument that pairs with this article&#8217;s structural one.</p></li><li><p><strong><a href="https://www.atlassian.com/blog/design/turning-handoffs-into-handshakes-integrating-design-systems-for-ai-prototyping-at-scale?utm_source=leslie_sultani">Turning Handoffs into Handshakes: Integrating Design Systems for AI Prototyping at Scale</a></strong> &#8212; Lewis-Ethan Healey &amp; Kylor Hall, Atlassian. How Atlassian built agentic content and structured documentation for AI-generated code &#8212; the enterprise version of the approach described here.</p></li><li><p><strong><a href="https://learn.thedesignsystem.guide/p/why-your-design-system-is-the-most?utm_source=leslie_sultani">Why Your Design System Is the Most Important Asset in the AI Era</a></strong> &#8212; Romina Kavcic, The Design System Guide. The economics: 41% of new code in 2025 was AI-generated. When code is cheap and understanding is expensive, the design system is the architecture that matters.</p></li><li><p><strong><a href="https://uxdesign.cc/agentic-ai-design-systems-figma-a-practical-guide-6ab0b681718d?utm_source=leslie_sultani">Agentic AI, Design Systems &amp; Figma: A Practical Guide</a></strong> &#8212; Christine Vallaure, UX Collective.
Practical implementation of agentic design system thinking &#8212; semantic tokens, consistent naming, and complete component states as AI infrastructure.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[04. Why AI Fluency Isn't a Training Problem]]></title><description><![CDATA[When design leaders talk about building AI fluency on their teams, the conversation almost always ends up in the same place: training.]]></description><link>https://www.ainativedesignseries.com/p/04-why-ai-fluency-isnt-a-training</link><guid isPermaLink="false">https://www.ainativedesignseries.com/p/04-why-ai-fluency-isnt-a-training</guid><dc:creator><![CDATA[Leslie Sultani]]></dc:creator><pubDate>Mon, 13 Apr 2026 14:05:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!h40R!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88246c07-34a7-4a60-aa25-38ada76e6049_1184x672.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!h40R!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88246c07-34a7-4a60-aa25-38ada76e6049_1184x672.jpeg" width="1184" height="672" alt=""></figure></div><p>When design leaders talk about building AI fluency on their teams, the conversation almost always ends up in the same place: training.</p><p>Build a curriculum, run workshops, bring in an expert, send people to a course, create a certification path and measure completion rates.</p><p>What if the training isn&#8217;t failing because it&#8217;s bad? What if training is just the wrong solution to the actual problem?</p><p>The actual problem isn&#8217;t that designers don&#8217;t know how to use AI. The State of AI in Design report found that 96% of designers using AI today are self-taught. They figured it out on their own, without a curriculum, without a certification, without anyone measuring their completion rate. The knowledge exists. It&#8217;s spreading on its own.</p><p>What isn&#8217;t spreading on its own is the confidence to use what you know. The organizational permission to experiment without penalty. The safety to try something in front of your team and have it not work and not have that mean something bad about you.</p><p>That&#8217;s not a training problem. That&#8217;s a culture problem.
And it requires a different intervention.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!pcbR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23bd46ee-b84b-4c0f-9343-995962575901_2160x1186.png" width="1456" height="799" alt=""></figure></div><div><hr></div><h2>What Actually Gets in the Way</h2><p>When you ask designers why they aren&#8217;t using AI more in their work, the answers cluster around a few things.</p><p>They don&#8217;t know if it&#8217;s allowed. Not in a policy sense, but in an unspoken cultural sense. Will using AI on this project be seen as cutting corners? Will my manager wonder if the work is really mine? Is there an official tool, and am I supposed to wait for it? What is the right process?</p><p>They don&#8217;t want to look like they don&#8217;t know what they&#8217;re doing. AI tools are unfamiliar. Prompting is a skill that takes time to develop. In an environment where competence is visible and mistakes are noticed, the rational move is to stick with what you know.</p><p>The moment that changes everything hasn&#8217;t happened yet. Most designers who become enthusiastic AI users can point to a specific moment: the first time a prompt generated something that actually surprised them, the first time synthesis that would have taken two days took two hours. Before that moment, AI is abstract. After it, it&#8217;s a tool they reach for. The problem is that without the right conditions, that moment never happens.</p><p>Training doesn&#8217;t fix any of these things. A workshop teaches you how the tool works, but it doesn&#8217;t give you permission, restore psychological safety, or create the first genuine &#8220;aha&#8221; moment. Those come from culture.</p><div><hr></div><h2>What Works Instead</h2><p>The organizations making the fastest progress have figured out three things.</p><p><strong>Grant Permission Publicly</strong></p><p>Not a policy document. A signal from leadership that experimentation is valued over polish in early-stage work, that trying AI and failing is more useful than not trying at all, that the goal right now is learning and the team&#8217;s job is to generate that learning as fast as possible. This sounds simple. It is rarely done. Most design cultures are implicitly oriented toward demonstrating competence, not demonstrating curiosity.
Changing that orientation requires explicit intervention from whoever leads the team.</p><p><strong>Design Practice As Play</strong></p><p>Figma ran something called the Great Figma Bake Off: a company-wide competition where teams built projects using AI tools, with live jam sessions across time zones and public showcasing of what people made. Nobody was evaluated on the quality of the output. The whole point was to get hands on the tools in a context where failure was interesting rather than consequential. Atlassian ran their AI Product Builders Week on a similar premise: a week of building, sharing, and learning together, with the outputs shared publicly inside the company. Over a thousand designers participated. The tool was never the point. The shared experience of trying together was.</p><p><strong>Cultivate Champions</strong></p><p>In every organization that has built genuine AI fluency at scale, there&#8217;s a small group of people who went deeper earlier. Not because they were assigned to. Because they were curious, or stumbled into the right project, or had a manager who gave them room to explore. These people become the informal infrastructure for fluency-building: the ones colleagues come to when they&#8217;re stuck, the ones who share what they&#8217;re learning in Slack channels or design crits, the ones who help others find their first &#8220;aha&#8221; moment. Organizations that identify these people and support them intentionally, rather than accidentally, accelerate the whole team&#8217;s development. Six to ten percent of the team with genuine depth is enough to change the culture.</p><div><hr></div><h2>The Mandate Trap</h2><p>It&#8217;s worth being direct about what doesn&#8217;t work, because it&#8217;s the most common approach.</p><p>Mandating AI tool adoption doesn&#8217;t build fluency. It builds compliance. Designers who are required to use AI will find the path of least resistance: they&#8217;ll use it in ways that are easy to demonstrate, that check the box, that don&#8217;t require them to actually change how they work. The learning that comes from genuine experimentation, from trying something because you&#8217;re curious and then following where it leads, doesn&#8217;t happen under mandate.</p><p>Measuring adoption doesn&#8217;t build fluency either. If the metric is &#8220;what percentage of the team used an AI tool this week,&#8221; you&#8217;ll get the metric. You won&#8217;t get the change in practice that makes the metric meaningful.</p><p>The organizations that measure fluency well are measuring different things: the quality of outputs over time, the rate at which AI-assisted work is replacing lower-value manual work, the diversity of use cases the team is exploring. Those are lagging indicators, but they&#8217;re the right ones.</p><div><hr></div><h2>The Deeper Thing</h2><p>There&#8217;s something underneath all of this that&#8217;s worth naming.</p><p>Design teams that build genuine AI fluency do it because their leaders are curious about AI themselves. Not performing curiosity. Not requiring it of others while remaining personally detached. Actually using the tools, sharing what they found, showing the team that the leader is also in the middle of figuring this out.</p><p>The signal that matters most isn&#8217;t the training budget or the completion certificate. 
It&#8217;s whether the person at the top of the design org treats AI as a live question they&#8217;re figuring out alongside the team, or as a mandate they&#8217;re delivering from a distance.</p><p>Teams follow what leaders do, not what they ask for. No certification path substitutes for that. That&#8217;s not specific to AI. It&#8217;s just how culture works.</p><div><hr></div><p><em>Leslie Sultani is a design leader writing about the intersection of AI, design practice, and organizational change.</em></p><div><hr></div><h2>Further Reading</h2><ul><li><p><a href="https://www.stateofaidesign.com/?utm_source=leslie_sultani">State of AI in Design 2025</a> &#8212; Foundation Capital &amp; Designer Fund. The source for the finding that 96% of designers using AI today are self-taught, and the data on where adoption is and isn&#8217;t happening.</p></li><li><p><a href="https://www.workday.com/en-us/perspectives/hr/2026/03/human-judgment-reckoning.html?utm_source=leslie_sultani">AI&#8217;s Blind Spot: The Human Judgment Reckoning</a> &#8212; Workday. Survey data that makes the culture-not-curriculum argument concrete: 66% of leaders believe they are prioritizing skills training; only 36% of employees agree. The perception gap is the problem, not the training content.</p></li><li><p><a href="https://hbr.org/2026/02/how-do-workers-develop-good-judgment-in-the-ai-era?utm_source=leslie_sultani">How Do Workers Develop Good Judgment in the AI Era?</a> &#8212; David S. Duncan, Harvard Business Review. The finding that AI helped experienced practitioners more than less-experienced ones because judgment is the bottleneck, not information access. Directly explains why training programs solve the wrong problem and what organizations need to do instead.</p></li><li><p><a href="https://www.atlassian.com/blog/inside-atlassian/ai-product-builders-week?utm_source=leslie_sultani">AI Product Builders Week: How Hands-On Experimentation Is Shaping Atlassian&#8217;s Future</a> &#8212; Atlassian&#8217;s account of the week-long program where over a thousand employees built with AI tools together. The model behind the structured experimentation approach in this article.</p></li><li><p><a href="https://www.atlassian.com/blog/design/a-design-technologists-take-on-ai-builders?utm_source=leslie_sultani">A Design Technologist&#8217;s Take on AI Builders Week</a> &#8212; Atlassian. A practitioner&#8217;s perspective from inside the program, on what actually changes when teams build together with AI rather than train in isolation.</p></li><li><p><a href="https://hbr.org/podcast/2026/03/strategy-summit-2026-why-ai-means-radical-change?utm_source=leslie_sultani">Strategy Summit 2026: Why AI Means Radical Change</a> &#8212; Tsedal Neeley, HBR IdeaCast. Introduces the &#8220;30% rule&#8221;: every worker needs baseline AI fluency, not just technical teams. Treats fluency as a culture and organizational change problem, not a skills gap.</p></li><li><p><a href="https://hbr.org/podcast/2026/03/strategy-summit-2026-inventive-strategy-and-the-unbossed-organization?utm_source=leslie_sultani">Strategy Summit 2026: Inventive Strategy and the &#8216;Unbossed&#8217; Organization</a> &#8212; Rita McGrath, HBR IdeaCast. The electricity analogy: you wouldn&#8217;t create an &#8220;electricity strategy,&#8221; you&#8217;d give people tools to experiment. 
Directly parallels the permission argument in this article.</p></li><li><p><a href="https://www.figma.com/blog/skills-for-the-ai-era/?utm_source=leslie_sultani">5 Design Skills to Sharpen in the AI Era</a> &#8212; Figma. What skills teams should be building fluency in, based on research across the design industry.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[The AI-Native Design Series: Articles, Resources, and the Reading List That Shaped Them]]></title><description><![CDATA[I&#8217;ve been researching and writing about what happens when design organizations take AI seriously at a structural level.]]></description><link>https://www.ainativedesignseries.com/p/the-ai-native-design-series-articles</link><guid isPermaLink="false">https://www.ainativedesignseries.com/p/the-ai-native-design-series-articles</guid><dc:creator><![CDATA[Leslie Sultani]]></dc:creator><pubDate>Wed, 01 Apr 2026 19:51:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WGwJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F047240ad-410e-429c-a18f-944345dd91a3_600x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve been researching and writing about what happens when design organizations take AI seriously at a structural level. Not the tools. The harder question underneath: what changes about how teams work, how decisions get made, who does what, and where human judgment actually matters when AI is embedded in the practice.</p><p>So far this has taken me eight articles deep, and the series will continue to unfold as I uncover more. Each one started with something I noticed in real organizations; I followed the thread and tried to make it useful for other design leaders sitting with the same questions.</p><p>This page is the home base. The full table of contents for the series, updated as new articles publish, plus the annotated reading list of every source that shaped my thinking along the way. Bookmark it if you want one URL for all of it.</p><div><hr></div><h2>The AI-Native Design Series</h2><p><strong><a href="https://open.substack.com/pub/lesliesultani/p/what-does-design-look-like-when-ai?utm_campaign=post-expanded-share&amp;utm_medium=web">Article 0: What Does Design Look Like When AI Changes Everything?</a></strong> The question most designers aren&#8217;t asking out loud. When AI handles execution, what are designers actually for? The answer is clearer than the anxiety around it suggests. </p><p><strong><a href="https://open.substack.com/pub/lesliesultani/p/what-ai-native-design-actually-means?utm_campaign=post-expanded-share&amp;utm_medium=web">Article 1: What &#8220;AI-Native Design&#8221; Actually Means (And Why Most Teams Are Getting It Wrong)</a>.</strong> Three stages: AI-aware, AI-augmented, AI-native. Most teams think they&#8217;re at stage three because they bought the right licenses. The distance between buying a camera and knowing how to see. </p><p><strong><a href="https://open.substack.com/pub/lesliesultani/p/your-design-system-has-a-new-job?utm_campaign=post-expanded-share&amp;utm_medium=web">Article 2: Your Design System Has a New Job.</a></strong> Design systems used to talk to humans. Now their most important audience is AI agents generating code at scale. If your system isn&#8217;t legible to machines, the machines are making it up. Atlassian and Figma are already building for this.
</p><p><strong><a href="https://open.substack.com/pub/lesliesultani/p/the-framework-for-knowing-where-human?utm_campaign=post-expanded-share&amp;utm_medium=web">Article 3: The Framework for Where Human Judgment Still Lives</a>.</strong> A Stakes &#215; Novelty matrix that maps any piece of design work into four zones, each with a different relationship between human judgment and AI. Plus four checkpoints that don&#8217;t move regardless of how the work is categorized. </p><p><strong><a href="https://open.substack.com/pub/lesliesultani/p/04-why-ai-fluency-isnt-a-training?utm_campaign=post-expanded-share&amp;utm_medium=web">Article 4: Why AI Fluency Isn&#8217;t a Training Problem</a>.</strong> 96% of designers using AI today taught themselves. No curriculum, no certification, no one measuring their completion rate. So why does every organization default to training when teams aren't adopting AI? The problem isn't what most design leaders think it is.</p><p><strong><a href="https://open.substack.com/pub/lesliesultani/p/the-design-system-is-the-same-for?utm_campaign=post-expanded-share&amp;utm_medium=web">Article 5: The Design System Is the Same for Humans and AI. Until It Isn&#8217;t.</a> </strong>The idea sounds clean: one system for humans and machines. In practice, it doesn&#8217;t hold the way you expect. I tested this by building my own portfolio using the same principles.</p><p><strong><a href="https://open.substack.com/pub/lesliesultani/p/what-ai-did-to-the-design-process?r=1qesgx&amp;utm_campaign=post&amp;utm_medium=web">Article 6: What AI Did to the Design Process.</a></strong> AI didn't kill the design process. It didn't just compress it either. It redistributed it. Three specific shifts are happening at once in the teams moving fastest, and most teams are only seeing one of them. That's why their AI investment keeps producing output without producing progress.</p><p><strong><a href="https://lesliesultani.substack.com/p/when-ai-decides-and-human-signs-off">Article 7: When AI Decides and Human Signs Off</a>. </strong>There's a design principle underneath every high-stakes AI product: AI is the decision support. The human is the decision maker. Most products violate that contract by design. Four documented cases across law, healthcare, autonomous systems, and criminal justice show what it costs when the design didn't account for that. The gap between "human in the loop" and human judgment is a design problem. Most AI companies aren't treating it as one.</p><p><em>[Coming Soon] </em><strong>AI-Native Design at Every Stage: What Changes and What Goes Wrong.</strong> Where do you start with AI? The answer depends almost entirely on the size and shape of the organization asking. A four-person startup and a two-thousand-person enterprise aren&#8217;t solving the same problem when they say they want to be AI-native. What actually changes across startups, mid-size companies, and large enterprises, and what&#8217;s the specific failure mode at each stage?</p><p><em>[Coming Soon] </em><strong>The Trust Problem AI Creates That Nobody in Design Is Talking About.</strong> What happens to users when AI is inside the product making decisions about what they see, what they're offered, and what they can do? <em>Is trust architecture becoming the most consequential design challenge of this moment, and why aren't more teams treating it that way?</em></p><p><em>[Coming Soon] </em><strong>What a Design Sprint Looks Like Now.</strong> The five-day design sprint was designed in 2016.
<em>What does it look like when AI is embedded in the process, and how does the process change?</em></p><p><em>[Coming Soon] </em><strong>What AI Does to the Cost of a Pivot.</strong> Pivots have always been the right strategic move made at the wrong time. <em>What happens to that calculus when AI changes the economics underneath the decision?</em></p><p>&#8230; more unfolding</p><div><hr></div><h2>The Reading List</h2><p>These are the sources that kept showing up in my research. Not a generic collection. Every one of these shaped how I think about AI-native design, and most of them informed specific arguments in the series. I&#8217;ve organized them by theme and added my notes on what each one actually says and why it matters.</p><div><hr></div><h3>The Research</h3><p><strong><a href="https://www.stateofaidesign.com/?utm_source=leslie_sultani">State of AI in Design 2025</a></strong> &#8212; Foundation Capital &amp; Designer Fund. The closest thing to a census of where design teams actually stand with AI. The finding that 96% of designers using AI are self-taught reframed my entire thinking about the fluency problem. The 84%/39% gap between exploration and delivery adoption shaped the argument.</p><p><strong><a href="https://www.figma.com/reports/state-of-the-designer-2026/?utm_source=leslie_sultani">State of the Designer 2026</a></strong> &#8212; Figma. Figma's annual survey of 906 designers across five regions. The headline numbers: 72% now use generative AI tools, 98% increased their usage in the past year, and 91% say AI improves the quality of their outputs. But the finding I keep coming back to is this: designers who are increasing their AI usage are 25% more likely to report rising job satisfaction than those who aren't. The flip side is just as telling: 40% of designers whose AI usage stayed flat say their job is getting worse. The craft data is equally important. Companies that increased their emphasis on craft saw 67% of designers report higher satisfaction, and leadership attention to design work drove that number to 60%. If you're making the argument that judgment and craft become more valuable when AI handles execution, this is your evidence.</p><p><strong><a href="https://www.nngroup.com/articles/state-of-ux-2026/?utm_source=leslie_sultani">State of UX 2026: Design Deeper to Differentiate</a></strong> &#8212; Nielsen Norman Group. NNg&#8217;s annual assessment calls 2025 the year of &#8220;post-hype AI&#8221; and names trust as the defining design challenge of 2026. If you read one industry report this year for strategic planning, this is the one.</p><p><strong><a href="https://johnmaeda.medium.com/autodesigners-on-autopilot-88c5b07609b9?utm_source=leslie_sultani">2025 Design in Tech Report: Autodesigners on Autopilot</a></strong> &#8212; John Maeda. The eleventh annual report identifies the shift from UX to AX (Agent Experience). Maeda&#8217;s argument that designers must now design for AI agents, not just human users, runs parallel to the design systems argument.</p><p><strong><a href="https://designerfund.com/blog/four-shifts-designers-cant-ignore-in-the-age-of-ai?utm_source=leslie_sultani">Four Shifts Designers Can&#8217;t Ignore in the Age of AI</a></strong> &#8212; Ben Blumenrose, Designer Fund. Companion piece to the State of AI in Design report, distilling four operational and cultural shifts design leaders are facing right now. 
Written from an investor and design-leader lens.</p><div><hr></div><h3>Design Systems as AI Infrastructure</h3><p><strong><a href="https://www.figma.com/blog/the-figma-canvas-is-now-open-to-agents/?utm_source=leslie_sultani">Agents, Meet the Figma Canvas</a> &#8212; </strong>Matt Colyer, Figma. Figma opens its canvas to AI agents via MCP server beta, letting agents create and modify real design assets using existing components, variables, and tokens. Introduces "Skills," which are Markdown files that encode team design decisions so agents produce on-brand outputs consistently. Works with Claude Code, Codex, Cursor, and Copilot. The most significant design systems infrastructure announcement since MCP servers launched.</p><p><strong><a href="https://bradfrost.com/blog/post/agentic-design-systems-in-2026/?utm_source=leslie_sultani">Agentic Design Systems in 2026</a> </strong>&#8212; Brad Frost, bradfrost.com. Brad Frost, creator of Atomic Design, coins &#8220;DS+AI&#8221; and argues agents should assemble UIs using the same components human teams use. Defines two dimensions every agentic design system needs: coverage (clear examples, states, and constraints for agents) and validation (tests and human sign-off before anything ships). Foundational piece from the person who defined modern design system thinking.</p><p><strong><a href="https://uxdesign.cc/agentic-ai-design-systems-figma-a-practical-guide-6ab0b681718d?utm_source=leslie_sultani">Agentic AI, Design Systems &amp; Figma: A Practical Guide</a> </strong>&#8212; Christine Vallaure, UX Collective. Practical implementation guide following Brad Frost&#8217;s Agentic Design Systems demo. Makes a useful distinction: &#8220;This is the opposite of vibe coding. The agent is not inventing; it is following.&#8221; Covers what design system basics like semantic tokens, consistent naming, and complete states actually require to function as agentic infrastructure.</p><p><strong><a href="https://www.intodesignsystems.com?utm_source=leslie_sultani">AI Design Systems Conference 2026</a> &#8212;</strong> Into Design Systems. Sold-out conference with 1,000-plus attendees and 21 experts from WhatsApp, GitHub, Figma, Adobe, Miro, and Atlassian. Sessions covered agentic design systems, machine-readable systems for MCP and LLMs, design systems as AI infrastructure, and encoding governance in agentic systems. Recordings are available and represent the most concentrated body of thinking on this topic from a single event.</p><p><strong><a href="https://www.infoq.com/news/2026/03/uber-ai-design/?utm_source=leslie_sultani">Uber Automates Design Documentation with Agentic Systems</a> &#8212; </strong>InfoQ. Case study of Uber&#8217;s uSpec system, which uses AI agents and the Figma Console MCP to automate component design specifications at scale. The agent crawls Figma component trees, extracts tokens and variants, and auto-generates platform-specific accessibility specs across seven stacks and three accessibility frameworks. The strongest enterprise-scale case study available for design systems combined with AI agents in production.</p><p><strong><a href="https://nervegna.substack.com/p/vibe-design-is-real-inside-google?utm_source=leslie_sultani">Vibe Design Is Real: Inside Google Stitch&#8217;s March 2026 Update</a> </strong>&#8212; Tommaso Nervegna, Sorted Pixels. Deep analysis of Google Stitch&#8217;s introduction of DESIGN.md, a Markdown-based design system format built to be readable by AI agents. 
The core argument: &#8220;The design system of 2026 isn&#8217;t a Figma library with documentation. It&#8217;s a DESIGN.md file that travels between your design agent, your coding agent, and your prototyping environment. If your system can&#8217;t be read by a machine, it&#8217;s already legacy.&#8221; Also covers MCP integration and what agent-interoperability looks like in practice.</p><p><strong><a href="https://storybook.js.org/blog/storybook-mcp-for-react/?utm_source=leslie_sultani">Storybook MCP for React</a></strong> &#8212; Kyle Gach, Storybook. Storybook&#8217;s MCP addon gives AI coding agents structured component metadata, so they build with your system instead of inventing their own version of it. The benchmarks tell the story: 12.8% better code quality, 2.76x faster generation, 27% fewer tokens. If you want to see what &#8220;design systems as AI infrastructure&#8221; looks like in production, start here.</p><p><strong><a href="https://www.designsystemscollective.com/towards-an-agentic-design-system-c7e0a6469bb1?utm_source=leslie_sultani">Towards an Agentic Design System</a></strong> &#8212; Cristian Morales Achiardi, Design Systems Collective. Benchmarked six agent configurations and proved that structured, machine-readable design system infrastructure delivers 2x speed and 54% more accuracy at the same token cost. The most rigorous evidence I&#8217;ve seen that this investment pays off.</p><p><strong><a href="https://learn.thedesignsystem.guide/p/why-your-design-system-is-the-most?utm_source=leslie_sultani">Why Your Design System Is the Most Important Asset in the AI Era</a></strong> &#8212; Cristian Morales Achiardi, The Design System Guide. Covers MCP versus CLI approaches, practical server setup, and the five context layers agents need to reason about components. His core point is hard to argue with: a design system is infrastructure, not a side project.</p><p><strong><a href="https://www.figma.com/blog/design-systems-ai-mcp/?utm_source=leslie_sultani">Design Systems and AI: Why MCP Servers Are the Unlock</a></strong> &#8212; Ana Boyer, Figma. The technical foundation for making design system context available to AI agents at generation time. Boyer&#8217;s argument shaped how I think about the difference between access and context.</p><p><strong><a href="https://www.atlassian.com/blog/design/turning-handoffs-into-handshakes-integrating-design-systems-for-ai-prototyping-at-scale?utm_source=leslie_sultani">Turning Handoffs into Handshakes</a></strong> &#8212; Lewis-Ethan Healey &amp; Kylor Hall, Atlassian. How Atlassian&#8217;s design system team built agentic content and MCP infrastructure so AI-generated code actually reflects design intent. </p><p><strong><a href="https://www.figma.com/blog/schema-2025-design-systems-recap/?utm_source=leslie_sultani">Schema 2025: Design Systems for a New Era</a></strong> &#8212; Figma. Figma&#8217;s recap of the conference where they announced the Dev Mode MCP server. </p><p><strong><a href="https://www.atlassian.com/blog/design/designers-workflow-for-shipping-code?utm_source=leslie_sultani">Designers&#8217; Workflow for Shipping Code</a></strong> &#8212; Eduardo Sonnino, Atlassian. A practitioner&#8217;s account of how the designer-to-engineer workflow changes when AI is generating the code. 
Useful for anyone trying to understand what the day-to-day actually looks like.</p><p><strong><a href="https://uxdesign.cc/from-products-to-systems-the-agentic-ai-shift-eaf6a7180c43?utm_source=leslie_sultani">From Products to Systems: The Agentic AI Shift</a></strong> &#8212; John Moriarty, UX Collective. A design leader at DataRobot on designing for AI agents as a new user type alongside humans. Discusses agent-aware design system documentation in production and the tension between agent autonomy and user control.</p><div><hr></div><h3>The Role Shift</h3><p><strong><a href="https://www.lennysnewsletter.com/p/the-design-process-is-dead?utm_source=leslie_sultani">The Design Process Is Dead. Here&#8217;s What&#8217;s Replacing It</a>. </strong>&#8212; Jenny Wen, Lenny&#8217;s Newsletter. Jenny Wen, Head of Design for Claude at Anthropic, argues the classic discover, diverge, converge design process has broken down. When engineers spin up AI coding agents and ship working versions before designers finish exploring, traditional workflows can&#8217;t keep pace. Design has split into two modes: supporting rapid implementation and upstream direction-setting on what to build. One of the most-discussed design and AI pieces of early 2026.</p><p><strong><a href="https://verifiedinsider.substack.com/p/operating-as-an-ai-native-product?utm_source=leslie_sultani">Operating as an AI-Native Product Designer in 2026</a> </strong>&#8212; Tom Scott &amp; Vitor Amaral, Verified Insider. First-person account of AI-native design practice at Intercom. The designer&#8217;s role shifts from making to deciding: AI produces solid starting points, so the value moves to evaluation and judgment. Details a concrete daily workflow including a coded design system as the vocabulary for AI agents, Figma Code Connect mapping components to code, and a dedicated route for prototyping in the real app. One of the more grounded accounts of what this actually looks like day to day.</p><p><strong><a href="https://www.nngroup.com/articles/outcome-oriented-design/?utm_source=leslie_sultani">Outcome-Oriented Design: Designing in the Era of AI</a> </strong>&#8212; Kate Moran &amp; Sarah Gibbons, Nielsen Norman Group. Introduces outcome-oriented design as a structural replacement for traditional single-interface design. Designers now define adaptive frameworks that respond to individual user goals rather than optimizing for the average user. The shift moves design from prescribing a fixed UI to building systems that flex based on user context and desired outcome.</p><p><strong><a href="https://www.uxmatters.com/mt/archives/2025/06/the-evolution-of-ux-design-in-the-age-of-ai-platformsfrom-creator-to-choreographer.php?utm_source=leslie_sultani">The Evolution of UX Design in the Age of AI Platforms: From Creator to Choreographer</a></strong> &#8212; Ken Olewiler, UXmatters. The source behind the creator-to-choreographer framing I reference in <a href="https://lesliesultani.substack.com/p/what-does-design-look-like-when-ai?r=1qesgx">Article 0: What Does Design Look Like When AI Changes Everything</a>. Olewiler&#8217;s argument that editorial judgment and taste are the skills that survive automation has held up well.</p><p><strong><a href="https://openai.com/index/figma-david-kossnick?utm_source=leslie_sultani">How Figma Integrates AI to Transform Design and Empower Creatives</a></strong> &#8212; David Kossnick, via OpenAI. Where the &#8220;vision carriers&#8221; concept comes from. 
Kossnick describes designers who understand where the product needs to go and can hold the long arc of user experience while everyone else is deep in the immediate problem.</p><p><strong><a href="https://www.figma.com/blog/skills-for-the-ai-era?utm_source=leslie_sultani">5 Design Skills to Sharpen in the AI Era</a></strong> &#8212; Figma. Research-backed breakdown of which design skills compound in value as AI handles more execution. Based on the State of the Designer 2026 research.</p><p><strong><a href="https://lg.substack.com/p/the-death-of-product-development?utm_source=leslie_sultani">The Death of Product Development as We Know It</a></strong> &#8212; Julie Zhuo. Co-written with Henry Modisett, head of design at Perplexity. Zhuo argues the traditional engineering/product/design team structure is dying. When building is cheap, everyone becomes a "builder" and role boundaries dissolve. Taste becomes the differentiator. I think she's right that the lines are blurring, but a designer, a PM, and an engineer will always look at the same problem through different lenses. That friction is what keeps teams from shipping the wrong thing faster. Remove the distinct perspectives and you just get a room full of people who think the same way. Worth reading and arguing with.</p><p><strong><a href="https://www.atlassian.com/blog/artificial-intelligence/shift-from-craft-to-judgement-ai?utm_source=leslie_sultani">Shifting from Craft to Judgment in the Age of AI</a></strong> &#8212; Ravi Mehta, Atlassian. Captures the ratio shift better than almost anything else I&#8217;ve read. It used to be fifty-fifty between building well and deciding what to build. Now it&#8217;s closer to ninety-ten. The building got easier. The deciding didn&#8217;t.</p><p><strong><a href="https://www.nngroup.com/articles/future-proof-designer?utm_source=leslie_sultani">The Future-Proof Designer</a></strong> &#8212; Nielsen Norman Group. Seven experts, 150-plus years of combined experience, all converging on the same theme: designers offer judgment built on expertise that AI can&#8217;t replicate. If you&#8217;re trying to articulate your value to a skeptical executive, the framing here will help.</p><p><strong><a href="https://www.designative.info/2026/02/01/taste-is-the-new-bottleneck-design-strategy-and-judgment-in-the-age-of-agents-and-vibe-coding?utm_source=leslie_sultani">Taste Is the New Bottleneck</a></strong> &#8212; Ivan Googol Medeiros, Designative. Goes beyond individual skill to explore taste as governance: who trains the systems that shape what good looks like? The most intellectually ambitious piece I&#8217;ve found on why taste and judgment are organizational concerns, not just personal ones.</p><p><strong><a href="https://www.atlassian.com/blog/design/navigating-the-next-decade-as-a-product-designer-in-tech?utm_source=leslie_sultani">Navigating the Next Decade as a Product Designer in Tech</a></strong> &#8212; Siva Sabaretnam, Atlassian. A practitioner&#8217;s perspective on how design&#8217;s advocacy role for users becomes more important, not less, as AI gets embedded in products.</p><p><strong><a href="https://www.nngroup.com/articles/service-design-evolve-ai-agents?utm_source=leslie_sultani">How Service Design Will Evolve with AI Agents</a></strong> &#8212; Nielsen Norman Group. AI agents become new actors in service ecosystems. Previews &#8220;outcome-oriented design&#8221; where users specify results rather than performing steps. 
Forward-looking on how design practice must structurally evolve.</p><div><hr></div><h3>Trust Architecture</h3><p><strong><a href="https://www.smashingmagazine.com/2026/04/identifying-necessary-transparency-moments-agentic-ai-part1/?utm_source=leslie_sultani">Identifying Necessary Transparency Moments in Agentic AI</a> (Part 1) </strong>&#8212; Victor Yocco, Smashing Magazine. Introduces the Decision Node Audit, a structured method for mapping backend AI logic to the user interface. Uses an insurance company case study to identify which moments in an agent workflow require active transparency versus a simple log entry. The design challenge it names is finding the balance between the black box and the data dump. A companion to Yocco&#8217;s two earlier pieces, and the most methodologically specific of the three.</p><p><strong><a href="https://www.weforum.org/stories/2026/02/how-to-design-for-trust-in-the-age-of-ai-agents/?utm_source=leslie_sultani">How to Design for Trust in the Age of AI Agents</a> &#8212; </strong>World Economic Forum. Proposes a layered trust stack for AI agent autonomy: legible reasoning paths, bounded agency, goal transparency, contestability and override, and governance by design. Argues trust should be earned rather than engineered, through cognitive resonance rather than emotional persuasion. One of the cleaner frameworks available for thinking about trust as a designed architecture rather than a communication problem.</p><p><strong><a href="https://hbr.org/2026/01/how-to-get-your-customers-to-trust-ai?utm_source=leslie_sultani">How to Get Your Customers to Trust AI</a> </strong>&#8212; Ashley Reichheld, Sebastian Goodwin &amp; Courtney Sherman, Harvard Business Review. Addresses the tension at the center of most AI product decisions: transparency is supposed to build trust, but companies regularly say too much and too little at the same time. Proposes embedding transparency within a broader trust framework, customizing disclosures for different audiences, and treating transparency as an ongoing process rather than a one-time disclosure.</p><p><strong><a href="https://itsadelriodesign.medium.com/the-trust-problem-why-designing-for-ai-agents-is-the-hardest-ux-challenge-of-2026-cae49374abf5?utm_source=leslie_sultani">The Trust Problem: Why Designing for AI Agents Is the Hardest UX Challenge of 2026</a> </strong>&#8212; Pedro del Rio, Medium. Traditional UX was built for passive systems; AI agents are proactive. They anticipate, decide, and act. Argues every agentic system needs a credible stop button and that designers must define clear thresholds between &#8220;act and report&#8221; and &#8220;ask before acting.&#8221; Makes the case that trust is a design decision, not a product or engineering one.</p><p><strong><a href="https://think.design/blog/dark-patterns-in-ai-2026/?utm_source=leslie_sultani">Dark Patterns in AI: How 2026 Made Them Harder to See</a> </strong>&#8212; Stuti Mazumdar, Think Design. Documents the shift from static UI dark patterns to AI-systemic manipulation: sycophantic assistants, endless &#8220;helpful&#8221; loops, and AI subtly altering user intent during rewriting. Key point: &#8220;Two people could look at what seems like the same interface and be experiencing very different levels of persuasion&#8221; because of AI personalization. 
Useful for understanding the newer category of trust harm that isn&#8217;t visible in traditional dark pattern audits.</p><p><strong><a href="https://www.agilesoftlabs.com/blog/2026/03/dark-patterns-ux-manipulation?utm_source=leslie_sultani">Dark Patterns UX: Manipulation Psychology 2026</a> </strong>&#8212; AgileSoftLabs. Covers the regulatory enforcement context: EU Digital Services Act Article 25 enforcement began in Q1 2026, Amazon&#8217;s $2.5 billion dark pattern settlement, and the finding that 97% of popular EU apps contain at least one dark pattern. Identifies AI-powered dark patterns as &#8220;an entirely new enforcement challenge, personalizing manipulation at machine speed and scale.&#8221;</p><p><strong><a href="https://criticalplayground.org/designing-trust-in-ai/?utm_source=leslie_sultani">Trust by Design: UX, AI, and Transparency Politics</a> </strong>&#8212; Critical Playground. Theoretical treatment of trust as a design problem, not a branding or disclosure problem. References Google PAIR&#8217;s work and argues transparency is relational: what counts as clear varies across cultural contexts. &#8220;The politics of AI are increasingly negotiated at the UX layer.&#8221;</p><p><strong><a href="https://www.smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence?utm_source=leslie_sultani">The Psychology of Trust in AI: A Guide to Measuring and Designing for User Confidence</a></strong> &#8212; Victor Yocco, Smashing Magazine.  The most actionable trust framework I&#8217;ve found for practitioners. Defines the calibrated trust spectrum (from Active Distrust to Automation Bias) and gives you concrete measurement methods organized around four pillars: Ability, Benevolence, Integrity, and Predictability. </p><p><strong><a href="https://www.smashingmagazine.com/2025/12/beyond-black-box-practical-xai-ux-practitioners?utm_source=leslie_sultani">Beyond the Black Box: Practical XAI for UX Practitioners</a></strong> &#8212; Victor Yocco, Smashing Magazine. The companion piece, with mockup-level design patterns for explainability: &#8220;Because&#8221; statements, interactive factor sliders, source attribution, confidence visualizations. Introduces AI journey mapping for identifying where trust is most at risk.</p><p><strong><a href="https://www.nngroup.com/articles/ai-magic-8-ball?utm_source=leslie_sultani">When Should We Trust AI? Magic-8-Ball Thinking and AI Hallucinations</a></strong> &#8212; Nielsen Norman Group. Coins &#8220;magic-8-ball thinking,&#8221; the tendency to accept AI outputs without questioning them. Cites research showing AI legal tools report inaccurate information 17-33% of the time. Good ammunition for the conversation about why trust architecture isn&#8217;t optional.</p><p><strong><a href="https://www.uxmatters.com/mt/archives/2025/12/designing-for-autonomy-ux-principles-for-agentic-ai.php?utm_source=leslie_sultani">Designing for Autonomy: UX Principles for Agentic AI</a></strong> &#8212; UXmatters. Reframes UX responsibility for agentic systems from &#8220;Is this usable?&#8221; to &#8220;Is this system behaving in alignment with human goals even when no one is watching?&#8221; Provides concrete questions for when to act versus wait, intervene versus observe.</p><div><hr></div><h3>Building AI Fluency</h3><p><strong><a href="https://hbr.org/2026/02/how-do-workers-develop-good-judgment-in-the-ai-era?utm_source=leslie_sultani">How Do Workers Develop Good Judgment in the AI Era?</a> </strong>&#8212; David S. 
Duncan, Harvard Business Review. Research finding: AI helped experienced practitioners more than less-experienced ones, because judgment is the bottleneck, not information access. Building judgment requires clarifying who makes decisions, exposing people to consequences, restoring stretch experiences, and using simulations and case-based learning. Directly relevant to why training programs fail to build fluency and what to do instead.</p><p><strong><a href="https://www.workday.com/en-us/perspectives/hr/2026/03/human-judgment-reckoning.html?utm_source=leslie_sultani">AI&#8217;s Blind Spot: The Human Judgment Reckoning</a> </strong>&#8212; Workday. Survey data on the gap between what leaders think is happening and what employees experience: 66% of leaders believe they are prioritizing skills training; only 36% of employees agree. Only 1 in 6 leaders is ready to use AI as a partner for complex decisions rather than a content generator. Useful data for making the case that the fluency problem is a culture problem, not a curriculum problem.</p><p><strong><a href="https://www.atlassian.com/blog/inside-atlassian/ai-product-builders-week?utm_source=leslie_sultani">AI Product Builders Week: How hands-on experimentation is shaping Atlassian&#8217;s future</a></strong> &#8212; Atlassian. Over a thousand employees building with AI tools together for a week. The tool was never the point. The shared experience of trying together was. This shaped how I think about fluency-building.</p><p><strong><a href="https://www.atlassian.com/blog/design/a-design-technologists-take-on-ai-builders?utm_source=leslie_sultani">A Design Technologist&#8217;s Take on AI Builders Week</a></strong> &#8212; Atlassian. The practitioner view from inside Atlassian&#8217;s program. What actually changes when teams build together with AI rather than train in isolation.</p><p><strong><a href="https://www.firstround.com/ai/shopify?utm_source=leslie_sultani">From Memo to Movement: Shopify&#8217;s Cultural Adoption of AI</a></strong> &#8212; First Round Capital. Goes past Tobi L&#252;tke&#8217;s viral memo to show what actually happened inside the company. Three things that surprised me: they gave everyone access to the best AI tools (not just technical teams), they removed friction and cost barriers before expecting adoption, and they started with legal as a partner rather than a blocker. The biggest surprise: support and revenue teams adopted AI faster than engineering.</p><p><strong><a href="https://www.technologyreview.com/2025/12/16/1125899/creating-psychological-safety-in-the-ai-era?utm_source=leslie_sultani">Creating Psychological Safety in the AI Era</a></strong> &#8212; MIT Technology Review &#215; Infosys. 83% of business leaders say psychological safety directly impacts the success of AI initiatives. 22% of leaders have hesitated to lead AI projects because they&#8217;re afraid of being blamed if things go wrong. If you&#8217;re wondering why your team isn&#8217;t experimenting, this might be why.</p><p><strong><a href="https://www.atlassian.com/software/jira/product-discovery/resources/ai-fluency?utm_source=leslie_sultani">AI Fluency: The New Product Superpower</a></strong> &#8212; Atlassian.  Five frameworks for raising AI fluency across teams, plus named anti-patterns to avoid: tool tourism, automation theater, and prompt gatekeeping. 
Practical and specific.</p><div><hr></div><h3>Process Compression and Strategic Agility</h3><p><strong><a href="https://www.nngroup.com/articles/design-process-isnt-dead?utm_source=leslie_sultani">Design Process Isn&#8217;t Dead, It&#8217;s Compressed</a></strong> &#8212; Sarah Gibbons, Nielsen Norman Group. The definitive response to the &#8220;throw out the design process&#8221; crowd. What looks like skipping steps is experienced designers running compressed versions. Exploring, making, learning, and refining can happen in a single afternoon now. The process is still there. It just moves faster.</p><p><strong><a href="https://www.nngroup.com/articles/ai-prototyping?utm_source=leslie_sultani">Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts</a></strong> &#8212; Nielsen Norman Group. Rigorous evaluation of AI prototyping tools using real design scenarios. The key finding: AI gets you to about 60% fast. The last 40%, the part that requires judgment, remains human. Essential reading for setting realistic expectations about AI-accelerated workflows.</p><p><strong><a href="https://www.lennysnewsletter.com/p/how-to-get-your-entire-team-prototyping?utm_source=leslie_sultani">How to Get Your Entire Team Prototyping with AI</a></strong> &#8212; Colin Matthews, Lenny&#8217;s Newsletter. Practical playbook for making AI prototyping work across a team, not just for the one person who figured out the tools. Covers component libraries, team workflows, and handoff processes.</p><p><strong><a href="https://www.lennysnewsletter.com/p/why-your-ai-product-needs-a-different?utm_source=leslie_sultani">Why Your AI Product Needs a Different Development Lifecycle</a></strong> &#8212; Aishwarya Reganti &amp; Kiriti Badam, Lenny&#8217;s Newsletter. Based on 50-plus AI implementations at OpenAI, Google, Amazon, and Databricks. Argues that AI products fundamentally break the assumptions of traditional sprint cadences. If you&#8217;re a design leader wondering why your team&#8217;s planning process feels off, this might explain it.</p><p><strong><a href="https://www.productboard.com/blog/product-craft-when-ai-changes-the-stakes?utm_source=leslie_sultani">Why Product Judgment Matters More Than Velocity in the AI Era</a></strong> &#8212; Productboard. Captures the central paradox: as the cost of building drops, the responsibility to choose well increases. Velocity stops being the differentiator. Judgment is.</p><p><strong><a href="https://marieclairedean.substack.com/p/you-dont-have-an-ai-strategy-problem?utm_source=leslie_sultani">You Don&#8217;t Have an AI Strategy Problem. You Have 6 Design Decisions to Make</a>.</strong> &#8212; MC Dean. Reframes AI strategy as a design problem: agency boundaries, failure tolerance, human overrides, feedback loops, what to refuse to automate. Sharp and specific.</p><div><hr></div><h3>Organizational Transformation (Beyond Design)</h3><p><strong><a href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/building-next-horizon-ai-experiences?utm_source=leslie_sultani">Building Next-Horizon AI-Native Experiences</a> </strong>&#8212; McKinsey &amp; Company. McKinsey argues the AI adoption problem is experiential, not technical. Proposes AI-native design patterns for embedding human judgment into AI interaction models. Organizations need to create with clarity, bring depth to workflows, and build for cocreation.
References McKinsey&#8217;s 2018 Business Value of Design report and argues those principles have to evolve for the AI era.</p><p><strong><a href="https://productimpactpod.substack.com/p/the-design-of-ai-in-2026-strategy?utm_source=leslie_sultani">The Design of AI in 2026: Strategy, Power Shifts, and the Cost of Pretending You Understand AI</a> </strong>&#8212; Arpy Dragffy Guerrero &amp; Brittany Hobbs, Product Impact Pod. Synthesis of 49 podcast episodes on AI and design strategy. The central observation: the gap in 2026 is between organizations that restructured how they create value and those that simply layered AI onto existing products. Includes a warning worth sitting with: organizations are &#8220;deleting the pipeline that produces senior judgment&#8221; in their pursuit of short-term efficiency.</p><p><strong><a href="https://hbr.org/podcast/2026/03/strategy-summit-2026-why-ai-means-radical-change?utm_source=leslie_sultani">Strategy Summit 2026: Why AI Means Radical Change</a> </strong>&#8212; Tsedal Neeley, HBR IdeaCast. Harvard Business School professor Tsedal Neeley presents the &#8220;30% rule&#8221;: every worker needs baseline AI fluency, not just technical teams, and not expertise but baseline capability. AI requires radical organizational change, not just tool adoption. Covers three vectors of AI value and treats fluency as a culture problem requiring minimum technology and change capability thresholds.</p><p><strong><a href="https://hbr.org/podcast/2026/03/strategy-summit-2026-inventive-strategy-and-the-unbossed-organization?utm_source=leslie_sultani">Strategy Summit 2026: Inventive Strategy and the &#8216;Unbossed&#8217; Organization</a> </strong>&#8212; Rita McGrath, HBR IdeaCast. Columbia Business School professor Rita McGrath argues organizational design is increasingly enmeshed with strategy. Hierarchical structures built for mass production are becoming obsolete. Advocates for &#8220;unbossed&#8221; organizations where people experiment freely with AI, using the electricity analogy: you&#8217;d give people tools to experiment with electricity, not create an &#8220;electricity strategy.&#8221;</p><p><strong><a href="https://hbr.org/podcast/2026/03/strategy-summit-2026-why-ai-transformation-needs-a-human-touch?utm_source=leslie_sultani">Strategy Summit 2026: Why AI Transformation Needs a Human Touch</a> &#8212; </strong>Nigel Vaz, HBR IdeaCast. Publicis Sapient CEO Nigel Vaz on why enterprise AI initiatives fail: incentives, talent strategies, and trust aren&#8217;t factored in. The argument: the very processes organizations optimized for past success are what limit their ability to get value from AI, particularly around people, context, and how goals are set.</p><p><strong><a href="https://fortune.com/2026/03/31/ai-worker-led-innovation-org-charts-aneesh-raman/?utm_source=leslie_sultani">Is the Org Chart Dead in the Age of AI?</a> </strong>&#8212; Fortune. LinkedIn&#8217;s Aneesh Raman argues org charts are holding back innovation and that companies need worker-led AI experimentation cutting across departments. LinkedIn replaced its Associate PM program with an &#8220;Associate Product Builder&#8221; role that merges coding, design, and PM skills. 
A concrete case study of role boundaries dissolving in practice at scale.</p><p><strong><a href="https://fortune.com/2026/03/29/ai-workforce-human-design-gap-doomsday-deloitte-wharton-harvard/?utm_source=leslie_sultani">Top Leadership Experts Sound the Alarm: Bosses Are Choosing Tech Over People</a> </strong>&#8212; Fortune. Deloitte data: 93% of AI budgets go to IT, only 7% to designing how humans and AI work together. Lara Abrash, Chair of Deloitte U.S., says that ratio &#8220;is not the right level of effort.&#8221; Harvard&#8217;s Linda Hill argues AI demands a whole new style of leadership. Strong data point for making the case that the human side of AI transformation is structurally underfunded.</p><p><strong><a href="https://hbr.org/2026/03/the-last-mile-problem-slowing-ai-transformation?utm_source=leslie_sultani">The Last Mile Problem Slowing AI Transformation</a> &#8212; </strong>Karim R. Lakhani, Jared Spataro &amp; Jen Stave, Harvard Business Review. Identifies seven organizational frictions slowing AI ROI: proliferation of pilots, productivity gaps, process debt, identity and tribal knowledge problems, agentic governance, architectural complexity, and the efficiency trap. Frames this as the point where technical capability has to meet organizational design. The seven frictions map directly to challenges design organizations face during AI transformation.</p><p><strong><a href="https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-organization-blog/six-shifts-to-build-the-agentic-organization-of-the-future?utm_source=leslie_sultani">Six Shifts to Build the Agentic Organization of the Future</a></strong> &#8212; McKinsey. Not design-specific, but directly transferable. Maps the organizational shifts needed for AI-native transformation: leaner structures, human-plus-agent teams, new roles. If you&#8217;re a design leader making the case for restructuring, McKinsey&#8217;s framing gives you executive-level language.</p><p><strong><a href="https://www.pigment.com/perspectives-podcast/intercom-des-traynor-how-we-rebuilt-the-company-for-the-ai-era?utm_source=leslie_sultani">Rebuilding Intercom for the AI Era</a></strong> &#8212; Des Traynor, Pigment Perspectives Podcast. Intercom scrapped their roadmap within 72 hours of ChatGPT launching and completely rebuilt their product. Traynor&#8217;s framework of &#8220;delay versus dilution&#8221; names the two failure modes companies fall into during a pivot. A masterclass in strategic agility.</p><p><strong><a href="https://jakobnielsenphd.substack.com?utm_source=navbar&amp;utm_medium=web">The Capability Maturity Model for AI in Design</a></strong> &#8212; Jakob Nielsen, UX Tigers. Six levels of AI design maturity, from basic tool use to autonomous UI generation. Level 6 envisions designers as architects of systems, defining constraints and guardrails while AI generates interfaces. A useful diagnostic for figuring out where your organization actually stands.</p><p><strong><a href="https://youtu.be/c_w0LaFahxk?si=R1kOsdtHn_T6gzDp&amp;utm_source=leslie_sultani">From Managing People to Managing AI</a></strong> &#8212; Julie Zhuo, Lenny&#8217;s Podcast. Zhuo argues the three core skills of managing people translate directly to managing AI agents.
Key listening for any design leader thinking about what their role becomes as AI takes on more of the team&#8217;s execution.</p><div><hr></div><h3>Applying This in Practice</h3><p>[<em>Coming soon</em>] The sources above are mostly about what&#8217;s changing and why. This section will focus on the what-to-do-about-it: practical resources for IC designers and design leaders looking to apply AI-native thinking in their day-to-day work.</p><div><hr></div><h2>How I Use This List</h2><p>I don&#8217;t expect anyone to read all of this. If you&#8217;re a design leader trying to figure out where to start, here&#8217;s how I&#8217;d prioritize:</p><p><strong>For the big picture</strong>, read the NNg State of UX 2026 and the State of AI in Design 2025. Those two together cover the landscape.</p><p><strong>If your immediate problem is design systems</strong>, start with the Storybook MCP piece and the Morales Achiardi benchmarks. They&#8217;ll give you the evidence you need to make the investment case.</p><p><strong>If your immediate problem is team adoption</strong>, read the Shopify piece from First Round and the MIT Technology Review research on psychological safety. Culture first, tools second.</p><p><strong>If you&#8217;re trying to articulate why design matters more (not less) in this moment</strong>, the NNg Future-Proof Designer piece and Mehta&#8217;s craft-to-judgment argument are the strongest ammunition I&#8217;ve found.</p><p>This list will grow as the series continues. I&#8217;ll update it with each new article.</p><div><hr></div><p><em>Leslie Sultani is a design leader writing about the intersection of AI, design practice, and organizational change. Former CPO, UX engineer, and founder of a FinTech AI platform. The AI-Native Design Series is published on <a href="https://lesliesultani.substack.com/">Substack</a>, <a href="https://www.linkedin.com/in/lesliesultani/">LinkedIn</a>, and <a href="https://medium.com/@lesliesultani">Medium</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[03.
The Framework for Knowing Where Human Judgment Still Lives]]></title><description><![CDATA[One of the hardest things about working with AI in a design practice isn&#8217;t the tools.]]></description><link>https://www.ainativedesignseries.com/p/the-framework-for-knowing-where-human</link><guid isPermaLink="false">https://www.ainativedesignseries.com/p/the-framework-for-knowing-where-human</guid><dc:creator><![CDATA[Leslie Sultani]]></dc:creator><pubDate>Tue, 31 Mar 2026 04:29:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NgvL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8031ac18-f9db-4a21-bc54-095945c417f9_1184x672.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!NgvL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8031ac18-f9db-4a21-bc54-095945c417f9_1184x672.jpeg" width="1184" height="672" alt=""></figure></div><div><hr></div><p>One of the hardest things about working with AI in a design practice isn&#8217;t the tools. It&#8217;s the decisions that don&#8217;t come with instructions.</p><p>Should a designer review every screen an AI generates, or just the ones that feel risky? Who signs off when AI suggests a pattern the design system doesn&#8217;t cover yet? When a product team wants to move fast and let AI handle a component, how do you know if that&#8217;s fine or if it&#8217;s the kind of shortcut that surfaces three months later as a trust problem?</p><p>Most teams wrestle with these questions ad hoc. Someone pushes back, or nobody does, and over time the team develops an informal sense of where AI is trusted and where it isn&#8217;t. The problem with informal is that it&#8217;s inconsistent. Different designers make different calls. Different product managers have different risk tolerances. And the accumulated effect of hundreds of small decisions made without a shared framework is a design practice that doesn&#8217;t actually know what it believes about AI.</p><p>There&#8217;s a better starting point. Two questions, asked about any piece of work, sort out most of it.</p><div><hr></div><h2>The Two Variables That Actually Matter</h2><p>The first question: how high are the stakes if this goes wrong?</p><p>Stakes here means the real cost of a bad outcome. Not just a design critique.
Not &#8220;the PM won&#8217;t like it.&#8221; Stakes means: could this make users feel like you&#8217;re lying to them? Could it expose the company to legal or regulatory risk? Could it put a dent in the brand that takes years to fix? Could it fail someone who&#8217;s already frustrated and just needs the &#8220;Cancel&#8221; button to work?</p><p>High-stakes work includes anything involving how users make important decisions, anything that touches personal data or sensitive information, anything where the experience of failure is significant for the person going through it.</p><p>Low-stakes work includes visual exploration, copy variations, early component builds, annotation, first-draft documentation. The cost of AI getting it wrong is iteration time, not user harm.</p><p>The second question: how novel is this problem?</p><p>Novelty means how much established pattern or precedent exists for what you&#8217;re designing. A standard notification component for a well-understood interaction is low novelty. A new feature that asks users to trust AI with a consequential decision is high novelty. There&#8217;s no precedent in your system, no established pattern to reference, no historical data about how your users will respond.</p><p>When you map these two variables against each other, four zones emerge, and each one calls for a different relationship between human judgment and AI.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!tFH2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9f54f77-04e3-4b0b-b26b-c3d7f21a5ef4_2400x1800.png" width="1456" height="1092" alt=""></figure></div><div><hr></div><h2>The Four Zones</h2><p><strong>High stakes, high novelty.</strong> This is where human judgment is not optional. New product directions. Features that touch how users understand what AI is doing on their behalf. Experiences that carry legal or ethical exposure. Trust-critical moments where the wrong design decision isn&#8217;t just a UX failure but a breach.
AI can do useful work here, mostly pulling together research and generating options for the team to react to. But a human decides, and that decision is documented and owned.</p><p><strong>High stakes, low novelty.</strong> This is the zone that looks clean on paper and gets messy in practice. The pattern exists and the interaction is well understood, but the consequences of getting it wrong are real. In theory, AI handles execution and a human gatekeeps the output before it ships. Not a quick glance. A real review, with the stakes explicitly in mind.</p><p>In practice, this is where most teams trip up. I&#8217;ve watched it happen. The work feels like a slog, so the review becomes a slog. Someone skims the output, it looks close enough, and it ships. The problem is that &#8220;close enough&#8221; in a high-stakes context is how you end up with an accessibility failure in a payment flow or a disclosure screen that technically meets the legal requirement but confuses every real person who reads it. The pattern being familiar is exactly what makes the risk easy to underestimate. Teams that get this zone right treat the review as a deliberate act, not a formality.</p><p><strong>Low stakes, high novelty.</strong> This is where you let AI throw things at the wall. The problem is new enough that exploring widely is valuable, and the consequences of a wrong direction are low enough that you can afford to generate, react, and toss quickly. A designer&#8217;s job here is picking what&#8217;s actually worth keeping, and being able to say why. That judgment is still human. The generation is not.</p><p><strong>Low stakes, low novelty.</strong> Let AI do it. Established patterns, understood interactions, low consequence if the first draft isn&#8217;t perfect. Offloading this work to AI and protecting human attention for the other three zones is how a design team actually scales without sacrificing quality where quality matters.</p><div><hr></div><h2>Four Checkpoints That Don&#8217;t Move</h2><p>The zones sort out most decisions. But underneath them, there are four moments in any design process where you can&#8217;t hand the wheel to AI, no matter how low-stakes or routine the work seems.</p><p>The first is framing the problem. AI is good at answering questions. It is not good at deciding which question is worth asking. The moment where a team steps back and says &#8220;are we solving the right thing&#8221; is a human moment. Not because AI can&#8217;t generate options, but because the answer depends on things that aren&#8217;t in the prompt: what the organization actually needs right now, what users have been struggling with, what the team already tried and abandoned.</p><p>The second is evaluating fit for real humans. AI doesn&#8217;t know your users. It knows patterns across training data. The designer who has sat in research sessions, who has watched people struggle with a flow, who carries the memory of actual human reactions: that designer notices things in an AI-generated interface that a generalized model cannot. Whether something actually works for the specific humans who will use this specific product, that&#8217;s a human judgment call.</p><p>The third is making ethical and trust calls. Anything that touches how users feel about the product&#8217;s honesty, anything that involves tradeoffs between business goals and user interests, anything where the design encodes a value: those decisions need a person behind them. 
Not because AI will always get them wrong, but because someone has to be willing to stand behind the call. That&#8217;s not a job you can delegate to a model.</p><p>The fourth is communicating decisions to stakeholders. The &#8220;why&#8221; behind a design choice requires someone who understands what the room actually cares about, not just what was decided but which tradeoffs were weighed and why the team landed where it did. AI can draft the summary. The judgment call about what matters to the people listening, and how to say it so they can accept it, is human.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!nOPt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72a92809-468c-4582-9b9b-e488b4ec2fce_2400x1240.png" width="1456" height="752" alt=""></figure></div><div><hr></div><h2>What to Do With This</h2><p>Take your team&#8217;s current work and map it against these zones. Not every project. Pick three or four recurring work types and ask: where do we consistently treat this as low-stakes when it&#8217;s actually high-stakes? Where are we burning human review on low-novelty execution work when we could be moving faster?</p>
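<p>If it helps to make that mapping concrete, here is one way to write the two questions down, sketched in TypeScript. The zone summaries and example work types below are placeholders, not prescriptions; the two inputs are still human judgment calls, and only the zone lookup is mechanical.</p><pre><code>// A sketch of the stakes-by-novelty mapping. The two inputs are human
// judgments; only the zone lookup is mechanical.

type Level = "low" | "high";

interface WorkItem {
  name: string;
  stakes: Level;   // could this harm users, the brand, or legal standing?
  novelty: Level;  // does an established pattern or precedent exist?
}

function zone(item: WorkItem): string {
  if (item.stakes === "high") {
    return item.novelty === "high"
      ? "Human decides; AI gathers research and generates options"
      : "AI executes; a human gatekeeps with a real review";
  }
  return item.novelty === "high"
    ? "AI explores widely; a human picks what to keep and says why"
    : "Let AI do it; protect human attention for the other zones";
}

// Example: a few recurring work types a team might map first.
const workTypes: WorkItem[] = [
  { name: "Disclosure screen for an AI decision", stakes: "high", novelty: "high" },
  { name: "Payment flow error states", stakes: "high", novelty: "low" },
  { name: "Visual exploration for a new surface", stakes: "low", novelty: "high" },
  { name: "Standard notification component", stakes: "low", novelty: "low" },
];

workTypes.forEach((item) => console.log(item.name, "->", zone(item)));
</code></pre>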
<p>The distribution you find will tell you something about where your practice is over-indexed on human involvement and where it&#8217;s under-indexed. Both cost you. The teams that get AI-native right aren&#8217;t the ones that use AI for everything. They&#8217;re the ones that know where it belongs and keep it out of where it doesn&#8217;t.</p><p>I don&#8217;t think most teams will get this right on the first pass. The zones will bleed into each other, and someone will argue that everything is high-stakes, or nothing is. But even a rough version of this framework is better than what most teams are running on now, which is gut feel and whoever pushes back the loudest. Most of those decisions still don&#8217;t come with instructions. At least now they can come with a starting point.</p><div><hr></div><p><em>Leslie Sultani is a design leader writing about the intersection of AI, design practice, and organizational change.</em></p><div><hr></div><h2>Further Reading</h2><ul><li><p><a href="https://www.stateofaidesign.com/">State of AI in Design 2025</a> &#8212; Foundation Capital &amp; Designer Fund. Data on where designers are actually using AI and where they&#8217;re not, which maps closely to the high-stakes/low-novelty zones in this framework.</p></li><li><p><a href="https://www.uxmatters.com/mt/archives/2025/06/the-evolution-of-ux-design-in-the-age-of-ai-platformsfrom-creator-to-choreographer.php">The Evolution of UX Design in the Age of AI Platforms: From Creator to Choreographer</a> &#8212; Ken Olewiler, UXmatters. The case for why curation and judgment are the skills that survive automation.</p></li><li><p><a href="https://www.figma.com/blog/skills-for-the-ai-era/">5 Design Skills to Sharpen in the AI Era</a> &#8212; Figma. Research-backed breakdown of which design skills compound in value as AI handles more execution.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[01. What “AI-Native Design” Actually Means (And Why Most Teams Are Getting It Wrong)]]></title><description><![CDATA[Most design teams will tell you they&#8217;re doing AI-native design.]]></description><link>https://www.ainativedesignseries.com/p/what-ai-native-design-actually-means</link><guid isPermaLink="false">https://www.ainativedesignseries.com/p/what-ai-native-design-actually-means</guid><dc:creator><![CDATA[Leslie Sultani]]></dc:creator><pubDate>Thu, 26 Mar 2026 19:30:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!trOE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4834a5fa-afc7-4cce-8468-a39586151eaa_1184x666.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!trOE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4834a5fa-afc7-4cce-8468-a39586151eaa_1184x666.jpeg" width="1184" height="666" alt=""></figure></div><p>Most design teams will tell you they&#8217;re doing AI-native design. What they usually mean is that their designers have some sort of AI copilot license and someone on the team figured out how to generate wireframes with a prompt.</p><p>That&#8217;s not AI-native. That&#8217;s AI-aware. 
And the distance between those two things is roughly the distance between buying a camera and knowing how to see.</p><p>I started seeing these patterns firsthand. Now I&#8217;m mapping the framework so other organizations can see them too. This article captures the first set of findings from that ongoing work.</p><p>The focus is not on tools, but on the practice itself. How work gets structured. How decisions get made. How teams organize around a fundamentally different way of building. What&#8217;s becoming clear is that most organizations are solving for the wrong problem. The ones that identify the right problem first will gain a significant competitive advantage.</p><p>Here&#8217;s the distinction that clarified everything for me.</p><h2><strong>Three Stages, One Question</strong></h2><p>Design organizations are currently at one of three stages in their relationship with AI.</p><p>The first stage is <em>AI-aware</em>: the team knows the tools exist, and some individuals use them on their own. There&#8217;s no shared organizational posture. The gains are individual and hit or miss.</p><p>The second stage is <em>AI-augmented</em>: the team has identified specific phases of work where AI can accelerate output, things like research synthesis, visual exploration, and first-draft copy. The gains are more consistent, but they are gains in speed, not capability. The underlying practice has not changed; the organization is simply running the same race faster. This is exactly where I found myself, and it became the catalyst for my research into what lies beyond just &#8216;faster&#8217;.</p><p>The third stage, <em>AI-native</em>, is something different. It means the organization has rebuilt around a different question entirely: <em>where does human judgment create irreplaceable value (human-led value), and where is AI faster, more consistent, and better (AI-led execution)?</em></p><p>This isn&#8217;t a question about tools. It&#8217;s a structural question about the organization. 
And answering it honestly takes more courage than most teams are prepared for, because the honest answer often reveals that a significant portion of what designers spend their time on falls into the second category of AI-led execution.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!oQOc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca918670-2a17-4478-b8b4-f03ace768d77_1197x1130.png" width="1197" height="1130" alt=""></figure></div>
pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>What Actually Changes</strong></h2><p>When you take this question seriously, five things shift.</p><p><strong>The Design System</strong></p><p>In most organizations, the design system is documentation. A Figma file and a Storybook that engineers sometimes look at. In an AI native organization, the design system becomes the instruction set that governs everything AI produces. It does not just tell human designers what to do. It tells AI coding agents which components to use, what the tokens mean, which patterns are on brand, and which are hallucinated approximations.</p><p>Teams at Atlassian and Figma have already started building this layer. The MCP server that Figma announced at Schema 2025 brings design system context directly into developer workflows so AI agents can generate code that actually reflects the design system, including component names, spacing, and accessibility properties, without manual reinterpretation at every handoff.</p><p>If you build this infrastructure now, you have quality governance at scale. If you do not, AI output slowly erodes product consistency from the inside creating both technical and design debt.</p><p><strong>Research Practice</strong></p><p>Research practice is shifting too. AI does not replace user research, but it does change how the work happens. Synthesis that once took a week can now take hours. The bottleneck moves from processing the research to making sense of it. The real work becomes deciding which patterns actually matter and what they mean for the product. That kind of decision still requires context, empathy, and a deep understanding of the problem space. An AI native research practice is not just about moving faster. It is about using that time to focus on the moments where human judgment matters most.</p><p><strong>Human Judgment</strong></p><p>Another shift is how human judgment is now applied. In a traditional design process, human judgment is present at every stage simply because humans are the ones doing the work at every stage. In an AI native process, AI is doing significant portions of the work. Human judgment does not disappear. It concentrates into specific checkpoints. 
<p><strong>Research Practice</strong></p><p>Research practice is shifting too. AI does not replace user research, but it does change how the work happens. Synthesis that once took a week can now take hours. The bottleneck moves from processing the research to making sense of it. The real work becomes deciding which patterns actually matter and what they mean for the product. That kind of decision still requires context, empathy, and a deep understanding of the problem space. An AI-native research practice is not just about moving faster. It is about using that time to focus on the moments where human judgment matters most.</p><p><strong>Human Judgment</strong></p><p>Another shift is how human judgment is applied. In a traditional design process, human judgment is present at every stage simply because humans are the ones doing the work at every stage. In an AI-native process, AI is doing significant portions of the work. Human judgment does not disappear. It concentrates into specific checkpoints. The design leader&#8217;s job becomes identifying exactly where those moments exist in the workflow and making sure the right people are present for them. In these moments, the goal isn&#8217;t to review AI output passively. The goal is to make real decisions.</p><p><strong>Team Fluency</strong></p><p>Equally important is team fluency. The organizations moving fastest are not the ones that mandated AI training. They are the ones that built cultures of hands-on experimentation, created safe spaces to try and fail, and established internal champions who help others find their first real &#8220;aha&#8221; moment with the tools. Figma ran something called the Great Figma Bake Off, a company-wide competition to build projects with live jam sessions in every time zone. Atlassian trained more than a thousand designers through their AI Product Builders Week. In both cases, the tool was never the point; the point was giving people permission to &#8220;play&#8221;.</p><p><strong>Trust</strong></p><p>And finally there is trust. As AI becomes more embedded in product experiences, designing how users understand what AI is doing, when to trust it, how to correct it, and what it can and cannot do becomes its own discipline: trust architecture. This is not a feature or a UX pattern. It is a fundamental product strategy question that touches retention, conversion, legal exposure, and brand. Design leaders who understand this can speak credibly to a C-suite about AI risk in terms the C-suite actually cares about. Right now, most leaders are still struggling to bridge that gap.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!SGNu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd10ed057-acb5-4349-ab30-28441c9f2a8a_1197x1386.png" width="1197" height="1386" alt=""></figure></div><h2><strong>Why Most Teams Are Getting This Wrong</strong></h2><p>The mistake most organizations make is treating AI-native as an upgrade to the tools layer: buy the right licenses, train the team on prompting, integrate AI into a few workflows, and declare victory.</p><p>What this misses is that the practices and structures underneath the tools haven&#8217;t changed. The handoff process still produces the same bottlenecks. The design system is still documentation that humans wrote for other humans. 
Yes, the tools are faster, but the organization is still running on a pre-AI operating system.</p><p>This is why the companies seeing the strongest results are not necessarily the ones with the most sophisticated AI tools. They are the ones who got the organizational model right first. Then they let the tools serve that model.</p><h2><strong>The Opportunity This Creates</strong></h2><p>If you are a design leader who understands this, you have an advantage most don&#8217;t yet have. Not because you know more about AI than anyone else. Because you understand that this is fundamentally an organizational design problem. And organizational design is a design problem.</p><p>The same instincts that make a great designer are the ones required to build a practice that can use AI well: systems thinking, empathy for the people in the system, and the ability to see what is actually happening instead of what the org chart says is happening. The goal is not just speed. It is better decisions, stronger strategy, and work that scales over time.</p><p>The organizations that treat AI-native design as a practice rather than just another purchase are already setting themselves apart. The gap is no longer just about speed. It is about which teams are still buying cameras and which ones have learned how to see.</p><div><hr></div><p>Leslie Sultani is a design leader writing about AI, design practice, and organizational change.</p><div><hr></div><h2><strong>Further Reading</strong></h2><ul><li><p><a href="https://www.stateofaidesign.com/">State of AI in Design 2025</a> &#8212; Foundation Capital &amp; Designer Fund. Primary research on where design teams are in their AI adoption and where the practice gaps are.</p></li><li><p><a href="https://www.atlassian.com/blog/inside-atlassian/ai-product-builders-week">AI Product Builders Week: How Hands-On Experimentation Is Shaping Atlassian&#8217;s Future</a> &#8212; Atlassian&#8217;s account of their week-long internal program where over a thousand employees built with AI tools together.</p></li><li><p><a href="https://www.figma.com/blog/schema-2025-design-systems-recap/">Schema 2025: Design Systems for a New Era</a> &#8212; Figma&#8217;s recap of Schema 2025, including the MCP server announcement and how design systems are being rebuilt for AI-native workflows.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[00. 
What Does Design Look Like When AI Changes Everything]]></title><description><![CDATA[There&#8217;s a question most designers are sitting with right now that they&#8217;re not saying out loud.]]></description><link>https://www.ainativedesignseries.com/p/what-does-design-look-like-when-ai</link><guid isPermaLink="false">https://www.ainativedesignseries.com/p/what-does-design-look-like-when-ai</guid><dc:creator><![CDATA[Leslie Sultani]]></dc:creator><pubDate>Wed, 25 Mar 2026 15:20:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!YLYG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a23c035-af24-4f9b-b7c6-54eff64371a0_1184x672.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!YLYG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a23c035-af24-4f9b-b7c6-54eff64371a0_1184x672.png" width="1184" height="672" alt=""></figure></div><p>There&#8217;s a question most designers are sitting with right now that they&#8217;re not saying out loud.</p><p>Not &#8220;<em>which AI tools should I learn?</em>&#8221; That one gets asked constantly, loudly, in every design community and conference. The quieter question underneath it is: <em>what happens to me?</em> To my judgment, my craft, my career. To the part of the work that I was told I was good at. When AI can do in seconds what used to take days, what am I actually for?</p><p>I&#8217;ve been thinking about this question for the better part of a year, and I&#8217;ve stopped believing it has a simple answer. But I&#8217;ve started believing there&#8217;s a more honest one.</p><p>When AI changes everything about design, the most human parts of the work become more valuable. Not less.</p><p>The catch is knowing which parts those are.</p><p>I noticed this in my own work recently. An engineer generated five interface variations with an AI tool in under a minute. Two years ago that would have taken a designer most of a day. The interesting part wasn&#8217;t the speed. It was that everyone still turned to the designer in the room to decide which one was actually right. Most teams haven&#8217;t fully realized what that shift means yet.</p><h2><strong>The Execution Trap</strong></h2><p>Most designers, if you ask them to describe their job, will describe it in terms of what they make. Wireframes, flows, components, prototypes. 
The artifacts. Even senior designers often reach for the artifacts first.</p><p>This is understandable. For most of design&#8217;s history as a profession, the making <em>was</em> the expertise. The ability to take an ambiguous problem and translate it into something visible and testable. That was the skill. It required judgment, yes, but it was expressed through execution.</p><p>AI hasn&#8217;t changed the judgment part. It has changed the execution part dramatically.</p><p>This is the distinction that matters. When a designer can generate ten visual explorations in the time it used to take to make one, the value doesn&#8217;t disappear. It relocates. The question is no longer &#8220;<em>can you produce this?</em>&#8221; It&#8217;s &#8220;<em>of these ten, which one is actually right, and why?</em>&#8221;</p><p>That second question requires something AI doesn&#8217;t have. Context. History with the product. Empathy with the specific humans who are going to use it. Knowledge of what the team tried six months ago and why it didn&#8217;t work. Understanding of what the business needs to communicate and why this particular version of the message fits this particular moment.</p><p>Execution got faster. What it really exposed was the judgment.</p><h2><strong>What Gets Left Behind</strong></h2><p>There&#8217;s a version of this shift that&#8217;s uncomfortable to sit with: a significant amount of design work across the industry today falls into the category of execution that AI can accelerate substantially.</p><p>The State of AI in Design report found that 84% of designers are already using AI in exploration, but only 39% in delivery. Which means the early, generative, divergent work is largely being handed off, and the later, convergent, high-stakes work is still human. For now.</p><p>The honest implication: design roles that are primarily execution-focused are going to change faster and more dramatically than roles where judgment is the core product.</p><p>This isn&#8217;t a comfortable thing to write. But it&#8217;s more useful than pretending the shift is only additive, that AI just makes everyone more productive and nothing else changes. Something does change. The question is whether designers see it coming and position accordingly.</p><h2><strong>What Gets More Valuable</strong></h2><p>Ken Olewiler, writing in UXmatters, describes the shift this way: designers are moving from creator to choreographer. From making the work to directing the collaboration between human judgment and AI capability. The most valuable designer on a team is no longer the one who can produce the most. It&#8217;s the one who knows what the right output should look like, can direct the process toward it, and can tell, with confidence, when something is wrong even when it looks right.</p><p>David Kossnick, Figma&#8217;s Head of AI Products, calls these people &#8220;vision carriers.&#8221; The designers who understand where the product needs to go. Who can hold the long arc of the user experience in their head while everyone else is deep in the immediate problem. Who can look at an AI-generated interface and say: this is technically correct and completely wrong for our users.</p><p>This is not a new kind of intelligence. It&#8217;s an old kind of intelligence that the pace of execution used to obscure. When making was slow, the making consumed most of the professional attention. 
When making is fast, everything that was underneath the making becomes visible.</p><p>What&#8217;s underneath is taste, systems thinking, empathy at scale. The ability to read what an organization actually needs versus what it says it wants. To hold a user&#8217;s experience across dozens of touchpoints and notice when something breaks the coherence.</p><p>These are, not coincidentally, exactly the things that are hardest to learn and hardest to teach. They are also the things AI cannot replicate.</p><h2><strong>The Designers Who Will Thrive</strong></h2><p>The designers who will navigate this well are not necessarily the ones with the most technical AI fluency, though that matters. They&#8217;re the ones who can answer a different question clearly.</p><p>The question is: <em>what is the irreplaceable thing I bring to this problem?</em></p><p>Not in general. For this problem. In this organization. With this team and this product and these constraints.</p><p>The designers who can answer that question quickly, specifically, and confidently are the ones who have always been doing the highest-value version of the work. AI hasn&#8217;t changed their value. It&#8217;s just made it more legible.</p><p>For designers who have been hiding behind execution, behind the tools and the deliverables and the process, this moment is a reckoning. The hiding place is going away. What&#8217;s left is the judgment.</p><p>That&#8217;s not a bad thing. It&#8217;s clarifying. It&#8217;s also what sent me into the research that became this article.</p><h2><strong>What This Means Right Now</strong></h2><p>This shift is not coming eventually. It&#8217;s happening now, unevenly, across organizations that are at very different stages of figuring it out. Some teams are already reorganizing around this reality. Many more are still treating AI as a productivity tool and wondering why the results aren&#8217;t compounding.</p><p>The designers who see this clearly have an unusual advantage: they&#8217;re not just more valuable individually. They can help their organizations make the transition well. Because the question of where human judgment is irreplaceable and where AI is faster and better is not just a career question. It&#8217;s an organizational design question. And designers are the people best equipped to answer it.</p><p>This is the part I find genuinely interesting. Design has always been about understanding systems, people, and the messy relationship between them. That&#8217;s exactly what the AI transition requires.</p><p>The profession isn&#8217;t threatened by AI. It&#8217;s being redefined by it, toward the work it was always most suited to do.</p><div><hr></div><p>Leslie Sultani is a design leader writing about AI, design practice, and organizational change.</p><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[02. 
Your Design System Has a New Job: How Atlassian and Figma Are Building for AI]]></title><description><![CDATA[Design systems have always mattered, but they have rarely felt urgent.]]></description><link>https://www.ainativedesignseries.com/p/your-design-system-has-a-new-job</link><guid isPermaLink="false">https://www.ainativedesignseries.com/p/your-design-system-has-a-new-job</guid><dc:creator><![CDATA[Leslie Sultani]]></dc:creator><pubDate>Wed, 25 Mar 2026 02:13:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!iPa3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a1ab259-b33f-4a43-a638-216851b522ef_1184x672.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!iPa3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a1ab259-b33f-4a43-a638-216851b522ef_1184x672.jpeg" width="1184" height="672" alt=""></figure></div><p>Design systems have always mattered, but they have rarely felt urgent. Everyone knows they need one, and that it is worth the effort in the long run. But there&#8217;s always a product deadline, a launch, or something more immediate that has to ship before anyone gets back to the foundational design system that actually keeps everything together.</p><p>That tradeoff used to make sense. It doesn&#8217;t anymore.</p><p>Not because someone finally made the case to prioritize it, but because AI changed everything. If your design system only talks to humans, it&#8217;s only doing half the job. You&#8217;re ignoring its new power users: the AI agents that can spit out more code in a minute than your entire engineering team could write in a month.</p><p>If the AI doesn&#8217;t have your design system to guide it, it&#8217;s just going to make it up. On the surface, everything looks fine. But underneath, it&#8217;s gradually breaking apart everything your team spent years building.</p><h2><strong>What the Design System Actually Does Now</strong></h2><p>For years, design systems were basically just instruction manuals for people. We used them to make sure everyone was using the same buttons and didn&#8217;t mess up the spacing. It was mostly about keeping designers on the same page. 
The audience was designers and the occasional engineers who actually bothered to check what we intended.</p><p>Back then, the whole point was just writing things down. It was about documentation, nothing more.</p><p>Today, the job has changed. AI coding agents look at your existing context to figure out which components to use and how things should actually work. If the design system is available for the agent to find, it uses it. If not? The agent just fills in the blanks based on general training data. It&#8217;s a fast track back to 2013, when every site looked like a generic Bootstrap clone because the tools were all pulling from the same basic bucket. Please, let&#8217;s not go back to that.</p><p>AI doesn&#8217;t know about your product. It doesn&#8217;t know which component you deprecated last quarter, or that your button radius is 6px, not 8px, or that you never use that particular pattern because it failed user testing eighteen months ago. To a stranger, the code it spits out looks fine. But to you, it&#8217;s obviously wrong. It&#8217;s &#8216;off&#8217; in that annoying way that&#8217;s hard to put into words, and it creates a mountain of technical debt that you&#8217;ll be stuck cleaning up later.</p><p>The design system&#8217;s job has changed: not just telling humans what to do, but giving AI enough context to do it better.</p><h2><strong>The Teams Building This Now</strong></h2><p>Atlassian is one of the clearest examples of what it looks like to actually solve this problem rather than talk about solving it.</p><p>When their design system team started watching AI coding tools generate interfaces, they noticed the quality gap immediately. The AI had access to their component library, technically. But access isn&#8217;t context. It could see that a component called Button existed. It couldn&#8217;t understand the full design intent behind it: when to use the primary variant versus the subtle one, what patterns the design system team had deliberately excluded, what accessibility requirements were baked into the component&#8217;s behavior.</p><p>Their solution was to build what they call &#8220;agentic content&#8221;: structured documentation written specifically for AI to read and reason from, not for humans to skim. Not a README. Instructions, with enough specificity that an AI agent could make the right decision without escalating to a designer.</p>
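<p>What does that look like? Here is a sketch of the idea in TypeScript, with hypothetical rules and wording; it illustrates the shape, not Atlassian&#8217;s actual format. One agentic entry for a Button component, written to be reasoned from rather than skimmed:</p><pre><code>// Hypothetical "agentic content" for one component. The wording is the
// documentation; the structure just makes it addressable by an agent.

const buttonForAgents = {
  component: "Button",
  decideVariant: [
    { when: "the single most important action on the screen", use: "primary" },
    { when: "secondary actions alongside a primary", use: "subtle" },
    { when: "destructive actions like delete or remove", use: "danger" },
  ],
  excluded: [
    "Do not build custom buttons from div elements; the component carries required keyboard and focus behavior.",
    "Do not place two primary buttons in one view.",
  ],
  accessibility: [
    "The label must describe the action, not the mechanism ('Save changes', not 'Click here').",
  ],
  escalate: "If none of the decideVariant rules match, stop and ask a designer.",
};
</code></pre><p>Note the last field. Documentation for humans can afford ambiguity, because humans ask. Documentation for agents has to say explicitly when to stop guessing.</p>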
<p>What came out of that work was fewer hallucinated components, fewer design review cycles spent catching AI-generated inconsistencies, and engineers who could move faster without constantly checking in with design.</p><p>Figma is tackling this from the platform side. Their MCP server, announced at Schema 2025, brings design file context directly into developer environments. The design system stops being a static document that someone has to translate; it becomes something the AI can read directly.</p><h2><strong>Three Things That Have to Change</strong></h2><p>Building this layer isn&#8217;t just a technical project. It requires a different way of thinking about what the design system is for.</p><p>The first is writing documentation for AI, not just for humans. This means going back through your component library and asking a different question of each entry: <em>if an AI agent only had this documentation to work from, would it make the right decision?</em> Not a reasonable decision; the right one. The one your design team would make. Most design systems fail this test badly, not because the documentation is poor, but because it was never written with this reader in mind.</p><p>Docs alone won&#8217;t get you there, though. You also need a shared prompt library: a set of pre-built prompts that designers and engineers can use to interact with your design system through AI. These aren&#8217;t general prompts; they&#8217;re specific to your product, your patterns, and your conventions. &#8220;Create a notification component following our design system guidelines&#8221; is a different prompt from &#8220;create a notification component.&#8221; The first gets you something reviewable. The second gets you a guess. A prompt library means nobody has to reinvent the wheel every time they want the AI to behave. It makes the interaction repeatable and teachable, which saves you from fixing the same mistakes every other day.</p>
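<p>A shared prompt library can be as simple as a reviewed, versioned module that everyone&#8217;s tooling imports. The prompts below are invented examples, not a recommended set; what matters is that they&#8217;re named, shared, and encode your conventions exactly once.</p><pre><code>// Hypothetical shared prompt library: small, named templates that bake
// design system context in so individuals don't have to remember it.
export const prompts = {
  newComponent: (name: string) =>
    [
      `Create a ${name} component.`,
      "Use only components and tokens from our design system package.",
      "All radii, colors, and spacing come from tokens. Never hardcode values.",
      "If no existing component fits, stop and ask instead of inventing one.",
    ].join("\n"),

  reviewForDrift: (diff: string) =>
    "Review this diff for design system violations (raw hex values, " +
    "non-token spacing, deprecated components):\n\n" + diff,
};

// Usage: prompts.newComponent("notification banner") instead of an ad hoc
// one-liner, so every request carries the same constraints.
</code></pre>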
<p>But prompts still depend on a human remembering to use them. The harder problem is the technical bridge that gives AI direct, automatic access to your design system&#8217;s context, without anyone having to paste a prompt or point the agent at the right doc. That&#8217;s what Atlassian built. The technical side will look different depending on your tools, but the core idea doesn&#8217;t change: put the design system in the room where the code actually gets generated, rather than having it show up after the fact in a design review.</p><h2><strong>The Strategic Shift This Creates</strong></h2><p>Look, you could frame this as a technical project. Build some tools, clean up the docs, the AI works better. Fine.</p><p>But here&#8217;s what&#8217;s actually happening. The design team that owns this infrastructure is writing the rules that every AI coding tool in the organization has to follow. Not guidelines that engineers might read. Rules that get enforced automatically, every time code gets generated.</p><p>Think about what that means. Engineering teams running AI tools are operating inside the design system&#8217;s constraints whether they realize it or not. Product managers who only care about shipping speed are getting outputs shaped by decisions the design team already made. For years, designers had to fight for a seat at the table, beg people to follow the specs, chase down inconsistencies after they&#8217;d shipped. This is different. The influence moves upstream: baked into the generation layer, not applied after the damage is done.</p><p>This is the reframe that took me a while to see clearly. Design systems have always been how we help our teams do more with less. That was true when the audience was fifty designers who needed to stay in sync. Once every AI tool in the company starts pulling from your design system, you&#8217;re not helping fifty people stay consistent. You&#8217;re shaping every piece of UI the organization produces.</p><h2><strong>The Window Is Open Right Now</strong></h2><p>Not every team is going to see this coming. Building this now isn&#8217;t just about being &#8220;first&#8221;; it&#8217;s about writing the AI playbook for your company before a generic model writes a bad one for you.</p><p>Designers already know how to build playbooks. We&#8217;ve been doing it for years to keep humans in sync. Instead of a PDF that a human might glance at once, you&#8217;re building a living set of rules that an AI follows at scale, automatically, every single time an engineer hits &#8216;generate.&#8217; And unlike your human colleagues, the AI will actually read it.</p><p>Remember the tradeoff at the top of this article? The one where the design system kept getting pushed to the bottom of the list because there was always something more urgent to ship? That tradeoff used to make sense because the cost accrued slowly. A little drift here, a few inconsistencies there. You could catch up later. Now AI is generating that drift faster than any team can clean up after it. The tradeoff doesn&#8217;t work anymore. It hasn&#8217;t for a while.</p><div><hr></div><p><em>Leslie Sultani is a design leader writing about AI, design practice, and organizational change.</em></p><div><hr></div><h2><strong>Further Reading</strong></h2><ul><li><p><a href="https://www.atlassian.com/blog/design/turning-handoffs-into-handshakes-integrating-design-systems-for-ai-prototyping-at-scale">Turning Handoffs into Handshakes: Integrating Design Systems for AI Prototyping at Scale</a> &#8212; Lewis-Ethan Healey &amp; Kylor Hall, Atlassian. The source behind the Atlassian case study in this article: how their design system team built agentic content and MCP infrastructure for AI-generated code.</p></li><li><p><a href="https://www.figma.com/blog/design-systems-ai-mcp/">Design Systems and AI: Why MCP Servers Are the Unlock</a> &#8212; Ana Boyer, Figma. The case for MCP servers as the technical bridge that makes design system context available to AI coding agents at generation time.</p></li><li><p><a href="https://www.figma.com/blog/schema-2025-design-systems-recap/">Schema 2025: Design Systems for a New Era</a> &#8212; Figma&#8217;s recap of Schema 2025, including the Dev Mode MCP server announcement.</p></li><li><p><a href="https://www.atlassian.com/blog/design/designers-workflow-for-shipping-code">Designers&#8217; Workflow for Shipping Code</a> &#8212; Eduardo Sonnino, Atlassian. A practitioner&#8217;s account of how the designer-to-engineer workflow changes when AI is generating the code.</p></li><li><p><a href="https://storybook.js.org/blog/storybook-mcp-for-react/">Storybook MCP for React</a> &#8212; Kyle Gach, Storybook. Storybook&#8217;s MCP server gives AI agents direct access to component metadata, usage patterns, and test guardrails: the same bridge Atlassian and Figma built, from the component documentation side.</p></li></ul>]]></content:encoded></item></channel></rss>