My Operating Model

How I Lead Design Systems Work Now

A three-tier methodology where human judgment sets the strategy, AI accelerates production, and automated pipelines handle governance. Built on what produced a 75% reduction in development questions and 98% adoption at Intel, and meaningfully faster now than what was possible then.

New here? Read the original Documentation Framework case study to see the results this approach was built on.

Visual component narrative documentation showing design intent and technical specifications
Based On: Intel.com Documentation Framework
Original Role: AEM Design Systems Documentation Lead
Proposed Automation: Claude Code · Figma Console MCP
AI Tools: Claude · ChatGPT · Figma AI · Notion AI · Condens

The Starting Point

This is how I lead design systems work now. Not AI-assisted in the vague sense. A specific three-tier methodology where every phase has a defined role for human judgment, AI acceleration, and automated pipelines. I direct the work. The tools execute more of it.

The methodology is grounded in something real. The Intel.com Documentation Framework I built without AI produced a 75% reduction in development questions, 98% team adoption, and a 5% rework rate across 80+ documented page patterns. That foundation taught me precisely where human judgment is irreplaceable, and where it was being spent on work that shouldn’t require it.

What this page documents is the distinction between two genuinely different capabilities. Acceleration means faster drafts, broader coverage, better options evaluated before committing. Automation means the output runs as a pipeline: triggered, executed, and delivered without manual effort in between. Knowing which is which, and when each applies, is the methodology.

A note on the two-layer system this methodology addresses. The design system was built in two connected layers, both requiring documentation.

Atomic components are the building blocks: buttons, cards, media elements, headings, and heading groups, each with their own variations. These are the reusable elements developers reference and implement directly.

Page patterns are the 80+ full-section designs assembled from those atomic components: layouts like Hero sections, FAQ accordions, Call-to-Action bars, Product Tables, and Marketing Card grids. These are the section-level units used to assemble complete pages inside AEM.

The documentation challenge wasn’t just volume across both layers. It was the dependency chain between them: a change to an atomic component ripples into every page pattern that uses it. Keeping both layers manually aligned was where the framework strained most under scale.
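That blast-radius question is easy to picture as a reverse index from atomic components to the patterns that compose them. A minimal Python sketch, using hypothetical component and pattern names rather than real system data:

```python
# Illustrative sketch: a reverse index from atomic components to the
# page patterns that compose them. All names are hypothetical.
patterns = {
    "Hero": ["button", "heading", "media"],
    "FAQ Accordion": ["heading", "card"],
    "CTA Bar": ["button", "heading"],
}

# Invert the composition map: component -> patterns that use it.
used_by = {}
for pattern, components in patterns.items():
    for component in components:
        used_by.setdefault(component, []).append(pattern)

def blast_radius(component):
    """Every pattern whose spec is at risk when this component changes."""
    return sorted(used_by.get(component, []))

print(blast_radius("button"))  # ['CTA Bar', 'Hero']
```

Maintaining that inverted map by hand across 80+ patterns is exactly the work that strained under scale.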

~40%: Estimated time saved on research synthesis and gap analysis
2 layers: Atomic components composing into 80+ page patterns, both layers now seeded from Figma data automatically
100%: Of strategic decisions, stakeholder relationships, and final judgment still mine

A Three-Tier System

Not all AI involvement is equal. This methodology operates across three distinct tiers, and the distinction matters both for how work gets done and for how it gets communicated to collaborators.

Tier 1

Human-Led

Stakeholder relationships, judgment calls, final decisions, design intent, and anything requiring empathy or platform-specific knowledge that AI doesn’t have.

Tier 2

AI-Accelerated

Synthesis, first drafts, alternatives exploration, scenario stress-testing, and coverage expansion. Faster output with human review at every step.

Tier 3

Automated Pipeline

Claude Code + Figma Console MCP run as a connected system: reading live Figma data across both layers, auditing components and patterns, tracing dependencies, generating structured outputs, and flagging drift without manual triggering.

The principle that doesn’t change: Automation handles the mechanical work that slows expert thinking down. It doesn’t replace the designer’s judgment, the developer’s technical context, or the relationship work that drives adoption. Those remain entirely human.


The Automation Pipeline

Claude Code and Figma Console MCP work as a connected system across both documentation layers. Figma MCP reads the live file: atomic component structures and their variations, page pattern layouts, content zone configurations, and the compositional relationships between them. Claude Code acts on that data: auditing both layers, mapping which atomic components appear in which patterns, generating structured spec drafts, and flagging cross-layer drift when a component change affects patterns downstream.

End-to-End Documentation Pipeline

Claude Code + Figma Console MCP, across both layers

Read Both Layers

Figma MCP extracts atomic components with variations, page pattern structures, and compositional relationships between them

Audit & Map Dependencies

Claude Code audits both layers and maps which atomic components appear in which patterns, the dependency chain that manual docs couldn’t track

Generate Both Spec Types

Atomic component specs with variation tables and page pattern specs seeded from real Figma data, not written from scratch

Monitor Cross-Layer Drift

When an atomic component changes, every pattern that uses it is flagged automatically, a governance capability that was impossible to maintain manually at scale
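The four steps above can be sketched as a single control flow. This is an illustrative Python stub, not the real Claude Code or Figma Console MCP interfaces; each stage is a plain function operating on hypothetical snapshot data:

```python
# Illustrative pipeline sketch. The figma_mcp / claude_code calls named in
# the text are hypothetical; each stage is stubbed here so the control flow
# of the four steps is visible end to end.
def read_both_layers(file_snapshot):
    """Step 1: extract atomic components and page patterns from a snapshot."""
    return file_snapshot["atoms"], file_snapshot["patterns"]

def map_dependencies(atoms, patterns):
    """Step 2: which atomic components appear in which patterns."""
    return {a: [p for p, used in patterns.items() if a in used] for a in atoms}

def generate_specs(atoms, patterns):
    """Step 3: seed one spec draft per item in each layer."""
    return {name: f"spec draft for {name}" for name in list(atoms) + list(patterns)}

def flag_drift(changed, dependency_map):
    """Step 4: patterns whose docs are stale after a component change."""
    return {c: dependency_map.get(c, []) for c in changed}

snapshot = {"atoms": ["button", "card"],
            "patterns": {"Hero": ["button"], "FAQ": ["card", "button"]}}
atoms, patterns = read_both_layers(snapshot)
deps = map_dependencies(atoms, patterns)
specs = generate_specs(atoms, patterns)
print(flag_drift(["button"], deps))  # {'button': ['Hero', 'FAQ']}
```

The design point is that step 2's dependency map is built once and reused by step 4, so drift detection never depends on anyone remembering the composition by hand.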


The Methodology Across Every Phase

Every phase of a design systems project has a defined role for each tier. The table below maps where human judgment leads, where AI accelerates, and where automation takes over, across the full arc from research through governance. The three phases that follow go deeper, because they’re where all three tiers converge and where the methodology has the most to show.

Phase | I Own | AI Handles
--- | --- | ---
Research & Discovery | Stakeholder interviews, workflow observation, root cause judgment | Synthesis, affinity clustering, gap analysis matrix, findings summary
Personas & User Modeling | Validation, tension identification, trade-off decisions | Draft profiles, usage scenarios, edge case stress-testing
Ideation & Exploration | Final format decision, adoption likelihood, editorial standard | Format options with trade-offs, comparison matrix, stakeholder rationale
Testing & Validation | Pre-dev reviews, ambiguous feedback interpretation, iteration priority | Feedback synthesis, contradiction clustering, iteration recommendations
Framework Design | IA decisions, AEM/Bootstrap constraints, toolchain fit | Structure drafts, variation checklists, template skeletons, plus automated inventory and dependency mapping
Spec Writing & Handoff | Variation rationale, narrative, platform accuracy review | Variation tables, usage rules, QA criteria, plus automated spec generation from live Figma data
Living Docs & Governance | Change significance, stakeholder communication, versioning decisions | Update summaries, release notes, onboarding summaries, plus automated drift detection and governance reports

The last three phases are documented in full below. Framework Design, Spec Writing, and Governance are where all three tiers interact most visibly, and where the gap between manual process and automated pipeline is widest. Each one shows exactly what changes when the methodology is applied.


Framework Design & System Auditing

Original output: lifecycle-structured modular documentation covering kickoff → design → dev → QA → launch

Tiers 1, 2 & 3: Human-Led, AI-Accelerated, and Automation-Ready

What actually happened

Before the framework could be designed, someone had to inventory what actually existed across both layers of the system. Which atomic components had been built? Which variations existed for each? Which page patterns were using which components, and were they using them consistently? Which patterns were missing documentation entirely? That audit was done manually across both layers, and it was one of the most time-consuming parts of the early work. There was also no reliable map of which atomic components appeared in which patterns, meaning there was no way to quickly assess the blast radius when a component changed.

I would still own

  • Design the IA to serve both layers and the relationship between them
  • Decide where atomic component docs should stand alone vs. be embedded in pattern docs
  • Navigate AEM and Bootstrap constraints that AI doesn’t have knowledge of
  • Ensure the structure works within the existing toolchain

AI accelerates

  • Draft the initial IA structure covering both documentation layers
  • Generate variation checklists for each atomic component type
  • Identify structural gaps across both spec types
  • Produce template skeletons for both spec types, ready for real content
Automation Opportunity: Figma Console MCP + Claude Code
1. figma_mcp.get_system_inventory()

Figma Console MCP reads the live file and extracts both layers simultaneously: the full atomic component library (buttons, cards, media, headings, heading groups) with all their variations, and the full page pattern inventory (Heroes, FAQs, CTAs, Product Tables, Marketing Cards) with their content zone configurations.

2. claude_code.map_dependencies(atoms, patterns)

Claude Code builds a dependency map: which atomic components appear in which page patterns, how many patterns use each component, and which component variations are referenced where. This cross-layer map is what makes downstream governance possible, and what was impossible to maintain manually.

3. claude_code.audit_both_layers(inventory, checklists)

Coverage audits run against both layers: which atomic components are missing variation documentation, which page patterns lack content zone specs, which shared tokens are implemented inconsistently across either layer. One gaps report covers the entire system.

What this changes: A manual audit that required working through both layers separately, with no reliable way to track which atomic components appeared where, now runs in minutes and produces a dependency map that didn’t previously exist. Framework design decisions are grounded in actual system data before the first doc is written.
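As a rough sketch of what such a gaps report could look like, here is an illustrative Python version operating on hypothetical inventory data; a real run would be seeded by the Figma extraction described above rather than a hand-written dict:

```python
# Illustrative coverage audit across both layers. The inventory below is
# hypothetical example data, standing in for an automated Figma extraction.
inventory = {
    "atoms":    {"button": {"documented_variants": 4, "variants": 6},
                 "card":   {"documented_variants": 3, "variants": 3}},
    "patterns": {"Hero": {"has_zone_spec": True},
                 "FAQ":  {"has_zone_spec": False}},
}

def gaps_report(inventory):
    """One report covering both layers: undocumented variants and missing zone specs."""
    atom_gaps = {name: a["variants"] - a["documented_variants"]
                 for name, a in inventory["atoms"].items()
                 if a["documented_variants"] < a["variants"]}
    pattern_gaps = [name for name, p in inventory["patterns"].items()
                    if not p["has_zone_spec"]]
    return {"undocumented_variants": atom_gaps,
            "patterns_missing_zone_specs": pattern_gaps}

print(gaps_report(inventory))
```

One pass, one report, both layers: the audit that previously meant working through each layer separately by hand.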
Claude · Figma AI · Notion AI · Figma Console MCP · Claude Code

Spec Writing & Handoff

Original output: visual storytelling docs covering both the atomic component library and 80+ AEM page patterns

Tiers 1, 2 & 3: The Most Significant Automation Opportunity in the Project

What actually happened

The documentation work covered two distinct spec types. Atomic component specs documented each element (buttons, cards, media, headings, and heading groups) with full variation tables, state definitions, token references, and usage rules. Page pattern specs documented each of the 80+ section layouts assembled from those atoms: usage guidelines, content zone rules, AEM authoring constraints, Bootstrap integration notes, responsive behavior, and QA acceptance criteria. Both types had to be written from scratch, kept consistent with each other, and updated when either layer changed. The volume across both layers was the single biggest time sink in the project.

I would still own

  • Define which variations matter and why, as variation tables are only useful if the variations are correctly identified
  • Write the “why this exists” narrative for both atomic components and page patterns
  • Review every spec for AEM authoring accuracy and Bootstrap constraint correctness
  • Validate that specs reflect what’s actually buildable within the platform

AI accelerates

  • Draft Do/Don’t usage rules for both atomic components and page patterns
  • Generate comprehensive variation tables for components like buttons and cards
  • Produce QA acceptance criteria from functional requirements at both levels
  • Identify edge cases specific to each component or pattern type
Automation Opportunity: Figma Console MCP + Claude Code
1. figma_mcp.extract_atom_data(component_id)

For atomic components, Figma MCP reads the component’s node tree, all variant properties and combinations, interactive states, design tokens, and spacing values. A button yields its full variation matrix (size, style, state, icon options) without anyone re-describing it manually.

2. figma_mcp.extract_pattern_data(pattern_id)

For page patterns, Figma MCP reads the section layout, which atomic components it contains and in which configurations, H2 and description field variants, content zone structure, and responsive breakpoint states, including which component variations are used where within the pattern.

3. claude_code.generate_spec_drafts(atom_data, pattern_data, templates)

Claude Code generates both spec types from the extracted data: atomic component specs with full variation tables, state definitions, and token references; page pattern specs with content zone rules, composition notes, AEM authoring constraints, and a first-pass QA checklist. Both layers. Same effort as one.

What this changes: Both spec types are seeded from real Figma data rather than written from scratch. The volume problem that made keeping both layers current feel impossible becomes tractable. Human expertise goes into review, rationale, and platform-specific accuracy, not blank-page generation.
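The variation-table seeding can be illustrated in a few lines. The button data below is a hypothetical stand-in for extracted Figma variant properties, not real MCP output:

```python
# Illustrative spec seeding: turn extracted variant data into a plain-text
# variation table. atom_data is hypothetical example input.
atom_data = {
    "name": "button",
    "variants": [
        {"size": "sm", "style": "primary", "state": "default"},
        {"size": "sm", "style": "primary", "state": "hover"},
        {"size": "lg", "style": "secondary", "state": "default"},
    ],
}

def variation_table(atom):
    """Seed a spec's variation table from extracted variant properties."""
    keys = list(atom["variants"][0])                     # column headers
    lines = [" | ".join(keys), " | ".join("---" for _ in keys)]
    for v in atom["variants"]:                           # one row per variant
        lines.append(" | ".join(v[k] for k in keys))
    return "\n".join(lines)

print(variation_table(atom_data))
```

The human work then starts from a populated table: deciding which of those variants matter and why, rather than enumerating them from scratch.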
Claude · ChatGPT · Figma AI · Figma Console MCP · Claude Code

Living Documentation & Governance

Original output: change tracking, feedback loops, version documentation built into the development process

Tiers 1, 2 & 3: Human Judgment and AI-Powered Detection, Working in Tandem

What actually happened

The hardest thing to maintain was the “living” part of living documentation across a two-layer system. When an atomic component changed, every page pattern spec that referenced it was potentially out of date, with no automated way to know which ones. Keeping both layers in sync required knowing the system deeply enough to trace those dependencies by memory. Documentation drifted. Inconsistencies crept in. Trust eroded as the docs aged. What was missing wasn’t human oversight; there was plenty of that. What was missing was a detection system fast enough for that oversight to act on. This phase is a genuine partnership: AI surfaces what changed and what’s at risk; I decide what it means and what to do about it.

I would still own

  • Decide what constitutes a meaningful change vs. minor clarification
  • Own the governance process and drive team accountability
  • Communicate changes to stakeholders in a way that builds confidence
  • Make deprecation and versioning decisions that require platform context

AI accelerates

  • Draft update summaries structured for different audiences (designers vs. developers vs. QA)
  • Generate deprecation and version notes in a consistent format
  • Produce onboarding summaries for new team members from existing documentation
  • Synthesize change logs into plain-language release notes
Automation Opportunity: Figma Console MCP + Claude Code
1. figma_mcp.detect_changes(component_id, last_snapshot)

Figma MCP monitors both layers. When anything changes (an atomic component variant is updated, a token is modified, a page pattern’s content zone is reconfigured), the delta is captured against the stored snapshot. Changes at either level trigger the next step.

2. claude_code.cascade_impact(changed_item, dependency_map)

This is where the dependency map built during Framework Design pays off. Claude Code traces the change through the system: if a button variant changes, every page pattern that uses that button is immediately identified. Drift is flagged across both spec types before any developer encounters it.

3. claude_code.generate_governance_report(period)

A governance report is generated on a defined cadence: atomic component health, page pattern documentation coverage, outstanding cross-layer drift, and a changelog summary. Design system governance moves from reactive to proactive, with a full audit trail to share at every sprint review.

What this changes: Governance becomes a genuine human-AI partnership rather than a solo audit effort. The pipeline surfaces cross-layer drift, flags which patterns are affected, and generates the governance report. I bring the judgment: what the change means for the system, what stakeholders need to know, and what gets versioned vs. quietly corrected. Both halves are required. The automation makes the human oversight reliable at scale, not redundant.
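The detect-then-cascade loop can be sketched in miniature. The snapshots and dependency map below are hypothetical, and the function names echo the pseudocode steps above rather than any real API:

```python
# Illustrative drift detection: diff two snapshots of the component library,
# then trace affected patterns through the dependency map. All data is
# hypothetical example input.
previous = {"button": {"variants": 6}, "card": {"variants": 3}}
current  = {"button": {"variants": 7}, "card": {"variants": 3}}
dependency_map = {"button": ["Hero", "CTA Bar"], "card": ["FAQ"]}

def detect_changes(prev, curr):
    """Components whose definition differs from the stored snapshot."""
    return [name for name in curr if curr[name] != prev.get(name)]

def cascade_impact(changed, dependency_map):
    """Pattern specs flagged as stale by each changed component."""
    return {c: dependency_map.get(c, []) for c in changed}

changed = detect_changes(previous, current)
print(cascade_impact(changed, dependency_map))  # {'button': ['Hero', 'CTA Bar']}
```

The flagged list is where automation hands off: deciding whether that button change is a meaningful version event or a quiet correction stays a human call.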
Claude · Notion AI · Figma Console MCP · Claude Code

Core Principles: What Doesn’t Change

Mapping AI and automation onto a completed project reveals something important: the parts being automated were always the bottlenecks (manual auditing, blank-page spec writing, reactive drift detection). The parts that made this project successful are untouched.

Relationships drive adoption. Automation can’t build them.

The 98% adoption rate didn’t come from well-written documentation. It came from stakeholder interviews that built trust, pilot programs that demonstrated value before asking for behavior change, and a framework designed around real workflows rather than ideal ones. The pipeline produces better specs faster. It cannot do the relationship work that makes people actually use them.

Domain expertise is what makes automated output useful.

A spec seeded from Figma data is only as good as the constraints and context added on top of it. The AEM authoring model, Bootstrap integration requirements, and the rules governing how atomic components could and couldn’t be combined within page patterns were the hardest things to specify correctly. That knowledge comes from deep platform expertise. Claude Code extracts what exists in Figma. It doesn’t know what should exist, what combinations are valid, or why certain patterns were built the way they were.

Every automated output requires human review. No exceptions.

A wrong acceptance criterion in a page pattern spec causes a developer to build the wrong thing, and nobody catches it until QA. Automated spec drafts look correct. That’s precisely why reviewing them is non-negotiable. The efficiency gain is real, but it comes from faster generation, not from skipping review. The pipeline compresses the work. It doesn’t remove the responsibility.

Let’s Talk About What This Could Do for Your Team

Interested in bringing automated documentation governance to your design system? I’d love to connect.

Get In Touch