
Designing Consistency
in AI-Generated Interfaces

Building a Controlled System for Scalable UI Generation with Figma Make

ROLE

UX/UI Designer

TIMELINE

2025


Working Within Constraints

In October 2025, I started exploring how generative AI could be integrated into real product design workflows using Figma Make.

The expectation was speed. The reality was inconsistency.

Operating under the limitations of the Figma Organization plan, without access to a robust shared component library, every generated interface behaved as an isolated output. There was no continuity between iterations, no guarantee of structure, and no reliable way to reuse or scale what was generated.

This was not a limitation of AI capability, but of how the AI was used without structure. What initially appeared to accelerate prototyping quickly exposed a deeper issue:

The problem was not generating interfaces; it was maintaining consistency across them.

To comply with my non-disclosure agreement, I have omitted and obfuscated confidential information in this case study.

Problem

Lack of Structure Breaks the Workflow

As the workflow evolved, the limitations became clear.

Each iteration required rework:

  • Components shifted in structure and behavior.

  • Visual patterns were not preserved.

  • Outputs diverged from expected design standards.

  • Generated UI could not be reliably reused or extended.

This introduced friction instead of reducing it.

The more the system was used, the less predictable it became.

AI was producing output, but not a system.

From Generation to Control

At that point, the problem stopped being about speed.

It became a question of control.

How can AI-generated UI be made consistent, predictable, and reusable within real product constraints?

Instead of treating Figma Make as a generative shortcut, I reframed it as a system that required structure, boundaries, and a reliable source of truth.

Approach

Structuring AI Generation as a System

The solution was not about improving outputs, but about eliminating variability at the source.

Instead of relying on open-ended generation, I restructured the workflow into a controlled system where inputs, constraints, and outputs were explicitly defined.

I intentionally prioritized consistency over flexibility, as variability was the primary failure point of the workflow.

The goal was simple:

Make AI-generated interfaces predictable, repeatable, and structurally consistent.

System Definition

Establishing Control

The first step was removing ambiguity from the generation process.

Rather than allowing the model to construct UI freely, I defined a system where:

  • All UI elements originated from a predefined set of components.

  • Structure and behavior were fixed and non-negotiable.

  • Visual consistency was enforced through controlled references.

This shifted generation from interpretation to execution within constraints.

Source of Truth - Aligning with Real Implementation

To prevent divergence between generated UI and production reality, I established a single source of truth outside the generation layer.

  • GitHub was used to store component definitions and structure.

  • Figma Sites acted as the interface layer connecting design and code.

  • Figma Make was constrained to operate within these references.

I anchored the system in code-backed definitions so that generated interfaces would remain aligned with real implementation constraints. Outputs were no longer isolated artifacts, but extensions of a defined system grounded in production reality.
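The code-backed source of truth can be pictured as a small registry file. The TypeScript sketch below is a minimal, hypothetical illustration; the component names, props, and variants are placeholders, not the actual library stored in GitHub.

```typescript
// Minimal sketch of a code-backed component registry (hypothetical names).
// In the workflow described above, definitions like these lived in GitHub
// and served as the single source of truth for generation.

type PropType = "string" | "boolean" | "number";

type ComponentDef = {
  name: string;
  props: Record<string, PropType>; // fixed, non-negotiable prop contract
  variants: string[];              // the only allowed visual variants
};

const registry: Record<string, ComponentDef> = {
  Button: {
    name: "Button",
    props: { label: "string", disabled: "boolean" },
    variants: ["primary", "secondary"],
  },
  Card: {
    name: "Card",
    props: { title: "string", elevated: "boolean" },
    variants: ["default"],
  },
};

// Generation may only reference components defined here.
function isAllowed(componentName: string): boolean {
  return componentName in registry;
}
```

Anchoring definitions in a file like this means design and generation both read from the same contract, rather than each inventing its own.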

Controlled Generation

Registry-Based Prompting

Instead of describing interfaces, I defined a closed component registry.

The model was instructed to:

  • Use only predefined components.

  • Preserve structure, props, and behavior.

  • Avoid introducing new patterns or styles.

Each component acted as a constraint, not a suggestion.

I enforced this registry to eliminate variability and ensure that every generated interface adhered to a consistent structural logic.

This transformed AI from a generative tool into a system executor.
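One way to make a closed registry operational is to compile it directly into the prompt. The sketch below is a hedged illustration of that idea, assuming a simple text-based system prompt; the component names and wording are placeholders, not the actual prompts used.

```typescript
// Hypothetical sketch: turning a closed component registry into explicit
// prompt constraints, so the model executes within boundaries rather than
// interpreting freely. Component names are illustrative.

const allowedComponents = ["Button", "Card", "Modal"];

function buildSystemPrompt(components: string[]): string {
  return [
    "Build the interface using ONLY the following components:",
    ...components.map((name) => `- ${name}`),
    "Preserve each component's structure, props, and behavior.",
    "Do not introduce new components, patterns, styles, or SVG definitions.",
  ].join("\n");
}

console.log(buildSystemPrompt(allowedComponents));
```

The point of generating the prompt from the registry, rather than writing it by hand, is that the constraints can never drift out of sync with the source of truth.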

Constraint Layer

Enforcing Consistency

The system relied on strict, explicit rules:

  • No creation of new component structures.

  • No deviation from predefined styles.

  • No modification of SVG definitions.

  • No introduction of visual inconsistencies.

These constraints were intentionally rigid.

They enabled:

  • Consistent outputs across iterations.

  • Reusable components without rework.

  • Scalable generation without degradation.

The system was applied across 40+ prototypes within the first month, maintaining consistency without structural drift.
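Rules like these hold up best when something checks them mechanically. As a sketch of how such a constraint layer might be enforced, the hypothetical validator below walks a generated UI tree and flags any component or prop outside the registry; the allowed names are illustrative, not the real library.

```typescript
// Hypothetical constraint checker for generated UI trees.
// Allowed components and props are illustrative placeholders.

type UINode = {
  component: string;
  props: Record<string, unknown>;
  children?: UINode[];
};

const allowedProps: Record<string, Set<string>> = {
  Button: new Set(["label", "disabled"]),
  Card: new Set(["title", "elevated"]),
};

// Collects every violation instead of failing on the first one,
// so a whole generated screen can be reviewed in a single pass.
function validate(node: UINode, errors: string[] = []): string[] {
  const props: Set<string> | undefined = allowedProps[node.component];
  if (!props) {
    errors.push(`Unknown component: ${node.component}`);
  } else {
    for (const prop of Object.keys(node.props)) {
      if (!props.has(prop)) {
        errors.push(`Unknown prop "${prop}" on ${node.component}`);
      }
    }
  }
  for (const child of node.children ?? []) {
    validate(child, errors);
  }
  return errors;
}
```

A check like this turns "no deviation from predefined styles" from a request into a testable property of every output.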

Trade-Offs

Choosing Reliability Over Flexibility

This approach introduced deliberate trade-offs.

By enforcing strict constraints:

  • Early-stage exploration became less flexible.

  • New component variations required upfront definition.

  • The system depended on discipline in maintaining the registry.

These trade-offs were intentional: reliability mattered more than flexibility at this stage of the workflow.

Limitations

Where the System Depends on Discipline

The system is not self-sustaining.

Its effectiveness depends on:

  • Maintaining the integrity of the component registry.

  • Ensuring all contributors adhere to defined constraints.

  • Updating the system as new components and patterns emerge.

The base template evolved over time, incorporating new components and expanding its capabilities without breaking consistency. Additional versions of the system were introduced and successfully adopted within Figma Make, maintaining stability across iterations.

Without this discipline, the system can degrade.

This reinforced an important realization: consistency is a property of the system and the discipline around it, not of the AI itself.

Outcome

From Generation to Reliable Output

The introduction of a controlled system transformed AI-generated UI from an exploratory tool into a reliable part of the design workflow.

Instead of rebuilding interfaces at each iteration, the team could now start from a consistent, pre-structured foundation, reducing variability and improving overall efficiency.

Within the first month:

  • 40+ prototypes were built using the system.

  • The template was continuously improved without breaking structure.

  • New versions were introduced and adopted without workflow disruption.


Using the template became part of the standard AI prototyping process within the team.

This impact was recognized at the leadership level:

“Before Giovanny’s work on the AI component library, we would have to create all the basic components from scratch, then spend time refining the output of the AI models. Now, we can start with a standardized template in which all of the components are already built and styled correctly. It saves us an hour of work for each new prototype that we build.”

- Charles Shimooka, Head of UX

The system replaced variability with consistency, enabling reliable and repeatable UI generation across the team.

It reduced rework, improved iteration speed, and aligned AI-generated output with real product constraints.

In simple terms:

I built a system that forces AI to generate interfaces using predefined components, ensuring consistency across every output.
