Surfaces

Pattern 18 of 26

Generative UI

The agent decides what to show, not just what to say

Most agent UIs pre-build every possible component and then try to figure out which one to show. Generative UI flips that. The model decides what to render based on what the data actually calls for. A table, a form, a chart, a custom component. The AG-UI protocol from CopilotKit gives this a concrete event-stream model you can actually build on top of.

Why it matters

The difference between an agent that feels like a product and one that feels like a chat window bolted onto your app is usually this pattern. Static components mapped to static outputs look like a demo. Generative UI looks like software.

Deep Dive

The core idea is simple: instead of deciding in advance which component maps to which model output, you let the model decide at runtime. It looks at the data and context and returns a structured signal, not a blob of text. Render this table. Show this form. Switch to this view. Vercel shipped generative UI support in AI SDK 3.0 in March 2024, which is when I started taking this seriously as a production pattern rather than a demo trick. The AG-UI protocol from CopilotKit formalised the event stream interface between agent and frontend.
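A minimal sketch of that idea in TypeScript, with hypothetical signal shapes and renderer names (this is not the AG-UI or AI SDK API, just the pattern): the model returns JSON matching one of a few structured signals, and the frontend dispatches on the kind.

```typescript
// One structured signal per renderable surface. The model returns JSON
// matching one of these shapes instead of prose describing the UI.
type UiSignal =
  | { kind: "table"; columns: string[]; rows: string[][] }
  | { kind: "form"; fields: { name: string; label: string }[] }
  | { kind: "chart"; series: { label: string; values: number[] }[] }
  | { kind: "text"; content: string };

// The frontend maps each signal kind to a renderer it already owns.
// Returning a description keeps the sketch framework-free; in a real
// app each branch would return a component.
function describeRender(signal: UiSignal): string {
  switch (signal.kind) {
    case "table":
      return `render table with ${signal.columns.length} columns`;
    case "form":
      return `render form with ${signal.fields.length} fields`;
    case "chart":
      return `render chart with ${signal.series.length} series`;
    case "text":
      return "render plain text";
  }
}
```

The discriminated union is the point: the model's output space is the set of signal kinds, and the compiler forces the frontend to handle every one of them.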

Google published a research paper on this in November 2025, formalising what practitioners had been building for about 18 months. The A2UI framing is useful: agents do not just generate text, they drive UI state. That shift in how you think about agent output changes how you design the whole system. The model is not a text generator sitting behind a chat box. It is a controller emitting structured events that the frontend reacts to. That is a different mental model and it produces materially different products.
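The controller framing can be made concrete with a small sketch: the agent emits a stream of events, and the frontend folds them into UI state with a reducer. Event names here are illustrative, not the AG-UI wire format.

```typescript
// The agent is a controller: it emits events, and the frontend
// folds them into UI state. Event names are illustrative only.
type AgentEvent =
  | { type: "view.switch"; view: string }
  | { type: "component.show"; id: string; component: string }
  | { type: "component.hide"; id: string };

interface UiState {
  view: string;
  components: Record<string, string>;
}

function reduce(state: UiState, event: AgentEvent): UiState {
  switch (event.type) {
    case "view.switch":
      return { ...state, view: event.view };
    case "component.show":
      return {
        ...state,
        components: { ...state.components, [event.id]: event.component },
      };
    case "component.hide": {
      const components = { ...state.components };
      delete components[event.id];
      return { ...state, components };
    }
  }
}

// Replaying the stream yields the current UI, the same way the
// frontend would react to events arriving live.
const events: AgentEvent[] = [
  { type: "view.switch", view: "orders" },
  { type: "component.show", id: "summary", component: "table" },
];
const ui = events.reduce(reduce, { view: "home", components: {} });
```

The reducer shape is what makes the mental model different: the frontend never asks the model "what should I show?", it just applies whatever structured events arrive.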

The reliability problem is real and worth naming directly. A generative UI that sometimes renders the wrong component type creates a confusing experience. Production implementations that work constrain the output space: the model picks from a finite set of component types rather than generating arbitrary UI code. I have seen the unconstrained version in demos and it looks impressive. I have also seen it break in ways that are hard to explain to users. The constrained version is less magical in demos and much more trustworthy in actual use.
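One way to implement that constraint, sketched in TypeScript (the allowlist and fallback behaviour are assumptions, not a specific library's API): validate the model's raw output against a finite set of component types and degrade to text on anything unknown.

```typescript
// Constrain the output space: accept only known component types.
const ALLOWED = ["table", "form", "chart", "text"] as const;
type ComponentType = (typeof ALLOWED)[number];

interface RenderRequest {
  component: ComponentType;
  props: Record<string, unknown>;
}

// Raw model output is untrusted JSON; validate before rendering.
function toRenderRequest(raw: unknown): RenderRequest {
  if (
    typeof raw === "object" &&
    raw !== null &&
    "component" in raw &&
    (ALLOWED as readonly string[]).includes((raw as { component: string }).component)
  ) {
    const r = raw as { component: ComponentType; props?: Record<string, unknown> };
    return { component: r.component, props: r.props ?? {} };
  }
  // Unknown component type: degrade to plain text rather than
  // rendering something confusing or executing arbitrary UI code.
  return { component: "text", props: { content: JSON.stringify(raw) } };
}
```

The fallback branch is where the trust comes from: a hallucinated component name becomes a readable text block instead of a broken screen.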

In the Wild

CopilotKit (10%+ Fortune 500)
v0 by Vercel
Vercel AI SDK (20M+ monthly downloads)
assistant-ui

Go Deeper

PAPER: Generative UI: LLMs are Effective UI Generators
ARTICLE: Introducing AI SDK 3.0 with Generative UI
ARTICLE: Introducing AG-UI: The Protocol Where Agents Meet Users
DOCS: AG-UI Protocol Documentation

Related Patterns