Agent 2 UI
We’re about to stop designing interfaces.
Something’s shifting in how we interact with software, and Google’s latest releases — A2UI and Disco with GenTabs — show us where we’re headed. This isn’t just about new tools. It’s about interfaces that build themselves.
A2UI: Agents Speaking UI
Google dropped A2UI on December 15th. It’s a protocol that enables AI agents to create actual, interactive interfaces — not just generate text. Think forms, charts, maps, entire UI experiences that work natively on web, mobile, and desktop.
Here’s the clever part: agents don’t run arbitrary code. They request components from a pre-approved catalog you control. One response from an agent renders everywhere, using whatever widgets match your app’s look and feel.
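To make the catalog idea concrete, here's a minimal sketch of how a host app might validate a declarative component tree from an agent. The message shape and component names are illustrative assumptions, not the actual A2UI schema:

```typescript
// Hypothetical component request: the agent sends data, not code.
type ComponentRequest = {
  component: string;                // name the agent asks for
  props: Record<string, unknown>;   // declarative props only
  children?: ComponentRequest[];
};

// The host app controls this catalog; anything outside it is rejected.
const catalog = new Set(["Form", "TextField", "Button", "Chart"]);

// Recursively check that every node in the tree is pre-approved.
function validate(node: ComponentRequest): boolean {
  if (!catalog.has(node.component)) return false;
  return (node.children ?? []).every(validate);
}

// An agent "speaking UI": a form expressed as a plain data structure.
const agentResponse: ComponentRequest = {
  component: "Form",
  props: { title: "Trip details" },
  children: [
    { component: "TextField", props: { label: "Destination" } },
    { component: "Button", props: { label: "Submit" } },
  ],
};

console.log(validate(agentResponse)); // true: every node is in the catalog
```

The key property: the agent never ships executable code, so the worst a misbehaving agent can do is request a component the host refuses to render.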
Disco and GenTabs
Then there’s Disco. This is Google Labs testing wild ideas. Disco has this feature called GenTabs — powered by Gemini 3 — that watches your open tabs and chat history, figures out what you’re trying to do, then just builds you a custom web app.
Planning a trip to Patagonia? Instead of drowning in tabs, GenTabs generates an interactive app with maps, calendars, all your info organized. You don’t write a single line of code.
The Pattern
Behind both releases is the same trend: we've been moving toward interfaces that show up exactly when you need them. Not before, and not in some generic format.
Traditional software gave us static interfaces designed months in advance. Then we got adaptive UIs that changed based on context. Now? A2UI and Disco represent something new — interfaces generated by AI that understands your current task in real time.
Where This Is Going
The magic is in how these pieces fit together. A2UI gives agents a universal language to “speak UI” across different systems. Disco shows what happens when you apply that to actual browsing. Interfaces become fluid, task-aware, and generated on the fly.
The web itself is becoming programmable through natural language. AI agents will be the new UI designers, creating bespoke experiences for individual moments instead of one-size-fits-all solutions.
A2UI is already live in Google’s Opal, Flutter’s GenUI SDK, and frameworks like CopilotKit’s AG-UI. It’s part of a bigger wave alongside A2A (how agents talk to each other) and MCP (model context protocol).
It may be too early to form a firm opinion, and letting agents construct UI raises obvious security questions, but it's a fascinating direction for the future of interfaces.