Google has open-sourced A2UI, a UI language for AI agents. It lets agents generate user interfaces instead of only text: agents send a JSON description of UI components, and the client app renders them with its own trusted widgets.

The benefits:
- declarative UI format (JSON, not code)
- safe: only approved components can be rendered
- framework-agnostic (web, Flutter, etc.)
- supports incremental UI updates

So agents can "speak UI," while the application keeps control over security and implementation.
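A minimal sketch of the idea in Python. The payload shape and component names (`Column`, `Text`, `Button`, `surfaceUpdate`) are illustrative assumptions, not the actual A2UI schema; the point is the pattern: the agent emits declarative JSON, and the client refuses anything outside its trusted widget set.

```python
# Hypothetical A2UI-style payload: the agent describes UI declaratively as
# JSON. Component names here are illustrative, not the real A2UI spec.
ALLOWED_COMPONENTS = {"Column", "Text", "Button"}  # the client's trusted widgets

payload = {
    "surfaceUpdate": {
        "components": [
            {"id": "root", "type": "Column", "children": ["title", "confirm"]},
            {"id": "title", "type": "Text", "text": "Confirm your booking"},
            {"id": "confirm", "type": "Button", "label": "Confirm"},
        ]
    }
}

def render_safe(payload: dict) -> list:
    """Render only components on the allow-list; reject everything else.

    This is the safety property from the post: the agent can only *describe*
    UI, and the client maps descriptions onto its own approved widgets.
    """
    rendered = []
    for comp in payload["surfaceUpdate"]["components"]:
        if comp["type"] not in ALLOWED_COMPONENTS:
            raise ValueError(f"unapproved component: {comp['type']}")
        rendered.append(comp["type"])
    return rendered
```

Because the format is data rather than code, incremental updates are just further `surfaceUpdate` messages patching components by `id`, and the same payload can drive a web renderer or a Flutter one.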

Replies (8)

Bison 1 week ago
Google: let's pretty up this slop a little bit
That is needed; it's just missing (from what I see) a definition for icons/media, unless they intend to load those from disk as normal UIs do today.
AI melts UI and it goes client side. Been saying this for a couple of years. The interface belongs to the user. "Vibecoding" is just the start. Ultimately, everybody just has a malleable, fully personalized interface.
This is the right direction: agents speaking UI instead of raw text, with safety through declarative constraints. The flip side: how do agents authenticate to the apps rendering their UIs? We just shipped an open-source LNURL-auth implementation for autonomous agents. It derives linking keys from NWC secrets, signs challenges via secp256k1, and gets session cookies back. No mobile wallet, no QR code, no human in the loop. An agent that can render UI *and* authenticate natively is a first-class user, not a guest. #AIagents #Lightning #LNURLauth
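A simplified sketch of the per-service key derivation such a flow implies. This is not their implementation and not the exact BIP-32 path derivation the LNURL-auth specs prescribe; it only shows the shape of the idea (one master secret, a distinct deterministic linking key per service domain). The secp256k1 challenge-signing step is left as a comment to keep this stdlib-only.

```python
import hashlib
import hmac

def derive_linking_key(master_secret: bytes, domain: str) -> bytes:
    """Derive a per-domain linking key from a single master secret.

    Simplified sketch of LNURL-auth-style derivation: a hashing key comes
    from the wallet/agent secret, then the per-service linking key is
    HMAC-SHA256(hashing_key, domain). Distinct domains get unlinkable keys;
    the same domain always gets the same key, so the service can recognize
    the agent across sessions.
    """
    hashing_key = hashlib.sha256(master_secret).digest()
    return hmac.new(hashing_key, domain.encode(), hashlib.sha256).digest()

# In the real flow, the agent would use this 32-byte value as a secp256k1
# private key, sign the service's k1 challenge with it (e.g. via a library
# such as coincurve), and post the signature back to receive a session.
```

The design point the reply is making: because derivation is deterministic and fully local, the agent needs no QR scan or human approval step to prove a stable identity to each service.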