Open protocol for expressive AI agents. Connect with OpenClaw, MCP, HTTP/WS, or drop in a web component and ship a live face in minutes.
Watch, phone, tablet, laptop, TV — one web component, every screen size.
State and face-definition contracts are explicit and versioned. Integrate from any runtime that can speak JSON.
State, TTS chunks, and viewer amplitude are intentionally separated for reliable lip sync and interruption handling.
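As a minimal sketch of that separation, the three channels could look like the shapes below. The names (`StateUpdate`, `TtsChunk`, `AmplitudeFrame`) and fields are illustrative assumptions, not the actual protocol contract:

```typescript
// Hypothetical message shapes for the three separate channels.
interface StateUpdate {
  v: 1;               // contract version
  emotion: string;    // e.g. "happy"
  state: string;      // e.g. "speaking"
}

interface TtsChunk {
  utteranceId: string; // lets the viewer drop stale chunks on interruption
  seq: number;
  text: string;
}

interface AmplitudeFrame {
  utteranceId: string;
  t: number;           // seconds into the utterance
  amplitude: number;   // 0..1, drives mouth openness on the viewer
}

// Interruption handling: a new utteranceId invalidates queued chunks,
// so amplitude and text never bleed across utterances.
function isStale(chunk: TtsChunk, currentUtteranceId: string): boolean {
  return chunk.utteranceId !== currentUtteranceId;
}
```

Keeping amplitude out of the state channel means lip sync survives dropped or reordered state updates, and an interruption only has to swap one identifier.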
Packs can define head shapes, body silhouettes, and arm styles. Breath, tilt, and look-derived motion with no rigging.
Antennae, glasses, and custom attachments with layer slots and spring-physics simulation for natural motion.
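A damped spring of the following kind could drive that attachment motion; the integrator and constants here are an illustrative sketch, not the runtime's actual solver:

```typescript
// Minimal damped spring, stepped with semi-implicit Euler.
interface Spring {
  pos: number;
  vel: number;
}

function springStep(
  s: Spring,
  target: number,
  dt: number,
  stiffness = 120, // pull toward target (illustrative constant)
  damping = 14     // bleed off velocity (illustrative constant)
): Spring {
  const accel = stiffness * (target - s.pos) - damping * s.vel;
  const vel = s.vel + accel * dt;
  return { pos: s.pos + vel * dt, vel };
}

// An antenna lagging behind a head turn: step toward the new target each frame.
let antenna: Spring = { pos: 0, vel: 0 };
for (let i = 0; i < 600; i++) {
  antenna = springStep(antenna, 1, 1 / 60);
}
```

The underdamped settings overshoot slightly before settling, which is what gives accessories their natural wobble.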
Blend two emotions with intensity. "Nervously excited" is happy + concerned at 0.7. 13 emotions, 11 states.
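The "nervously excited" example can be sketched as a blend descriptor. The `blendEmotions` helper and its shape are assumptions for illustration, not the published API:

```typescript
// Illustrative two-emotion blend descriptor.
type EmotionBlend = {
  primary: string;
  secondary: string;
  intensity: number; // weight of the secondary emotion, clamped to 0..1
};

function blendEmotions(
  primary: string,
  secondary: string,
  intensity: number
): EmotionBlend {
  // Clamp so out-of-range inputs degrade gracefully.
  const t = Math.min(1, Math.max(0, intensity));
  return { primary, secondary, intensity: t };
}

// "Nervously excited" = happy + concerned at 0.7.
const nervouslyExcited = blendEmotions("happy", "concerned", 0.7);
```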
Energy, expressiveness, warmth, stability, playfulness — five traits that modulate animation speed, range, and idle behavior.
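One way trait-driven modulation could work is a pure mapping from traits to motion parameters. The function, field names, and ranges below are illustrative assumptions:

```typescript
// Hypothetical sketch: five traits in 0..1 modulating animation parameters.
interface Personality {
  energy: number;
  expressiveness: number;
  warmth: number;
  stability: number;
  playfulness: number;
}

interface MotionParams {
  speed: number;        // animation playback multiplier
  range: number;        // fraction of full pose range
  idleIntervalSec: number; // gap between idle gestures
}

function modulate(p: Personality): MotionParams {
  return {
    speed: 0.5 + p.energy,                // 0.5x .. 1.5x
    range: 0.5 + 0.5 * p.expressiveness,  // 50% .. 100%
    idleIntervalSec: 2 + 6 * p.stability, // calm characters idle less often
  };
}
```

A pure function like this keeps trait effects deterministic and easy to test per pack.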
Geometry, palette, animation, personality, bodies, and accessories — all pack-driven with strict JSON schemas.
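A pack manifest might look like the fragment below. Every key and value here is an illustrative guess at the shape, not the actual schema:

```json
{
  "schemaVersion": 1,
  "name": "example-pack",
  "geometry": { "head": "round", "body": "capsule", "arms": "stub" },
  "palette": { "base": "#4AC0FF", "accent": "#FFD24A" },
  "personality": { "energy": 0.8, "warmth": 0.6 },
  "accessories": [{ "slot": "head-top", "type": "antenna" }]
}
```

Strict schema validation at load time means a malformed pack fails fast instead of rendering a broken face.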
Viewer, dashboard, web component, MCP tools, edge rooms — same behavior model, watch to TV.
Reduced motion, colorblind-safe packs, ARIA labels, strict validation, and a runtime backed by 300+ tests.