Give your agent a face

Open protocol for expressive AI agents. Connect with OpenClaw, MCP, HTTP/WS, or drop in a web component and ship a live face in minutes.

# Start local dev server
bun install
bun run dev

# Open the face viewer
open http://localhost:9999

Runs everywhere

Watch, phone, tablet, laptop, TV — one web component, every screen size.

zen — idle
kawaii — speaking
cyberpunk — thinking
corporate — listening
halloween — playful
Browse all 16 face packs →

Open Protocol + Schemas

State and face-definition contracts are explicit and versioned. Integrate from any runtime that can speak JSON.
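As a sketch of what a versioned state message might look like on the wire, here is a minimal TypeScript shape with a structural check before sending over HTTP/WS. The field names (`version`, `state`, `emotion`, `intensity`) are illustrative assumptions, not the protocol's actual schema:

```typescript
// Hypothetical shape of a versioned state message; the real contract
// is defined by the protocol's JSON schemas.
type FaceState = {
  version: string;    // schema version, e.g. "1.0"
  state: string;      // e.g. "idle" | "speaking" | "thinking"
  emotion: string;    // e.g. "happy"
  intensity?: number; // 0..1
};

// Minimal structural validation before sending a JSON payload.
function isFaceState(value: unknown): value is FaceState {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.version === "string" &&
    typeof v.state === "string" &&
    typeof v.emotion === "string" &&
    (v.intensity === undefined || typeof v.intensity === "number")
  );
}

const msg = { version: "1.0", state: "speaking", emotion: "happy", intensity: 0.8 };
console.log(isFaceState(msg)); // true
```

Any runtime that can produce JSON of this general shape can drive a face; the validator is the kind of guard a strict schema makes cheap to write.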

Single-Authority Audio

State, TTS chunks, and viewer-side amplitude are deliberately kept separate, which makes lip sync reliable and interruptions clean.
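One way to picture the viewer's side of that split: mouth amplitude is derived locally from the audio the viewer is actually playing (plain RMS in this sketch), rather than trusting a value shipped alongside the state. This is an illustration of the idea, not the project's actual implementation:

```typescript
// Sketch: derive lip-sync amplitude from the audio chunk being played,
// so the mouth tracks real playback even when it is delayed or cut off.
function rmsAmplitude(samples: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  return Math.sqrt(sum / samples.length); // 0..1 for normalized audio
}

const chunk = new Float32Array([0.5, -0.5, 0.5, -0.5]);
console.log(rmsAmplitude(chunk)); // 0.5
```

Because the amplitude is computed where playback happens, interrupting TTS mid-chunk drops the mouth to silence immediately instead of animating against audio that was never heard.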

Bodies + Head Layers

Packs can define head shapes, body silhouettes, and arm styles. Breath, tilt, and look-derived motion with no rigging.
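"No rigging" procedural motion can be as simple as a per-frame sine offset applied to a layer. A minimal sketch of a breath cycle, with illustrative names and constants:

```typescript
// Sketch of rig-free breath motion: a sine offset applied to the
// head/body layers each frame. Rate and depth are illustrative.
function breathOffset(timeSec: number, rateHz = 0.25, depthPx = 2): number {
  return Math.sin(2 * Math.PI * rateHz * timeSec) * depthPx;
}

console.log(breathOffset(0)); // 0 (mid-cycle)
console.log(breathOffset(1)); // 2 (peak of a 4-second breath)
```

Tilt and look-derived motion work the same way: small, continuous transforms on whole layers, no skeleton required.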

Accessories + Physics

Antennae, glasses, and custom attachments with layer slots and spring physics simulation for natural motion.
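The "natural motion" of accessories usually comes from a damped spring integrated each frame. This is a generic semi-implicit Euler step, not the pack schema's actual parameters:

```typescript
// Minimal damped-spring step (semi-implicit Euler), the kind of
// integrator typically behind accessory wobble. Constants are
// illustrative; the real pack schema may expose different knobs.
type Spring = { pos: number; vel: number };

function stepSpring(
  s: Spring, target: number, dt: number,
  stiffness = 120, damping = 14
): Spring {
  const accel = stiffness * (target - s.pos) - damping * s.vel;
  const vel = s.vel + accel * dt;
  const pos = s.pos + vel * dt;
  return { pos, vel };
}

// An antenna tip chasing a moved anchor settles onto the target.
let tip: Spring = { pos: 0, vel: 0 };
for (let i = 0; i < 300; i++) tip = stepSpring(tip, 1, 1 / 60);
console.log(tip.pos.toFixed(2)); // 1.00
```

With stiffness above critical damping the tip overshoots slightly before settling, which is exactly the organic wobble you want from antennae and glasses.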

Compound Expressions

Blend two emotions at a chosen intensity: "nervously excited" is happy + concerned at 0.7. 13 emotions, 11 states.
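A hypothetical helper showing how such a blend might be expressed; the real API surface may differ, but the happy + concerned example comes straight from the feature above:

```typescript
// Hypothetical compound expression: a weighted pair of base emotions.
type Blend = { primary: string; secondary: string; mix: number };

function compound(primary: string, secondary: string, mix: number): Blend {
  // Clamp the blend weight into [0, 1].
  return { primary, secondary, mix: Math.min(1, Math.max(0, mix)) };
}

const nervouslyExcited = compound("happy", "concerned", 0.7);
console.log(nervouslyExcited); // { primary: "happy", secondary: "concerned", mix: 0.7 }
```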

Personality System

Energy, expressiveness, warmth, stability, playfulness — five traits that modulate animation speed, range, and idle behavior.
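To make "traits modulate animation" concrete, here is an illustrative mapping from a trait vector to an animation-speed multiplier. The trait names come from the list above; the formula is an assumption for the sketch:

```typescript
// Sketch: five 0..1 traits scaling an animation parameter.
type Personality = {
  energy: number;
  expressiveness: number;
  warmth: number;
  stability: number;
  playfulness: number;
};

function animationSpeed(p: Personality, base = 1): number {
  // Illustrative rule: energy speeds animation up, stability calms it.
  return base * (0.5 + p.energy) * (1.5 - 0.5 * p.stability);
}

const calm: Personality = {
  energy: 0.5, expressiveness: 0.5, warmth: 0.5, stability: 1, playfulness: 0.5,
};
console.log(animationSpeed(calm)); // 1
```

The same trait vector can feed motion range and idle behavior, so one personality produces consistent character across every animation.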

16 Face Packs

Geometry, palette, animation, personality, bodies, and accessories — all pack-driven with strict JSON schemas.
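A rough sense of what "pack-driven" means in practice: everything that defines a face lives in data. The field names below are assumptions for illustration, not the project's actual schema:

```typescript
// Illustrative pack shape; the real schemas are stricter and versioned.
type FacePack = {
  name: string;
  palette: { primary: string; background: string };
  geometry: { eyeSpacing: number; mouthWidth: number };
  personality?: { energy: number };
};

const zen: FacePack = {
  name: "zen",
  palette: { primary: "#8aa", background: "#112" },
  geometry: { eyeSpacing: 0.4, mouthWidth: 0.3 },
};
console.log(zen.name); // "zen"
```

Because a pack is pure data validated against a schema, adding a seventeenth pack means writing JSON, not code.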

Multi-Surface Delivery

Viewer, dashboard, web component, MCP tools, edge rooms — the same behavior model from watch to TV.

Accessibility

Reduced motion, colorblind-safe packs, ARIA labels, strict validation, and a runtime backed by 300+ tests.

Read the full documentation →