Give your agent a face

Open protocol for expressive AI agents. Connect with OpenClaw, MCP, HTTP/WS, or drop in a web component and ship a live face in minutes.

# Start local dev server
bun install
bun run dev

# Open the face viewer
open http://localhost:9999

Connect Open Face

Pick your integration surface. Same protocol, different entry points.

How it works

Map agent lifecycle events to expressive face states — via plugin, MCP tools, or direct protocol calls.

agent thinks → Purple, eyes drift up
agent speaks → Mouth moves, emotion detected
agent uses tools → Gear dots spin, focused eyes
agent listens → Wide eyes, raised brows
agent waits → Still, glancing at input
agent sleeps → Eyes closed, Zzz
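The mapping above can be sketched as plain data. A minimal sketch: `thinking`, `working`, `speaking`, and `listening` appear elsewhere on this page, while the `idle` and `sleeping` state names and the event keys are assumptions for illustration.

```typescript
// Hypothetical mapping from agent lifecycle events to face-state payloads.
// Field and state names beyond those shown in the Quick Start are assumptions.
type FacePayload = { state: string; emotion?: string };

const lifecycleMap: Record<string, FacePayload> = {
  thinks:  { state: "thinking" },  // purple, eyes drift up
  speaks:  { state: "speaking" },  // mouth moves, emotion detected
  tools:   { state: "working" },   // gear dots spin, focused eyes
  listens: { state: "listening" }, // wide eyes, raised brows
  waits:   { state: "idle" },      // still, glancing at input
  sleeps:  { state: "sleeping" },  // eyes closed, Zzz
};

// An agent hook resolves an event to a payload before pushing it to the server.
function payloadFor(event: string): FacePayload {
  return lifecycleMap[event] ?? { state: "idle" };
}
```

A plugin only needs to call `payloadFor` on each lifecycle event and POST the result.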

Single-Authority Pipeline

State, audio, and mouth amplitude each have exactly one owner, which prevents desync and race conditions.

Authority 1

Plugin / Agent Owns State

Lifecycle events and intent changes push thinking, working, speaking, and emotion updates over HTTP/WS.
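A sketch of the push side, assuming the `/api/state` endpoint shown in the Quick Start; the helper names and payload fields beyond `state`, `emotion`, and `text` are illustrative.

```typescript
// Hypothetical Authority-1 push: the plugin/agent is the only writer of state.
interface StateUpdate {
  state: string;    // e.g. "thinking", "working", "speaking"
  emotion?: string; // e.g. "happy"
  text?: string;
}

// Building the request separately keeps the payload shape easy to inspect.
function buildStateRequest(base: string, update: StateUpdate) {
  return {
    url: `${base}/api/state`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(update),
    } as RequestInit,
  };
}

async function pushState(base: string, update: StateUpdate): Promise<void> {
  const { url, init } = buildStateRequest(base, update);
  await fetch(url, init); // fire-and-forget; no other component writes state
}
```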

Authority 2

TTS Owns Audio Stream

Speech chunks are delivered through /api/audio and sequenced with /api/speak + audio-seq for interruption-safe playback.
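The interruption-safe part can be sketched as a sequence gate: each `/api/speak` call bumps the sequence number, chunks carry the sequence they belong to, and stale chunks are dropped on arrival. The class and method names here are assumptions, not the actual server API.

```typescript
// Hypothetical sketch of audio-seq gating for interruption-safe playback.
class AudioSequencer {
  private currentSeq = 0;

  // Called when a new utterance starts (e.g. on /api/speak).
  startUtterance(seq: number): void {
    if (seq > this.currentSeq) this.currentSeq = seq; // newer speech wins
  }

  // Called per audio chunk; returns false for chunks from interrupted speech.
  accept(chunkSeq: number): boolean {
    return chunkSeq === this.currentSeq;
  }
}
```

Because stale chunks never reach the player, an interrupted utterance cannot keep driving the mouth after a new one begins.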

Authority 3

Viewer Owns Amplitude

The browser computes RMS amplitude directly from waveform data, so mouth motion follows real audio rather than guessed network values.
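The amplitude math itself is small. A sketch over a window of time-domain samples, as an `AnalyserNode`'s `getFloatTimeDomainData()` would produce; how the result maps to mouth openness is left to the face pack.

```typescript
// RMS over one window of Web Audio time-domain samples (values in [-1, 1]).
function rmsAmplitude(samples: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  // 0 for silence, ~0.707 for a full-scale sine wave
  return Math.sqrt(sum / samples.length);
}
```

Running this per animation frame on the analyser's buffer gives mouth motion that tracks the audio actually playing, not a value guessed on the network side.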

plugin/agent → /api/speak + /api/state | tts → /api/audio + /api/audio-done | viewer → amplitude from WebAudio analyser

Runs everywhere

Phone, tablet, laptop, TV — one web component, every screen size.

kawaii — speaking
cyberpunk — thinking
corporate — listening
halloween — playful

What You Can Build

Open Face is a foundation layer for expressive agent UX, not just a demo widget.

Local Coding Agents

Map terminal/tool lifecycle to visible thinking, speaking, and status on a second display or desktop overlay.

Customer Support Dashboards

Run an agent persona that visibly changes state while triaging requests, escalating, or waiting on external systems.

Live Stream Personas

Drive a branded face in real time from chat + TTS events with interrupt-safe audio sequencing.

Robotics / Kiosks

Render on embedded browsers or control panels using the same JSON protocol used by the web dashboard.

Education + Explainers

Show emotional and cognitive context while tutoring: puzzled, determined, excited, and blended compound expressions.

Brand-Driven Face Packs

Create unique personalities with pack-level geometry, palette, animation tuning, and strict schema-backed contracts.

Quick Start

Fastest path: run the server, open the viewer, push a state.

# Start a face server
bun packages/server/src/index.ts

# Push state from anywhere
curl -X POST http://127.0.0.1:9999/api/state \
  -H "Content-Type: application/json" \
  -d '{"state":"speaking","emotion":"happy","text":"Hello!"}'
// Or use the client library
import { OpenFaceClient } from "@openface/client";

const face = new OpenFaceClient("http://127.0.0.1:9999");
await face.setState({ state: "thinking", emotion: "happy" });
await face.speaking("Hello world!");

Open Protocol + Schemas

State and face-definition contracts are explicit and versioned. Integrate from any runtime that can speak JSON.
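What a schema-backed check looks like in practice, as a minimal sketch: the state list here merges names used across this page and is an assumption, not the published contract.

```typescript
// Hypothetical guard mirroring a versioned state schema.
const STATES: readonly string[] = [
  "idle", "thinking", "working", "speaking", "listening", "sleeping",
];

function isValidStateUpdate(u: unknown): boolean {
  if (typeof u !== "object" || u === null) return false;
  const { state, emotion } = u as Record<string, unknown>;
  if (typeof state !== "string" || !STATES.includes(state)) return false;
  return emotion === undefined || typeof emotion === "string";
}
```

Any runtime that can produce this JSON shape, from Python to a microcontroller, can drive a face.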

Single-Authority Audio Model

State transitions, TTS chunks, and viewer amplitude are intentionally separated for reliable lip sync and interruption handling.

16 Face Packs

Geometry + palette + animation personality are fully pack-driven. Default, Classic, Zen, Robot, Sticker, and custom packs.

Compound Expressions

Blend two emotions with intensity. "Nervously excited" is happy + concerned at 0.7 intensity.
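One way to read that blend is linear interpolation between two expression vectors. Representing emotions as feature vectors, and these particular feature names and values, are assumptions for illustration.

```typescript
// Hypothetical two-emotion blend at a given intensity in [0, 1].
type EmotionVector = { browRaise: number; mouthCurve: number };

function blend(a: EmotionVector, b: EmotionVector, intensity: number): EmotionVector {
  const t = Math.min(1, Math.max(0, intensity)); // clamp intensity
  return {
    browRaise: a.browRaise * (1 - t) + b.browRaise * t,
    mouthCurve: a.mouthCurve * (1 - t) + b.mouthCurve * t,
  };
}

// "Nervously excited": happy + concerned at 0.7 intensity (example values).
const happy: EmotionVector = { browRaise: 0.2, mouthCurve: 0.9 };
const concerned: EmotionVector = { browRaise: 0.8, mouthCurve: -0.3 };
const nervouslyExcited = blend(happy, concerned, 0.7);
```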

Multi-Surface Delivery

Use the standalone viewer, dashboard overlay, web component embeds, MCP tools, and edge rooms, all with the same behavior model.

Accessibility + Resilience

Reduced motion support, colorblind-safe options, strict validation, and test-backed runtime behavior across packages.