Give your agent a face

Open protocol for expressive AI agents. Connect with OpenClaw, MCP, HTTP/WS, or drop in a web component and ship a live face in minutes.

# Start local dev server
bun install
bun run dev

# Open the face viewer
open http://localhost:9999

Connect Open Face

Pick your integration surface. Same protocol, different entry points.

How it works

Map agent lifecycle events to expressive face states — via plugin, MCP tools, or direct protocol calls.

agent thinks: purple face, eyes drift up
agent speaks: mouth moves, emotion detected
agent uses tools: gear dots spin, focused eyes
agent listens: wide eyes, raised brows
agent waits: still, glancing at input
agent sleeps: eyes closed, Zzz
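As a concrete sketch, the mapping above might be wired up in a plugin like this; the lifecycle event names and the exact `/api/state` payload shape here are assumptions for illustration, not the canonical plugin API:

```typescript
// Hypothetical sketch: translate agent lifecycle events into face states.
// Event names and payload fields are illustrative assumptions.
type FaceState =
  | "thinking" | "speaking" | "working"
  | "listening" | "idle" | "sleeping";

const LIFECYCLE_TO_FACE: Record<string, { state: FaceState; emotion?: string }> = {
  "agent:reasoning": { state: "thinking" },
  "agent:tts-start": { state: "speaking", emotion: "happy" },
  "agent:tool-call": { state: "working" },
  "agent:stt-start": { state: "listening" },
  "agent:waiting":   { state: "idle" },
  "agent:shutdown":  { state: "sleeping" },
};

async function onLifecycleEvent(event: string): Promise<void> {
  const mapped = LIFECYCLE_TO_FACE[event];
  if (!mapped) return; // unknown events leave the face unchanged
  await fetch("http://127.0.0.1:9999/api/state", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(mapped),
  });
}
```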

Single-Authority Pipeline

Each concern has exactly one authority: state, audio, and mouth amplitude are owned separately to prevent desync and race conditions.

Authority 1

Plugin / Agent Owns State

Lifecycle events and intent changes push thinking, working, speaking, and emotion updates over HTTP/WS.

Authority 2

TTS Owns Audio Stream

Speech chunks are delivered through /api/audio and sequenced with /api/speak + audio-seq for interruption-safe playback.
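A minimal sketch of how the audio-seq handshake could make playback interruption-safe: each `/api/speak` call bumps a sequence id, and chunks tagged with a stale id are dropped. The counter mechanics below are an assumption drawn from this description, not the protocol spec:

```typescript
// Hedged sketch of interruption-safe sequencing (assumed mechanics).
let currentSeq = 0;

// A new /api/speak call starts a new utterance and bumps the sequence.
function beginUtterance(): number {
  currentSeq += 1;
  return currentSeq;
}

// Chunks arriving via /api/audio carry the sequence id they belong to;
// chunks from an interrupted (older) utterance are ignored.
function acceptChunk(chunkSeq: number): boolean {
  return chunkSeq === currentSeq;
}
```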

Authority 3

Viewer Owns Amplitude

The browser computes RMS amplitude directly from waveform data, so mouth motion follows real audio rather than guessed network values.

plugin/agent → /api/speak + /api/state | tts → /api/audio + /api/audio-done | viewer → amplitude from WebAudio analyser
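The viewer's amplitude step can be sketched as a plain RMS over a time-domain buffer; the real viewer may smooth or scale the value before driving the mouth:

```typescript
// Root-mean-square amplitude over one buffer of audio samples.
function rmsAmplitude(samples: Float32Array): number {
  let sumSquares = 0;
  for (const s of samples) sumSquares += s * s;
  return Math.sqrt(sumSquares / samples.length);
}

// In the browser, samples come from a WebAudio AnalyserNode, e.g.:
//   analyser.getFloatTimeDomainData(buffer);
//   face.setAttribute("amplitude", String(rmsAmplitude(buffer)));
```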

Runs everywhere

Phone, tablet, laptop, TV — one web component, every screen size.

kawaii — speaking
cyberpunk — thinking
corporate — listening
halloween — playful

What You Can Build

Open Face is a foundation layer for expressive agent UX, not just a demo widget.

Local Coding Agents

Map terminal/tool lifecycle to visible thinking, speaking, and status on a second display or desktop overlay.

Customer Support Dashboards

Run an agent persona that visibly changes state while triaging requests, escalating, or waiting on external systems.

Live Stream Personas

Drive a branded face in real time from chat + TTS events with interrupt-safe audio sequencing.

Robotics / Kiosks

Render on embedded browsers or control panels using the same JSON protocol used by the web dashboard.

Education + Explainers

Show emotional and cognitive context while tutoring: puzzled, determined, excited, and compound expression blending.

Brand-Driven Face Packs

Create unique personalities with pack-level geometry, palette, animation tuning, and strict schema-backed contracts.

Quick Start

Fastest path: run server, open viewer, push a state.

# Start a face server
bun packages/server/src/index.ts

# Push state from anywhere
curl -X POST http://127.0.0.1:9999/api/state \
  -H "Content-Type: application/json" \
  -d '{"state":"speaking","emotion":"happy","text":"Hello!"}'

// Or use the client library
import { OpenFaceClient } from "@openface/client";

const face = new OpenFaceClient("http://127.0.0.1:9999");
await face.setState({ state: "thinking", emotion: "happy" });
await face.speaking("Hello world!");

Open Protocol + Schemas

State and face-definition contracts are explicit and versioned. Integrate from any runtime that can speak JSON.
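For example, an integration can pre-validate payloads against the allowed state values before sending. The state list below is inferred from the examples on this page, not the versioned schema itself:

```typescript
// Sketch of client-side payload validation (assumed state list).
const KNOWN_STATES = [
  "idle", "thinking", "working", "speaking",
  "listening", "reacting", "sleeping",
] as const;

function isValidState(payload: { state?: string }): boolean {
  return (
    typeof payload.state === "string" &&
    (KNOWN_STATES as readonly string[]).includes(payload.state)
  );
}
```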

Single-Authority Audio Model

State transitions, TTS chunks, and viewer amplitude are intentionally separated for reliable lip sync and interruption handling.

15 Face Packs

Geometry + palette + animation personality are fully pack-driven. Default, Classic, Zen, Robot, Sticker, and custom packs.

Compound Expressions

Blend two emotions with intensity. "Nervously excited" is happy + concerned at 0.7 intensity.

Multi-Surface Delivery

Use the standalone viewer, dashboard overlay, web component embeds, MCP tools, or edge rooms, all sharing the same behavior model.

Accessibility + Resilience

Reduced motion support, colorblind-safe options, strict validation, and test-backed runtime behavior across packages.

", "" ], doc: "https://github.com/thcapp/openface/blob/main/packages/element/README.md", sample: "dashboard.html" }, { id: "edge", label: "Cloudflare Edge", title: "Edge Rooms via Durable Objects", desc: "Deploy low-latency multi-room face endpoints close to users. Keep the same protocol and face behavior model.", meta: "Best for: internet-facing deployments | Transport: Worker + Durable Object rooms", command: [ "cd packages/server-edge", "wr deploy" ], doc: "https://github.com/thcapp/openface/blob/main/packages/server-edge/README.md", sample: "https://github.com/thcapp/openface/blob/main/protocol/v1/spec.md" } ]; const face = document.getElementById("demo-face"); const stateControls = document.getElementById("state-controls"); const emotionControls = document.getElementById("emotion-controls"); const packControls = document.getElementById("pack-controls"); const connectTabs = document.getElementById("connect-tabs"); const connectTitle = document.getElementById("connect-title"); const connectDesc = document.getElementById("connect-desc"); const connectMeta = document.getElementById("connect-meta"); const connectCode = document.getElementById("connect-code"); const connectDoc = document.getElementById("connect-doc"); const connectSample = document.getElementById("connect-sample"); let currentState = "idle"; let currentEmotion = "neutral"; let currentPack = "default"; let currentConnect = "openclaw"; function escapeHtml(text) { return text .replaceAll("&", "&") .replaceAll("<", "<") .replaceAll(">", ">"); } function renderConnectTabs() { connectTabs.textContent = ""; CONNECT_OPTIONS.forEach(opt => { const btn = document.createElement("button"); btn.textContent = opt.label; if (opt.id === currentConnect) btn.classList.add("active"); btn.addEventListener("click", () => { currentConnect = opt.id; renderConnectTabs(); renderConnectPanel(); }); connectTabs.appendChild(btn); }); } function renderConnectPanel() { const opt = CONNECT_OPTIONS.find(item => item.id === 
currentConnect) || CONNECT_OPTIONS[0]; connectTitle.textContent = opt.title; connectDesc.textContent = opt.desc; connectMeta.textContent = opt.meta; connectDoc.setAttribute("href", opt.doc); connectSample.setAttribute("href", opt.sample); connectCode.innerHTML = opt.command.map(line => { const escaped = escapeHtml(line); if (line.startsWith("#")) return `${escaped}`; if (line.length === 0) return "
"; return escaped; }).join("
"); } function renderButtons(container, items, type) { items.forEach(item => { const btn = document.createElement("button"); btn.textContent = type === "pack" ? (PACK_NAMES[item] || item) : item; const isActive = (type === "state" && item === currentState) || (type === "emotion" && item === currentEmotion) || (type === "pack" && item === currentPack); if (isActive) btn.classList.add("active"); btn.addEventListener("click", () => { if (type === "state") { currentState = item; face.setAttribute("state", item); if (item === "speaking") face.setAttribute("amplitude", "0.6"); else face.setAttribute("amplitude", "0"); } else if (type === "emotion") { currentEmotion = item; face.setAttribute("emotion", item); } else if (type === "pack") { currentPack = item; face.setAttribute("face", item); updateGalleryActive(); } render(); }); container.appendChild(btn); }); } function render() { stateControls.textContent = ""; emotionControls.textContent = ""; packControls.textContent = ""; renderButtons(stateControls, STATES, "state"); renderButtons(emotionControls, EMOTIONS, "emotion"); renderButtons(packControls, PACKS, "pack"); } render(); renderConnectTabs(); renderConnectPanel(); // Gallery const PACK_DESCS = { default:"Nick Jr inspired, geometric, bold", classic:"The OG — clean stroked lines, MVP look", zen:"Calm, minimal, meditative", robot:"Mechanical precision, scan lines", warm:"Round, friendly, bouncy", cyberpunk:"Neon-lit, sharp edges, electric", retro:"Terminal green-on-black", kawaii:"Pastel cuteness, big sparkly eyes", corporate:"Navy slate, professional, composed", halloween:"Spooky jack-o-lantern vibes", lobster:"Bioluminescent deep-sea creature", colorblind:"High-contrast, luminance-based", sticker:"Bold sticker mascot, chunky brows", "mono-terminal":"Monochrome pixel terminal persona", "clay-buddy":"Soft clay-like plush companion" }; const galleryGrid = document.getElementById("gallery-grid"); PACKS.forEach(id => { const card = document.createElement("div"); 
card.className = "gallery-card" + (id === currentPack ? " active" : ""); card.dataset.pack = id; const preview = document.createElement("div"); preview.className = "preview"; const miniface = document.createElement("open-face"); miniface.setAttribute("face", id); miniface.setAttribute("state", "idle"); miniface.setAttribute("emotion", "neutral"); preview.appendChild(miniface); const info = document.createElement("div"); info.className = "card-info"; const name = document.createElement("div"); name.className = "card-name"; name.textContent = PACK_NAMES[id] || id; const desc = document.createElement("div"); desc.className = "card-desc"; desc.textContent = PACK_DESCS[id] || ""; info.appendChild(name); info.appendChild(desc); card.appendChild(preview); card.appendChild(info); card.addEventListener("click", () => { currentPack = id; face.setAttribute("face", id); updateGalleryActive(); render(); }); galleryGrid.appendChild(card); }); // Cycle gallery faces through states for visual interest let galleryCycle = 0; const galleryStates = ["idle","thinking","speaking","listening","happy","excited"]; setInterval(() => { galleryCycle++; const faces = galleryGrid.querySelectorAll("open-face"); faces.forEach((f, i) => { const idx = (galleryCycle + i * 2) % galleryStates.length; const s = galleryStates[idx]; if (STATES.includes(s)) { f.setAttribute("state", s); f.setAttribute("emotion", "neutral"); } else { f.setAttribute("state", "idle"); f.setAttribute("emotion", s); } if (s === "speaking") f.setAttribute("amplitude", "0.5"); else f.setAttribute("amplitude", "0"); }); }, 3000); function updateGalleryActive() { galleryGrid.querySelectorAll(".gallery-card").forEach(c => { c.classList.toggle("active", c.dataset.pack === currentPack); }); } // Auto-cycle demo let interacted = false; [stateControls, emotionControls].forEach(c => c.addEventListener("click", () => { interacted = true; }) ); const demoSequence = [ { state: "idle", emotion: "neutral", delay: 2500 }, { state: 
"listening", emotion: "neutral", delay: 2000 }, { state: "thinking", emotion: "determined", delay: 3000 }, { state: "working", emotion: "neutral", delay: 2500 }, { state: "speaking", emotion: "happy", delay: 3000 }, { state: "reacting", emotion: "excited", delay: 2000 }, { state: "thinking", emotion: "skeptical", delay: 2500 }, { state: "speaking", emotion: "proud", delay: 2500 }, { state: "idle", emotion: "playful", delay: 2000 }, ]; let demoIdx = 0; function advanceDemo() { if (interacted) return; const step = demoSequence[demoIdx % demoSequence.length]; currentState = step.state; currentEmotion = step.emotion; face.setAttribute("state", step.state); face.setAttribute("emotion", step.emotion); if (step.state === "speaking") face.setAttribute("amplitude", "0.5"); else face.setAttribute("amplitude", "0"); render(); demoIdx++; setTimeout(advanceDemo, step.delay); } setTimeout(advanceDemo, 3000);