API Reference

HTTP endpoints, WebSocket connections, authentication, and the audio pipeline.

Base URL

The server binds to 0.0.0.0:9999 by default; the local base URL is http://127.0.0.1:9999.

Authentication

Set FACE_API_KEY env var to require auth on the self-hosted server. Clients authenticate via:

Authorization: Bearer <key>
// or query param
/api/state?token=<key>
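
Either style can be wrapped in a small client helper. A sketch (the helper names are illustrative, not part of the API):

```typescript
// Build the Bearer header form of authentication.
function authHeaders(key: string): Record<string, string> {
  return { Authorization: `Bearer ${key}` };
}

// Build the query-parameter form: appends ?token=<key> to any endpoint URL.
function withToken(url: string, key: string): string {
  const u = new URL(url);
  u.searchParams.set("token", key);
  return u.toString();
}
```

For example, `withToken("http://127.0.0.1:9999/api/state", key)` yields the `?token=` form shown above.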

GitHub OAuth (oface.io)

oface.io uses GitHub OAuth for identity. When GITHUB_CLIENT_ID is configured, claiming a face requires logging in with GitHub. Sessions are stored in KV with a 7-day TTL. OAuth is optional — when not configured, claim works without login.

Endpoint            Description
GET /auth/login     Redirects to GitHub OAuth authorize URL
GET /auth/callback  Exchanges code for token, creates session
GET /auth/me        Returns { authenticated, user, avatar }
POST /auth/logout   Clears session and cookie

HTTP Endpoints

GET /

Face viewer — full-screen animated face, auto-connects to WebSocket.

GET /dashboard

4-panel dashboard with chat, activity feed, timeline, and controls.

GET /health

Health check. Returns { "status": "ok" }.

POST /api/state

Push a state update. Body is a JSON state message (see Protocol Reference).

curl -X POST http://127.0.0.1:9999/api/state \
  -H "Content-Type: application/json" \
  -d '{"state":"speaking","emotion":"happy","text":"Hello!"}'

GET /api/state

Read current state. Returns the latest merged state object.

POST /api/speak

Atomic: sets state to speaking and returns a new audio sequence number.

curl -X POST http://127.0.0.1:9999/api/speak
// Response: { "seq": 1 }

POST /api/audio

Push audio chunks for lip sync. Accepts raw binary WAV (audio/wav) or JSON base64 (application/json). Broadcast to all viewers via WebSocket.

POST /api/audio-done

Signal end of audio sequence. Viewers flush remaining audio buffer.
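
A minimal sketch of building the JSON base64 variant of an audio chunk from raw WAV bytes; the field names (`seq`, `audio`) are assumptions, so check the Protocol Reference for the actual message shape:

```typescript
// Sketch: encode a raw WAV chunk as the JSON base64 variant of POST /api/audio.
// Field names ("seq", "audio") are assumptions, not confirmed by this page.
function encodeAudioChunk(seq: number, wavBytes: Uint8Array): string {
  // btoa expects a binary string; convert byte-by-byte.
  let bin = "";
  for (const b of wavBytes) bin += String.fromCharCode(b);
  return JSON.stringify({ seq, audio: btoa(bin) });
}
```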

POST /api/chat

Proxy chat message to OpenClaw gateway. Requires OPENCLAW_GATEWAY_URL env var.

curl -X POST http://127.0.0.1:9999/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message":"What needs to change?"}'

GET /api/history

Returns recent state history (ring buffer, up to 200 entries). Useful for late-joining viewers to catch up.
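
The history buffer behaves like a fixed-capacity ring: once full, the oldest entry is dropped. A minimal sketch of that behavior:

```typescript
// Minimal ring-buffer sketch of the server's state history (up to 200 entries).
class StateHistory<T> {
  private buf: T[] = [];
  constructor(private capacity = 200) {}
  push(entry: T): void {
    this.buf.push(entry);
    if (this.buf.length > this.capacity) this.buf.shift(); // drop oldest
  }
  entries(): T[] {
    return [...this.buf]; // copy, oldest first
  }
}
```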

WebSocket

Path        Direction        Purpose
/ws/viewer  Server → Client  Receive state updates, audio, text. For face displays.
/ws/agent   Client → Server  Push state updates. For AI agents and controllers.

Messages are JSON state objects. Viewers also receive type: "audio" (base64 WAV), type: "audio-seq" (new sequence), type: "audio-done", and type: "history" (replay on connect).
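
A viewer can dispatch on the `type` field; a sketch using the message kinds listed above (payload field names such as `data` and `entries` are assumptions):

```typescript
// Sketch of viewer-side message dispatch. The "type" values match the list
// above; payload field names (e.g. "data", "entries") are assumptions.
type ViewerMessage =
  | { type: "audio"; data: string }         // base64 WAV chunk
  | { type: "audio-seq"; seq: number }      // new audio sequence
  | { type: "audio-done" }                  // flush remaining buffer
  | { type: "history"; entries: unknown[] } // replay on connect
  | { type?: undefined; state?: string };   // plain state object

function kindOf(msg: ViewerMessage): string {
  return msg.type ?? "state"; // messages without "type" are state objects
}
```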

Audio Pipeline

Single authority model — no race conditions:

Authority         Owns                  Transport
Plugin / Agent    State transitions     /api/state + /api/speak
TTS Server        Audio delivery        /api/audio + /api/audio-done
Viewer (browser)  Amplitude extraction  Web Audio API AnalyserNode → RMS

The viewer computes mouth amplitude from actual waveform data, so lip sync follows real audio rather than guessed network values.
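
The amplitude step reduces each time-domain frame to a single RMS value; a sketch of that computation:

```typescript
// Sketch: RMS amplitude over one frame of time-domain samples in [-1, 1],
// as an AnalyserNode's getFloatTimeDomainData() would provide.
function rmsAmplitude(samples: Float32Array): number {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}
```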

Built-in TTS Fallback

The <open-face> element supports browser-native text-to-speech as a zero-config fallback. Add the tts attribute and any text in state messages will be spoken aloud:

<open-face server="ws://localhost:9999/ws/viewer" tts></open-face>

When TTS is enabled:

  • External audio pipeline always takes priority — TTS stays silent while WAV chunks are flowing
  • Face auto-transitions to speaking on utterance start, idle on end
  • Amplitude is simulated from word boundary events
  • Optional: tts-voice, tts-rate (0.1-10), tts-pitch (0-2) attributes
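
A hypothetical helper that keeps rate and pitch inside the documented ranges before setting the attributes:

```typescript
// Hypothetical helper clamping tts-rate (0.1 to 10) and tts-pitch (0 to 2)
// to the ranges documented above.
const clamp = (v: number, lo: number, hi: number) => Math.min(hi, Math.max(lo, v));

function ttsOptions(rate: number, pitch: number) {
  return { rate: clamp(rate, 0.1, 10), pitch: clamp(pitch, 0, 2) };
}
```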

oface.io Product API

All local server endpoints also work per-face on oface.io. Each claimed username gets its own isolated Durable Object with persistent state and WebSocket connections.

Claim a Face

POST https://oface.io/api/claim

Claim a username and get an API key. When GitHub OAuth is configured, requires a valid session (log in via /auth/login first). Returns the face URL and WebSocket endpoint.

curl -X POST https://oface.io/api/claim \
  -H "Content-Type: application/json" \
  -d '{"username":"alice","face":"zen"}'

# Response:
{
  "ok": true,
  "apiKey": "oface_ak_xxxxxxxxxxxx",
  "url": "https://oface.io/alice",
  "wsViewer": "wss://oface.io/alice/ws/viewer",
  "wsAgent": "wss://oface.io/alice/ws/agent"
}
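
The returned URLs follow the per-face pattern, so a client can also derive them from the username alone; a sketch:

```typescript
// Sketch: derive a claimed face's endpoints from its username,
// matching the URL patterns in the claim response above.
function faceEndpoints(username: string) {
  return {
    url: `https://oface.io/${username}`,
    wsViewer: `wss://oface.io/${username}/ws/viewer`,
    wsAgent: `wss://oface.io/${username}/ws/agent`,
  };
}
```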

Check Availability

GET https://oface.io/api/check/:username

Check if a username is available before claiming.

curl https://oface.io/api/check/alice
# { "available": true }

curl https://oface.io/api/check/admin
# { "available": false, "reason": "reserved" }

Per-Face State

POST https://oface.io/:username/api/state

Push state to a claimed face. Requires the face's API key.

curl -X POST https://oface.io/alice/api/state \
  -H "Authorization: Bearer oface_ak_xxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{"state":"speaking","emotion":"happy","text":"Hello!"}'

Face Config

GET https://oface.io/:username/api/config

Read persistent face configuration (pack, head, body settings).

PUT https://oface.io/:username/api/config

Update face configuration. Requires the face's API key.

curl -X PUT https://oface.io/alice/api/config \
  -H "Authorization: Bearer oface_ak_xxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{"face":"cyberpunk","head":{"enabled":true}}'

Per-Face WebSocket

Path                                Auth              Purpose
wss://oface.io/:username/ws/viewer  None (public)     Watch a face in real time
wss://oface.io/:username/ws/agent   API key required  Push state updates from an agent

Per-Face Audio

The same audio pipeline works per-face on oface.io:

Endpoint                        Description
POST /:username/api/speak       Start speaking sequence
POST /:username/api/audio       Push audio chunks
POST /:username/api/audio-done  End audio sequence

Per-Face Viewer and Dashboard

URL                                   Description
https://oface.io/:username            Full-screen face viewer
https://oface.io/:username/dashboard  Dashboard with controls (redirects with server param)
https://oface.io/unclaimed-name       "Available" landing page for unclaimed usernames

Community Gallery

The community gallery lets users submit and browse face packs. Submissions are rate-limited to 10 per IP per hour. Authenticated users (via GitHub OAuth) get their verified GitHub username as the pack author.

POST https://oface.io/api/gallery

Submit a face pack to the gallery. Body: { name, description, tags, pack }. Rate-limited to 10 submissions per IP per hour.

GET https://oface.io/api/gallery

List all gallery packs (metadata only, no full pack JSON).

GET https://oface.io/api/gallery/:id

Get a single gallery pack with full pack JSON.

Admin API

Admin endpoints require a valid GitHub OAuth session with a username in the admin set. Used for gallery moderation and claim management.

Method  Path                         Description
GET     /api/admin/gallery           List all gallery submissions
DELETE  /api/admin/gallery/:id       Remove a gallery submission
PUT     /api/admin/gallery/:id       Update submission (featured, name, description, tags)
GET     /api/admin/claims            List all claimed faces
DELETE  /api/admin/claims/:username  Release a claimed face

Account API

Authenticated users can manage their own claimed faces and gallery submissions. All endpoints require a valid GitHub OAuth session (cookie-based).

Method  Path                                          Description
GET     /api/account/claims                           List your claimed faces (includes API keys)
DELETE  /api/account/claims/:username                 Delete one of your claimed faces
POST    /api/account/claims/:username/regenerate-key  Generate a new API key (invalidates the old one)
GET     /api/account/gallery                          List your gallery submissions
DELETE  /api/account/gallery/:id                      Remove one of your gallery submissions

Ownership is enforced — you can only see and modify resources tied to your GitHub username. Manage your account at openface.live/account.

Rate Limiting

Token bucket per IP. Default: 60 messages/second per agent. Gallery submissions: 10 per IP per hour. Configurable via environment.
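
A minimal token-bucket sketch with the documented default of 60 messages/second (the burst size is an assumption):

```typescript
// Minimal token-bucket sketch: refill at `rate` tokens/sec up to `burst`.
// The 60/sec rate mirrors the documented default; burst size is assumed.
class TokenBucket {
  private tokens: number;
  private last: number;
  constructor(private rate = 60, private burst = 60, now = 0) {
    this.tokens = burst;
    this.last = now;
  }
  // `now` is a timestamp in seconds; returns whether the message is allowed.
  allow(now: number): boolean {
    this.tokens = Math.min(this.burst, this.tokens + (now - this.last) * this.rate);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```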

MCP Tools

8 tools for Claude-compatible clients:

Tool               Description
set_face_state     Set state + emotion + text
set_face_look      Set gaze direction
face_speak         Start speaking with text
face_wink          Wink left or right eye
set_face_progress  Set progress bar for working/loading
face_emote         Compound emotion with intensity
get_face_state     Read current face state
face_reset         Reset to idle/neutral

Run the MCP bridge:

FACE_URL=http://127.0.0.1:9999 bun packages/mcp/src/index.ts

Programmatic Pack Generation

The @openface/renderer package includes a face generator that produces complete face packs from high-level inputs. Generated packs can be pushed to the config API or used directly with the renderer.

Function                                      Description
generateFromArchetype(archetype, variation?)  Generate from one of 7 archetype templates
generateFromPersonality(traits, name?)        Generate from 5 personality dimensions
generateFromDescription(name, description)    Generate from a natural-language description
interpolatePacks(a, b, t)                     Blend two packs by parameter
computeEnergy(pack)                           Quality score (0-1) for generated packs
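
As an illustration of what parameter blending means, a hypothetical sketch in the spirit of interpolatePacks(a, b, t) (not the actual @openface/renderer implementation):

```typescript
// Hypothetical illustration of parameter-wise linear blending, in the spirit
// of interpolatePacks(a, b, t). Not the actual @openface/renderer code.
function blendParams(
  a: Record<string, number>,
  b: Record<string, number>,
  t: number, // 0 = pack a, 1 = pack b
): Record<string, number> {
  const out: Record<string, number> = {};
  for (const k of Object.keys(a)) out[k] = a[k] + (b[k] - a[k]) * t;
  return out;
}
```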

See the Face Pack Guide for usage examples.

Environment Variables

Variable                Default     Description
PORT                    9999        Server port
FACE_API_KEY            (none)      API key for auth (optional)
OPENCLAW_GATEWAY_URL    (none)      OpenClaw gateway for /api/chat proxy
OPENCLAW_GATEWAY_TOKEN  (none)      Gateway auth token
OPENCLAW_SESSION_KEY    agent:main  Session key for gateway
FACE_URL                (none)      Server URL for MCP bridge