Open-source CLI for motion intelligence

Capture front-end behavior with the fidelity your AI agent needs.

Record interactions, detect DOM and style transitions, and export structured traces that reproduce real animations instead of approximating them.

  • ms-level timing precision
  • JSONL raw trace persistence
  • Prompt-ready for Claude / GPT workflows
Illustration of recording and extracted animation profile output.

capture-session

$ 
🚀 browser launched, instrumentation loaded
👆 click: button[data-testid="checkout"]
🔄 style-change: transform, opacity
✅ profile extracted: click-on-button

animation-profile.json

{
  "trigger": { "event": "click" },
  "effect": {
    "properties": ["transform", "opacity"],
    "timing": { "duration": "280ms", "easing": "cubic-bezier(...)" }
  }
}

What it captures

Semantic traces, not noisy logs.

Capture Layer

Interaction Intelligence

Clicks, hovers, keyboard input, and scroll intent are logged with stable selectors and temporal context.

  • Trigger with exact timestamp
  • Selector resilient to refactors
  • Intent grouped by interaction burst
Signal
trigger + selector + timestamp
Outcome
replayable user intent
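The trigger + selector + timestamp triple above can be pictured as a small record builder. This is a minimal sketch with hypothetical helpers (`stableSelector`, `traceEvent`), not the tool's actual internals:

```javascript
// Illustrative capture-layer record (hypothetical shape): each
// interaction becomes a trigger + selector + timestamp triple.
function stableSelector(el) {
  // Prefer attributes that survive refactors over positional paths.
  if (el.testId) return `${el.tag}[data-testid="${el.testId}"]`;
  if (el.className) return `${el.tag}.${el.className}`;
  return el.tag;
}

function traceEvent(type, el, timestamp) {
  return { event: type, selector: stableSelector(el), timestamp };
}

const record = traceEvent(
  "click",
  { tag: "button", testId: "checkout" },
  1717023458921
);
// record.selector === 'button[data-testid="checkout"]'
```

Preferring `data-testid` over class names is what makes the selector resilient: test ids rarely change during styling refactors.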

Analysis Layer

DOM + Style Diffing

Before/after snapshots isolate meaningful mutations and visual property changes while filtering out framework noise.

  • Mutation observer compression
  • Computed style delta extraction
  • Transition and easing inference
Signal
mutation + computed style delta
Outcome
animation cause/effect mapping
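Computed style delta extraction can be sketched as a plain diff over two snapshots. This is an illustrative version, not the tool's real diffing code: only properties that actually changed survive, which is how unchanged framework-injected styles get filtered out.

```javascript
// Illustrative computed-style delta: compare before/after snapshots
// and keep only the properties whose values changed.
function styleDelta(before, after) {
  const delta = {};
  for (const prop of Object.keys(after)) {
    if (before[prop] !== after[prop]) {
      delta[prop] = { from: before[prop], to: after[prop] };
    }
  }
  return delta;
}

const delta = styleDelta(
  { transform: "scale(1)", opacity: "1", color: "rgb(0, 0, 0)" },
  { transform: "scale(0.95)", opacity: "0.8", color: "rgb(0, 0, 0)" }
);
// Unchanged "color" is dropped; only transform and opacity remain.
```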

Export Layer

AI-Ready Exports

Outputs are structured for prompting, so coding agents can regenerate behavior with less ambiguity and less cleanup.

  • Profile schema ready for LLM context
  • Trace, markdown, and prompt formats
  • Fast handoff from capture to implementation
Formats
json, md, prompt
Outcome
faster reproduction cycles
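A prompt export might be generated from a profile like this. The exact prompt wording and the `profileToPrompt` helper are assumptions for illustration, not the CLI's real output format:

```javascript
// Hypothetical formatter: turn an animation profile into prompt text
// a coding agent can consume. The format is an assumption.
function profileToPrompt(profile) {
  const { trigger, effect } = profile;
  return [
    `On ${trigger.event} of ${trigger.selector}:`,
    `- animate ${effect.properties.join(", ")}`,
    `- duration ${effect.timing.duration}, easing ${effect.timing.easing}`,
  ].join("\n");
}

const prompt = profileToPrompt({
  trigger: { event: "click", selector: "button.submit" },
  effect: {
    properties: ["transform", "opacity"],
    timing: { duration: "150ms", easing: "ease-out" },
  },
});
```

Because the profile is already structured, the prompt carries concrete timing and selectors instead of asking the model to guess them.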

Workflow

Capture to code in three deliberate phases.

  1. Instrument and Record

    Launch a controlled browser, inject observers, and interact naturally with the target experience.

    npm run record -- https://example.com -d 30
  2. Extract Animation Profiles

    Group related traces into trigger/effect patterns with timing and easing metadata.

    npm start view ./captures/session_xyz
  3. Export for AI Generation

    Deliver structured prompts and machine-readable files your model can convert into production code.

    npm run export -- ./captures/session_xyz -f prompt
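Step 2's grouping of related traces can be sketched as clustering raw events into interaction "bursts" by time proximity. The 300 ms window below is an illustrative assumption, not the tool's actual threshold:

```javascript
// Sketch of trace grouping: events within a short time window of the
// previous event belong to the same interaction burst.
// windowMs = 300 is an assumed threshold for illustration.
function groupIntoBursts(events, windowMs = 300) {
  const bursts = [];
  let current = [];
  for (const ev of events) {
    const last = current[current.length - 1];
    if (last && ev.timestamp - last.timestamp > windowMs) {
      bursts.push(current);
      current = [];
    }
    current.push(ev);
  }
  if (current.length) bursts.push(current);
  return bursts;
}

const bursts = groupIntoBursts([
  { event: "click", timestamp: 1000 },
  { event: "style-change", timestamp: 1150 },
  { event: "hover", timestamp: 2500 },
]);
// Two bursts: the click plus its style change, then the later hover.
```

Each burst then becomes a candidate trigger/effect pair: the first user event is the trigger, the style changes that follow it are the effect.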

Output Surface

One capture session, three usable artifacts.

{
  "event": "click",
  "selector": "button.submit",
  "stylesBefore": { "transform": "scale(1)" },
  "stylesAfter": { "transform": "scale(0.95)" },
  "timestamp": 1717023458921
}
{
  "name": "button-click-feedback",
  "trigger": { "event": "click", "selector": "button.submit" },
  "effect": {
    "type": "transition",
    "properties": ["transform", "opacity"],
    "timing": { "duration": "150ms", "easing": "ease-out" }
  }
}
You are a frontend engineer.
Recreate these observed interactions exactly.
Prefer event listeners + class toggles.
Do not invent extra behavior.
Input: animation profile + selector map + timing metadata.

Get Started

Start your first capture in under five minutes.

Install dependencies, run a record session, and export a prompt for your coding assistant.

npm install
npm run build
npm run record -- https://github.com -d 30
npm run export -- ./captures/session_xyz -f prompt