
Concepts & Architecture

Architectural decisions, runtime lifecycle, and the mental model behind MotionGPU.


This page explains the architectural decisions behind MotionGPU, how the runtime pieces fit together, and what happens on every frame. Understanding this foundation makes the rest of the documentation much easier to navigate.

Design goals

MotionGPU is intentionally strict in input contracts and explicit in scheduling. These four goals drive every design decision:

| Goal | What it means in practice |
| --- | --- |
| Deterministic pipeline rebuilds | Renderer recreation is keyed off a stable signature derived from compiled shader source, uniform layout, and texture bindings — not from live values. This makes rebuild triggers predictable. |
| Predictable frame flow | The scheduler owns all invalidation and render-mode gating. There is no hidden “auto-render on change” behaviour except what you explicitly opt into via `useFrame` invalidation policies. |
| Minimal hidden magic | Runtime state changes go through explicit calls (`setUniform`, `setTexture`, `invalidate`). No proxy traps, no implicit reactivity inside the render loop. |
| Recoverable failure UX | Every error — from missing WebGPU support to WGSL syntax mistakes — is normalized into a structured report with a title, hint, and optional source snippet. The default overlay is opt-out and can be replaced with a custom renderer. |

Package layout

```
packages/motion-gpu/src/lib/
├── index.ts                        # Public root exports
├── advanced.ts                     # Advanced exports (user context + scheduler helpers + extended types)
├── advanced-scheduler.ts           # Scheduler presets + debug snapshot helpers
├── FragCanvas.svelte               # Runtime entry component
├── MotionGPUErrorOverlay.svelte    # Default error-overlay UI component
├── Portal.svelte                   # DOM portal utility for error overlay
├── current-writable.ts             # CurrentWritable<T> reactive store
├── frame-context.ts                # Scheduler registry + useFrame hook
├── motiongpu-context.ts            # useMotionGPU context provider
├── use-texture.ts                  # Reactive URL texture loading hook
├── use-motiongpu-user-context.ts   # Advanced namespaced user state hook
└── core/
    ├── types.ts                    # All shared type definitions
    ├── material.ts                 # defineMaterial + resolveMaterial
    ├── material-preprocess.ts      # #include / defines expansion + line mapping
    ├── renderer.ts                 # WebGPU renderer creation + frame execution
    ├── shader.ts                   # WGSL code generation
    ├── uniforms.ts                 # Type inference, layout, packing
    ├── textures.ts                 # Texture normalization + helpers
    ├── texture-loader.ts           # URL fetch, decode, blob cache
    ├── render-graph.ts             # Pass execution planner
    ├── render-targets.ts           # Render target resolution
    ├── recompile-policy.ts         # Pipeline signature builder
    ├── error-report.ts             # Error normalization + classification
    └── error-diagnostics.ts        # Shader compile diagnostics payload
```

FragCanvas runtime lifecycle

`FragCanvas` is the single entry point that ties everything together. Here is what happens from mount to destroy:

Initialization (mount)

  1. Create frame registry — instantiates the scheduler with default stage, timing, and profiling state.
  2. Set up Svelte context — provides `MotionGPUContext` so `useMotionGPU()`, `useFrame()`, etc. work inside child components.
  3. Resolve material — calls `resolveMaterial(material)` to produce the preprocessed WGSL fragment, uniform layout, texture keys, and a deterministic `signature`.
  4. Build pipeline signature — combines `materialSignature + outputColorSpace` into the final renderer key.
  5. Create renderer — if no renderer exists or the key changed, calls `createRenderer(...)`, which:
    • requests a WebGPU adapter and device,
    • compiles the WGSL shader module (with compilation error diagnostics),
    • creates bind group layouts, pipeline, and initial buffers.
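Steps 4–5 can be sketched as a key comparison. The helper names and the renderer shape below are assumptions for illustration, not MotionGPU's actual internals; only the decision rule (rebuild when the combined key changes, reuse otherwise) comes from the docs:

```typescript
// Hypothetical sketch of the rebuild-key check from steps 4-5.
interface RendererHandle {
  key: string;
  destroy(): void;
}

// Combine material signature and output color space into the renderer key.
function pipelineKey(materialSignature: string, outputColorSpace: 'srgb' | 'linear'): string {
  return `${materialSignature}|${outputColorSpace}`;
}

function ensureRenderer(
  current: RendererHandle | null,
  materialSignature: string,
  outputColorSpace: 'srgb' | 'linear',
  create: (key: string) => RendererHandle
): RendererHandle {
  const key = pipelineKey(materialSignature, outputColorSpace);
  if (current && current.key === key) return current; // signature unchanged: reuse
  current?.destroy();                                 // key changed: release old GPU resources
  return create(key);
}
```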

Per-frame loop (requestAnimationFrame)

Each frame follows this exact sequence:

  1. Compute timing — `time` accumulates, `delta` is clamped to `maxDelta`.
  2. Update size — reads `canvas.getBoundingClientRect()` and applies DPR.
  3. Run scheduler — executes all registered `useFrame` tasks in topologically sorted stage/task order.
  4. Check render gate — `shouldRender()` evaluates render mode + invalidation + advance flags.
  5. Render — if the gate passes:
    • resolves effective uniform/texture values (material defaults + runtime overrides) into reusable render payloads,
    • uploads changed textures,
    • writes dirty uniform ranges to the GPU buffer,
    • executes the base fullscreen pass,
    • executes post-process passes through the render graph,
    • presents the final output to the canvas.
  6. End frame — clears one-frame invalidation and advance flags.
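The sequence above can be sketched as a single frame function. Everything here is illustrative — the state shape and helper names are assumptions — but the order of operations mirrors the documented loop, including the `maxDelta` clamp from step 1:

```typescript
// Illustrative sketch of the per-frame sequence (not the real runtime code).
function clampDelta(rawDeltaMs: number, maxDeltaMs: number): number {
  return Math.min(rawDeltaMs, maxDeltaMs);
}

interface FrameState {
  time: number;
  last: number;
  maxDelta: number;
  runScheduler: (delta: number) => void;
  shouldRender: () => boolean;
  render: () => void;
  endFrame: () => void;
}

function frame(state: FrameState, now: number): void {
  const delta = clampDelta(now - state.last, state.maxDelta); // 1. timing
  state.last = now;
  state.time += delta;
  // 2. size update omitted here (DOM-dependent)
  state.runScheduler(delta);  // 3. scheduler tasks in resolved order
  if (state.shouldRender()) { // 4. render gate
    state.render();           // 5. render
  }
  state.endFrame();           // 6. clear one-frame flags
}
```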

Teardown (destroy)

  1. Cancel the `requestAnimationFrame` loop.
  2. Destroy the renderer (releases all GPU resources).
  3. Clear the scheduler registry.

Rebuild and retry policy

Not every change triggers a full renderer rebuild. This table clarifies what does and what does not:

| Change | Triggers rebuild? | What happens instead |
| --- | --- | --- |
| Material signature change (shader, uniform layout, texture bindings) | Yes | Full renderer recreation |
| Output color-space change (`'srgb'` ↔ `'linear'`) | Yes | Full renderer recreation |
| Runtime uniform value change | No | Dirty-range buffer write only |
| Runtime texture source change | No | Texture re-upload only |
| Canvas resize | No | Render target resize, re-render |
| Clear color change | No | Applied next frame |

When renderer creation fails, `FragCanvas` retries with exponential backoff (`250ms` → `500ms` → `1000ms` → … → `8000ms` cap). The backoff resets when the pipeline signature changes.
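The documented backoff curve — doubling from 250ms up to an 8000ms cap — reduces to a one-liner. The function name is an assumption for illustration:

```typescript
// Sketch of the retry delay curve: 250, 500, 1000, ... capped at 8000ms.
function retryDelayMs(attempt: number): number {
  return Math.min(250 * 2 ** attempt, 8000);
}
```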

Data flow: uniforms and textures

Data flows through three layers, from compile-time defaults to per-frame overrides:

| Stage | Uniforms | Textures |
| --- | --- | --- |
| Material definition | Static defaults in `defineMaterial({ uniforms })` | Static `TextureDefinition` map in `defineMaterial({ textures })` |
| Frame runtime | `state.setUniform(name, value)` in `useFrame` callbacks | `state.setTexture(name, value)` in `useFrame` callbacks |
| Render submit | Effective values are resolved from defaults + runtime overrides into a reusable frame payload map (runtime wins on conflicts). | Material definitions + runtime source overrides are resolved into reusable texture payloads (runtime wins on conflicts). |

Setting an unknown uniform or texture name throws immediately — there is no silent fallback.
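The resolution rule and the strict name check can be sketched together. The function name and generic shape are illustrative; only the two behaviours — runtime overrides win, unknown names throw — come from the docs:

```typescript
// Minimal sketch: merge material defaults with runtime overrides,
// rejecting any override whose name was never declared.
function resolveEffective<T>(
  defaults: Record<string, T>,
  overrides: Record<string, T>
): Record<string, T> {
  for (const name of Object.keys(overrides)) {
    if (!(name in defaults)) {
      throw new Error(`Unknown uniform/texture "${name}" (no silent fallback)`);
    }
  }
  return { ...defaults, ...overrides }; // runtime wins on conflicts
}
```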

Scheduling architecture

The scheduler is a DAG-based execution engine:

| Concept | Description |
| --- | --- |
| Task | A `useFrame` callback with a key, stage assignment, invalidation policy, and dependency edges. |
| Stage | An ordered group of tasks. Stages can have their own `before`/`after` dependencies and optional wrapper callbacks. |
| Dependencies | `before`/`after` on both tasks and stages. Resolved via topological sort — cycles and missing references throw. |
| Render modes | `always` (continuous), `on-demand` (invalidation-driven), `manual` (explicit `advance()` only). |

The scheduler exposes its resolved execution order via `getSchedule()` for debugging. The advanced entrypoint helper `captureSchedulerDebugSnapshot(...)` bundles the schedule, last-run timings, and a profiling snapshot into one payload for debug tooling.
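The dependency resolution described in the table can be sketched as a depth-first topological sort — cycles and missing references throw, as documented. Task names and the `after` edge shape are assumptions for illustration, not the scheduler's actual implementation:

```typescript
// Hedged sketch of `after`-edge resolution via topological sort.
function topoSort(tasks: string[], after: Record<string, string[]>): string[] {
  const order: string[] = [];
  const state = new Map<string, 'visiting' | 'done'>();
  const visit = (task: string): void => {
    if (state.get(task) === 'done') return;
    if (state.get(task) === 'visiting') throw new Error(`Cycle involving "${task}"`);
    state.set(task, 'visiting');
    for (const dep of after[task] ?? []) {
      if (!tasks.includes(dep)) throw new Error(`Missing reference "${dep}"`);
      visit(dep); // dependencies run first
    }
    state.set(task, 'done');
    order.push(task);
  };
  tasks.forEach(visit);
  return order;
}
```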

Render graph architecture

Post-processing uses a slot graph with built-in ping-pong slots plus optional named render-target slots:

| Slot | Purpose |
| --- | --- |
| `source` | Current scene/result surface |
| `target` | Ping-pong companion surface (allocated when needed) |
| `canvas` | Presentation surface (the actual visible canvas) |
| `<targetName>` | Named off-screen surface resolved from `renderTargets[targetName]` |

Without any passes, the base shader renders directly to `canvas`. When passes are added, `planRenderGraph(...)` validates the pass sequence, resolves clear/preserve flags, and produces an immutable execution plan.

Validation includes:

  • `needsSwap: true` is only valid for `source -> target`.
  • `canvas` is output-only.
  • Named slot reads/writes must reference declared `renderTargets`.
  • Inputs must be written before first read (`target` and named targets are tracked per frame).

After all passes execute, if the final output is not `canvas`, the renderer blits the resolved final surface (`source`, `target`, or named target) to `canvas`.
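The written-before-first-read rule can be sketched as a single forward scan over the pass list. The pass shape here is an assumption for illustration; only the rules — `canvas` is output-only, slots must be declared, inputs must be written first — come from the validation list above:

```typescript
// Illustrative sketch of render-graph validation (not planRenderGraph itself).
interface PassSketch { input: string; output: string; }

function validatePasses(passes: PassSketch[], declaredTargets: string[]): void {
  // `source` starts written: the base pass renders into it before any post pass runs.
  const written = new Set(['source']);
  const known = ['source', 'target', ...declaredTargets];
  for (const pass of passes) {
    if (pass.input === 'canvas') throw new Error('canvas is output-only');
    if (!known.includes(pass.input) || (pass.output !== 'canvas' && !known.includes(pass.output))) {
      throw new Error('pass references an undeclared slot');
    }
    if (!written.has(pass.input)) {
      throw new Error(`slot "${pass.input}" read before it was written`);
    }
    written.add(pass.output);
  }
}
```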

Diagnostics model

All initialization and render failures are normalized into a stable `MotionGPUErrorReport` shape:

```ts
{
  title: string;      // Short category: "WGSL compilation failed", "WebGPU unavailable", etc.
  message: string;    // Primary human-readable error message
  hint: string;       // Suggested fix or next step
  details: string[];  // Additional compiler messages or multi-line info
  stack: string[];    // Stack trace lines
  rawMessage: string; // Original unmodified error message
  phase: 'initialization' | 'render';
  source: {           // Present for shader compile errors
    component: string;
    location: string;
    line: number;
    column?: number;
    snippet: Array<{ number: number; code: string; highlight: boolean }>;
  } | null;
}
```

The default overlay displays this information automatically. You can:

  • disable all error UI with `showErrorOverlay={false}`,
  • keep UI off-canvas and handle reports only via `onError`,
  • provide `errorRenderer` to replace the default `MotionGPUErrorOverlay` while preserving the same report payload.
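Normalization itself can be sketched as a function from a raw error to (a subset of) the report shape. The classification heuristic below is invented for illustration — real code would inspect structured GPU compilation info — and only the output fields mirror the documented `MotionGPUErrorReport`:

```typescript
// Hedged sketch of error normalization into a partial report shape.
interface ErrorReportSketch {
  title: string;
  message: string;
  hint: string;
  phase: 'initialization' | 'render';
}

function normalizeError(err: unknown, phase: 'initialization' | 'render'): ErrorReportSketch {
  const message = err instanceof Error ? err.message : String(err);
  // Illustrative classification only; the real classifier is in error-report.ts.
  const isCompile = /wgsl|shader/i.test(message);
  return {
    title: isCompile ? 'WGSL compilation failed' : 'MotionGPU error',
    message,
    hint: isCompile ? 'Check the highlighted shader source line.' : 'See details for more context.',
    phase,
  };
}
```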