This page explains the architectural decisions behind Motion GPU, how the runtime pieces fit together, and what happens on every frame. Understanding this foundation makes the rest of the documentation much easier to navigate.
## Design goals
Motion GPU is intentionally strict about input contracts and explicit about scheduling. These four goals drive every design decision:
## Package layout
## FragCanvas runtime lifecycle
FragCanvas is the single entrypoint that ties everything together. Here is what happens from mount to destroy:
### Initialization (mount)
- Create frame registry — instantiates the scheduler with default stage, timing, and profiling state.
- Set up adapter context — provides `MotionGPUContext` and the frame registry context so `useMotionGPU()`, `useFrame()`, `usePointer()`, etc. work inside child components.
- Start core runtime loop — `createMotionGPURuntimeLoop(...)` receives adapter getters and owns material resolution, renderer rebuild policy, retries, and render scheduling.
- Resolve material (core) — calls `resolveMaterial(material)` to produce preprocessed WGSL, the uniform layout, texture keys, storage buffer keys, and a deterministic `signature`.
- Create renderer (core) — when needed, `createRenderer(...)` requests the adapter/device, compiles WGSL, allocates storage buffers, and builds bind groups plus pipelines (render and compute).
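To make the "deterministic `signature`" idea concrete, here is a minimal sketch of how such a signature could be derived from a resolved material. The `ResolvedMaterial` shape and `materialSignature` helper are hypothetical illustrations, not the actual Motion GPU internals; the key property is that field ordering never affects the result.

```typescript
// Hypothetical shape of a resolved material (not the real internal type).
type ResolvedMaterial = {
  wgsl: string;
  uniformNames: string[];
  textureKeys: string[];
  storageKeys: string[];
};

// Sorting every key list before serializing makes the signature
// deterministic: identical materials always hash to the same string,
// regardless of the order in which their fields were declared.
function materialSignature(m: ResolvedMaterial): string {
  return JSON.stringify({
    wgsl: m.wgsl,
    uniformNames: [...m.uniformNames].sort(),
    textureKeys: [...m.textureKeys].sort(),
    storageKeys: [...m.storageKeys].sort(),
  });
}
```

A stable signature like this is what lets the runtime decide cheaply whether a material change actually requires a renderer rebuild.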
### Per-frame loop (`requestAnimationFrame`)
Each frame follows this exact sequence:
- Compute timing — `time` accumulates; `delta` is clamped to `maxDelta`.
- Update size — reads `canvas.getBoundingClientRect()` and applies the device pixel ratio (DPR).
- Run scheduler — executes all registered `useFrame` tasks in topologically sorted stage/task order.
- Check render gate — `shouldRender()` evaluates render mode + invalidation + advance flags.
- Render — if the gate passes:
  - Resolves effective uniform/texture values (material defaults + runtime overrides) into reusable render payloads,
  - Flushes pending storage buffer writes to the GPU,
  - Uploads changed textures,
  - Writes dirty uniform ranges to the GPU buffer,
  - Dispatches compute passes (workgroup execution on storage buffers/textures),
  - Executes the base fullscreen pass,
  - Executes post-process render passes through the render graph,
  - Presents the final output to the canvas.
- End frame — clears one-frame invalidation and advance flags.
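The first and fourth steps can be sketched as plain functions. Names, shapes, and the `"always"`/`"demand"` mode values below are illustrative assumptions, not the actual Motion GPU API:

```typescript
type FrameState = { time: number; delta: number };

// Compute timing: time accumulates, and delta is clamped to maxDelta so a
// backgrounded tab or a debugger pause cannot produce one huge step.
function computeTiming(
  state: FrameState,
  nowMs: number,
  lastMs: number,
  maxDelta: number
): FrameState {
  const delta = Math.min((nowMs - lastMs) / 1000, maxDelta);
  return { time: state.time + delta, delta };
}

// Render gate: a continuous mode always renders; an on-demand mode renders
// only when something invalidated the frame or requested a manual advance.
function shouldRender(
  mode: "always" | "demand",
  invalidated: boolean,
  advance: boolean
): boolean {
  return mode === "always" || invalidated || advance;
}
```

The clamp is why a 1-second stall still advances `time` by at most `maxDelta` on the next frame.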
### Teardown (destroy)
- Cancel the `requestAnimationFrame` loop.
- Destroy the renderer (releases all GPU resources).
- Clear the scheduler registry.
## Rebuild and retry policy
Not every change triggers a full renderer rebuild. This table clarifies what does and what does not:
When renderer creation fails, FragCanvas retries with exponential backoff (250ms → 500ms → 1000ms → … → 8000ms cap). The backoff resets when the pipeline signature changes.
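The retry policy can be sketched as a small state machine (a hypothetical `RetryBackoff` helper, not the real implementation): the delay doubles from 250 ms up to an 8000 ms cap, and resets whenever the pipeline signature changes.

```typescript
class RetryBackoff {
  private delay = 250;
  private signature: string | null = null;

  // Returns the delay (ms) to wait before the next renderer-creation attempt.
  next(signature: string): number {
    if (signature !== this.signature) {
      // New pipeline signature: the failure context changed, start over.
      this.signature = signature;
      this.delay = 250;
    }
    const current = this.delay;
    this.delay = Math.min(this.delay * 2, 8000); // exponential, capped
    return current;
  }
}
```

The signature reset matters: if the user fixes a broken shader, the next attempt should happen quickly rather than waiting out a long accumulated backoff.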
## Data flow: uniforms and textures
Data flows through three layers, from compile-time defaults to per-frame overrides:
Setting an unknown uniform or texture name throws immediately — there is no silent fallback.
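A minimal sketch of that layered lookup, assuming a simplified model where both layers are plain maps (the real runtime resolves into reusable render payloads, as described above): defaults form the base layer, runtime overrides win, and an unknown name throws instead of silently falling back.

```typescript
function resolveUniform(
  defaults: Map<string, number>,
  overrides: Map<string, number>,
  name: string
): number {
  // Unknown names fail fast: the material's uniform layout is the
  // single source of truth for which names exist.
  if (!defaults.has(name)) {
    throw new Error(`Unknown uniform "${name}"`);
  }
  // Runtime overrides take precedence over material defaults.
  return overrides.has(name) ? overrides.get(name)! : defaults.get(name)!;
}
```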
## Scheduling architecture
The scheduler is a DAG-based execution engine:
The scheduler exposes its resolved execution order via `getSchedule()` for debugging. The advanced entrypoint helper `captureSchedulerDebugSnapshot(...)` bundles the schedule, last-run timings, and a profiling snapshot into one payload for debug tooling.
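The core of any DAG-based scheduler is a topological sort over task ordering constraints. This is a generic sketch of that mechanism, not Motion GPU's actual scheduler (which also handles stages and profiling):

```typescript
// Orders tasks so that for every edge [before, after], `before`
// runs first. Throws if the constraints contain a cycle.
function topoSort(tasks: string[], edges: [string, string][]): string[] {
  const indegree = new Map(tasks.map((t) => [t, 0]));
  const adj = new Map<string, string[]>(tasks.map((t) => [t, []]));
  for (const [before, after] of edges) {
    adj.get(before)!.push(after);
    indegree.set(after, indegree.get(after)! + 1);
  }
  // Kahn's algorithm: repeatedly emit tasks with no unmet dependencies.
  const queue = tasks.filter((t) => indegree.get(t) === 0);
  const order: string[] = [];
  while (queue.length > 0) {
    const t = queue.shift()!;
    order.push(t);
    for (const next of adj.get(t)!) {
      indegree.set(next, indegree.get(next)! - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  if (order.length !== tasks.length) throw new Error("cycle in schedule");
  return order;
}
```

Resolving the order once (and re-resolving only when tasks register or unregister) keeps the per-frame cost of the scheduler to a flat array walk.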
## Render graph architecture
Post-processing uses a slot graph with built-in ping-pong slots plus optional named render-target slots:
Without any passes, the base shader renders directly to the canvas. When passes are added, `planRenderGraph(...)` validates the pass sequence, resolves clear/preserve flags, and produces an immutable execution plan.
Validation includes:
- `needsSwap: true` is only valid for `source -> target`.
- `canvas` is output-only.
- Named slot reads/writes must reference declared `renderTargets`.
- Inputs must be written before their first read (`target` and named targets are tracked per frame).
After all passes execute, if the final output is not the canvas, the renderer blits the resolved final surface (`source`, `target`, or a named target) to the canvas.
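The validation rules above can be sketched as a single walk over the pass list. The `Pass` shape here is a hypothetical simplification (the real planner also resolves clear/preserve flags), and it assumes the base pass has already written `source`:

```typescript
type Pass = { input: string; output: string; needsSwap?: boolean };

function validatePasses(passes: Pass[]): void {
  // Assumption for this sketch: the base fullscreen pass writes `source`
  // before any post-process pass runs.
  const written = new Set<string>(["source"]);
  for (const p of passes) {
    // canvas is output-only: no pass may read from it.
    if (p.input === "canvas") throw new Error("canvas is output-only");
    // needsSwap only makes sense for the built-in ping-pong pair.
    if (p.needsSwap && !(p.input === "source" && p.output === "target")) {
      throw new Error("needsSwap is only valid for source -> target");
    }
    // Every input must have been written earlier in the frame.
    if (!written.has(p.input)) {
      throw new Error(`"${p.input}" read before it was written`);
    }
    written.add(p.output);
  }
}
```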
### Compute passes in the render graph
Compute passes (`ComputePass`, `PingPongComputePass`) coexist in the same pass array as render passes. They have `kind: 'compute'` and:
- Do not participate in slot routing (`source`/`target`/`canvas`).
- Execute compute pipelines with configurable workgroup dispatch.
- Share the same command encoder and submit queue as render passes.
- `PingPongComputePass` runs multiple iterations per frame, alternating read/write bindings.
- Reuse cached compute storage bind-group layouts/bind groups for stable resource topologies.
- Reuse ping-pong A→B / B→A bind groups across iterations.
Storage buffers are allocated with `STORAGE | COPY_DST | COPY_SRC` usage flags and cleaned up on renderer destroy.
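The ping-pong alternation described above reduces to selecting between two cached bind groups by iteration parity. A minimal sketch (the A→B / B→A labels stand in for real `GPUBindGroup` objects):

```typescript
// Returns the dispatch order for one frame of a ping-pong compute pass:
// the two bind groups are created once and alternated, never rebuilt.
function pingPongOrder(iterations: number): string[] {
  const bindGroups = ["A->B", "B->A"]; // cached across frames
  const dispatches: string[] = [];
  for (let i = 0; i < iterations; i++) {
    dispatches.push(bindGroups[i % 2]); // even: read A, write B; odd: reverse
  }
  return dispatches;
}
```

One consequence worth noting: with an odd iteration count the final result lands in buffer B, and with an even count it lands back in A, so downstream reads depend on the parity.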
## Diagnostics model
All initialization and render failures are normalized into a stable `MotionGPUErrorReport` shape:
The default overlay displays this information automatically. You can:
- disable all error UI with `showErrorOverlay={false}`,
- keep the UI off-canvas and handle reports only via `onError`,
- provide `errorRenderer` to replace the default `MotionGPUErrorOverlay` while preserving the same report payload,
- enable bounded history callbacks with `errorHistoryLimit` + `onErrorHistory`.
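A bounded history like the one implied by `errorHistoryLimit` + `onErrorHistory` can be sketched as a ring-style buffer that drops the oldest report when full. The `ErrorReport` fields and `ErrorHistory` class here are illustrative assumptions, not the actual report shape:

```typescript
type ErrorReport = { phase: string; message: string };

class ErrorHistory {
  private reports: ErrorReport[] = [];

  constructor(
    private limit: number,
    private onHistory: (reports: ErrorReport[]) => void
  ) {}

  push(report: ErrorReport): void {
    this.reports.push(report);
    // Bounded: once the limit is reached, the oldest report is dropped,
    // so memory stays constant even under repeated failures.
    if (this.reports.length > this.limit) this.reports.shift();
    // Callback receives a copy so consumers cannot mutate internal state.
    this.onHistory([...this.reports]);
  }
}
```

The bound matters because a renderer stuck in a retry loop can emit one report per backoff attempt indefinitely.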