TypeGPU is type-safe WebGPU in TypeScript. Use whenever the user writes, debugs, or designs TypeGPU code: 'use gpu' shader functions, tgpu.fn, buffers, textures, bind groups, compute and render pipelines, vertex layouts, slots, accessors, and any TypeGPU API. Shader logic and CPU-side resources are tightly coupled - handle both sides here even if the user only mentions one (e.g. "how do I write a shader", "how do I create a buffer"). Trigger on any mention of typegpu, tgpu, "use gpu", TypedGPU, or WebGPU code written using TypeGPU's schema API (d.*, tgpu.*, std.*). Do NOT trigger for raw WebGPU (using GPUDevice/GPURenderPipeline directly without tgpu), WGSL-only questions, Three.js, Babylon.js, or WebGL.
Install: `npx skill4agent add software-mansion-labs/skills typegpu` (targets TypeGPU 0.11.2).

Deep-dive references shipped with this skill: `references/types.md`, `references/shaders.md`, `references/pipelines.md`, `references/matrices.md`, `references/textures.md`, `references/noise.md`, `references/sdf.md`, `references/setup.md` (build tooling: `unplugin-typegpu`), `references/advanced.md`.

## Setup

```ts
import tgpu, { d, std, common } from 'typegpu';

const root = await tgpu.init(); // request a GPU device
const root = tgpu.initFromDevice(device); // or wrap an existing GPUDevice
const context = root.configureContext({ canvas, alphaMode: 'premultiplied' });
```

## Data schemas (`d.*`)

Scalars: `d.f32`, `d.i32`, `d.u32`, `d.f16`. Note: `d.bool` is NOT host-shareable - use `d.u32` in buffers.

Vectors and matrices:

```ts
d.vec2f d.vec3f d.vec4f // f32
d.vec2i d.vec3i d.vec4i // i32
d.vec2u d.vec3u d.vec4u // u32
d.vec2h d.vec3h d.vec4h // f16
d.mat2x2f d.mat3x3f d.mat4x4f
```

Calling a schema constructs an instance: `d.vec3f()` returns a `d.v3f`, `d.mat4x4f()` returns a `d.m4x4f`.

```ts
d.vec3f() // zero-init: (0, 0, 0)
d.vec3f(1) // broadcast: (1, 1, 1)
d.vec3f(1, 2, 3) // individual components
d.vec3f(someVec2, 1) // vec2 + scalar
d.vec3f(1, someVec2) // scalar + vec2
d.vec4f() // zero-init: (0, 0, 0, 0)
d.vec4f(0.5) // broadcast: (0.5, 0.5, 0.5, 0.5)
d.vec4f(rgb, 1) // vec3 + scalar (common: color + alpha)
d.vec4f(v2a, v2b) // two vec2s
d.vec4f(1, uv, 0) // scalar + vec2 + scalar
```

Vectors support swizzles (`.xy`, `.zw`, `.rgb`, `.ba`, ...), which also work inside constructors: prefer `d.vec4f(pos.xy, vel.zw)` and `d.vec3f(v.xy, newZ)` over spelling out components like `d.vec3f(v.x, v.y, newZ)`.

## Structs and arrays

```ts
const Particle = d.struct({
  position: d.vec2f,
  velocity: d.vec2f,
  color: d.vec4f,
});
const ParticleArray = d.arrayOf(Particle, 1000); // fixed-size
```

`d.arrayOf(Element)` without a count is itself a factory: `(n: number) => WgslArray<Element>`. Bind group layouts accept the unsized form; buffers need a concrete size:
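For intuition about how `Particle` lays out in memory (this matters for the raw TypedArray writes discussed later), here is a plain-TypeScript sketch of WGSL's struct size/align rules. This is illustrative only - TypeGPU computes layouts for you - and the `rules` table covers just the two field types used here.

```typescript
// WGSL size/alignment for the Particle fields (position: vec2f, velocity: vec2f, color: vec4f).
const rules = { vec2f: { size: 8, align: 8 }, vec4f: { size: 16, align: 16 } };

function structLayout(fields: (keyof typeof rules)[]): { offsets: number[]; size: number } {
  let offset = 0;
  let maxAlign = 0;
  const offsets = fields.map((f) => {
    const { size, align } = rules[f];
    offset = Math.ceil(offset / align) * align; // round up to the field's alignment
    maxAlign = Math.max(maxAlign, align);
    const at = offset;
    offset += size;
    return at;
  });
  // total size rounds up to the struct's overall alignment
  return { offsets, size: Math.ceil(offset / maxAlign) * maxAlign };
}

const layout = structLayout(['vec2f', 'vec2f', 'vec4f']);
// offsets: [0, 8, 16], size: 32
```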
```ts
// Plain array - arrayOf without count is already a factory:
const layout = tgpu.bindGroupLayout({
  data: { storage: d.arrayOf(d.f32), access: 'mutable' }, // unsized for layout
});
const buf = root.createBuffer(d.arrayOf(d.f32, 1024)).$usage('storage'); // sized for buffer

// Struct with a runtime-sized last field - wrap in a factory function:
const RuntimeStruct = (n: number) =>
  d.struct({
    counter: d.atomic(d.u32),
    items: d.arrayOf(d.f32, n), // last field gets the runtime size
  });
const layout2 = tgpu.bindGroupLayout({
  runtimeData: { storage: RuntimeStruct, access: 'mutable' }, // unsized (the function)
});
const buf2 = root.createBuffer(RuntimeStruct(1024)).$usage('storage'); // sized (called)
```

## Shader functions (`'use gpu'`)

Mark a function with the `'use gpu'` directive to make its body usable in shaders (the build plugin transpiles it to WGSL):

```ts
const rotate = (v: d.v2f, angle: number) => {
  'use gpu';
  const c = std.cos(angle);
  const s = std.sin(angle);
  return d.vec2f(c * v.x - s * v.y, s * v.x + c * v.y);
};
```

Parameter and return types follow the TS annotations (`number` maps to `f32`). When you need an explicit shader-side signature - for example to disambiguate a union like `d.v2f | d.v3f` - declare it with `tgpu.fn`:

```ts
const rotate = tgpu.fn([d.vec2f, d.f32], d.vec2f)((v, angle) => {
  'use gpu';
  // ...
});
```
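Because the math in `rotate` is plain arithmetic, it can be sanity-checked on the CPU with ordinary TypeScript. This mirrors the formula only - it is not the shader function and imports nothing from TypeGPU:

```typescript
// CPU mirror of the rotate() formula above.
function rotateCpu(v: [number, number], angle: number): [number, number] {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  return [c * v[0] - s * v[1], s * v[0] + c * v[1]];
}

const r = rotateCpu([1, 0], Math.PI / 2);
// r is approximately [0, 1]
```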
## Entry points

```ts
// Compute
const myCompute = tgpu.computeFn({
  workgroupSize: [64],
  in: { gid: d.builtin.globalInvocationId },
})((input) => { 'use gpu'; /* input.gid: d.v3u */ });

// Vertex
const myVertex = tgpu.vertexFn({
  in: { position: d.vec3f, uv: d.vec2f },
  out: { position: d.builtin.position, fragUv: d.vec2f },
})((input) => {
  'use gpu';
  return { position: d.vec4f(input.position, 1), fragUv: input.uv };
});

// Fragment
const myFragment = tgpu.fragmentFn({
  in: { fragUv: d.vec2f },
  out: d.vec4f,
})((input) => { 'use gpu'; return d.vec4f(input.fragUv, 0, 1); });
```

`in` may also include builtins such as `d.builtin.vertexIndex` and `d.builtin.instanceIndex`. Use `std` functions for shader math. Failures during WGSL generation surface as a `ResolutionError` - see references/shaders.md.
## Buffers

```ts
// Schema only:
const buf = root.createBuffer(d.arrayOf(Particle, 1000)).$usage('storage');
// With typed initial value (only when non-zero - all buffers are zero-initialized by default):
const uBuf = root.createBuffer(Config, { time: 1, scale: 2.0 }).$usage('uniform');
// With an initializer callback - buffer is still mapped (cheapest CPU path):
const cbBuf = root.createBuffer(Schema, (mappedBuffer) => {
  mappedBuffer.write([10, 20], { startOffset: firstChunk.offset });
  mappedBuffer.write([30, 40], { startOffset: secondChunk.offset });
});
// Wrap an existing GPUBuffer (you own its lifecycle and flags):
const wrapped = root.createBuffer(d.u32, existingGPUBuffer);
wrapped.write(12);
```

`$usage(...)` literals:

| Literal | Shader access |
|---|---|
| `'uniform'` | `var<uniform>` |
| `'storage'` | `var<storage>` |
| `'vertex'` | vertex input, paired with a vertex layout |
| `'index'` | index buffer (`.withIndexBuffer()` / `.drawIndexed()`) |
| `'indirect'` | indirect dispatch/draw |

`COPY_SRC | COPY_DST` are handled for you; add any extra raw flags with `$addFlags(GPUBufferUsage.X)`.

`.write(value)` accepts several forms:

| Form | Notes |
|---|---|
| Typed instance | Allocates a wrapper - fine for setup/prototypes |
| Plain JS array / tuple | No allocation, padding added automatically |
| TypedArray | Bytes copied verbatim - must include WGSL padding |
| ArrayBuffer | Maximum throughput, bytes copied verbatim |
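To illustrate the TypedArray caveat: in WGSL, `vec3f` has 16-byte alignment, so `array<vec3f>` has a 4-float stride. A plain-TypeScript sketch of packing with that padding (illustrative only; TypeGPU does this for you on the plain-array path):

```typescript
// Pack 3-component vectors into a Float32Array with the 16-byte
// (4-float) stride that WGSL's array<vec3f> expects.
function packVec3Array(vectors: [number, number, number][]): Float32Array {
  const out = new Float32Array(vectors.length * 4); // 4 floats per element, not 3
  vectors.forEach(([x, y, z], i) => {
    out[i * 4 + 0] = x;
    out[i * 4 + 1] = y;
    out[i * 4 + 2] = z;
    // out[i * 4 + 3] stays 0 - padding
  });
  return out;
}

const packed = packVec3Array([[1, 2, 3], [4, 5, 6]]);
// packed.length === 8; the second vector starts at index 4
```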
When writing a raw `Float32Array`/`ArrayBuffer`, remember that `vec3f` and `mat3x3f` carry padding (see references/matrices.md).

For partial writes, `d.memoryLayoutOf` computes the byte offset of a nested element:

```ts
const layout = d.memoryLayoutOf(schema, (a) => a[3]);
buffer.write([4, 5, 6], { startOffset: layout.offset });
```

`.patch(data)` updates selected fields/indices in place:

```ts
planetBuffer.patch({
  mass: 123.1,
  colors: { 2: [1, 0, 0], 4: d.vec3f(0, 0, 1) },
});
```

Other data movement:

- `common.writeSoA(buffer, { field: Float32Array, ... })` - write struct-of-arrays data (see references/matrices.md and references/pipelines.md)
- `destBuffer.copyFrom(srcBuffer)` - GPU-side copy
- `const data = await buffer.read();` - returns a typed JS value matching the schema

### Shorthand buffers

```ts
const particlesMutable = root.createMutable(d.arrayOf(Particle, 1000)); // var<storage, read_write>
const configUniform = root.createUniform(Config); // var<uniform>
const bufReadonly = root.createReadonly(d.arrayOf(d.f32, N)); // var<storage, read>
```

Inside `'use gpu'` code, access their contents via `.$` (e.g. `particles.$`, `config.$`) - no manual `@group` declarations needed.
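The struct-of-arrays write above can be pictured in plain TypeScript. This is an illustrative model of the interleaving, not TypeGPU's implementation, for a struct of two `vec2f` fields (4-float stride):

```typescript
// Interleave per-field (SoA) data into the array-of-structs layout of
// struct { position: vec2f, velocity: vec2f }.
function interleaveSoA(positions: Float32Array, velocities: Float32Array): Float32Array {
  const count = positions.length / 2;
  const out = new Float32Array(count * 4);
  for (let i = 0; i < count; i++) {
    out.set(positions.subarray(i * 2, i * 2 + 2), i * 4 + 0); // floats 0..1 of each struct
    out.set(velocities.subarray(i * 2, i * 2 + 2), i * 4 + 2); // floats 2..3 of each struct
  }
  return out;
}

const aos = interleaveSoA(
  new Float32Array([1, 2, 3, 4]), // two positions
  new Float32Array([5, 6, 7, 8]), // two velocities
);
// aos: [1, 2, 5, 6, 3, 4, 7, 8]
```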
## Bind groups

```ts
const layout = tgpu.bindGroupLayout({
  config: { uniform: ConfigSchema },
  particles: { storage: d.arrayOf(Particle), access: 'mutable' },
  mySampler: { sampler: 'filtering' }, // 'filtering' | 'non-filtering' | 'comparison'
  myTexture: { texture: d.texture2d(d.f32) },
});
// Inside shaders: layout.$.config, layout.$.particles, ...
const bindGroup = root.createBindGroup(layout, {
  config: configBuffer,
  particles: particleBuffer,
  mySampler: tgpuSampler,
  myTexture: textureOrView,
});
pipeline.with(bindGroup).dispatchWorkgroups(N);
```

`@group` indices are assigned automatically during resolution; pin a layout to a specific index with `layout.$idx(0)`.
## Compute pipelines

```ts
// Standard - you control workgroup sizing:
const pipeline = root.createComputePipeline({ compute: myComputeFn });
pipeline.with(bindGroup).dispatchWorkgroups(Math.ceil(N / 64));

// Guarded - TypeGPU handles workgroup sizing and bounds checking automatically.
// The callback's parameter count sets the dimensionality (0D to 3D):
const p0 = root.createGuardedComputePipeline(() => { 'use gpu'; /* runs once */ });
const p1 = root.createGuardedComputePipeline((x: number) => { 'use gpu'; });
const p2 = root.createGuardedComputePipeline((x: number, y: number) => { 'use gpu'; });
const p3 = root.createGuardedComputePipeline((x: number, y: number, z: number) => { 'use gpu'; });

// dispatchThreads matches the callback's arity - pass thread counts, not workgroup counts.
// TypeGPU picks workgroup sizes internally and injects a bounds guard so threads
// outside the requested range are no-ops.
p2.with(bindGroup).dispatchThreads(width, height);
```
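The injected bounds guard can be modeled in plain TypeScript. This is an illustrative CPU simulation, not TypeGPU API: the dispatch rounds the thread count up to a whole number of workgroups, and the guard skips the overshoot.

```typescript
// Simulate a 1D guarded dispatch: round up to whole workgroups, but only
// run the callback for thread ids below the requested count.
function simulateGuardedDispatch(
  threads: number,
  workgroupSize: number,
  body: (x: number) => void,
): number {
  const workgroups = Math.ceil(threads / workgroupSize);
  let launched = 0;
  for (let x = 0; x < workgroups * workgroupSize; x++) {
    launched++;
    if (x >= threads) continue; // the bounds guard - overshoot threads are no-ops
    body(x);
  }
  return launched;
}

let processed = 0;
const launched = simulateGuardedDispatch(100, 64, () => { processed++; });
// launched === 128 (2 workgroups of 64), processed === 100
```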
Inside guarded callbacks, WGSL builtins like `globalInvocationId` are NOT available - use the callback parameters instead.

## Coordinate spaces and matrices

- WebGPU clip-space depth is `[0, 1]`, not OpenGL's `[-1, 1]` - a `gluPerspective`-style matrix is wrong here. With `wgpu-matrix`, `mat4.perspective` targets `[0, 1]`, and `mat4.perspectiveReverseZ` covers reverse-z setups.
- The framebuffer origin `(0, 0)` is top-left with `+y` pointing down; `d.builtin.position.xy` lives in that space. UVs put `(0, 0)` at the same corner - flip `v` (or load images with `createImageBitmap` and its flip option) when a texture appears upside down.
- Matrices are column-major: `d.mat4x4f(c0, c1, c2, c3)` takes columns, `mat.columns[c][r]` addresses a cell, and `mat[i]` indexes elements in column-major order.
- Multiply as `projection * view * model * position`. See references/matrices.md.
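Column-major indexing in a plain-TypeScript sketch - illustrative only, independent of TypeGPU and `wgpu-matrix`:

```typescript
// A 4x4 column-major matrix as a flat array of 16 numbers:
// element (column c, row r) lives at index c * 4 + r.
function translationMatrix(tx: number, ty: number, tz: number): number[] {
  const m = new Array(16).fill(0);
  m[0] = m[5] = m[10] = m[15] = 1; // identity diagonal
  // translation occupies the last column (c = 3, rows 0..2):
  m[3 * 4 + 0] = tx;
  m[3 * 4 + 1] = ty;
  m[3 * 4 + 2] = tz;
  return m;
}

const m = translationMatrix(7, 8, 9);
// m[12] === 7, m[13] === 8, m[14] === 9
```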
## Render pipelines

```ts
const pipeline = root.createRenderPipeline({
  vertex: myVertex,
  fragment: myFragment,
  targets: { format: presentationFormat }, // single target - shorthand
  primitive?: GPUPrimitiveState,
  depthStencil?: GPUDepthStencilState,
  multisample?: GPUMultisampleState,
});
pipeline
  .with(bindGroup)
  .withColorAttachment({
    view: context,
    // loadOp/storeOp/clearValue have defaults
  })
  .withDepthStencilAttachment({ /* ... */ })
  .withIndexBuffer(indexBuffer) // enables .drawIndexed()
  .draw(vertexCount, instanceCount /* optional */);
```

The fragment function's `out` keys, the pipeline's `targets`, and the `withColorAttachment` keys must all match. Write depth from the fragment shader via the `fragDepth` builtin in `out`. See references/pipelines.md.

Performance: hoist `root.createBindGroup(...)` and `texture.createView(...)` out of the frame loop - create them once as `const`s. For full-screen passes, use `common.fullScreenTriangle` (see references/pipelines.md).

## Module-scope variables

- `tgpu.workgroupVar(schema)` - workgroup-shared variable
- `tgpu.privateVar(schema)` - per-invocation variable
- `tgpu.const(schema, value)` - shader constant

Read and write them through `.$` (see references/shaders.md).
## Slots

`tgpu.slot<T>()` declares a hole in shader code, filled at pipeline creation with `.with(slot, value)`:

```ts
const distFnSlot = tgpu.slot<(pos: d.v3f) => number>();
const rayMarcher = tgpu.computeFn({
  workgroupSize: [64],
  in: { gid: d.builtin.globalInvocationId },
})(({ gid }) => {
  'use gpu';
  const dist = distFnSlot.$(d.vec3f(gid)); // call the injected function
});
root
  .with(distFnSlot, (pos) => {
    'use gpu';
    return std.length(pos - d.vec3f(0, 0, -5)) - 1.0; // sphere SDF
  })
  .createComputePipeline({ compute: rayMarcher });
```

A slot can carry a default value, overridable per pipeline:

```ts
const colorSlot = tgpu.slot(d.vec4f(1, 0, 0, 1));
pipeline.with(colorSlot, d.vec4f(0, 1, 0, 1)).draw(3);
```
## Accessors

`tgpu.accessor(schema, initial?)` declares a value readable from `'use gpu'` code, fillable with a buffer, a literal, or a GPU function:

```ts
const colorAccess = tgpu.accessor(d.vec3f);
// Fill with a uniform buffer:
root.with(colorAccess, colorUniform).createComputePipeline(...)
// Fill with a literal (inlined):
root.with(colorAccess, d.vec3f(1, 0, 0)).createComputePipeline(...)
// Fill with a GPU function:
root.with(colorAccess, () => { 'use gpu'; return computeColor(); }).createComputePipeline(...)
```

`tgpu.mutableAccessor(schema, initial?)` is the writable variant.

## Types

- `d.InferInput<typeof Schema>` - the JS value accepted by `.write()`
- `d.InferGPU<typeof Schema>` - the value seen inside `'use gpu'` code
- `AnyData`, `TgpuBuffer`, `TgpuUniform`, `TgpuTexture` - exported from `'typegpu'`

See references/types.md.

## Gotchas

- Be explicit about numeric types: an integer-looking literal resolves as an abstract int, so wrap it (`d.f32(1)`) where a float is needed (e.g. values fed to `createUniform`/`createMutable` schemas), and cast with `d.i32()` / `d.u32()` as appropriate - `a / b` only behaves as `f32` division when the operands are floats.
- Declare shader variables with an initializer so the type is known: `let x = d.f32(0)`, not `let x;`.
- `std.select(falseVal, trueVal, condition)` takes the false value first (WGSL argument order).
- A fragment shader outputs `d.vec4f` regardless of the target format - even with `targets: { format: 'r8unorm' }` or `'rg16float'`, declare `out: d.vec4f` and `return d.vec4f(...)`.

## Ecosystem

- `@typegpu/noise` - `randf`, `perlin2d`, `perlin3d` (see references/noise.md)
- `@typegpu/sdf` - 2D: `sdDisk`, `sdBox2d`, `sdRoundedBox2d`, `sdBezier`; 3D: `sdSphere`, `sdBox3d`, `sdCapsule`, `sdPlane`; operators: `opUnion`, `opSmoothUnion`, `opSmoothDifference`, `opExtrudeX/Y/Z` - all `tgpu.fn`s callable from `'use gpu'` code (see references/sdf.md)
- `wgpu-matrix` - pass a `dst` argument to avoid per-frame allocations (see references/matrices.md)
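A plain-TypeScript model of `std.select`'s argument order - illustrative only, not TypeGPU's implementation:

```typescript
// WGSL-style select: the FALSE value comes first, the condition last.
function select<T>(falseVal: T, trueVal: T, condition: boolean): T {
  return condition ? trueVal : falseVal;
}

const picked = select('no', 'yes', 1 > 0);
// picked === 'yes'
```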