
WebGPU Fundamentals

Basics

WebGPU is a very simple system. All it does is run 3 types of functions on the GPU.

  • Vertex Shaders: A Vertex Shader computes vertices. The shader returns vertex positions. For every group of 3 vertices the vertex shader function returns, a triangle is drawn between those 3 positions

  • Fragment Shaders: A Fragment Shader computes colors. When a triangle is drawn, the GPU calls your fragment shader for each pixel to be drawn, and the fragment shader returns a color. (Fragment shaders indirectly write data to textures. Colors in WebGPU are usually specified as floating point values from 0.0 to 1.0, but the data does not have to be colors; for example, it's common to output the direction of the surface that a pixel represents.)

  • Compute Shaders: It’s effectively just a function you call and say “execute this function N times”. The GPU passes the iteration number each time it calls your function so you can use that number to do something unique on each iteration.
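
For the compute case, here's a minimal sketch (assuming a device has already been obtained via requestAdapter()/requestDevice(); names are illustrative) that runs a function N times on the GPU to double every value in a buffer:

const module = device.createShaderModule({
  code: `
    @group(0) @binding(0) var<storage, read_write> data: array<f32>;

    @compute @workgroup_size(1) fn computeSomething(
      @builtin(global_invocation_id) id: vec3u
    ) {
      let i = id.x;
      data[i] = data[i] * 2.0;
    }
  `,
});

const pipeline = device.createComputePipeline({
  layout: 'auto',
  compute: { module, entryPoint: 'computeSomething' },
});

const input = new Float32Array([1, 3, 5]);
const workBuffer = device.createBuffer({
  size: input.byteLength,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(workBuffer, 0, input);

const bindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [{ binding: 0, resource: { buffer: workBuffer } }],
});

const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatchWorkgroups(input.length); // "execute this function N times"
pass.end();
device.queue.submit([encoder.finish()]);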

The shaders reference resources (buffers, textures, samplers) indirectly through Bind Groups.

To execute shaders on the GPU, you need to create all of these resources and set up this state. Creating resources is relatively straightforward. One interesting thing is that most WebGPU resources can not be changed after creation: you can change their contents but not their size, usage, format, etc. If you want to change any of that, you create a new resource and destroy the old one.
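
For example (a sketch, assuming a valid device): a buffer's size and usage are fixed at creation, while its contents can still be written.

const buf = device.createBuffer({
  size: 256,                                               // fixed for the buffer's lifetime
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST, // fixed for the buffer's lifetime
});
device.queue.writeBuffer(buf, 0, new Float32Array(64));    // contents CAN be changed

// Need a bigger buffer? Create a new one and destroy the old one.
const bigger = device.createBuffer({
  size: 512,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
buf.destroy();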

Drawing triangles

WebGPU can draw triangles to textures. The <canvas> element represents a texture on a webpage; in WebGPU we can ask the canvas for a texture and then render to that texture. (Drawing triangles is the most common case, but there are actually 5 primitive modes:

  • 'point-list': for each position, draw a point
  • 'line-list': for each 2 positions, draw a line
  • 'line-strip': draw lines connecting the newest point to the previous point
  • 'triangle-list': for each 3 positions, draw a triangle (default)
  • 'triangle-strip': for each new position, draw a triangle from it and the last 2 positions)

Steps:

  1. create shader module

  2. create pipeline

  3. create command encoder (command buffer)

  4. submit command buffer
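
A minimal sketch of those steps (assuming device, a configured canvas context, and presentationFormat already exist):

// 1. create shader module
const module = device.createShaderModule({
  code: `
    @vertex fn vs(@builtin(vertex_index) i: u32) -> @builtin(position) vec4f {
      let pos = array(vec2f(0.0, 0.5), vec2f(-0.5, -0.5), vec2f(0.5, -0.5));
      return vec4f(pos[i], 0.0, 1.0); // clip-space positions
    }
    @fragment fn fs() -> @location(0) vec4f {
      return vec4f(1.0, 0.0, 0.0, 1.0);
    }
  `,
});

// 2. create pipeline ('triangle-list' is the default primitive topology)
const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module, entryPoint: 'vs' },
  fragment: { module, entryPoint: 'fs', targets: [{ format: presentationFormat }] },
});

// 3. create a command encoder and encode a render pass that draws 3 vertices
const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass({
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    clearValue: [0.3, 0.3, 0.3, 1],
    loadOp: 'clear',
    storeOp: 'store',
  }],
});
pass.setPipeline(pipeline);
pass.draw(3); // calls the vertex shader 3 times
pass.end();

// 4. submit the command buffer
device.queue.submit([encoder.finish()]);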

WebGPU takes every 3 vertices we return from our vertex shader and uses them to rasterize a triangle. It does this by determining which pixels’ centers are inside the triangle. It then calls our fragment shader for each pixel to ask what color to make it.

Positions in WebGPU need to be returned in clip space where X goes from -1.0 on the left to +1.0 on the right, and Y goes from -1.0 at the bottom to +1.0 at the top. This is true regardless of the size of the texture we are drawing to.

Inter-stage Variables

Inter-stage variables come into play between a vertex shader and a fragment shader. When a vertex shader outputs 3 positions, a triangle gets rasterized. The vertex shader can output extra values at each of those positions, and by default those values will be interpolated between the 3 points (every time the GPU calls the fragment shader, it passes in, for example, a color that was interpolated between all 3 points).

An important point: like nearly everything in WebGPU, the connection between the vertex shader and the fragment shader is by index. For inter-stage variables, they connect by location index.

For inter-stage variables, all that matters is the @location(?). So it's common to declare different structs for a vertex shader's output vs. a fragment shader's input.

Interpolation Settings

The outputs from a vertex shader are interpolated when passed to the fragment shader. There are 2 sets of settings that control how the interpolation happens. Setting them to anything other than the defaults is not common, but there are use cases.

Interpolation type:

  • perspective: Values are interpolated in a perspective correct manner (default)
  • linear: Values are interpolated in a linear, non-perspective correct manner.
  • flat: Values are not interpolated. Interpolation sampling is not used with flat interpolation; the value passed to the fragment shader is the value of the inter-stage variable for the first vertex in that triangle.

Interpolation sampling:

  • center: Interpolation is performed at the center of the pixel (default)
  • centroid: Interpolation is performed at a point that lies within all the samples covered by the fragment within the current primitive. This value is the same for all samples in the primitive.
  • sample: Interpolation is performed per sample. The fragment shader is invoked once per sample when this attribute is applied.

You specify these as attributes. For example:

@location(2) @interpolate(linear, center) myVariableFoo: vec4f,
@location(3) @interpolate(flat) myVariableBar: vec4f,

Note that if the inter-stage variable is an integer type then you must set its interpolation to flat.

@builtin(position)

In a vertex shader, @builtin(position) is the output that the GPU needs to draw triangles/lines/points.

In a fragment shader, @builtin(position) is an input: it's the pixel coordinate of the pixel that the fragment shader is currently being asked to compute a color for. Pixel coordinates are specified by the edges of pixels; the values provided to the fragment shader are the coordinates of the center of the pixel.

WebGPU Data Memory Layout

(see https://webgpufundamentals.org/webgpu/lessons/webgpu-memory-layout.html)

In WebGPU, nearly all of the data you provide to it needs to be laid out in memory to match what you define in your shaders. In WGSL, when you write your shaders, it's common to define structs. You declare members of a struct and, when providing the data, it's up to you to compute where in a buffer that particular member of the struct will appear.

In WGSL v1, there are 4 base types:

  • f32 (a 32-bit floating point number)
  • i32 (a 32-bit integer)
  • u32 (a 32-bit unsigned integer)
  • f16 (a 16-bit floating point number, optional feature)

Every type has alignment requirements: a value of a given type must be aligned to a multiple of a certain number of bytes. Arrays and structs have their own special alignment rules (https://www.w3.org/TR/WGSL/#alignment-and-size).

Computing sizes and offsets of data in WGSL is probably the largest pain point of WebGPU. You are required to compute these offsets yourself and keep them up to date. If you add a member somewhere in the middle of a struct in your shaders you need to go back to your JavaScript and update all the offsets. Get a single byte or length wrong and the data you pass to the shader will be wrong. You won't get an error, but your shader will likely do the wrong thing because it's looking at bad data: your model won't draw or your computation will produce bad results. Fortunately there are libraries to help with this. Here's one: webgpu-utils (https://github.com/greggman/webgpu-utils)

import {
  makeShaderDataDefinitions,
  makeStructuredView,
} from 'https://greggman.github.io/webgpu-utils/dist/0.x/webgpu-utils-1.x.module.js';

const code = `
struct Ex4a {
  velocity: vec3f,
};

struct Ex4 {
  orientation: vec3f,
  size: f32,
  direction: array<vec3f, 1>,
  scale: f32,
  info: Ex4a,
  friction: f32,
};
@group(0) @binding(0) var<uniform> myUniforms: Ex4;

...
`;

const defs = makeShaderDataDefinitions(code);
const myUniformValues = makeStructuredView(defs.uniforms.myUniforms);

// Set some values via set
myUniformValues.set({
  orientation: [1, 0, -1],
  size: 2,
  direction: [0, 1, 0],
  scale: 1.5,
  info: {
    velocity: [2, 3, 4],
  },
  friction: 0.1,
});
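
For comparison, a sketch of doing the layout by hand for a small hypothetical struct (not the Ex4 example above), computing each member's offset from the alignment rules:

// WGSL (hypothetical):
//   struct MyUniforms {
//     color: vec4f,    // align 16, size 16 -> bytes  0..15 (float indices 0..3)
//     scale: vec2f,    // align  8, size  8 -> bytes 16..23 (float indices 4..5)
//     offset: vec2f,   // align  8, size  8 -> bytes 24..31 (float indices 6..7)
//   };                 // total size: 32 bytes
const uniformBufferSize = 32;
const uniformBuffer = device.createBuffer({
  size: uniformBufferSize,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});

const uniformValues = new Float32Array(uniformBufferSize / 4);
uniformValues.set([0, 1, 0, 1], 0);  // color  (float offset 0 = byte 0)
uniformValues.set([0.5, 0.5], 4);    // scale  (float offset 4 = byte 16)
uniformValues.set([0.25, 0.25], 6);  // offset (float offset 6 = byte 24)

device.queue.writeBuffer(uniformBuffer, 0, uniformValues);
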
Uniforms

Uniforms are kind of like global variables for your shader. You set their values before you execute the shader and they'll have those values for every iteration of the shader. You can set them to something else the next time you ask the GPU to execute the shader.

Storage Buffer

Storage buffers are similar to uniform buffers in many ways. If all we did was change UNIFORM to STORAGE in our JavaScript and var<uniform> to var<storage, read> in our WGSL, the examples above would still work.

The major differences between uniform buffers and storage buffers are:

  1. Uniform buffers can be faster for their typical use-case. It really depends on the use case. A typical app will need to draw lots of different things. Say it's a 3D game: the app might draw cars, buildings, rocks, bushes, people, etc. Each of those requires passing in orientations and material properties similar to what our example above passes in. In this case, using a uniform buffer is the recommended solution.

  2. Storage buffers can be much larger than uniform buffers.

     • The minimum maximum size of a uniform buffer is 64K.
     • The minimum maximum size of a storage buffer is 128M.

     By "minimum maximum" we mean: there is a maximum size a buffer of a certain type can be. For uniform buffers, that maximum size is at least 64K. For storage buffers, it's at least 128M. See more about limits in another article (https://webgpufundamentals.org/webgpu/lessons/webgpu-limits-and-features.html).

  3. Storage buffers can be read/write; uniform buffers are read-only.

Vertex Buffers

We can put vertex data in a storage buffer and index it using the builtin vertex_index. While that technique is growing in popularity, the traditional way to provide vertex data to a vertex shader is via vertex buffers and attributes. Vertex buffers are just like any other WebGPU buffer; they hold data. The difference is we don't access them directly from the vertex shader. Instead, we tell WebGPU what kind of data is in the buffer and how it's organized, and it then pulls the data out of the buffer and provides it to us.

Vertex attributes do not have the same padding restrictions as structures in storage buffers, so we no longer need the padding. Attributes in WGSL do not have to match attributes in JavaScript, because attributes always have 4 values available in the shader. They default to 0, 0, 0, 1, so any values we don't supply get these defaults.

Index Buffer

One last thing to cover here is index buffers. Index buffers describe the order in which to process and use the vertices. You can think of draw as going through the vertices in order:

0, 1, 2, 3, 4, 5, ....

With an index buffer we can change that order.

Textures

Textures most often represent a 2d image. What makes textures special is that they can be accessed by special hardware called a sampler. A sampler can read up to 16 different values in a texture and blend them together in a way that is useful for many common use cases.

The interesting WGSL functions for textures are the ones that filter and blend multiple pixels. These functions take a texture which represents the data, a sampler which represents how we want to pull data out of the texture, and a texture coordinate which specifies where we want to get a value from the texture.

Texture coordinates for sampled textures go from 0.0 to 1.0 across and down a texture, regardless of the actual size of the texture.

Flipping the data is common enough that there are even options when loading textures from images, videos, and canvases to flip the data for you.

To draw something with a texture we have to create the texture, put data in it, bind it and a sampler into a bind group, and reference them from a shader.
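
A sketch of that setup (hypothetical 2x2 texture with nearest filtering, assuming a render pipeline whose shader declares the bindings shown in the comments):

// create the texture and put data in it
const kTextureData = new Uint8Array([ // 2x2 rgba8unorm texels
  255, 0, 0, 255,   0, 255, 0, 255,
  0, 0, 255, 255,   255, 255, 0, 255,
]);
const texture = device.createTexture({
  size: [2, 2],
  format: 'rgba8unorm',
  usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST,
});
device.queue.writeTexture({ texture }, kTextureData, { bytesPerRow: 2 * 4 }, [2, 2]);

// a sampler describing how to pull data out of the texture
const sampler = device.createSampler({ magFilter: 'nearest', minFilter: 'nearest' });

// bind both so the shader can reference them
const bindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [
    { binding: 0, resource: sampler },
    { binding: 1, resource: texture.createView() },
  ],
});

// matching WGSL declarations in the shader:
//   @group(0) @binding(0) var mySampler: sampler;
//   @group(0) @binding(1) var myTexture: texture_2d<f32>;
//   ... textureSample(myTexture, mySampler, texcoord) ...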

Texture Types and Texture Views

There are 3 types of textures

  • 1d
  • 2d
  • 3d

The dimension can be passed when creating a texture; see device.createTexture.

In some ways you can consider a “2d” texture to be just a “3d” texture with a depth of 1, and a “1d” texture just a “2d” texture with a height of 1. There are two actual differences. One is that textures are limited in their maximum allowed dimensions, and the limit is different for each type of texture: “1d”, “2d”, and “3d”.

The other is speed: at least for a 3d texture vs. a 2d texture, with all the sampler filters set to linear, sampling a 3d texture requires looking at 16 texels and blending them all together, while sampling a 2d texture only needs 8 texels.

There are 6 types of texture views

  • “1d”
  • “2d”
  • “2d-array”
  • “3d”
  • “cube”
  • “cube-array”

“1d” textures can only have a “1d” view. “3d” textures can only have a “3d” view. A “2d” texture can have a “2d-array” view. If a “2d” texture has 6 layers it can have a “cube” view, and if it has a multiple of 6 layers it can have a “cube-array” view. You choose how to view a texture when you call someTexture.createView. Texture views default to the same dimension as their texture, except that by default a view of a 2d texture with more than 1 layer gets the dimension '2d-array'.
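
For example (a sketch), a 6-layer “2d” texture viewed as a cube map:

// a 2d texture with 6 layers...
const texture = device.createTexture({
  size: [512, 512, 6], // width, height, layers
  format: 'rgba8unorm',
  usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST,
});

// ...viewed as a cube (without the dimension option it would default to '2d-array')
const cubeView = texture.createView({ dimension: 'cube' });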

A “2d-array” is an array of 2d textures. You can then choose which texture of the array to access in your shader. They are commonly used for terrain rendering among other things.

3d textures can be used in cases like 3D LUTs (lookup tables).

Each type of texture has its own corresponding type in WGSL.

  • 1d: texture_1d or texture_storage_1d
  • 2d: texture_2d or texture_storage_2d or texture_multisampled_2d, plus, as special cases in certain situations, texture_depth_2d and texture_depth_multisampled_2d
  • 2d-array: texture_2d_array or texture_storage_2d_array and sometimes texture_depth_2d_array
  • 3d: texture_3d or texture_storage_3d
  • cube: texture_cube and sometimes texture_depth_cube
  • cube-array: texture_cube_array and sometimes texture_depth_cube_array

Texture Formats

“unorm” is unsigned normalized data (0 to 1), meaning the data in the texture goes from 0 to N where N is the maximum integer value for that number of bits. That range of integers is then interpreted as a floating point range of (0 to 1). In other words, for an 8-bit unorm texture, 8 bits hold values from 0 to 255 that get interpreted as values from 0 to 1.

“snorm” is signed normalized data (-1 to +1), so the range of data goes from the most negative integer represented by the number of bits to the most positive. For example, 8-bit snorm is 8 bits: as a signed integer the lowest number would be -128 and the highest +127. That range gets converted to (-1 to +1).

Texture Atlas

A Texture Atlas is a fancy name for a texture with multiple images in it. We then use texture coordinates to select which parts go where.

Using Video Effectively

copyExternalImageToTexture copies the current frame of a video from the video itself into a pre-existing texture that we created. WebGPU has another method for using video: importExternalTexture which, like the name suggests, provides a GPUExternalTexture. This external texture represents the data in the video directly; no copy is made. (What actually happens is up to the browser implementation. The WebGPU spec was designed in the hope that browsers would not need to make a copy.)

There are a few big caveats to using a texture from importExternalTexture:

  • The texture is only valid until you exit the current JavaScript task. An implication of this is that you must make a new bind group each time you call importExternalTexture so that you can pass the new texture into your shader.

  • You must use texture_external in your shaders

  • You must use textureSampleBaseClampToEdge in your shaders. Like the name suggests, textureSampleBaseClampToEdge will only sample the base texture mip level (level 0). In other words, external textures can not have a mipmap. Further, the function clamps to the edge, meaning, setting a sampler to addressModeU: 'repeat' will be ignored.
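
A sketch of the per-frame pattern (assuming video, sampler, and pipeline already exist; names are illustrative):

// WGSL side:
//   @group(0) @binding(0) var mySampler: sampler;
//   @group(0) @binding(1) var myTexture: texture_external;
//   ... textureSampleBaseClampToEdge(myTexture, mySampler, uv) ...

function render() {
  // re-import the texture (and re-make the bind group) every time we render
  const externalTexture = device.importExternalTexture({ source: video });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: sampler },
      { binding: 1, resource: externalTexture }, // passed directly, no createView
    ],
  });

  // ... encode a render pass that uses bindGroup, then submit ...

  requestAnimationFrame(render);
}
requestAnimationFrame(render);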

Storage Texture

Multisample Anti-aliasing (MSAA)

Setting colorAttachment[0].resolveTarget says to WebGPU, "when all the drawing in this render pass has finished, downscale (resolve) the multisample texture into the texture set on resolveTarget." If you have multiple render passes you probably don't want to resolve until the last pass. While it's fastest to resolve in the last pass, it's also perfectly acceptable to make an empty last render pass that does nothing but resolve. Just make sure you set loadOp to 'load', not 'clear', in all the passes except the first one, otherwise the contents will be cleared.
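
A sketch of a typical 4x MSAA setup (assuming device, canvas, context, presentationFormat, and a command encoder already exist):

// the pipeline must be created with a matching sample count:
//   device.createRenderPipeline({ ..., multisample: { count: 4 } })

// a multisample texture the same size as the canvas
const msaaTexture = device.createTexture({
  size: [canvas.width, canvas.height],
  sampleCount: 4,
  format: presentationFormat,
  usage: GPUTextureUsage.RENDER_ATTACHMENT,
});

const pass = encoder.beginRenderPass({
  colorAttachments: [{
    view: msaaTexture.createView(),                          // draw into the 4x texture...
    resolveTarget: context.getCurrentTexture().createView(), // ...resolve into the canvas texture
    clearValue: [0.3, 0.3, 0.3, 1],
    loadOp: 'clear',
    storeOp: 'store',
  }],
});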

Pipeline-Overridable constants

pipeline-overridable constants are a type of constant you declare in your shader but you can change when you use that shader to create a pipeline.

Pipeline-overridable constants can only be scalar values: booleans (true/false), integers, and floating point numbers. They can not be vectors or matrices.

If you don’t specify a value in the shader then you must supply one in the pipeline. You can also give them a numeric id and then refer to them by their id.
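
A sketch (hypothetical constant names):

const module = device.createShaderModule({
  code: `
    override red = 0.0;           // has a default, may be overridden
    @id(123) override blue: f32;  // no default: must be supplied when creating the pipeline

    @vertex fn vs(@builtin(vertex_index) i: u32) -> @builtin(position) vec4f {
      let pos = array(vec2f(0.0, 1.0), vec2f(-1.0, -1.0), vec2f(1.0, -1.0));
      return vec4f(pos[i], 0.0, 1.0);
    }
    @fragment fn fs() -> @location(0) vec4f {
      return vec4f(red, 0.0, blue, 1.0);
    }
  `,
});

const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module, entryPoint: 'vs' },
  fragment: {
    module,
    entryPoint: 'fs',
    targets: [{ format: presentationFormat }],
    constants: { red: 1.0, 123: 0.5 }, // 'blue' is referred to by its numeric id
  },
});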

Canvas alphaMode

By default a WebGPU canvas is opaque: its alpha channel is ignored. To make it not be ignored, we have to set its alphaMode to 'premultiplied' when we call configure. The default is 'opaque'.

context.configure({
  device,
  format: presentationFormat,
  alphaMode: 'premultiplied',
});

alphaMode: 'premultiplied' means the colors you put in the canvas must have their color values already multiplied by the alpha value. For example, 50% transparent red is [0.5, 0, 0, 0.5], not [1, 0, 0, 0.5].

Blending

In a render pipeline's blend setting there are two parts: color, which specifies what happens to the rgb portion of a color, and alpha, which specifies what happens to the a (alpha) portion.

operation can be one of

  • add
  • subtract
  • reverse-subtract
  • min
  • max

srcFactor and dstFactor can each be one of

  • zero
  • one
  • src
  • one-minus-src
  • src-alpha
  • one-minus-src-alpha
  • dst
  • one-minus-dst
  • dst-alpha
  • one-minus-dst-alpha
  • src-alpha-saturated
  • constant
  • one-minus-constant

Most of them are relatively straightforward to understand. Think of it as:

result = operation((src * srcFactor), (dst * dstFactor))

Of the blend factors above, 2 mention a constant, 'constant' and 'one-minus-constant'. The constant referred to here is set in a render pass with the setBlendConstant command and defaults to [0, 0, 0, 0]. This lets you change it between draws.
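
Putting it together, a sketch of classic premultiplied-alpha "source over" blending on a render pipeline's color target (other pipeline fields elided; module and presentationFormat assumed):

const pipeline = device.createRenderPipeline({
  // ... layout, vertex, etc. as usual ...
  fragment: {
    module,
    entryPoint: 'fs',
    targets: [{
      format: presentationFormat,
      blend: {
        color: { // rgb: result = add(src * one, dst * one-minus-src-alpha)
          operation: 'add',
          srcFactor: 'one',
          dstFactor: 'one-minus-src-alpha',
        },
        alpha: { // alpha gets its own, independent settings
          operation: 'add',
          srcFactor: 'one',
          dstFactor: 'one-minus-src-alpha',
        },
      },
    }],
  },
});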

Data Copying

  • writeBuffer copies data from a TypedArray or ArrayBuffer in JavaScript to a buffer. This is arguably the most straightforward way to get data into a buffer.

    device.queue.writeBuffer(
        destBuffer,  // the buffer to write to
        destOffset,  // where in the destination buffer to start writing
        srcData,     // a TypedArray or ArrayBuffer
        srcOffset?,  // offset in **elements** in srcData to start copying from
        size?,       // size in **elements** of srcData to copy
    )
    // If srcOffset is not passed it's 0.
    // If size is not passed it's the size of srcData.
    // srcOffset and size are in elements of srcData; for example,
    // if srcData is a Float32Array and srcOffset is 6, it will copy
    // starting at 24 bytes
  • writeTexture copies data from a TypedArray or ArrayBuffer in JavaScript to a texture.

    device.queue.writeTexture(
        // details of the destination
        { texture, mipLevel: 0, origin: [0, 0, 0], aspect: "all" },

        // the source data
        srcData,

        // details of the source data
        { offset: 0, bytesPerRow, rowsPerImage },

        // size:
        [ width, height, depthOrArrayLayers ]
    )
    • texture must have a usage of GPUTextureUsage.COPY_DST

    • mipLevel, origin, and aspect all have defaults so they often do not need to be specified

    • bytesPerRow: This is how many bytes to advance to get to the next block row of data. This is required if you are copying more than 1 block row. It is almost always true that you’re copying more than 1 block row so it is therefore almost always required.

    • rowsPerImage: This is the number of block rows to advance to get from the start of one image to the next image. This is required if you are copying more than 1 layer. In other words, if depthOrArrayLayers in the size argument is > 1 then you need to supply this value.

    • aspect really only comes into play when copying data to a depth-stencil format. You can only copy to one aspect at a time, either the depth-only or the stencil-only aspect.

  • copyBufferToBuffer, like the name suggests, copies data from one buffer to another.

    encoder.copyBufferToBuffer(
        source,        // buffer to copy from
        sourceOffset,  // where to start copying from
        dest,          // buffer to copy to
        destOffset,    // where to start copying to
        size,          // how many bytes to copy
    )
    • source must have a usage of GPUBufferUsage.COPY_SRC
    • dest must have a usage of GPUBufferUsage.COPY_DST
    • size must be a multiple of 4
  • copyBufferToTexture, like the name suggests, copies data from a buffer to a texture.

    encoder.copyBufferToTexture(
        // details of the source buffer
        { buffer, offset: 0, bytesPerRow, rowsPerImage },

        // details of the destination texture
        { texture, mipLevel: 0, origin: [0, 0, 0], aspect: "all" },

        // size:
        [ width, height, depthOrArrayLayers ]
    )
    • texture must have a usage of GPUTextureUsage.COPY_DST

    • buffer must have a usage of GPUBufferUsage.COPY_SRC

    • bytesPerRow must be a multiple of 256

  • copyTextureToBuffer, like the name suggests, copies data from a texture to a buffer.

    encoder.copyTextureToBuffer(
        // details of the source texture
        { texture, mipLevel: 0, origin: [0, 0, 0], aspect: "all" },

        // details of the destination buffer
        { buffer, offset: 0, bytesPerRow, rowsPerImage },

        // size:
        [ width, height, depthOrArrayLayers ]
    )
    • texture must have a usage of GPUTextureUsage.COPY_SRC

    • buffer must have a usage of GPUBufferUsage.COPY_DST

    • bytesPerRow must be a multiple of 256

  • copyTextureToTexture copies a portion of one texture to another. The two textures must either be the same format, or they must only differ by the suffix '-srgb'.

    encoder.copyTextureToTexture(
        // src: details of the source texture
        { texture, mipLevel: 0, origin: [0, 0, 0], aspect: "all" },

        // dst: details of the destination texture
        { texture, mipLevel: 0, origin: [0, 0, 0], aspect: "all" },

        // size:
        [ width, height, depthOrArrayLayers ]
    );
    • src.texture must have a usage of GPUTextureUsage.COPY_SRC
    • dst.texture must have a usage of GPUTextureUsage.COPY_DST
    • width must be a multiple of block width
    • height must be a multiple of block height
    • src.origin[0] or .x must be a multiple of the block width
    • src.origin[1] or .y must be a multiple of the block height
    • dst.origin[0] or .x must be a multiple of the block width
    • dst.origin[1] or .y must be a multiple of the block height
  • Shaders: Shaders can write to storage buffers, storage textures, and indirectly they can render to textures. Those are all ways of getting data into buffers and textures. In other words you can use shaders to generate data.

  • mapping buffers: You can map a buffer. Mapping a buffer means making it available to read or write from JavaScript. At least in version 1 of WebGPU, mappable buffers have severe restrictions, namely, a mappable buffer can only be used as a temporary place to copy to or from. A mappable buffer can not be used as any other type of buffer (like a uniform buffer, vertex buffer, index buffer, storage buffer, etc…)

    You can create a mappable buffer with 2 combinations of usage flags.

    • GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST

      This is a buffer you can copy data into, using the copy commands above, from another buffer or a texture, and then map to read the values in JavaScript.

    • GPUBufferUsage.MAP_WRITE | GPUBufferUsage.COPY_SRC

      This is a buffer you can map in JavaScript, put data into from JavaScript, and finally unmap and use the copy commands above to copy its contents to another buffer or texture.

    The process of mapping a buffer is asynchronous. You call buffer.mapAsync(mode, offset = 0, size?) where offset and size are in bytes. If size is not specified it’s the size of the entire buffer. mode must be either GPUMapMode.READ or GPUMapMode.WRITE and must of course match the MAP_ usage flag you passed in when you created the buffer.

    mapAsync returns a Promise. When the promise resolves, the buffer is mapped. You can then view some or all of the buffer by calling buffer.getMappedRange(offset = 0, size?), where offset is a byte offset into the portion of the buffer you mapped. getMappedRange returns an ArrayBuffer.

    Once mapped, the buffer is not usable by WebGPU until you call unmap. The moment unmap is called, the data disappears from JavaScript (the ArrayBuffers returned by getMappedRange are detached). A typical readback pattern is sketched after this list.

  • mappedAtCreation: true is a flag you can add when you create a buffer. In this case, the buffer does not need the usage flag GPUBufferUsage.MAP_WRITE. This is a special parameter to let you put data in the buffer at creation time. You add the flag mappedAtCreation: true when you create the buffer, and the buffer is created already mapped for writing.

    const buffer = device.createBuffer({
        size: 16,
        usage: GPUBufferUsage.UNIFORM,
        mappedAtCreation: true,
    });
    const arrayBuffer = buffer.getMappedRange(0, buffer.size);
    const f32 = new Float32Array(arrayBuffer);
    f32.set([1, 2, 3, 4]);
    buffer.unmap();
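
A sketch of the typical readback pattern using a MAP_READ buffer and the copy commands above (assuming resultBuffer is a GPU buffer, e.g. a storage buffer created with COPY_SRC usage, that already holds the data we want):

// a mappable staging buffer
const readBuffer = device.createBuffer({
  size: resultBuffer.size,
  usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
});

// copy GPU buffer -> staging buffer
const encoder = device.createCommandEncoder();
encoder.copyBufferToBuffer(resultBuffer, 0, readBuffer, 0, readBuffer.size);
device.queue.submit([encoder.finish()]);

// map, read, unmap
await readBuffer.mapAsync(GPUMapMode.READ);
const data = new Float32Array(readBuffer.getMappedRange());
console.log(data);  // use (or copy out) the values before unmapping
readBuffer.unmap(); // after unmap, `data` is detached and no longer usable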

Optional Features and Limits

When you request an adapter with

const adapter = await navigator.gpu?.requestAdapter()

The adapter will have a list of limits on adapter.limits and a set of feature names on adapter.features.

By default, when you request a device, you get the minimum limits and you get no optional features. The hope is, if you stay under the minimum limits, then your app will run on all devices that support WebGPU.

But, given the available limits and features listed on the adapter, you can request them when you call requestDevice by passing your desired limits as requiredLimits and your desired features as requiredFeatures:

const adapter = await navigator.gpu?.requestAdapter();
const device = await adapter?.requestDevice({
  requiredLimits: { maxBufferSize: 1024 * 1024 * 1024 },
  requiredFeatures: [ 'float32-filterable' ],
});

The recommended way to use features and limits is to decide on what you absolutely must have and throw an error if the user's device can not support those requirements.
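
For example, a sketch of the "require it or fail" approach:

const adapter = await navigator.gpu?.requestAdapter();
if (!adapter?.features.has('float32-filterable')) {
  throw new Error('this app requires float32-filterable textures');
}
if (adapter.limits.maxBufferSize < 1024 * 1024 * 1024) {
  throw new Error('this app requires 1 GiB buffers');
}
const device = await adapter.requestDevice({
  requiredFeatures: ['float32-filterable'],
  requiredLimits: { maxBufferSize: 1024 * 1024 * 1024 },
});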

WGSL

Attributes

The word attributes has 2 meanings in WebGPU. One is vertex attributes. The other is in WGSL where an attribute starts with @.

For a vertex shader, inputs are defined by the @location attributes of the entry point function of the vertex shader.

@vertex fn vs1(@location(0) foo: f32, @location(1) bar: vec4f) ...

struct Stuff {
  @location(0) foo: f32,
  @location(1) bar: vec4f,
};
@vertex fn vs2(s: Stuff) ...

For inter-stage variables, @location attributes define the location where the variables are passed between shaders.

struct VSOut {
  @builtin(position) pos: vec4f,
  @location(0) color: vec4f,
  @location(1) texcoords: vec2f,
};

struct FSIn {
  @location(1) uv: vec2f,
  @location(0) diffuse: vec4f,
};

@vertex fn foo(...) -> VSOut { ... }
@fragment fn bar(moo: FSIn) ...

For fragment shaders, @location specifies which GPURenderPassDescriptor.colorAttachment to store the result in.

struct FSOut {
  @location(0) albedo: vec4f,
  @location(1) normal: vec4f,
}
@fragment fn bar(...) -> FSOut { ... }

The @builtin attribute is used to specify that a particular variable's value comes from a built-in feature of WebGPU:

Builtin values (name, stage, IO, type):

  • vertex_index (vertex input, u32): Index of the current vertex within the current API-level draw command, independent of draw instancing. For a non-indexed draw, the first vertex has an index equal to the firstVertex argument of the draw, whether provided directly or indirectly. The index is incremented by one for each additional vertex in the draw instance. For an indexed draw, the index is equal to the index buffer entry for the vertex, plus the baseVertex argument of the draw, whether provided directly or indirectly.

  • instance_index (vertex input, u32): Instance index of the current vertex within the current API-level draw command. The first instance has an index equal to the firstInstance argument of the draw, whether provided directly or indirectly. The index is incremented by one for each additional instance in the draw.

  • position (vertex output, vec4<f32>): Output position of the current vertex, using homogeneous coordinates. After homogeneous normalization (where each of the x, y, and z components are divided by the w component), the position is in the WebGPU normalized device coordinate space. See WebGPU § 3.3 Coordinate Systems.

  • position (fragment input, vec4<f32>): Framebuffer position of the current fragment in framebuffer space. (The x, y, and z components have already been scaled such that w is now 1.) See WebGPU § 3.3 Coordinate Systems.

  • front_facing (fragment input, bool): True when the current fragment is on a front-facing primitive. False otherwise.

  • frag_depth (fragment output, f32): Updated depth of the fragment, in the viewport depth range. See WebGPU § 3.3 Coordinate Systems.

  • local_invocation_id (compute input, vec3<u32>): The current invocation's local invocation ID, i.e. its position in the workgroup grid.

  • local_invocation_index (compute input, u32): The current invocation's local invocation index, a linearized index of the invocation's position within the workgroup grid.

  • global_invocation_id (compute input, vec3<u32>): The current invocation's global invocation ID, i.e. its position in the compute shader grid.

  • workgroup_id (compute input, vec3<u32>): The current invocation's workgroup ID, i.e. the position of the workgroup in the workgroup grid.

  • num_workgroups (compute input, vec3<u32>): The dispatch size, vec3<u32>(group_count_x, group_count_y, group_count_z), of the compute shader dispatched by the API.

  • sample_index (fragment input, u32): Sample index for the current fragment. The value is at least 0 and at most sampleCount - 1, where sampleCount is the MSAA sample count specified for the GPU render pipeline. See WebGPU § 10.3 GPURenderPipeline.

  • sample_mask (fragment input, u32): Sample coverage mask for the current fragment. It contains a bitmask indicating which samples in this fragment are covered by the primitive being rendered. See WebGPU § 23.3.11 Sample Masking.

  • sample_mask (fragment output, u32): Sample coverage mask control for the current fragment. The last value written to this variable becomes the shader-output mask. Zero bits in the written value will cause corresponding samples in the color attachments to be discarded. See WebGPU § 23.3.11 Sample Masking.

Builtin functions

See the WGSL Function reference.